\section{Introduction}
Mie resonances have recently triggered significant interest in the field of nanophotonics due to their unique optical properties,\cite{kuznetsov2016optically} which can lead to novel optical devices such as super-cavities,\cite{rybin2017optical} optical sensors,\cite{yesilkoy2019ultrasensitive} light-emitting metasurfaces,\cite{murai2020light,vaskin2019,bidault2019,sun2019electromagnetic} and lasers.\cite{ha2018directional} These resonances arise from displacement currents in dielectric or semiconducting nanoparticles with high refractive indices. Dielectric nanoparticles offer an alternative platform to plasmonic nanostructures for nanophotonics due to the low material absorption and the rich diversity of their electromagnetic modes.\cite{evlyukhin2010optical} For instance, the interference of electric and magnetic resonances in dielectric nanoantennas satisfying the Kerker condition leads to the full suppression of backward scattering.~\cite{kerker1983electromagnetic,staude2013tailoring,babicheva2017resonant,li2018engineering,cihan2018silicon} The combination of dielectric nanoparticles in arrays can lead to collective resonances with narrow line-widths.\cite{castellanos2019lattice} Symmetry protection for radiation losses in these arrays can generate bound states in the continuum with infinite quality factors.\cite{stillinger1975bound,marinica2008bound,kodigala2017lasing}
The coupling of Mie resonances with excitons is expected to tailor the performance of optoelectronic materials,\cite{ebbesen2016hybrid} e.g., two-dimensional (2D) semiconductors, and enable new phenomena such as polariton lasing and condensation.\cite{kena2010room,Plumhof_2013,Daskalakis_2014,ramezani2017plasmon}
Atomically thin transition metal dichalcogenides (TMDs) such as MoS$_2$ or WS$_2$ are typical 2D semiconductors, exhibiting unique optoelectronic and structural properties. Monolayer TMDs display a direct band gap compared with bulk or multilayer TMDs.~\cite{mak2010atomically,splendiani2010emerging} The photogenerated bright excitons in monolayer TMDs possess very large (hundreds of meV) binding energies and are stable at room temperature.~\cite{mak2010atomically,splendiani2010emerging,chernikov2014exciton} Despite being atomically thin, monolayer TMDs show strong absorption in the visible and near-infrared. Further enhancement and control of light–matter interactions are still possible through the integration of monolayers into optical architectures.
Different types of photonic or plasmonic nanostructures have been demonstrated to couple with monolayer TMDs.~\cite{lu2017nearly,ao2018unidirectional,bucher2019tailoring,wang2016coherent,zheng2017manipulating, wen2017room, zhang2018photonic,chervy2018room,wang2019limits,xie2020coherent} In the weak coupling regime, the external nanostructures enhance the absorption efficiency,~\cite{lu2017nearly} modify the local density of optical states through the Purcell effect,~\cite{sortino2019enhanced} and increase the directivity of the emission of TMDs.~\cite{ao2018unidirectional,bucher2019tailoring} When the coherent energy exchange between excitonic transitions in TMDs and the optical resonances in nanostructures is faster than any damping rate in the system, the interaction between them reaches the strong coupling regime and leads to the formation of hybrid light-matter states, i.e., exciton-polariton states. Very recently, theoretical works have proposed that magnetic resonances supported by single silicon nanoparticles can lead to Mie exciton-polaritons in monolayer TMDs.~\cite{tserkezis2018mie,lepeshov2018tunable,wang2019resonance} These works use a core-shell structure with the monolayer TMD wrapped around Si spheres to maximize the coupling strength. However, in experiments it is only possible to deposit Si nanoparticles on top of flat TMDs.~\cite{lepeshov2018tunable,wang2019resonance} The weak electromagnetic field on the surface of the Si nanoparticle and the high radiative damping rates of Mie resonances hinder the interaction with the monolayer TMDs.
In this letter, we successfully achieve strong coupling of bright excitons in monolayer TMDs with collective Mie resonances in periodic arrays of nanoparticles, i.e., Mie surface lattice resonances (Mie-SLRs). We use monolayer WS$_2$ coupled to low-loss polycrystalline Si nanodisk arrays embedded in an optically homogeneous medium. We resolve both electric and magnetic Mie-SLRs in the dispersion measurements with different polarizations. The atomically thin monolayer provides a simple tool to detect the intensity of the electromagnetic field supported by different optical modes.\cite{yesilkoy2019ultrasensitive} When we deposit the monolayer WS$_2$ on top of the array, the energy of electric Mie-SLRs (e-SLRs) redshifts 10 meV, while the magnetic Mie-SLRs (m-SLRs) show a smaller redshift of only 2 meV. At the zero detuning condition between excitons and SLRs, the angular dispersion of e-SLRs exhibits a clear anti-crossing with a Rabi-splitting of 32 meV between the upper and the lower polariton bands. For the m-SLRs, the dispersion crosses the energy of excitons. The different coupling strengths and Rabi splittings of the e-SLRs and m-SLRs indicate that the electric field of m-SLRs is dominated by out-of-plane components that do not couple efficiently with the in-plane excitonic dipoles in monolayer TMDs. In contrast, e-SLRs in dielectric nanoparticle arrays with relatively high quality factors (Q $\sim$ 120) facilitate the formation of collective Mie exciton-polaritons. Finally, we use numerical simulations to compare Si and Ag nanodisk arrays with the same symmetry and lattice constant. Si arrays, with their sharper resonances and stronger near-field enhancement, are more efficient for achieving strong coupling and forming exciton-polaritons in atomically thin TMDs. Our results enrich the nanophotonic tools for investigating polariton physics in 2D semiconductors and pave the way for the design of novel polaritonic devices.
\section{Methods}
\textbf{Sample fabrication}. The Si nanoparticle array was fabricated using electron-beam lithography and selective dry etching. Polycrystalline Si thin films were grown on a fused silica substrate by low-pressure chemical vapor deposition employing SiH$_4$ gas as the source of Si. A resist (NEB22A2, Sumitomo) was spin cast onto the Si film and patterned by electron-beam lithography and development. The Si film was vertically etched using a Bosch process with SF$_6$ and C$_4$H$_8$ gases, and the resist residue was etched away by oxygen dry etching. A high-quality monolayer of WS$_2$ with a size of $30 \times 70~\mu$m$^2$ was mechanically exfoliated from a synthetic single crystal (HQ Graphene). The monolayer region of the flake was determined with white light microscopy and extinction measurements. The monolayer sample was exfoliated onto an optically transparent and flexible PDMS substrate. The monolayer on the PDMS was aligned under a microscope and softly transferred mostly onto the Si nanoparticle array and partially onto the flat quartz substrate for reference measurements.
\textbf{Optical extinction measurements of the bare array}. The optical extinction of the bare nanoparticle array (2 x 2 mm$^2$) covered with a 200 $\mu$m thick PDMS superstrate was measured with a collimated white light beam and with the sample mounted on a rotation stage to change the angle of incidence. The zero-order transmission spectrum was recorded with a fiber-coupled spectrometer (USB2000+, Ocean Optics). The extinction is defined as $1-T/T_0$, where $T$ is the transmittance through the nanoparticle array and $T_0$ is the transmittance through the reference measured outside the array.
\textbf{Optical extinction measurements under a microscope}. The extinction spectra of the monolayer flake on the quartz substrate and the Si nanoparticle array were measured using an optical microscope. The samples were aligned along the optical axis of the microscope and illuminated with quasi-collimated white light. The light transmitted through the samples was collected using a microscope objective lens (Nikon CFI S Plan Fluor ELWD 20x, N.A. = 0.45), and imaged with a spectrometer (Princeton Instrument SpectraPro 300i) and an electron-multiplying charge-coupled device camera (Princeton Instruments ProEM: 512). The angular dependent extinction spectra of the nanoparticle array with/without the monolayer were recorded by rotating the sample.
\textbf{Coupled oscillators model fit}. The hybrid system can be approximately described with a model of two coupled harmonic oscillators:\begin{equation}\begin{bmatrix}\omega_{e-SLRs}-\frac{i\Gamma_{e-SLRs}}{2} & g \\g & \omega_{ex}-\frac{i\gamma_{ex}}{2} \end{bmatrix}\left(\begin{array}{c}\alpha\\ \beta\end{array}\right)=\omega\left(\begin{array}{c}\alpha\\ \beta\end{array}\right),\end{equation} where $\omega_{e-SLRs}$ and $\omega_{ex}$ are the resonant energies of the bare SLRs and excitons, respectively; $\Gamma_{e-SLRs}$ and $\gamma_{ex}$ represent the damping rates of e-SLRs and excitons, and $g$ is the coherent coupling strength. Diagonalizing the Hamiltonian matrix yields the new exciton-polaritonic eigenvalues $\omega_{\pm}$, defining the energies of the UP and LP bands and the Hopfield coefficients $\alpha$ and $\beta$, the squares of which give the weight fractions of excitons and SLRs with $\mid\alpha\mid^2+\mid\beta\mid^2=1$. The value of the Rabi splitting, $\omega_{+}-\omega_{-}=\sqrt{4g^{2}-\frac{(\Gamma_{e-SLRs}-\gamma_{ex})^{2}}{4}}$, is given at the condition of zero detuning, namely, $\omega_{e-SLRs}=\omega_{ex}$. We extract $\omega_{e-SLRs}$ and the weight fractions assuming $\omega_{ex}$ = 2.016~eV and fitting the peak positions of the UP and LP bands to the model.
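The fit amounts to diagonalizing the $2\times2$ non-Hermitian matrix above for each value of $\omega_{e-SLRs}$. The following minimal Python sketch illustrates this step; the numerical values mirror those quoted in the text ($\omega_{ex}=2.016$~eV, $\gamma_{ex}=25$~meV, $\Gamma_{e-SLRs}=18$~meV, $g=\Omega_R/2=16$~meV), but the function is an illustration rather than the fitting code used for the figures.
\begin{verbatim}
import numpy as np

def polariton_bands(w_slr, w_ex=2.016, g_slr=0.018, g_ex=0.025, g=0.016):
    # All quantities in eV. Build the 2x2 matrix of the coupled-oscillator
    # model and diagonalize it.
    H = np.array([[w_slr - 0.5j * g_slr, g],
                  [g, w_ex - 0.5j * g_ex]])
    vals, vecs = np.linalg.eig(H)
    order = np.argsort(vals.real)            # LP first, then UP
    energies = vals.real[order]              # polariton band energies
    weights = np.abs(vecs[:, order])**2      # |alpha|^2, |beta|^2 per branch
    weights /= weights.sum(axis=0)           # enforce |alpha|^2+|beta|^2 = 1
    return energies, weights

# Sweep the bare e-SLR energy through zero detuning (anti-crossing).
for w in np.linspace(1.99, 2.04, 6):
    (lp, up), _ = polariton_bands(w)
    print(f"w_SLR = {w:.3f} eV -> LP = {lp:.3f} eV, UP = {up:.3f} eV")
\end{verbatim}
At zero detuning this reproduces a splitting of $\sqrt{4g^2-(\Gamma_{e-SLRs}-\gamma_{ex})^2/4}\approx32$~meV.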
\textbf{Simulations}. We use a high-accuracy surface integral method for periodic scatterers,~\cite{Gallinet_2010,Raziman_2015} and modelled the nanoparticles as cylinders (Height $H=90$ nm and diameter $D=126$ nm for silicon, $H=40$ nm and $D=74$ nm for silver) to obtain the best agreement with the experimental extinction. The size of the unit cell is 420 $\times$ 420 nm$^2$ in the $xy$-plane, with periodic boundary conditions applied to simulate an infinite array. The refractive index of the surrounding medium is set to 1.43. For the dielectric function of silicon, we used the measured values by Aspnes and Studna,~\cite{aspnes1983dielectric} with the imaginary part increased by a factor of 5 to best reproduce the measurements, as the etching process is likely to introduce imperfections on the particles that lead to an increased absorption. The permittivity of silver was taken from Palik,~\cite{palik_handbook_1998} and a conformal 5 nm thick alumina spacer layer (material parameters from Boidin~\cite{boidin_pulsed_2016}) was placed on top of the silver particle array to simulate experimental conditions.
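As a concrete illustration of the permittivity adjustment described above, the short sketch below scales the imaginary part of the tabulated silicon dielectric function by a factor of 5; the $(n,k)$ values used here are placeholders standing in for the Aspnes--Studna table.
\begin{verbatim}
import numpy as np

def scaled_si_permittivity(n, k, im_scale=5.0):
    # Convert tabulated optical constants (n, k) to the complex
    # dielectric function and scale its imaginary part.
    eps = (np.asarray(n) + 1j * np.asarray(k))**2
    return eps.real + 1j * im_scale * eps.imag

# Placeholder values near 2 eV; the real table is from Aspnes & Studna.
n_tab, k_tab = [3.90, 3.92], [0.02, 0.03]
print(scaled_si_permittivity(n_tab, k_tab))
\end{verbatim}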
\section{Results $\&$ Discussion}
\textbf{Mechanism and Sample Design.} Figures 1a and b illustrate the mechanism of in-plane strong light-matter coupling that leads to the formation of collective Mie exciton-polaritons. The resonant in-plane electric field associated with the nanoparticle array couples to the confined excitons in the 2D semiconductor on top of the array (Figure 1a). This coupling leads to hybridization and the formation of the lower (LP) and upper (UP) exciton-polaritons separated by the Rabi energy $\Omega_{R}$ (Figure 1b). To observe Mie exciton-polaritons, we fabricated a polycrystalline-Si nanoparticle array with a size of 2 $\times$ 2 mm$^2$ on a fused quartz substrate by electron-beam lithography and selective dry etching (see Methods for details). A scanning electron microscope image of the nanoparticle array is shown in Figure 1d. We softly transferred an exfoliated WS$_2$ flake on top of the array and left the flexible polydimethylsiloxane (PDMS) as a superstrate (Figure 1c).\cite{eizagirre2019preserving} Accordingly, the Si nanoparticle array and the flake were sandwiched in a nearly homogeneous dielectric environment to enhance the coherent in-plane scattering of light from the nanoparticles.\cite{de2007colloquium,auguie2008collective,giannini2010lighting} The array has a square symmetry with a lattice constant $P$ = 420 nm. The individual nanoparticles are nanodisks with a height of 90 nm and a diameter of 130 $\pm $ 4 nm. A bright-field microscope image of the nanoparticle array with the WS$_2$ flake partially on top is shown in Figure S1.
\textbf{Mie-SLRs of the Bare Array.} We first characterize the optical resonances of the bare Si nanoparticle array without the flake on top but with a PDMS superstrate. We illuminated the particle array sample with a collimated white light beam and measured its angular extinction dispersion (see Methods). The incident wave vector lies in the $yz$ plane and projects its parallel component $k_\parallel$ onto the surface of the array ($xy$ plane). We set the polarization of the incident beam along the $x$ ($y$)-axis as TE (TM) polarization. The dispersion curves of both TE and TM polarized modes (Figures 2a and b) exhibit a sharp resonance ($\sim$2.0~eV) and a broad one ($\sim$2.4~eV), which are evident at the normal incidence condition (Figure 2c). The broad extinction peak is associated with the localized Mie-resonances in individual nanoparticles and shows almost no dispersion. The wave vector dependence of the sharp resonance follows the condition of in-plane diffraction orders, i.e., Rayleigh anomalies (RAs). Interestingly, TE/TM polarization modes show both linear and parabolic dispersion curves, which are associated with the (1, 0), (-1, 0), and (0, $\pm 1$) order RAs. The results are different from previous studies of plasmonic nanoparticle arrays, in which linear and parabolic dispersion are selected by the different polarization due to the existence of only electric dipole resonances.\cite{zakharko2018radiative,wang2018rich,le2019enhanced} In addition, the sharp peak with a high quality factor (Q-factor) of 120 approximately, estimated by fitting it with a Lorentzian, displays a pronounced extinction as large as ~0.92 at the energy of 2.0~eV. This peak corresponds to the so-called Mie-SLRs, emerging from the coherent radiative coupling between the Mie scatters enhanced by the in-plane diffracted orders, i.e., the RAs.\cite{murai2020sc}
We analyze the extinction spectra around the peak of the Mie-SLR in more detail (Figures 2d-f). A weak extinction shoulder at the energy around 2.05~eV can be appreciated at normal incidence (Figures 2c and f). When increasing the in-plane wave vector, this weak extinction peak follows a parabolic (linear) dispersion for TE (TM) polarization. The opposite relation between the polarization and the dispersion of the main Mie-SLR peak and the weaker peak is due to the excitation of both electric and magnetic dipolar resonances in arrays of dielectric nanoparticles. The polarized incident beam generates an electric dipole in the nanoparticles oriented along the polarization direction, and the displacement current density generates an orthogonal magnetic dipole. The radiative coupling of electric or magnetic dipoles on the nanoparticles is enhanced by the orthogonal diffraction orders to form e- and m-SLRs, respectively.\cite{babicheva2017resonant,li2018engineering,castellanos2019lattice,murai2020sc}
\textbf{Rabi Splitting of the Hybrid System.} We only need to consider the in-plane components of the electric field on the upper surface of the nanoparticle array for coupling with the in-plane excitonic dipoles of the TMD monolayer. We use surface integral equations (SIE, see Methods) to simulate the field enhancement at this surface with a plane wave illumination at normal incidence ($k_\parallel$ = 0). The in-plane field enhancement distribution at the energies of 2.006~eV (Figure 3a) and 2.036~eV (Figure 3b) are very similar, with vertical RA bands of enhanced field. This similar field distribution indicates that the in-plane field at these two energies originates from the same mode, namely the e-SLRs. As expected, the in-plane field enhancement is more pronounced at the peak position (2.006~eV) than at the edge (2.036~eV). The simulations of the total field (Figures 3d, S5-S7) indicate that out-of-plane field components are dominant for the m-SLRs.~\cite{castellanos2019lattice} In particular, we see a total field enhancement distribution at 2.006 eV (Figure 3c) very different to that at 2.036 eV (Figure 3d) with the field enhanced along horizontal bands. This different total field distribution indicates that the m-SLR dominates the total field at 2.036 eV, but with mainly a out-of-plane field. Therefore, the light-matter coupling with TMD monolayers is dominated by the in-plane field of the e-SLRs rather than by the out-of-plane field of m-SLRs. When the energy exchange between the excitonic transitions and the e-SLRs is faster than their damping rates, the regime of strong light-matter coupling is reached. The energy of excitons and e-SLRs splits into two new hybrid light-matter states, i.e., the UP and LP states, that we call Mie exciton-polaritons. The corresponding extinction spectra of the uncoupled and hybrid systems are shown in Figure 1f. The energy difference between UP and LP defines the Rabi-splitting $\Omega_{R}$ and is equal to twice the coupling strength $g$.
Next, we evaluate the strength of light-matter interactions with e- and m-SLRs from the extinction measurements. We record the extinction spectra of the nanoparticle array without (Figures 4a and c) and with (Figures 4b and d) the WS$_2$~1L on top by varying the angle of incidence from $\theta=0^{\circ}$ to $22^{\circ}$. For TM polarization, the (0, $\pm 1$) e-SLRs dominate the extinction spectra as indicated by the solid-red guide to the eye in Figure 4a. We note that the spectral full width at half-maximum ($\Gamma _{e-SLRs}$) increases for larger angles of incidence. This increase is similar to the one observed in plasmonic arrays and it is due to the reduction in detuning between the Mie-resonances and the RAs.\cite{le2019enhanced,murai2020sc} For the case of TE polarization, the ($\pm 1$, 0) e-SLRs of the bare array, which are degenerate at $\theta=0^{\circ}$, split into two bands (denoted by the black arrows in Figure 4c) as the angle of incidence increases. The m-SLR around the resonant energy of the excitons (vertical black dashed line in Figures 4a-d) has a weaker extinction as indicated by the solid-red guide to the eye in Figure 4c. Due to the large dispersion of the ($\pm 1$, 0) diffraction orders, we only investigate the (0, $\pm 1$) SLRs of both magnetic (TE) and electric (TM) modes. With WS$_2$~1L on top of the array, we find that the dispersion of the e-SLR splits into the LP and UP bands (the solid-blue guide to the eye in Figure 4b). Unlike the e-SLR, the m-SLR (solid-blue guide to the eye in Figure 4d) is similar to the bare nanoparticle array shown in Figure 4c (solid-red guide to the eye). The main difference between Figures 4c and d is that the extinction spectrum of the excitons is superimposed on that of the m-SLRs. These results indicate that strong coupling of excitons in WS$_2$~1L takes place only with e-SLRs and not with m-SLRs.
To quantify the coupling strength between Mie-SLRs and excitons, we analyze the angular dispersion of the hybrid system. We plot the energies of the extinction peaks as a function of the incident in-plane momentum $k_\parallel$ in Figures 5a and c. The blue triangles represent the high and low energy bands of the coupled system. The red squares correspond to the energies of the bare SLRs. In the case of m-SLRs (Figure 5c), the upper and lower energy bands follow the dispersion curves of the bare Mie-SLR and excitons, respectively. The energies of the bare m-SLRs redshift 2 meV and overlap with the upper band. Therefore, we confirm that magnetic dipole resonances cannot strongly couple with bright excitons in monolayer TMDs. In contrast, the electric dipole resonances are efficient for achieving strong coupling with 2D semiconductors. The two new bands in Figure 5a, corresponding to the UP and LP bands, exhibit a clear anti-crossing at zero detuning between the e-SLR and excitons. We fit the angular dispersion of the UP and LP bands to a model with two coupled harmonic oscillators, as shown by the blue curves in Figure 5a (see Methods for details). We find that the band of uncoupled e-SLRs (red solid curve) redshifts 10 meV when the monolayer WS$_2$ is transferred on top of the nanoparticle array compared with the bare array (also see Figure S4b). We also obtain the Rabi energy $\Omega_{R}$ = 32 meV at $k_\parallel = 2.4~\mu$m$^{-1}$, when the UP or LP state is half mixed with e-SLRs and half with excitons (Figure 5a). Due to the high Q-factor (Q$\sim$120) of e-SLRs in the Si nanoparticle array, the Rabi energy satisfies the strong coupling condition, i.e., $\Omega_{R}>\gamma _{ex}, \Gamma _{e-SLRs} $, where $\gamma _{ex}$ = 25 meV\cite{wang2016coherent,chervy2018room,wang2019limits,eizagirre2019preserving} and $\Gamma _{e-SLRs} $ = 18 meV are the line widths of the extinction spectra of excitons and e-SLRs, respectively.
These linewidths were estimated by Lorentzian fits to the spectra. The results demonstrate that Si nanoparticle arrays are a reasonable platform for the generation of collective Mie exciton-polaritons in an atomically thin semiconductor.
\textbf{Comparison of the Mie and Plasmonic Array.} Combining these results with our previous investigations of strong light-matter coupling with plasmonic arrays\cite{wang2019limits,ramezani2017plasmon,berghuis2019enhanced,ramezani2019ultrafast}, we raise the question of whether Mie or plasmonic nanoparticle arrays are more efficient for achieving strong coupling with bright excitons in monolayer TMDs. We used simulations to obtain the answer. The coupling efficiency of Mie or plasmonic SLRs with in-plane excitonic dipoles is proportional to the in-plane field supported by SLRs. Hence, we first calculate the in-plane field enhancement of SLRs and define the ratio of coupling efficiency of Mie and plasmonic nanoparticle arrays. To enter the strong coupling regime, the Rabi-energy should be larger than the line widths of SLRs. Sharper resonances allow more cycles of Rabi oscillations and are easier to satisfy the criterion of strong coupling. We thus continue to evaluate the line widths of Mie and plasmonic SLRs.
Considering the measurements, we simulate the extinction spectra and in-plane field enhancement factor of silicon and metallic nanoparticle arrays using SIE (details in Methods). We use the experimental values of the height of the Si nanoparticles (90 nm) and 40 nm for the Ag nanoparticles.\cite{wang2019limits,berghuis2019enhanced,ramezani2017plasmon,le2019enhanced} The lattice constant of the square array is the same for both Si and Ag arrays and equal to 420 nm. To obtain e-SLRs at the same energy, we tune the diameters of the Si and Ag nanodisks to 126 nm and 74 nm, respectively. The extinction spectra at normal incidence and the root-mean-square value of the in-plane enhancement factor within a unit cell of the bare particle array are shown in Figures 6a and b, respectively. The simulated spectrum of the Si array (blue curve in Figure 6a) qualitatively agrees with the measured result (Figure 2f). For comparison, we focus on the e-SLRs of both the Si and Ag arrays. The line-width of the Si array ($\Gamma _{Si} \sim $ 20 meV) is narrower than that of the Ag array ($\Gamma _{Ag} \sim $ 34 meV) due to the lower material absorption of Si compared with Ag. The simulated damping rates of the Si and Ag arrays are qualitatively comparable with the current ($\Gamma _{Si} \sim $ 18 meV) and previous experimental measurements in similar Ag arrays ($\Gamma _{Ag} \sim $ 43 meV),\cite{wang2019limits} respectively. The extinction of the Si array is a factor of 1.35 larger than the extinction of the Ag array (Figure 6a). The narrower and higher extinction peak of the Si array leads to a stronger in-plane near-field enhancement (Figure 6b), suggesting that the Si array is more efficient than the Ag array for achieving strong coupling with monolayer TMDs.
\section{Conclusions}
In summary, we have demonstrated an alternative nanophotonic structure, Si nanoparticle arrays, for achieving strong light-matter coupling and collective Mie exciton-polaritons in monolayer WS$_2$. The in-plane electromagnetic field associated with e-SLRs in Si nanoparticle arrays allows strong coupling to in-plane excitonic dipoles in monolayer TMDs. At room temperature, we observe Rabi splitting when the energy of the e-SLR is tuned to the energy of the excitons. However, the orthogonality between the out-of-plane field distribution and the in-plane excitons prevents the formation of exciton-polaritons for the m-SLR. We expect that the m-SLR could be applied to couple with out-of-plane dipolar emitters, e.g., direct-bandgap interlayer excitons in TMD heterostructures.\cite{paik2019interlayer} In addition, Si nanoparticle arrays benefit from lower absorption and stronger electromagnetic field enhancement compared to Ag nanoparticle arrays with similar dimensions. Our findings contribute to the understanding of light-matter interactions at the nanoscale and pave the way for the investigation and design of low-loss polaritonic devices.
\section{Associated Content}
The Supporting Information includes the microscope image of the sample, extinction spectra of the sample, and the electric field enhancement factors of the bare Si and Ag nanoparticle arrays.
\section{Author Information}
Notes
The authors declare no competing financial interest.
\section{Acknowledgements}
The authors thank the Innovational Research Incentives Schemes of the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) (Vici grant nr. 680-47-628 and Gravitation grant nr. 024.002.033), and the Ministry of Education, Culture, Sports, Science and Technology (MEXT, Japan) (17KK0133, 19H02434) for financial support. S. Wang was supported by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. Numerical simulations in this work were carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.
\section{Introduction}
A major objective of neural text-to-speech (TTS) research is to generate a natural voice that corresponds to a given sentence. Neural TTS models have become the mainstream in modern TTS research because they can synthesize natural-sounding speech with quality comparable to that of human recordings \cite{wang2017tacotron,shen2018natural,li2019neural}. In \cite{li2019neural}, a feed-forward Transformer (FFT) block was primarily used
to improve the synthetic quality of the mel-spectrogram.
Recently, non-autoregressive FFT-based TTS models have been proposed \cite{ren2019fastspeech,ren2020fastspeech,lancucki2020fastpitch}. Acoustic features, such as duration, pitch, and energy, were applied in the acoustic decoder of a TTS model to predict the target mel-spectrogram. FastPitch \cite{lancucki2020fastpitch}, in particular, can control the prosody of synthesized speech at a fine-grained level by changing the character-level synthesized pitch values.
In FastPitch \cite{lancucki2020fastpitch}, it was reported that FastPitch can generate a voice with manipulated pitch, which is referred to as \emph{pitch-shift}, while preserving speaker characteristics. In our preliminary experiments, however, we observed that pitch expressiveness and speaker similarity decreased when the pitch values were shifted far from the average pitch. We believe that this performance degradation is probably due to structural vulnerabilities in FastPitch. The acoustic decoder in FastPitch handles not only text but also pitch information, and generates speech from the pitch-conditioned text information. Therefore, the decoder is prone to learning the relationship between the text and pitch.
To separately handle text and prosodic information, an additional neural network, which was trained in an unsupervised manner, extracted the latent variables of the acoustic features
\cite{prosody_tacotron,gst,zhang2019learning,hsu2018hierarchical,lee2019robust,sun2020generating,sun2020fully}.
Then, the latent variables were applied in the acoustic decoder of the TTS model. In these studies, the prosody was controlled by changing the reference speech or modifying the extracted latent variables. However, because the latent variables were learned in an unsupervised manner, the desired prosodic information may not be included in the latent variables.
Another approach uses the source-filter theory \cite{fant1970acoustic}, which describes human speech production. Speech sounds are described as the responses of a sound source and a vocal tract filter in a source-filter model. The sound source and the formant frequencies formulated by the vocal tract filter affect the fundamental frequency and the phonation, respectively \cite{goldstein1973optimum,peterson1952control}. Several researchers have proposed singing voice synthesis models based on this approach \cite{lee2019adversarially,lee2020disentangling}. However, speech is a domain in which the duration per character is short compared to singing, and the pitch changes more frequently within a shorter time. In speech synthesis, approaches based on the source-filter theory have been applied in several studies for waveform modeling \cite{yoshimura1999simultaneous, Ai2020, wang2019neural}. However, to the best of our knowledge, the source-filter theory has not yet been applied in neural TTS for generating the mel-spectrogram.
In this paper, we propose a non-autoregressive FFT-based TTS model based on the source-filter theory called FastPitchFormant. The main approaches in FastPitchFormant are (1) decomposed structure and (2) learning objective. With these approaches, FastPitchFormant can generate the mel-spectrogram using formant- and excitation-related representations which are separately modeled.
We evaluated the pitch controllability with several objective measurements. Furthermore, speech quality and speaker preservation of speech with pitch-shift were also evaluated using subjective listening tests.
\section{FastPitchFormant}
Figure 1 depicts the FastPitchFormant structure. %
FastPitchFormant has four module types: (1) a text encoder, (2) temporal predictors, (3) formant and excitation generators, and (4) a spectrogram decoder. All components except the temporal predictors consist of stacks of feed-forward Transformer (FFT) blocks: the text encoder, each generator, and the spectrogram decoder contain six, four, and two FFT blocks, respectively.
The temporal predictors consist of two one-dimensional convolutional layers and are trained to predict the ground-truth duration and pitch. For multi-speaker TTS, we applied a speaker embedding lookup table to obtain the speaker embedding. The remainder of this section provides further details on each module type.
\subsection{The Text Encoder and The Temporal Predictor}
The phoneme embedding vectors are obtained from a phoneme sequence through a look-up embedding table, with positional embedding added. The phoneme embedding vectors pass through the text encoder, which predicts the hidden embedding. The hidden embedding is the input of two temporal predictors, for duration and pitch. The pitch embedding is obtained from the predicted pitch values passed through a one-dimensional convolutional layer. The hidden and pitch embeddings are each combined with the speaker embedding. The two representations are then discretely up-sampled and aligned with the predicted duration. We represent the up-sampled phoneme representation as $h \in\mathbb{R}^{D\times T}$ and the up-sampled pitch representation as $p \in\mathbb{R}^{D\times T}$, where $D$ is the dimension of the vectors and $T$ is the total number of frames. $h$ and $p$ pass through the formant and excitation generators, respectively.
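As an illustration of the up-sampling step, the following sketch repeats each phoneme-level vector by its predicted integer duration; the helper itself is hypothetical, and any length regulator with the same behavior could be used.
\begin{verbatim}
import torch

def upsample_by_duration(x, durations):
    # x: (num_phonemes, D); durations: (num_phonemes,) integer frame counts.
    # Returns a frame-level sequence of shape (T, D), T = durations.sum().
    return torch.repeat_interleave(x, durations, dim=0)

h_phoneme = torch.randn(5, 256)             # phoneme-level hidden embedding
dur = torch.tensor([3, 1, 4, 2, 2])         # predicted durations (frames)
h = upsample_by_duration(h_phoneme, dur)    # shape (12, 256)
\end{verbatim}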
\subsection{The Formant and Excitation Generator}
We introduce formant and excitation generators into the model, inspired by the source-filter theory \cite{fant1970acoustic}. The formant generator predicts the formant representation, which includes formant-related information such as linguistic information, using only $h$. The excitation generator predicts the excitation representation, which includes excitation-related information such as prosody, using both $h$ and $p$. In our preliminary experiments, we observed that the pitch control accuracy is compromised when the excitation representation only utilizes $p$. To improve the pitch control accuracy, we applied a similar extension as that in \cite{selfattn_w_pos} to the self-attention mechanism. In the first self-attention layer of the excitation generator, the attention matrix and query $Q$ are calculated as follows:
\begin{equation}
\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^T}{\sqrt{d}})V,
\end{equation}
\begin{equation}
Q=W_Q(h+p)+b_Q,
\label{q_att}
\end{equation}
\vspace{0pt}
\noindent where $K$ and $V$ are the matrices for the key and value in the self-attention mechanism, respectively, and $W_Q$ and $b_Q$ are the weight matrix and bias for the query, respectively. The effectiveness of this query extension is detailed in Section 3.3.1.
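A minimal sketch of this modified layer is given below. Following Equation~(\ref{q_att}), the query mixes the text representation $h$ into the pitch representation $p$; forming the keys and values from $p$ alone reflects our reading of the text and is an assumption of the sketch.
\begin{verbatim}
import math
import torch
import torch.nn as nn

class QueryExtendedSelfAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_q = nn.Linear(d, d)   # W_Q, b_Q of Eq. (2)
        self.w_k = nn.Linear(d, d)
        self.w_v = nn.Linear(d, d)

    def forward(self, h, p):
        # h, p: (batch, T, d) frame-level representations.
        q = self.w_q(h + p)                      # query from h + p, Eq. (2)
        k, v = self.w_k(p), self.w_v(p)          # assumed: K, V from p only
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        return torch.softmax(scores, dim=-1) @ v # Eq. (1)
\end{verbatim}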
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=1\linewidth]{figure/fpf.pdf}}
\caption{Diagram of FastPitchFormant. Dashed and dotted lines represent formant and excitation representations, respectively.}
\label{fig:fpf}
\end{figure}
\subsection{The Spectrogram Decoder}
The spectrogram decoder is comprised of two stacked FFT blocks and three fully connected (FC) layers. Each FC layer generates the target mel-spectrograms.
The first spectrogram is generated by the summation of the projected formant and excitation representations through the first FC layer. The first FC layer is shared by two representations.
To produce the second and third mel-spectrograms, the summation of the formant and excitation representations is passed to the two stacked FFT blocks and then projected to a mel-spectrogram by the second and third FC layers. In the source-filter theory, the source spectrum is multiplied by the vocal-tract filter. However, because our model handles log-scaled mel-spectrograms, we substitute the multiplication operation with summation.
The outputs of each FC layer are used in the learning objective, which includes all $L_2$ losses as an iterative loss, as in \cite{elias2020parallel}. Because of the iterative loss, the spectrogram decoder is trained to generate the final mel-spectrogram from the summation of the formant and excitation representations, while the two generators are trained to form those representations. In the inference stage, the mel-spectrogram from the third FC layer is the final output of FastPitchFormant.
\subsection{Learning objective}
The learning objective of FastPitchFormant is as follows:
\begin{equation}
\mathcal{L}_{final}={\frac{1}{TM}}\sum^{3}_{i=1}\mathcal{L}_{spec_i}+\alpha\mathcal{L}_p+\beta\mathcal{L}_d,
\label{loss}
\end{equation}
where $M$ is the number of mel-spectrogram bins and $\mathcal{L}_{spec_i}$ is the $L_2$ loss between the target and the $i$-th predicted mel-spectrogram from the $i$-th FC layer. $\mathcal{L}_{p}$ and $\mathcal{L}_{d}$ are the $L_2$ losses between the target and predicted pitch and duration values, respectively. Note that no additional targets are required to supervise the model to separately generate the formant and excitation representations.
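A compact sketch of this objective is shown below; the weights $\alpha$ and $\beta$ are hyperparameters whose values are not specified in the text, so the defaults here are placeholders.
\begin{verbatim}
import torch.nn.functional as F

def fastpitchformant_loss(mel_preds, mel_target, pitch_pred, pitch_target,
                          dur_pred, dur_target, alpha=0.1, beta=0.1):
    # mel_preds: list of the three decoder outputs, each of shape (T, M);
    # the iterative spectrogram term is normalized by T * M as in Eq. (3).
    T, M = mel_target.shape
    spec = sum(F.mse_loss(m, mel_target, reduction="sum") for m in mel_preds)
    return spec / (T * M) \
        + alpha * F.mse_loss(pitch_pred, pitch_target) \
        + beta * F.mse_loss(dur_pred, dur_target)
\end{verbatim}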
\section{Experiments}
\subsection{Dataset}
We used an internal Korean speaker dataset sampled at $22.05$ kHz, which contains $22$ h of speech from a female speaker and $17$ h of speech from a male speaker. One percent of the dataset was randomly selected for the test set.
We calculated an $80$-bin log-mel spectrogram with a fast Fourier transform size of $1024$, a hop size of $256$, and a window size of $1024$. We used a speech recognizer to extract a forced alignment with a phoneme sequence. We then calculated phoneme-level pitch values by averaging the F0 values over every phoneme. The F0 values were extracted using the PRAAT toolkit \cite{praat}.
\subsection{Training Setup}
We trained FastPitch (FP) and FastPitchFormant (FPF) for up to $1000$k iterations using a mini-batch size of $16$ and the Adam optimizer \cite{kingma2014adam} with initial learning rate of $0.005$. The parameters of Adam optimizer were $({\beta}_1,{\beta}_2)=(0.5, 0.9)$, and ${\epsilon}=10^{-6}$. The learning rate decreased by half every $200$k iterations.
For the FFT block and temporal predictors of the models, we followed the same network architecture and hyperparameters as those in \cite{lancucki2020fastpitch}. VocGAN \cite{yang2020vocgan} was trained as the neural vocoder using a database containing approximately $40$ h of speech recorded by six speakers.
\subsection{Objective Evaluation}
To objectively evaluate the pitch controllability of the models, pitch-shifted speech was synthesized by FP and FPF. All audio samples were generated by manipulating the input pitch values of the model in semitone units. The $\lambda$ semitone shifted pitch value, $f_\lambda$, can be calculated as follows:
\begin{equation}
f_{\lambda}=2^{\frac{\lambda}{12}}\times f_{0},
\label{f0_shift}
\end{equation}
where $f_0$ is the original pitch value before shifting.
We then generated pitch-shifted speech of the test set with ground-truth duration and pitch for $\mathcal{\lambda} \in \{-8,-6,-4,0,4,6,8\}$. This implies that the original pitch shifts to $63\%$, $71\%$, $79\%$, $100\%$, $126\%$, $141\%$, and $159\%$ of its magnitude, respectively.
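The shift of Equation~(\ref{f0_shift}) is a single multiplicative factor per semitone; the snippet below reproduces the percentages quoted above.
\begin{verbatim}
import numpy as np

def shift_pitch(f0, lam):
    # Shift pitch values by lam semitones, Eq. (4).
    return 2.0 ** (lam / 12.0) * np.asarray(f0)

for lam in (-8, -6, -4, 0, 4, 6, 8):
    print(lam, round(2.0 ** (lam / 12.0) * 100))  # 63, 71, ..., 159 (%)
\end{verbatim}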
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\centering
\caption{FFE (\%) results of pitch-shifted speech. The numbers in parentheses indicate the ratio of variation from the average pitch.}
\label{tab:FFE}
\centering
\begin{tabular}{l|cccccc}
\toprule
& \multicolumn{6}{c}{\textbf{Pitch shift scale ($\lambda$)}} \\ \hline
\textbf{Method} & \begin{tabular}[c]{@{}c@{}}\textbf{-8}\\\scriptsize{(63\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{-6}\\ \scriptsize{(71\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{-4}\\ \scriptsize{(79\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{+4}\\ \scriptsize{(126\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{+6}\\ \scriptsize{(141\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{+8}\\ \scriptsize{(159\%)}\end{tabular} \\ \hline
\textit{FP (baseline)} & 44.90 & 32.81 & 21.36 & 15.59 & 25.60 & 37.10 \\ \hline
\textit{FPF} & 44.83 & 32.76 & 19.61 & 13.04 & 20.81 & 29.66 \\
\textit{FPF w/o Q} & 56.06 & 42.72 & 26.04 & 16.86 & 26.27 & 39.80 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Pitch Control Accuracy}
To evaluate the pitch control accuracy, we calculated the f0 frame error (FFE) \cite{chu2009reducing} between the extracted pitch values from the pitch-shifted speech generated using $f_{\lambda}$ and the shifted input pitch $f_{\lambda}$.
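For reference, a sketch of the FFE computation is given below. Following the usual definition in \cite{chu2009reducing}, a frame is counted as an error if its voicing decision disagrees with the reference or its f0 deviates by more than 20\% from the reference; the 20\% threshold is the conventional choice and is assumed here.
\begin{verbatim}
import numpy as np

def ffe(f0_ref, f0_est, tol=0.2):
    # Per-frame f0 arrays; zeros mark unvoiced frames.
    voiced_ref, voiced_est = f0_ref > 0, f0_est > 0
    vde = voiced_ref != voiced_est                  # voicing decision errors
    both = voiced_ref & voiced_est
    gpe = both & (np.abs(f0_est - f0_ref) > tol * f0_ref)  # gross pitch errors
    return 100.0 * np.mean(vde | gpe)               # FFE in percent
\end{verbatim}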
We also measured the FFE of FPF without the extension for the query (FPF w/o Q), which is represented in Equation (\ref{q_att}), to evaluate its effectiveness. The results are listed in Table \ref{tab:FFE}. A low FFE value indicates that the model can generate speech with the desired pitch. The results show that FPF improved pitch reproducibility over a wider range of pitch control than the baseline. When $\lambda$ was greater than $0$, the difference between the FFE of FPF and FP was significant.
In FPF w/o Q, the FFE was higher than that of the other two models. We observed that the formant generator took over most of the mel-spectrogram generation task as the number of training epochs increased in training FPF w/o Q.
The mel-spectrogram examples from the excitation and formant representations are depicted in Figures \ref{fig:spectrogram}a and \ref{fig:spectrogram}b, respectively. They were generated by passing the excitation and formant representations through the spectrogram decoder individually.
The first row in Figure \ref{fig:spectrogram} shows that the formant and excitation generators in FPF were trained to model the action of the vocal cords and the vocal tract for generating speech. We conjecture that the separation comes from the difference in features exposed to each generator. In the early stage of training, prosody-related parts such as pitch contours were generated first by the excitation generator, which handles the distribution of pitch, a relatively low-level feature. As training proceeded, we observed that the phonation was gradually formed by the formant generator, which models linguistic features, i.e., high-level features.
In the formant representation from FPF w/o Q, we observed contours that were similar to the contours of the final mel-spectrogram, and the pitch contour in the excitation representation was flat compared with that of FPF. Therefore, because of the extension, the tasks for generating speech were properly distributed across the generators.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=0.8\linewidth]{figure/spectrograms.png}}
\caption{Generated mel-spectrograms of (a) excitation representation, (b) formant representation and (c) final system output.
Mel-spectrograms in the first and second rows are from FPF and FPF w/o Q, respectively.
%
Inaccurate and undesired pitch contours are observed in the excitation and formant representations from FPF w/o Q.}%
\label{fig:spectrogram}
\end{figure}
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{figure/mcd.pdf}}
\caption{MCD comparison between FP and FPF with 95\% confidence intervals (CI). For the ratio of variation from the average pitch, see Table \ref{tab:FFE}.}
\label{fig:MCD}
\end{figure}
\subsubsection{Robustness of Pitch Control}
When the formant and excitation generators decompose speech into the formant and excitation representations well,
the spectral envelope of the speech with pitch-shift should be the same as that of the speech without pitch-shift.
Therefore, we calculated the mel-cepstral distortion (MCD) \cite{kubichek1993mel} between speech with pitch-shift and speech without pitch-shift. Figure \ref{fig:MCD} illustrates the results of the MCD according to $\lambda$ in both cases of FP and FPF.
FPF had lower MCD compared to FP for all $\lambda$.
It can be elucidated that FPF can synthesize less distorted speech compared to FP even with a significant pitch variance.
We examined the changes in spectral envelopes for every $\lambda$ to visually confirm this hypothesis. Figure \ref{fig:sp} depicts examples of spectral envelopes in the same frame of synthesized speech from FP and FPF. For FPF, the spectral envelopes for every $\lambda$ appeared to maintain their original shape, while those from FP were distorted according to $\lambda$. This indicates that FPF can synthesize pitch-shifted speech with more consistent pronunciation than FP because of its decomposed structure.
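For reference, a sketch of the MCD computation is shown below, following the standard form of \cite{kubichek1993mel}; extracting the mel-cepstra and time-aligning the two utterances are assumed to have been done beforehand.
\begin{verbatim}
import numpy as np

def mcd(mcep_a, mcep_b):
    # mcep_*: (frames, order) mel-cepstra with the 0th (energy)
    # coefficient excluded; frames are assumed to be aligned.
    diff = np.asarray(mcep_a) - np.asarray(mcep_b)
    per_frame = np.sqrt(2.0 * np.sum(diff**2, axis=1))
    return 10.0 / np.log(10.0) * np.mean(per_frame)   # dB
\end{verbatim}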
\begin{figure}[t]
\centering
\centerline{\includegraphics[scale=1.5,width=1\linewidth]{figure/hpjun_both2.pdf}}
\caption{Examples of spectral envelopes according to pitch shift from (a) FP and (b) FPF for the same speech frame. The black solid line is the spectral envelope of speech with $\lambda=0$, dashed lines are spectral envelopes for different values of $\lambda$ in different colors, and grey dotted lines are the power spectrum of speech. (Best viewed in color).}
\label{fig:sp}
\end{figure}
\begin{table}[t]
\caption{MOS results of speech without pitch-shift with 95\% CI.}
\centering
\label{tab:MOS_norm}
\begin{tabular}{lc}
\toprule
\textbf{Method} & \multicolumn{1}{c}{\textbf{MOS}} \\ \midrule
\textit{GT} & \multicolumn{1}{c}{4.66 $\pm$ 0.09} \\
\textit{GT (Mel+VOC)} & \multicolumn{1}{c}{4.68 $\pm$ 0.09} \\ \midrule
\textit{FP (baseline)} & \multicolumn{1}{c}{4.08 $\pm$ 0.14} \\
\textit{FPF} & \multicolumn{1}{c}{4.12 $\pm$ 0.14} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Subjective Evaluation}
For the subjective evaluation, we compared the mean opinion scores (MOS) and speaker preservation of pitch-shifted speech generated by FP and FPF. Pitch values were manipulated as in Equation (\ref{f0_shift}) with $\mathcal{\lambda} \in \{-8,-6,-4,4,6,8\}$. In addition, to evaluate the MOS of speech without pitch-shift, ground-truth audio (GT) and audio generated by converting the ground-truth mel-spectrogram to a waveform with VocGAN (GT (Mel+VOC)) were compared together. All methods generated speech from the same input transcripts and predicted duration and pitch.
\subsubsection{Audio Quality}
Twenty samples from each model were randomly ordered and evaluated, for a total of $80$ samples. A total of $18$ native Korean speakers participated and were asked to score each sample from $1$ to $5$\footnote{Samples are available at \url{https://nc-ai.github.io/speech/publications/fastpitchformant}.}.
Table \ref{tab:MOS_norm} presents the MOS results without pitch-shift and Table \ref{tab:MOS_shift} presents the MOS results of the pitch-shifted case. We found that the MOS results of FPF were comparable to those of the baseline in the case without pitch-shift. In the pitch-shifted case, FPF generated speech with audio quality close to that of the synthesized speech without pitch-shift, even when $|\lambda|=4$. As the magnitude of the pitch-shift scale increased, the difference between FP and FPF grew larger. We can therefore conclude that FPF generates speech with improved quality compared to FP even when the pitch is significantly shifted.
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\caption{MOS results of pitch-shifted speech with 95\% CI. The numbers in parentheses indicate the ratio of variation from the average pitch.}
\centering
\label{tab:MOS_shift}
\begin{tabular}{l|cccccc}
\toprule
& \multicolumn{6}{c}{\textbf{pitch shift scale ($\lambda$)}} \\ \hline
\textbf{Method} &\begin{tabular}[c]{@{}c@{}}\textbf{-8}\\\scriptsize{(63\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{-6}\\ \scriptsize{(71\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{-4}\\ \scriptsize{(79\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{+4}\\ \scriptsize{(126\%)}\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\textbf{+6}\\ \scriptsize{(141\%)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{+8}\\ \scriptsize{(159\%)}\end{tabular} \\ \hline
\textit{\begin{tabular}[c]{@{}l@{}} FP (baseline)\\ $\quad\pm$C.I.\end{tabular}} & \begin{tabular}[c]{@{}c@{}}1.69\\ 0.12\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.57\\ 0.23\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.38\\ 0.17\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.67\\ 0.16\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.92\\ 0.18\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.97\\ 0.15\end{tabular} \\ \hline
\textit{\begin{tabular}[c]{@{}l@{}} FPF\\ $\quad\pm$C.I.\end{tabular}} & \begin{tabular}[c]{@{}c@{}}2.77\\ 0.17\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.38\\ 0.16\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.55\\ 0.15\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.8\\ 0.16\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.33\\ 0.15\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.74\\ 0.17\end{tabular} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=0.95\linewidth]{figure/preference.pdf}}
\caption{Results of speaker similarity preference tests. For the ratio of variation from the average pitch, see Table \ref{tab:MOS_shift}.}
\label{fig:preference}
\end{figure}
\subsubsection{Speaker Preservation}
To evaluate the speaker preservation of speech with pitch-shift, we conducted a speaker similarity preference test.
Participants were requested to select the speech more similar to the original speaker's voice from the FP and FPF samples.
The same samples that were used in the MOS evaluation for the pitch-shifted case were used. The results are depicted in Figure \ref{fig:preference}. There was no significant difference in speaker similarity when the pitch-shift scale was relatively small ($|\lambda|\leq4$). However, when $|\lambda|>4$, most participants answered that samples from FPF preserved the speaker characteristics better than samples from FP. Thus, we verified that FPF can synthesize pitch-shifted speech while preserving speaker characteristics.
\section{Conclusion}
This study presents a non-autoregressive FFT-based TTS model called FastPitchFormant. Based on the source-filter theory, FastPitchFormant has a decomposed structure that separately handles text and acoustic features and generates speech from them. Objective results verified that FastPitchFormant has improved pitch reproducibility and pronunciation stability. Subjective results also showed that FastPitchFormant can synthesize speech with better audio quality than FastPitch, even for widely adjusted pitch values.
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
According to modern cold dark matter cosmology, galaxies are hierarchically assembled by the merging
or accretion of small fragments~\citep{Bau96,Kly99,Moo99,Diemand2007}. In this theory, the stellar
halos of galaxies, such as the Milky Way, are mostly built up from small substructures such as satellite
galaxies~\citep{Searle1978,Joh98,Bul01,Aba06,Fon06,Moo06}. These satellite systems suffer significant
tidal disruption and mass loss due to the tidal forces and shocks of their host galaxies during the process of accretion,
thereby producing a number of stellar substructures, such as tidal tails or streams in the galactic halo~\citep{Bul05}.
Thus, the study of stellar streams in the Milky Way is valuable for reconstructing the accretion history of the
Galaxy~\citep{Koposov2010,Law2009} and for understanding the potential of the Galaxy~\citep{Ode09}.
In the last two decades, numerous stellar streams and tidal tails have been discovered in the Galactic halo.
The Sagittarius dwarf galaxy and its stellar streams~\citep{Iba94,Iba95,Iba97,Iba01,Viv01,Maj03,New03,Mar04,Bel06a}
are the best studied among the many recently discovered stellar streams
~\citep{Hel99,Ive00,Yan00,Yan03,New02,Mart04,Roc04,Mar05,Duf06,Gri06,Jur08}.
Sky survey projects, such as the Sloan Digital Sky Survey (SDSS) and the Two Micron All Sky Survey (2MASS),
are discovering more stellar substructures in the Galactic halo; newly-discovered stellar streams include
the Virgo stellar stream~\citep{Vivas2006, Vivas2008}, the Orphan Stream~\citep{Gri06,Zuc06,Bel07}, and
the Cetus stream~\citep{New09, Koposov2012}. More recent works have reported that some of these
streams are associated with globular clusters~\citep{Drake2013,Grillmair2013}.
Globular clusters have been one of the most investigated stellar systems. They have provided crucial information
about the formation and evolutionary mechanisms of the Galaxy. However, recent photometric and spectroscopic studies are still
amending the accepted the view of how the globular clusters formed and their contribution to the formation of the Milky Way.
It appears that they are not just simple stellar populations, as was previously thought~\citep{Gratton2004,Carretta2009},
and some of them, such as $\omega$ Centauri~\citep{Lee99} and NGC 6656~\citep{Lee09},
are even considered to be surviving remnants of the first building block that merged into the Milky Way.
Several globular clusters in the Milky Way~\citep[about 27\% of the Milky Way's globular clusters;][]{Mac04} could have
formed via the accretion or merging of more complex systems. In addition, recent work suggests that globular clusters
were $8-25$ times more massive when they first formed than they are at present~\citep{Conroy2011,Schaerer2011}.
Thus, the stellar streams around globular clusters are important objects to study for understanding the merging or accretion
history of the Milky Way and to gather information regarding the dynamical evolution of globular clusters.
Indeed, the remarkable long tidal tail of Palomar 5~\citep{Ode01,Gril06a} and NGC 5466~\citep{Bel06b,Grill06}, as well as
the presumed globular cluster stream GD-1 ~\citep{Gril06b}, are
spectacular examples of globular cluster streams.
The tidal bridge-like features and common envelope structures around M53 and NGC 5053~\citep{Chu10} are also particularly
interesting, as they are evidence of an accretion event of dwarf galaxies onto the Milky Way.
Slightly extended tidal substructures also appear in the vicinity of several globular clusters~\citep{Gri95,Leo00,Soh03}.
Despite numerous discoveries, most of the globular cluster streams found to date have been
in the Galactic outer halo. However, there are more than 40 globular clusters in the Galactic bulge region,
and the origin of metal-poor globular clusters in the bulge region is still unclear.
There have been a few studies of the stellar streams of the globular clusters in the bulge region.
The stellar substructure around globular cluster NGC 6626 was the first discovery in the bulge region~\citep{Chun2012}.
In hierarchical models, vigorous merging events of subclumps
occur in the bulge region of galaxies like the Milky Way~\citep{Kat92,Bau96,Zoc06}. These merging events then result in a wide
metallicity distribution~\citep[$-1.5\leq\lbrack Fe/H\rbrack<0.5$;][]{McW94,Zoc03} of stars in the bulge region~\citep{Nak03}.
Terzan 5 is an example of a merging event in the bulge region in the past~\citep{Fer09}.
Therefore, we can expect to discover extratidal substructures around some of the globular clusters in the
bulge region.
In this study, we investigated the spatial density distribution of stars around four metal-poor globular clusters
in the Galactic bulge region - NGC 6266, NGC 6626, NGC 6642, and NGC 6723.
We define the bulge region as the area within $3$ kpc of the Galactic center.
Table ~\ref{para} shows the basic parameters of the four globular clusters.
In order to reduce the effect of high extinction toward the bulge,
we used wide-field ($45'\times45'$) near-infrared $JHK$ photometric data obtained
from the observation of the Wide Field Camera (WFCAM) array attached to the United Kingdom Infrared Telescope (UKIRT).
Section 2 presents our observations, data reduction process, and photometric measurements.
The statistical analysis and filtering technique used for member star selection are described in Section 3.
In Section 4, we investigate two-dimensional stellar density maps and the radial profile of the clusters
to trace the stellar density features.
The discussion of our investigation is presented in Section 5.
Lastly, we summarize the results and discussion in Section 6.
\section{OBSERVATION, DATA REDUCTION AND PHOTOMETRY}
Photometric imaging data for four globular clusters were observed using the WFCAM on the 3.8 m
UKIRT in Hawaii in April and July of 2010. The WFCAM is an infrared mosaic detector of four Rockwell
Hawaii-II (HgCdTe 2048$\times$2048) arrays with a $12'.83$ gap between the arrays.
Four separately pointed observations (four tiles) result in a filled-in sky area of 0.75 square degrees with a pixel scale of $0''.4$.
Our target clusters were observed using the four-tile observations in three band filters ($J, H$, and $K$) to get continuous sky images covering
a total field-of-view of 0.75 square degrees.
The individual image of each cluster for one tile was recorded in short ($1$ sec for $JHK$) and long ($5$ sec for $JH$, and $10$ sec for $K$) exposures to optimize the photometry of bright and faint stars. A five-point dithering pattern was applied to reject bad pixels and cosmic rays.
At each dithered position, $2\times2$ micro-stepping observation was also carried out to get well-sampled stars.
A separate sky observation was obtained for removing thermal background emission after observing the target images.
We also observed several comparison fields in the bulge region using the same observation strategy for observing the clusters.
The stars in the comparison field area were used in the color-magnitude (C-M) mask filtering and optimal contrast filtering processes in order to estimate the field-star contamination around the globular clusters in the
color-magnitude diagram (see Section 3).
Three comparison fields were finally selected using the following condition: the comparison field was not very distant from the clusters on the sky, and the morphology
of the color-magnitude diagram for the field stars was similar to that of the globular clusters.
The coordinates of the selected comparison fields are listed in Table~\ref{para}.
Table~\ref{log} provides the exposure times in each filter for the four
globular clusters.
Standard data reduction for near-infrared imaging, which includes dark subtraction, flat fielding, and the removal of
crosstalk, was completed with the pipeline of the Cambridge Astronomy Survey Unit (CASU).
Thermal background frames were then constructed by median-combining the CASU-processed images of the separate sky observations.
The resulting blank sky images were subtracted from all target images, and the residual sky background level
was also removed from each target image. All sky-subtracted images were interleaved into a single image for photometric
analysis using SWarp~\citep{Bertin2002}.
The final resampled images of the four globular clusters cover a wide-field area of about $45'\times45'$, which is sufficiently
large to cover the region from the center of each target cluster out to twice its tidal radius.
The average seeing in the resampled images is between $0.75$ and $1.05$ arcsec. Table~\ref{log}
summarizes the average FWHM values for each filter.
Stellar photometry on each detector was performed using the point-spread function (PSF) fitting routine ALLSTAR~\citep{Stetson1988}.
A PSF varying quadratically with position was first constructed from
$100-150$ bright, isolated stars using the DAOPHOT II program~\citep{Stetson1987}.
The quality of the PSF was improved by removing the faint neighboring stars around the PSF stars and iteratively reconstructing the PSF.
Then, the instrumental magnitudes of the individual stars on each array were estimated with ALLSTAR using the improved PSF.
The raw positions of the stars on the detector were transformed into the equatorial coordinate system using the Two Micron
All Sky Survey (2MASS) point-source catalog. The instrumental magnitudes of the stars were transformed onto the 2MASS filter system
using the color terms between the WFCAM and 2MASS systems~\citep{Dye2006}. The photometric zero-points were then
computed and calibrated by comparing the magnitudes of stars in common between our photometric catalog and the 2MASS catalog.
The astrometric and photometric data of each chip on a mosaic were finally combined into
a single data set for each target cluster. Stellar objects with photometric measurement errors larger than $0.1$ mag were removed
in order to reduce spurious detections.
We also measured the individual extinction value of each star according to its position on the
sky using the map of~\citet{Sch98}; the mean $E(J-K)$ reddening and $K$-band extinction values are
$E(J-K)=0.19$ and $A_K=0.14$ for NGC 6266,
$E(J-K)=0.24$ and $A_K=0.18$ for NGC 6626,
$E(J-K)=0.20$ and $A_K=0.15$ for NGC 6642, and
$E(J-K)=0.12$ and $A_K=0.08$ for NGC 6723.
We subtracted the derived extinction values from the observed magnitudes.
\section{PHOTOMETRIC FILTERING FOR MEMBER STAR SELECTION}
In order to accurately trace the stellar distribution around the globular clusters,
it is important to reduce the contamination by field stars and to enhance
the density contrast between the candidate cluster stars and the field stars.
Although many statistical methods for filtering out field stars have been introduced in the past few decades,
the color-magnitude (C-M) mask filtering technique~\citep{Gri95} and the optimal contrast filtering technique~\citep{Ode03}
are the most frequently used. We essentially followed these two methods~\citep[for a detailed description, see][]{Gri95,Ode03,Chun2012}.
We first define new orthogonal color indices $c_1$ and $c_2$ from the one-dimensional distribution of stars in a $(J-K)$ versus $(J-H)$
color-color diagram~\citep[see Figure 2 of][]{Chun2012}. The color indices were chosen in such a way that the $c_1$ axis was placed along the main distribution of the stars, while the $c_2$ axis was perpendicular to the $c_1$ axis. Equation~\ref{eq:rela1} shows the general forms of the two orthogonal color indices, and Table~\ref{coefficients} lists the coefficients $a$ and $b$ of the new color indices for each cluster.
\begin{eqnarray}
\label{eq:rela1}
c_1=a(J-K)+b(J-H), \\
c_2=-b(J-K)+a(J-H) \nonumber
\end{eqnarray}
In the $(c_2, K)$ color-magnitude diagram (CMD), we rejected all stars with $|c_2|>2\sigma_{c_2}(K)$, where $\sigma_{c_2}(K)$ is the dispersion
in $c_2$ for stars with magnitude $K$.
The stars within $2\sim5r_h$ of each cluster center were used to define the rejection limit.
The left panel of Figure~\ref{c1c2cmd} shows a $(c_2, K)$ CMD for stars in $2\sim5r_h$ from the cluster center.
The lines indicate our rejection limit; stars outside this boundary were considered unlikely to be cluster members.
After this preselection in the $(c_2, K)$ plane,
we defined the locus of the cluster on the CMD where the signal-to-noise ratio (S/N) of the cluster star counts is maximized relative to the comparison field stars, using the C-M mask filtering technique in the $(c_1, K)$ plane.
First, we constructed representative CMDs for the cluster and the field using the stars in the central region of the cluster and in the observed comparison fields.
The second and third panels of Figure~\ref{c1c2cmd}
show the $(c_1, K)$ CMDs of stars within $3'.0\sim4'.0$ from each cluster center and
the selected comparison region, respectively. The right panel shows the $(c_1, K)$ CMD of the stars in the total
survey region for the cluster. Then, the cluster and comparison-field CMDs were subdivided into small
subgrid elements, and the signal-to-noise ratio in each subgrid element was calculated using Equation~\ref{eq:sn1}:
\begin{eqnarray}
\label{eq:sn1}
s(c_1,K)=\frac{n_{cl}(c_1,K)-gn_f(c_1,K)}{\sqrt{n_{cl}(c_1,K)+g^2n_f(c_1,K)}},
\end{eqnarray}
where $n_{cl}(c_1,K)$ and $n_{f}(c_1,K)$ are the numbers of stars in the subgrid elements for the cluster and comparison regions, respectively,
and $g$ is the area ratio of the cluster region to the comparison region.
From the array $s$, we computed the cumulative numbers of stars for the cluster, $N_{cl}(k)$, and the comparison field, $N_f(k)$,
by sorting the elements of $s(c_1,K)$ in descending order with a one-dimensional index $k$.
Then, a cumulative signal-to-noise ratio $S(k)$ was calculated using Equation~(\ref{eq:csn1}):
\begin{eqnarray}
\label{eq:csn1}
S(k)=\frac{N_{cl}(k)-gN_f(k)}{\sqrt{N_{cl}(k)+g^2N_f(k)}}
\end{eqnarray}
$S(k)$ reaches a maximum for a specific subarea of the C-M plane, and the $s(c_1, K)$ value corresponding to the maximum
of $S(k)$ was chosen as the optimal threshold, $s_{lim}$.
The filtering mask area in the $(c_1,K)$ plane was then determined by selecting the subgrid elements
with $s(c_1,K)$ values larger than $s_{lim}$.
The solid lines in the second, third, and fourth panels in Figure~\ref{c1c2cmd}
represent the selected filtering mask envelope.
The entire sample of stars within the determined filtering mask area was considered in the following filtering analysis.
Finally, we applied the optimal contrast filtering technique to the stars within the mask envelope obtained from the C-M mask
filtering technique. We calculated the number density distribution of stars in the $(c_1, K)$ C-M plane (the Hess diagram)
for the cluster and the comparison field. The bin size of the Hess diagram is the same as that used in the C-M mask filtering technique.
Then, the density of cluster stars $n_c(k)$ at a given position $k$ on the sky was derived by Equation~\ref{eq:optimal}:
\begin{eqnarray}
\label{eq:optimal}
n_{c}(k)=\frac{\sum_j[n(k, j)f_c(j)/f_F(j)-n_F(k, j)f_c(j)/f_F(j)]}{\sum_jf^2_c(j)/f_F(j)},
\end{eqnarray}
where $n(k, j)$ and $n_F(k, j)$ are the observed star density and the field star density in the $j$th subgrid of the
optimal mask envelope in the C-M plane and the $k$th bin in position on the sky; $f_c$ and $f_F$ are the
normalized density distributions of the cluster and the comparison field within the optimal mask envelope of the $(c_1, K)$ Hess diagram.
In the optimal contrast filtering technique, the ratio $f_c(j)/f_F(j)$ of the number density of the cluster stars to that of the comparison field
stars in the optimal mask envelope of the $(c_1, K)$ CMD is used as a conditional weight to determine cluster membership.
The number density of stars on the sky was calculated by summing the conditional weights of all stars and dividing this
sum by the factor $a=\sum_jf^2_c(j)/f_F(j)$. This returns the estimated number of cluster stars $n_c$ plus a term $n_F/a$,
i.e., the number of contaminating field stars attenuated by $a$.
\section{SPATIAL DENSITY FEATURES OF STARS IN THE VICINITY OF THE FOUR GLOBULAR CLUSTERS}
In this section, we present the spatial density features of the stars in the vicinity of the four globular clusters.
The large area of the WFCAM data ($45'\times45'$ on the sky) enables us to examine the features of the stellar
density distribution from the cluster center out to a distance of at least twice the tidal radius.
The two-dimensional density distribution and the radial density profile for each cluster were investigated using the
selected stars by C-M mask filtering technique and the weighted number obtained from the optimal contrast filtering technique.
The two-dimensional stellar surface density maps of the clusters were constructed using Equation~\ref{eq:optimal}.
The sky plane of each cluster was divided into small grids with a pixel size of $0'.9\times0'.9$, and the weighted
counts of stars were calculated in these pixels.
The field star contamination map was then constructed by masking the central region within $1\sim1.5r_t$ and fitting
a low-order bivariate polynomial model. Figure~\ref{background} shows the constructed field star contamination map for each
cluster; the density gradients and variations of the field stars across each globular cluster are represented in gray scale.
We subtracted these field star contamination maps to make the field across each
globular cluster essentially flat. A residual background density map was also constructed using the same method as for the field
star contamination maps, but in this case we subtracted only the mean density level of the residual background density map.
The star number density map of each cluster was then smoothed with a Gaussian kernel to
increase the signal-to-noise ratio and enhance the spatial frequencies of interest.
The isodensity levels are described by contours in units of the standard deviation $(\sigma)$ of the
background level of the smoothed maps for the various kernel values.
The distribution map of the E(B-V) values over the observed region was also derived from the
map of~\citet{Sch98} to examine possible extinction effects.
The radial number density profiles of the globular clusters are useful for understanding the internal and outer structure of the globular clusters.
This overall structure has long been described by the~\citet{King66} model, which is characterized by a truncated density
profile at the outer edge. However, according to the results of recent wide-field observations,
the radial number density profiles of several globular clusters are not truncated at their outer edges; instead, they show an extended
overdensity feature that departs from the behavior predicted by the~\citet{King66} model and drops smoothly toward
the background level~\citep{Gri95,Leo00,Tes00,Roc02,Lee03,Ols09, Carballo2012}.
Numerical simulations have also reproduced and characterized this overdensity feature as a break in the slope of the
radial profile due to extratidal stars around globular clusters~\citep{Com99,Joh99,Joh02}.
In these models, the radial profile beyond the break in slope is described by the power law $r^{-\gamma}$.
The~\citet{Wilson1975} model has also been used to describe the structure of globular clusters.
Indeed,~\citet{McLau05} fitted the Wilson model to the radial density structures of globular clusters in the Galaxy and the
Magellanic Clouds. We note that the Wilson model is spatially more extended than the King model~\citep[see][]{McLau05}.
We also derived the radial surface density profile of each cluster and searched for
evidence of extratidal extensions at the outer edges of the clusters.
In order to construct the radial density profiles, we used concentric annuli with a width of $0'.45$ ranging
from the cluster center out to a radius of $20'.0$, and counted the weighted number of stars in each annulus.
The number density of stars was then calculated by dividing this sum by the area of the annulus.
The field star contribution to the radial profile was estimated from the field star contamination maps in Figure~\ref{background} and
subtracted from the radial profile. The residual background density level, measured on the residual background density map, was then also removed from the counts.
The error on the number density was estimated by propagating the Poisson errors of the star counts.
We examined the radial completeness, in order to quantify the crowding effect in the inner regions of the clusters, by applying artificial star tests,
and corrected for the crowding effect using the radial completeness ratio.
However, the most central regions of the clusters were not resolved well enough to derive the
number density profile because of the crowding effect, even after this compensation. Therefore, for the central regions,
we combined our number density profiles with the previously published surface brightness profiles of~\citet{Tra95}.
The surface brightness profiles of~\citet{Tra95} were converted to a number density scale using the relation $\log N(r)=-\mu(r)/2.5+C$, where
$C$ is an arbitrary constant chosen to match the number density profile to the surface brightness profile.
The final number density profiles were then empirically fitted with the King and Wilson models.
We also derived radial surface density profiles in different directions,
for which we divided each annulus into eight sections (S1-S8)
of $45^{\circ}$ each, as shown in Figure~\ref{annulus}.
The annulus widths were set to $\sim0'.5$ in the innermost region ($r<5'$), $\sim1'.0$ in the middle region
($5'<r<10'$), and $2'.0$ in the outer region ($10' < r < 20'$).
We note that some radial density points with small number statistics could not be plotted
because the number densities in those regions were lower than the background density level.
\subsection{NGC 6266 (M62)}
The star count map around NGC 6266 and the surface density maps smoothed with Gaussian kernel values of
$0^{\circ}.045$ and $0^{\circ}.12$ are shown in Figure~\ref{N6266contour}, from the top-left panel to
the bottom-left panel. The gray density map in the bottom-right panel of Figure~\ref{N6266contour}
is the distribution map of E(B-V) from~\citet{Sch98}. The contour lines indicate $0.5\sigma, 1.0\sigma, 2.0\sigma, 3.0\sigma, 4.0\sigma,
6.0\sigma$, and $10.0\sigma$. The contour lines for a Gaussian kernel value of
$0^{\circ}.12$ are overlaid on the star count map and the E(B-V) map. The direction toward the Galactic center and the direction
perpendicular to the Galactic plane are indicated by a solid line and a dashed line, respectively. The proper motion of NGC 6266, i.e.,
$\mu_{\alpha}\cos\delta=-3.50\pm0.37$ mas yr$^{-1}$ and $\mu_{\delta}=-0.82\pm0.37$ mas
yr$^{-1}$~\citep{Din2003}, is indicated by a long arrow.
The circle in each panel marks the tidal radius of $r_t=8'.97$~\citep{Har96}.
Figure~\ref{N6266contour} clearly shows overdensity substructures around NGC 6266 that extend in the east and north-west
directions out to $\sim1.5r_t$ at levels above $0.5\sigma$.
The stellar substructure in the east lies along the direction of the Galactic center and the direction opposite to the proper motion.
In addition, the extended structure to the north-west is likely aligned with the opposite perpendicular direction to the Galactic
plane, and its marginal extension seems to bend toward the direction of the proper motion.
We note that the density feature in the southern region of the cluster is likely affected by dust extinction, as shown in the bottom-right panel
of Figure~\ref{N6266contour}.
In the upper panel of Figure~\ref{N6266radial}, the radial surface density profile of NGC 6266 is plotted,
along with the King and Wilson models, which are arbitrarily normalized to our measurements.
In the central region of the cluster, we replaced the number density profile with the surface brightness profile of~\citet{Tra95}.
The profile of~\citet{Tra95} connects smoothly with our number density profile at a radius of log($r'$)$\sim$0.4.
However, the number density profile shows an overdensity feature that departs from the King model and the profile of~\citet{Tra95},
with a break in the slope at a radius of log($r'$)$\sim0.65$ ($\sim0.5r_t$).
Here, we note that the profile of~\citet{Tra95} in the outer region might suffer from background contamination
and biases from bright stars~\citep[see][]{Chun2012,Noy06}, while our number density profile has no such bias.
The overdensity feature seems to extend to a radius of log($r'$)$\sim1.15$ ($\sim1.5r_t$).
The Wilson model shows a more extended profile in the outer region and seems to fit our measurements better than the King model.
The excess density at these radii resembles a radial power law with a slope of $\gamma =-2.90\pm0.19$, which is
steeper than the $\gamma = -1$ predicted for a constant mass-loss rate over a long time~\citep{Joh99}.
Thus, the overdensity feature at the outer region of NGC 6266 is indeed evidence of the extended substructures shown in Figure~\ref{N6266contour}.
The lower panel of Figure~\ref{N6266radial} shows the radial surface number density profiles for the eight angular sections
defined in Figure~\ref{annulus}.
We note that some radial density points in sections 5, 6, 7, and 8 are
not presented because the number densities in these
regions were lower than the subtracted residual background level.
The radial profiles in sections 1, 2, 3, and 4 show overdensity features at radii $0.5r_t\lesssim r \lesssim 1.5r_t$.
The overdensity features in sections 1, 3, and 4 seem to extend to radii beyond $1.5r_t$.
The mean surface densities ($\mu$) in these sections are notably
higher than the total average density and those in the other sections.
In addition, the slopes of the profiles in sections 1 and 4
are shallower than the mean slope of the profile and those of the other angular sections.
These mean density levels and shallow slopes are in
good agreement with the extended stellar substructures in the direction of the Galactic center, in the direction opposite
to the proper motion, and in the opposite perpendicular direction to the Galactic plane.
On the other hand, overdensity features do not appear in sections 5, 6, and 7, where no prominent stellar substructures
were seen in the two-dimensional contour map.
The mean densities in the sections without overdensity features were also somewhat lower than the total average density.
\subsection{NGC 6626 (M28)}
The top-left to bottom-right panels of Figure~\ref{N6626contour} show the star count map of NGC 6626, the isodensity contour maps smoothed with
Gaussian kernel values of $0^{\circ}.045$ and $0^{\circ}.12$, and the distribution of the E(B-V) extinction values of~\citet{Sch98}.
The isodensity contour lines correspond to the level of
$1.0\sigma, 1.5\sigma, 2.0\sigma, 3.0\sigma, 4.0\sigma$, and $6.0\sigma$.
The proper motion of $\mu_{\alpha}\cos\delta=0.63\pm0.67$ mas yr$^{-1}$ and $\mu_{\delta}=-8.46\pm0.67$
mas yr$^{-1}$~\citep{Casetti2013} is represented by a long arrow.
The dashed line and the solid line indicate the direction of the Galactic center and the direction perpendicular to the Galactic plane, respectively.
The tidal radius of NGC 6626 (i.e., $r_t=11'.27$) from~\citet{Har96} is also plotted as a circle.
In Figure~\ref{N6626contour}, it is apparent that the stellar density distribution around NGC 6626
shows distorted overdensity features and tidal tails extending beyond the tidal radius.
The tidal tails seem to stretch out symmetrically on both
sides of the cluster, extending in the east and west directions from the cluster center out to a radial distance of $\sim2r_t$.
In addition, the two tidal tails are likely aligned with the directions of the Galactic center and anti-center.
Although there is no apparent extension toward the direction of the
proper motion in the surface density maps, there is a clumpy substructure in the northern area, which is
aligned with the direction opposite to the proper motion.
~\citet{Chun2012} first found the prominent overdensity feature that extends toward the
perpendicular direction to the Galactic plane within the tidal radius of NGC 6626.
We also found a stellar substructure similar to that found by~\citet{Chun2012} in the contour map with a kernel
value of $0^{\circ}.045$.
This substructure extends toward north-west direction within the tidal radius but is not as prominent as the substructure observed
by~\citet{Chun2012}.
We note here that our spatial density distribution has a wider field of view than that of ~\citet{Chun2012}, which has enabled us
to estimate and calibrate the underlying background substructure more accurately. Thus, the stellar density structure in this
study is more homogeneous and less affected by field star contamination.
The radial surface density profile of NGC 6626 is presented in the upper panel of Figure~\ref{N6626radial}.
In the central region of the cluster, the number density profile was substituted with the surface brightness profile of~\citet{Tra95},
which connects to the number density profile at intermediate radii. The theoretical King and Wilson models are also plotted
to characterize the observed radial profile. It is apparent that our number density profile does not follow the
King and Wilson models in the outer region of the cluster;
instead, it shows an overdensity feature with a break in the slope of the profile at a radius of log($r'$)$\sim0.5$
$(\sim0.28 r_t)$.
The overdensity feature extends out to log($r'$)$\sim1.1$ $(\sim1.5 r_t)$,
and the profile in this region is characterized by a power law with a slope of $\gamma=-1.29\pm0.08$.
This slope is not very different from the slope of $\gamma = -1$, predicted for a constant mass-loss rate over a long time ~\citep{Joh99}.
The overdensity feature in the radial profile is indicative of extended tidal tails and substructures shown
in Figure~\ref{N6626contour}.
The radial surface density profiles of eight angular sections for NGC 6626 are plotted in
the lower panel of Figure~\ref{N6626radial}.
In general, all the radial profiles show overdensity features with a break in slope in the outer region of the cluster.
The estimated mean surface densities $(\mu)$ in angular sections 4, 5, and 6 are higher than those in the other sections,
and the slopes ${\gamma}$ of the profiles in sections 4 and 5 are somewhat shallower than those in the other sections.
Furthermore, the radial profiles in sections 4 and 5 still maintain the overdensity features at a radial distance of log($r'$)$\sim1.3$.
These overdensity features correspond to the apparent extratidal tails extending in the direction perpendicular to the
Galactic plane and toward the Galactic center, as shown in Figure~\ref{N6626contour}.
Although the density in section 1 is not as high as that in section 5, this is because
the density near the tidal radius is low.
Indeed, a weak connection between the cluster and the tails is seen in the contour map with a low kernel value of $0^{\circ}.045$.
However, the tail extends out to a radial distance of $2r_t$, and this density feature is represented by a shallow slope
and a high density at the outer radii in the radial profile of section 1.
\subsection{NGC 6642}
We plot the star count map, isodensity contour maps, and the distribution map of the
E(B-V) values~\citep{Sch98} from the top-left to the bottom-right panel of Figure~\ref{N6642contour}
in order to investigate the stellar distribution of NGC 6642. Gaussian kernel widths of $0^{\circ}.045$ and $0^{\circ}.12$
were applied to find spatial coherence in the stellar distribution.
The isodensity contour levels are $0.5\sigma, 1.0\sigma, 2.0\sigma, 3.0\sigma, 5.0\sigma, 8.0\sigma$,
and $10.0\sigma$.
The contour lines in the star count map and the distribution map of E(B-V) correspond to those of a smoothed map
with a Gaussian kernel width of $0^{\circ}.12$.
The circle centered on the cluster indicates the tidal radius of $r_t=10'.07$~\citep{Har96}.
The solid and dashed lines represent the direction of the Galactic center and a perpendicular direction to the
Galactic plane, respectively.
As can be seen in Figure~\ref{N6642contour},
the stellar distribution of NGC 6642 seems to show a drop in density around the tidal radius in specific directions, as well as
clumpy structures outside the tidal radius.
The most prominent extended stellar substructure elongates in the northern direction beyond the tidal radius.
A clumpy chunk in the southern region seems to be a counterpart of the overdensity feature in the northern region.
A marginal extension also appears in the eastern direction, which seems to be aligned with the opposite perpendicular
direction to the Galactic plane. Unfortunately, we could not identify substructures that might be associated with
the proper motion of the cluster, because the proper motion of NGC 6642 has not yet been reported.
The upper panel of Figure~\ref{N6642radial} shows the radial surface density
profile of NGC 6642 along with the King model, the Wilson model, and the surface brightness profile of~\citet{Tra95}.
The radial surface profile of NGC 6642 shows an apparent overdensity feature that departs from both model predictions
from a radius of log($r'$)$\sim0.5$ ($\sim0.3r_t$) out to the tidal radius. However, the overdensity feature does not continue, and
the density of the radial profile decreases abruptly at the tidal radius.
The drop in density and the local clumpy substructures around and outside the tidal radius shown in Figure~\ref{N6642contour}
seem to be associated with this fall of the radial profile and the low density in the outer region.
The overdensity feature within the tidal radius was fitted by a power law with a slope of $\gamma=-1.27\pm0.10$.
The radial surface density profiles for the eight angular sections, which are plotted in the lower
panel of Figure~\ref{N6642radial}, represent the stellar density distribution around
NGC 6642 better than the averaged radial density profile in the upper panel of Figure~\ref{N6642radial}.
The radial profiles in angular sections 1 and 8 show clear overdensity features within the tidal radius, and section 1 has the largest mean
density of the eight sections. Although some radial points in these sections are not plotted because of small number statistics,
there are still considerable number densities in the outer region.
This is in agreement with the overdensity feature extending toward the eastern side in the two-dimensional surface density map.
In contrast to these angular sections, the overdensity feature in section 2 is disconnected at the tidal radius because of its
low density in the outer region.
In the two-dimensional surface contour map, we cannot find obvious stellar substructures in that region.
The radial density profile in section 3 shows the most prominent overdensity feature, with high density and the flattest slope, and contains
a density excess at the outer radii. These are representative of the prominent extended substructure in the northern region
of the isodensity contour map in Figure~\ref{N6642contour}.
In section 7, the radial density profile likely shows the overdensity feature associated with the counterpart of the
extended substructure in the northern region.
In sections 4 and 5, we could not find clear overdensity features, and
the two-dimensional contour map also does not show prominent stellar substructures in those regions.
\subsection{NGC 6723}
Figure~\ref{N6723contour} shows a star count map around NGC 6723,
surface density maps smoothed with Gaussian kernel values
of $0^{\circ}.07$ and $0^{\circ}.11$ and a distribution map of E(B-V) value~\citep{Sch98}
from the upper-left panel to the lower-right panel.
We note that different Gaussian kernel values were selected for NGC 6723 in order to
highlight structures with similar spatial extents.
Isodensity contours were overlaid on the maps with contour levels of
$2.0\sigma, 2.5\sigma, 3.0\sigma, 4.0\sigma, 5.0\sigma$, and $7.0\sigma$.
The long dashed line and solid line
indicate the perpendicular direction to the Galactic plane and the direction of the Galactic center, respectively.
The proper motion of NGC 6723, i.e., $\mu_{\alpha}\cos\delta=-0.17\pm0.45$ mas yr$^{-1}$ and $\mu_{\delta}=-2.16\pm0.50$ mas
yr$^{-1}$~\citep{Din2003}, is indicated by an arrow.
The tidal radius of $r_t=10'.51$~\citep{Har96} is represented by a circle.
As can be seen in Figure~\ref{N6723contour}, there are weak extended substructures beyond the tidal radius of NGC 6723 at
levels above $\sigma$.
Small density lobes appear to extend toward the eastern and western sides, i.e.,
along the direction of the Galactic center, the direction perpendicular to the Galactic plane, or their opposites.
In addition, the isodensity contour lines show a horn-shaped structure in the northern region, which corresponds to the direction
opposite to the proper motion.
A marginal extension appears near the tidal radius in the southern region,
but this weak substructure does not seem to extend any further.
We note that there is a large reflection nebula in the distant southern region of the cluster.
Thus, dust in the outskirts of the reflection nebula could affect the detected number
density of the marginal extension in the southern region. Indeed, the value of $E(B-V)$ in the southeast region is higher than
in the other regions. Thus, more extended stellar substructures could exist in the obscured region.
The radial surface densities of NGC 6723, measured in concentric annuli, are shown
in the upper panel of Figure~\ref{N6723radial}.
The theoretical King and Wilson models were arbitrarily normalized to our measurements, and
the surface brightness profile of~\citet{Tra95} was used as a substitute for our measurements in the central region.
Apparently, the radial number density profile departs from the theoretical King model in the outer region of the cluster, while
the Wilson model fits our radial profile slightly better.
The overdensity feature, which departs from the King model with a break in slope, appears at a radius of log($r'$)$\sim$0.8 ($\sim$0.6$r_t$)
and extends to a radius of log($r'$)$\sim$1.15 ($\sim$1.5$r_t$).
The slope in this overdensity region is characterized by a power law with $\gamma =-1.89\pm0.28$, which is steeper than
the value predicted by theoretical simulations with a constant orbit-averaged mass-loss rate~\citep{Joh99}.
The radial surface profiles of the eight angular sections are shown in the lower panel of Figure~\ref{N6723radial}.
The overdensity feature in the region $0.6r_t\lesssim r \lesssim 1.5r_t$
is commonly detected in specific angular sections.
The radial profile in section 1, where the prominent substructure appears in the two-dimensional contour map, has the highest
mean number density of the eight sections. In addition, sections 2, 5, and 8 have somewhat
higher density levels in the overdensity region, in good agreement with the horn-shaped substructure, the side lobe on the western side,
and the marginal structure in the southern region shown in Figure~\ref{N6723contour}.
Some radial points near the tidal radius in section 7 are not plotted because of their low number density.
This low density feature also appears as a bay-shaped structure with low-level contours in the southern region of the contour map with
a kernel value of $0^{\circ}.07$.
A possible narrow dust lane, which might extend from the reflection nebula in the southern region, could cause this low number density.
Unfortunately, the extinction map of~\citet{Sch98} does not show an apparent dust lane because of its low resolution.
Thus, a study of the dust extinction at high resolution is necessary in order to study the stellar distribution around the cluster.
\section{DISCUSSION}
All of our target clusters reside within $\sim3$ kpc from the Galactic center.
The ancient globular clusters in the inner region can provide clues regarding the formation of the Galactic bulge and disk.
Indeed, the spatial, chemical, and kinematic properties of several globular clusters in the inner regions
are consistent with bulge, disk, and even bar membership~\citep{Burkert1997,Barbuy1998,Cote1999,Heitsch1999}.
However, a definite decomposition of globular clusters into bulge, disk, or bar components is not a
trivial task in the central region of the Galaxy, because the inner regions are superposed with
various stellar populations of the bulge, disk, and bar.
In addition, in the inner region of the Galaxy, the globular clusters experience extreme dynamical evolution due to
bulge/disk shock and strong tidal effects~\citep{Aguilar1988,Shin2008}.
Thus, reliable measurements of metallicities, orbits, and distances for globular clusters in the central regions are necessary
in order to understand the evolution of the globular cluster in the bulge region.
In this section, we review previous results on our target globular clusters and discuss the properties of the clusters that
could affect the substructures around them.
NGC 6266 is a high-density~\citep[log$\rho_c\sim5.34$;][]{Jacoby2002,Possenti2003} cluster and the ninth most luminous globular
cluster, located $\sim1.7$ kpc from the Galactic center~\citep{Har96}.~\citet{Din2003} found that NGC 6266 has a large
rotation velocity, suggesting that this cluster belongs to a disk system rather than to a pressure-supported system.
The total destruction rate of this cluster is about $\nu_{tot}=0.644$ per Hubble time, and the destruction rate ratio is
$\nu_{tot}/\nu_{evap}=1.0$~\citep{Gnedin1997}, where $\nu_{evap}$ is the evaporation rate per Hubble time. This destruction
rate and ratio indicate that the dynamical evolution of this cluster is mainly internally driven, e.g., by two-body relaxation.
In this study, we found that the marginal extensions around the cluster bend toward the direction of the
proper motion and are likely to show an S-shaped feature. Thus, we can interpret this
orientation of the stellar substructure as a signature that stars evaporated by internal two-body relaxation find
themselves in a Galactic orbit similar to that of the parent cluster. However, we also found that
the stellar density configuration around NGC 6266 shows a spatial coherence associated with the
dynamical interaction with the Galaxy: the prominent density feature extends toward the direction of
the Galactic center.
Thus, the stellar density distribution around NGC 6266 could be interpreted as an example of dynamical cluster evolution in which
tidal shocking accelerates two-body relaxation~\citep{Kundic1995,Din99}.
~\citet{Lee07} classified NGC 6266 as an extended blue horizontal branch (EHB) cluster and suggested
that clusters with EHBs could be remnants of the first building blocks in the early universe. Thus, the stellar density feature
around NGC 6266 could be tidally associated with these unknown first building blocks.
More accurate orbit information and an investigation of the field star contamination around NGC 6266 are necessary to understand
the dynamical evolution of NGC 6266.
NGC 6626 is a massive and moderately metal-poor globular cluster in the Galactic bulge region.
~\citet{Din99} first found a thick-disk orbit for this cluster and proposed the possibility that
the cluster was produced in a satellite galaxy and separated from its parent during the accretion process.
In addition, according to~\citet{Lee07}, this cluster is a
moderate EHB cluster and could be a remnant of the first building blocks.
~\citet{Chun2012} first found the stellar density substructure around NGC 6626, which extends in the direction
perpendicular to the Galactic plane, indicating a disk-shock effect. They also discussed the
possibility of an accretion scenario for the origin of this cluster.
More recently,~\citet{Casetti2013} updated the proper motion and orbit of this cluster. They
noted that the cluster is located at its apocenter and that its orbit is rather eccentric and disruptive, indicating
that the cluster may experience substantial mass loss. In our study, we found prominent stellar substructures
extending toward the Galactic center and anti-center directions. This spatial orientation of the tidal tails is in good agreement with
the findings of~\citet{Mon07}, namely, that the inner tails are oriented toward the Galactic center at the apocenter position.
Moreover, we confirmed the stellar substructure extending in the direction perpendicular to the Galactic plane, which was
found by~\citet{Chun2012}.~\citet{Casetti2013} mentioned that their newly updated proper motion vector with the solar
motion subtracted is aligned with this substructure. They speculated that the recent disk plane crossing, about 4 Myr ago,
might have contributed to the construction of the tidal extension in the direction perpendicular to the Galactic plane.
Therefore, this cluster is very likely to suffer from disk/bulge shocks and the Galactic tidal force.
However, the total destruction rate of $\nu_{tot}=0.546$ per Hubble time and the destruction ratio of
$\nu_{tot}/\nu_{evap}\sim1.0$ of~\citet{Gnedin1997} do not seem to agree with this interpretation.
Thus, we note that a recalculation of the destruction rate using the recently updated proper motion and eccentric orbit is necessary.
NGC 6642 has a dense central core with a high concentration of $c=1.99$~\citep{Har96}. Previous studies~\citep{Tra95,Balbinot2009}
have also indicated that there is no well-resolved core in the radial profile and have classified the cluster as a core-collapsed cluster.
~\citet{Barbuy2006} noted that the age of NGC 6642 is comparable to that of M5 and suggested that the cluster
is one of the few genuine metal-poor and old clusters in the bulge region; therefore, it can contain fossil information about the Galaxy.
On the other hand,~\citet{Balbinot2009} examined the position of NGC 6642 in the HB type versus metallicity diagram
and found that NGC 6642 lies at the position of young halo populations, not at the position of
old halo and disk/bulge populations. They concluded that NGC 6642 is a transition
cluster between the inner halo and the outer bulge, considering its age, its position in the Galaxy, and its HB morphology.
They also found clear evidence of mass segregation
and depletion of low-luminosity stars in the luminosity and mass functions and attributed this
dynamical structure to disk and bulge shocking.
Indeed, we found clumpy stellar substructures that extend toward the north and in the opposite
perpendicular direction to the Galactic plane beyond the tidal radius.
The large total destruction rate of $\nu_{tot}=1.90$ per Hubble time
and the destruction ratio of $\nu_{tot}/\nu_{evap}\sim8.3$ of~\citet{Gnedin1997} also indicate
that a strong environmental force, such as the tidal effects of disk and bulge shocking,
might affect the dynamical evolution of this cluster.
Thus, we interpret that, although NGC 6642 is a core-collapsed cluster and two-body relaxation would affect
its dynamical evolution, the extended stellar substructures around NGC 6642
are likely the result of a strong gravitational interaction with the Galaxy.
Unfortunately, we were unable to
investigate a definite association between the observed stellar substructures and the cluster's motion because
there is no published proper motion for this cluster. With an accurate proper motion,
we would be able to understand the dynamical evolution of the cluster and determine its origin, e.g., whether
it is a genuine metal-poor and old cluster in the bulge region or a transition cluster between the inner halo and
the outer bulge.
NGC 6723, which has a relatively low density concentration~\citep[$c=1.05$,][]{Har96} and a moderate EHB morphology~\citep{Lee07},
is an old globular cluster in the Galactic bulge region.
Although it has long been considered one of the genuine Galactic bulge globular clusters, the origin of the cluster
is not obvious because there have been only a few studies of this cluster.
~\citet{Van1993} suggested that NGC 6723 has a circular orbit in the central region of the Galaxy. However,
~\citet{Din2003} measured the proper motion of the cluster and found that its orbit is highly inclined.
Based on its kinematics and low metallicity, they concluded
that NGC 6723 is a member of the halo system.
Recently,~\citet{Lee07} suggested a different origin scenario, in which metal-poor EHB globular clusters such as NGC 6723
would be relics of the first building blocks.
The total destruction rate of $\nu_{tot}=0.321$ per Hubble time and
the rather high destruction ratio of $\nu_{tot}/\nu_{evap}\sim2.0$ of~\citet{Gnedin1997} indicate that there should be weak tidal
substructures around this cluster.
Indeed, in our study we found tidal stripping features that are likely associated with the
interaction with the Galaxy.
Weak stellar density lobes, which extend toward the directions of the Galactic center and anti-center,
were detected in the surface isodensity map.
In addition, a marginal extension tracing the direction opposite to the proper motion was also detected.
However, we cannot exclude a dust extinction effect on the stellar density distribution map;
the thin dust lane of the reflection nebula in the southern region of the cluster might hide further stellar extensions.
\section{SUMMARY}
In this study, we investigated the stellar spatial density distribution around four metal-poor globular clusters
(NGC 6266, NGC 6626, NGC 6642, and NGC 6723) in the Galactic
bulge region using wide-field ($45'\times45'$) near-infrared
$J, H,$ and $K$ images obtained with the WFCAM camera on the UKIRT. In order to remove the field star contamination and
enhance the density contrast between cluster members and field stars, we applied the C-M mask filtering technique and
the optimal contrast filtering technique to the cluster CMDs. Two-dimensional density contour maps of the clusters were
examined, and the radial density profiles were investigated with the King and Wilson models.
The two-dimensional stellar density contour maps of the four globular clusters showed asymmetric and extended features of the
stars around the clusters. In particular, three globular clusters (NGC 6266,
NGC 6626, and NGC 6642) showed tidal extensions and stellar chunks beyond the tidal radius, while NGC 6723 showed weak density lobes
near the tidal radius. These extended stellar substructures are aligned with the direction of the Galactic center and anti-center, the
direction perpendicular to the Galactic plane, or the direction of the cluster's orbital motion.
Thus, it is highly probable that the target clusters are affected by dynamical environmental effects such as the tidal force and bulge/disk shocks.
The radial profiles also represent the extended substructures seen in the two-dimensional contour maps as overdensity features
with a break in the slope.
Although the observed radial profiles of the clusters depart from the King and Wilson models in the outer regions,
the Wilson model appears to fit the observed radial density profiles better for the clusters that seem to experience weaker
dynamical environmental effects (e.g., NGC 6266 and NGC 6723).
We note here that internal drivers, such as two-body relaxation, also affect the extended stellar substructures around the globular clusters.
The stellar extensions around NGC 6266 are a good example of such internal drivers:
in this cluster, the stars might first be evaporated by two-body relaxation and then be affected by the environmental tidal force
of the Galaxy. For the other clusters, it is obvious that two-body relaxation also contributes to the extended stellar substructures around the clusters.
In contrast to the case of NGC 6266, however, environmental effects such as the Galactic tidal force and bulge/disk
shocks would be the main factors behind the extended stellar substructures, even for a cluster such as NGC 6642 with very active two-body relaxation.
In addition, a more accurate proper motion and an orbit calculated using an accurate Galactic potential model
could increase the estimated contribution of external effects to the dynamical evolution of the globular clusters in the bulge region.
Indeed, although the tidal-shock rate of NGC 6626 was not comparable to or larger
than its two-body relaxation rate in previous studies~\citep{Gnedin1997,Din99},
the orbit of NGC 6626, derived using a more accurate proper motion
and an axisymmetric and barred model of the Galaxy~\citep{Casetti2013}, was found to be eccentric and more disruptive.
Therefore, further studies of the dynamical evolution of the globular clusters in the bulge region with accurate proper motions and
a Galaxy model are required to provide theoretical constraints on the configuration of stars around the globular clusters.
In addition, deeper and wider-field photometric data for metal-poor and metal-rich globular clusters in the Galactic
bulge could provide more accurate information about the dynamical evolution of globular clusters, thereby leading to an increased
understanding of the origin of globular clusters in the bulge region, as well as the formation of the bulge itself.
\acknowledgments
We are grateful to an anonymous referee for detailed comments that greatly improved this paper.
This research was supported by the Basic Science Research Program through
the National Research Foundation of Korea (NRF) funded by the Ministry of
Education, Science and Technology (2013R1A1A2006826).
This work was also partially supported by the KASI-Yonsei DRC program of the Korea Research Council of
Fundamental Science and Technology (DRC-12-2-KASI).
\section{Introduction}
Human action recognition has many applications in smart surveillance, human-computer interaction and sports. The Kinect and other depth cameras have become popular for this task because depth sequences do not suffer from the problems induced by variations in illumination and clothing texture. However, the presence of occlusion, sensor noise and most importantly viewpoint variations still make action recognition a challenging task.
Designing an efficient depth sequence representation is an important task in many computer vision problems. Most existing action recognition techniques (e.g.,~\cite{CCD,MyWACV14,DMM}) treat depth sequences the same way as color videos and use color-based action recognition methods. However, while these methods are suitable for color video sequences, simply extending them to depth sequences may not be optimal~\cite{HON4D}. Information captured by depth cameras actually allows geometric features to be extracted to form rich descriptors. For instance, Tang et al.~\cite{HONV} used histograms of the normal vectors for object recognition in depth images. Given a depth image, they computed spatial derivatives, transformed them to the polar coordinates and used the 2D histograms as object descriptors. Recently, Oreifej and Liu~\cite{HON4D} extended the same technique to the temporal dimension by adding time derivative. A downside of treating depth sequences this way is that the noise in the depth images is enhanced by the differential operations~\cite{Wang2012}. Histogramming, on the other hand, is analogous to integration and is more resilient to the effect of noise. Furthermore, viewpoint variations are unavoidable in real scenarios. However, none of the existing 3D sensor based techniques is designed for cross-view action recognition where training is performed on sequences acquired from one view and testing is performed on sequences acquired from a significantly different view ($>25^\circ$).
We directly process the 3D pointcloud sequences (Fig.~\ref{fig:PCloudSeq}) and extract point descriptors which are robust to noise and viewpoint variations. We propose a novel descriptor, the {\it Histogram of Oriented Principal Components} (HOPC), to capture the local geometric characteristics around each point within a sequence of 3D pointclouds. To extract HOPC at a point ${\bf p}$, PCA is performed on an adaptive spatio-temporal support volume around ${\bf p}$ (see Fig.~\ref{fig:KeyPointAlg}) which gives us a $3 \times 3$ matrix of eigenvectors and the corresponding eigenvalues. Each eigenvector is projected onto $m$ directions corresponding to the vertices of a {\it regular m-sided polyhedron} and scaled by its eigenvalue. HOPC is formed by concatenating the projected eigenvectors in decreasing order of their eigenvalues.
\begin{figure}[t]
\begin{center}
\includegraphics[width=11 cm]{Fig1a}
\end{center}
\vspace{-6mm}
\caption{Two sequences of 3D pointclouds of a subject performing the {\it holding head} action. Notice how the depth values (colours) have significantly changed with the change in viewpoint. Simple normalization cannot compensate for such depth variations. Existing depth based action recognition algorithms will not be accurate in such cases}
\label{fig:PCloudSeq}
\end{figure}
HOPC is used in a holistic and local setting. In the former approach, the sequence of 3D pointclouds is divided into spatio-temporal cells and HOPC descriptors of all points within a cell are accumulated and normalized to form a single cell descriptor. All cell descriptors are concatenated to form a holistic HOPC descriptor. In the latter approach, local HOPC are extracted at candidate spatio-temporal keypoints (STKP) and a HOPC quality factor is defined to rank the STKPs. Only high quality STKPs are retained. All points within the adaptive spatio-temporal support volume of each STKP are aligned along the eigenvectors of the spatial support around STKP. Thus the support volume is aligned with a local object centered coordinate basis and extracting HOPC, or any other feature, at the STKP will be view invariant. See Section \ref{Local} for details. Since humans may perform the same action at different speeds, to achieve speed invariance, we propose automatic temporal scale selection by minimizing the eigenratios over a varying temporal window size. The main contributions of this paper include:
\begin{itemize}
\item A HOPC descriptor for 3D pointclouds.
\item A spatio-temporal key-point (STKP) detector and a view invariant descriptor.
\item A technique for speed normalization of actions.
\end{itemize}
Moreover, we introduce a new 3D action dataset which has scale variations of subjects and viewpoint variations. It contains thirty actions, which is a larger number than in any existing 3D action dataset. This dataset will be made public.
Experimental comparison on four datasets, including three benchmark ones \cite{Bag3DPoints,HON4D,ActionLet2012}, with eight state-of-the-art methods \cite{CCD,3Dgrad,HON4D,MyWACV14,Wang2012,ActionLet2012,DSTIP,ViewInvariantJoint3D} shows the efficacy of our algorithms. Data and code of our technique are available \cite{HOPC_code}.
\section{Related Work}
Based on the input data, human action recognition methods can be divided into three categories: RGB based, skeleton based, and depth based methods. For RGB videos, view-invariant representations are mostly proposed in order to recognize actions across viewpoint changes, such as view invariant spatio-temporal features \cite{3,17,21,22,23,28}. Some methods infer the 3D scene structure and use geometric transformations to achieve view invariance \cite{4,10,15,26,29}. Another approach is to find a view independent latent space \cite{8,7,virtualviews,14} in which features extracted from actions captured at different viewpoints are directly comparable. Our proposed approach also falls in this category; however, our approach is designed specifically for 3D pointclouds captured by depth sensors. To the best of our knowledge, we are the first to propose cross-view action recognition using 3D pointclouds. We propose to normalize the spatio-temporal support volume of each candidate keypoint in the 3D pointcloud such that the feature extracted from the normalized support volume becomes view independent.
In skeleton based methods, 3D joint positions are used for action recognition. Multi-camera motion capture
(MoCap) systems \cite{mocap} have been used for human action
recognition, but such special equipment is marker-based
and expensive. Moreover, due to the different quality of
the motion data, action recognition methods designed
for MoCap are not suitable for 3D pointcloud sequences, which are the focus of this paper \cite{ActionLet2012}.
On the other hand, some methods~\cite{ViewInvariantJoint3D,Wang2012,eigenjoints} use the human joint positions extracted by the OpenNI tracking framework~\cite{SingleDepth} as interest points. For example, Yang and Tian~\cite{eigenjoints} proposed pairwise 3D joint position differences in each frame and temporal differences across frames to represent an action. Since 3D joints cannot capture all the discriminative information, the action recognition accuracy is compromised. Wang et al.\ \cite{ActionLet2012} extended this approach by computing the histogram of the occupancy pattern of a fixed region around each joint in a frame. In the temporal dimension, they used low frequency Fourier components as features and an SVM to find a discriminative set of joints. It is important to note that the estimated joint positions are not reliable and can fail when the human subject is not in an upright, frontal position (e.g., lying on a sofa) or when there is clutter around the subject.
Action recognition methods based on depth maps can be divided into holistic \cite{HON4D,MyWACV14,Bag3DPoints,DMM,STOP} and local approaches~\cite{ActionLet2012,DSTIP,STIP,Wang2012}. Holistic methods use global features such as silhouettes and space-time volume information. For example, Li et al.\ \cite{Bag3DPoints} sampled boundary pixels from 2D silhouettes as a bag of features. Yang et al.\ \cite{DMM} added temporal
derivatives of 2D projections to get Depth Motion Maps (DMM). Vieira et al.\ \cite{STOP} computed silhouettes in 3D by using space-time occupancy patterns. Recently, Oreifej and Liu~\cite{HON4D} extended the histogram of oriented 3D normals \cite{HONV} to 4D by adding the time derivative. The gradient vector was normalized to unit magnitude and projected onto a refined basis of a 600-cell polychoron to build histograms. The last component of the normalized gradient vector was the inverse of the gradient magnitude. As a result, information from locations with very strong derivatives, such as edges and silhouettes, may get suppressed~\cite{MyWACV14}. The proposed HOPC descriptor is more informative than HON4D as it captures the spread of data in the three principal directions. Thus, HOPC achieves higher action recognition accuracy than existing methods on three benchmark datasets.
Depth based local methods use local features, where a set of interest points is extracted from the depth sequence and a feature descriptor is computed for each interest point. For example, Cheng et al.~\cite{CCD} used the interest point detector proposed by Doll\'{a}r et al.~\cite{STIP} and proposed a Comparative Coding Descriptor (CCD). Due to the presence of noise in depth sequences, simply extending color-based interest point detectors such as~\cite{Dollar} and~\cite{STIP} may degrade the efficiency of these detectors~\cite{HON4D}.
Motion trajectory based action recognition methods~\cite{DensTraj,Traj} are also unreliable for depth sequences~\cite{HON4D}. Therefore, recent depth based action recognition methods have resorted to alternative ways of extracting more reliable interest points. Wang et al.\ \cite{Wang2012} proposed Haar features extracted from each random subvolume. Xia and Aggarwal~\cite{DSTIP} proposed a filtering method to extract spatio-temporal interest points. Their approach fails when the action execution speed is faster than the flip of the signal caused by the sensor noise. Both techniques are sensitive to viewpoint variations.
In contrast to previous interest point detection methods, the proposed STKP detector is robust to variations in action execution speed, sensor viewpoint and the spatial scale of the actor. Since the proposed HOPC descriptor is not strictly based on the depth derivatives, it is more robust to noise. Moreover, our methods do not require skeleton data which may be noisy or unavailable especially in the case of side views.
\section{Histogram of Oriented Principal Component (HOPC)}
\label{HOPCDescriptor}
Let $Q=\{Q_1, Q_2,\cdots,Q_t,\cdots,Q_{n_f}\}$ represent a sequence of 3D pointclouds captured by a 3D sensor, where $n_f$ denotes the number of frames (i.e. number of 3D pointclouds in the sequence) and $Q_t$ is the 3D pointcloud at time $t$. We make a spatio-temporal accumulated 3D pointcloud by merging the sequence of individual pointclouds in the time interval $[t-\tau,t+\tau]$. Consider a point ${\bf p} = (x_t \; y_t \; z_t)^{\top}, 1 \leq t \leq n_f$ in $Q_t$. We define the spatio-temporal support of ${\bf p}$, $\Omega({\bf p})$, as the 3D points which are in a sphere of radius $r$ centered at ${\bf p}$ (Fig. \ref{fig:KeyPointAlg}). We propose a point descriptor based on the eigenvalue decomposition of the scatter matrix $C$ of the points ${{\bf q} \in \Omega({{\bf p}})}$:
\begin{equation} C = \frac{1}{n_p} \sum_{{\bf q} \in \Omega({{\bf p}})}
{({\bf q} - {\mu}){({\bf q} - {\mu})}^{\top}}, \text{where}\; \mu = \frac{1}{n_p} \sum_
{{\bf q} \in \Omega({{\bf p}})}{\bf q},\label{eq:ScatterMatrix} \end{equation}
and $n_p=|\Omega({{\bf p}})|$ denotes the number of points in the spatio-temporal support of ${\bf p}$. Performing PCA on the scatter matrix $C$ gives us $CV = EV$, where $E$ is a diagonal matrix of the eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$, and $V$ contains three orthogonal eigenvectors $[{\bf v}_1 \; {\bf v}_2 \; {\bf v}_3]$ arranged in the order of decreasing magnitude of their associated eigenvalues.
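For concreteness, this computation can be written in a few lines of NumPy. The following is a minimal sketch under our own conventions (the function name and the brute-force neighbour search are illustrative choices, not part of a released implementation; a practical system would use a k-d tree for the support query):
\begin{verbatim}
import numpy as np

def principal_components(cloud, p, r):
    # cloud: (N, 3) accumulated spatio-temporal pointcloud
    # p:     (3,) point of interest; r: support radius
    omega = cloud[np.linalg.norm(cloud - p, axis=1) <= r]  # support of p
    mu = omega.mean(axis=0)
    d = omega - mu
    C = d.T @ d / len(omega)        # 3x3 scatter matrix
    lam, V = np.linalg.eigh(C)      # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1]   # lambda1 >= lambda2 >= lambda3
    return lam[order], V[:, order], omega
\end{verbatim}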
We propose a new descriptor, the Histogram of Oriented Principal Components (HOPC), by projecting each eigenvector onto $m$ directions obtained from a {\it regular m-sided polyhedron}. We use $m=20$ to make a {\it regular icosahedron}, which is composed of $20$ {\it regular triangular} facets, and each facet corresponds to a histogram bin. Let $U \in \mathds{R}^{3 \!\times\! m}$ be the matrix of the center positions ${\bf u}_1, {\bf u}_2, \cdots , {\bf u}_{m}$ of the facets:
\begin{equation}U=[{\bf u}_1,{\bf u}_2,\cdots,{\bf u}_i,\cdots,{\bf u}_{m}]\end{equation} For a {\it regular icosahedron} with center at the origin, these normalized vectors are
\begin{equation}
\left(\frac{\pm1}{L_u},\frac{\pm1}{L_u},\frac{\pm1}{L_u}\right),\left(0,\frac{\pm\varphi^{-1}}{L_u},\frac{\pm\varphi}{L_u}\right),\left(\frac{\pm\varphi^{-1}}{L_u},\frac{\pm\varphi}{L_u},0\right),\left(\frac{\pm\varphi}{L_u},0,\frac{\pm\varphi^{-1}}{L_u}\right),
\end{equation}
where $\varphi={(1+\sqrt{5})}/{2}$ is the golden ratio, and $L_u=\sqrt{\varphi^2+1/\varphi^2}$ is the length of each vector ${\bf u}_i, 1 \leq i \leq m$. The eigenvectors are directions of maximum variance of the points in 3D space and therefore have a $180^\circ$ sign ambiguity. To overcome this problem, we consider the distribution of vector directions and their magnitudes within the support volume of ${\bf p}$. We determine the sign of each eigenvector ${\bf v}_j$ from the signs of the inner products of ${\bf v}_j$ with all vectors within the support of ${\bf p}$:
\begin{equation}
{\bf v}_j={\bf v}_j \, \mathrm{sign}\!\left(\sum_{{\bf q} \in \Omega({\bf p})}\mathrm{sign}({\bf o}^{\top}{\bf v}_j)\,({\bf o}^{\top}{\bf v}_j)^2 \right),
\end{equation}
where ${\bf o}={\bf q}-{\bf p}$ and the $\mathrm{sign}$ function returns the sign of its argument. Note that the squared projection ensures the suppression of small projections, which could be due to noise. If the signs of the eigenvectors ${\bf v}_1, {\bf v}_2,$ and ${\bf v}_3$ disagree, i.e., ${\bf v}_1 \times {\bf v}_2 \neq {\bf v}_3$, we switch the sign of the eigenvector whose $|\sum_{w=1}^{n_p}\mathrm{sign}({\bf o}_w^{\top}{\bf v}_j)\,({\bf o}_w^{\top}{\bf v}_j)^2|$ value is the smallest. We then project each eigenvector ${\bf v}_j$ onto $U$:
\begin{equation}{\bf b}_j=U^{\top} {\bf v}_j \in \mathds{R}^{m}, \textbf{ for } 1 \le j \le 3. \end{equation} If ${\bf v}_j$ is perfectly aligned with some ${\bf u}_i \in U$, it should vote only into the $i^{\text{th}}$ bin. However, the ${\bf u}_i$ are not mutually orthogonal, so ${\bf b}_j$ has non-zero projections in other bins as well. To overcome this effect, we quantize ${\bf b}_j$. For this purpose, a threshold value $\psi$ is computed as the projection of any two {\it neighbouring} vectors ${\bf u}_k$ and ${\bf u}_l$,
\begin{equation}\psi={{\bf u}_k}^{\top} {\bf u}_l=\frac{\varphi+\varphi^{-1}}{{L_u}^2},\;\;{\bf u}_k,{\bf u}_l \in U.\end{equation} Note that for any ${\bf u}_k \in U$, we can find a ${\bf u}_l \in U$ such that $\psi=({\varphi+\varphi^{-1}})/{L_u}^2$. The quantized vector is given by
\[ \hat{{\bf b}}_j(z) = \left\{
\begin{array}{l l}
0 & \quad \text{if ${\bf b}_j(z) \le \psi$}\\
{\bf b}_j(z)-\psi & \quad \text{otherwise},
\end{array} \right.\]
where $1 \leq z \leq m$. We define ${\bf h}_j$ to be $\hat{{\bf b}}_j$ scaled by the corresponding eigenvalue $\lambda_j$,
\begin{equation} {\bf h}_j=\frac {\lambda_j \cdot \hat{{\bf b}}_j}
{||\hat{{\bf b}}_j||_2} \in \mathds{R}^{m}, \textbf{ for } 1 \le j \le 3.
\end{equation} We concatenate the histograms of oriented principal components of all three eigenvectors in decreasing order of their eigenvalues to form a descriptor of point ${\bf p}$:
\begin{equation} \label{HOPC} {\bf h}_{\bf p}=[{\bf h}_1^{\top}\; {\bf h}_2^{\top}\; {\bf h}_3^{\top}]^{\top} \in \mathds{R}^{3m}. \end{equation}
The spatio-temporal HOPC descriptor at point ${\bf p}$ encodes information from both shape and motion in the support volume around it. Since the smallest principal component of a local surface is in fact the total least-squares estimate of the surface normal~\cite{surfaceNormal}, our descriptor, which inherently encodes the surface normal, is more robust to noise than the gradient-based surface normals used in~\cite{HONV,HON4D}. Using this descriptor, we propose two different action recognition algorithms in the following section.
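The remaining steps of the descriptor (facet directions, sign disambiguation, quantization, eigenvalue scaling, and concatenation) can be sketched as follows, reusing the \texttt{principal\_components} helper above. This is an illustrative sketch: the handedness check on ${\bf v}_1 \times {\bf v}_2$ versus ${\bf v}_3$ is omitted for brevity.
\begin{verbatim}
import numpy as np

PHI = (1 + 5 ** 0.5) / 2                       # golden ratio

def icosahedron_directions():
    # 20 facet centres; |u| = sqrt(phi^2 + 1/phi^2) = sqrt(3)
    u = [(sx, sy, sz) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
    for s1 in (1, -1):
        for s2 in (1, -1):
            u += [(0, s1 / PHI, s2 * PHI),
                  (s1 / PHI, s2 * PHI, 0),
                  (s2 * PHI, 0, s1 / PHI)]
    return np.array(u) / np.sqrt(3.0)          # (20, 3) unit vectors

def hopc(lam, V, omega, p):
    U = icosahedron_directions()
    psi = (PHI + 1 / PHI) / 3.0                # neighbouring-bin threshold
    O = omega - p                              # vectors o = q - p
    parts = []
    for j in range(3):
        v = V[:, j]
        proj = O @ v
        if np.sum(np.sign(proj) * proj ** 2) < 0:   # sign disambiguation
            v = -v
        b = U @ v
        b_hat = np.where(b > psi, b - psi, 0.0)     # quantized projection
        n = np.linalg.norm(b_hat)
        parts.append(lam[j] * b_hat / n if n > 0 else b_hat)
    return np.concatenate(parts)               # 3m = 60 dimensional h_p
\end{verbatim}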
\section{Action Recognition}
We propose a holistic and a local approach for human action recognition. Our holistic method is suitable for actions with occlusions and high inter-class similarity of local motions, and for cases where the subjects do not change their spatial locations. Our local method, on the other hand, is more suitable for cross-view action recognition and for cases where the subjects change their spatial locations.
\subsection{Action Recognition with Holistic HOPC}
\label{Holistic}
A sequence of 3D pointclouds is divided into $\gamma= n_x \times n_y \times n_t$ spatio-temporal cells along the $X$, $Y$, and $T$ dimensions. We use $c_s$, where $s=1,\cdots,\gamma$, to denote the $s^{\text{th}}$ cell. The spatio-temporal HOPC descriptor ${\bf h}_{\bf p}$ in \eqref{HOPC} is computed for each point ${\bf p}$ within the sequence. The cell descriptor ${\bf h}_{c_s}$ is computed by accumulating ${\bf h}_{c_s}=\sum_{{\bf p} \in c_s}{{\bf h}_{\bf p}}$ and then normalizing ${\bf h}_{c_s}\leftarrow{{\bf h}_{c_s}}/{||{\bf h}_{c_s}||_2}$. The final descriptor ${\bf h}_{v}$ for the given sequence is a concatenation of the ${\bf h}_{c_s}$ obtained from all cells: ${\bf h}_{v}={[{\bf h}_{c_1}^{\top}\; {\bf h}_{c_2}^{\top}\; ...\; {\bf h}_{c_s}^{\top}\; ...\; {\bf h}_{c_{\gamma}}^{\top}]}^{\top}$. We use ${\bf h}_{v}$ as the holistic HOPC descriptor and an SVM for classification.
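As an illustration, the cell accumulation and normalization can be sketched as follows (the \texttt{cell\_ids} indexing helper, which maps each point to its cell, is our own convention):
\begin{verbatim}
import numpy as np

def holistic_hopc(point_descriptors, cell_ids, gamma):
    # point_descriptors: (N, 3m) per-point HOPC descriptors h_p
    # cell_ids:          (N,) cell index s of each point
    h = np.zeros((gamma, point_descriptors.shape[1]))
    np.add.at(h, cell_ids, point_descriptors)       # accumulate per cell
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = np.divide(h, norms, out=h, where=norms > 0) # l2-normalize cells
    return h.ravel()                                # sequence descriptor h_v
\end{verbatim}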
\subsubsection{Computing a Discriminative Cell Descriptor}
The HOPC descriptor depends strongly on the ordering of the eigenvalues of the spatio-temporal support volume around ${\bf p}$. Therefore, a pruning approach is introduced to eliminate ambiguous eigenvectors at each point. For this purpose, we define two eigenratios:
\begin{equation}\delta_{12}=\frac{\lambda_1}{\lambda_2},
\delta_{23}=\frac{\lambda_2}{\lambda_3}.\end{equation}
For 3D symmetrical surfaces, $\delta_{12}$ or $\delta_{23}$ will be equal to $1$, and the principal components of such surfaces are ambiguous. To obtain a discriminative ${\bf h}_{\bf p}$, the values of $\delta_{12}$ and $\delta_{23}$ must be greater than 1. However, to manage noise we choose a threshold value $\theta > 1+\epsilon$, where $\epsilon$ is a margin, and select only the discriminative eigenvectors as follows (a code sketch of this pruning is given after the list):
\begin{enumerate}
\item If $\delta_{12} > \theta$ and $\delta_{23} > \theta$: ${\bf
h}_{\bf p}=[{\bf h}_1^{\top}\;\;{\bf h}_2^{\top}\;\; {\bf
h}_3^{\top}]^\top$.\vspace{+1mm}
\item If $\delta_{12} \le \theta$ and $\delta_{23} > \theta$:
${\bf h}_{\bf p}=[{\bf 0}^{\top}\;\;{\bf 0}^{\top}\;\; {\bf
h}_3^{\top}]^\top$.\vspace{+1mm}
\item If $\delta_{12} > \theta$ and $\delta_{23} \le \theta$:
${\bf h}_{\bf p}=[{\bf h}_1^{\top}\;\;{\bf 0}^{\top}\;\; {\bf
0}^{\top}]^\top$.\vspace{+1mm}
\item If $\delta_{12} \le \theta$ and $\delta_{23} \le \theta$: In this case,
we discard ${\bf p}$.
\end{enumerate}
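This pruning translates directly into code; the following sketch assumes the three $m$-bin histograms and the eigenvalues in descending order:
\begin{verbatim}
import numpy as np

def prune(h1, h2, h3, lam, theta=1.12):
    d12, d23 = lam[0] / lam[1], lam[1] / lam[2]
    zero = np.zeros_like(h1)
    if d12 > theta and d23 > theta:
        return np.concatenate([h1, h2, h3])     # all components kept
    if d12 <= theta and d23 > theta:
        return np.concatenate([zero, zero, h3]) # only h_3 is reliable
    if d12 > theta and d23 <= theta:
        return np.concatenate([h1, zero, zero]) # only h_1 is reliable
    return None                                 # ambiguous point: discard
\end{verbatim}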
\subsection{STKP: Spatio-Temporal Key-Point Detection}
\label{Local}
Consider a point ${\bf p}=(x_t\; y_t\; z_t)^{\top}$ within a sequence of 3D pointclouds. In addition to the spatio-temporal support volume around ${\bf p}$ defined in section~\ref{HOPCDescriptor}, we further define a spatial-only support volume around ${\bf p}$ as the 3D points of $Q_t$ that fall inside a sphere of radius $r$ centered at ${\bf p}$. Thus, we perform PCA on both the spatial and the spatio-temporal scatter matrices, $C'$ and $C$.
Let $\lambda_1' \geq \lambda_2' \geq \lambda_3'$ and $\lambda_1 \geq \lambda_2 \geq \lambda_3$ represent the eigenvalues of the spatial $C'$ and spatio-temporal $C$ scatter matrix, respectively. We define the following ratios:
\begin{equation}
\delta_{12}'=\frac{\lambda_1'}{\lambda_2'},\;
\delta_{23}'=\frac{\lambda_2'}{\lambda_3'},\;
\delta_{12}=\frac{\lambda_1}{\lambda_2},\;
\delta_{23}=\frac{\lambda_2}{\lambda_3}.\;
\end{equation}
For a point to be identified as a potential keypoint, the condition $\{\delta_{12},\delta_{23},\delta_{12}',\delta_{23}'\} > \theta$ must be satisfied. This process prunes ambiguous points and produces a subset of candidate keypoints. It reduces the computational burden of the subsequent steps. Let ${\bf h}'_{\bf p} \in \mathds{R}^{3m}$ represent the spatial HOPC and ${\bf h}_{\bf p} \in \mathds{R}^{3m}$ represent the spatio-temporal HOPC. A {\it quality} factor is computed at each candidate keypoint ${\bf p}$ as follows:
\begin{equation}
\eta_p=\frac{1}{2}\sum_{i=1}^{3m}{\frac{({\bf h}'_{\bf p}(i)-{\bf h}_{\bf p}(i))^2}{({\bf h}'_{\bf p}(i)+{\bf h}_{\bf p}(i))}}.
\end{equation}
\noindent When ${\bf h}_{\bf p}'={\bf h}_{\bf p}$, the {\it quality} factor has the minimum value of $\eta_p=0$. It means that the candidate keypoint ${\bf p}$ has a stationary spatio-temporal support volume with no motion.
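A sketch of the {\it quality} factor; the small constant \texttt{eps} is our own addition to guard against bins that are empty in both histograms:
\begin{verbatim}
import numpy as np

def quality(h_spatial, h_st, eps=1e-12):
    # Chi-squared-like distance between spatial and
    # spatio-temporal HOPC; 0 for a motion-free support volume.
    return 0.5 * np.sum((h_spatial - h_st) ** 2
                        / (h_spatial + h_st + eps))
\end{verbatim}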
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5 cm]{KeypointDetectionAlg}
\end{center}
\vspace{-6mm}
\caption{STKP: Spatio-Temporal Key-Point detection algorithm}
\label{fig:KeyPointAlg}
\end{figure}
We define a locality as a sphere of radius $r'$ (with $r'\ll r$) and a time interval $2\tau'+1$ (with $\tau' \le \tau$). We sort the candidate STKPs according to their quality values; starting from the highest-quality keypoint, all STKPs within its locality are removed, and the same process is repeated on the remaining STKPs. Fig.~\ref{fig:KeyPointAlg} shows the steps of our STKP detection algorithm, and a code sketch is given below. Fig.~\ref{fig:Keypoints}-a shows the extracted STKPs from three different views for a sequence of 3D pointclouds corresponding to the {\it holding head} action.
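The suppression step amounts to a greedy non-maximum suppression over the candidate list (the tuple layout below is our own convention):
\begin{verbatim}
import numpy as np

def select_keypoints(candidates, r_loc, tau_loc):
    # candidates: list of (eta, xyz, t), xyz a (3,) array
    pool = sorted(candidates, key=lambda c: -c[0])  # best quality first
    kept = []
    while pool:
        best = pool.pop(0)
        kept.append(best)
        pool = [c for c in pool                      # drop the locality
                if np.linalg.norm(c[1] - best[1]) > r_loc
                or abs(c[2] - best[2]) > tau_loc]
    return kept
\end{verbatim}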
\subsection{View-Invariant Key-Point Descriptor}
Let ${\bf p}=(x_t\;y_t\;z_t)^{\top}$ represent an STKP. All points within the spatio-temporal support volume of ${\bf p}$ i.e., $\Omega({\bf p})$, are aligned along the eigenvectors of its spatial scatter matrix, $B=PV'$, where $P \in \mathds{R}^{n_p \!\times\! 3}$ is a matrix of points within $\Omega({\bf p})$ and $V'=[{\bf v}_1' \;{\bf v}_2' \;{\bf v}_3']$ denotes the $3 \times 3$ matrix of eigenvectors of the spatial scatter matrix $C'$. Recall that the signs of these eigenvectors have a $180^\circ$ ambiguity. As mentioned earlier, we use the sign disambiguation method to overcome this problem. As a result, any feature (e.g. raw depth values or HOPC) extracted from the aligned spatio-temporal support volume around ${\bf p}$ will be view invariant.
In order to describe the points within the spatio-temporal support volume of keypoint ${\bf p}$, this support is represented as a 3D hyper-surface in the 4D space $(X,Y,Z,T)$. We fit a 3D hyper-surface to the aligned points within the spatio-temporal support volume of ${\bf p}$. A uniform $m_x \times m_y \times m_t$ grid is used to sample the hyper-surface, and its raw values are used as the descriptor of keypoint ${\bf p}$.
We use the bag-of-words approach to represent each 3D pointcloud sequence and build a codebook by clustering the keypoint descriptors using K-means. Codewords are defined by the cluster centers and descriptors are assigned to codewords using Euclidean distance. For classification, we use SVM with the histogram intersection kernel~\cite{HIKSVM}.
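A minimal sketch of this classification stage, assuming scikit-learn and the $k=1000$ codewords used in section~\ref{Experiments}:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descriptors, kmeans):
    words = kmeans.predict(descriptors)        # nearest codeword
    h = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return h / max(h.sum(), 1.0)

def intersection_kernel(A, B):
    # K(a, b) = sum_i min(a_i, b_i)
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

# kmeans = KMeans(n_clusters=1000).fit(train_keypoint_descriptors)
# X = np.vstack([bow_histogram(d, kmeans) for d in train_sequences])
# svm = SVC(kernel='precomputed').fit(intersection_kernel(X, X), labels)
\end{verbatim}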
\begin{figure}[t]
\begin{center}
\includegraphics[width=11 cm]{Keypoints}
\end{center}
\vspace{-6mm}
\caption{(a)-STKPs projected onto $XY$ dimensions on top of all points within a sequence of 3D pointclouds corresponding to the {\it holding head} action (from three different views). Note that a large number of STKPs are detected only where movement is performed. (b)-Sample pointclouds at different views from the UWA3D Multiview Activity dataset}
\label{fig:Keypoints}
\end{figure}
\section{Adaptive Support Volume}
So far we have used a fixed spatial ($r$) and temporal ($\tau$) support volume to detect and describe each keypoint ${\bf p}$. However, subjects can have different scales (in height and width) and perform actions with different speeds. Therefore, simply using a fixed spatial ($r$) and temporal ($\tau$) support volume is not optimal. Large values of $r$ and $\tau$ enable the proposed descriptors to encapsulate more information about shape and motion of a subject. However, this also increases sensitivity to occlusion and action speed.
A simple approach to finding the optimal spatial scale ($r$) for an STKP is based on the subject's height ($h_s$), e.g., $r=e \times h_s$, where $e$ is a constant that is chosen empirically to trade off descriptiveness against occlusion sensitivity. This approach is unreliable and may fail when a subject touches the background or is not in an upright position. Several automatic spatial scale detection methods~\cite{3DkeypointSurvey} have been proposed for 3D object recognition. In this paper, we use the automatic spatial scale detection method proposed by Mian et al.~\cite{Mian} to determine the optimal spatial scale for each keypoint. The optimal spatial scale ($r_b$) is selected as the one for which the ratio between the first two eigenvalues of the spatial support of a keypoint reaches a local maximum. Our results show that automatic spatial scale selection~\cite{Mian} achieves the same accuracy as the fixed scale when the height ($h_s$) of each subject is available.
For temporal scale selection, most previous works~\cite{HON4D,DSTIP,MyWACV14,STOP,Dollar} used a fixed number of frames. Instead, we propose automatic temporal scale selection to make our descriptor robust to action speed variations. Our method follows the automatic spatial scale detection method by Mian et al.~\cite{Mian}. Let $Q=\{Q_1, Q_2,\cdots,Q_t,\cdots,Q_{n_f}\}$ represent a sequence of 3D pointclouds. For a point ${\bf p}=[x_t\;y_t\;z_t]^{\top}$, we start with the points in $[Q_{t-\tau},\cdots,Q_{t+\tau}]$ for $\tau=1$ that are within its spatial scale $r$ (assumed to be the optimal spatial scale for ${\bf p}$) and calculate the sum of the ratios between the first two eigenvalues (${\lambda_2}/{\lambda_1}$) and the last two eigenvalues (${\lambda_3}/{\lambda_2}$):
\begin{equation}
A_{\bf p}^{\tau}=\frac{\lambda_2}{\lambda_1}+\frac{\lambda_3}{\lambda_2},
\end{equation}
where $\lambda_1 \geq \lambda_2 \geq \lambda_3$. This process continues for $\tau=1,\cdots,\Delta$, and the optimal temporal scale of point ${\bf p}$ is the $\tau$ corresponding to a local minimum of $A_{\bf p}^{\tau}$. A point that does not exhibit such a local minimum is not considered a candidate keypoint.
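A sketch of this search, reusing the \texttt{principal\_components} helper of section~\ref{HOPCDescriptor}; the first local minimum is taken as the optimal $\tau$:
\begin{verbatim}
import numpy as np

def adaptive_tau(frames, p, t, r, delta):
    # frames: list of (N_t, 3) pointclouds, one per frame
    A = []
    for tau in range(1, delta + 1):
        cloud = np.vstack(frames[max(t - tau, 0):t + tau + 1])
        lam, _, _ = principal_components(cloud, p, r)
        A.append(lam[1] / lam[0] + lam[2] / lam[1])   # A_p for this tau
    for i in range(1, len(A) - 1):
        if A[i] < A[i - 1] and A[i] < A[i + 1]:
            return i + 1                # tau of the first local minimum
    return None                         # no minimum: not a keypoint
\end{verbatim}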
\section{Experiments}
\label{Experiments}
The proposed algorithms were evaluated on three benchmark datasets including MSRAction3D~\cite{Bag3DPoints}, MSRGesture3D~\cite{Wang2012}, and ActionPairs3D~\cite{HON4D}. We also developed a new ``UWA3D Multiview Activity'' dataset to evaluate the proposed cross-view action recognition algorithm. This dataset consists of 30 daily activities of ten subjects performed at different scales and viewpoints (Subsection~\ref{MyDataset}). For our algorithms, we used $k=1000, \theta=1.12$, $m_x=m_y=20$ and $m_t=3$ in all experiments. To test the performance of our holistic approach, each sequence of 3D pointclouds was divided into $6 \times 5 \times 3$ spatio-temporal cells along $X$, $Y$, and $T$ dimensions, respectively.
The performance of the proposed algorithms was compared with seven state-of-the-art methods: Histogram of Oriented Gradients (HOG3D)~\cite{3Dgrad}, Random Occupancy Pattern (ROP)~\cite{Wang2012}, Histogram of 3D Joints (HOJ3D)~\cite{ViewInvariantJoint3D}, Actionlet Ensemble~\cite{ActionLet2012}, Histogram of 4D Oriented Normals (HON4D)~\cite{HON4D}, Depth Spatio-Temporal Interest Points (DSTIP)~\cite{DSTIP}, and Histograms of Depth Gradient (HDG)~\cite{MyWACV14}. The accuracies are taken from the original papers or obtained from the authors' implementations of DSTIP~\cite{DSTIP}, HDG~\cite{MyWACV14}, HOG3D~\cite{3Dgrad}, and HON4D~\cite{HON4D}. The implementation of HOJ3D~\cite{ViewInvariantJoint3D} is not available, so we used our own implementation.
\begin{figure}[t]
\begin{center}
\includegraphics[width=12 cm]{DatasetSamples}
\end{center}
\vspace{-6mm}
\caption{Sample 3D pointclouds from the MSRAction3D, MSRGesture3D, ActionPairs3D, and UWA3D Multiview Activity datasets}
\label{fig:DataSamples}
\end{figure}
\subsection{MSRAction3D dataset}
\label{MSRAction3D dataset}
The MSRAction3D dataset~\cite{Bag3DPoints} consists of 20 actions, each performed by 10 subjects 2-3 times (Fig.~\ref{fig:DataSamples}). The dataset is challenging due to high inter-action similarity. To test our holistic approach, we used five subjects for training and five for testing and repeated the experiments exhaustively over all 252 folds, as proposed by~\cite{HON4D}. To show the effectiveness of our automatic spatio-temporal scale selection, we used four different settings with fixed and adaptive values of $r$ and $\tau$. Table \ref{tab:Action} compares our algorithms with the existing state-of-the-art. Note that the proposed algorithm outperformed all techniques under all four settings. The maximum accuracy was achieved using constant $r$ and adaptive $\tau$. Adaptive $r$ did not improve the results since there is little scale variation in this dataset. Note that HOJ3D~\cite{ViewInvariantJoint3D}, Moving Pose~\cite{Zanfir}, and Actionlet~\cite{ActionLet2012} use skeleton data, which is not always available.
\begin{table}[t]
\caption{\small Accuracy comparison on the MSRAction3D dataset. Mean $\pm$ STD is computed over 252 folds. Fold 5/5 means subjects \{1,3,5,7,9\} were used for training and the rest for testing. $^a$ Moving Pose~\cite{Zanfir} used a different setting}
\centering
\begin{tabular}{lcccccc}
\toprule
{Method}\;\; & {Mean$\pm$STD}\;\;& {Max}\;\; & {Min}\;\; & 5/5\;\;\\
\midrule
HOJ3D~\cite{ViewInvariantJoint3D}\;\; & 63.55$\pm$5.23\;\; & 75.91 \;\;& 44.05 \;\;& 75.80\;\;\\
HOG3D~\cite{3Dgrad}\;\; & 70.38$\pm$4.40\;\; & 82.78 \;\;& 55.26 \;\;& 82.78\;\;\\
ROP~\cite{Wang2012} & - \;\;& -\;\; & -\;\; & 86.50\;\;\\
Moving Pose~\cite{Zanfir} & - \;\;& -\;\; & -\;\; & \;\;$91.70^a$\;\;\\
Actionlet~\cite{ActionLet2012}\;\; & - \;\;& -\;\; & -\;\; & 88.20\;\;\\
HON4D~\cite{HON4D}\;\; & 81.88$\pm$4.45\;\; & 90.61\;\; & 69.31\;\; & 88.36\;\;\\
DSTIP~\cite{DSTIP}\;\; & -\;\; & 89.30\;\; & -\;\; & -\;\;\\
HDG~\cite{MyWACV14}\;\; & 77.68$\pm$4.97\;\; & 86.13\;\; & 60.55\;\; & 83.70\;\;\\
\midrule
{\bf Holistic HOPC}\\
constant $r$, constant $\tau$ \; & 85.45$\pm$2.31\;\; & 92.39\;\; & 73.54\;\; & 91.64\;\;\\
adaptive $r$, constant $\tau$ \; & 84.78$\pm$2.89\;\; & 91.64\;\; & 72.41\;\; & 90.90\;\;\\
constant $r$, adaptive $\tau$ \; & 86.49$\pm$2.28\;\; & 92.39\;\; & 74.36\;\; & 91.64\;\;\\
adaptive $r$, adaptive $\tau$ \; & 85.01$\pm$2.44\;\; & 92.39\;\; & 72.94\;\; & 91.27\;\;\\
\bottomrule
\end{tabular}
\label{tab:Action}
\end{table}
We also evaluated our local method with automatic spatial and temporal scale selection and achieved $90.90\%$ accuracy (subjects \{1,3,5,7,9\} used for training and the rest for testing). This is higher than the $89.30\%$ of DSTIP~\cite{DSTIP} and the $88.36\%$ of HON4D~\cite{HON4D}. Note that DSTIP~\cite{DSTIP} only reported the accuracy of the best fold and used additional steps, such as mining discriminative features, which can be applied to improve the accuracy of any descriptor. We did not include such steps in our method.
\subsection{MSRGesture3D dataset}
The MSRGesture3D dataset~\cite{Wang2012} contains 12 American Sign Language gestures, each performed 2-3 times by 10 subjects. For comparison with previous techniques, we use the leave-one-subject-out cross-validation scheme proposed by~\cite{Wang2012}. Because of the absence of full-body subjects (only hands are visible), we evaluate our methods in two settings only. Table \ref{tab:Gesture} compares our method with existing state-of-the-art methods, excluding HOJ3D~\cite{ViewInvariantJoint3D} and Actionlet~\cite{ActionLet2012}, since they require 3D joint positions, which are not present in this dataset. Note that both variants of our method outperform all techniques by a significant margin, achieving an average accuracy of $96.23\%$, which is $3.5\%$ higher than the nearest competitor HDG~\cite{MyWACV14}. We also tested our local method with automatic spatial and temporal scale selection and obtained an accuracy of $93.61\%$.
\begin{table}[t]
\centering \caption{\small Comparison with state-of-the-art methods on MSRGesture3D dataset}
\begin{tabular}{lccccc}
\toprule
{Method}\;\; & {Mean$\pm$STD}\;\; & {Max}\;\; & {Min}\;\;\\
\midrule
HOG3D~\cite{3Dgrad} \;\; & 85.23$\pm$12.12\;\; & 100\;\; & 50.00\;\; \\
ROP~\cite{Wang2012}\;\; & 88.50\;\; & - & - \\
HON4D~\cite{HON4D}\;\; & 92.45$\pm$8.00\;\; & 100\;\; & 75\;\;\\
HDG~\cite{MyWACV14}\;\; & 92.76$\pm$8.80\;\; & 100 \;\;& 77.78\;\;\\
\midrule
{\bf Holistic HOPC} \\
adaptive $r$, constant $\tau$\;\; & 95.29$\pm$6.24\;\; & 100\;\; & 83.67\;\;\\
adaptive $r$, adaptive $\tau$\;\; & 96.23$\pm$5.29\;\; & 100\;\; & 88.33\;\;\\
\bottomrule
\end{tabular}
\label{tab:Gesture}
\end{table}
\subsection{ActionPairs3D dataset}
The ActionPairs3D dataset~\cite{HON4D} consists of depth sequences of six pairs of actions (Fig.~\ref{fig:DataSamples}) performed by 10 subjects. This dataset is challenging as the actions within each pair have similar motion and shape. We used half of the subjects for training and the rest for testing, as recommended by~\cite{HON4D}, and repeated the experiments over 252 folds. Table \ref{tab:Pairs} compares the proposed holistic HOPC descriptor in two settings with existing state-of-the-art methods. Our algorithms outperformed all techniques, with a $2.23\%$ improvement over the nearest competitor. Adaptive $\tau$ provides a larger improvement on this dataset than on the previous two. We also evaluated our local method with automatic spatial and temporal scale selection and obtained $98.89\%$ accuracy using subjects \{6,7,8,9,10\} for training and the rest for testing.
\begin{table}[t]
\centering \caption{\small Accuracy comparisons on the ActionPairs3D dataset. Mean$\pm$STD are computed over 252 folds. 5/5 means subjects \{6,7,8,9,10\} used for training and the rest for testing}
\begin{tabular}{lcccccc}
\toprule
{Method}\;\; & {Mean$\pm$STD}\;\;& {Max}\;\; & {Min}\;\; & 5/5\;\;\\
\midrule
HOJ3D~\cite{ViewInvariantJoint3D}\;\; & 63.81$\pm$5.94\;\; & 67.22 \;\;& 50.56 \;\;& 66.67\;\;\\
HOG3D~\cite{3Dgrad}\;\; & 85.76$\pm$4.66\;\; & 85.56 \;\;& 65.00 \;\;& 82.78\;\;\\
Actionlet~\cite{ActionLet2012}\;\; & - \;\;& -\;\; & -\;\; & 82.22\;\;\\
HON4D~\cite{HON4D}\;\; & 96.00$\pm$1.74\;\; & 100\;\; & 91.11\;\; & 96.67\;\;\\
\midrule
{\bf Holistic HOPC}\\
constant $r$, constant $\tau$\;\; & 97.15$\pm$2.21\;\; & 100\;\; & 88.89\;\; & 97.22\;\;\\
constant $r$, adaptive $\tau$\;\; & 98.23$\pm$2.19\;\; & 100\;\; & 88.89\;\; & 98.33\;\;\\
\bottomrule
\end{tabular}
\label{tab:Pairs}
\end{table}
\subsection{UWA3D Multiview Activity dataset}
\label{MyDataset}
We collected a new dataset using the Kinect to emphasize three factors: (1) scale variations between subjects, (2) viewpoint variations, and (3) continuous action execution with no breaks or pauses, so that the start and end body positions of the same action differ. Our dataset consists of 30 activities performed by 10 human subjects of varying scales: {\it one hand waving, one hand punching, sitting down, standing up, holding chest, holding head, holding back, walking, turning around, drinking, bending, running, kicking, jumping, mopping floor, sneezing, sitting down (chair), squatting, two hand waving, two hand punching, vibrating, falling down, irregular walking, lying down, phone answering, jumping jack, picking up, putting down, dancing,} and {\it coughing} (Fig.~\ref{fig:DataSamples}). To capture depth videos from the front view, each subject performed two or three random permutations of the 30 activities in a continuous manner. For cross-view action recognition, 5 subjects performed 15 activities from 4 different side views (see Fig.~\ref{fig:Keypoints}-b). We organized the dataset by segmenting the continuous sequences. The dataset is challenging due to self-occlusions and high inter-action similarity. For example, the {\it drinking} and {\it phone answering} actions have very similar motion, and only the hand location differs slightly. As another example, the {\it lying down} and {\it falling down} actions have very similar motion but different execution speeds. Moreover, some actions, such as {\it holding back, holding head,} and {\it phone answering}, contain self-occlusions. The videos were captured at 30 frames per second at a spatial resolution of $640 \times 480$.
We evaluate our proposed methods in the same-view, and cross-view action recognition settings. The holistic approach is used to classify actions captured from the same view and the local approach is used for cross-view action recognition where the training videos are captured from front view and the test videos from side views.
\subsubsection{Same-view Action Recognition}
We selected half of the subjects for training and the rest for testing and evaluated our holistic method in two settings: (1) constant $r$, constant $\tau$, and (2) constant $r$, adaptive $\tau$. Table \ref{tab:InHouse3Dsingleview} compares our methods with the existing state-of-the-art. Both variants of our algorithm outperform all methods, achieving a maximum of $84.93\%$ accuracy. The adaptive $\tau$ provides a minor improvement because there is no explicit action speed variation in this dataset.
To further test the robustness of our temporal scale selection (adaptive $\tau$) to action speed variations we use depth videos of actions performed by half of the subjects captured at 30 frames per second as training data and depth videos of actions performed by the remaining subjects captured at 15 frames per second as test data. The average accuracy of our method using automatic temporal scale selection was $84.64\%$ which is higher than $81.92\%$ accuracy achieved by our method using constant temporal scale and the $76.43\%$ accuracy achieved by HON4D. Next, we swap the frame rates of the test and training data. The average accuracy of our method using automatic temporal scale selection was $84.70\%$ which is higher than $81.01\%$ accuracy achieved by our method using constant temporal scale. The accuracy of HON4D was $75.81\%$ in this case.
\begin{table}[t]
\centering \caption{\small Accuracy comparison on the UWA3D Activity dataset for same-view action recognition}
\begin{tabular}{lcccccc}
\toprule
{Method}\;\; & {Mean$\pm$STD}\;\;& {Max}\;\; & {Min}\;\; \\
\midrule
HOJ3D~\cite{ViewInvariantJoint3D}\;\; & 48.59$\pm$5.77\;\; & 58.70 \;\;& 28.93 \;\;\\
HOG3D~\cite{3Dgrad}\;\; & 70.09$\pm$4.40\;\; & 82.78 \;\;& 51.60 \;\;\\
HON4D~\cite{HON4D}\;\; & 79.28$\pm$2.68\;\; & 88.89\;\; & 70.14\;\; \\
HDG~\cite{MyWACV14}\;\; & 75.54$\pm$3.64\;\; & 85.07\;\; & 61.90\;\; \\
\midrule
{\bf Holistic HOPC}\\
constant $r$, constant $\tau$\;\; & 83.77$\pm$3.09\;\; & 92.18\;\; & 74.67\;\;\\
constant $r$, adaptive $\tau$\;\; & 84.93$\pm$2.75\;\; & 93.11\;\; & 74.67\;\;\\
\bottomrule
\end{tabular}
\label{tab:InHouse3Dsingleview}
\end{table}
\subsubsection{Cross-view Action Recognition}
In order to evaluate the STKP detector and HOPC descriptor for cross-view action recognition, we used front views of five subjects for training and side views of the remaining five subjects for testing. Table~\ref{tab:InHouse3Dcrossview} compares our method with existing state-of-the-art holistic and local methods for cross-view action recognition. Note that the performance of all other methods degrades when the subjects perform actions at different viewing angles. This is not surprising, as existing methods assume that actions are observed from the same viewpoint, i.e., frontal. For example, HON4D achieved $86.55\%$ accuracy when the training and test samples were in the same (frontal) view. The average accuracy of HON4D dropped to $48.89\%$ when the training samples were captured from the front view and the test samples from four different side views. We also observed that the performance of existing methods did not degrade for actions like {\it standing up, sitting down}, and {\it turning around}, owing to the distinctness of these actions regardless of the viewpoint.
We test two variants of our method. First, we apply our STKP detector to the 3D pointcloud sequences and use the raw values of the fitted hyper-surface as features. The average accuracy obtained over the four different side views ($\pm25^\circ$ and $\pm50^\circ$) was $76.56\%$ in this case. Next, we use the STKP detector combined with the proposed HOPC descriptor. This combination achieved the best average accuracy, i.e., $82.23\%$. Comparisons with other methods and the accuracy of each method on the different side views are shown in Table~\ref{tab:InHouse3Dcrossview}. These experiments demonstrate that our STKP detector in conjunction with the HOPC descriptor significantly outperforms state-of-the-art methods for cross-view as well as same-view action recognition.
\begin{table}[t]
\centering \caption{\small Cross-view action recognition on the UWA3D Multiview Activity dataset. Depth sequences of five subjects at $0^\circ$ are used for training, and the remaining subjects at $0^\circ$ and 4 different side views are used for testing. Average accuracy is computed only for the cross-view scenario}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{5}{c}{View angle} & \\
{Method}\;\; & {$0^\circ$}\;\;& {$-25^\circ$}\;\;& {$+25^\circ$}\;\; & {$-50^\circ$}\;\; & {$+50^\circ$}\;\;& {Average}\;\;\\
\midrule
{\bf Holistic Methods}\\
HON4D~\cite{HON4D}\;\; & 86.55\;\; & 62.22\;\; & 60.00\;\;& 35.56\;\;&37.78\;\;&48.89\\
HDG~\cite{MyWACV14}\;\; & 79.13\;\; & 60.00\;\; & 64.44\;\;& 33.33\;\;&35.56\;\;&48.33\\
\midrule
{\bf Local Methods}\\
HOJ3D~\cite{ViewInvariantJoint3D}\;\; & 63.34\;\; & 60.00\;\; & 62.22\;\;& 37.78\;\;&40.00\;\;&50.00\\
DSTIP+DCSF~\cite{DSTIP}\;\; & 80.80\;\; & 66.67\;\; & 71.11\;\;&35.56\;\;&40.00\;\;&53.33\\
\midrule
STKP+hyper-surface fitting\;\; & 87.39\;\; & 81.33\;\; & 82.67\;\;& 71.11\;\;& 71.11\;\;& 76.56\\
STKP+HOPC\;\; & {\bf 91.79}\;\; & {\bf 86.67}\;\; & {\bf 88.89}\;\;&{\bf 75.56}\;\;&{\bf 77.78}\;\;&{\bf 82.23}\\
\bottomrule
\end{tabular}
\label{tab:InHouse3Dcrossview}
\end{table}
\section{Conclusion}
The performance of current 3D action recognition techniques degrades in the presence of viewpoint variations between the test and training data. We proposed a novel technique for action recognition that is more robust to action speed and viewpoint variations, consisting of a new descriptor, the Histogram of Oriented Principal Components (HOPC), and a keypoint detector. The proposed descriptor and detector were evaluated for activity recognition on three benchmark datasets. We also introduced a new public multiview dataset and showed the robustness of our method to viewpoint variations.
\section*{Acknowledgment}
This research was supported by ARC Discovery Grant DP110102399.
\bibliographystyle{splncs03}
\section{Introduction}
V391~Peg was the first case of a post-red giant branch star showing
evidence of the presence of a planet (\citealt{silvotti07} (hereafter SSJ07),
\citealt{silvotti08}), indicating that giant planets may survive the first
giant expansion of a star, provided that the orbital distance is large enough.
For V391~Peg~b, a minimum mass of 3.2 $\rm M_{\rm Jup}$\ was found, with an orbital period
of 3.2 yr, corresponding to an orbital distance of about 1.7 AU.
The presence of the planet was inferred by measuring the arrival times of the
maxima and minima of the stellar light, given that V391~Peg is a pulsating
subdwarf B (sdB) star with at least four $p$-mode pulsation periods between
344 and 354~s \citep{silvotti02, silvotti10}, and a few longer-period $g$-modes
\citep{lutz09}.
A recent review on hot subdwarfs of spectral type O and B is given by
\citet{heber16}.
V391~Peg~b is not the first case in which the light travel-time delay is used
to detect secondary low-mass bodies.
In principle, the timing technique may be used on any star or stellar system
that has a sufficiently stable clock, which may be given by the oscillations of
the stellar flux in pulsating stars (as in this case), but also by the radio
signals of pulsars or by the eclipse times of eclipsing binaries.
Radio timing was used to detect the first planetary system around the
pulsar PSR~1257+12 \citep{wolszczan92}. The extremely high precision of
the radio pulse made it possible to detect PSR~1257+12~b, the Moon-mass
innermost planet of the system \citep{konacki03}.
Of the planets detected through eclipse timing, the most convincing case is
given by two circumbinary planets orbiting the pre-cataclysmic binary NN~Ser.
Eight years after the discovery paper (\citealt{qian09}, see also
\citealt{beuermann10}) and 26 years after the first data, their existence
remains the best explanation for the observed eclipse time variations
\citep{bours16}.
Many other detached close binaries show eclipse time variations:
for some of them, the presence of planets is excluded by dynamic stability
computations and the periodic O--C trends may be caused by other reasons,
such as Applegate-like mechanisms (\citealt{applegate92}, \citealt{lanza06}).
However, for some others, the energy required to produce the quasi-periodic
changes in the quadrupole moment of the secondary star, referred to as the
Applegate mechanism, is too high, and the presence of Jovian planets remains
the most plausible explanation \citep{volschow16}.
The idea of using stellar pulsation to measure the reflex motion that is due
to a companion is not new (e.g., \citealt{barnes75}).
Recently, the high photometric accuracy achievable from space, in particular
with the {\em Kepler}\ mission, has led to a renewed interest in this technique
\citep{silvotti11}, and two systematic approaches based on frequency modulation
(FM) and phase modulation (PM, equivalent to the O--C method) were proposed
(\citealt{shibahashi12}, \citealt{telting12}, \citealt{shibahashi15};
\citealt{murphy14, murphy16b}).
However, to detect low-mass (substellar) companions, we need very
stable pulsators.
When we exclude all the solar-like oscillators, good candidates are the delta
Scuti stars (\citealt{compton16}; see also recent discovery by
\citealt{murphy16a}) and compact stars like white dwarfs or sdB stars.
As for white dwarfs, many articles in the literature have addressed this issue
(e.g., \citealt{kepler91}), but it has become increasingly evident
that other effects are present that can mimic light travel time effects
in the O--C diagrams of these stars (e.g., \citealt{dalessio15}).
For sdB stars the situation looks more promising, perhaps because these stars
have a fully radiative envelope, and there is at least one case in which the
presence of a low-mass stellar companion detected from pulsation timing was
confirmed by radial velocity measurements \citep{barlow11b}.
Another recent case of a pulsation-timing detection of an F5V companion to an
sdB pulsator is reported by \citet{otani+17}.
After the detection of V391~Peg~b, some other planet or brown dwarf (BD)
candidates orbiting sdB stars were proposed using different detection methods.
From eclipse timing, about one-third of the known detached sdB/sdO+dM
(dM=M-dwarf) post-common-envelope binaries (PCEB) are suspected to host
planets/BDs:
HW~Vir (\citealt{beuermann12} and references therein),
HS~0705+6700 (alias V470~Cam, \citealt{qian13} and references therein),
HS~2231+2441 (\citealt{qian10} and references therein; but see also
\citealt{lohr14}),
NSVS~14256825 (\citealt{almeida13}; \citealt{hinse14} and references therein),
NY~Vir (\citealt{lee14} and references therein),
and 2M~1938+4603 \citep{baran15}.
Interesting explorations on the origin of PCEB (and specifically sdB+MS/BD)
circumbinary planets can be found in \citet{zorotovic13}, \citet{schleicher14},
\citet{bear14}, and \citet{volschow16}.
Very different planets or planetary remnants with terrestrial radii have been
proposed from tiny reflection effects detected by the {\em Kepler}\ spacecraft in
KIC~05807616 \citep{charpinet11} and KIC~10001893 \citep{silvotti14}.
However, none of these sdB planet/BD candidates has been confirmed with at
least two independent detection methods.
More robust detections of a few BDs in eclipsing sdB binaries
(also called HW Vir systems after the sdB+dM prototype) were obtained
by combining stellar radial velocities (RVs) with photometric measurements:
J08205+0008, J1622+4730 and V2008-1753 have companion masses of about 71, 67,
and 69 $\rm M_{\rm Jup}$, respectively \citep{geier11,schaffenroth14,schaffenroth15}.
At least two more sdB+BD eclipsing systems were recently found from the OGLE
survey (Schaffenroth in preparation, private communication).
Finally, two more BD candidates in sdB binaries were found by combining
radial velocities with photometric reflection effects: CPD-64$\degr$6481
and PHL~457, with minimum masses of 50 and 28 $\rm M_{\rm Jup}$, respectively
\citep{schaffenroth14b}.
In this paper we reconsider the case of V391~Peg, for which we have collected
six years of new photometric time-series data, increasing the number of data
points by a factor of about 2.5.
The main stellar parameters of V391~Peg are summarized in Table~1.
We note that the JHK magnitudes are compatible with a single sdB star and do
not indicate any near-IR excess.
In section 2 a short summary of the data acquisition and reduction is given,
including the extraction of the pulsation frequencies.
The analysis of the amplitude spectrum of the $p$-modes at different frequency
resolutions is presented in section 3.
Section 4 is dedicated to the O--C analysis of the two main $p$-modes.
In section 5 we discuss the presence of the planet in the light of the new
O--C results, including a perspective on future developments.
In section 6 we present an analysis of the $g$-mode amplitude spectrum.
Finally, a summary of our results is given in section 7.
\begin{table}[h]
\begin{center}
\caption[]{Stellar parameters.}
{\small
\begin{tabular}{lc}
\hline
U & \hspace{1.2mm}$13.35 \pm 0.03^1$\\
B & \hspace{1.2mm}$14.35 \pm 0.02^1$\\
V & \hspace{1.2mm}$14.57 \pm 0.02^1$\\
J (2MASS) & $15.17 \pm 0.05$\\
H (2MASS) & $15.16 \pm 0.10$\\
K (2MASS) & $15.38 \pm 0.20$\\
\hline
\ensuremath{T_{\mathrm{eff}}} & $29\,300 \pm 500$ K$^2$\\
\ensuremath{\log g} & $5.4 \pm 0.1$ (cgs)$^2$\\
$\log$(N(He)/N(H)) & $-3.0 \pm 0.3^2$\\
M & $0.47^3$ $M_{\odot}$\\
R=R(M, $g$) & $0.23$ R$_{\odot}$\\
L=L(\ensuremath{T_{\mathrm{eff}}}, R) & $34$ \lsun\\
M$_{\rm V}$=M$_{\rm V}$(L, BC) & 3.88$^4$\\
d=d(V, M$_{\rm V}$) & $1\,400$ pc\\
\hline
\end{tabular}
}
\end{center}
{\small Notes: $^1$ Our calibration at TNG.}
{\small \hspace{8.3mm} $^2$ From \citealt{ostensen01}.}
{\small \hspace{8.3mm} $^3$ SdB canonical mass (assumed), see e.g.
\citealt{heber16}}.
{\small \hspace{8.3mm} $^4$ Absolute V mag assuming a bolometric correction
BC=--2.95.}
\end{table}
\section{Time-series photometric data: extraction of the pulsation frequencies}
\begin{table}
\centering
\caption[]{Time-series photometry.}
{\small
\begin{tabular}{lcrc}
\bf Telescope/instrument & \multicolumn{1}{c}{\hspace{-3mm} \bf Observers} & \multicolumn{1}{c}{\hspace{-0mm} \bf \#~runs}
& \multicolumn{1}{c}{\hspace{-2.5mm} \bf \#~hours}\\
\hline
Previous data (1999-2006)$^1$ & &168 & \multicolumn{1}{c}{\hspace{-0mm} 421.3} \\
\hline
Loiano 1.5m/BFOSC & \multicolumn{1}{c}{\hspace{-3mm} RS} & 20 & \multicolumn{1}{c}{\hspace{1.7mm} 75.4} \\
Piszk\'{e}stet\H{o} 1.0m/CCD & \multicolumn{1}{c}{\hspace{-3mm} MP/LM} & 14 & \multicolumn{1}{c}{\hspace{1.7mm} 67.5} \\
Moletai 1.6m/CCD & \multicolumn{1}{c}{\hspace{-3mm} RJ} & 26 & \multicolumn{1}{c}{\hspace{1.7mm} 79.4} \\
Wise 1.0m/CCD & \multicolumn{1}{c}{\hspace{-3mm} EL} & 6 & \multicolumn{1}{c}{\hspace{1.7mm} 35.7} \\
Lulin 1.0m/CCD & \multicolumn{1}{c}{\hspace{-3mm} WSH} & 7 & \multicolumn{1}{c}{\hspace{1.7mm} 24.2} \\
MDM 1.3m/CCD & \multicolumn{1}{c}{\hspace{-3mm} MR} & 7 & \multicolumn{1}{c}{\hspace{1.7mm} 33.4} \\
LOAO 1.0m/CCD & \multicolumn{1}{c}{\hspace{-3mm} SLK} & 47 & \multicolumn{1}{c}{\hspace{-0mm} 134.1} \\
Monet-N 1.2m/CCD & \multicolumn{1}{c}{\hspace{-3mm} SS/RL} & 20 & \multicolumn{1}{c}{\hspace{1.7mm} 55.0} \\
Baker 0.6m??/CCD & \multicolumn{1}{c}{\hspace{-3mm} MR} & 4 & \multicolumn{1}{c}{\hspace{1.7mm} 11.5} \\
Mercator 1.2m/CCD & \multicolumn{1}{c}{\hspace{-3mm} R\O+students} & 24 & \multicolumn{1}{c}{\hspace{1.7mm} 69.8} \\
WHT 4.2m/ULTRACAM & \multicolumn{1}{c}{\hspace{-3mm} TRM/VSD} & 7 & \multicolumn{1}{c}{\hspace{1.7mm} 36.7} \\
NOT 2.6m/ALFOSC & \multicolumn{1}{c}{\hspace{-3mm} R\O} & 3 & \multicolumn{1}{c}{\hspace{1.7mm} 11.2} \\
TNG 3.6m/DOLORES & \multicolumn{1}{c}{\hspace{-3mm} RS} & 8 & \multicolumn{1}{c}{\hspace{1.7mm} 18.7} \\
Calar Alto 2.2m/CAFOS & \multicolumn{1}{c}{\hspace{-3mm} SS/RL} & 10 & \multicolumn{1}{c}{\hspace{1.7mm} 25.9} \\
\hline
Tot new data (2007-2012) & &203 & \multicolumn{1}{c}{\hspace{1.4mm} 644.9$^2$} \\
All data (1999-2012) & &371 & \multicolumn{1}{c}{\hspace{-1.4mm} 1066.2} \\
\hline
\\
\multicolumn{4}{l}{Notes: $^1$ See SSJ07 Supplementary Information for more details}\\
\multicolumn{4}{l}{\hspace{11.3mm} (a Monet-N run of Nov 2006 was added to that list).}\\
\multicolumn{4}{l}{\hspace{8.3mm} $^2$ This number is smaller than the sum of col.~4 given that}\\
\multicolumn{4}{l}{\hspace{11.3mm} sometimes overlapping data from different telescopes}\\
\multicolumn{4}{l}{\hspace{11.3mm} were averaged using a weighted mean.}
\end{tabular}
}
\end{table}
\begin{figure}
\includegraphics[width=9.0cm]{lc-eps-converted-to.pdf}
\caption{Distribution of the 217,232 data points over 13 years.
The overall duty cycle is 0.92\%, and the best coverage is obtained in 2007
with a duty cycle of 5.55\%.
The varying relative intensity is caused by the beating between the main
frequencies and also depends on the varying quality of the data.}
\label{fig1}
\end{figure}
The new time-series photometric data were obtained using different
telescopes and instruments (see Table~2) with at least one and often two or
more comparison stars close to the target in order to remove spurious
photometric modulations that are due to atmospheric transparency variations.
The distribution of the data during the 13 years of observation is shown in
Fig.~1.
Most of the data were taken using a standard Johnson B filter.
Only at NOT and MERCATOR did we use a Bessell B and a Geneva B filter,
respectively.
Moreover, a SLOAN g filter was used in the WHT-MDM run of October
2007\footnote{The WHT data were simultaneously obtained with ULTRACAM in three
photometric bands (u, g, and r) but only the g-band data are
used in this article, while multi-band data were previously used
to identify the main pulsation modes of V391~Peg \citep{silvotti10}.}.
The data obtained in October 2007 at the Piszk\'{e}stet\H{o}, Loiano, and
Lulin Observatories were collected without any filter in order to maximize
the signal-to-noise ratio (S/N) of that run.
The differences introduced by the different filters in terms of amplitudes
or phases of the pulsation modes were considered and found to be
negligible because of the much larger volume of standard B measurements.
From nonadiabatic models, these differences (in particular the phase
differences) are expected to be very small for $l$=0 and $l$=1 modes
(\citealt{randall05}; see in particular their Figs.~13 and 14).
The data were reduced mainly by the observers using standard procedures
for aperture
differential photometry.
The times of all the data (new and old) were converted into
Barycentric Dynamical Time (BJD$_{\rm TDB}$) following \citet{eastman10}.
From the reduced data we extracted accurate pulsation frequencies using
a classical prewhitening technique: an iterative Fourier transform (FT)
process was applied, subtracting the main frequency from the data
residuals at each iteration, until no frequencies with amplitudes larger than
four times the FT mean noise level remained.
At the end of this iterative process, the pulsation frequencies, amplitudes,
and phases were optimized through a multi-sinusoidal fit, whose results are
given in Table~3.
Appropriate statistical weights were set and used in the sinusoidal fits
of the $p$-modes \citep{silvotti06} in order to account for the varying
quality of the data, which is due to different telescope apertures,
instrument efficiencies, and weather conditions.
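For illustration, the core of such an iterative prewhitening loop can be sketched as follows. This is a schematic version, not our actual reduction pipeline: the noise is crudely estimated as the mean amplitude of the current spectrum, the trial frequency grid is assumed to be given, and the final multi-sinusoidal optimization is omitted.
\begin{verbatim}
import numpy as np

def prewhiten(t, y, w, freqs, snr_limit=4.0):
    # t, y, w: times, fluxes, statistical weights; freqs: trial grid [Hz]
    found = []
    while True:
        ft = np.array([np.sum(w * y * np.exp(-2j * np.pi * f * t))
                       for f in freqs])
        amp = 2.0 * np.abs(ft) / np.sum(w)     # weighted DFT amplitude
        k = np.argmax(amp)
        if amp[k] < snr_limit * amp.mean():    # crude noise proxy
            break
        phase = np.angle(ft[k])
        y = y - amp[k] * np.cos(2 * np.pi * freqs[k] * t + phase)
        found.append((freqs[k], amp[k], phase))
    return found
\end{verbatim}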
\begin{table*}
\centering
\caption[]{Pulsation frequencies.}
{\small
\begin{tabular}{llllcl}
& & ~~~~~$\overline{\bf F}$~~[$\mu$Hz] & ~~~~~~~~$\overline{\bf P}$~~[\rm s] & ~$\overline{\bf A}$~[\rm ppt]$^1$ & \bf ~~~Phase$^2$\\
\hline
\bf \hspace{28mm} \textit{p}-modes$^3$ & $f_1$ & 2860.938272(06) & 349.5356784(07) & 7.56 & 0.7327(06) \\
& $f_2$ & 2824.096225(10) & 354.0955832(13) & 4.06 & 0.7492(11) \\
& $f_3$ & 2881.123233(62) & 347.0868544(74) & 0.77 & 0.3285(58) \\
& $f_4$ & 2909.995332(63) & 343.6431630(75) & 0.65 & 0.2560(58) \\
& $f_2^-$ & 2823.932963(57) & 354.1160549(72) & 0.93 & 0.1015(54) \\
\hline
\bf \hspace{28mm} \textit{g}-modes$^4$ & $F_1$ & 201.96312(16) & 4951.3991(40) & 1.01 & 0.116(09) \\
& $F_2$ & 295.11065(23) & 3388.5596(26) & 0.78 & 0.475(12) \\
& $F_3$ & 320.19726(23) & 3123.0748(22) & 0.71 & 0.918(13) \\
\hline
\\
\multicolumn{6}{l}{Notes: $^1$ ppt~=~parts per thousand~=~0.1\%.}\\
\multicolumn{6}{l}{\hspace{8.2mm} $^2$ Normalized phases corresponding to BJD$_{\rm TDB}$\ 2451470.476568 (1$^{st}$ datum).}\\
\multicolumn{6}{l}{\hspace{8.2mm} $^3$ For the $p$-modes, frequencies and periods are the mean values in the period 1999-2012,
corresponding to BJD$_{\rm TDB}$\ $\sim$2454090 (or year}\\
\multicolumn{6}{l}{\hspace{10.5mm} $\approx$2007.0), which is the weighted mean time.
We note that in 10 years of observation, the secular variations of the pulsation frequen-}\\
\multicolumn{6}{l}{\hspace{10.5mm} cies and periods are larger than the 1$\sigma$
errors reported here, obtained from a Monte Carlo simulation assuming constant frequencies.}\\
\multicolumn{6}{l}{\hspace{8.2mm} $^4$ Because of the noise in the Fourier transform at low frequencies (Fig.~11), the multi-sinusoidal
fits for the $g$-modes are less stable than}\\
\multicolumn{6}{l}{\hspace{10.5mm} those for the $p$-modes, and therefore the 1$\sigma$ frequency/period errors for the $g$-modes
reported here are underestimated.}\\
\end{tabular}
}
\end{table*}
\section{$P$-modes}
The first problem in analyzing a data set of several
years is that the pulsation frequencies are no longer constant.
This was already known for V391~Peg, and a quantitative measurement of $\dot{\it p}$\
had been obtained from previous data giving
$\dot{\it p}$=1.46$\pm$0.07$\times 10^{-12}$ and
2.05$\pm$0.26$\times 10^{-12}$ for $f_1$ and $f_2$, respectively (SSJ07).
In general, the time variation of a pulsation frequency gradually
broadens the width of the peak in the Fourier transform and may split it
into different close peaks if the data set is long enough.
For a linear frequency variation, the time needed to split a pulsation
frequency into different close peaks is given by
\vspace{-4mm}
\begin{equation}
T \approx P~ \left( \frac{1.5}{\dot{P}} \right) ^{1/2}
,\end{equation}
\noindent
where P is the pulsation period, and the value 1.5 comes from the actual
frequency resolution, given by $\sim$1.5/T \citep{loumos77}.
For V391~Peg we obtain T$\approx$10 years.
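As a consistency check (our own arithmetic, inserting into the relation above the $\dot{P}$ values measured by SSJ07),
\[
T_{f_1} \approx 349.5\,{\rm s} \left(\frac{1.5}{1.46\times10^{-12}}\right)^{1/2} \approx 3.5\times10^{8}\,{\rm s} \approx 11\,{\rm yr}, \qquad
T_{f_2} \approx 354.1\,{\rm s} \left(\frac{1.5}{2.05\times10^{-12}}\right)^{1/2} \approx 3.0\times10^{8}\,{\rm s} \approx 10\,{\rm yr}.
\]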
However, after a few years, this effect already becomes important and makes the
standard prewhitening technique (which assumes fixed frequencies and
amplitudes) less efficient in returning precise frequencies.
For this reason, after several tests we decided to split our analysis of the
amplitude spectrum into three steps with data sets of different length
and different frequency resolution.
It is useful to recall here that the two main pulsation modes of V391~Peg
were identified as $l$=0 and $l$=1 from high-precision
multi-color photometry obtained with ULTRACAM at the WHT \citep{silvotti10}.
We show below that this identification is well supported by the current
analysis.
\subsection{Low-frequency resolution: main pulsation frequencies}
\begin{figure}
\includegraphics[width=9.0cm]{dft_2007_10_g_MDM_UCAM-eps-converted-to.pdf}
\caption{$P$-mode amplitude spectrum of our best-quality run of 7.9 days,
with a duty cycle of 35\%, obtained in October 2007 with a SLOAN g filter
using two telescopes at different longitudes: the WHT 4.2m in La Palma,
equipped with ULTRACAM, and the MDM 1.3m at Kitt Peak.
The upper panel shows the spectral window (red), while the other panels from
top to bottom show the amplitude spectra of the data and of the residuals
after one, two, three, and four prewhitening steps.
A plot showing the high quality of the ULTRACAM data is presented
in \citet{silvotti10}.}
\label{fig2}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=9.0cm]{dft_2007-eps-converted-to.pdf}
\caption{Same as Fig.~2, but using all the data of 2007, the year with the best
coverage.
Thanks to the increased frequency resolution, we see that after four
prewhitening steps, there is still significant power, with secondary peaks
near $f_2$ and $f_3$ that may be due to the rotational splitting of these
modes.}
\label{fig3}
\end{figure}
\begin{figure*}[t!]
\includegraphics[width=18.14cm]{dft_2_9_1999_2012-eps-converted-to.pdf}
\caption{Same as Figs.~2 and 3, but using the whole data set (1999-2012).
Upper panels: amplitude spectrum of the data and of the residuals (on the
same vertical scale) after subtracting the four main pulsation frequencies
($f_1$ to $f_4$).
We note that the residual power is significantly higher than in Fig.~3.
The small box shows the normalized spectral window (red) with the one-day
aliases at $\pm$11.57~$\mu$Hz.
Lower panels (from top to bottom): normalized spectral window (red) with the
one-year aliases at $\pm$31.7~nHz, and details of the amplitude spectrum of
data and residuals near $f_1$ (left) and $f_2$ (right).
The horizontal scale in the left and right panels is the same.
Two vertical dashed lines (green) highlight two components of a possible
rotational splitting. See text for more details.}
\label{fig4}
\end{figure*}
As a first step, we consider our best-quality run of October 2007, with a
length of 7.9 days and a duty cycle of 35\%.
At this level of frequency resolution, $\delta$f$\simeq$2.2~$\mu$Hz,
the amplitude spectrum is very clean and shows only four pulsation modes
without any trace of multiplets of close frequencies (Fig.~2).
\subsection{Medium-frequency resolution: rotational splitting of $f_2$~?}
As a second step, we consider a larger data set of about 220 days, collected
in 2007. This data set is a compromise between best duty cycle, best data
quality, and relatively long duration in order to detect possible rotational
splitting of the pulsation modes with $l$>0.
At the same time, with 220 days, the effects of the long-term variations
of the pulsation frequencies are still small, which keeps the amplitude
spectrum relatively clean (Fig.~3).
When we removed the four main pulsation frequencies through prewhitening, two
low-amplitude peaks emerged from the noise, close to $f_2$ and
$f_3$, while nothing appeared close to $f_1$, which confirms that this must be
an $l$=0 mode.
The peak close to $f_3$ ($f_3^+$)
is only $\sim$3.4$\sigma$ above the noise, which is below our detection
threshold of 4$\sigma$. Secondary peaks close to $f_3$ are also visible
when we use the whole data set (1999-2012), but with a very low S/N.
The peak close to $f_2$ ($f_2^-$), at about 4.3$\sigma$ above the noise,
differs by --0.181~$\mu$Hz\ from $f_2$ and is also detected in the whole data
set, but at a lower S/N and smaller separation of
--0.163~$\mu$Hz\ (Fig.~4 lower right panel).
Using the latter separation, which is more precise, and assuming that
$f_2^-$ is part of an $l$=1 triplet split by stellar rotation
in which $f_2$ is the central component, we obtain a stellar rotation
period of about 40 days.
This value is obtained in the slow rotation approximation
($\Omega_{\rm ROT}$<<$f$, see \citealt{ledoux51}),
\vspace{-4mm}
\begin{equation}
f_{k,l,m} = f_{k,l,0} + m \, \Omega_{\rm ROT} \, (1 - C_{k,l})
,\end{equation}
\noindent
in which we have used a value of 0.43 for the Coriolis term C$_{k,l}$,
according to the adiabatic evolutionary models of \citet{charpinet02}
(the model that best fits the \ensuremath{T_{\mathrm{eff}}}, \ensuremath{\log g}, and P of V391~Peg is model 19 of
sequence 4).
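Explicitly, inverting the relation above for the $m=\pm1$ components (our own arithmetic, using the $-0.163$~$\mu$Hz\ separation measured in the full data set),
\[
P_{\rm ROT} = \frac{1}{\Omega_{\rm ROT}} = \frac{1-C_{k,l}}{|\Delta f|} = \frac{0.57}{0.163~\mu{\rm Hz}} \approx 3.5\times10^{6}\,{\rm s} \approx 40\,{\rm d}.
\]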
The low amplitude of the secondary peak suggests a low inclination.
This interpretation is consistent with the previous identification of $f_2$
as an $l$=1 mode by \citet{silvotti10}.
A rotation period of $\sim$40 days would be compatible with the
distribution of rotation periods as recently measured by the {\em Kepler}\
spacecraft in a sample of 18 sdB $g$-mode pulsators (see \citealt{zong17}
and references therein).
Thirteen of them show periods between 6 and 88 days, with a mean value of
about 33 days.
The other five do not show any rotational splitting of the frequencies,
indicating that they may have very low inclinations and/or extremely long
rotation periods.
\subsection{High-frequency resolution: frequency and amplitude variations}
\begin{figure}
\includegraphics[width=9.0cm]{dft_5_4_1999_2008-eps-converted-to.pdf}
\caption{Comparison between the amplitude spectrum near $f_1$ of the data
(left) and the amplitude spectrum near $f_1$ of a simulated data set (right)
with the same time distribution.
In this test we used only the data up to 2009.0 because in this period
it is easier to simulate the behavior of $f_1$.
For the simulated data we used a single pure sine wave (no noise)
with the same frequency and amplitude of $f_1$ and also with similar
long-term frequency and amplitude variations (linear variation of the
period with $\dot{\it p}$=1.34$\times$10$^{-12}$, as derived by the O--C analysis,
and sinusoidal variation of the amplitude like in Fig.~7 upper right panel,
green section). Like in the previous figures, the upper left panel is the
normalized spectral window (red), while the other
panels are the amplitude spectra of data and residuals after one, two, and
three prewhitening steps.
This simple test shows that up to the secondary peak on the right side
of $f_1$, the data are well reproduced by the simulation, both in terms of
frequency and amplitude. See text for more details.}
\label{fig5}
\end{figure}
\begin{figure*}[ht!]
\includegraphics[width=9.0cm]{MC_f1_a1-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{MC_f2_a2-eps-converted-to.pdf}
\caption{Distribution of the frequency and amplitude deviations for the two
main pulsation modes of V391~Peg.
The deviations, in units of 1$\sigma$ errors, are the differences between
the values obtained from the original light curve and those obtained from 1000
artificial light curves created by the MC simulator of Period04 \citep{lenz04}.
The synthetic light curves are built using the five $p$-modes of Table~3
and adding Gaussian noise at the same level as the original data.
The 2D distributions are also projected into 1D histograms and compared
with a normal distribution (red).}
\label{fig6}
\end{figure*}
\begin{figure*}[ht!]
\includegraphics[width=9.0cm]{f_variations-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{a_variations-eps-converted-to.pdf}
\caption{Period and amplitude variations of the two main pulsation modes
of V391~Peg. The variation of $p_1$ is compatible with a linear increase up to
2009.0, when a change of regime appears. The same change is also visible for
the amplitude: up to 2009.0, $a_1$ shows a fairly regular sinusoidal
shape with a period of about 3400 days or 9.3 years.
A linear increase of the pulsation period is visible also for $p_2$ when
considering the whole data set, while the irregular variations of $a_2$ can be
at least partially attributed to the beating between $f_2$ and $f_2^-$.
More details are given in the text.
}
\label{fig7}
\end{figure*}
When we further increase the length of the data set and consider the
whole light curve in the period 1999-2012, the amplitude spectrum is
much more complex because of the effects of the frequency variations, which
become important (Fig.~4).
When we subtract the main pulsation frequencies from the light curve through
prewhitening, secondary peaks emerge very close to the main pulsation
frequencies.
The reason is that prewhitening subtracts from the data at each step a sine
wave with constant frequency and amplitude, while on timescales of many years,
pulsation frequencies and amplitudes are no longer constant.
This effect, which is clearly visible for $f_1$ (Fig.~4 lower left
panels), adds noise to the amplitude spectrum of the residuals and may lead
to incorrect determinations of the low-amplitude frequencies.
In this respect, the average values of $f_3$ and $f_4$ might be slightly
different from those reported in Table~3, with differences even larger than
the errors reported there.
In order to decipher the information contained in the peaks close to $f_1$,
we conducted a small experiment with a synthetic light curve.
Since the behavior of $f_1$ is fairly regular and relatively easy to model
in the period up to 2009.0, while it becomes more irregular later
on (see Figs.~7, 8, and 9), we considered only the period
up to 2009.0.
The synthetic light curve contains a single sine wave without noise with
the same time distribution as the data, a frequency and amplitude equal to
$f_1$, and similar frequency and amplitude variations.
In practice, we imposed a linear variation of the period with
$\dot{\it p}$=1.34$\times 10^{-12}$ (the value found from the O--C analysis described
in section 4) and a sinusoidal variation of the amplitude corresponding to
the sinusoidal fit shown in Fig.~7 (top right panel).
The amplitude spectrum of this synthetic light curve near $f_1$ is shown
in Fig.~5 (right panels) and can be compared with the real data in the left
panels.
Up to the secondary peak on the right side of $f_1$, the agreement between
real and synthetic data is very good both in terms of frequency and amplitude:
we obtain 2860.9418~$\mu$Hz\ and 2.74~ppt vs 2860.9414~$\mu$Hz\ and 2.61~ppt,
respectively (the main peak being at 2860.9382~$\mu$Hz\ with an amplitude of
8.84~ppt).
Thus we verified that a linear time variation of a pulsation period
splits the frequency into three close peaks almost equally spaced in frequency.
If the amplitude is constant, the two secondary peaks have the same amplitude.
If the amplitude is variable as in this case, the two secondary peaks have
different amplitudes.
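Part of this structure has a simple analytic origin: a sinusoidal amplitude
modulation of a carrier signal,
\begin{equation*}
A_0 \left[ 1 + \alpha \sin (2 \pi \nu_m t) \right] \sin (2 \pi f t) ,
\end{equation*}
is exactly equivalent to a central peak at $f$ of amplitude $A_0$ plus two
side peaks at $f \pm \nu_m$ of amplitude $\alpha A_0 / 2$ each; it is the
combination of this modulation with the slow frequency drift that makes the
two observed side peaks unequal.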
Before proceeding with our analysis on frequency and amplitude variations,
it is important to verify that the uncertainties associated with frequencies
and amplitudes such as those reported in Table~3 are correctly estimated.
These uncertainties are the 1$\sigma$ errors obtained from a Monte Carlo (MC)
simulation on 1000 synthetic light curves in which random Gaussian noise
(at the same level as the data) was added to the five $p$-modes listed in
Table~3.
In Fig.~6 the distribution of frequencies and amplitudes obtained from the
MC simulations is shown for the two main pulsation modes of V391~Peg
($f_1$ and $f_2$).
After we verified that the error bars of our measurements were reliable,
we measured the pulsation periods and amplitudes for $f_1$ and $f_2$
in each observing season (Fig.~7), where observing season means the period
from May to December of the same year in which V391~Peg is observable.
The frequencies and amplitudes shown in Fig.~7 were obtained from
multi-sinusoidal fits considering only four frequencies ($f_1$ to $f_4$),
while $f_2^-$ was excluded because it is not detected in most of these
one-season runs. The same exercise was repeated using all five frequencies,
but the results were less reliable.
\begin{figure*}
\includegraphics[width=18.14cm]{oc1a_1999_2012_sr-eps-converted-to.pdf}
\includegraphics[width=18.14cm]{oc1b_1999_2012_sr-eps-converted-to.pdf}
\caption{O--C diagram of the main pulsation mode of V391~Peg when
using monthly runs (each point represents the data collected within one month).
Upper panels: fit of the O--C data with a parabola (long-term variation, blue
continuous line) plus a sine wave (``planetary
component'', red dashed line) and planetary component alone after subtracting
the long-term component.
This solution gives satisfactory results only up to the end of 2008, and the
fit was made considering only the data up to 2009.0.
Lower panels: same as upper panels, but using two sinusoids.
In this case, the fit was made using all the data, but a reasonable fit
is obtained only up to $\sim$2010, indicating that two components are simply
not enough to describe the whole data set.
When we compare the planetary component alone in the period 2000-2009.0,
the fit is better when we use a parabola + sine wave ($\chi^2$=762) than
when we use the double sine wave ($\chi^2$=1267); for comparison, a straight line would
give $\chi^2$=1376.}
\label{fig8}
\end{figure*}
When we consider only the data up to 2009.0, corresponding to the green part
of Fig.~7, the variation of
$p_1$ can be fit with a straight line whose slope corresponds to
$\dot{\it p}$$_1$=(1.60$\pm$0.20)$\times10^{-12}$.
In the same period, the amplitude $a_1$ shows a fairly regular sinusoidal
pattern with a period of about 3400 days (9.3 yr) and an amplitude of 29\%.
After 2009.0, the trend of the period and amplitude variations of $p_1$
changes and $p_1$ starts to decrease. The reason for this behavior,
which is also confirmed by the O--C analysis in Figs.~8 and 9, is not known.
Although we normally attribute period and amplitude variations to nonlinear
interactions between different pulsation modes, in this case, with an $l$=0
mode, we cannot invoke the resonant mode coupling between the components of
a multiplet of modes split by the stellar rotation,
nor even the three-mode resonance, which would require that $f_1$ corresponds
to a linear combination of the other two pulsation modes that we do not see.
These two mechanisms were recently invoked as a possible explanation for the
frequency and amplitude variations observed in the sdB $g$- and $p$-mode
pulsator KIC~10139564 \citep{zong16}.
The lower left panel of Fig.~7 shows that when we use all the available data,
the variation in $p_2$ can be fit with a straight line whose slope corresponds
to $\dot{\it p}$$_2$=(1.47$\pm$0.41)$\times$10$^{-12}$.
In the lower right panel we see quite irregular variations of $a_2$,
but these apparent variations can be at least partially attributed to the
interaction (beating) between $f_2$ and $f_2^-$.
When we also consider $f_2^-$ in the fit, the individual measurements of
$a_2$ may vary by several tenths of ppt, indicating that the 1$\sigma$ error
bars of $a_2$ are underestimated.
At shorter timescales, we did not find any periodicity in the amplitude
variations of $a_2$ that could confirm the beating effect and thus the
rotation period of the star around 40 days.
The mean quality of the data is not sufficient for detecting this effect.
Based on our best-quality run of October 2007 at the WHT-MDM, we can only
exclude short timescale variations (from night to night) for both $a_1$ and
$a_2$.
We also attempted to fit the data from 1999 to
the end of 2008 with two sine waves corresponding to $f_1$ and $f_2$, leaving
as free parameters not only the frequencies, amplitudes, and phases, but also
$\dot{\it p}$$_1$ and $\dot{\it p}$$_2$.
The fit converged only when we fixed $\dot{\it p}$$_2$, but the value that
we obtained for $\dot{\it p}$$_1$ is about ten times higher than the value obtained
from the direct measurements.
This method is less reliable than the direct method or the O--C method
described in the next section because it assumes constant amplitudes,
while we know that the amplitudes are not constant, and in particular, $a_1$
varies significantly (Fig.~7).
While amplitude variations in sdB $p$-mode pulsators have been known for
a long time, with time scales ranging from days to years, the results reported
in this section show that even the frequencies are less stable than previously
believed and may suffer significant variations that are not simply due to the
long-term modifications of the stellar structure.
Amplitude and frequency variations have recently been detected in most of
the sdB pulsators observed by the {\em Kepler}\ spacecraft, with complex patterns
that sometimes are stochastic \citep{ostensen14} and sometimes more regular
and periodic (e.g., \citealt{zong16}).
\section{O--C analysis}
\begin{figure*}[ht]
\includegraphics[width=9.0cm]{oc1_1999_2006_bjd_nat-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{oc2_1999_2006_bjd_nat-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{oc1_1999_2012_bjd-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{oc2_1999_2012_bjd-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{oc1_1999_2012_bjd_NEWFIT-eps-converted-to.pdf}
\includegraphics[width=9.0cm]{oc2_1999_2012_bjd_NEWFIT-eps-converted-to.pdf}
\caption{Same as Fig.~8 for $f_1$ (left) and $f_2$ (right) for
one-season runs.
Panels 1A, 1B, 2A, and 2B are obtained using only the data up to 2007.0,
so that we can directly compare the current results (blue and red
lines) with those obtained by SSJ07 (green lines, shifted by -20~s in
panels 1A and 2A and by -5~s in panels 1B and 2B).
The small horizontal shifts of the first and last points are due to the
addition of three observing runs that were not present in SSJ07.
Panels 1B and 2B show that in the current results, the period of
the sinusoid is slightly shorter for $f_1$ but longer for $f_2$, so
that at the end the agreement between $f_1$ and $f_2$ is worse with respect
to SSJ07.
The reasons for these differences are discussed in the text.
When we add the new data, the longer period of the sinusoidal component of
$f_2$ with respect to $f_1$ is confirmed (panels 3B and 4B), and moreover, we
note a further difference in amplitude.
Panel 3A confirms the change of regime of $f_1$ near 2009 that was already
visible in Figs.~7 and 8.
This change also tends to worsen the fit of $f_2$ (4A), and for this reason,
the fits shown in panels 3A to 4B are obtained considering only the data
up to 2009.0.
Panels 5A and 5B show an alternative solution obtained using a low-frequency
sine wave for the long-term component of $f_1$, as in the lower panels of
Fig.~8.
The fits shown in panels 5A to 6B were obtained using all the available data.
More comments are given in the text.}
\label{fig9}
\end{figure*}
The O--C analysis (\citealt{sterken05}; and subsequent articles in the same
volume) is a powerful method for detecting tiny variations of the pulsation
periods on long timescales that cannot be seen or clearly seen from direct
independent measurements (like in Fig.~7).
The O--C method is more sensitive than the direct method because
instead of directly measuring the period change, it measures the phase
variations induced by the period change.
When we consider a period that changes linearly in time (a good approximation
on timescales of a few years, extremely short with respect to the
evolutionary timescales), the phase variations have the great advantage of
being proportional to T$^2$, where T is the duration of the observation.
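Explicitly, for a period that changes linearly in time, $p(t) = p_0 + \dot{p}\,t$,
the accumulated time shift after a baseline T is
\begin{equation*}
{\rm O}-{\rm C} \simeq \frac{1}{2} \, \frac{\dot{p}}{p_0} \, T^2 ,
\end{equation*}
so that even a tiny $\dot{p}$ becomes measurable once the baseline is long enough.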
In order to reduce the phase errors, the data for the O--C analysis were
considered in monthly subsets.
A four-sinusoid fit was applied to each subset using the best (fixed)
frequencies from Table~3 ($f_1$ to $f_4$) and leaving amplitudes and phases
as free parameters.
$f_2^-$ was not used because it is not detected in the monthly subsets.
The differences between these monthly phases and those obtained from the
whole data set are the O--C differences shown in Fig.~8, in which the phase
differences have been converted into time differences.
In Fig.~8 we see the same effect as was already seen in Fig.~7: since 2009,
the curvature in the O--C diagram of $f_1$ changes.
We do not know the reason for this change; it might be related to nonlinear
interactions between different pulsation modes.
In any case, it is clear from Fig.~8 (upper panels) that a two-component
fit with a parabola plus a sinusoid (like in SSJ07) can give satisfactory
results only up to $\sim$2009.
When considering only the data up to 2009.0, the long-term parabolic variation
of the main pulsation period corresponds to
$\dot{\it p}$$_1$=(1.36$\pm$0.06)$\times$10$^{-12}$.
In order to also fit the more recent data, we tried a different approach
using two sinusoids (lower panels of Fig.~8).
Even in this way, we did not obtain a reasonable fit of the whole data set,
and moreover, the quality of the fit up to 2009 is lower, indicating that a
sinusoidal $\dot{\it p}$\ is not the solution.
As a second step, the O--C analysis was repeated using larger data subsets
covering a whole observing season (that is, from May to December for V391~Peg)
and using the same pulsation frequencies as before.
Again, $f_2^-$ was not used because it remains undetected in almost all of these runs.
These larger subsets are particularly useful for $f_2$ (the secondary
pulsation frequency), in order to reduce the phase errors that are very
large when we use the monthly subsets.
The results are shown in Fig.~9.
In the upper panels (from 1A to 2B), we see the O--C diagram of $f_1$ and
$f_2$ when using only the data from 1999 to 2007.0, basically the same data
as in SSJ07 (only three short runs were added), but with the new updated
frequencies.
These plots show that when we use better values for $f_3$ and $f_4$,
the sinusoidal components of $f_1$ and $f_2$ (panels 1B and 2B) differ:
even if the amplitudes and the initial phases are still in agreement
(like in SSJ07), the periods are now different.
In the central panels (from 3A to 4B), we see the new fits when we use the
data from 1999 to 2009.0, before the change of sign of $\dot{\it p}$$_1$:
the sinusoidal components of $f_1$ and $f_2$ (panels 3B and 4B) are similar to
the previous ones (panels 1B and 2B), except for a larger amplitude for $f_2$,
which increases the differences between $f_1$ and $f_2$.
The parabolic components (panels 3A and 4A) correspond to
$\dot{\it p}$$_1$=(1.34$\pm$0.04)$\times$10$^{-12}$ and
$\dot{\it p}$$_2$=(1.62$\pm$0.22)$\times$10$^{-12}$, in good agreement with the
previous measurements of SSJ07.
These numbers also agree with adiabatic theoretical expectations for the
secular variation of the pulsation periods \citep{charpinet02}.
However, the fact that $\dot{\it p}$$_1$ changed sign near 2009
indicates that in real stars, these processes may be more complicated.
Finally, in the lower panels of Fig.~9 (from 5A to 6B), we show the best
two-component fits of the whole data set using two sinusoids
with different periods for $f_1$, and a parabola plus a sinusoid for $f_2$.
Except for the last points, these fits can reproduce the general trend of
the O--C data (panels 5A and 6A), but show a large dispersion, particularly
for $f_1$: the sinusoidal fits in panels 5B and 6B ($\chi^2$ equal to
894 and 276, respectively) are only slightly better than a simple straight
line ($\chi^2$=1075 and 322).
At the same time, the two sinusoidal components have similar periods,
amplitudes, and phases, within 4\%, 8\%, and 7\%, respectively.
In order to explore this in more detail, we made a weighted average
of the O--C data in panels 5B and 6B (which means a weighted average
of the O--C data of $f_1$ and $f_2$ after subtracting their
long-term component).
The result is illustrated in Fig.~10 and shows that when we sum
the information from $f_1$ and $f_2$, the fit of the sinusoidal component
improves, and at the end, we have 9 points out of 13 that are consistent
with a sine wave with a period of 1127$\pm$45 days (or 3.09$\pm$0.12 years)
and an amplitude of 3.02$\pm$0.85 light seconds.
Assuming that the sine wave is caused by the planet and
that the mass of the sdB star is 0.47$M_{\odot}$, these numbers correspond to
an orbital distance of 1.6 AU and a minimum mass of 1.8 $\rm M_{\rm Jup}$.
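These numbers follow from the light travel-time interpretation of the O--C
sine wave: its amplitude $A$ gives the projected size of the stellar orbit,
$a_{\rm sdB} \sin i = cA \simeq 6.1 \times 10^{-3}$~AU, Kepler's third law
(with $m_{\rm p} \ll M_{\rm sdB}$) gives the orbital distance of the companion,
\begin{equation*}
a_{\rm p} \simeq \left[ \frac{M_{\rm sdB}}{M_{\odot}}
\left( \frac{P_{\rm orb}}{\rm yr} \right)^{2} \right]^{1/3} {\rm AU}
\simeq 1.65~{\rm AU} ,
\end{equation*}
and the position of the common barycenter then gives
$m_{\rm p} \sin i \simeq M_{\rm sdB} \, a_{\rm sdB} / a_{\rm p} \simeq 1.8$~$\rm M_{\rm Jup}$.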
Although not shown in Fig.~9, we also tried to fit the O--C plots of
$f_1$ and $f_2$ with a parabola plus two sinusoids (corresponding to two
potential planets), but we were unable to find any solution for which the six
parameters of the two sinusoids were in reasonable agreement between $f_1$
and $f_2$.
Several checks were made in order to ensure that the new O--C results
reported in this section are correct and robust and to understand
why in SSJ07 periods, amplitudes, and phases of the sinusoidal components of
the O--C diagrams of $f_1$ and $f_2$ agreed so well.
As stated previously, the current O--C results were obtained using four
frequencies ($f_1$ to $f_4$), also including the data taken with filters
different from Johnson B, and making use of statistical weights.
However, we also tested different combinations without statistical weights,
excluding all the data taken in filters different from Johnson B (see section
2), and considering only the two main frequencies $f_1$ and $f_2$.
In all these tests, the results varied little\footnote{When we consider only
$f_1$ and $f_2$ in the multi-sinusoidal fits instead of four frequencies,
the results are almost identical to those reported in panels 3A to 4B of
Fig.~9.
When we use only Johnson-B-filter data, the main difference is that the period
of the sinusoidal component of $f_1$ increases by 7\%.
When we do not use statistical weights, we obtain the largest difference, with
the amplitude of the sinusoidal component of $f_2$ reduced from
9.4 to 5.4~s, while all other parameters remain about the same.}.
Thus it is not easy to understand the differences between our current results
and those obtained in SSJ07 (even in that analysis, similar tests with
different combinations were made).
We conclude that the good agreement found in SSJ07 was a coincidence due to
a few small differences between the two analyses: slightly different pulsation
frequencies, two NOT observing runs that were excluded in SSJ07 because they
were taken with a
Bessell B filter and that are now included (after careful tests of the effects
on phase and amplitude), and one new standard-B-filter Monet-N observing run
that was not yet available in SSJ07.
Of these factors, the most important is probably the different set of
frequencies that was used.
In SSJ07 we used $f_1=2860.9387$, $f_2=2824.0965$, $f_3=2880.6842$,
$f_4=2921.8463$, and $f_5=2882.0039$~$\mu$Hz.
Comparing these values with those in Table~3, we see very small differences
for $f_1$ and $f_2$, compatible with real period variations; the new value
of $f_3$ is higher by 0.4390~$\mu$Hz; $f_5$ is not confirmed and was not used
at all in the new analysis, but its influence must be small because
of its very low amplitude.
Finally, and most important, the updated value of $f_4$
is lower by 11.8510~$\mu$Hz\ with respect to the old value, which means that
in SSJ07, because of the poorer spectral window, we used an incorrect value
corresponding to the one-day alias on the right side of the correct peak.
This is probably the main reason for the different results.
An incorrect value of $f_4$ can modify the multi-sinusoidal fits and thus
slightly modify the phases of $f_1$ and $f_2$ as well.
\begin{figure}
\includegraphics[width=9.0cm]{oc1_oc2_1999_2012-eps-converted-to.pdf}
\caption{O--C diagram obtained by combining the information from $f_1$ and
$f_2$. In practice, we have computed the weighted average of the points in
panels 5B and 6B of Fig.~9 and recomputed the best fit with a sine wave.
Compared with these panels, the fit is significantly improved and the residuals
of 9 points out of 13 (including all those with smaller error bars) are close
to zero.}
\label{fig10}
\end{figure}
\section{V391~Peg~b: real planet or false detection\,?}
Whether V391~Peg~b is a real planet or a false detection is an open question.
The O--C diagrams of $f_1$ and $f_2$ provide arguments both in favor of
and against the presence of V391~Peg~b.
\noindent
1) $f_1$: considering the period up to 2009.0, the O--C diagram of $f_1$ still
has a sinusoidal component that can be explained by the presence
of a giant planet with a minimum mass of 3.5 $\rm M_{\rm Jup}$, orbiting V391~Peg in 3.1
years at a distance of 1.7 AU.
However, the behavior of $f_1$ after 2009.0 shows that the situation is more
complex, and we see from Figs.~8 and 9 that a simple two-component fit of the
O--C data is not enough to interpret the whole data set up to 2012.
Using two sinusoids with different periods allows us to fit the O--C data
up to 2010 or 2011, but the quality of the fit is much poorer. When we use
two sinusoids, the period of the sine wave corresponding to the planet
(Fig.~9/5B) is longer than the period obtained with a parabola plus a sine wave
(Fig.~9/3B).
\noindent
2) $f_2$: up to 2009.0, the O--C diagram of $f_2$ also shows a sinusoidal
component, but now, unlike SSJ07, the period and the amplitude differ from
$f_1$ by $\sim$20\% and $\sim$36\%, respectively.
The new data support the previous identification of $f_2$ as an $l$=1 mode,
and this implies that frequency splitting due to stellar rotation must
be at work.
Regardless of whether our detection of $f_2^-$ is real,
these modes split by stellar rotation must be there, close to $f_2$, and
this is a source of noise for the O--C computations of $f_2$.
This argument makes the O--C results from $f_1$ (which is an $l$=0 mode)
more reliable, and this is one of the reasons why the presence of the planet
cannot be excluded.
At the same time, this argument can partially explain the discrepancies
between the O--C diagrams of $f_1$ and $f_2$.
\noindent
3) $f_1$+$f_2$: when we try to fit the whole set of O--C data using a
sine wave plus a longer-period sinusoid for $f_1$, and a sine wave plus a
parabola for $f_2$ (panels 5 and 6 of Fig.~9), we see that the sine wave corresponding to the
planet is very similar for $f_1$ and $f_2$ in terms of period, amplitude, and
phase (panels 5B and 6B of Fig.~9).
Although these fits are of poor quality, it is possible to obtain a substantial
improvement when we use both pulsation frequencies together (Fig.~10).
If we interpret this effect with the presence of the planet, we obtain
a minimum mass of 1.8 $\rm M_{\rm Jup}$, while the orbital period and distance,
3.1 years and 1.65 AU, do not change much with respect to the
values obtained previously.
In conclusion, while in SSJ07 the presence of a planet orbiting V391~Peg
was robustly and independently suggested by the two main pulsation modes
of the star, these two modes now give contradictory indications.
A sinusoidal component is still visible in the O--C diagrams of both $f_1$
and $f_2$, but the parameters of the two sinusoids are different in general.
The presence of a planet orbiting V391~Peg is clearly much less robust than
before, although it cannot be entirely excluded.
The peculiar behavior of $f_1$ with a quite sudden change of sign of its
time derivative after 2008 suggests that pulsation timing is a delicate
method, with aspects that are still unclear and are likely related to nonlinear
pulsation effects.
As a consequence, the reliability of the O--C method to find low-mass
companions should be questioned, without forgetting, however, that for sdB
stars we have at least two cases in which the presence of a stellar
companion was detected through pulsation timing \citep{barlow11a,otani+17},
and in one case, for CS~1246, this detection was confirmed by
radial velocity (RV) measurements \citep{barlow11b}.
With respect to V391~Peg, the O--C detection was easier in both cases
because of the much higher companion mass, and for CS~1246,
also because of the much shorter orbital period of $\sim$14 days, which meant
no problems with the long-term variation of the pulsation period.
Unlike CS~1246, which exhibits a single large-amplitude radial mode, and
EC~20117-4014, which shows three low-amplitude pulsation modes with frequency
separations of $\sim$250 and $\sim$680~$\mu$Hz\ \citep{otani+17}, with V391~Peg
we have the additional difficulty that all four pulsation modes are
concentrated within 86~$\mu$Hz, which makes it more difficult to measure
the phases accurately.
In order to confirm or definitively reject the presence of V391~Peg~b,
an independent confirmation with another method is needed.
Given that Gaia astrometry is not accurate enough at a distance of about
1400~pc, spectroscopic RVs seem the most natural way to proceed.
However, the RV ``noise'' produced by the pulsations is a serious concern
and can easily reach several hundred m/s, while the expected planetary signal
is no more than 100 m/s.
Given the very different time scales, it is in principle possible
to remove or reduce the noise due to the
pulsations, provided that we know the
Fourier spectrum and the main pulsation modes in detail.
This is true for the high-frequency part of the spectrum (the $p$-modes),
which is relatively simple, with only two dominant modes that
have similar periods.
The noise due to the $p$-modes can be reduced by choosing an exposure time
close to an integer multiple of $\sim$350~s (the pulsation period of the
dominant mode, $1/f_1 \simeq 350$~s).
For the $g$-modes, the situation is more complicated as the low-frequency part
of the Fourier spectrum is not well known (see next section).
The noise can be reduced by averaging the results obtained from different
spectra taken in the same epoch at different pulsation phases.
A great help for a precise determination of the $g$-modes may come
from TESS (Transiting Exoplanet Survey Satellite, \citealt{ricker16}),
which can observe V391~Peg continuously for 54 days a few years
from now, with a sampling time of 20 or 120~s.
\section{$G$-modes}
$G$-modes were detected in V391~Peg by \citet{lutz09}.
Our new larger data set has been used to confirm this detection.
Given that the $g$-modes are particularly disturbed by the atmospheric
variations that act at similar frequencies, we selected a subset of
high-quality data in which each single run lasts at least a few hours.
This subset, which has a total duration of 192.8 hours spread over 5.8 years
(between 2002 and 2008), was corrected for differential atmospheric
extinction (the comparison stars are always much redder than the sdB) and
analyzed. The amplitude spectrum in Fig.~11 shows two regions with an excess
of power near 180 and 310~$\mu$Hz\ and three peaks that emerge from
the noise at more than 5$\sigma$.
The corresponding frequencies, amplitudes, and phases are listed in Table~3.
The noise threshold, which was 4$\sigma$ for the $p$-modes, was increased
to 5$\sigma$ because the spectrum is much noisier in this region.
After these three peaks were subtracted from the data, the lower
panel of Fig.~11 shows that some residual power is still there, suggesting that
further low-amplitude frequencies are likely present below the noise
threshold.
As anticipated in the previous section, in two years from now, TESS will be
able to shed light on this part of the Fourier spectrum and likely measure
the rotation period of the star, confirming or refuting the tentative rotation
period of $\sim$40 days suggested by the $p$-mode analysis in section 3.2.
\begin{figure}[t]
\includegraphics[width=9.0cm]{dft_g_modes-eps-converted-to.pdf}
\caption{$G$-mode amplitude spectrum using our best-quality runs
between 2002 and 2008 (192.8 hours of observations in total).
The upper right panel shows the spectral window (red), while the other panels
from top to bottom show amplitude spectrum
and residuals after one, two, and three prewhitening steps.
We note an excess of power in two main regions near 180 and 310~$\mu$Hz.
After prewhitening, this excess of power is not completely removed near
180~$\mu$Hz, suggesting that further low-amplitude frequencies are present
in that region.}
\label{fig11}
\end{figure}
\section{Summary}
Interpreting the new O--C results shown in Figs.~8 and 9 is more complicated
than it was ten years ago.
At that time, the very good agreement between the sine-wave component of $f_1$
and $f_2$ strongly supported the presence of a giant planet (SSJ07).
Now, with many more data, this agreement is much more uncertain and the
presence of V391~Peg~b is weaker and requires confirmation with an
independent method.
Like in SSJ07, a two-component fit (parabola + sine wave) still gives
satisfactory results for both $f_1$ and $f_2$, at least up to 2009.
The sinusoidal components of $f_1$ and $f_2$, however, now differ in period
and amplitude by $\sim$20\% and $\sim$36\%, respectively.
Starting in phase, after two cycles the O--C sine wave of $f_2$ is antiphased
with respect to $f_1$.
When we consider all the O--C data from 1999 to 2012, a two-component fit
is in general not satisfactory.
For $f_1$, we tried to fit the O--C data with a double sine wave,
corresponding to a sinusoidal behavior of $\dot{\it p}$$_1$.
The result is a very poor fit.
However, this solution produces a certain agreement between the sinusoidal
components of $f_1$ and $f_2$.
The change in sign of the time derivative of the main pulsation period
near 2009 is an intriguing phenomenon that is difficult to explain.
Nonlinear interactions between pulsation modes seem the most natural
explanation, but the $l$=0 identification \citep{silvotti10}, which is
confirmed by the new data, does not help as we cannot invoke resonant mode
coupling between the components of a multiplet nor resonance between modes
linked by linear combinations that we do not see.
The irregular behavior of $f_1$ agrees to a certain extent with recent {\em Kepler}\
results, which showed that sdB pulsation frequencies are in general less
stable than previously believed.
The {\em Kepler}\ results are mostly focused on $g$-modes, but a similar behavior
seems also relatively common for the $p$-modes.
At least this is suggested by our results.
The $l$=1 identification for $f_2$ \citep{silvotti10} is also
confirmed
by the new data (or at least $l$ must be $>$0).
A retrograde mode is detected, although at the limit of our detection
threshold, and this suggests a stellar rotation period of about 40 days.
Using only the data up to 2009.0, we can improve our previous measurements of
$\dot{\it p}$\ for $f_1$ and $f_2$ and obtain
$\dot{\it p}$$_1$=(1.34$\pm$0.04)$\times$10$^{-12}$ and
$\dot{\it p}$$_2$=(1.62$\pm$0.22)$\times$10$^{-12}$.
The order of magnitude of these numbers is in agreement with theoretical
expectations for evolved models of extreme horizontal branch stars
\citep{charpinet02}, and their positive sign would normally be interpreted as
an indicator of a stellar expansion.
At least for $f_1$, however, the change in curvature near 2009 implies that
these numbers are not simply or directly related to the evolutionary timescales
expected from theory, and the situation is more complicated.
Finally, the new data confirm that V391~Peg is a hybrid pulsator,
showing both $p$- and $g$-modes.
The next opportunity for a more detailed study of this star, and in particular
for the study of the low-frequency part of its Fourier spectrum, is given by
the TESS mission, which may observe V391~Peg continuously for 54 days
in about two years from now.
With a better knowledge of the Fourier spectrum at low frequencies
as well, it
should be easier to confirm or reject the presence of a planet orbiting
V391~Peg by measuring the spectroscopic radial velocities of the star.
\begin{acknowledgements}
We thank Elia Leibowitz, who made the data collected at the
Wise Observatory available to us, Christopher D.~J. Savoury for helping us with
the ULTRACAM observations and data reduction, and Wen-Shan Hsiao for
contributing the Lulin data.
We also thank Patrick Lenz for providing us with a modified version of
period04, which facilitated the error estimation from the MC simulations.
V.~S.~D. and ULTRACAM are supported by STFC grant ST/J001589/1.
L.~M. was supported by the Hungarian National Research, Development and
Innovation Office (NKFIH) grant PD-116175 and the J\'anos Bolyai Research
Scholarship of the Hungarian Academy of Sciences.
\end{acknowledgements}
We would like to thank all members of the Information Field Theory group at MPA
for many useful discussions, in particular (in alphabetical order)
Philipp Arras, Jakob Knollm\"uller, Daniel Pumpe, Reimar Leike, and Martin Reinecke.
\newpage
\subsection{German Internet}
Some years ago, German internet providers
started to use the TV cable infrastructure to offer broadband connections for
most homes in Germany.
Because of its historic origin in TV signal distribution, the cable network
infrastructure is mostly tree-like, and for the scope of this example we
simplify this to a fully tree-like system with different levels, such as
international nodes, federal nodes, and regional nodes.
The main idea of this example is to find out whether a node is malfunctioning,
based on incident reports from homes in Germany.
For our model we assume that every connection in Germany passes through the
international \emph{commercial internet eXchange} node $\eta^{(0)}_0$ in Frankfurt a.M.
(DE-CIX) and use this as the root of our tree (level 0).
$\eta^{(0)}_0$ is either $0$ if it is broken or $1$ if it works fine.
$20$ \emph{federal nodes} $\eta^{(1)}_i$ (level 1) are connected directly to
this root node (although there are only $16$ states in Germany).
$\eta^{(2)}$ which also define the smallest resolution for the origins of the
incident reports i.e., for each region $j$ the number of incidents $d_j$ is
reported.
We also assume that these regions are quadratic, have a population $N_j$ which is very
roughly proportional to the actual population and assume a constant rate
$\alpha$ throughout Germany of people who have a cable internet connection and
are willing to report incidents.
Additionally, there is always a portion of people who report incidents that
actually have nothing to do with the nodes but rather with inadequate router
setups in their homes.
We call this artifact the \emph{stupidity noise rate} $\beta$ and also assume
that this is a national problem and does not depend on the region.
Therefore the probability distribution for the number of incidents $d_j$ in
region $j$ is Poissonian:
\begin{equation*}
P(d_j | \lambda_j ) = \text{Poisson}(d_j ; \lambda_j)
\end{equation*}
where $\lambda_j = N_j (\alpha \omega_j + \beta)$.
Here $\omega_j$ is the binary variable stating whether the region $j$ has
internet access or not.
Its value can be formalized as
\begin{equation}\label{eq:regcon}
\omega_j = \sum_{i} \eta^{(0)}_0 \cdot C^{(0)}_{0, i} \cdot \eta^{(1)}_i \cdot C^{(1)}_{i, j} \cdot \eta^{(2)}_j
,\end{equation}
where
\begin{equation*}
C^{(l)}_{m, n} = \left\{
\begin{split}
1, &\text{ if node $m$ of level $l$ is connected to node $n$ of level $l+1$} \\
0, &\text{ else}
\end{split}
\right.
\end{equation*}
With equation (\ref{eq:regcon}), $\omega_j$ is zero if at least one of the
nodes connecting region $j$ to DE-CIX (or DE-CIX itself) is broken, and one if
all connecting nodes are working.
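To illustrate the forward model, the following minimal sketch (our own; the
tree sizes, populations, and rates are invented for this illustration and are
not part of the \hmcif\ demos) generates mock incident counts:
\begin{CodeChunk}
\begin{CodeInput}
import numpy as np

np.random.seed(42)

# toy tree: 1 root, 3 federal nodes, 6 regional nodes
eta0 = 1                              # root node (DE-CIX) works
eta1 = np.array([1, 0, 1])            # the second federal node is broken
eta2 = np.array([1, 1, 1, 0, 1, 1])   # the fourth regional node is broken
C0 = np.array([1, 1, 1])              # root is connected to all federal nodes
C1 = np.zeros((3, 6), dtype=int)      # federal-to-regional connections
C1[0, :2] = C1[1, 2:4] = C1[2, 4:] = 1

# omega_j = sum_i eta0 * C0_i * eta1_i * C1_ij * eta2_j
omega = eta0 * ((eta1 * C0) @ C1) * eta2

# Poisson incident counts with populations N_j, report rate alpha and
# stupidity noise rate beta
N = np.array([1000, 500, 2000, 800, 1500, 300])
alpha, beta = 0.05, 0.01
d = np.random.poisson(N * (alpha * omega + beta))
\end{CodeInput}
\end{CodeChunk}
Here the regions cut off by the broken federal node, as well as the region
whose own node is broken, keep only the noise rate $\beta$ in their expected
counts.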
\subsection{Log Normal}
The Wiener filter example is nice as a proof of concept
but does not exactly show the advantages of an HMC approach.
The analytical solution calculated via conjugate gradients is essentially as fast as one
leapfrog integration step in the HMC approach and gives a nearly perfect solution.
To present a problem where an HMC approach is actually better
than other approaches we introduce a still rather simple
log-normal Wiener filter scheme.
Since there is no analytical solution for calculating the mean value,
most algorithms compute a maximum-a-posteriori (MAP) solution instead,
which is often good enough but does not in general
represent the actual mean value of the distribution.
In this example, again a Gaussian prior is set on $s$
\begin{equation}
\Prob(s|S) \propto \exp \left( - \frac 12 s^\top S^{-1} s \right)
\end{equation}
but the data model for $d$ is changed to
\begin{equation}
\label{eq:lndatagen}
d = R e^s + n
\end{equation}
with $n$ being Gaussian noise.
This imposes a log-normal prior on the variable $x = e^s$ implicitly.
The energy functional for the full posterior can be written as
\begin{equation}
H(s|d) = \frac 12 s^\top S^{-1} s + \frac 12 (d - R e^s)^\top N^{-1}(d - Re^s) + \text{const}
\end{equation}
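For reference, the gradient that the HMC leapfrog integration needs follows
from the chain rule (our own straightforward derivation, with $\odot$ denoting
the element-wise product):
\begin{equation*}
\nabla_s H = S^{-1} s - e^s \odot \left[ R^\top N^{-1} \left( d - R e^s \right) \right] .
\end{equation*}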
In this example we work on a (one-dimensional) 1024 pixel regular space
to keep the sampling time relatively short.
Once again the covariance $S$ is assumed to be diagonal in the harmonic space of $s$
with its power spectrum given by (\ref{eq:powspec}).
\subsubsection[Implementation in \purehmcif]{Implementation in \hmcif}
The following code snippets can also be found in the \hmcif\ package's demo
folder in the nonlinear\_wiener\_filter\_demo.py script.
A similar demo, which solves the problem with a MAP algorithm, is also
available in \nifty .
In the \hmcif\ demo we use this MAP solution as a reference for our mean solution.
The demo works by default on a (one-dimensional) 1024 pixel regular space.
For simplicity we assume again that for the following example $D$, $S$, $R$ and $N$
are already well defined as \niftyo s.
To make the problem a little more interesting, our instrument $R$ is broken:
it has a 200 pixel wide blind spot where it only returns zeros.
As in the case of the Wiener filter example a \niftyo\ defined as \code{HT}
transforms fields from the harmonic to the position representation.
We further introduce a field $\powvar = \text{diag}(S)^{\frac 12}$
which imprints the power spectrum in the harmonic representation of $s$ onto a
white noise field.
With that we can introduce a new random variable $z = \powvar^{-1} \odot s$
($\odot$ represents the Hadamard product) for sampling, which has the identity as prior covariance, since
\begin{equation}
s^\top S^{-1}s = (\powvar^{-1}\odot s)^\top (\powvar^{-1} \odot s) = z^\top z .
\end{equation}
The field $z$ is thus represented in the eigenbasis of its prior.
Sampling in the space of $z$ but being interested in $e^s$ also means that we need a non-linear transformation
of our samples before expectation values can be calculated:
\begin{CodeChunk}
\begin{CodeInput}
import nifty4 as ift
import hmcf
[...]
non_linearity = ift.library.nonlinearities.Exponential()
[...]
def sample_transform(z):
return non_linearity(HT(power*z))
\end{CodeInput}
\end{CodeChunk}
The energy for the full posterior in equation (\ref{eq:wiener}) is already
implemented as \eon\ in \nifty .
First we use the MAP approach of \nifty\ for comparison:
\begin{CodeChunk}
\begin{CodeInput}
m = ift.Field.full(h_space, 1e-7)
posterior_energy = ift.library.NonlinearWienerFilterEnergy(
m, d, R, non_linearity, HT, power, N, S, inverter=inverter)
# Minimization with chosen minimizer
posterior_energy = minimizer(posterior_energy)[0]
map_solution = posterior_energy.position
\end{CodeInput}
\end{CodeChunk}
where \code{minimizer} is a \nifty\ algorithm returning an \eon\
at the MAP position.
As starting points for our Markov chains we create samples from the
prior covariance $S$ (still the identity) with mean \code{map_solution}:
\begin{CodeChunk}
\begin{CodeInput}
x_initial = [map_solution + S.draw_sample() for _ in range(num_processes)]
\end{CodeInput}
\end{CodeChunk}
Creating an \hmcsc\ instance and adjusting some parameters works as before
with the Wiener filter:
\begin{CodeChunk}
\begin{CodeInput}
nl_hmc = hmcf.HMCSampler(potential=posterior_energy,
num_processes=num_processes,
sample_transform=sample_transform)
nl_hmc.display = hmcf.TableDisplay
nl_hmc.epsilon = hmcf.EpsilonPowerLawDivergence
nl_hmc.mass.reevaluations = 3
nl_hmc.run(num_samples=1000, max_burn_in=5000,
convergence_tolerance=1.,
x_initial=x_initial)
\end{CodeInput}
\end{CodeChunk}
\subsubsection{Results}
Afterwards the HMC mean is compared to the MAP solution of the problem:
\begin{CodeChunk}
\begin{CodeInput}
hmc_mean = nl_hmc.mean
hmc_std = ift.sqrt(nl_hmc.var)
map_solution = sample_transform(map_solution)
diff = abs(map_solution - hmc_mean)
\end{CodeInput}
\end{CodeChunk}
and again visualized:
\begin{CodeChunk}
\begin{CodeInput}
lo = np.min([true_sky.min(), map_solution.min(), data.min()])
hi = np.max([true_sky.max(), map_solution.max(), data.max()])
plotdict = {"colormap": "Planck-like", "ymin": lo, "ymax": hi}
ift.plot(true_sky, name="true_sky.png", **plotdict)
ift.plot(map_solution, name="reconstructed_sky.png", **plotdict)
ift.plot(data, name="data.png", **plotdict)
ift.plot(hmc_mean, name='hmc_mean.png', **plotdict)
ift.plot(diff, name='difference.png', **plotdict)
ift.plot(hmc_std, name='hmc_std.png', colormap="Planck-like",
         ymin=0., ymax=hmc_std.max())
\end{CodeInput}
\end{CodeChunk}
These images generated with \code{numpy.random.seed(42)} are depicted in figure
\ref{fig:lndemo}.
\begin{figure}
\captionsetup[subfigure]{oneside,margin={-2cm,0cm}}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/mock_signal}
\caption{true signal}
\label{fig:lnsig}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/data}
\caption{data field}
\label{fig:lndata}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/hmc_mean}
\caption{HMC mean}
\label{fig:lnhmc}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/map_sol}
\caption{MAP solution}
\label{fig:lnmap}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/hmc_var}
\caption{HMC std}
\label{fig:lnhmcvar}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{log_normal/difference}
\caption{difference}
\label{fig:lndiff}
\end{subfigure}
\caption{Results of the log\_normal\_demo.py script.
It shows that the MAP solution (figure \subref{fig:lnmap}) is in this case a very
good approximation to the true (HMC) mean (figure \subref{fig:lnhmc}).
The HMC mean was calculated using in total 5000 samples.
The problem includes a partially 'broken' instrument
which can be easily seen as a prominent zero-valued feature in
the data field (figure \subref{fig:lndata}).
In figure \subref{fig:lnhmcvar} the high uncertainty of the HMC sampler in
this region can be observed.}
\label{fig:lndemo}
\end{figure}
\subsection{Wiener Filter}
Consider a rather easy inference problem: a measurement with an instrument
provides a data field $d$ according to the stochastic equation
\begin{equation}\label{eq:data}
d = Rs + n
\end{equation}
where $n$ is Gaussian white noise (covariance $N$) and $s$ the signal.
$R$ is a linear operator representing the instrument's response.
If $s$ has a Gaussian prior with covariance $S$ one can show that the
full posterior is
\begin{equation}\label{eq:wiener}
\begin{split}
\Prob(s|d) &\propto \exp \left( -\frac 12 s^\top S^{-1} s - \frac 12 (d - Rs)^\top N^{-1}(d - Rs) \right)\\
&\propto \exp \left( -\frac 12 (s-m)^\top D^{-1}(s-m) \right)
\end{split}
\end{equation}
where $D^{-1} = S^{-1} + R^\top N^{-1} R$ and $m = DR^\top N^{-1}d$ the
mean value.
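For completeness, the second proportionality follows from completing the
square in $s$:
\begin{align*}
s^\top S^{-1} s + (d - Rs)^\top N^{-1} (d - Rs)
&= s^\top D^{-1} s - 2\, s^\top R^\top N^{-1} d + d^\top N^{-1} d \\
&= (s - m)^\top D^{-1} (s - m) + \text{const} ,
\end{align*}
where the constant does not depend on $s$.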
The advantage of choosing this problem is that there exists an analytic solution
for the mean value $m$, and therefore a reference solution for the HMC sampler.
We assume the covariance $S$ to be diagonal in the harmonic representation
of the space for $s$, with a power spectrum $P(k)$:
\begin{equation}
\label{eq:powspec}
P(k) \propto \left( 1 + \frac{k}{k_0} \right)^{-4}
\end{equation}
where $k_0$ defines the correlation length.
Furthermore the signal-to-noise ratio (SNR) is 1,
meaning that the covariance of $n$ is roughly as big as the (mean) variance
of $s$ in the position space.
\subsubsection[Implementation in \purehmcif]{Implementation in \hmcif}
The following code snippets can also be found in the \hmcif\ package's demo
folder in the wiener\_filter\_demo.py script.
A full Wiener filter demo (without HMC sampling) can also be found in the \nifty\ package.
The demo works by default on a 64x64 pixel regular space
with the covariance of the signal $s$ being given as described above.
This means that there are 4,096 degrees of freedom.
For simplicity assume that for the following example $D$, $S$, $R$ and $N$
are already well defined as \niftyo s.
Also our data field $d$ is available as \niftyf .
Since $S$ is only diagonal in the harmonic space the sampling itself
takes place in that representation.
Another \niftyo\ defined as \code{HT} transforms fields from the harmonic
to the position representation.
The energy for the full posterior in equation (\ref{eq:wiener}) is already
implemented as \eon\ in \nifty , too.
With that an \hmcsc\ instance can be defined as:
\begin{CodeChunk}
\begin{CodeInput}
import nifty4 as ift
import hmcf
[...]
posterior_energy = ift.library.WienerFilterEnergy(position=start_position,
d=d, R=R, N=N, S=S,
inverter=inverter)
wiener_hmc = hmcf.HMCSampler(potential=posterior_energy, num_processes=5,
sample_transform=HT)
\end{CodeInput}
\end{CodeChunk}
where \code{start_position} is just a field in the harmonic representation of
the space for $s$ with zero value everywhere.
The \code{inverter} is a tool in \nifty\ for numerically inverting operators.
To optimize the sampling process additional features are set:
\begin{CodeChunk}
\begin{CodeInput}
wiener_hmc.epsilon = hmcf.EpsilonPowerLaw
wiener_hmc.mass.reevaluations = 2
wiener_hmc.display = hmcf.TableDisplay
\end{CodeInput}
\end{CodeChunk}
And finally a run is initiated:
\begin{CodeChunk}
\begin{CodeInput}
wiener_hmc.run(num_samples=200, convergence_tolerance=0.5)
\end{CodeInput}
\end{CodeChunk}
\subsubsection{Results}
Afterwards the HMC mean is compared to the analytic solution of the problem:
\begin{CodeChunk}
\begin{CodeInput}
hmc_mean = wiener_hmc.mean
analytic_mean = HT(D*R.adjoint*N.inverse(d))
diff = abs(hmc_mean - analytic_mean)
\end{CodeInput}
\end{CodeChunk}
and visualized using the \nifty\ plot functions:
\begin{CodeChunk}
\begin{CodeInput}
lo = hmc_mean.min()
hi = hmc_mean.max()
plotdict = {"colormap": "Planck-like", "ymin": lo, "ymax": hi}
ift.plot(hmc_mean, name="hmc_mean.png", **plotdict)
ift.plot(analytic_mean, name="analytic_mean.png", **plotdict)
ift.plot(diff, name='difference.png', **plotdict)
\end{CodeInput}
\end{CodeChunk}
When running the script the HDF5 file is saved to a
(new) subdirectory ``samples'' where the script is located.
Additionally the last lines in the code above generate plots using the
\python\ package \pkg{matplotlib} in the directory from
where the script was started.
Figure \ref{fig:wienerdemo} displays these plots generated
using \code{numpy.random.seed(42)}.
\begin{figure}[ht]
\captionsetup[subfigure]{oneside,margin={-2cm,0cm}}
\begin{subfigure}{0.5\textwidth}
\includegraphics{wiener_filter/hmc_solution}
\caption{}
\label{fig:wienerhmc}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{wiener_filter/analytical_solution}
\caption{}
\label{fig:wienerana}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{wiener_filter/difference}
\caption{}
\label{fig:wienerdiff}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics{wiener_filter/data}
\caption{}
\label{fig:wienerdata}
\end{subfigure}
\caption{Results of the wiener\_filter\_demo.py script.
The first row represents the two approaches in solving the problem,
where \subref{fig:wienerhmc} is the mean of the HMC samples and
\subref{fig:wienerana} the conjugate gradient solution.
In the bottom row the absolute difference of both approaches is
displayed in \subref{fig:wienerdiff}
and finally in \subref{fig:wienerdata} the mock data field from
which the reconstruction was performed.}
\label{fig:wienerdemo}
\end{figure}
\section[Using \purehmcif ]{Using \hmcif }\label{sec:eg}
\input{tex/demo_multi_field}
\section{Installation}\label{sec:install}
\subsection{Dependencies}
\hmcif\ relies on the following other \python\ packages:
\begin{description}
\item[\np ]: The very basic \python\ package for numerical analysis
on multidimensional arrays.
\item[\pkg{SciPy}] \citep{oliphant2007} : Library implementing advanced
algorithms and functions in \python .
\item[\pkg{h5py}]: A \python\ wrapper for the HDF5 file format.
\item[\nifty] (4.1.0 or newer, \citet{steininger2017, nifty4}) :
A package for statistical field inference problems.
\item[\pkg{matplotlib}] \citep{hunter2007}, optional :
A package for producing figures.
Necessary for the \hmcif\ \pkg{tools} sub-package.
\item[\pkg{PyQt5}] optional : Necessary for the \pkg{tools} sub-package.
\end{description}
\nifty\ supports multi-processing in many calculations via \pkg{mpi4py}
\citep{mpi4py}, but \hmcif\ needs to restrict
each individual Markov chain to one core.
The user should therefore switch implicit multi-threading off by setting
the environment variables \code{MKL_NUM_THREADS} and
\code{OMP_NUM_THREADS} to 1 in a terminal:
\begin{CodeChunk}
\begin{CodeInput}
export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1
\end{CodeInput}
\end{CodeChunk}
\subsection{Installing via Git}
Installing \hmcif\ along with all required packages is possible via
\begin{CodeChunk}
\begin{CodeInput}
pip install git+https://gitlab.mpcdf.mpg.de/ift/HMCF
\end{CodeInput}
\end{CodeChunk}
\section{Symplectic Integrators}\label{sec:sympintegr}
One of the most challenging tasks when implementing HMC is to solve Hamilton's
equations (\ref{eq:hamiltondyn}).
Fortunately solving Hamilton's equations is very central to modern Physics.
Therefore literature on numerical integrators tackling Hamiltonian dynamics
is quite broad and advanced (see e.g. \cite{leimkuhler2004, sympintegr01,
sympintegr02}).
One very important class of such integrators are symplectic integrators which
keep the volume preserving property of Hamiltonian dynamics.
This section is supposed to give some intuition on how symplectic integrators
are derived and follows largely \cite{leimkuhler2004}.
We start by decomposing the Hamiltonian $H$ into two
pseudo-Hamiltonians $H_1$ and $H_2$:
\begin{equation*}
H(q, p) = K(p) + V(q), ~~H_1(q, p) = K(p) = \frac 12 p^\top M^{-1} p, ~~H_2(q, p) = V(q)
\end{equation*}
Let $\Phi_{\epsilon, H_i}$ be the map projecting a position $(q, p)$
forward in time by $\epsilon$ according to the Hamiltonian $H_i$.
The dynamics can be derived exactly for each of these:
\begin{align*}
\Phi_{\epsilon, H_1} (q, p) &= \twovector{q + \epsilon \nabla_p K(p)}{p}\\
\Phi_{\epsilon, H_2} (q, p) &= \twovector{q}{p - \epsilon \nabla_q V(q)}
\end{align*}
Since these maps depict Hamilton's equations exactly,
they preserve volume and are therefore symplectic.
Additionally since the composition of two symplectic maps is again symplectic,
the map
\begin{equation*}
\Psi_{\epsilon} = \Phi_{\epsilon, H_1} \circ \Phi_{\epsilon, H_2}
\end{equation*}
is symplectic, too.
It remains to show that $\Psi_{\epsilon}$ approximates the true Hamiltonian
dynamics, represented by the map $\Phi_{\epsilon}$ as introduced in (\ref{eq:phih}).
A look at the behavior of $\Psi_{\epsilon}$ reveals
\begin{align}\label{eq:symplecticone}
\Psi_{\epsilon}(q, p) &= \Phi_{\epsilon, H_1}(\Phi_{\epsilon, H_2}(q, p))\nonumber\\
&= \Phi_{\epsilon, H_1}(q, p - \epsilon \nabla_q V(q))\nonumber\\
&= \twovector{q}{p} + \epsilon \twovector{\nabla_{p^\prime} K(p^\prime)}{- \nabla_q V(q)}
\end{align}
where $p^\prime = p - \epsilon \nabla_q V(q)$.
On the other hand a Taylor expansion of $\Phi_{\epsilon}$ to first order in $\epsilon$ reads
\begin{equation*}
\Phi_{\epsilon} = \twovector{q}{p} +\epsilon \twovector{\nabla_p}{-\nabla_q} H(q, p) + \mathcal{O}({\epsilon}^2)
\end{equation*}
which shows immediately that $\Psi_{\epsilon}$ approximates the true
Hamiltonian dynamics up to first order in $\epsilon$.
But the volume-preserving property is not the only feature of Hamiltonian
dynamics that matters in the context of MCMC algorithms;
the time-reversal property, for example, is needed to ensure detailed balance.
We therefore impose an additional ``symmetry'' condition
on the symplectic integrator:
\begin{equation*}
\Psi_{\epsilon} = \Psi^\dagger_{\epsilon}
\end{equation*}
Where the adjoint map $\Psi^\dagger_{\epsilon}$ is defined as
\begin{equation*}
\Psi^\dagger_{\epsilon} = \left( \Psi_{- \epsilon} \right)^{-1} \quad .
\end{equation*}
In other words: For a symmetric symplectic integrator $\Psi_{\epsilon}$ the
inverse is just the same transformation propagating backwards in time by
$\epsilon$.
It is easy to see that if a numerical method as well as its adjoint map are known
a symmetric method is available as
\begin{equation}\label{eq:symmetriccomp}
\hat \Psi_{\epsilon} = \Psi_{\epsilon /2}^\dagger \circ \Psi_{\epsilon /2}
\end{equation}
The integrator presented in (\ref{eq:symplecticone}) for example is in general
not symmetric.
But the $\Phi_{\epsilon, H_i}$ are and therefore the adjoint of
(\ref{eq:symplecticone}) is given by
\begin{align*}
\Psi_\epsilon^\dagger &= \left( \Phi_{\epsilon, H_1} \circ \Phi_{\epsilon, H_2} \right)^\dagger\\
&= \Phi_{\epsilon, H_2}^\dagger \circ \Phi_{\epsilon, H_1}^\dagger\\
&= \Phi_{\epsilon, H_2} \circ \Phi_{\epsilon, H_1} \quad .
\end{align*}
Using (\ref{eq:symmetriccomp}) a symmetric integrator can be constructed
which reads:
\begin{align}\label{eq:leapfrogpsi}
\hat \Psi_\epsilon &= \Phi_{\epsilon /2, H_2} \circ \Phi_{\epsilon /2, H_1} \circ
\Phi_{\epsilon /2, H_1} \circ \Phi_{\epsilon /2, H_2}\nonumber\\
&= \Phi_{\epsilon /2, H_2} \circ \Phi_{\epsilon, H_1} \circ \Phi_{\epsilon /2, H_2}
\end{align}
where the second equality holds since the shift introduced by $\Phi_{\epsilon
/2, H_1}$ is only dependent on $p$ and only changes $q$.
This is the well known leapfrog integrator as can be seen when applying it to a
state $(q(t), p(t))$:
\begin{equation}\label{eq:leapfrog}
\begin{split}
p(t+\epsilon /2) &= p(t) - \frac \epsilon 2 \nabla_q V(q(t))\\
q(t+ \epsilon ) &= q(t) + \epsilon \nabla_p K(p(t+\epsilon /2))\\
p(t+ \epsilon ) &= p(t+\epsilon /2) - \frac\epsilon 2 \nabla_q V(q(t+\epsilon ))
\end{split}
\end{equation}
It can be shown that the leapfrog integrator approximates the true
Hamiltonian flow exactly up to second order in $\epsilon$ \citep{leimkuhler2004}.
The leapfrog integrator is the most used integrator for HMC sampling.
For evolving the system in time by $\tau$, the integration consists of $L$
successive applications of leapfrog steps such that $\tau = \epsilon L$.
The whole integration is then a second-order method \citep{leimkuhler2004}.
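To illustrate (\ref{eq:leapfrog}) outside of \hmcif , a minimal plain-\python\
implementation with unit mass matrix can be written as follows (our own
sketch; the harmonic-oscillator test at the end is a made-up example):
\begin{CodeChunk}
\begin{CodeInput}
def leapfrog(q, p, grad_V, epsilon, L, M_inv=1.0):
    # L leapfrog steps for H(q, p) = p^T M^{-1} p / 2 + V(q)
    p = p - 0.5 * epsilon * grad_V(q)   # initial half kick
    for _ in range(L - 1):
        q = q + epsilon * M_inv * p     # drift
        p = p - epsilon * grad_V(q)     # full kick
    q = q + epsilon * M_inv * p         # last drift
    p = p - 0.5 * epsilon * grad_V(q)   # final half kick
    return q, p

# Harmonic oscillator V(q) = q^2 / 2: the energy after many steps stays
# close to its initial value, as expected for a symplectic integrator.
H = lambda q, p: 0.5 * p * p + 0.5 * q * q
q1, p1 = leapfrog(1.0, 0.0, lambda q: q, epsilon=0.1, L=1000)
print(H(1.0, 0.0), H(q1, p1))
\end{CodeInput}
\end{CodeChunk}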
\subsection{Higher Order Symplectic Integrators}
In principle one is not bound to second-order integrators.
Higher order integrators can perform better on high-dimensional spaces
\citep{higherordersymp01, higherordersymp02}.
The splitting of the Hamiltonian into potential $V$ and kinetic term $K$ can be
generalized to
\begin{equation*}
H = \sum_{i=1}^k c_i K + \sum_{i=1}^k d_i V
\end{equation*}
where $c_i$ and $d_i$ are normalized weights.
This leads to a symplectic map of the form
\begin{equation*}
\hat \Psi_\epsilon = \Phi_{c_1\epsilon , K} \circ \Phi_{d_1\epsilon, V} \circ
\Phi_{c_2\epsilon, K} \circ \Phi_{d_2\epsilon, V} \circ \ldots
\end{equation*}
Although in principle the weights can be chosen arbitrarily, the main idea of
further splitting the Hamiltonian is to generate higher-order, in particular
$k$th-order, symplectic integrators.
This means that $c_i$ and $d_i$ should ideally be chosen such that
the final symplectic map $\hat \Psi_\epsilon$ equals the Taylor expansion of
$\Phi_{\epsilon, H}$ up to order $k$ in $\epsilon$ while maintaining symmetry.
This is a very non-trivial task and it is hopeless to get higher order integrators by brute force methods \citep{neri1987}.
\cite{yoshida1990} developed a method to derive higher order methods
by using known lower order expressions such as (\ref{eq:leapfrogpsi}).
Assuming that a symmetric integrator $\hat \Psi_{\epsilon}^{(2k)}$
of order $2k$ is known, a symmetric
integrator of order $2(k+1)$ is constructed via
\begin{equation*}
\hat \Psi_{\epsilon}^{(2(k+1))} = \hat \Psi_{\omega_0\epsilon}^{(2k)} \circ
\hat \Psi_{\omega_1\epsilon}^{(2k)} \circ \hat \Psi_{\omega_2\epsilon}^{(2k)}
\end{equation*}
where the $\omega_i$s are again normalized weights.
For the integrator to be symmetric, the condition $\omega_0 = \omega_2$ must hold.
Additionally, normalization gives
\begin{equation*}
\omega_1 = 1 - 2\omega_0 \quad .
\end{equation*}
A third condition on the weights is given by the requirement that the
symplectic integrator needs to be equal to the Taylor expansion of
$\Phi_{\epsilon, H}$ up to order $2(k+1)$.
\citet{yoshida1990} derives the condition
\begin{equation*}
\omega_1^{2k+1} + 2\omega_0^{2k+1} = 0
\end{equation*}
by making use of an alternative approach using Poisson brackets and the
Baker-Campbell-Hausdorff formula.
The analytical solution for the weights of a $2(k+1)$-order integrator
is given by
\begin{equation*}
\begin{split}
\omega_0 &= \frac{1}{2-2^{1/(2k+1)}}\\
\omega_1 &= - \frac{2^{1/(2k+1)}}{2-2^{1/(2k+1)}}\\
\end{split}
\end{equation*}
In principle this enables the creation of integrators of arbitrarily high order.
A drawback is that the number of intermediate steps grows exponentially, namely
as $3^{k-1}+1$ for a $2k$th-order integrator.
Although there are approaches (without analytical solutions for the weights)
that avoid this exponential scaling \citep{yoshida1990}, they would go beyond
the scope of this work.
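To make Yoshida's construction concrete, the following \python\ sketch computes
the weights and lifts a given symmetric second-order step, such as the leapfrog
step above, to fourth order (a sketch only; all names are illustrative):
\begin{CodeChunk}
\begin{CodeInput}
def yoshida_weights(k):
    # Weights lifting a symmetric integrator of order 2k to order 2(k+1).
    w0 = 1.0 / (2.0 - 2.0 ** (1.0 / (2 * k + 1)))
    w1 = 1.0 - 2.0 * w0
    return w0, w1

def fourth_order_step(step2, q, p, eps):
    # step2: any symmetric second-order step (q, p, eps) -> (q, p),
    # e.g. the leapfrog step with grad_V and m_inv bound via a lambda.
    w0, w1 = yoshida_weights(1)
    q, p = step2(q, p, w0 * eps)
    q, p = step2(q, p, w1 * eps)
    q, p = step2(q, p, w0 * eps)
    return q, p
\end{CodeInput}
\end{CodeChunk}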
\section{Summary}\label{sec:summary}
Efficient HMC sampling with the high number of degrees of freedom
of a numerically represented field is a very complicated task.
\hmcif\ takes care of most challenges arising while working on such problems.
It provides good default values and adjusting strategies for crucial
parameters such as the integration step size or the mass matrix.
Nonetheless the user is still able to customize many details
of how the sampler deals with a given problem.
These features include
\begin{itemize}
\item Simultaneous sampling of all free parameters and hyperparameters
\item Setting the order of the symplectic integrator
\item Defining the adjustment strategy for $\epsilon$ and related properties
\item Defining the convergence measure strategy and related properties
\item Defining how often and how well the mass matrix is reevaluated
\item Providing a clear, in-depth overview of relevant parameters of all chains
especially during burn-in phase
\end{itemize}
Apart from providing a diverse set of options to choose from,
the structure of the module also eases the creation of new, customized
options.
We explained the usage of \hmcif\ and demonstrated its performance using the
demonstrator coming with the \hmcif\ package.
\subsection{A Simple Hierarchical Model}
Consider a telescope observing the large-scale structure of the universe:
billions of galaxies producing photons that eventually reach the Earth.
This photon flux reaching the sensors of the telescope is denoted $x$.
The spatial properties of $x$ can be described as a mathematical field living on
a continuous space.
Our telescope measures this photon flux $x$ which means that it converts it into
an electronic signal based on how many photons reach each of its CCD sensors.
Errors in lenses and small differences between individual sensors alter the true
nature of the original $x$ field but can be coped with by calibration.
In our model, this transformation is represented by a linear \emph{response}
operator $R$ acting on $x$.
Note that the domain and the target domain of $R$ are different.
Whereas, in theory, the domain of $x$ is continuous, the output of the telescope
is discrete.
In addition to the response of the instrument, we assume a certain Gaussian
noise $n$ due to imperfect electronics, with known covariance $N$.
The measured data $d$ produced by the telescope is then described by the
measurement equation
\begin{equation}\label{eq:wienermodel}
d = R(x) + n\,.
\end{equation}
What we are interested in is the ``true'' signal, based on the data $d$ we got
from the telescope.
This information is reflected in the conditional probability $\Prob(x | d)$.
In terms of Bayesian statistics this is often referred to as the
\emph{posterior} and Bayes formula relates the posterior to our assumed model
and prior knowledge we may have on our signal $x$:
\begin{equation*}
\Prob(x | d) \propto \Prob(d | x) \Prob(x)
\end{equation*}
While the \emph{likelihood} $\Prob(d | x)$ is defined by our model in equation
(\ref{eq:wienermodel}) and the noise statistics, the \emph{prior} $\Prob(x)$
needs more consideration.
Observations dating back to \citet{hubble1934} suggest that for the large-scale
structure, at least to some extent, a log-normal prior is sufficient.
We therefore introduce another field $s = \log (x)$ with a multivariate Gaussian
distribution and covariance $S$ such that $x$ is log-normally distributed.
We further want to ensure that our field $s$ is somewhat smooth.
This can be enforced by imposing a power law $P(k)$ on the covariance $S$ in its
harmonic representation,
\begin{equation*}
S_{kk^\prime} = \delta_{k,k^\prime} P(k)\,.
\end{equation*}
This power law can be chosen such that high curvatures in the position space get
punished and are therefore improbable.
For illustration, we assume a power law
\begin{equation}\label{eq:mfpowlaw}
P(k) = \left( \frac{\lc}{1 + \lc k} \right)^4
\end{equation}
with the \emph{correlation length} $\lc$ essentially defining how strong
curvature is allowed to be.
If we do not know the correlation length, we can treat it as a free hyperparameter
making the problem a full Bayesian hierarchical model.
Since $\lc$ is strictly positive we assume another log-normal prior
\begin{equation*}
\Prob(\lc) \propto \frac 1 {\lc} \exp \left( - \frac {(\log \lc - \mu_c)^2}{2 \sigma^2_c}\right)
\end{equation*}
where $\mu_c$ and $\sigma^2_c$ are appropriate parameters for this hyperprior.
Our full posterior then turns into
\begin{equation}\label{eq:mfpost}
\Prob(s, \lc | d) \propto \Prob(d | s)\Prob(s| \lc) \Prob(\lc )
\end{equation}
with the likelihood
\begin{equation*}
\begin{split}
\Prob(d | s) &= \int \delta (d - Re^s - n) \Prob(n) dn\\
&\propto \exp \left( - \frac 12 \left( d - Re^s \right)^\top N^{-1} \left( d - Re^s \right)\right)
\end{split}
\end{equation*}
and the prior for $s$
\begin{equation*}
\Prob(s | \lc) = \left| 2 \pi S \right|^{-\frac 12} \exp \left( - \frac 12 s^\top S^{-1} s \right)
\end{equation*}
The dependence on $\lc$ is encoded in the covariance $S$.
Here, $|S|$ denotes the determinant of $S$, which evaluates to
\begin{equation*}
|S| = \det S = \prod_k P(k)
\end{equation*}
However, for HMC sampling we need a potential $\Psi(s, \lc)$, i.e., the negative
logarithm of the posterior in (\ref{eq:mfpost}).
For better readability, we divide the potential in a prior and a likelihood part
as well, such that
\begin{equation}\label{eq:mfpot}
\Psi(s, \lc) = \Psil (s, \lc) + \Psip(\lc)
\end{equation}
with
\begin{equation*}
\begin{split}
\Psil(s, \lc) &= - \log \Prob (d | s) - \log \Prob(s | \lc)\\
\Psip(\lc) &= - \log \Prob(\lc)
\end{split}
\end{equation*}
We omit terms constant in $\lc$ and $s$ since they are not important for HMC
sampling and arrive at
\begin{equation}\label{eq:mfenergies}
\begin{split}
\Psil(s, \lc) &= \frac 12 \left( d - Re^s \right)^\top N^{-1} \left( d - Re^s \right)
+ \frac 12 \left( s^\top S^{-1}s + \sum_k \log (P(k)) \right)\\
\Psip(\lc) &= \frac 12 \left( \frac 1{\sigma^2_c}(\log \lc - \mu_c)^2\right) + \log(\lc)
\end{split}
\end{equation}
Additionally, the gradient of the potential $\Psi (s, \lc)$ is needed for the
time evolution part during HMC sampling.
For the likelihood part the gradient boils down to the following expressions:
\begin{equation}\label{eq:mflikegrad}
\begin{split}
\partial_s \Psil &= S^{-1}s - e^s \odot \left( R^\top N^{-1}\left( d - Re^s \right) \right) \\
\partial_{\lc}\Psil &= \frac 12 s^\top \left(\partial_{\lc}S^{-1}\right)s + 2 \sum_k \left( \frac 1{\lc} - \frac{k}{1 + \lc k} \right)
\end{split}
\end{equation}
where $\odot$ denotes point-wise multiplication of vectors.
For deriving the exact expression for $\partial_{\lc}S^{-1}$, observe that
$\left[ S^{-1} \right]_{kk^\prime} = \delta_{kk^\prime} \left( P(k)
\right)^{-1}$ and therefore
\begin{equation*}
\left[ \partial_{\lc}S^{-1}\right]_{kk^\prime} = -4 \frac{(1+\lc k)^3}{\lc^5} \delta_{kk^\prime}
\end{equation*}
For the prior part of the potential the gradient can be written as
\begin{equation}\label{eg:mfpriorgrad}
\begin{split}
\partial_s \Psip &= 0 \\
\partial_{\lc}\Psip &= \frac 1{\lc}\left(\frac 1{\sigma_c^2} ( \log \lc - \mu_c ) + 1 \right)\,.
\end{split}
\end{equation}
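The following \python\ sketch implements these expressions for a simplified
one-dimensional setting: the harmonic transform is omitted, i.e., $S$ is taken
to be diagonal in the basis in which $s$ is represented, and $R$ and $N$ are
diagonal arrays.
This is an illustration of the formulas above, not the implementation shipped
with \hmcif :
\begin{CodeChunk}
\begin{CodeInput}
import numpy as np

def P(k, l_c):
    # Power law (eq:mfpowlaw).
    return (l_c / (1.0 + l_c * k)) ** 4

def psi(s, l_c, d, R, N_diag, k, mu_c, sig2_c):
    # Potential (eq:mfpot), dropping constants in s and l_c.
    r = d - R * np.exp(s)
    lik = 0.5 * np.sum(r ** 2 / N_diag)
    pri_s = 0.5 * (np.sum(s ** 2 / P(k, l_c))
                   + np.sum(np.log(P(k, l_c))))
    pri_lc = 0.5 * (np.log(l_c) - mu_c) ** 2 / sig2_c + np.log(l_c)
    return lik + pri_s + pri_lc

def grad_s(s, l_c, d, R, N_diag, k):
    # Signal part of (eq:mflikegrad).
    r = d - R * np.exp(s)
    return s / P(k, l_c) - np.exp(s) * (R * r / N_diag)

def grad_l_c(s, l_c, k, mu_c, sig2_c):
    # Correlation-length parts of (eq:mflikegrad) and (eq:mfpriorgrad).
    dSinv = -4.0 * (1.0 + l_c * k) ** 3 / l_c ** 5
    lik = (0.5 * np.sum(s ** 2 * dSinv)
           + 2.0 * np.sum(1.0 / l_c - k / (1.0 + l_c * k)))
    pri = ((np.log(l_c) - mu_c) / sig2_c + 1.0) / l_c
    return lik + pri
\end{CodeInput}
\end{CodeChunk}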
\subsection[Implementation within NIFTy]{Implementation in \nifty }
\label{sec:mfimpl}
In this section we will take a look at how such a model can be implemented in
\nifty\ in general.
For more details on \nifty\ see \citet{nifty4}.
In \nifty\ there are a variety of classes representing certain aspects of a
typical inference problem, among which the most important ones are:
\begin{description}
\item [\code{Domain} :] A base class representing the underlying space.
For example a regular grid can be defined as a \code{RGSpace} which inherits
from \code{Domain}.
\item [\code{Field} / \code{MultiField} :] A class representing fields such as
$x$, $n$ or $d$.
It carries information about the underlying \code{Domain} as well as the
field's value on this domain.
The \code{Field} supports every arithmetic operation.
Other operations such as trigonometric or exponential functions can be applied
point-wise using e.g., \code{ift.exp(x)}.
The notation for these functions is the same as the one used by \np .
\code{MultiField}s are essentially dictionaries of \code{Field}s which also
support all operations above.
This can be used to represent all free parameters in a hierarchical model in
just one object.
\item [\code{LinearOperator} :] An abstract base class for explicitly or
implicitly defined linear operators such as the response $R$.
The \code{LinearOperator} class carries information about the operator's
\code{Domain} as well as its target domain.
The operator can be applied to a \code{Field} \code{x} in various ways by
calling one of the methods \code{times(x)}, \code{inverse_times(x)},
\code{adjoint_times(x)}, or \code{adjoint_inverse_times(x)}, although not
every one of these methods is available for every linear operator.
\item [\eon\ :] An abstract base class representing the negative
logarithm of a distribution $\Prob(x)$.
The \eon\ class is only defined at a certain value of a \code{Field} or
\code{MultiField} but the same energy for a different \code{Field y} on the
same \code{Domain} can be obtained by calling the \code{at(y)} method.
Additionally, the \eon\ class defines a gradient at the position as well as
the curvature.
This class is the most important one for \hmcif\ since it defines the
potential $\Psi(s, \lc)$ and thereby the whole model.
\end{description}
The model introduced in the previous section can be implemented as a \nifty\
\eon , but since this paper is about \hmcif\ we will not go into detail
here.
The curious reader can, however, look into the demonstration script and will
find the implementations of the likelihood and the prior energy from
(\ref{eq:mfenergies}) along with their respective gradients in
(\ref{eq:mflikegrad}) and (\ref{eg:mfpriorgrad}) in \code{/demos/energies.py}
in the package's repository.
\subsubsection[Implementation of the Hierarchical Model in NIFTy]
{Implementation of the Hierarchical Model in \nifty }
This and the following section summarize the \code{demos/multi_field_demo.py}
script.
The reader may want to have a look at the script, to have an overview.
We first import \nifty\ and \hmcif\ among other packages via
\begin{CodeChunk}
\begin{CodeInput}
import nifty4 as ift
import hmcf
\end{CodeInput}
\end{CodeChunk}
A mock dataset for our algorithm to operate on is generated using the
\code{get_hierarchical_data} function which was written for this purpose only.
\begin{CodeChunk}
\begin{CodeInput}
d, sl, R, N = get_hierarchical_data()
\end{CodeInput}
\end{CodeChunk}
It returns \nifty\ objects for the data field $d$, the free parameters $s$ and
$\lc$ (as the \code{MultiField sl}), the response $R$ and the noise covariance
$N$.
\code{d} is a \code{Field} containing the data, \code{sl} is a \code{MultiField}
with entries \code{'signal'} for $s$ and \code{'l_c'} for $\lc$.
The mock dataset is generated by first sampling a signal
field $s$ with the power spectrum defined in (\ref{eq:mfpowlaw}) and a
pre-defined value for $\lc$.
Together with a noise drawn from a normal distribution with covariance $N$ the
measurement equation (\ref{eq:wienermodel}) is applied and the mock data set is
available.
The signal space as well as the data space consist of $100\times 100$ pixels
which means that, together with $\lc$, our problem has $10,001$ free parameters.
In figure \ref{fig:mfsource} this mock data set generated with random seed $41$
can be observed.
For this simple example the instrument $R$ transforms the photon flux perfectly
to an electrical signal, except for a square in the bottom right region of the
image where the instrument is broken and just returns zeros.
\begin{figure}[h]
\centering
\includegraphics[clip, trim=4.5cm 0cm 3.5cm 0cm]{multi_field/source}
\caption{Output of the \code{get\_hierarchical\_data} function.
The first image on the left displays the original photon flux $x$ before it
hits the detector.
For this simple example the instrument transforms the photon flux perfectly
to an electrical signal, except for a square in the bottom right region of the
image where it is broken.
The instrument returns just zeros in that region.
The resulting image is displayed in the middle figure.
Finally in the right figure, the data field $d$ is displayed where Gaussian
random noise was added.
}
\label{fig:mfsource}
\end{figure}
For our sampling process we wrote an \niftye\ class called
\code{SimpleHierarchicalEnergy} which implements the potential $\Psi(s, \lc)$
described in (\ref{eq:mfpot}).
The constructor needs the following arguments:
\begin{description}
\item[\code{position} :]
\nifty\ \code{MultiField}\\
The position $(s, \lc)$ where the energy $\Psi(s, \lc)$ is evaluated.
The \code{MultiField} has two entries: \code{'signal'} and \code{'l_c'}.
The \niftyf\ in \code{'signal'} is the harmonic representation of the $s$
parameter.
\item[\code{d} :]
\niftyf \\
The data vector.
\item[\code{R} :]
\niftyo \\
The instrument's response.
\item[\code{N} :]
\niftyo \\
The covariance of the noise.
\item[\code{l\_c\_mu} :]
\float \\
Hyperparameter for the log-normal distribution of $\lc$.
\item[\code{l\_c\_sig2} :]
\float \\
Hyperparameter for the log-normal distribution of $\lc$.
\item[\code{HT} :]
\niftyo \\
Harmonic transformation operator, capable of transforming the
\code{position['signal']} field from the harmonic representation to the
position space.
\item[\code{inverter} :]
\nifty\ \code{Minimizer} \\
Numerical method for inverting \niftyo s.
This is necessary to be able to sample from the curvature of the energy
which is required if the mass matrix is supposed to be reevaluated.
\end{description}
Using a \code{MultiField} as position enables us to sample the signal and
the hyperparameter $\lc$ at the same time.
\code{d}, \code{R} and \code{N} are already given by the
\code{get\_hierarchical\_data} function.
Values for \code{l\_c\_mu} and \code{l\_c\_sig2} are chosen such that values for
$\lc$ in the range of $0.05$ to $10.$ are probable (the true value used during
data generation is $0.6$).
A harmonic transformation operator is easily defined via
\begin{CodeChunk}
\begin{CodeInput}
s_space = sl['signal'].domain
HT = ift.HarmonicTransformOperator(s_space)
\end{CodeInput}
\end{CodeChunk}
and as the inverter needed for the mass reevaluation we use a conjugate gradient
implementation in \nifty :
\begin{CodeChunk}
\begin{CodeInput}
ICI = ift.GradientNormController(iteration_limit=2000,
tol_abs_gradnorm=1e-3)
inverter = ift.ConjugateGradient(controller=ICI)
\end{CodeInput}
\end{CodeChunk}
Finally, for the initial definition of the energy we use the \code{sl} as
position since it has the correct \code{MultiField} structure.
An instance of the energy for our model is then created by calling the
constructor
\begin{CodeChunk}
\begin{CodeInput}
energy = SimpleHierarchicalEnergy(sl, d, R, N, HT, -0.3, 2.,
inverter=inverter)
\end{CodeInput}
\end{CodeChunk}
\subsection[Sampling with \purehmcif ]{Sampling with \hmcif}
\label{sec:hmcifsampling}
Up to this point we only introduced and used \nifty\ objects.
But now that an \eon\ is properly defined we can start using \hmcif .
First, we create an instance of the \hmcsc\ class.
This object represents the HMC sampler and the constructor has only one required
argument:
The potential $\Psi$.
However, for this example we also set the optional \code{num\_processes}
argument which states the number of chains or CPU cores we use during sampling
and the \code{sample\_transform} argument which is necessary since we sample $s$
but are mainly interested in the photon flux $x = \exp(s)$.
\begin{CodeChunk}
\begin{CodeInput}
def sample_trafo(s):
val = dict(s)
val['signal'] = ift.exp(HT(val['signal']))
return ift.MultiField(val=val)
sampler = hmcf.HMCSampler(energy, num_processes=6,
sample_transform=sample_trafo)
\end{CodeInput}
\end{CodeChunk}
The \code{sample_transform} argument requires a function and represents
essentially $f$ in (\ref{eq:approxexpval}).
It is important that the \code{sample_transform} function takes \niftyf s or
\code{MultiField}s of the same kind as the position argument of the \eon\
instance which is passed to the constructor of the \hmcsc\ class (in particular
they need to live on the same domain).
The output of the function can be any kind of \code{Field} or \code{MultiField}
regardless of what was put in.
Before we start sampling we need to define initial positions for the Markov
chains.
In principle this could be completely random, but in
high-dimensional cases we need to start somewhere close to non-vanishing parts
of the target distribution, because otherwise the gradients are so large that
numerical integration during time evolution will fail in any case.
In this example we know the true signal \code{sl} and will use small deviations
from that, but under real circumstances one would need to first use an
approximating algorithm to get close to the mean or maximum-a-posteriori
solution of the problem and start from there.
This can be done using algorithms already implemented in \nifty .
Afterwards, we call the \code{run} method of our sampler which has again only one
required argument: The number of samples per Markov chain drawn after the
burn-in phase has finished.
Additionally, we set the optional argument \code{x_initial} with a list of
initial positions (the length of this list does not have to be the same as the
number of Markov chains).
\begin{CodeChunk}
\begin{CodeInput}
x_initial = [sl*c for c in [.5, .7, 1.5, 2.]]
sampler.run(500, x_initial=x_initial)
\end{CodeInput}
\end{CodeChunk}
This will initiate a sampling process where the sampler starts in a burn-in
phase during which the integration step size $\epsilon$ from (\ref{eq:leapfrog})
is adjusted to meet the default target acceptance rate of $0.8$.
Additionally, the mass matrix is reevaluated once in the beginning and then the
sampler waits until the individual Markov chains have converged with respect to
a measure based on diagnostics first introduced by \citet{gelman1992}.
All these things can be adjusted to the user's needs and specific problem, and a
detailed description of all options can be found in appendix \ref{sec:soft}.
The whole sampling process may take up to 10 minutes depending on the machine
the script is executed on.
If the sampling process seems to freeze in the beginning, this is probably due to
the mass reevaluation, which can take some time.
A much shorter execution time is possible by setting the \code{n\_pixels}
argument of the \code{get\_hierarchical\_data} function to \code{10}.
\subsection{Retrieving Results after Sampling}\label{sec:hmcifresults}
After some time the sampler is finished and one can access the mean value (of
the transformed samples) as a \nifty\ \code{MultiField} via the \code{mean}
attribute of the \code{sampler}.
To get the photon flux values as a \np\ array one can write
\begin{CodeChunk}
\begin{CodeInput}
mean_val = sampler.mean['signal'].to_global_data()
\end{CodeInput}
\end{CodeChunk}
The \code{'signal'} statement selects the \niftyf\ representing the signal $s$
and the \code{to_global_data()} method returns the \code{Field}'s value on the
regular grid as a \np\ array.
To get the inferred mean value of the correlation length $\lc$ as a \float\ the
following statement does the trick:
\begin{CodeChunk}
\begin{CodeInput}
l_c_val = sampler.mean['l_c'].to_global_data()[0]
print(l_c_val)
\end{CodeInput}
\begin{CodeOutput}
0.637494646985
\end{CodeOutput}
\end{CodeChunk}
The same thing is possible by loading the mean value from the respective \hdf\
file:
\begin{CodeChunk}
\begin{CodeInput}
mean_val = hmcf.load_mean(path/to/file)['signal']
l_c_val = hmcf.load_mean(path/to/file)['l_c'][0]
\end{CodeInput}
\end{CodeChunk}
Obviously, this is possible even if the \code{HMCSampler} instance has already
been deleted (for example after a reboot of your system).
The variance of the samples can be obtained in the same way using the \code{var}
attribute of the \hmcsc\ class or calling the \code{load_var} function.
The results of the sampling process are displayed in figure \ref{fig:mfres}.
The most prominent difference between the original flux $x$ and the HMC mean
value is where the instrument was broken in the bottom right region.
In particular, the standard deviation of the samples drawn from the full
posterior distribution is remarkably high there as one would expect since
information is missing.
\begin{figure}[ht]
\centering
\includegraphics[clip, trim=2cm 0cm 1.5cm 0cm]{multi_field/result}
\caption{Results of the \code{multi\_field\_demo.py} script.
The upper row shows the true photon flux $x$ along with the reconstructed
picture based on the samples generated by the HMC algorithm.
The reconstructed picture gets very blurry where the instrument was broken in
the bottom right region and information got lost.
The second row displays the absolute difference between the original flux and
the reconstructed picture, as well as the standard deviation of the HMC
samples.
Again, the region in which the instrument is broken is very prominent in both
cases.}
\label{fig:mfres}
\end{figure}
The samples themselves can be loaded by either using the \code{samples} attribute of
the \hmcsc\ class or calling the \code{load} function of \hmcif .
As an example the shape of the returned \np\ array can be displayed with
\begin{CodeChunk}
\begin{CodeInput}
print(hmcf.load(path/to/file.h5)['signal'].shape)
\end{CodeInput}
\begin{CodeOutput}
(6, 500, 100, 100)
\end{CodeOutput}
\end{CodeChunk}
where the first element reflects the number of independent Markov chains, the
second element is the number of samples each chain generated, and the third and
fourth elements reflect the regular grid on which the whole process was carried
out.
To have a better overview of these samples we can use the
\code{show\_trajectories} function of \hmcif\ which displays the chain
trajectories through parameter space by pixel.
It is an interactive GUI consisting of two graphs as depicted in figure
\ref{fig:gui01}.
The left graph shows the inferred mean value and the right graph trajectories of
each chain for one selected pixel.
The pixel can either be set in the top row by defining the coordinates and then
clicking on ``show'' or by just clicking on some pixel in the left graph.
The function itself is located in the \hmcif\ \code{tools} sub-package and can
be called by executing
\begin{CodeChunk}
\begin{CodeInput}
from hmcf.tools import show_trajectories
show_trajectories(path/to/file, field_name='signal')
\end{CodeInput}
\end{CodeChunk}
where the \code{field\_name} statement defines which element of the
\code{MultiField} is displayed.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{gui02}
\caption{The evaluation GUI for displaying the Markov chain trajectories
of selected pixels. On the left the mean value for $s$ in our example is
displayed.
In the right graph the trajectories of the six Markov chains at the pixel
coordinates $(x=12, y=52)$ (as stated in the top row) are shown.}
\label{fig:gui01}
\end{figure}
\section{Dynamic Step-Size Adjustment}\label{sec:epsadj}
Hamiltonian Monte Carlo needs much more computation time per sample proposal than other
MCMC approaches.
This is a disadvantage that is compensated by much higher acceptance rates and
lower autocorrelation between samples.
Because of the energy conserving property of Hamiltonian dynamics,
the acceptance rate is only dependent on the performance of the numerical integrator.
The numerical integrator has one free parameter: the step size $\epsilon$.
In principle, the bigger it is, the bigger the integration error $\Delta E$,
and thereby the smaller the acceptance rate.
On the other hand, a too small value for $\epsilon$ results in a sampler which does not
explore the typical set on bearable timescales.
In practice an acceptance rate of 0.7 to 0.9 \citep{accrange01, accrange02} is
said to be ideal for an HMC sampler to make up for the higher computation time.
To this end we adjust $\epsilon$ during the burn-in phase such that a
user-defined acceptance rate is matched.
The first step in constructing a good strategy is to derive a relation between
acceptance rate $r_A$ and the integration error $\Delta E$.
We developed the following approximation.
Given the acceptance probability $\pacc$ from equation (\ref{eq:accprobhmc}),
the \emph{expected} acceptance rate is given by
\begin{equation}\label{eq:accrate}
r_A(\epsilon) = \left\langle \pacc (\Delta E) \right\rangle_{P(\Delta E | \epsilon)}
= \left\langle \min \left( 1, e^{-\Delta E} \right) \right\rangle_{P(\Delta E | \epsilon)}
\end{equation}
where $P(\Delta E | \epsilon)$ is the probability distribution for $\Delta E$
conditioned on $\epsilon$.
To tackle the $\min$ function properly it is assumed that the probability
distribution for the sign of $\Delta E$ is not dependent on the absolute value
of $\Delta E$:
\begin{equation*}
P(\Delta E) \approx P(|\Delta E |) P(\sgn(\Delta E))
\end{equation*}
This reflects the plausible situation that errors are symmetrically probable displacements
of trajectories in regions of the phase space that are dominated
by a potential gradient and not by a minimum.
In this case we can further assume that
\begin{equation*}
P(\sgn(\Delta E) = 1) \approx P(\sgn(\Delta E) = - 1) \approx 0.5 \,.
\end{equation*}
With this, equation (\ref{eq:accrate}) can be written as
\begin{equation*}
r_A(\epsilon) = \frac 12 \left( 1 +
\left\langle e^{-|\Delta E|} \right\rangle_{P(|\Delta E| | \epsilon)}\right)
\approx \frac 12 \left( 2 - \left\langle |\Delta E| \right\rangle \right)
+ \order(\langle |\Delta E|^2 \rangle)\,,
\end{equation*}
where the exponential function was expanded to first order.
In practice a certain value for $r_A$ like $0.8$ is recommended.
This means for $\Delta E$
\begin{equation}\label{eq:derelation}
\langle |\Delta E| \rangle \overset != 2 (1 - r_A(\epsilon)) =: \tde
\end{equation}
This is the relation that lies at the core of most epsilon-adjusting
strategies available in \hmcif .
Note that even if a step is accepted for sure because $\Delta E$ is negative,
adjusting $\epsilon$ is still possible since only the absolute value of $\Delta E$ is needed.
This is of great use in cases where nearly every step during burn-in
produces a negative $\Delta E$
(This happens sometimes if the Markov chains start far off the mean value).
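A minimal \python\ sketch of the core of such an adjustment, based solely on
relation (\ref{eq:derelation}), could look as follows (the multiplicative
update factor is an illustrative choice, not the exact rule used by \hmcif ):
\begin{CodeChunk}
\begin{CodeInput}
def target_dE(target_acceptance_rate):
    # Relation (eq:derelation).
    return 2.0 * (1.0 - target_acceptance_rate)

def adjust_epsilon(eps, mean_abs_dE, target_acceptance_rate, factor=1.05):
    # Shrink eps if recent integration errors overshoot the target,
    # grow it otherwise.
    if mean_abs_dE > target_dE(target_acceptance_rate):
        return eps / factor
    return eps * factor
\end{CodeInput}
\end{CodeChunk}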
For more on how $\epsilon$ is adjusted exactly see section \ref{sec:eps}.
\section{Hamiltonian Monte Carlo Sampling}\label{sec:theory}
The goal of most statistical problems is to find expectation values
$\left\langle f(x) \right\rangle_{\Prob(x)}$ of a function $f(x)$
given a distribution $\Prob (x)$, with
\begin{equation*}
\left\langle f(x) \right\rangle_{\Prob(x)} = \int_\paramspace f(x) \Prob(x) dx \,,
\end{equation*}
where $\paramspace$ is the space of all possible values for $x$.
However, especially in high dimensional cases the integral may become
intractable.
One approach to circumvent this problem is to use a form of Monte Carlo
integration.
Samples $(x_i)$ which are distributed like $\Prob(x)$ are used
to approximate the expectation value:
\begin{equation}\label{eq:approxexpval}
\left\langle f(x) \right\rangle_{\Prob(x)} \approx \frac 1N \sum_{i=1}^N f(x_i)
\end{equation}
The law of large numbers ensures that this approximation converges to the true
expectation value in non-pathological situations.
Then, the problem is reduced to finding a strategy to generate the samples
$(x_i)$.
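As a trivial illustration, $\left\langle x^2 \right\rangle = 1$ for a standard
normal distribution can be approximated from samples:
\begin{CodeChunk}
\begin{CodeInput}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100000)  # samples from P(x) = N(0, 1)
print(np.mean(x ** 2))       # approximates <x^2> = 1
\end{CodeInput}
\end{CodeChunk}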
While there are straightforward solutions in some cases, such as normal
distributions, often more advanced algorithms are needed.
A large subset of such algorithms are Markov chain Monte Carlo (MCMC) methods
which have in common that a Markov process is constructed which ultimately
converges to the wanted \emph{target distribution} $\Prob(x)$.
HMC sampling is an MCMC method especially applicable to once-differentiable
probability distributions on high-dimensional spaces.
\subsection{The Algorithm}\label{sec:hmcalgo}
The Hamiltonian Monte Carlo (HMC) approach (first introduced by \citet{duane1987};
good summaries: \citet{neal2011, betancourt2017})
uses a variation of the Metropolis-Hastings algorithm \citep{hastings1970,
metropolis1953} with less random walk behavior.
The idea is to describe the Markov process as a physical Hamiltonian time
evolution, exploiting properties of this system's dynamics that are
advantageous for MCMC methods.
The samples $(x_i)$ can then be thought of as snapshots of the trajectory of a
particle moving through an energy landscape $\Psi(x)$.
This \emph{energy} is defined by the target distribution $\Prob(x)$ via
\begin{equation*}
\Psi(x) := - \log(\Prob(x))
\end{equation*}
This convention originates from statistical physics, where the probability of a
system being in state $i$ with energy $E(i)$ is
\begin{equation*}
\Prob(i) \propto \exp(-\beta E(i))
\end{equation*}
where $\beta$ is a temperature-dependent scaling parameter.
Additionally, a new normal distributed random variable $p \in \paramspace$ with
covariance $M$ called \emph{momentum} is introduced.
The negative logarithm of the joint distribution $\Prob (x, p)$ then looks like
a typical physical Hamiltonian:
\begin{equation*}
H(x, p) = \frac 12 p^\top M^{-1} p + \Psi(x) + \text{const}
\end{equation*}
The central idea of HMC sampling is to evolve this system in time
according to Hamilton's equations of motion
\begin{equation}\label{eq:hamiltondyn}
\begin{split}
\dot x^k &= ~~ \frac{\partial H}{\partial p^k} ~ = \left[ M^{-1}p\right]^k\\
\dot p^k &= - \frac{\partial H}{\partial x^k} ~ = - \frac{\partial \Psi}{\partial x^k}
\end{split}
\end{equation}
for $k = 1, \ldots, \text{dim}(\paramspace )$.
After some time $T$ the position we arrived at is considered as a new
Metropolis-Hastings proposal $(x(T), p(T)) =: (x^\prime, p^\prime)$.
This approach is possible since Hamiltonian dynamics have some convenient
properties such as volume preservation.
Also, in theory, the new sample is exactly as probable as the starting point
since the process is energy conserving.
In practice however, the discretization of the integration in time leads to
errors in the energy conservation, which is why an accept-reject step is
necessary where the proposal is accepted with probability
\begin{equation}\label{eq:accprobhmc}
\rho_A = \min \left( 1, \exp (-\Delta E) \right) \,,
\end{equation}
where $\Delta E = H(x^\prime, p^\prime) - H(x, p)$.
The whole algorithm then looks like this:
\begin{algorithm}[H]
Set initial $x_{0}$\\
\For{$i=1$ \KwTo \#samples}{
$\left.
\begin{tabular}{lll}
Generate momentum sample $p \sim \mathcal N (0, M)$\\
Evolve system for time $T$\\
New position: $(x^\prime, p^\prime) = (x(T), p(T))$\\
\end{tabular}
\right\}$ MH proposal\\
\vspace{0.5em}
Generate sample $r \sim \mathcal U ([0,1])$\\
\eIf{$r \leq \rho_A$}{
Set $x_{i} = x^\prime$
}{
Set $x_{i} = x_{i-1}$
}
}
\end{algorithm}
The resampling of $p$ in each iteration ensures that the whole parameter space
$\paramspace$ can be reached.
At this point we omit the full proof that the samples $(x_i)$ are then
distributed like $\Prob (x)$. See \citet{neal2011} for details.
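For illustration, a bare-bones \python\ version of this algorithm with a fixed
diagonal mass matrix could read as follows (a sketch under these simplifying
assumptions, not the \hmcif\ implementation; the time evolution uses the
leapfrog scheme discussed below):
\begin{CodeChunk}
\begin{CodeInput}
import numpy as np

def hmc(x0, psi, grad_psi, eps, L, num_samples, m_diag, seed=0):
    rng = np.random.default_rng(seed)
    H = lambda x, p: 0.5 * np.sum(p ** 2 / m_diag) + psi(x)
    x, samples = np.asarray(x0, dtype=float), []
    for _ in range(num_samples):
        p = rng.normal(size=x.shape) * np.sqrt(m_diag)  # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * eps * grad_psi(x_new)  # leapfrog: first half kick
        for i in range(L):
            x_new += eps * p_new / m_diag     # drift
            kick = eps if i < L - 1 else 0.5 * eps
            p_new -= kick * grad_psi(x_new)   # (half) kick
        dE = H(x_new, p_new) - H(x, p)
        if dE < 0 or rng.uniform() <= np.exp(-dE):  # accept-reject step
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# E.g., sampling a standard normal, where psi(x) = x^T x / 2:
chain = hmc(np.zeros(2), lambda x: 0.5 * np.sum(x ** 2), lambda x: x,
            eps=0.1, L=20, num_samples=100, m_diag=np.ones(2))
\end{CodeInput}
\end{CodeChunk}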
\subsection{Further Technicalities}
For HMC to work, the integrator for the time evolution of the system needs to be
symplectic.
Most of the time the \emph{leapfrog} integrator is used, which is a second order
symplectic integrator.
Symplectic integrators of higher orders based on work presented in
\citet{higherorder} are possible as well in \hmcif .
They have proven to be advantageous for HMC sampling in high-dimensional
non-linear cases \citep{blanes2014}.
One step in time of length $\epsilon$ with the leapfrog integrator is calculated
via
\begin{equation}\label{eq:leapfrog}
\begin{split}
p\left(t+\frac \epsilon 2\right) &= p(t) - \frac \epsilon 2 \left. \frac{\partial \Psi}{\partial x} \right|_{x(t)} \\
x\left(t+ \epsilon \right) &= x(t) + \epsilon M^{-1} p\left(t+\frac \epsilon 2\right) \\
p\left(t+ \epsilon \right) &= p\left(t + \frac \epsilon 2\right) - \left. \frac\epsilon 2 \frac{\partial \Psi}{\partial x} \right|_{x(t+\epsilon)}\,.
\end{split}
\end{equation}
This single leapfrog step is applied $L$ times to generate a new sample such
that $T = \epsilon L$.
The integration step size $\epsilon$ determines the overall acceptance rate of
the sampling process.
The advantages of HMC are only present if the acceptance rate is in the range of
0.6 to 0.9 \citep{accrange01, accrange02}.
To relate $\epsilon$ to the acceptance rate we developed an approximation
further discussed in appendix \ref{sec:epsadj}.
Finally, for an HMC sampling process to work properly it is crucial to find a
good mass matrix $M$, which serves as covariance of the momentum $p$.
There are several approaches, but one very popular strategy is
to use samples from the chain itself and base the mass matrix on the variance of
these samples:
\begin{equation}\label{eq:masseval}
M^{-1} = \frac 1N \sum_{i=1}^N \left( (x_i-\mu)(x_i-\mu)^\top \right)
\end{equation}
with $\mu$ being the mean value.
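A diagonal version of this estimate is straightforward (a sketch with
illustrative names):
\begin{CodeChunk}
\begin{CodeInput}
import numpy as np

def diagonal_mass_from_samples(samples):
    # samples: array of shape (num_samples, dim); diagonal
    # restriction of (eq:masseval).
    m_inv = np.var(samples, axis=0)  # per-parameter sample variance
    return 1.0 / m_inv               # diagonal entries of M
\end{CodeInput}
\end{CodeChunk}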
However, in specific cases other approaches might be better,
for example as documented in \citet{taylor2008}.
We found that using samples from the chain itself is infeasible in
high-dimensional cases ($\text{dim}(\paramspace)>10000$) because a bad initial
mass matrix leads to extremely small integration step sizes and highly
correlated samples.
Thus, in \hmcif\ the samples are generated by drawing samples from the curvature
of $\Psi(x)$ at the current position of the chain.
In other words: For the purpose of estimating the mass matrix, we approximate
the target distribution $\Prob (x)$ with a normal distribution and then draw
samples from this distribution.
\section{Introduction}\label{sec:intro}
\subsection{Purpose}
\hmcif\ implements a Hamiltonian Monte Carlo (HMC) sampler \citep{duane1987}
for the \nifty\ (``Numerical Information Field Theory'') framework
\citep{selig2013, steininger2017, nifty4}.
It is available for \pkg{Python3} on Unix-like systems.
The main purpose of \hmcif\ is to create samples which are distributed
according to arbitrary, once-differentiable probability distributions.
These samples can, for example, be used to approximate expectation values in
cases where brute-force integration is infeasible.
\nifty\ is a \python\ library developed for computational work with
information field theory (IFT, \citet{ensslin2009, ensslin2010}).
IFT extends classical probability theory onto functional spaces.
\nifty\ is interesting for spatially correlated inference
problems such as image reconstruction \citep{d3po, resolve, d4po} or work on
geospatial datasets \citep{pysesa}.
A main advantage is the resolution-independent calculation of statistical
estimates.
With \hmcif , Bayesian models already implemented in \nifty\ can easily
be reused for an HMC sampling approach.
This can help to estimate the impact of approximations present in other
approaches, or enable tackling entirely new problems.
\subsection{Features}
At the heart of \hmcif\ lies the \hmcsc\ class which constructs an HMC sampler
based on a predefined \nifty\ \eon\ class describing a probability
distribution $\Prob(x)$ as an \emph{energy} $\Psi(x) = - \log (\Prob(x))$.
Samples drawn from this distribution are saved to the disk as \hdf\
archives \citep{hdf}.
To ensure a successful sampling process \hmcif\ implements a variety of
additional features.
The sampler calculates a convergence measure to determine when the burn-in
phase has finished.
Several predefined strategies for how exactly to calculate this measure are
available to choose from.
It is critical for an HMC sampler to use a proper \emph{mass matrix}, which is
why \hmcif\ can recalculate it several times during burn-in, achieving
performance better by orders of magnitude compared to using an identity as
mass matrix.
Another important sampling parameter, the \emph{integration step size} of the
symplectic integrator, is also adjusted such that a predefined acceptance rate
is matched.
Again, the exact adjusting strategy can be chosen from a predefined set of
options.
Although for most purposes a second order leapfrog integrator is sufficient,
higher order integrators are available as
well.
Furthermore, \hmcif\ uses multi-processing in that individual Markov chains use
separate cores.
\hmcif\ is optimized to ease the work with HMC sampling.
All of the above can be done in only a few lines of code
if a well-defined \nifty\ \eon\ class is available.
\subsection{Comparison to other Packages}
There are many software packages for HMC sampling available in many different languages.
But unlike \hmcif , most packages are static in that they generally use the
identity as the mass matrix or need a mass matrix specified in the beginning.
Since especially in high-dimensional cases a good mass matrix estimation
is crucial for a successful sampling process we will concentrate
on packages which estimate the mass matrix.
A very popular cross-platform package for HMC is \stan\ \citep{stan}.
\stan\ provides interfaces for \proglang{R}, \python , \proglang{shell},
\proglang{MATLAB}, \proglang{Julia}, \proglang{Stata}, \proglang{Mathematica}
and \proglang{Scala}.
Its biggest advantage is the \proglang{C++} back-end which makes it by far
the fastest sampler if the same parameters such as integration step size
and mass matrix are chosen.
Another notable advantage over \hmcif\ is an implementation of the No-U-Turn
sampler (NUTS, \citet{hoffman2014}),
which can be seen as an extension to the standard HMC approach.
In \stan\ the mass matrix is set to the identity initially, but is recalculated
from the generated samples during the burn-in phase.
The mass matrix can but does not have to be restricted to a diagonal matrix.
\hmcif\ differs in that the user is able to define their own
mass matrix which can be an advantage in some cases (see e.g. \citet{taylor2008}).
The \stan\ developers announced such a feature in future versions, though.
Using the samples generated by the initial chain itself involves the risk of
obtaining highly correlated samples in case the sampler was malfunctioning due
to the use of a bad initial mass matrix.
To avoid this, \hmcif\ uses samples drawn from the curvature of the full posterior at
the current position to reevaluate the mass matrix.
We found this approach to be much more efficient.
Reevaluated mass matrices are always diagonal in \hmcif , but since it is targeted at
high-dimensional problems, where non-sparse matrices cannot be represented explicitly,
this is not really a disadvantage.
Furthermore, more recent \nifty\ based algorithms use
harmonic space degrees of freedom as field variables \citep{knollmuller2017}
which fits better to a mass matrix being diagonal in these field parameters.
Another important package for HMC sampling in \python\ is \pymc\ \citep{pymc}.
\pymc\ provides a huge variety of different samplers among other functions for
statistical applications.
When it comes to the HMC sampler in \pymc\ the main difference to \hmcif\
is that the mass matrix is again evaluated based on the samples of the Markov
chain itself which might be problematic as described in the paragraph above.
\pymc\ also provides a NUTS implementation.
Apart from that, the main advantages of \hmcif\ are its ease of use with
algorithms already written in \nifty\ and its optimization for high-dimensional
statistical problems.
\subsection{Structure of this Paper}
This introduction is followed by a short installation guide.
In section \ref{sec:theory} we give an introduction to HMC sampling on a
theoretical / mathematical level.
Afterwards, we illustrate the work with \nifty\ and \hmcif\ using a simple
Bayesian hierarchical model as an example in section \ref{sec:eg}.
This document ends with a short summary in section \ref{sec:summary} on
why there is a need for a distinct \hmcif\ package.
\section{Detailed Software Description}\label{sec:soft}
This section describes all features and functionalities of \hmcif .
\subsection[The HMCSampler Class]{The \hmcsc\ Class}
The \hmcif\ package is optimized for a fast HMC implementation for a \niftye\ class.
At its heart lies the \hmcsc\ class handling the whole sampling process.
It is able to run several Markov chains on different CPUs
using the \python\ \pkg{multiprocessing} module.
The samples are saved to an HDF5-file which is generated every time a new run is initialized
and can be loaded back as needed via the package's own \code{load} function.
During the burn-in phase \hmcsc\ takes care of adjusting the integration
step size $\epsilon$ (see equation (\ref{eq:leapfrog})) such that a user
defined acceptance rate is reached,
as well as setting and possibly reevaluating the mass matrix $M$.
After a run has finished the mean of the samples is calculated.
Of course in practice one may want to fine-tune some of the specific features
and parameters implemented in \hmcif .
This section is dedicated to introduce and explain those.
\subsubsection{Instantiating}
An instance of \hmcsc\ is created with the following arguments of which only the first is mandatory:
\begin{description}
\item[\code{potential} :]
\niftye\ \\
The HMC potential $\Psi(x)$.
Also defines the domain on which the sampling takes place through its position attribute.
\item[\code{sample\_transform} :]
\code{func}, optional\\
In some cases it is preferable to sample a field not in its position space,
but in another domain, such as in its harmonic space representation,
or maybe even in a domain where there is no linear transformation
to the position space.
To ensure correct calculation of expectation values such as the mean or the variance
the samples are transformed by \code{sample\_transform} before being saved to disk.
The \code{sample\_transform} function has to have exactly one argument which
is a \code{Field} or \code{MultiField} similar to the \code{position}
attribute of the \nifty\ \eon\ given as the \code{potential} argument.
The \code{sample\_transform} function has to return a \code{Field} or
\code{MultiField}.
There are, however, no further restrictions on the exact structure of this
\code{Field} or \code{MultiField}.
\item[\code{num\_processes} :]
\code{int}\\
Number of cores involved in the sampling process.
This is equal to the number of individual Markov chains started when the instance method \code{run} is called.\\
Default: 1
\item[\code{sampler\_dir\_path} :]
\code{str}\\
A path to the directory where the HDF5-file containing the samples will be stored.\\
Default: a new folder called ``samples'' in the `\code{\_\_main\_\_}' script's directory
\end{description}
\subsubsection{Running the Sampling Process}
In principle the sampling process can be started immediately
after creating an instance of \hmcsc\ by calling the \code{run()} method
with the following arguments, of which again only the first is mandatory;
an example call combining several of them is shown after the list.
\begin{description}
\item[\code{num\_samples} :]
\code{int}\\
Number of samples to be drawn per chain after burn-in.
\item[\code{max\_burn\_in} :]
\code{int}, optional\\
Maximum number of steps for the chain to converge before it is forced into sampling mode.
If no value is stated, forced transition is not going to happen.
In this case the chain will only start the actual sampling process if it has converged.
\item[\code{convergence\_tolerance} :]
\code{float}, optional\\
If the convergence measure for the sampling process falls below this value,
the chain is assumed to have converged
and starts with the actual sampling process.
If no value is stated the \code{tolerance} property of
the \hmcsc\ property \code{convergence} is used.
The default value for said property is 1.
For more on this see section \ref{sec:conv}.
\item[\code{target\_acceptance\_rate} :]
\code{float}, optional\\
Value between 0. and 1., stating which fraction of proposed samples should be accepted.
The integration step size is adjusted during burn-in to approximately match this rate.\\
If not stated the corresponding property of the \code{epsilon} property is
used (for which the default value is 0.8).
\item[\code{order} :]
\code{int}\\
The order of the symplectic integrator.
The default value corresponds to a simple leapfrog integration.
Default: 2
\item[\code{mass} :]
\nifty\ \code{EndomorphicOperator}, optional\\
HMC mass matrix used during sampling (or until it is reevaluated).
For more on the mass matrix see section \ref{sec:mass}.
If no mass is given, an identity matrix is used (at least as initial guess).
\item[\code{x\_initial} :]
\niftyf\ or \pylist\ of \niftyf s, optional\\
Starting point(s) for the HMC sampler.
If more than one Markov chain needs to be initialized
they get their respective initial positions by iterating through the list.
The list does not have to have the same length as the number of chains.
If there are more chains than elements in the \pylist ,
some starting positions are reused for the additional chains.
If only a \code{Field} is given, all chains get the same initial position.
If no initial field is passed,
a random sample is drawn from a Gaussian distribution centered at the position of the \code{Energy} instance
given to the constructor of \hmcsc\ with the \code{Energy}'s metric as covariance.
\end{description}
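For illustration, a call combining several of these optional arguments could
look like this:
\begin{Code}
sampler.run(500,
            max_burn_in=2000,
            convergence_tolerance=1.,
            target_acceptance_rate=0.8,
            order=2,
            x_initial=x_initial)
\end{Code}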
\subsubsection{Getting the Results of an HMC-Run}
After a run has finished, the sampled mean as a \niftyf\ or \code{MultiField} is
accessible via the instance's property `\code{mean}'.
\begin{lstlisting}[language=iPython]
|\iin| hmc_sampler = HMCSampler(nifty_energy, num_processes=5)
|\iin| hmc_sampler.run(200)
|\iin| hmc_sampler.mean
|\out| nifty4.Field instance
- domain = DomainTuple, len: 1
RGSpace(shape=(256, 256), distances=(0.5, 0.5),
harmonic=True)
- val = array([[ 0.63, -0.16, ..., 1.04, -0.64],
...,
[ 0.03 , 0.02, ..., 1.22, 0.21 ]])
\end{lstlisting}
Accessing the values of the samples is possible via calling the \code{samples}
property.
It consists of a $2+n$ dimensional \np\ \code{ndarray} where the first
dimension represents the different Markov Chains,
the second dimension represents the individual samples
and the other $n$ dimensions represent the value of the sampled \nifty\ \code{Field}s.
If the sampled objects were \code{MultiField}s, a dictionary of \np\
\code{ndarray}s is returned.
Remember though that calling this will load all samples into memory
which might crash the process if not enough memory is available.
\ContinueLineNumber
\begin{lstlisting}[language=iPython]
|\iin| all_samples = hmc_sampler.samples
|\iin| all_samples.shape
|\out| (5, 200, 256, 256)
\end{lstlisting}
\subsubsection[Attributes and Properties of HMCSampler]{Attributes and Properties of \hmcsc}
The \hmcsc\ class has a number of other properties and attributes which are
mostly used for fine-tuning the sampling process.
These are:
\begin{description}
\item[\code{potential} :]
\nifty\ \eon , read-only\\
The potential $\Psi$ for the HMC sampler.
\item[\code{sampler\_dir} :]
\code{str}\\
Setting or getting the path to the directory where the sample-files are stored.
Corresponds to the parameter of the same name passed in the constructor of \hmcsc\ class.
\item[\code{save\_burn\_in\_samples} :]
\code{bool}\\
Whether or not to save the samples generated during burn-in phase to disk.
Be aware that setting this to \code{True} (the default value) together with a
high or non-existent \code{max\_burn\_in} parameter (in the \code{run} method
of the \hmcsc\ class)
could fill your hard drive.\\
Default: True
\item[\code{burn\_in\_samples} :]
\np\ \code{ndarray} or \code{dict}(\code{str -> ndarray}), read-only\\
The same as the \code{samples} property but with the samples generated
during burn-in phase.
\item[\code{var} :]
\nifty\ \code{Field} or \code{MultiField}, read-only\\
The variance of the samples (after \code{sample\_transform}).
\item[\code{convergence} :]
\hmcif\ \convc\\
For choosing how to calculate the convergence measure (see section \ref{sec:conv}).\\
Default: \code{HansonConvergence if num_processes == 1 else GelmanRubinConvergence}
\item[\code{epsilon} :]
\hmcif\ \epsc\\
For choosing how to adjust the integration step size parameter
during burn-in (see section \ref{sec:eps}).\\
Default: \code{EpsilonPowerLawDivergence}
\item[\code{mass} :]
\hmcif\ \massc\\
For choosing how to handle the HMC mass during sampling.
For more on this see section \ref{sec:mass}.
\item[\code{display} :]
\hmcif\ \dispc\\
For choosing how to display the progress of the sampling process (see section \ref{sec:disp})\\
Default: \code{LineByLineDisplay}
\item[\code{n\_limits} :]
\pylist\ of \integer s\\
To avoid periodic trajectories the number of
leapfrog integration steps is randomized.
\code{n\_limits} defines the range from which the number of
integration steps is drawn uniformly.\\
Default: \code{[60, 70]}
\end{description}
\subsection[Convergence]{\convc}\label{sec:conv}
The \convc\ class handles everything related to the convergence of the Markov chain(s).
In principle a chain in \hmcif\ has converged if a \emph{convergence measure}
calculated for each degree of freedom in the sampled \niftyf\ or \code{MultiField}
drops below a given \emph{tolerance}.
Additionally \hmcif\ implements intermediate steps of convergence
via so-called convergence \emph{levels}.
For the time being their main purpose is to define a time
at which the HMC mass is reevaluated during burn-in
(see also section \ref{sec:mass}).
A chain is said to have converged with respect to its current
convergence level, if
\begin{equation}\label{eq:converged}
\max(\text{measure}) < \text{tolerance} \cdot 10^\text{level}
\end{equation}
In other words:
The level is the number of digits by which the decimal separator
of the tolerance is shifted to the right.
The idea behind this is to decrease the level by one
each time an intermediate convergence is reached,
while at that point recalculating the mass matrix.
If the level drops to zero, equation (\ref{eq:converged})
simplifies to $\max(\text{measure}) < \text{tolerance}$
and the next time this requirement is met,
the Markov chain has finished the burn-in phase.
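Schematically, the level mechanism amounts to the following (function and
variable names are placeholders):
\begin{Code}
converged = measure_max < tolerance * 10**level  # equation (eq:converged)
if converged and level > 0:
    level -= 1          # intermediate convergence: recalculate the mass
elif converged:
    finish_burn_in()    # level 0: transition to the sampling phase
\end{Code}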
It remains to explain how the convergence measure is calculated.
There are several different approaches implemented in \hmcif\
as child classes of the \convc\ class.
Choosing one of them is done by setting
the \code{convergence} property of the \hmcsc\ class with
one of the \convc 's child classes, e.g.:
\begin{Code}
hmc_sampler.convergence = GelmanRubinConvergence
\end{Code}
For now there are four different possibilities:
\begin{description}
\item[\code{MeanConvergence}]
This rather simple way of calculating the convergence needs at least two Markov chains.
It compares the mean of the samples from all chains (\code{total\_mean}) to the mean of each individual chain (\code{chain\_means}).
The measure is defined as \code{abs(chain\_mean / total\_mean - 1.)}
such that it fulfills the non-negativity and the identity of indiscernibles criteria for metrics.
It proves to be rather unstable if e.g. the total mean is close to zero.
\item[\code{VarConvergence}]
Very similar to \code{MeanConvergence} only with the variances of individual chains and all chains.
Measure is equal to \code{abs(chain\_var / total\_var - 1.)}.
\item[\code{HansonConvergence}]
So far the only convergence measure which can be used even if there is only one chain.
It follows \citet{hanson2001}
(Again: the measure is the absolute value of the ratio minus one).
\item[\code{GelmanRubinConvergence}]
An implementation of the Gelman and Rubin convergence criterion
\citep{gelman1992}, which is very popular among MCMC practitioners
(again, the measure is the absolute value of the ratio minus one;
a sketch of such a measure is given after this list).
\end{description}
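For reference, a textbook variant of such a Gelman--Rubin-type measure could be
computed as follows (a sketch only; the exact formula used by \hmcif\ may
differ in details):
\begin{Code}
import numpy as np

def gelman_rubin_measure(chains):
    # chains: array of shape (num_chains, num_samples, dof)
    m, n = chains.shape[0], chains.shape[1]
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(axis=0, ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean(axis=0)  # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.abs(np.sqrt(var_hat / W) - 1.0)    # |R_hat - 1| per dof
\end{Code}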
\subsubsection[Attributes and Properties of Convergence]{Attributes and Properties of \convc}
Regardless of which \convc\ child class has been used additional features can be set
via its class properties, e.g. the \code{locality} property which defines
the number of recent samples considered when calculating the \code{convergence}
(see below for details):
\begin{Code}
hmc_sampler.convergence = GelmanRubinConvergence
hmc_sampler.convergence.locality = 200
\end{Code}
For the common user the following properties are the most important ones:
\begin{description}
\item[\code{locality} :]
\integer\\
The number of recent samples to be considered in calculating the convergence
measure.
On the one hand this is a form of `forgetting' very old parts of the chain's
trajectory which do not represent the current state of convergence.
On the other hand this is necessary because of memory issues
i.e. if the burn-in phase takes very long the memory would blow up
since every sample ever created has to be available to calculate the
measure.\\
Default: 250
\item[\code{tolerance} :]
\float \\
Equivalent to the \code{convergence\_tolerance} parameter
of the \hmcsc 's \code{run} method.
In fact, setting this property as described above has only an effect
if the (optional) \code{convergence\_tolerance} parameter
is not passed to the \code{run} method.\\
In practice the latter approach might be slightly more convenient.
If the maximum value of a chain's convergence measure is below this value
the chain is said to have converged and transitions from the burn-in phase
into the sampling phase.
See also: \code{converged} (below)\\
Default: 1.
\end{description}
The following additional properties of \convc\ are mostly only important for
\hmcsc\ itself and not of relevance for the common user:
\begin{description}
\item[\code{converged} :] \nparray\ of \bool\ (1 dim)\\
Contains the information of whether the individual chains have converged
with respect to the following law:\\
\code{converged = measure\_max < tolerance * 10**level}\\
\item[\code{measure} :] \nparray\ ($1 + n$ dim)\\
Represents the value of the measure
(calculated dependent on which child class of the \convc\ class has been used)
for each element of the degrees of freedom in the sampled \niftyf .
The first dimension represents the individual chains.
\item[\code{measure\_max} :] \nparray\ ($1$ dim)\\
The highest value of the \convc\ class property `\code{measure}' per chain.
\item[\code{level} :] \nparray\ of \integer\ (1 dim)\\
See class property \code{converged}.
The idea is that after a Markov chain has converged
with respect to its current level the level is decreased by one.
There are \convc\ class methods \code{dec\_level} and \code{inc\_level}
for decreasing and increasing the level by 1, respectively.
For more details on these methods see below.\\
Setting this property is also possible with a simple \integer\
which sets the whole \nparray\ to that value.
\item[\code{quota} :] \nparray\ of \float (1 dim)\\
The ratio of elements in the sampler's position \niftyf\ which have converged
with respect to \code{tolerance} and \code{level}
(i.e. the 'intermediate' convergence)
\end{description}
\subsubsection[Additional Methods of Convergence]{Additional Methods of \convc}
Internally the convergence levels are decreased and increased by calling
\begin{description}
\item[\code{dec\_level(chain\_identifier=None)}] \hfill \\
Decreases the convergence level of \code{chain\_identifier} (\integer )
by one. If \code{chain\_identifier} is \code{None}
the level of all chains is decreased by one.
Either way if the level of a chain is already zero it is left unchanged.
\item[\code{inc\_level(chain\_identifier=None)}] \hfill \\
Increases the convergence level of \code{chain\_identifier} (\integer )
by one. If \code{chain\_identifier} is \code{None}
the level of all chains is increased by one.
\end{description}
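Although these methods are mainly used internally, they can also be called
manually in the same fashion, e.g. (illustrative only):
\begin{Code}
hmc_sampler.convergence.dec_level(chain_identifier=0)  # chain 0 only
hmc_sampler.convergence.inc_level()                    # all chains
\end{Code}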
Under the hood, the convergence level is set at the beginning of the
\hmcsc 's \code{run} method, depending on specific properties of the
\massc\ class.
\subsection{Epsilon}\label{sec:eps}
The $\epsilon$ parameter defines the leapfrog integration step size
(equation (\ref{eq:leapfrog})).
In principle, the bigger it is, the bigger the integration error $\Delta E$,
and thereby the smaller the acceptance rate.
To achieve an approximate acceptance rate
defined via the \code{target\_acceptance\_rate} parameter of
\hmcsc 's \code{run} method, $\epsilon$ has to be adjusted during burn-in.
Every adjusting strategy relies on the considerations presented in appendix
\ref{sec:epsadj}, which connect the \code{target\_acceptance\_rate} to the
integration error $\Delta E$.
In \hmcif\ the \epsc\ class, much like the \convc\ class, is just a base class;
much more interesting for the common user are its child classes, which define
exactly how $\epsilon$ is adjusted.
The class also keeps track of how much $\epsilon$ has changed in recent steps
and how close the mean value of recent integration errors
$\langle \Delta E \rangle$ is to $\tde$.
If $\epsilon$ has not changed very much and
$\langle \Delta E \rangle \approx \tde$, \epsc\ is said to have converged.
Once \epsc\ has converged, its value is locked.
\subsubsection{Available Adjusting Strategies}
\begin{description}
\item[\code{EpsilonConst}]
$\epsilon$ stays constant throughout the whole sampling process.
The value can be set by setting its \code{val} attribute:
\begin{Code}
hmc_sampler.epsilon = EpsilonConst
hmc_sampler.epsilon.val = 0.005
\end{Code}
\item[\code{EpsilonSimple}]
$\epsilon$ is reduced or increased if $\Delta E$ is bigger or smaller
than $\tde$, respectively, independent of the absolute value of $\Delta E$.
In practice, \code{EpsilonSimple} has an attribute \code{change_range}
(\float\ between 0 and 1, Default: 0.1), which can be set via:
\begin{Code}
hmc_sampler.epsilon = EpsilonSimple
hmc_sampler.epsilon.change_range = 0.2
\end{Code}
This attribute is only available in \code{EpsilonSimple}.
Given the \code{change_range} the current value of $\epsilon$
is multiplied by a factor drawn from
a uniform distribution $\uni ([a, b])$ where
\[
[a, b] = \left\{
\begin{array}{ll}
{[1 - \text{\code{change\_range}}, 1]} & \text{if } \Delta E > \tde\\
{[1, 1 + \text{\code{change\_range}}]} & \text{if } \Delta E < \tde
\end{array}
\right.
\]
The randomness is necessary to prevent recurrent behavior
if the integration error $\Delta E$ is very sensitive to $\epsilon$.
\item[\code{EpsilonPowerLaw}]
$\epsilon$ is adjusted just like in \code{EpsilonSimple}, but the
\code{change\_range} is now determined by the relative difference between
$\Delta E$ and $\tde$
(\code{EpsilonPowerLaw} itself has no attribute \code{change\_range});
a small sketch of this rule is given after this list.\\
Given this class's attribute \code{power} (positive \integer , Default: 5),
set via
\begin{Code}
hmc_sampler.epsilon = EpsilonPowerLaw
hmc_sampler.epsilon.power = 4
\end{Code}
the \code{change\_range} in \code{EpsilonSimple} is defined as:
\begin{equation}\label{eq:epspower}
\text{\code{change\_range}} = \left| \frac{\Delta E - \tde}{\Delta E + \tde} \right|^{\text{power}}
\end{equation}
\item[\code{EpsilonPowerLawDivergence}]
In practice, when working with Poissonian or log-normal distributions on
high-dimensional spaces, the integration error $\Delta E$ proved to be
very sensitive to small changes in $\epsilon$.
With this class $\epsilon$ is adjusted just like in \code{EpsilonPowerLaw},
with the difference that in case of a divergent $\Delta E$ (e.g., an overflow
occurring during integration) the \code{change\_range} is damped:
a \code{divergence\_counter} keeps track of the number of times
divergent behavior was detected, and the \code{change\_range} defined
in equation (\ref{eq:epspower}) gets a prefactor
$2^{-\text{\code{divergence\_counter}}}$.
\item[\code{EpsilonExponential}]
In this case a simple connection between $\Delta E$ and $\epsilon$ is assumed:
\begin{equation}\label{eq:epsexp}
|\Delta E|(\epsilon) = a \cdot \epsilon^b
\end{equation}
where $a > 0$ and $b > 0$ are fitting parameters.
This assumption is motivated by the fact that $\Delta E$ tends to zero
if $\epsilon$ does and diverges for large $\epsilon$.
Former `measurements', i.e., values of $\Delta E$ recorded at given values
of $\epsilon$ in past sampling steps, are used to fit $a$ and $b$.
This approach asks for a rather large value for the \code{locality}
property (see below).
$\epsilon$ is adjusted by rearranging equation (\ref{eq:epsexp})
such that:
\begin{equation}
\epsilon_{\text{new}}= \left( \frac 1a \left|\tde \right| \right)^{\frac 1b}
\end{equation}
\item[\code{EpsilonOneOverX}]
In this case another connection between $\Delta E$ and $\epsilon$ is assumed:
\begin{equation}\label{eq:epsx}
|\Delta E|(\epsilon) = \frac{a}{(\epsilon_0 - \epsilon)^b} + \text{const}
\end{equation}
where $a > 0$ and $b > 1$ are again fitting parameters and
$\text{const}$ is such that $|\Delta E|(\epsilon=0) = 0$.
The idea behind this relation is an updated version of \code{EpsilonExponential}
in which $\Delta E$ already diverges at some finite $\epsilon_0$.
$a$ and $b$ are again fitted with former $\Delta E$s given $\epsilon$,
whereas $\epsilon_0$ is set to the current value of $\epsilon$
every time a divergent behavior is detected.
$\epsilon$ gets adjusted by rearranging equation (\ref{eq:epsx}), such that
\begin{equation}
\epsilon_{\text{new}} = \epsilon_0 \left( 1 - \left( 1 + \frac{\epsilon_0^b \tde}{a} \right)^{-\frac 1b} \right)
\end{equation}
\end{description}
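To summarize the rules referenced above, the following \python\ snippet
sketches how \code{EpsilonPowerLaw} combines the \code{change\_range} of
equation (\ref{eq:epspower}) with the random multiplicative update of
\code{EpsilonSimple}. It is a minimal sketch for illustration only and not the
package's actual implementation:
\begin{Code}
import numpy as np

def adjust_epsilon(eps, dE, tdE, power=5, rng=None):
    # change_range as in equation (eq:epspower)
    if rng is None:
        rng = np.random.default_rng()
    change_range = abs((dE - tdE) / (dE + tdE))**power
    if dE > tdE:  # integration error too large -> shrink epsilon
        return eps * rng.uniform(1.0 - change_range, 1.0)
    return eps * rng.uniform(1.0, 1.0 + change_range)
\end{Code}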
\subsubsection[Attributes and Properties of Epsilon]{Attributes and Properties of \epsc}
Regardless of which \epsc\ class has been used, additional features can be set
via its class properties, e.g. the \code{locality} property which defines
the scope of fitting points for classes like \code{EpsilonExponential} and
for calculating the convergence measure
(see below for details):
\begin{Code}
hmc_sampler.epsilon = EpsilonPowerLawDivergence
hmc_sampler.epsilon.locality = 20
\end{Code}
For the common user the following properties and attributes
are the most important ones:
\begin{description}
\item[\code{val} :]
\float \\
The (current) value of $\epsilon$.
It is possible to use this property to set a good initial guess
(although most of the time unnecessary). \\
Default: 0.005
\item[\code{locality} :]
\integer\\
The number of recent samples to be considered in calculating the convergence
measure of \code{epsilon}.\\
Default: 50
\item[\code{target\_acceptance\_rate} :]
\float \\
Equivalent to the \code{target\_acceptance\_rate} parameter
of the \hmcsc 's \code{run} method.
In fact, setting this property only has an effect
if the (optional) \code{target\_acceptance\_rate} parameter
is not passed to the \code{run} method.\\
Default: 0.8
\item[\code{convergence\_tolerance} :]
\float \\
Essentially the same thing as the \code{tolerance} property of
\code{Convergence} (section \ref{sec:conv}), but for \code{epsilon}.
A value $> 1$ is unreasonable because of how the convergence measure
for \code{epsilon} is calculated (see below).\\
Default: 0.5
\item[\code{divergence\_threshold} :]
\float \\
The value of $\Delta E$ for which the integration is said to have diverged.\\
Default: \code{1.E50}
\item[\code{epsilon\_limits} :]
\pylist\ of \float \\
Minimum and maximum value for $\epsilon$.
If the adjusting algorithm proposes a value `out of range'
the value gets coerced.\\
Default: \code{[1.E-40, 10]}
\end{description}
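As before, these properties can be set directly on the sampler; the values
below are purely illustrative:
\begin{Code}
hmc_sampler.epsilon.val = 0.01
hmc_sampler.epsilon.divergence_threshold = 1.E30
hmc_sampler.epsilon.epsilon_limits = [1.E-20, 1.]
\end{Code}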
Under the hood \hmcsc\ uses the following additional properties of \epsc :
\begin{description}
\item[\code{converged} :] \bool , read-only\\
Whether \code{epsilon} has converged,
i.e. whether both entries of \code{measure} are smaller than the
\code{convergence\_tolerance}.
\item[\code{measure} :] \code{list} of \float , read-only\\
The convergence measure for \code{epsilon} contains two values:
the first is the relative variance of $\epsilon$;
the second represents how close the mean value of $\Delta E$
is to $\tde$.
If both are smaller than \code{convergence_tolerance},
\code{epsilon} has converged.
\end{description}
\subsection{Mass}\label{sec:mass}
Finding an appropriate mass matrix is the most challenging
task for a good HMC sampler.
\hmcif\ provides the user with the standard evaluation procedure
introduced in equation (\ref{eq:masseval}) as well as the
possibility to define a problem specific mass matrix.
(Re-)evaluation is done by drawing samples $\{x^{(i)}\}$ from the curvature
at the current position of the \nifty\ \eon\
given as the \code{potential} parameter in the constructor.
To keep the complexity of the problem bearable
only the diagonal of the mass matrix in equation (\ref{eq:masseval})
is calculated and used:
\begin{equation}\label{eq:massevaldiag}
M_{jk} = \delta_{jk} \left( \frac 1{N-1} \sum_{i=1}^N {x_j^{(i)}}^2 \right)^{-1}
\end{equation}
This also removes the problem that for a non-degenerate mass matrix in
$n$ dimensions at least $n$ independent samples are required;
for typical applications in \nifty , $n$ easily reaches $10^6$ to $10^8$.
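In \python\ terms, equation (\ref{eq:massevaldiag}) amounts to the following
minimal sketch (ours, for illustration only):
\begin{Code}
import numpy as np

def diagonal_mass(samples):
    # samples: array of shape (N, n) holding the curvature samples x^(i)
    # M_jj = ( 1/(N-1) * sum_i (x_j^(i))**2 )**(-1)
    N = samples.shape[0]
    variance = np.sum(samples**2, axis=0) / (N - 1)
    return 1.0 / variance
\end{Code}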
The idea behind the \hmcif\ \massc\ class is to make it easy to define
a strategy for handling the mass matrix of an HMC process.
This includes reevaluating the mass matrix several times
during the burn-in phase and defining an initial mass matrix.
As default the identity is used as the mass matrix, but an initial
mass matrix can be evaluated without reaching any level of convergence.
By default there is one such initial mass evaluation and no further
reevaluations.
The most important properties of \massc\ are:
\begin{description}
\item[\code{get\_initial\_mass} :]
\bool \\
Whether or not to evaluate an initial mass.
Setting this to \code{True}/\code{False} will increase/decrease
\code{reevaluations} by 1 respectively.\\
Default: \code{True}
\item[\code{reevaluations} :]
\nparray\ of \integer \\
The number of reevaluations (including the initial one if set)
for each chain.
Reevaluation takes place if the chain has converged with respect to
its current convergence level introduced in section \ref{sec:conv}.
Setting the number of reevaluations is also possible (and recommended) with a
simple \integer\ which sets all chains to that \integer .\\
Default: \code{numpy.ones(num_chains)}
(i.e. one reevaluation for every chain)
\item[\code{operator} :]
\nifty\ \code{EndomorphicOperator}\\
The actual mass operator.
If there is a problem specific mass operator it can be set with this property
before initiating the run.
If \code{get_initial_mass} is \code{True},
setting \code{operator} will set \code{get_initial_mass} to \code{False}
and decrease \code{reevaluations} by 1.\\
Default: Identity (as \nifty\ \code{DiagonalOperator})
\item[\code{num\_samples} :]
\integer \\
Defines the number of samples drawn from the curvature
when reevaluating the mass matrix.\\
Default: 50
\item[\code{shared} :]
\bool \\
If \code{True} reevaluating a mass matrix is done at the mean
of all individual chain positions.
All chains have to meet the conditions for evaluation mentioned above.
Afterwards each chain gets the same mass matrix.
If \code{False} each chain reevaluates its mass matrix individually
if the chain meets the conditions for evaluation.\\
Default: \code{False}
\end{description}
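As with the other strategy classes, these properties can be set before
initiating the run; the values below are purely illustrative:
\begin{Code}
hmc_sampler.mass.num_samples = 100
hmc_sampler.mass.reevaluations = 2
hmc_sampler.mass.shared = True
\end{Code}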
\subsection{Display}\label{sec:disp}
Naturally, HMC sampling can take quite a while, and
disadvantageous parameter settings might lead to a malfunctioning
sampling process.
Being able to discover a pathological run early can save hours or even days.
For this reason \hmcif\ offers a number of display modes for diagnostic purposes.
On the other hand, displaying indicators for tens of parallel running
chains might be overwhelming.
All available display options have in common that they display more or less
information based on a \emph{logging level} just like the logging levels in
the \pkg{logging} package \citep{pylog} of the \python\ standard library.
It is in fact possible to use the appropriate module variables from the
\pkg{logging} package.
These are:
\begin{description}
\item[\code{CRITICAL}] (numerical value: \code{50})
\item[\code{ERROR}] (numerical value: \code{40})
\item[\code{WARNING}] (numerical value: \code{30})
\item[\code{INFO}] (numerical value: \code{20})
\item[\code{DEBUG}] (numerical value: \code{10})
\item[\code{NOTSET}] (numerical value: \code{0})
\end{description}
The logging level can be set using the \code{display}'s \code{setLevel} method, e.g.,
\begin{Code}
from logging import INFO
hmc_sampler.display.setLevel(INFO)
\end{Code}
Similarly to the \epsc\ and \convc\ classes, there are several \dispc\
classes, which define the three display modes:
\begin{description}
\item[\code{Display} :]
Displays nothing at all.
Serves as base class for the two other display classes.
\item[\code{LineByLineDisplay} :]
If the level of the \hmcif\ logger is set to \code{INFO} or below
certain indicators are printed at every sampling step
as depicted in figure \ref{fig:lbld}.
Otherwise only warnings and errors are printed.
\item[\code{TableDisplay} :]
The most advanced version of displaying indicators.
A table is generated and dynamically updated, containing information
for each chain as depicted in figure \ref{fig:td}.
This class relies heavily on the \pkg{curses} \python\ package and
therefore alters the terminal behavior during sampling.
\end{description}
\begin{figure}
\centering
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=0.9\linewidth]{lbldisp_burnin01}
\caption{\code{LineByLineDisplay} with \code{level = DEBUG} during burn in}
\label{fig:lbld}
\end{subfigure}
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=0.9\linewidth]{tabledisp_burnin01}
\caption{\code{TableDisplay} with \code{level = DEBUG} during burn in}
\label{fig:td}
\end{subfigure}
\caption{The two display modes of \hmcif\ during burn in.}
\label{fig:displays}
\end{figure}
The columns in \code{TableDisplay} display the following parameters:\\
\begin{tabularx}{\textwidth}{lX}
\code{ch} & The chain number or `identifier'.\\
\code{acc.\ rate} & The acceptance rate for each chain.
During burn in this is only the acceptance rate of the last 10 samples
since this highly depends on the value of \epsc .
After burn in \code{acc.\ rate} displays the overall acceptance rate.\\
\code{dEnergy} & The most recent value of the integration error
$\Delta E$.\\
\code{Convergence} & \\
\utab\code{conv.} & Whether the chain has converged with respect to the current
convergence level.\\
\utab\code{lev} & The current convergence level.\\
\utab\code{meas} & The maximum value of the convergence measure of the chain.\\
\utab\code{quota} & The percentage of points in the sampled \niftyf\ which have
converged with respect to the current convergence level.\\
\code{Epsilon} & \\
\utab\code{value} & The current value of \epsc . \\
\utab\code{conv.} & Whether or not \epsc\ has converged with respect to its measure.\\
\utab\code{meas.} & The maximum value of the \epsc\ convergence measure.
\end{tabularx}
\subsubsection[Additional Methods of Display]{Additional Methods of \dispc}
Regardless of which \dispc\ class has been used, the quantity of displayed
information can be changed via the \code{setLevel} method.
\begin{description}
\item[\code{setLevel(level)}] \hfill \\
This is equivalent to the \code{setLevel} function in the \python\ package
\pkg{logging}.
Default: \code{logging.INFO}
\end{description}
\subsection{Additional Functions}\label{sec:addfunc}
\hmcif\ provides a number of additional functions targeted at
easing the handling of the HDF5-files generated during sampling.
These files have rather cryptic
default names of the form `runYYYY-MM-DD\_hh-mm-ss.h5',
where Y, M, D, h, m and s represent year, month, day, hour, minute and second
digits at the time of the initialization of that run, respectively.
There are three functions handling data stored in these files:
\code{load}, \code{load\_mean} and \code{load\_var}.
\subsubsection[The load Function]{The \code{load} Function}
The \code{load} function takes the following arguments
of which only the first is mandatory.
Keep in mind that for high-dimensional problems the amount
of data loaded into memory can easily reach several GiB.
\begin{description}
\item[\code{path} :]
\str \\
Either the path to the HDF5 file
or a path to the directory where the HDF5 file(s) are stored.
If \code{path} is a directory, the latest `run' file
(with respect to the file name) is loaded.
\item[\code{attr\_type} :]
\str \\
The `attribute' to load.
For files generated with the \hmcsc\ class possible values are:
\code{'burn_in'} and \code{'samples'}.
For \code{'burn_in'} the samples generated during burn in are loaded
(of course, this is only possible if the \hmcsc\ class attribute
\code{save_burn_in_samples} was set to \code{True}).
For \code{'samples'} the samples generated after burn in are loaded.
Default: \code{'samples'}
\item[\code{start} :]
\integer \\
Only loads the samples from step \code{start} onward.\\
Default: 0
\item[\code{stop} :]
\integer , optional\\
Only loads the samples up to step \code{stop}.
Loads until the end if no value is passed.
\item[\code{step} :]
\integer \\
Only loads every $n$th sample, where $n$ is given by \code{step}. \\
Default: 1
\end{description}
In all cases the function returns a \nparray\ if \code{Field}s were sampled and
a \python\ dictionary if \code{MultiField}s were sampled.
In the latter case the respective \nparray\ for each individual field can be
obtained by using the key used in the \code{MultiField}.
In all cases the array has $2+n$ dimensions,
where again, the first and second dimension represent chains and steps
respectively.
If not all chains have the same number of samples
(e.g., for \code{attr_type = 'burn_in'}, since every chain may need a
different number of steps to reach convergence),
shorter chains are padded with \code{numpy.nan}s in the output array
to match the size of the longest chain.
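A typical call might look as follows (the import path is an assumption made
for the sake of the example; adapt it to the actual package layout):
\begin{Code}
from hmcf import load  # assumed import path

samples = load('samples/', attr_type='samples', start=0, step=5)
print(samples.shape)   # (num_chains, num_steps, ...)
\end{Code}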
\subsubsection[The load_mean Function]{The \code{load\_mean} Function}
The \code{load\_mean} function calculates the mean value based
on the samples saved to an HDF5 file.
It takes the following arguments:
\begin{description}
\item[\code{path} :]
\str \\
Either the path to the HDF5 file
or a path to the directory where the HDF5 file(s) are stored.
If \code{path} is a directory, the latest `run' file
(with respect to the file name) is loaded.
\item[\code{domain} :]
\niftyd\ or \nifty\ \code{MultiDomain} , optional\\
If \code{domain} is given the function output is a \niftyf\ or a \nifty\
\code{MultiField}
with the respective domain and the calculated mean as value.
\end{description}
The function returns the mean value either as a \nparray , a \python\
dictionary of \nparray s, a \niftyf\ or a \nifty\ \code{MultiField},
depending on what was sampled originally and whether the \code{domain}
argument is given.
\subsubsection[The load_var Function]{The \code{load\_var} Function}
The \code{load\_var} function calculates the variance based
on the samples saved to an HDF5 file.
It takes the following arguments:
\begin{description}
\item[\code{path} :]
\str \\
Either the path to the HDF5 file
or a path to the directory where the HDF5 file(s) are stored.
If \code{path} is a directory, the latest `run' file
(with respect to the file name) is loaded.
\item[\code{domain} :]
\niftyd\ or \nifty\ \code{MultiDomain} , optional\\
If \code{domain} is given the function output is a \niftyf\ or a \nifty\
\code{MultiField}
with the respective domain and the calculated variance as value.
\end{description}
The function returns the variance either as a \nparray , a \python\
dictionary of \nparray s, a \niftyf\ or a \nifty\ \code{MultiField},
depending on what was sampled originally and whether the \code{domain}
argument is given.
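Both functions can be used analogously to \code{load}, e.g. (file name and
import path are again only illustrative):
\begin{Code}
from hmcf import load_mean, load_var  # assumed import path

mean = load_mean('samples/run2018-01-01_12-00-00.h5')
var = load_var('samples/run2018-01-01_12-00-00.h5')
\end{Code}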
\subsection{Tools}\label{sec:tools}
\pkg{Tools} is a sub-module of \hmcif\ which provides two handy functions
for evaluating an already finished sampling process.
One of them, \code{show_trajectories}, provides a GUI to quickly visualize
the individual Markov chains; the other, \code{get_autocorrelation},
calculates the autocorrelation of the chains.
\subsubsection[The show_trajectories Function]{The \code{show\_trajectories} Function}
A simple GUI for visualizing Markov chains based on one- or
two-dimensional problems.
The GUI is divided into two graphs as displayed in figure \ref{fig:gui01}.
The left one represents the underlying problem space, i.e., its geometry,
showing either the mean value of the samples or a similar user-defined field.
The right graph displays the trajectory of the different chains
for a selected pixel in the left graph.
A pixel can be selected either by entering its coordinates in the top row
and clicking on `show', or by simply clicking on it in the left reference
picture.
The function takes the following parameters of which only the first
is mandatory:
\begin{description}
\item[\code{path} :]
\str \\
Either the path to an HDF5 file
or a path to the directory where the HDF5 file(s) are stored.
If \code{path} is a directory, the latest `run' file
(with respect to the file name) in said directory is loaded.
\item[\code{reference\_field} :]
\niftyf\ or \code{MultiField}, \np\ \code{ndarray} or \code{dict(str ->
\code{ndarray})}, optional\\
The field displayed on the left graph of the GUI.
If none is given, the mean value of the samples is used as reference.
\item[\code{field\_name} :]
str, optional \\
If \code{MultiFields} were used, this argument is mandatory and specifies
which of the sub-fields is supposed to be displayed.
\item[\code{solution} :]
\niftyf\ or \code{MultiField}, \np\ \code{ndarray} or \code{dict(str ->
\code{ndarray})}, optional\\
In case a `right' answer is available (e.g., in a mock data example),
it can be passed here and is displayed in the trajectories graph as a
horizontal line, serving as an additional reference.
\item[\code{start} :]
\integer \\
Only loads the samples from step \code{start} onward.\\
Default: 0
\item[\code{stop} :]
\integer , optional\\
Only loads the samples up to step \code{stop}.
Loads until the end if no value is passed.
\item[\code{step} :]
\integer \\
Only loads every $n$th sample, where $n$ is given by \code{step}. \\
Default: 1
\end{description}
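A minimal, purely illustrative call (the import path is an assumption):
\begin{Code}
from hmcf.tools import show_trajectories  # assumed import path

# ground_truth is a hypothetical mock solution (e.g. a numpy array)
show_trajectories('samples/', solution=ground_truth, start=100)
\end{Code}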
\subsubsection[The get_autocorrelation Function]{The \code{get\_autocorrelation} Function}
Calculates the autocorrelation of the samples (after burn in)
for a given $t$, where $t$ is the shift in
\begin{equation}
\text{auto\_corr}[t] = \sum_i x(i)\bar{x}(i+t)
\end{equation}
The function takes the following arguments:
\begin{description}
\item[\code{path} :]
\str \\
Either the path to an HDF5 file
or a path to the directory where the HDF5 file(s) are stored.
If \code{path} is a directory, the latest `run' file
(with respect to the file name) in said directory is loaded.
\item[\code{shift} :]
\integer \\
The shift $t$ as described above.\\
Default: 1
\item[\code{field\_name} :]
str, optional \\
If \code{MultiFields} were used, this argument is mandatory and specifies
which of the sub-fields is supposed to be used.
\end{description}
The function returns a $1+n$ dimensional \nparray\
where the first dimension represents the different chains
and the other $n$ dimensions the dimensionality of the problem.
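A minimal \python\ sketch of this definition (ours, assuming real-valued
samples of shape \code{(num\_chains, num\_steps, ...)} as returned by
\code{load}) could read:
\begin{Code}
import numpy as np

def autocorrelation(samples, shift=1):
    # auto_corr[t] = sum_i x(i) * conj(x(i+t)); for real-valued samples
    # the conjugation is a no-op; the sum runs over the step axis
    x = samples[:, :samples.shape[1] - shift]
    y = samples[:, shift:]
    return np.sum(x * np.conj(y), axis=1)
\end{Code}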
\section{Introduction}
\label{sec:Introduction}
Linear codes have attracted much attention from researchers ever since the first days of Information Theory, as a means of approaching the capacity of additive discrete memoryless channels (DMC) with a reasonable encoding and decoding complexity. Interestingly, lattice codes, which are their Euclidean counterparts, were first considered for communication over additive white Gaussian noise (AWGN) channels only in the mid $1970$s by de Buda~\cite{deBuda75}. The relatively late interest in lattice codes may be in part due to the power constrained nature of the AWGN channel, which requires an additional shaping procedure that is not needed for linear codes over DMCs.
Lattice codes are attractive candidates for coding over the AWGN channel due to their structure, which can be exploited by encoding and decoding procedures with reduced complexity compared to non-structured codes. The question as to whether such codes can achieve the capacity of the AWGN channel with lattice encoding and decoding was fully resolved in \cite{ErezZamirAWGN} where a coding scheme based on nested lattice codes in conjunction with Wiener estimation was proposed.
Using this scheme, which we describe in detail in Section~\ref{subsec:EZscheme}, it was shown that for the AWGN channel, lattice codes with lattice encoding and decoding can achieve good error exponents. As a corollary, it followed that the AWGN channel's capacity is achievable with lattice encoding and decoding. In~\cite{lmk06} it was shown that for rates sufficiently close to capacity (i.e., above the critical rate of the channel) nested lattice codes actually achieve the optimal error exponent.
In the last decade lattice codes were found to play a new role in Information Theory.
Following the emergence of wireless communication as a leading technology, the interest in understanding the fundamental limits of Gaussian networks had grown. In such networks, there are multiple transmitters and receivers, where each receiver sees a linear combination of the signals sent by the different transmitters, corrupted by AWGN. Surprisingly, in networks of this type, coding schemes that utilize lattice codes often outperform the best known random coding schemes. Thus, lattice codes are used in order to obtain new achievable rate regions, rather than reducing complexity.
As a consequence, the nested lattices coding scheme of~\cite{ErezZamirAWGN} has been extensively used for proving new coding theorems for Gaussian networks. Since for the majority of these networks the capacity is not known, the question of finding the best error exponent is far out of scope. Therefore, when~\cite{ErezZamirAWGN} is used in the context of Gaussian networks, it is usually the capacity result that is used and not the error exponent results.
The aim of this paper is to make the capacity result from~\cite{ErezZamirAWGN} more accessible. To this end, we derive this result directly, without going through error exponent analysis. This leads to a simplified proof that is completely self-contained and uses only elementary probabilistic and geometrical arguments. A major advantage of the proof technique is that it can be easily extended to more complicated structures of nested lattices.
There are two main differences between the approach taken in this work and previous works. In previous work, a two-step construction was considered.
In the first step, assuming a coarse lattice that is good for covering (and hence also for quantization) is given, a dense self-similar nested lattice pair is generated via scaling.
In the second step, the fine lattice is diluted. In this paper we simultaneously construct the nested lattices by taking a chain of subcodes of one linear code. This is a direct extension of
the construction of nested linear binary codes proposed by Zamir and Shamai in \cite{zs98}. Another difference from previous works is that rather than requiring that the coarse lattice be good for covering, we only require that it be good for quantization.
\section{Preliminaries on Lattice Codes}
\label{sec:Preliminaries}
A lattice $\Lambda$ is a discrete subgroup of $\mathbb{R}^n$ which is closed under reflection and real addition. Any lattice $\Lambda$ in $\mathbb{R}^n$ is spanned by some $n\times n$ matrix $\mathbf{F}$ such that
\begin{align}
\Lambda=\{\mathbf{t}=\mathbf{F}\mathbf{a}:\mathbf{a}\in\mathbb{Z}^n\}.\nonumber
\end{align}
We denote the nearest neighbor quantizer associated with the lattice $\Lambda$ by
\begin{align}
Q_{\Lambda}(\mathbf{x})\triangleq\arg\min_{\mathbf{t}\in\Lambda}\|\mathbf{x}-\mathbf{t}\|.
\label{NNquantizer}
\end{align}
The basic Voronoi region of $\Lambda$, denoted by $\mathcal{V}$, is the set of all points in $\mathbb{R}^n$ which are quantized to the zero vector, where ties in~\eqref{NNquantizer} are broken in a systematic manner.
The modulo operation returns the quantization error w.r.t. the lattice,
\begin{align}
\left[\mathbf{x}\right]\bmod\Lambda\triangleq\mathbf{x}-Q_{\Lambda}(\mathbf{x}),\nonumber
\end{align}
and satisfies the distributive law,
\begin{align}
\big[[\mathbf{x}]\bmod\Lambda+\mathbf{y}\big]\bmod\Lambda=\left[\mathbf{x}+\mathbf{y}\right]\bmod\Lambda.\nonumber
\end{align}
Let $V(\Lambda)$ be the volume of a fundamental cell of $\Lambda$, e.g., the volume of $\mathcal{V}$, and let $\mathbf{U}$ be a random variable uniformly distributed over $\mathcal{V}$.
We define the second moment per dimension associated with $\Lambda$ as
\begin{align}
\sigma^2(\Lambda)\triangleq\frac{1}{n}\mathbb{E}\|\mathbf{U}\|^2=\frac{1}{n}\frac{\int_{\mathcal{V}}\|\mathbf{x}\|^2d\mathbf{x}}{V(\Lambda)}.\nonumber
\end{align}
The normalized second moment (NSM) of a lattice $\Lambda$ is defined by
\begin{align}
G(\Lambda)\triangleq\frac{\sigma^2(\Lambda)}{V(\Lambda)^{\frac{2}{n}}}.\nonumber
\end{align}
Note that this quantity is invariant to scaling of the lattice $\Lambda$.
It is often useful to compare the properties of the Voronoi region $\mathcal{V}$ with those of a ball.
\vspace{1mm}
\begin{definition}
Let
\begin{align}
\mathcal{B}(\mathbf{s},r)\triangleq\left\{\mathbf{x}\in\mathbb{R}^n \ : \ \|\mathbf{x}-\mathbf{s}\|\leq r\right\},\nonumber
\end{align}
denote the closed $n$-dimensional ball with radius $r$ centered at $\mathbf{s}$.
We refer to the volume of an $n$-dimensional ball with unit radius as $V_n$. In general $V\left(\mathcal{B}(\mathbf{s},r)\right)=V_n r^n$. Note that \mbox{$n V_n^{2/n}<2\pi e$} for all $n$~\cite{ConwaySloane}, and
\begin{align}
\lim_{n\rightarrow\infty}n V_n^{\frac{2}{n}}={2\pi e}.\label{VnAsymptotic}
\end{align}
\end{definition}
\vspace{1mm}
The ball $\mathcal{B}(\mathbf{0},r)$ has the smallest second moment per dimension out of all sets in $\mathbb{R}^n$ with volume $V_n r^n$, and it is given by
\begin{align}
\sigma^2\left(\mathcal{B}(\mathbf{0},r) \right)&=\frac{1}{n}\frac{1}{V_n r^n}\int_{\mathbf{x}\in\mathcal{B}(\mathbf{0},r)}\|\mathbf{x}\|^2 d\mathbf{x}\nonumber\\
&=\frac{1}{n}\frac{1}{V_n r^n}\int_0^{r}r'^2 d(V_n r'^n)\nonumber\\
&=\frac{1}{n}\frac{1}{V_n r^n} \frac{nV_n r^{n+2}}{n+2}\nonumber\\
&=\frac{r^2}{n+2}.\label{ballsecondmoment}
\end{align}
It follows that $\mathcal{B}(\mathbf{0},r)$ has the smallest possible NSM
\begin{align}
G\left(\mathcal{B}(\mathbf{0},r)\right)=\frac{\sigma^2\left(\mathcal{B}(\mathbf{0},r)\right)}{V\left(\mathcal{B}(\mathbf{0},r)\right)^{\frac{2}{n}}}
=\frac{1}{n+2}V_n^{-\frac{2}{n}},
\end{align}
which approaches $1/(2\pi e)$ from above as $n\rightarrow\infty$.
Thus, the NSM of any lattice in any dimension satisfies \mbox{$G(\Lambda)\geq 1/(2\pi e)$}.
A sequence of lattices $\Lambda^{(n)}$ with growing dimension is called good for mean squared error (MSE) quantization if
\begin{align}
\lim_{n\rightarrow\infty}G\left(\Lambda^{(n)}\right)=\frac{1}{2\pi e}.\nonumber
\end{align}
We define the effective radius $r_{\text{eff}}(\Lambda)$ as the radius of a ball which has the same volume as $\Lambda$, i.e.,
\begin{align}
r^2_{\text{eff}}(\Lambda)\triangleq\frac{V(\Lambda)^{2/n}}{V_n^{2/n}}.\label{reffdef}
\end{align}
Since $\mathcal{B}(\mathbf{0},r_{\text{eff}}(\Lambda))$ has the smallest second moment of all sets in $\mathbb{R}^n$ with volume $V(\Lambda)$, we have
\begin{align}
\sigma^2\left(\mathcal{B}(\mathbf{0},r_{\text{eff}}(\Lambda))\right)=\frac{r^2_{\text{eff}}(\Lambda)}{n+2}\leq\sigma^2(\Lambda)\nonumber.
\end{align}
Thus,
\begin{align}
r_{\text{eff}}(\Lambda)\leq\sqrt{(n+2)\sigma^2(\Lambda)}.\label{reffbound}
\end{align}
Note that for large $n$ we have
\begin{align}
\frac{r^2_{\text{eff}}(\Lambda)}{n}\approx\frac{V(\Lambda)^{2/n}}{2\pi e}.\nonumber
\end{align}
\vspace{1mm}
A lattice $\Lambda_c$ is said to be nested in $\Lambda_f$ if $\Lambda_c\subset\Lambda_f$. The lattice $\Lambda_c$ is referred to as the coarse lattice and $\Lambda_f$ as the fine lattice. The \emph{nesting ratio} is defined as $\left(V(\Lambda_c)/V(\Lambda_f)\right)^{1/n}$.
\begin{definition}
The operation of coset nearest neighbor decoding with respect to the pair of nested lattices $\Lambda_c\subset\Lambda_f$ is defined by
\begin{align}
g(\mathbf{Y})=\left[Q_{\Lambda_f}(\mathbf{Y})\right]\bmod\Lambda_c.\nonumber
\end{align}
In words, coset nearest neighbor decoding refers to finding the closest lattice point in $\Lambda_f$ and identifying its coset leader by reducing modulo $\Lambda_c$.
\end{definition}
\vspace{1mm}
\begin{definition}
We say that a sequence in $n$ of random noise vectors $\mathbf{Z}^{(n)}$ of length $n$ with (finite) effective variance \mbox{$\sigma^2_{\mathbf{Z}}\triangleq\frac{1}{n}\mathbb{E}\|\mathbf{Z}^{(n)}\|^2$}, is \emph{semi norm-ergodic} if for any $\epsilon>0$, $\delta>0$ and $n$ large enough
\begin{align}
\Pr\left(\mathbf{Z}^{(n)}\notin\mathcal{B}\left(\mathbf{0},\sqrt{(1+\delta)n\sigma^2_{\mathbf{Z}}}\right) \right)\leq\epsilon.\label{normergodicDef}
\end{align}
Note that by the law of large numbers, any i.i.d. noise is semi norm-ergodic. However, even for non i.i.d. noise, the requirement~\eqref{normergodicDef} is not very restrictive.
In the sequel we omit the dimension index, and denote the sequence $\mathbf{Z}^{(n)}$ simply by $\mathbf{Z}$.
\end{definition}
\vspace{2mm}
Next, we define ``good'' pairs of nested lattices. Our definition of the ``goodness'' of nested lattice pairs is different from the one used in~\cite{ErezZamirAWGN}.
\begin{definition}
\label{def:goodpairs}
A sequence of pairs of nested lattices \mbox{$\Lambda^{(n)}_c\subset\Lambda^{(n)}_f$} is called ``good'' if:
\begin{enumerate}
\item The sequence of coarse lattices $\Lambda_c^{(n)}$ is good for MSE quantization.
\item For any lattice point $\mathbf{t}\in\Lambda^{(n)}_f$, and additive semi norm-ergodic noise $\mathbf{Z}$ with effective variance $\sigma^2_{\mathbf{Z}}=\frac{1}{n}\mathbb{E}\|\mathbf{Z}\|^2$
\begin{align}
\lim_{n\rightarrow\infty}\Pr\left(\left[Q_{\Lambda^{(n)}_f}(\mathbf{t}+\mathbf{Z})\right]\bmod\Lambda^{(n)}_c\neq \mathbf{t}\bmod\Lambda^{(n)}_c \right)= 0,\nonumber
\end{align}
provided that\footnote{In~\cite{ErezZamirAWGN} the volume-to-noise ratio (VNR) was defined as $$\mu=V(\Lambda_f^{(n)})^{\frac{2}{n}}/{2\pi e \sigma^2_{\mathbf{Z}}}.$$ Thus, the condition $V(\Lambda_f^{(n)})^{\frac{2}{n}}>{2\pi e \sigma^2_{\mathbf{Z}}}$ is equivalent to $\text{VNR}>1$.}
$$\lim_{n \rightarrow \infty} V(\Lambda_f^{(n)})^{\frac{2}{n}}>{2\pi e \sigma^2_{\mathbf{Z}}}.$$
That is, the error probability under coset nearest neighbor decoding in the presence of semi norm-ergodic additive noise $\mathbf{Z}$ vanishes with $n$ if \mbox{$r_{\text{eff}}(\Lambda^{(n)}_f)>\sqrt{n\sigma^2_{\mathbf{Z}}}$}.
\end{enumerate}
\end{definition}
\subsection{The mod-$\Lambda$ Transmission Scheme}
\label{subsec:EZscheme}
Consider the AWGN channel
$$Y=X+N,$$
where $N\sim \mathcal{N}(0,1)$ and $X$ is subject to the average power constraint $\mathbb{E}(X^2)\leq \mathsf{SNR}$.
We now briefly recall the nested lattices coding scheme proposed in~\cite{ErezZamirAWGN} for communication over this channel.
A pair of nested lattices $\Lambda_c\subset\Lambda_f$ is used in order to construct the codebook $\mathcal{C}=\Lambda_f\cap\mathcal{V}_c$ with rate\footnote{All logarithms in this paper are to the base $2$, and therefore all rates are expressed in bits per (real) channel use.}
\begin{align}
R=\frac{1}{n}\log\left(\frac{V(\Lambda_c)}{V(\Lambda_f)}\right)=\log(\text{nesting ratio}).\nonumber
\end{align}
Each of the $2^{nR}$ messages is mapped to a codeword in $\mathcal{C}$. Assume the transmitter wants to send the message $w$ which corresponds to the codeword $\mathbf{t}\in\mathcal{C}$. It transmits
\begin{align}
\mathbf{X}=[\mathbf{t}-\mathbf{U}]\bmod\Lambda_c,\nonumber
\end{align}
where $\mathbf{U}$ is a random dither that is uniformly distributed over $\mathcal{V}_c$ and is statistically independent of $\mathbf{t}$; it is common randomness known to both the transmitter and the receiver. Due to the Crypto Lemma~\cite[Lemma 1]{ErezZamirAWGN}, $\mathbf{X}$ is also uniformly distributed over $\mathcal{V}_c$ and is statistically independent of $\mathbf{t}$. Thus, the average transmission power is $\nicefrac{1}{n}\mathbb{E}\|\mathbf{X}\|^2=\sigma^2(\Lambda_c)$. In the sequel we assume that $\Lambda_c$ is scaled such that $\sigma^2(\Lambda_c)=\mathsf{SNR}$, which satisfies the power constraint.
The receiver scales its observation by a factor $\alpha>0$ to be specified later, adds back the dither $\mathbf{U}$ and reduces the result modulo the coarse lattice
\begin{align}
\mathbf{Y}_{\text{eff}}&=\left[\alpha \mathbf{Y}+\mathbf{U} \right]\bmod\Lambda_c\nonumber\\
&=\left[\mathbf{X}+\mathbf{U}+(\alpha-1)\mathbf{X}+\alpha\mathbf{N}\right]\bmod\Lambda_c\nonumber\\
&=\left[\mathbf{t}+(\alpha-1)\mathbf{X}+\alpha\mathbf{N}\right]\bmod\Lambda_c\nonumber\\
&=\left[\mathbf{t}+\mathbf{Z}_{\text{eff}}\right]\bmod\Lambda_c,
\label{Yeff}
\end{align}
where
\begin{align}
\mathbf{Z}_{\text{eff}}=(\alpha-1)\mathbf{X}+\alpha\mathbf{N}
\label{Zeff}
\end{align}
is effective noise statistically independent of $\mathbf{t}$ with effective variance
\begin{align}
\sigma_{\text{eff}}^2\triangleq\frac{1}{n}\mathbb{E}\|\mathbf{Z}_{\text{eff}}\|^2=\alpha^2+(1-\alpha)^2\mathsf{SNR}.\label{vareff}
\end{align}
Thus, using nested lattice codes, the original AWGN channel is transformed to an effective modulo-additive channel whose output is a lattice point from $\Lambda_f$ corrupted by effective noise statistically independent of it.
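As a purely illustrative aside (not part of the analysis), the identity~\eqref{Yeff} and the effective variance~\eqref{vareff} can be verified numerically for the simple coarse lattice $\Lambda_c=\beta\mathbb{Z}^n$, whose Voronoi region is a cube and for which $\sigma^2(\Lambda_c)=\beta^2/12$:
\begin{Code}
import numpy as np

rng = np.random.default_rng(1)
n, snr = 100000, 4.0
alpha = snr / (1 + snr)                  # MMSE scaling coefficient
beta = np.sqrt(12 * snr)                 # so that beta**2 / 12 == snr
mod = lambda v: v - beta * np.round(v / beta)   # mod (beta * Z^n)

t = rng.uniform(-beta / 2, beta / 2, n)  # stand-in "codeword"
u = rng.uniform(-beta / 2, beta / 2, n)  # dither, uniform over Voronoi region
x = mod(t - u)                           # transmitted signal, power ~ snr
y = x + rng.standard_normal(n)           # unit-variance AWGN
y_eff = mod(alpha * y + u)
z_eff = mod(y_eff - t)                   # effective noise
print(np.mean(x**2), np.mean(z_eff**2))  # ~ snr and ~ snr / (1 + snr)
\end{Code}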
It was shown in~\cite{ErezZamirAWGN,elz05} that there exist sequences of pairs of nested lattices $\Lambda^{(n)}_c\subset\Lambda^{(n)}_f$ that achieve a good error exponent with coset nearest neighbor decoding using this transformation. As a corollary, it followed that such pairs of nested lattices achieve any rate below the capacity of the AWGN channel $C=\nicefrac{1}{2}\log(1+\mathsf{SNR})$.
Here, we prove a weaker result: we only prove the existence of sequences of pairs $\Lambda^{(n)}_c\subset\Lambda^{(n)}_f$ that achieve a vanishing error probability at any rate below the capacity of the AWGN channel with coset nearest neighbor decoding.
Namely, we show that there exist sequences of pairs that are ``good'' according to Definition~\ref{def:goodpairs}, and that the ``goodness'' conditions from Definition~\ref{def:goodpairs} suffice to achieve any rate below capacity.
Although these ``goodness'' conditions do not suffice for achieving good error exponents, using them instead of those considered in~\cite{ErezZamirAWGN} gives rise to simpler proofs, which is the aim of this paper.
Note that since the coarse lattice is normalized such that $\sigma^2(\Lambda^{(n)}_c)=\mathsf{SNR}$, if $\Lambda^{(n)}_c$ is good for MSE quantization then \mbox{${\frac{1}{n}}\log V\left(\Lambda_c^{(n)}\right)\rightarrow \frac{1}{2}\log (2\pi e\mathsf{SNR})$} as \mbox{$n\rightarrow\infty$}.
In the next section we show that for any lattice density in the range
\begin{align}
\frac{1}{2}\log \left(2\pi e\sigma^2_{\text{eff}}\right)<{\frac{1}{n}}\log V(\Lambda_f^{(n)})<\frac{1}{2}\log (2\pi e\mathsf{SNR})\nonumber
\end{align}
there exists a sequence of ``good'' pairs of nested lattices. We also show that for such sequences of pairs, $\mathbf{Z}_{\text{eff}}$ is semi norm-ergodic. Therefore, any rate satisfying
\begin{align}
R&<\lim_{n\rightarrow\infty}\left[\frac{1}{n}\log V(\Lambda^{(n)}_c)-\frac{1}{n}\log V(\Lambda^{(n)}_f)\right] \nonumber\\
&<\frac{1}{2}\log\left(2\pi e\mathsf{SNR}\right)-\frac{1}{2}\log\left(2\pi e\sigma^2_{\text{eff}}\right)\nonumber\\
&=\frac{1}{2}\log\left(\frac{\mathsf{SNR}}{\sigma^2_{\text{eff}}}\right)\label{capapcity}
\end{align}
is achievable using nested lattice codes.
Setting \mbox{$\alpha=\mathsf{SNR}/(1+\mathsf{SNR})$} in~\eqref{vareff}, which is the linear minimum MSE coefficient for estimating $\mathbf{X}$ from $\mathbf{Y}$, we see that any rate below $C=\nicefrac{1}{2}\log(1+\mathsf{SNR})$ is achievable with nested lattice codes and coset nearest neighbor decoding.
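Indeed, substituting this value of $\alpha$ into~\eqref{vareff} gives
\begin{align}
\sigma^2_{\text{eff}}=\left(\frac{\mathsf{SNR}}{1+\mathsf{SNR}}\right)^2+\left(\frac{1}{1+\mathsf{SNR}}\right)^2\mathsf{SNR}=\frac{\mathsf{SNR}}{1+\mathsf{SNR}},\nonumber
\end{align}
so that the right hand side of~\eqref{capapcity} becomes $\frac{1}{2}\log\left(1+\mathsf{SNR}\right)$.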
\section{Existence Proof}
\label{sec:proofs}
Previous proofs for the existence of capacity achieving pairs of nested lattices used random Construction A, introduced by Loeliger~\cite{loeliger97}, for creating a fine lattice, and then rotated it using a lattice that is good for MSE quantization.\footnote{More precisely, the coarse lattice in the previous proofs was Rogers good, which also implies goodness for MSE quantization.} Here, we take a slightly different approach that is a direct extension of the original approach of~\cite{zs98} to creating nested binary linear codes. We use random Construction A to simultaneously create both the fine and the coarse lattice. Namely, we randomly draw a linear code and lift it to the Euclidean space in order to obtain the fine lattice. The coarse lattice is obtained by lifting a subcode from the same linear code to the Euclidean space. The ensemble of nested lattice pairs is defined as follows.
Let $\gamma=2\sqrt{n\mathsf{SNR}}$, and $p=\xi n^{\frac{3}{2}}$, where $\xi$ is chosen as the largest number in the interval \mbox{$[1/2,1)$} such that $p$ is prime.
For some $\epsilon_1>0$, to be specified later, choose $k_1$ such that\footnote{For ease of exposition we disregard the fact that by this definition $k_1$ may not take an integer value. We can always round $k_1$ to the nearest larger integer with no effect on the derivations in the sequel.}
\begin{align}
\frac{k_1}{n}\log p=\frac{1}{2}\log\left(\frac{4}{V_n^{2/n}}\right)+\frac{\epsilon_1}{2}.\label{rdcond}
\end{align}
\begin{enumerate}
\item Draw a generating $k\times n$ ($k>k_1$), matrix $\mathbf{G}_f$ whose entries are i.i.d. and uniform over the elements of the prime field $\mathbb{Z}_p=\{0,1,\cdots,p-1\}$. Let $\mathbf{G}_c$ be a $k_1\times n$ matrix whose rows are the first $k_1$ rows of $\mathbf{G}_f$.
\item Define the discrete codebooks
\begin{align}
\mathcal{C}_c=\left\{\mathbf{x}\in\mathbb{Z}_p^n \ : \ \mathbf{x}=[\mathbf{w}^T\mathbf{G}_c]\bmod p \ \ \ \mathbf{w}\in\mathbb{Z}_p^{k_1} \right\}\nonumber
\end{align}
and
\begin{align}
\mathcal{C}_f=\left\{\mathbf{x}\in\mathbb{Z}_p^n \ : \ \mathbf{x}=[\mathbf{w}^T\mathbf{G}_f]\bmod p \ \ \ \mathbf{w}\in\mathbb{Z}_p^k \right\}.\nonumber
\end{align}
\item Apply Construction A to lift $\mathcal{C}_c$ and $\mathcal{C}_f$ to $\mathbb{R}^n$ and form the lattices
\begin{align}
\Lambda_c=\gamma p^{-1}\mathcal{C}_c+\gamma\mathbb{Z}^n, \ \ \ \Lambda_f=\gamma p^{-1}\mathcal{C}_f+\gamma\mathbb{Z}^n.\nonumber
\end{align}
\end{enumerate}
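As a purely illustrative aside (ours, not part of the proof), the construction is easily mimicked numerically with toy parameters that violate the asymptotic choices of $p$ and $k_1$ above:
\begin{Code}
import numpy as np

rng = np.random.default_rng(0)
n, k, k1, p = 8, 4, 2, 7              # toy parameters, illustration only
gamma = 2 * np.sqrt(n * 1.0)          # SNR = 1

G_f = rng.integers(0, p, size=(k, n)) # i.i.d. uniform over Z_p
G_c = G_f[:k1]                        # first k1 rows generate the coarse code

def lattice_point(w, G):
    # lift the codeword [w^T G] mod p to R^n (Construction A);
    # returns one representative of the coset gamma/p * c + gamma * Z^n
    c = (w @ G) % p
    return gamma / p * c

t_fine = lattice_point(rng.integers(0, p, size=k), G_f)    # point of Lambda_f
t_coarse = lattice_point(rng.integers(0, p, size=k1), G_c) # point of Lambda_c
\end{Code}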
\vspace{1mm}
\begin{remark}
We have chosen to specify our ensemble in terms of the linear codes' generating matrices
\begin{align}
\mathbf{G}_f=\left[
\begin{array}{c}
\mathbf{G}_c \\
---\\
\mathbf{G}' \\
\end{array}
\right].\nonumber
\end{align}
We could have equally defined the ensemble using the linear codes' parity check matrices
\begin{align}
\mathbf{H}_c=\left[
\begin{array}{c}
\mathbf{H}_f \\
---\\
\mathbf{H}' \\
\end{array}
\right],\nonumber
\end{align}
as done in~\cite{zs98,nestedLattices} for ensembles of nested binary linear codes.
\end{remark}
\vspace{1mm}
Clearly, in this construction $\Lambda_c\subset\Lambda_f$. Moreover, if the linear code's generating matrix $\mathbf{G}_f$ is full-rank over $\mathbb{Z}_p$, which by construction implies that $\mathbf{G}_c$ is also full-rank, we have $V(\Lambda_c)=\gamma^n p^{-k_1}$ and $V(\Lambda_f)=\gamma^n p^{-k}$. Thus, the nesting ratio is $p^{\frac{k-k_1}{n}}$ and the rate of the induced codebook \mbox{$\mathcal{C}=\Lambda_f\cap\mathcal{V}_c$} is therefore \mbox{$R=\frac{k-k_1}{n}\log p$}. The probability that $\mathbf{G}_f$ is not full-rank can be bounded~\cite{elz05} by
\begin{align}
\Pr\left(\mathop{\mathrm{rank}}(\mathbf{G}_f)<k\right)<p^{k-n},\nonumber
\end{align}
which vanishes as long as $k<\beta n$ for some $0<\beta<1$. We take $k$ to satisfy this restriction, which ensures that $\mathbf{G}_f$ (and hence also $\mathbf{G}_c$) is full-rank with high probability. Thus, in the sequel we will assume that $\mathbf{G}_f$ is indeed full-rank. Otherwise, the obtained nested lattice pair is regarded as ``bad''.
Next, we show that almost all members of the ensemble satisfy the first ``goodness'' condition from Definition~\ref{def:goodpairs}, and that almost all members of the ensemble satisfy the second condition of Definition~\ref{def:goodpairs}. It follows that almost all pairs of nested lattices in the ensemble satisfy both conditions simultaneously, and are therefore ``good''. We then show that for coarse lattices $\Lambda_c$ which are good for MSE quantization, a random dither uniformly distributed over the Voronoi region is semi norm-ergodic. This in turn implies that for such lattices, the effective noise $\mathbf{Z}_{\text{eff}}$ from the mod-$\Lambda$ transmission scheme is also semi norm-ergodic. Finally, we conclude that with ``good'' pairs of nested lattices, the capacity of the AWGN channel can be achieved using the mod-$\Lambda$ transmission scheme.
Before going into the proofs we need to introduce some more notation. Let $\mathop{\mathrm{CUBE}}\triangleq[-1/2,1/2)^n$ denote the unit cube centered at zero. Also, denote the operation of reducing each component of $\mathbf{x}\in\mathbb{R}^n$ modulo $\gamma$ by $\mathbf{x}^*\triangleq[\mathbf{x}]\bmod \gamma\mathbb{Z}^n$. If $\mathcal{S}$ is a set of points in $\mathbb{R}^n$, $\mathcal{S}^*$ is the set obtained by reducing all points in $\mathcal{S}$ modulo $\gamma \mathbb{Z}^n$. If $\mathcal{S}$ and $\mathcal{T}$ are sets, $\mathcal{S}+\mathcal{T}$ is their Minkowski sum.
In the sequel, we use the following lemma, which follows from simple geometric arguments and is illustrated in Figure~\ref{fig:spheregrid}.
\input{spheregridfig}
\begin{lemma}
\label{lem:gridsphere}
For any $\mathbf{s}\in\mathbb{R}^n$ and $r>0$ the number of points of $\mathbb{Z}^n$ inside $\mathcal{B}(\mathbf{s},r)$ can be bounded as
\begin{align}
\left(\max\left\{r-\frac{\sqrt{n}}{2},0\right\}\right)^n V_n\leq \left|\mathbb{Z}^n\cap\mathcal{B}(\mathbf{s},r)\right|\leq \left(r+\frac{\sqrt{n}}{2}\right)^n V_n\nonumber
\end{align}
\end{lemma}
\begin{proof}
Let $\mathcal{S}\triangleq\left(\mathbb{Z}^n\cap\mathcal{B}(\mathbf{s},r)\right)+\mathop{\mathrm{CUBE}}$, and note that $\left|\mathbb{Z}^n\cap\mathcal{B}(\mathbf{s},r)\right|=\mathrm{Vol}(\mathcal{S})$.
We have
\begin{align}
\mathcal{B}\left(\mathbf{s},r-\frac{\sqrt{n}}{2}\right)\subseteq \mathcal{S}.\label{inclusion}
\end{align}
To see this, note that any $\mathbf{x}\in\mathcal{B}\left(\mathbf{s},r-\frac{\sqrt{n}}{2}\right)$ lies inside \mbox{$\mathbf{a}+\mathop{\mathrm{CUBE}}$} for some $\mathbf{a}\in\mathbb{Z}^n$, and for this $\mathbf{a}$ the inequality \mbox{$\|\mathbf{a}-\mathbf{x}\|\leq\sqrt{n}/2$} holds. Applying the triangle inequality gives
\begin{align}
\|\mathbf{a}-\mathbf{s}\|=\|(\mathbf{a}-\mathbf{x})+(\mathbf{x}-\mathbf{s})\|\leq\|(\mathbf{a}-\mathbf{x})\|+\|(\mathbf{x}-\mathbf{s})\|\leq r,\nonumber
\end{align}
which implies~\eqref{inclusion}.
On the other hand,
\begin{align}
\mathcal{S}\subseteq \mathcal{B}(\mathbf{s},r)+\mathop{\mathrm{CUBE}}\subseteq \mathcal{B}(\mathbf{s},r)+\mathcal{B}(0,\frac{\sqrt{n}}{2})=\mathcal{B}(\mathbf{s},r+\frac{\sqrt{n}}{2}).\nonumber
\end{align}
Thus,
\begin{align}
\mathrm{Vol}\left(\mathcal{B}\left(\mathbf{s},r-\frac{\sqrt{n}}{2}\right)\right)\leq \mathrm{Vol}\left(\mathcal{S}\right)\leq\mathrm{Vol}\left(\mathcal{B}\left(\mathbf{s},r+\frac{\sqrt{n}}{2}\right)\right).\nonumber
\end{align}
\end{proof}
\subsection{Goodness for MSE Quantization}
\label{subsec:MMSE}
In this subsection we prove the following lemma, which follows from rate-distortion considerations.
\begin{lemma}
\label{lem:MSE}
For any $\delta_1>0$ and $n$ large enough, almost all lattices $\Lambda_c$ in the defined Construction A ensemble satisfy
\begin{align}
G(\Lambda_c)<(1+\delta_1)\frac{1}{2\pi e}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
We begin by bounding the average MSE distortion the ensemble achieves, and then we connect it to the normalized second moment. For any $\mathbf{x}\in\mathbb{R}^n$, define
\begin{align}
d(\mathbf{x},\Lambda_c)&\triangleq\frac{1}{n}\min_{\lambda\in\Lambda_c}\|\mathbf{x}-\lambda\|^2\nonumber\\
&=\frac{1}{n}\min_{\mathbf{a}\in\mathbb{Z}^n,\mathbf{c}\in\mathcal{C}_c}\|\mathbf{x}-\gamma p^{-1}\mathbf{c}-\gamma\mathbf{a}\|^2\nonumber\\
&=\frac{1}{n}\min_{\mathbf{c}\in\mathcal{C}_c}\|(\mathbf{x}-\gamma p^{-1}\mathbf{c})^*\|^2.\nonumber
\end{align}
Note that $d(\mathbf{x},\Lambda_c)\leq \gamma^2/4$ for any $\mathbf{x}\in\mathbb{R}^n$, regardless of $\mathcal{C}_c$.
For any $\mathbf{w}\in\mathbb{Z}_p^{k_1}\setminus\{\mathbf{0}\}$, define the random vector \mbox{$\mathbf{C}(\mathbf{w})=\left[\mathbf{w}^T\mathbf{G}_c\right]\bmod p$}, and note that $\mathbf{C}(\mathbf{w})$ is uniformly distributed over $\mathbb{Z}_p^n$. For all \mbox{$\mathbf{w}\in\mathbb{Z}_p^{k_1}\setminus\{\mathbf{0}\}$} and $\mathbf{x}\in\mathbb{R}^n$, we have
\begin{align}
\varepsilon&\triangleq\Pr\left(\frac{1}{n}\|\left(\mathbf{x}-\gamma p^{-1}\mathbf{C}(\mathbf{w})\right)^*\|^2\leq \mathsf{SNR}\right)\nonumber\\
&=p^{-n}\left|(\gamma p^{-1}\mathbb{Z}_p^n)\bigcap\mathcal{B}^*(\mathbf{x},\sqrt{n\mathsf{SNR}}) \right|\nonumber\\
&=p^{-n}\left|(\gamma p^{-1}\mathbb{Z}^n)\bigcap\mathcal{B}(\mathbf{x},\sqrt{n\mathsf{SNR}}) \right|\label{modisok}\\
&\geq p^{-n}V_n\left(p\gamma^{-1}\sqrt{n\mathsf{SNR}}-\frac{\sqrt{n}}{2}\right)^n\label{coveringPeBoundTmp}\\
&=V_n (\gamma^{-2}n\mathsf{SNR})^{\frac{n}{2}}\left(1-\frac{\gamma}{2p\sqrt{\mathsf{SNR}}}\right)^n\nonumber\\
&=V_n \left(\frac{1}{4}\right)^{\frac{n}{2}}\left(1-\frac{\sqrt{n}}{p}\right)^n,\label{coveringPeBound}
\end{align}
where~\eqref{modisok} follows since $\gamma=2\sqrt{n\mathsf{SNR}}$, and hence, for any two distinct points $\mathbf{b}_1,\mathbf{b}_2\in\mathcal{B}(\mathbf{x},\sqrt{n\mathsf{SNR}})$ we have $\mathbf{b}_1^*\neq\mathbf{b}_2^*$ (that is, the ball $\mathcal{B}(\mathbf{x},\sqrt{n\mathsf{SNR}})$ is contained in a cube with side $\gamma$), and~\eqref{coveringPeBoundTmp} follows from Lemma~\ref{lem:gridsphere}. Substituting $p=\xi n^{\frac{3}{2}}$ gives
\begin{align}
\varepsilon&>V_n 2^{-n}\left(1-\frac{1}{\xi n}\right)^n\nonumber\\
&>V_n 2^{-n}\left(1-\frac{2}{n}\right)^n\nonumber\\
&>\frac{1}{n^2}V_n 2^{-n},\label{coveringPeBound2}
\end{align}
where~\eqref{coveringPeBound2} holds for all $n>4$.
Let $M\triangleq p^{k_1}-1$. Label each of the vectors $\mathbf{w}\in\mathbb{Z}_p^{k_1}\setminus\{\mathbf{0}\}$ by an index \mbox{$i=1,\ldots,M$}, and refer to its corresponding codeword as $\mathbf{C}_i$. Define the indicator random variable related to the point $\mathbf{x}\in\mathbb{R}^n$
\begin{align}
\chi_i=\begin{cases}
1 & \text{if }\frac{1}{n}\|\left(\mathbf{x}-\gamma p^{-1}\mathbf{C}_i\right)^*\|^2\leq \mathsf{SNR} \\
0 & \text{otherwise}
\end{cases}.\nonumber
\end{align}
Since each $\chi_i$ occurs with probability $\varepsilon$, we have
\begin{align}
\Pr\bigg(\sum_{i=1}^M\chi_i=0\bigg)&=\Pr\left(\frac{1}{M}\sum_{i=1}^M\chi_i-\varepsilon=-\varepsilon\right)\nonumber\\
&\leq\Pr\left(\left|\frac{1}{M}\sum_{i=1}^M\chi_i-\varepsilon\right|\geq\varepsilon\right)\nonumber\\
&\leq\frac{\mathrm{Var}\left(\frac{1}{M}\sum_{i=1}^M\chi_i\right)}{\varepsilon^2},\label{chebyshev}
\end{align}
where the last inequality follows from Chebyshev's inequality. In order to further bound the variance term from~\eqref{chebyshev}, we note that $\mathbf{C}(\mathbf{w}_1)$ and $\mathbf{C}(\mathbf{w}_2)$ are statistically independent unless \mbox{$\mathbf{w}_1=[a\mathbf{w}_2]\bmod p$} for some $a\in\mathbb{Z}_p$. Therefore, each $\chi_i$ is statistically independent of all but $p$ different $\chi_j$'s. Thus,
\begin{align}
\mathrm{Var}\left(\frac{1}{M}\sum_{i=1}^M\chi_i\right)&=\frac{1}{M^2}\sum_{i=1}^M\sum_{j=1}^M\mathrm{Cov}(\chi_i,\chi_j)\nonumber\\
&\leq \frac{Mp\varepsilon}{M^2}.\nonumber
\end{align}
Substituting into~\eqref{chebyshev} and using~\eqref{coveringPeBound2}, we see that for any \mbox{$\mathbf{x}\in\mathbb{R}^n$}
\begin{align}
\Pr\big(d(\mathbf{x},\Lambda_c)>\mathsf{SNR}\big)&=\Pr\bigg(\sum_{i=1}^M\chi_i=0\bigg)\nonumber\\
&<\frac{p}{M\varepsilon}\nonumber\\
&<n^{\frac{3}{2}}\frac{1}{p^{k_1}-1}n^2 2^n V_n^{-1}\nonumber\\
&<2n^{\frac{7}{2}} p^{-k_1}2^n V_n^{-1}.
\end{align}
It follows that for any distribution on $\mathbf{X}$ we have
\begin{align}
\mathbb{E}&_{\mathbf{X},\Lambda_c}\left(d(\mathbf{X},\Lambda_c)\right)\nonumber\\
&\leq\mathsf{SNR}\Pr\left(d(\mathbf{X},\Lambda_c)\leq\mathsf{SNR}\right)
+\frac{\gamma^2}{4}\Pr\left(d(\mathbf{X},\Lambda_c)>\mathsf{SNR}\right)\nonumber\\
&\leq\mathsf{SNR}\left(1+2n^{\frac{9}{2}}p^{-k_1}V_n^{-1}2^n\right)\nonumber\\
&=\mathsf{SNR}\left(1+2n^{\frac{9}{2}} 2^{-n\left[\frac{k_1}{n}\log p-\frac{1}{2}\log\left(\frac{4}{V_n^{2/n}}\right) \right]}\right).\nonumber
\end{align}
Our choice of $k_1$, as given in~\eqref{rdcond}, ensures that the upper bound on the distortion averaged over $\mathbf{X}$ and over the ensemble of coarse lattices $\Lambda_c$ becomes arbitrarily close to $\mathsf{SNR}$ as $n$ increases.
Since this is true for all distributions on $\mathbf{X}$, we may take $\mathbf{X}\sim\mathop{\mathrm{Unif}}\left(\gamma[0,1)^n\right)$. Let $\mathbf{U}$ be a random variable uniformly distributed over the Voronoi region $\mathcal{V}_c$ of a lattice $\Lambda_c$ randomly drawn from the ensemble. By construction, for any lattice $\Lambda_c$ in the defined ensemble \mbox{$\left[\gamma p^{-1}\mathcal{C}_c+\mathcal{V}_c\right]^*=\gamma[0,1)^n$}. Moreover, reducing the set \mbox{$\gamma p^{-1}\mathcal{C}_c+\mathcal{V}_c$} modulo $\gamma\mathbb{Z}^n$ does not change its volume. Therefore
\begin{align}
\mathbb{E}_{\Lambda_c}\left(\sigma^2(\Lambda_c)\right)=\mathbb{E}_{\mathbf{U},\Lambda_c}\left(\frac{1}{n}\|\mathbf{U}\|^2\right)
=\mathbb{E}_{\mathbf{X},\Lambda_c}\left(d(\mathbf{X},\Lambda_c)\right).\nonumber
\end{align}
It follows that
\begin{align}
\lim_{n\rightarrow\infty}\mathbb{E}_{\Lambda_c}\left(\sigma^2(\Lambda_c)\right)\leq\mathsf{SNR}.\nonumber
\end{align}
The normalized volume of a fundamental cell of $\Lambda_c$ is lower bounded by
\begin{align}
V(\Lambda_c)^{\frac{2}{n}} & \geq \left(\gamma^n p^{-k_1}\right)^\frac{2}{n} \nonumber \\
& =4n\mathsf{SNR} p^{-2\frac{k_1}{n}} \nonumber \\
& =2^{-\epsilon_1} n V_n^{\frac{2}{n}}\mathsf{SNR},\label{vlambdac}
\end{align}
with equality if and only if $\mathbf{G}_c$ is full-rank.
This implies that
\begin{align}
\lim_{n\rightarrow\infty}\mathbb{E}_{\Lambda_c}\left(G(\Lambda_c^{(n)}) \right)&=\lim_{n\rightarrow\infty}\mathbb{E}_{\Lambda_c}\left(\frac{\sigma^2(\Lambda_c)}{V(\Lambda_c)^{\frac{2}{n}}}\right)\nonumber\\
&\leq\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\Lambda_c}\left(\sigma^2(\Lambda_c)\right)}{2^{-\epsilon_1} n V_n^{\frac{2}{n}}\mathsf{SNR}}\nonumber\\
&=2^{\epsilon_1}\lim_{n\rightarrow\infty} \frac{1}{n} V_n^{-\frac{2}{n}}\nonumber\\
&=2^{\epsilon_1}\frac{1}{2\pi e}.\label{meannsm}
\end{align}
The normalized second moment of any set of points in $\mathbb{R}^n$ cannot be smaller than that of an $n$-dimensional ball, which approaches $1/(2\pi e)$ as $n$ increases. Therefore, for all lattices in the ensemble \mbox{$G(\Lambda_c)\geq 1/(2\pi e)$}. Applying Markov's inequality to the non-negative random variable \mbox{$T(\Lambda_c)=G(\Lambda_c)-1/(2\pi e)$} shows that for any $m>1$ at least a fraction of $(m-1)/m$ of the sequences of lattices in the ensemble satisfy
\begin{align}
\lim_{n\rightarrow\infty}G(\Lambda^{(n)}_c)\leq\left(1+m(2^{\epsilon_1}-1)\right)\frac{1}{2\pi e}.\nonumber
\end{align}
Choosing $\epsilon_1=\log(1+\delta_1/(2m))$ gives
\begin{align}
\lim_{n\rightarrow\infty}G(\Lambda^{(n)}_c)\leq\left(1+\delta_1/2\right)\frac{1}{2\pi e},\nonumber
\end{align}
for at least a fraction of $(m-1)/m$ of the sequences of lattices in the ensemble. Taking $m$ as large as desired establishes the lemma.
Note that for $m>2\log(e)$ we have $\epsilon_1<\delta_1$. We will use this fact in Subsection~\ref{subsec:proofsum}.
\end{proof}
\subsection{Goodness for Coding Under Nearest Neighbor Coset Decoding}
\label{subsec:NN}
The next lemma shows that most nested lattice pairs in the ensemble achieve arbitrary low error probabilities in the presence of additive semi norm-ergodic noise as long as their point density is not too high.
\begin{lemma}
\label{lem:codinggoodness}
Let $\mathbf{Z}$ be an additive semi norm-ergodic noise with effective variance \mbox{$\sigma^2_{\mathbf{Z}}=\frac{1}{n}\mathbb{E}\|\mathbf{Z}\|^2$}. For any \mbox{$\mathbf{t}\in\Lambda_f$}, \mbox{$\epsilon>0$}, \mbox{$\delta>0$} and $n$ large enough, for almost all lattices in the ensemble, the error probability under coset nearest neighbor decoding is bounded by
\begin{align}
P_e\triangleq\Pr\left(\left[Q_{\Lambda_f}(\mathbf{t}+\mathbf{Z})\right]\bmod\Lambda_c\neq \mathbf{t}\bmod\Lambda_c \right)\leq\epsilon,\nonumber
\end{align}
provided that
\begin{align}
\frac{V(\Lambda_f)^{2/n}}{2\pi e\sigma^2_{\mathbf{Z}}}>1+\delta.\nonumber
\end{align}
\end{lemma}
An error event under coset nearest neighbor decoding has two possible sources. Either the additive noise $\mathbf{Z}$ is too large, which rarely happens since it is semi norm-ergodic, or there are competing codewords which are ``too close'' to the transmitted codeword $\mathbf{t}$. Without loss of generality, we may assume that the zero codeword was transmitted. Before proving Lemma~\ref{lem:codinggoodness}, we would first like to upper bound the probability that a competing codeword (i.e., a codeword that does not belong to the coset of the zero codeword $\Lambda_c$) falls inside a ball centered at the origin. Rather than computing this probability directly, we upper bound the probability of a larger event, namely the event that a lattice point from \mbox{$\Lambda_f\setminus\gamma\mathbb{Z}^n$} (rather than \mbox{$\Lambda_f\setminus\Lambda_c$}) falls inside a ball centered at zero. This bound is given in the following proposition, whose proof follows from straightforward volume considerations.
\begin{proposition}
\label{prop:impersonation}
Let $\mathbf{Z}$ be some $n$-dimensional random vector with effective variance \mbox{$\sigma^2_{\mathbf{Z}}=\frac{1}{n}\mathbb{E}\|\mathbf{Z}\|^2$} and let \mbox{$r_{\mathbf{Z}}=\sqrt{(1+\rho)n\sigma^2_{\mathbf{Z}}}$} for some $\rho>0$. Then for any $\epsilon>0$, $\rho>0$ and $n$ large enough, for almost all lattices in the ensemble
\begin{align}
\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right)<\epsilon,\nonumber
\end{align}
provided that
\begin{align}
\frac{V(\Lambda_f)^{2/n}}{2\pi e(1+\rho)\sigma^2_{\mathbf{Z}}}>1+\rho.\nonumber
\end{align}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:impersonation}]
We show that the average probability over the ensemble vanishes with $n$, and therefore for almost all members of the ensemble this probability is small. Let $\mathbbm{1}(\mathcal{A})$ be the indicator function of the event $\mathcal{A}$.
\begin{align}
&\mathbb{E}_{\Lambda_f}\left(\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right)\right)\nonumber\\
&=\mathbb{E}_{\Lambda_f}\mathbb{E}_{\mathbf{Z}}\left(\mathbbm{1}\left(\left(\gamma p^{-1}\mathcal{C}_f\setminus{\mathbf{0}}\right)\bigcap\mathcal{B}^*(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right) \ \big| \ \Lambda_f\right)\nonumber\\
&=\mathbb{E}_{\mathbf{Z}}\mathbb{E}_{\Lambda_f}\left(\mathbbm{1}\left(\left(\gamma p^{-1}\mathcal{C}_f\setminus{\mathbf{0}}\right)\bigcap\mathcal{B}^*(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right) \ \big| \ \mathbf{Z}\right)\nonumber\\
&=\mathbb{E}_{\mathbf{Z}}\Pr\left(\left(\gamma p^{-1}\mathcal{C}_f\setminus{\mathbf{0}}\right)\bigcap\mathcal{B}^*(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset \ \big| \ \mathbf{Z}\right).
\end{align}
Since each codeword in $\mathcal{C}_f\setminus\{\mathbf{0}\}$ is uniformly distributed over $\mathbb{Z}_p^n$, and there are fewer than $p^k$ such codewords (namely, $p^k-1$), applying the union bound gives
\begin{align}
&\mathbb{E}_{\Lambda_f}\left(\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right)\right)\nonumber\\
&\leq \mathbb{E}_{\mathbf{Z}}\left(p^{k-n}\cdot\left|\gamma p^{-1}\mathbb{Z}_p^n\bigcap\mathcal{B}^*(\mathbf{Z},r_{\mathbf{Z}})\right| \ \bigg| \ \mathbf{Z} \right)\nonumber\\
&\leq \mathbb{E}_{\mathbf{Z}}\left(p^{k-n}\cdot\left|\gamma p^{-1}\mathbb{Z}^n\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\right| \ \bigg| \ \mathbf{Z} \right)\nonumber\\
&\leq p^{k-n}V_n\left(\frac{p}{\gamma}r_{\mathbf{Z}}+\frac{\sqrt{n}}{2}\right)^n\label{intersectionbound2}\\
&=p^k\gamma^{-n}V_n r_{\mathbf{Z}}^n\left(1+\frac{\gamma\sqrt{n}}{2p \ r_{\mathbf{Z}}}\right)^n\nonumber\\
&\leq p^k\gamma^{-n}V_n r_{\mathbf{Z}}^n\left(1+\frac{1}{n}\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}\right)^n\nonumber\\
&<\frac{V_n((1+\rho)n\sigma^2_{\mathbf{Z}})^{n/2}}
{\gamma^n p^{-k}} e^{\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}}\nonumber\\
&=\left(\frac{nV_n^{2/n}(1+\rho)\sigma^2_{\mathbf{Z}}}
{(\gamma^n p^{-k})^{2/n}} \right)^{n/2} e^{\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}}\nonumber\\
&<\left(\frac{2\pi e(1+\rho)\sigma^2_{\mathbf{Z}}}
{(\gamma^n p^{-k})^{2/n}} \right)^{n/2} e^{\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}}\label{Vnbound}
\end{align}
where~\eqref{intersectionbound2} follows from Lemma~\ref{lem:gridsphere} and~\eqref{Vnbound} from the fact that \mbox{$nV_n^{2/n}<2\pi e$}. If $\mathbf{G}_f$ is full-rank, as is the case for almost all lattices in the ensemble, we have
\begin{align}
&\mathbb{E}_{\Lambda_f}\left(\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right)\right)\nonumber\\
&<\left(\frac{2\pi e(1+\rho)\sigma^2_{\mathbf{Z}}}
{V(\Lambda_f)^{2/n}} \right)^{n/2} e^{\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}}\nonumber\\
&<(1+\rho)^{-\frac{n}{2}}e^{\sqrt{\frac{4\mathsf{SNR}}{\sigma^2_{\mathbf{Z}}}}},\nonumber
\end{align}
which vanishes for large $n$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:codinggoodness}]
We upper bound the error probability of the nearest neighbor coset decoder using the \emph{bounded distance} coset decoder, which is inferior. More precisely, we analyze the performance of a decoder that finds all lattice points of $\Lambda_f$ within a Euclidean distance $r$ from \mbox{$\mathbf{t}+\mathbf{Z}$}, and reduces them modulo the coarse lattice $\Lambda_c$. Namely, the decoder's output is the set \mbox{$\left[\Lambda_f\bigcap\mathcal{B}(\mathbf{t}+\mathbf{Z},r)\right]\bmod\Lambda_c$}. Note that all points in this set are codewords in \mbox{$\mathcal{C}=\Lambda_f\cap\mathcal{V}_c$}.
If there is a unique codeword in this set, this is the decoded codeword. Otherwise, the decoder declares an error. It is easy to see that regardless of the choice of $r$, the nearest neighbor coset decoder makes the correct decision whenever the bounded distance coset decoder does. Therefore, the error probability of the nearest neighbor coset decoder is upper bounded by that of the bounded distance coset decoder.
The bounded distance coset decoder makes an error only if at least one of the events
\begin{align}
\mathbf{t}\notin\mathcal{B}(\mathbf{t}+\mathbf{Z},r),\label{event1}
\end{align}
or
\begin{align}
\left(\Lambda_f\setminus(\mathbf{t}+\Lambda_c)\right)\bigcap\mathcal{B}(\mathbf{t}+\mathbf{Z},r)\neq\emptyset\label{event2}
\end{align}
occurs. The event in~\eqref{event1} is equivalent to $\mathbf{Z}\notin\mathcal{B}(0,r)$, whereas the event in~\eqref{event2} is equivalent to $\left(\Lambda_f\setminus\Lambda_c\right)\bigcap\mathcal{B}(\mathbf{Z},r)\neq\emptyset$ which is included in the event $\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r)\neq\emptyset$.
Therefore, for any $r>0$, we may apply the union bound, which gives
\begin{align}
&P_e\leq \Pr\left(\mathbf{Z}\notin\mathcal{B}(0,r)\right)+\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r)\neq\emptyset\right).\nonumber
\end{align}
Let $\rho=\sqrt{1+\delta}-1>0$ and note that
\begin{align}
\frac{V(\Lambda_f)^{2/n}}{2\pi e\sigma^2_{\mathbf{Z}}}>1+\delta\nonumber
\end{align}
implies
\begin{align}
\frac{V(\Lambda_f)^{2/n}}{2\pi e(1+\rho)\sigma^2_{\mathbf{Z}}}>1+\rho.\nonumber
\end{align}
Thus, setting \mbox{$r=r_{\mathbf{Z}}=\sqrt{(1+\rho)n\sigma^2_{\mathbf{Z}}}$}, we have
\begin{align}
&P_e\leq \Pr\left(\mathbf{Z}\notin\mathcal{B}(0,r_{\mathbf{Z}})\right)\nonumber\\
&\ \ \ +\Pr\left(\left(\Lambda_f\setminus\gamma\mathbb{Z}^n\right)\bigcap\mathcal{B}(\mathbf{Z},r_{\mathbf{Z}})\neq\emptyset\right).\label{perepslion}
\end{align}
The first summand in~\eqref{perepslion} can be made smaller than $\epsilon/2$ for $n$ large enough, since $\mathbf{Z}$ is semi norm-ergodic. The second summand is smaller than $\epsilon/2$ for almost all members in the ensemble, by Proposition~\ref{prop:impersonation}. It follows that for almost all members of the ensemble $P_e<\epsilon$.
\end{proof}
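As an illustrative aside (not part of the proof), the following Python sketch estimates the coset error probability $P_e$ of Lemma~\ref{lem:codinggoodness} by Monte Carlo for a toy one-dimensional nested pair \mbox{$\Lambda_c=q\gamma\mathbb{Z}\subset\Lambda_f=\gamma\mathbb{Z}$} with Gaussian noise. The values of $\gamma$, $q$ and the noise standard deviation are arbitrary choices, and in one dimension the asymptotic guarantees of the lemma of course do not apply; the sketch only makes the decoding rule and the error event concrete.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma, q, sigma = 1.0, 16, 0.2     # toy parameters (arbitrary choices)
n_trials = 100_000

# codewords: coset representatives of Lambda_f = gamma*Z in Lambda_c = q*gamma*Z
t = gamma * rng.integers(0, q, n_trials)
z = sigma * rng.standard_normal(n_trials)       # additive Gaussian noise

decoded = gamma * np.round((t + z) / gamma)     # Q_{Lambda_f}(t + z)
# coset error: decoded point differs from t by a point outside Lambda_c
err = np.mod(np.rint((decoded - t) / gamma).astype(int), q) != 0
print("estimated P_e:", err.mean())             # close to 2*Q(gamma/(2*sigma))
\end{verbatim}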
\subsection{Mixture Noise Is Semi Norm-Ergodic}
\label{subsec:mixture}
Our aim is to show that a mixture noise composed of AWGN and a dither from a lattice that is good for MSE quantization, as appears in the mod-$\Lambda$ channel \eqref{Yeff}, is semi norm-ergodic. First, we show that if $\Lambda_c$ is good for MSE quantization, i.e., if its normalized second moment is close to $1/(2\pi e)$, then a random dither uniformly distributed over $\mathcal{V}_c$ is semi norm-ergodic. To that end, we first prove the following lemma, which is a simple extension of~\cite{ez02techreport}.
\vspace{1mm}
\begin{lemma}
\label{lem:effrad}
Let $\mathcal{S}\subset\mathbb{R}^n$ be a set of points with volume $V(\mathcal{S})$ and normalized second moment
\begin{align}
G(\mathcal{S})=\frac{1}{nV(\mathcal{S})}\frac{\int_{\mathcal{S}}\|\mathbf{x}\|^2d\mathbf{x}}{V(\mathcal{S})^{\frac{2}{n}}}.\nonumber
\end{align}
Let $r_{\text{eff}}$ be the radius of an $n$-dimensional ball with the same volume as $V(\mathcal{S})$, i.e., \mbox{$V(\mathcal{S})=V_n r_{\text{eff}}^n$}. For any $0<\epsilon<1$ define
\begin{align}
r_{\epsilon}\triangleq\sqrt{\frac{2\pi e G(\mathcal{S})-\frac{n}{n+2}(1-\epsilon)^{1+\frac{2}{n}}}{\epsilon}} r_{\text{eff}}.\nonumber
\end{align}
The probability that a random variable \mbox{$\mathbf{U}\sim\mathop{\mathrm{Unif}}(\mathcal{S})$} leaves a ball with radius $r_{\epsilon}$ is upper bounded by
\begin{align}
\Pr\left(\mathbf{U}\notin \mathcal{B}(\mathbf{0},r_{\epsilon})\right)\leq\epsilon.\nonumber
\end{align}
\end{lemma}
\begin{proof}
Let $\tilde{r}_{\epsilon}$ be the radius of a ball that contains exactly a fraction of $1-\epsilon$ of the volume of $\mathcal{S}$, i.e.,
\begin{align}
\mathrm{Vol}\left(\mathcal{S}\bigcap\mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon}) \right)=(1-\epsilon)V(\mathcal{S}).\nonumber
\end{align}
Clearly, $\Pr\left(\mathbf{U}\notin \mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon})\right)=\epsilon$. In order to establish the lemma we have to show that $\tilde{r}_{\epsilon}<r_{\epsilon}$.
To that end, we write
\begin{align}
n G(\mathcal{S}) V(\mathcal{S})^{\frac{2}{n}}&=\frac{1}{V(\mathcal{S})}\int_{\mathbf{x}\in\mathcal{S}}\|\mathbf{x}\|^2 d\mathbf{x} \nonumber\\
&=\frac{1}{V(\mathcal{S})}\bigg(\int_{\mathbf{x}\in\mathcal{S}\cap\mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon})}\|\mathbf{x}\|^2 d\mathbf{x}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ + \int_{\mathbf{x}\in\mathcal{S}\cap\left(\mathbb{R}^n\setminus\mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon})\right)}\|\mathbf{x}\|^2 d\mathbf{x}\bigg).\label{tmpIntegral}
\end{align}
The first integral in~\eqref{tmpIntegral} may be lower bounded by replacing its integration boundaries with an $n$-dimensional ball $\mathcal{B}(\mathbf{0},\rho_{\epsilon})$, where
\begin{align}
\rho_{\epsilon}^2=V_n^{-\frac{2}{n}}(1-\epsilon)^{\frac{2}{n}}V(\mathcal{S})^{\frac{2}{n}}
\end{align}
is chosen such that $V_n \rho_{\epsilon}^n=(1-\epsilon)V(\mathcal{S})$. Thus
\begin{align}
\int_{\mathbf{x}\in\mathcal{S}\cap\mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon})}\|\mathbf{x}\|^2 d\mathbf{x}&\geq\int_{\mathbf{x}\in\mathcal{B}(\mathbf{0},\rho_{\epsilon})}\|\mathbf{x}\|^2 d\mathbf{x}\nonumber\\
&=nV_n \rho_{\epsilon}^n\sigma^2\left({\mathcal{B}(\mathbf{0},\rho_{\epsilon})}\right)\nonumber\\
&=\frac{n}{n+2}V_n \rho_{\epsilon}^n \rho_{\epsilon}^2\label{secondmomentbound}\\
&=\frac{n}{n+2}\frac{V(\mathcal{S})^{1+\frac{2}{n}}(1-\epsilon)^{1+\frac{2}{n}}}{V_n^{\frac{2}{n}}}\nonumber\\
&=\frac{n}{n+2}V(\mathcal{S})(1-\epsilon)^{1+\frac{2}{n}}r_{\text{eff}}^2,\label{intbound1}
\end{align}
where we have used~\eqref{ballsecondmoment} to get~\eqref{secondmomentbound}.
The second integral in~\eqref{tmpIntegral} is over a set of points with volume $\epsilon V(\mathcal{S})$ which are all at distance greater than $\tilde{r}_{\epsilon}$ from the origin. Therefore, it can be bounded as
\begin{align}
\int_{\mathbf{x}\in\mathcal{S}\cap\left(\mathbb{R}^n\setminus\mathcal{B}(\mathbf{0},\tilde{r}_{\epsilon})\right)}\|\mathbf{x}\|^2d\mathbf{x}\geq \epsilon V(\mathcal{S})\tilde{r}_{\epsilon}^2.\label{intbound2}
\end{align}
Substituting~\eqref{intbound1} and~\eqref{intbound2} into~\eqref{tmpIntegral} gives
\begin{align}
n G(\mathcal{S}) V(\mathcal{S})^{\frac{2}{n}}\geq\left(\frac{n}{n+2}(1-\epsilon)^{1+\frac{2}{n}}r_{\text{eff}}^2+ \epsilon \tilde{r}_{\epsilon}^2 \right).\label{repsilonbound}
\end{align}
Using the fact that $V(\mathcal{S})^{\frac{2}{n}}=V_n^{\frac{2}{n}}r_{\text{eff}}^2$,~\eqref{repsilonbound} reduces to
\begin{align}
\tilde{r}^2_{\epsilon}&\leq\frac{n V_n^{\frac{2}{n}}G(\mathcal{S})-\frac{n}{n+2}(1-\epsilon)^{1+\frac{2}{n}}}{\epsilon} r_{\text{eff}}^2\nonumber\\
&\leq\frac{2\pi e G(\mathcal{S})-\frac{n}{n+2}(1-\epsilon)^{1+\frac{2}{n}}}{\epsilon} r_{\text{eff}}^2\nonumber,
\end{align}
as desired.
\end{proof}
We would like to apply Lemma~\ref{lem:effrad} to a sequence of random variables uniformly distributed over the Voronoi regions $\mathcal{V}_c^{(n)}$ of a sequence of lattices $\Lambda_c^{(n)}$. From~\eqref{reffbound}, we have
\begin{align}
r_{\text{eff}}\left(\Lambda_c^{(n)}\right)\leq\sqrt{(n+2)\sigma^2\left(\Lambda_c^{(n)}\right)}.\label{reffbound2}
\end{align}
If the sequence of lattices $\Lambda_c^{(n)}$ is good for MSE quantization, then for any $\delta_1>0$ and $n$ large enough
\begin{align}
G(\Lambda^{(n)}_c)<(1+\delta_1)\frac{1}{2\pi e}.\label{Gbound}
\end{align}
Combining~\eqref{reffbound2} and~\eqref{Gbound}, it follows that if $\Lambda_c^{(n)}$ is good for MSE quantization, for any $\delta_1>0$ and $n$ large enough $r_{\epsilon}$ of Lemma~\ref{lem:effrad} satisfies
\begin{align}
r_{\epsilon}\leq\sqrt{\left(1+\frac{\delta_1}{\epsilon}\right)(n+2)\sigma^2(\Lambda_c^{(n)})}.\label{refflim}
\end{align}
For any $\delta>0$ and $\epsilon>0$ we may choose $\delta_1=\delta\epsilon/2$, which gives the following proposition.
\vspace{1mm}
\begin{proposition}
\label{prop:dither}
If the sequence of lattices $\Lambda_c^{(n)}$ is good for MSE quantization, then the sequence of random vectors \mbox{$\mathbf{U}\sim\mathop{\mathrm{Unif}}(\mathcal{V}^{(n)}_c)$} is semi norm-ergodic.
\end{proposition}
\vspace{1mm}
The next lemma states that any linear combination of semi norm-ergodic noise and a dither from a lattice which is good for MSE quantization is itself semi norm-ergodic.
\vspace{1mm}
\begin{lemma}
\label{lem:effball}
Let $\mathbf{Z}=\alpha\mathbf{N}+\beta\mathbf{U}$, where $\alpha,\beta\in\mathbb{R}$, $\mathbf{N}$ is semi norm-ergodic noise with effective variance $\sigma^2_{\mathbf{N}}=\frac{1}{n}\mathbb{E}\|\mathbf{N}\|^2$, and $\mathbf{U}$ is a dither statistically independent of $\mathbf{N}$ with effective variance $\sigma^2_{\mathbf{U}}=\frac{1}{n}\mathbb{E}\|\mathbf{U}\|^2$, uniformly distributed over the Voronoi region $\mathcal{V}$ of a lattice $\Lambda$ which is good for MSE quantization. Then, the sequence of random vectors $\mathbf{Z}$ is semi norm-ergodic.
\end{lemma}
\begin{proof}
Since $\mathbf{N}$ and $\mathbf{U}$ are statistically independent, the effective variance of $\mathbf{Z}$ is
\begin{align}
\sigma^2_{\mathbf{Z}}=\frac{1}{n}\mathbb{E}\|\mathbf{Z}\|^2=\alpha^2\sigma_{\mathbf{N}}^2+\beta^2\sigma^2_{\mathbf{U}}.\nonumber
\end{align}
We have to prove that
for any $\epsilon>0$, $\delta>0$ and $n$ large enough
\begin{align}
\Pr&\left(\mathbf{Z}\notin\mathcal{B}\left(\mathbf{0},\sqrt{(1+\delta)n\sigma^2_{\mathbf{Z}}}\right)\right)<\epsilon.\nonumber
\end{align}
For any $\epsilon>0$, $\delta>0$ and $n$ large enough we have
\begin{align}
&\Pr\left(\mathbf{Z}\notin\mathcal{B}(\mathbf{0},\sqrt{(1+\delta)n\sigma^2_{\mathbf{Z}}}) \right)\nonumber\\
&=\Pr\left(\|\mathbf{Z}\|^2>(1+\delta)n\sigma^2_{\mathbf{Z}}\right)\nonumber\\
&=\Pr\left(\|\mathbf{N}\|^2>(1+\delta)n\sigma^2_{\mathbf{N}}\right)\nonumber\\
& \ \ \ \cdot\Pr\left(\|\mathbf{Z}\|^2>(1+\delta)n\sigma^2_{\mathbf{Z}} \ \big| \ \|\mathbf{N}\|^2>(1+\delta)n\sigma^2_{\mathbf{N}} \right)\nonumber\\
&+\Pr\left(\|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}}\right)\nonumber\\
& \ \ \ \cdot\Pr\left(\|\mathbf{Z}\|^2>(1+\delta)n\sigma^2_{\mathbf{Z}} \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \right)\nonumber\\
&\leq \frac{\epsilon}{3}+\Pr\bigg(\beta^2\|\mathbf{U}\|^2+2\alpha\beta\mathbf{N}^T\mathbf{U}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ >(1+\delta)n\beta^2\sigma_{\mathbf{U}}^2 \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \bigg)\label{useprop}\\
&\leq \frac{\epsilon}{3}+\Pr\left(\beta^2\|\mathbf{U}\|^2>n\beta^2\sigma_{\mathbf{U}}^2(1+\delta/2) \right)\nonumber\\
&\ \ \ \ \ +\Pr\left(2\alpha\beta\mathbf{N}^T\mathbf{U}>n\beta^2\sigma_{\mathbf{U}}^2\delta/2 \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \right)\label{unionbound}\\
&\leq \frac{2\epsilon}{3}+\Pr\left(2\alpha\beta\mathbf{N}^T\mathbf{U}>n\beta^2\sigma_{\mathbf{U}}^2\delta/2 \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \right),\label{peboundtmp}
\end{align}
where~\eqref{useprop} follows from the fact that $\mathbf{N}$ is semi norm-ergodic,~\eqref{unionbound} from the union bound and~\eqref{peboundtmp} from the fact that $\mathbf{U}$ is semi norm-ergodic due to Proposition~\ref{prop:dither}. We are left with the task of showing that the last probability in~\eqref{peboundtmp} can be made smaller than $\epsilon/3$ for $n$ large enough. This requires some more work.
Since $\mathbf{U}$ is semi norm-ergodic noise, for any $\epsilon_2>0$, $\delta_2>0$ and $n$ large enough
\begin{align}
\Pr\left(\|\mathbf{U}\|>\sqrt{(1+\delta_2)n\sigma_{\mathbf{U}}^2} \right)<\epsilon_2.\nonumber
\end{align}
Let $r_{\mathbf{U}}=\sqrt{(1+\delta_2)n\sigma_{\mathbf{U}}^2}$, and $f_{\mathbf{U}}(\mathbf{u})$ be the probability density function (pdf) of $\mathbf{U}$. For any $r>0$ we have
\begin{align}
\Pr&\left(\mathbf{N}^T\mathbf{U}>r \ \big| \mathbf{N}=\mathbf{n}\right)=\int_{\|\mathbf{u}\|\leq r_{\mathbf{U}}}f_{\mathbf{U}}(\mathbf{u})\mathbbm{1}(\mathbf{n}^T\mathbf{u}>r)d\mathbf{u}\nonumber\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\int_{\|\mathbf{u}\|> r_{\mathbf{U}}}f_{\mathbf{U}}(\mathbf{u})\mathbbm{1}(\mathbf{n}^T\mathbf{u}>r)d\mathbf{u}\nonumber\\
&\leq \int_{\|\mathbf{u}\|\leq r_{\mathbf{U}}}\frac{1}{V(\Lambda)}\mathbbm{1}(\mathbf{n}^T\mathbf{u}>r)d\mathbf{u}+\epsilon_2\nonumber\\
&=\frac{V(\mathcal{B}(\mathbf{0},r_{\mathbf{U}}))}{V(\Lambda)}\int_{\|\mathbf{u}\|\leq r_{\mathbf{U}}}\frac{1}{V(\mathcal{B}(\mathbf{0},r_{\mathbf{U}}))}\mathbbm{1}(\mathbf{n}^T\mathbf{u}>r)d\mathbf{u}+\epsilon_2.\nonumber
\end{align}
Using the fact that $\Lambda$ is good for MSE quantization we have $V(\Lambda)^{2/n}\rightarrow 2\pi e \sigma^2_{\mathbf{U}}$, and hence, for $n$ large enough,
\begin{align}
\left(\frac{V(\mathcal{B}(\mathbf{0},r_{\mathbf{U}}))}{V(\Lambda)}\right)^{\frac{2}{n}}<(1+2\delta_2).\nonumber
\end{align}
Let $\tilde{\mathbf{U}}$ be a random vector uniformly distributed over $\mathcal{B}(\mathbf{0},r_{\mathbf{U}})$. We have
\begin{align}
\Pr&\left(\mathbf{N}^T\mathbf{U}>r \ \big| \mathbf{N}=\mathbf{n}\right)
<\epsilon_2+(1+2\delta_2)^{\frac{n}{2}}\Pr(\mathbf{n}^T\tilde{\mathbf{U}}>r).\label{inprod}
\end{align}
Let $\tilde{\mathbf{Z}}$ be AWGN with zero mean and variance $r^2_{\mathbf{U}}/n$. Using a similar approach to that taken in~\cite[Lemma 11]{ErezZamirAWGN}, we would now like to upper bound the pdf of $\tilde{\mathbf{U}}$ using that of $\tilde{\mathbf{Z}}$. For any $\mathbf{x}\in\mathbb{R}^n$ we have
\begin{align}
\frac{f_{\tilde{\mathbf{U}}}(\mathbf{x})}{f_{\tilde{\mathbf{Z}}}(\mathbf{x})}=\frac{f_{\tilde{\mathbf{U}}}(\|\mathbf{x}\|)}{f_{\tilde{\mathbf{Z}}}(\|\mathbf{x}\|)}\leq\frac{f_{\tilde{\mathbf{U}}}(r_{\mathbf{U}})}{f_{\tilde{\mathbf{Z}}}(r_{\mathbf{U}})}=\left(\frac{2\pi e}{nV_n^{2/n}} \right)^{\frac{n}{2}}.\nonumber
\end{align}
Thus, for any $\mathbf{x}\in\mathbb{R}^n$
\begin{align}
f_{\tilde{\mathbf{U}}}(\mathbf{x})\leq 2^{\frac{n}{2}\log\left(\frac{2\pi e}{n} V_n^{-2/n}\right)}f_{\tilde{\mathbf{Z}}}(\mathbf{x})\nonumber.
\end{align}
We can further bound~\eqref{inprod} for large enough $n$ as
\begin{align}
\Pr&\left(\mathbf{N}^T\mathbf{U}>r \ \big| \mathbf{N}=\mathbf{n}\right)\nonumber\\
&\leq\epsilon_2+2^{\frac{n}{2}\log\left((1+2\delta_2)\frac{2\pi e}{n} V_n^{-2/n}\right)}\Pr(\mathbf{n}^T\tilde{\mathbf{Z}}>r)\nonumber\\
&=\epsilon_2+2^{\frac{n}{2}\log\left((1+2\delta_2)\frac{2\pi e}{n} V_n^{-2/n}\right)}Q\left(\frac{\sqrt{n}r}{\|\mathbf{n}\|r_{\mathbf{U}}}\right),\nonumber
\end{align}
where $Q(\cdot)$ is the standard $Q$-function, which satisfies \mbox{$Q(x)<e^{-x^2/2}$}. It follows that
\begin{align}
&\Pr\left(2\alpha\beta\mathbf{N}^T\mathbf{U}>n\beta^2\sigma_{\mathbf{U}}^2\delta/2 \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \right)\nonumber\\
&\leq \epsilon_2\nonumber\\
&+2^{\frac{n}{2}\log\left((1+2\delta_2)\frac{2\pi e}{n} V_n^{-2/n}\right)}Q\left(\frac{\sqrt{n}\beta\sigma_{\mathbf{U}}\delta/2}{2\alpha\sigma_{\mathbf{N}}\sqrt{(1+\delta)(1+2\delta_2)}}\right).\nonumber
\end{align}
Taking $\delta_2$ sufficiently smaller than $\delta$ and $\epsilon_2<\epsilon/6$, for $n$ large enough we have
\begin{align}
\Pr\left(2\alpha\beta\mathbf{N}^T\mathbf{U}>n\beta^2\sigma_{\mathbf{U}}^2\delta/2 \ \big| \ \|\mathbf{N}\|^2\leq(1+\delta)n\sigma^2_{\mathbf{N}} \right)<\frac{\epsilon}{3}.\nonumber
\end{align}
\end{proof}
The next two corollaries are simple consequences of Lemma~\ref{lem:effball}. The first follows since any i.i.d. noise is semi norm-ergodic, and the second follows by iterating over Lemma~\ref{lem:effball}.
\begin{corollary}
\label{cor:mixturenoise}
Let $\mathbf{N}$ be an i.i.d. random vector and $\mathbf{U}$ be a dither random vector that is statistically independent of $\mathbf{N}$ and is uniformly distributed over the Voronoi region $\mathcal{V}$ of a lattice $\Lambda$ that is good for MSE quantization.
Then, any linear combination of the form \mbox{$\mathbf{Z}=\alpha\mathbf{N}+\beta\mathbf{U}$}, where $\alpha,\beta\in\mathbb{R}$, is semi norm-ergodic.
\end{corollary}
\vspace{1mm}
\begin{corollary}
\label{cor:cofmixturenoise}
Let \mbox{$\mathbf{U}_1,\ldots,\mathbf{U}_K$} be statistically independent dither random vectors, each uniformly distributed over the Voronoi region $\mathcal{V}_k$ of $\Lambda_k$, $k=1,\ldots,K$, where all the $\Lambda_k$ are good for MSE quantization. Let $\mathbf{N}$ be an i.i.d. random vector statistically independent of \mbox{$\left\{\mathbf{U}_1,\ldots,\mathbf{U}_K\right\}$}. For any \mbox{$\alpha,\beta_1,\ldots,\beta_K\in\mathbb{R}$} the random vector \mbox{$\mathbf{Z}=\alpha\mathbf{N}+\sum_{k=1}^K\beta_k\mathbf{U}_k$} is semi norm-ergodic.
\end{corollary}
\subsection{Tying It All Together}
\label{subsec:proofsum}
First, we combine Lemma~\ref{lem:MSE} and Lemma~\ref{lem:codinggoodness} to obtain an existence theorem for nested lattice pairs which are ``good'' in the sense of Definition~\ref{def:goodpairs}.
Note that in the considered lattice construction, the normalized volume of the coarse lattice is given by
\begin{align}
\frac{1}{n}\log V(\Lambda_c)&=\frac{1}{n}\log (\gamma^n p^{-k_1})=\log(\gamma)-\frac{k_1}{n}\log p\nonumber\\
&=\log\left(\frac{\gamma\sqrt{V_n^{2/n}}}{2} \right)-\epsilon_1\nonumber\\
&\stackrel{n\rightarrow\infty}{\longrightarrow}\frac{1}{2}\log(2\pi e\mathsf{SNR})-\epsilon_1.
\end{align}
Using the parameter $k>k_1$ in the definition of the nested lattices ensemble, we can set the normalized volume of the fine lattice to any desired value of \mbox{$\frac{1}{n}\log V(\Lambda_f)<\frac{1}{2}\log (2\pi e \mathsf{SNR})-\epsilon_1$}. Since $\delta_1$ from Lemma~\ref{lem:MSE} is larger than $\epsilon_1$, this means that we can set the normalized volume of the fine lattice to any desired value of \mbox{$\frac{1}{n}\log V(\Lambda_f)<\frac{1}{2}\log (2\pi e \mathsf{SNR})-\delta_1$}. Thus, we have the following theorem.
\begin{theorem}
\label{thm:goodpairs}
For any $\sigma^2_{\mathbf{Z}}>0$, $\epsilon>0$, $\delta>0$ and
\begin{align}
\frac{1}{2}\log\left(2\pi e\sigma^2_{\mathbf{Z}}\right)+\delta<\frac{1}{n}\log V(\Lambda_f)
<\frac{1}{2}\log (2\pi e\mathsf{SNR})-\delta,\nonumber
\end{align}
for $n$ large enough, there exists a pair of nested lattices \mbox{$\Lambda_c\subset\Lambda_f$} such that
\begin{enumerate}
\item $\sigma^2(\Lambda_c)=\mathsf{SNR}$ and $G(\Lambda_c)<\frac{1+\delta}{2\pi e}$.
\item For any lattice point $\mathbf{t}\in\Lambda_f$, and additive semi norm-ergodic noise $\mathbf{Z}$ with effective variance $\sigma^2_{\mathbf{Z}}=\frac{1}{n}\mathbb{E}\|\mathbf{Z}\|^2$
\begin{align}
\Pr\left(\left[Q_{\Lambda_f}(\mathbf{t}+\mathbf{Z})\right]\bmod\Lambda_c\neq \mathbf{t}\bmod\Lambda_c \right)<\epsilon.\nonumber
\end{align}
\end{enumerate}
\end{theorem}
\vspace{1mm}
We now apply Theorem~\ref{thm:goodpairs} to show that nested lattice codes achieve the capacity of the AWGN channel using the mod-$\Lambda$ transmission scheme with coset nearest neighbor decoding. Theorem~\ref{thm:goodpairs} states that nested lattice codes achieve arbitrarily low error probabilities over channels of the form $\mathbf{Y}=\mathbf{t}+\mathbf{Z}$, where $\mathbf{Z}$ is semi norm-ergodic additive noise. However, the output of the equivalent channel when applying the mod-$\Lambda$ transmission scheme is not exactly of this form. Rather, it is given by
\begin{align}
\mathbf{Y}_{\text{eff}}&=\left[\mathbf{t}+\mathbf{Z}_{\text{eff}}\right]\bmod\Lambda_c\nonumber\\
&=\mathbf{t}-Q_c(\mathbf{t}+\mathbf{Z}_{\text{eff}})+\mathbf{Z}_{\text{eff}}.\nonumber
\end{align}
Thus, it is equivalent to an additive channel with input \mbox{$\mathbf{t}-Q_c(\mathbf{t}+\mathbf{Z}_{\text{eff}})$}. Nevertheless, as
\begin{align}
\left[\mathbf{t}-Q_c(\mathbf{t}+\mathbf{Z}_{\text{eff}}) \right]\bmod\Lambda_c=\mathbf{t},\nonumber
\end{align}
the coset nearest neighbor decoder cannot distinguish between the inputs $\mathbf{t}$ and $\mathbf{t}-Q_c(\mathbf{t}+\mathbf{Z}_{\text{eff}})$. Therefore, when the coset nearest neighbor decoder is used, the output of the induced channel is equivalent to \mbox{$\mathbf{t}+\mathbf{Z}_{\text{eff}}$}. See Figure~\ref{fig:cosetdecoder} for an illustration of the nearest neighbor coset decoder's operation.
\input{cosetdecodingfig}
The effective noise $\mathbf{Z}_{\text{eff}}$ is a linear combination of an AWGN and a dither uniformly distributed over $\mathcal{V}_c$. When the coarse lattice $\Lambda_c$ is good for MSE quantization, $\mathbf{Z}_{\text{eff}}$ is semi norm-ergodic by Corollary~\ref{cor:mixturenoise}.
It follows that for ``good'' pairs of nested lattices, the error probability in coset nearest neighbor decoding of $\mathbf{Y}_{\text{eff}}$, the output of the equivalent channel induced by the mod-$\Lambda$ scheme, can be made arbitrarily small for all
\begin{align}
R<\frac{1}{2}\log\left(\frac{\mathsf{SNR}}{\sigma^2_{\text{eff}}}\right),\nonumber
\end{align}
where $\sigma^2_{\text{eff}}$ is the effective variance of $\mathbf{Z}_{\text{eff}}$.
\begin{remark}
Note that we have not used the Gaussianity of $\mathbf{N}$ in the proof. Therefore, any rate below $\nicefrac{1}{2}\log(\mathsf{SNR}/\sigma^2_{\text{eff}})$ is in fact achievable using nested lattice codes and coset nearest neighbor decoding for any additive i.i.d. noise $\mathbf{N}$, or more generally, any additive semi norm-ergodic noise. This is analogous to the results of~\cite{LapidothNNdecoding} where it is shown that $\nicefrac{1}{2}\log(1+\mathsf{SNR})$ is the highest rate a coding scheme that utilizes Gaussian codebooks and nearest neighbor decoding can achieve over an additive ergodic noise channel.
\end{remark}
\begin{remark}
Note that for unshaped transmission, i.e., where the coarse lattice is the cubic lattice, a random dither $\mathbf{U}$ uniformly distributed over its Voronoi region is an i.i.d. random vector. Hence, a linear combination of such a dither and AWGN is semi norm-ergodic. It follows that the effective noise in the mod-$\Lambda$ transmission scheme is semi norm-ergodic even when no shaping is used. If the fine lattice is good for coding in the presence of additive semi norm-ergodic noise, which, as Lemma~\ref{lem:codinggoodness} indicates, is the case for almost all Construction A lattices, then any rate satisfying
\begin{align}
R<\frac{1}{2}\log\left(\frac{\mathsf{SNR}}{\sigma^2_{\text{eff}}}\right)-\frac{1}{2}\log\left(\frac{2\pi e}{12}\right)
\end{align}
is achievable with a one-dimensional coarse lattice.
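Numerically, the rate penalty in the last term equals $\frac{1}{2}\log\left(\frac{2\pi e}{12}\right)\approx 0.254$ bits per dimension (about $1.53$ dB), which is the well-known ultimate shaping gain forfeited by using a cubic coarse lattice.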
\end{remark}
\section{Extensions}
\label{sec:extensions}
During the last decade, nested lattice codes have been extensively used in the literature in the context of Gaussian network problems, see e.g.~\cite{Philosof,compAndForIeee,ncl10,Bresler,CoFTransformFull}. For such problems, a pair of nested lattices is often not sufficient, and more complicated \emph{chains} of nested lattices are required. A major advantage of the nested lattice ensemble we have used in Section~\ref{sec:proofs} for proving the existence of ``good'' pairs, is that it can be naturally extended to construct any (finite) chain of nested lattices. Specifically, we can draw a linear code with a generating matrix $\mathbf{G}$ of dimensions $k\times n$ over $\mathbb{Z}_p$, and then for \mbox{$k\geq k_L>\cdots>k_1$} construct a sequence of nested linear codebooks \mbox{$\mathcal{C}_1\subset\cdots\subset\mathcal{C}_L$}, by taking the generating matrix of the $\ell$th codebook to be the first $k_\ell$ rows of $\mathbf{G}$. These codewords can be lifted to the Euclidean space using Construction A in order to form a chain of nested lattices.
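As an illustration, the following Python sketch (purely schematic; the field size $p$, the dimension $n$ and the nesting parameters are arbitrary small values, chosen so that the codebooks can be enumerated exhaustively, which is far from the regime of the proofs) constructs such a chain of nested linear codebooks from the rows of a single random generating matrix; lifting each codebook to \mbox{$\gamma\left(p^{-1}\mathcal{C}_\ell+\mathbb{Z}^n\right)$} then yields the corresponding chain of Construction A lattices.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
p, n = 5, 4                    # tiny toy values
ks = [1, 2, 3]                 # k_1 < k_2 < k_3

G = rng.integers(0, p, size=(max(ks), n))   # generating matrix over Z_p

def codebook(k):
    # all codewords w G[:k] mod p, enumerated exhaustively (fine for tiny p^k)
    return {tuple((np.array(w) @ G[:k]) % p) for w in product(range(p), repeat=k)}

books = [codebook(k) for k in ks]
assert books[0] <= books[1] <= books[2]     # nested codebooks
print([len(b) for b in books])              # p^{rank(G[:k])}
\end{verbatim}
Since the codebooks are nested by construction, the lifted Construction A lattices are nested as well.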
In Lemma~\ref{lem:MSE} we have shown that almost all random Construction A lattices are good for MSE quantization. In Lemma~\ref{lem:codinggoodness} we have shown that almost all random Construction A pairs of nested lattices achieve low error probabilities in the presence of additive semi norm-ergodic noise, under coset nearest neighbor decoding. It follows that for any finite number $L$ and any \mbox{$\frac{1}{n}\log V(\Lambda_L)<\cdots<\frac{1}{n}\log V(\Lambda_1)$}, there exists a chain of nested lattices \mbox{$\Lambda_1\subset\cdots\subset\Lambda_L$} such that all lattices are good for MSE quantization and all pairs within the chain achieve low error probabilities under coset nearest neighbor decoding in the presence of additive semi norm-ergodic noise.
In certain applications, chains of nested lattice codes are used in order to convert a Gaussian multiple access channel (MAC) into an effective modulo-lattice channel whose output is a fine lattice point plus effective noise reduced modulo a coarse lattice. Such a situation arises for example in the compute-and-forward framework~\cite{compAndForIeee}, where a receiver is interested in decoding linear combinations with integer valued coefficients of the codewords transmitted by the different users of the MAC. In such applications, the effective noise is often a linear combination of AWGN and \emph{multiple} statistically independent dithers uniformly distributed over the Voronoi region of the coarse lattice. By Corollary~\ref{cor:cofmixturenoise}, such an effective noise is semi norm-ergodic regardless of the number of dithers contributing to it, as long as they are all independent and are induced by lattices which are good for MSE quantization.
In our proofs we took the cardinality $p$ of the finite field over which we construct the linear codes to grow polynomially with $n$. While this facilitates the derivations, it does not seem to be necessary in practice when targeting a fixed (small) gap to capacity. Indeed, in~\cite{kk07} it was shown that Construction A lattices with small values of $p$, such as $2$, $3$ and $5$, achieve an NSM very close to $1/(2\pi e)$. For additive noise channels with a PAM input constellation with prime cardinality, linear codes perform just as well as random codes. Therefore, it seems that for low \slash moderate values of SNR the ensemble of nested lattice codes we have used can achieve rates close to the capacity of the AWGN channel even with small values of $p$.
\section*{Acknowledgment}
The authors thank Bobak Nazer, Yair Yona and Ram Zamir for discussions that helped prompt this work.
\bibliographystyle{IEEEtran}
\section{Introduction}
The well-known Josephson Relation\cite{Josephson} (JR) relates applied
electric and magnetic fields to motion of the Abrikosov vortex lattice
in a thin superconducting sample. Taking a simple form,
\begin{equation}
\label{JR}
\langle \Ep \rangle = -\frac{1}{c} \vl \times \langle \Bloc \rangle,
\end{equation}
the JR is easily understood intuitively via arguments involving the
Faraday Law and the motion of magnetic flux, and is thought to
represent an approximation of general applicability; it is therefore widely
employed by experimentalists. It is thus time-tested and, though not of a
fundamental theoretical nature, is an essential and useful tool.
Here, $\Eloc$ and $\Bloc$ will denote the local electric
and magnetic fields, and $\langle\cdots\rangle$ denotes an averaging in
space.
Relation \eqref{JR} has been known for decades, and also stands up
to more modern considerations in the context of time-dependent
Ginzburg-Landau (TDGL) theory. Indeed, the JR is derived by
Kopnin\cite{Kopnin}, using some simple assumptions about the rigidity
of the vortex-lattice motion.
In this analysis, the premise is that the vortex lattice already
represents some solution to TDGL theory, and its motion must remain
compatible with that theory; therefore there is an implicit assumption
that the vortex lattice is triangular.
More recently, a derivation along the
same lines but accommodating inertial effects of the condensate
has been exhibited\cite{LipavskyIJR}, which reproduces from the point
of view of TDGL theory an Inertial Josephson Relation (IJR), and which had
been obtained much earlier in a hydrodynamic
context\cite{Abrikosov,Hocquet}. The IJR contains a term
reflecting the inertia of the condensate itself, and
is thus an extension of the JR, applicable at high
frequencies. We shall review this result below in section
\secref{secJRIJR}, obtaining the IJR using the requirement of gauge
invariance, combined with simple ingredients of rigid lattice motion
and the current obtained from TDGL theory.
In the present work we shall discuss ways in which this analysis can
naturally be taken further, producing what we shall call an Extended
Josephson Relation. The Inertial Josephson relation mentioned above
represents one extension of the Josephson relation to accommodate
condensate inertia. By Extended Josephson Relation, we shall mean
a relation further extended to include the effects discussed below.
The starting point is chosen to be the Floating-Kernel\cite{LL08,LL09}
(FK) version of the TDGL equation, which contains the normal current;
so named because it can be understood by shifting to the reference
frame `floating' with this normal current.
Not only can a normal-current correction be included, but it is not
necessary to work with the assumption of a rigid Abrikosov vortex
lattice. Of course, should the (average) magnetic field change with
time, so must the density of unit-flux vortices.
Finally, since the electric field and also the inertia of the vortex
lattice itself provide anisotropic stimuli in the plane of the vortex
lattice, one ought consider deformations of the lattice which include
not only the density fluctuations mentioned above, but a group of
geometrical deformations which represent a response to these
anisotropies.
An Extended Josephson Relation is calculated in section
\secref{secEJR} which accommodates all these novel ingredients. The
result can be considered an extension of the Josephson Relation (and
Inertial Josephson Relation) which can describe density fluctuations
of the vortex lattice which allow for a time-varying magnetic field,
and anisotropic deformations of the vortex lattice from the electric
field and vortex acceleration.
\section{Notation, conventions and basics}
We consider the usual configuration of a planar sample in the $x$-$y$
plane, with $\Ep \perp \hat{z}$ and vortices produced by $\Bloc
\parallel \hat{z}$. We will make use of the usual definitions $\Ep =
- \frac{1}{c} \partial_t {\bf A} - \nabla \phi$ and $\Bloc = \nabla
\times {\bf A}$.
We write a gauge transformation as
\begin{eqnarray}
\label{gauge1}
{\bf A} &\rightarrow& {\bf A} + c \nabla\theta \\
\label{gauge2}
\phi &\rightarrow& \phi - \partial_t \theta \\
\label{gauge3}
\psi &\rightarrow& e^{ie^*\theta/\hbar}\psi ,
\end{eqnarray}
where $\theta$ is a function of space and time.
Our calculation will be gauge-invariant; we will make no gauge choice.
This allows us to use the principle of gauge covariance.
\section{IJR from TDGL}
\label{_IJRfromTDGL}
\label{secJRIJR}
Within this section, we carefully extract the Inertial Josephson
Relation from the TDGL equation. We follow closely the derivation of
the Josephson Relation given by Kopnin \cite{Kopnin} or the IJR given
in \olcite{LipavskyIJR}; we pay special attention to gauge invariance,
in anticipation of extending the IJR later.
The TDGL equation is
\begin{eqnarray}
\label{_TDGL}
&&\frac{-\hbar^2}{2m^*} {{\bf D}}^2 \psi + \alpha \psi + \beta \psi |\psi|^2 \nonumber\\
&=&\frac{-\Gamma}{\sqrt{1+C^2|\psi|^2}} \big( \partial_t + \frac{i}{\hbar}e^*\phi + \frac{C^2}{2}\partial_t |\psi|^2 \big)\psi
\end{eqnarray}
where the gauge-covariant derivative is
\begin{equation}
{{\bf D}}\psi := (\nabla - \frac{ie^*}{\hbar c} {\bf A} )\psi
.
\end{equation}
$\Js$ resulting from \eqref{_TDGL} is given by variation of ${\bf A}$;
\begin{equation}
\label{_superJ}
\Js = \frac{i\hbar e^*}{2 m^*} \bar{\psi} \big( \overleftarrow{\overline{\D}} - \D \big) \psi
.
\end{equation}
Now we must formulate the assumption of rigid motion of the vortex lattice.
We define by $\psi_0$ the configuration at time zero, $\psi_0({\bf r}) := \psi({\bf r},0)$.
With the displacement of the vortex lattice given by $\rl(t)$, we thus require
\begin{equation}
\label{shiftpsi2}
|\psi({\bf r},t)|^2 = |\psi_0({\bf r}-\rl(t))|^2
\end{equation}
so that for some \emph{pure-gauge} function $\omega$ we have
\begin{equation}
\label{shiftpsi}
\psi({\bf r},t) = e^{-i\omega({\bf r},t)} \psi_0({\bf r}-\rl(t)).
\end{equation}
This equation must be gauge-invariant; transforming both
sides using $\theta({\bf r},t)$ (see \eqref{gauge1} -- \eqref{gauge3}),
we see that $\omega$ must have the gauge transformation law
\begin{equation}
\label{omegagauge}
\omega({\bf r},t) \rightarrow \omega({\bf r},t) + \frac{e^*}{\hbar}\big( \theta({\bf r} - \rl(t),0) - \theta({\bf r},t)\big).
\end{equation}
Whatever physical assumption we specify, in the present case equation \eqref{shiftpsi},
must be independent of any gauge choice. Although we do not at this stage know
the form of the function $\omega$, we do know how it must transform,
and we shall make use of the transformation law \eqref{omegagauge} in what follows.
Writing $\vl = \partial_t \rl$, we use \eqref{shiftpsi} to calculate
\begin{eqnarray}
\partial_t \big[ \bar{\psi} \big( \overleftarrow{\overline{\D}} - \D \big) \psi \big]
&=& -\vl\cdot\nabla \big[ \bar{\psi} \big( \overleftarrow{\overline{\D}} - \D \big) \psi \big] \\
&+& 2i|\psi|^2 (\vl\cdot\nabla + \partial_t ) \big( \nabla\omega + \frac{e^*}{\hbar c} {\bf A}\big) .\nonumber
\end{eqnarray}
Equation \eqref{omegagauge} implies that $(\vl\cdot\nabla + \partial_t ) \omega$ gauge-transforms as
\begin{equation}
\label{omegagauge2}
(\vl\cdot\nabla + \partial_t ) \omega \rightarrow (\vl\cdot\nabla + \partial_t ) \omega
- \frac{e^*}{\hbar} (\vl\cdot\nabla + \partial_t ) \theta,
\end{equation}
which determines
\begin{equation}
(\vl\cdot\nabla + \partial_t ) \omega = \frac{e^*}{\hbar}\big( \phi - \frac{1}{c} \vl\cdot{\bf A} \big).
\end{equation}
Using
\begin{equation}
\label{_covvs}
\vs := \frac{\Js}{ e^* |\psi|^2}
\end{equation}
we then obtain
\begin{equation}
\partial_t \vs = -\vl \cdot \nabla \vs + \frac{e^*}{m^*} \big[ \frac{1}{c} \nabla(\vl\cdot{\bf A})
- \frac{1}{c}\vl\cdot\nabla {\bf A} + \Ep \big]
.
\end{equation}
Finally, use the identity $\nabla ( {\bf v} \cdot {\bf V}) - {\bf v}
\cdot \nabla {\bf V} = {\bf v} \times \nabla \times {\bf V}$ for
constant ${\bf v}$ to write
\begin{equation}
\partial_t \vs = -\vl \cdot \nabla \vs + \frac{e^*}{m^*c} \vl\times\Bloc + \frac{e^*}{m^*} \Ep
.
\end{equation}
Note that this tells us the field $\vs$ does not move with the lattice; otherwise we would have $(\partial_t + \vl\cdot\nabla)\vs=0$.
We may now take the unit-cell average. Since $\vs$ is periodic, the first term does not contribute and
\begin{equation}
\label{_IJR}
\partial_t \langle \vs \rangle = \frac{e^*}{m^*}\langle\Ep\rangle + \frac{e^*}{m^*c} \vl \times \langle \Bloc \rangle
.
\end{equation}
This is the IJR as presented in \olcite{LipavskyIJR}.
It can be thought of as the consequence of describing a
rigidly moving vortex lattice using TDGL theory, and includes
the inertial term $\partial_t \langle \vs \rangle $, absent
from the original Josephson Relation.
\section{Floating-Kernel TDGL}
\label{SecFKTDGL}
Our starting point will not be the TDGL equation as
presented in \eqref{_TDGL}, but a version of TDGL supplemented by a
floating kernel (FK) term,\cite{LL08,LL09}
\begin{eqnarray}
\label{_TDGLfk}
&&\frac{1}{2m^*} \big( -i\hbar\nabla - \frac{e^*}{c} {\bf A} - \frac{m^*}{en} \Jn \big)^2 \psi + \alpha \psi + \beta \psi |\psi|^2 \nonumber\\
&=&\frac{-\Gamma}{\sqrt{1+C^2|\psi|^2}} \big( \partial_t + \frac{i}{\hbar}e^*\phi + \frac{C^2}{2}\partial_t |\psi|^2 \big)\psi
.
\end{eqnarray}
The supercurrent is defined through variation of ${\bf A}$,
\begin{equation}
\label{_fksuperJ}
\Js = \frac{e^*}{m^*} \bar{\psi}
\big( \frac{i\hbar}{2} ( \overleftarrow{\overline{\D}} - \D ) - \frac{m^*}{en}\Jn \big)
\psi
.
\end{equation}
In the spirit of a two-fluid model of superconductivity, we write
\begin{equation}
\label{Jsvsvn}
\Js = e^*|\psi|^2(\vs-\vn)
\end{equation}
and write the total current as a sum
\begin{eqnarray}
{\bf J} &:=& \Js + \Jn \\
&=& e^*n_s\vs + en_n \vn
\end{eqnarray}
where $\vn := \Jn/en$, $\Jn = \sigma_n {\bf E}$ and $n = n_n + 2 n_s$.
In the following section, we do not attempt to solve TDGL theory to
determine the density or deformation dynamics of the vortex lattice,
just as in the previous section there was no attempt to solve TDGL to
determine rigid motion of the vortex lattice. Instead, we allow that
it may happen and study the consequences. In fact, rigid lattice
motion may have been too strict an assumption even in the case of no
normal current; there is no \emph{a priori} reason why the magnetic
field must always remain constant, and the lattice rigid.
\section{Extended Josephson relation}
\label{secEJR}
In this section we allow much greater freedom for the moving vortex
lattice; in particular, we allow the magnitude of $\Bloc$ to change
with time, so that the vortex lattice density is time-dependent.
Further, we allow the shape of the vortex lattice to undergo a global
time-dependent deformation, in response to anisotropic stimuli.
We shall generalise the rigid-motion requirement \eqref{shiftpsi} to
\begin{equation}
\label{psiomegatilde}
\psi({\bf r},t) = e^{-i\tilde{\omega}({\bf r},t)}\psi(\lambda(t)\Sigma(t)({\bf r} - \rl(t)),0)
.
\end{equation}
Here $\lambda(t)$ is a dynamic scaling factor for the vortex lattice.
We can expect such a simple dynamic scaling to be valid for a
$\Bloc$ field which does not vary excessively or too quickly with time.
We require $\rl(0)=0$ and $\lambda(0)=1$, and define $B_0$ as the
magnitude of the average magnetic field when the vortex lattice density is $\lambda=1$,
therefore $B(t) \equiv \langle |{\bf B}(t)| \rangle = B_0 \lambda^2(t)$.
$\Sigma(t)$ is a two-dimensional dynamic deformation matrix for the
vortex lattice. Since we have accommodated scaling with $\lambda$, we
shall require that $\Sigma$ be an element of $SL(2,\mathbb{R})$, and
we will write
\begin{equation}
\label{deformS}
\Sigma(t)=e^{S(t)}.
\end{equation}
We shall leave discussion of the matrix $S$ to the following section.
Beginning with the vortex lattice \eqref{psiomegatilde} and the TDGL
equation with floating kernel \eqref{_TDGLfk}, we proceed as in the
previous section.
Using equation \eqref{psiomegatilde} we calculate
\begin{equation}
\label{dtpsi}
\partial_t \psi = {\bf \Upsilon}\cdot\nabla \psi + i \Omega \psi
\end{equation}
where
\begin{eqnarray}
\label{_Upsilon}
{\bf \Upsilon} &=& \lambda^{-1}\Sigma^{-1} \partial_t \big[ \lambda(t)\Sigma(t)({\bf r} - \rl(t)) \big] ,\\
\label{_Omega}
\Omega &=& {\bf \Upsilon}\cdot\nabla\tilde{\omega} - \partial_t \tilde{\omega},
\end{eqnarray}
so that
\begin{eqnarray}
\partial_t {\bf D} \psi &=& ({\bf \Upsilon}\cdot\nabla + i\Omega ) D \psi + \nabla({\bf \Upsilon}\cdot\nabla+i\Omega) \psi \nonumber\\
&+& \frac{ie^*}{\hbar c} ({\bf \Upsilon}\cdot\nabla - \partial_t ){\bf A} \psi
\end{eqnarray}
and
\begin{eqnarray}
\partial_t \frac{\bar{\psi} (\overleftarrow{\overline{\D}} - \D)\psi }{|\psi|^2} &=& [ {\bf \Upsilon}\cdot\nabla + \nabla{\bf \Upsilon}\cdot ] \frac{\bar{\psi} (\overleftarrow{\overline{\D}} - \D)\psi }{|\psi|^2} \nonumber\\
&+& \frac{2ie^*}{\hbar c} [ \partial_t - {\bf \Upsilon}\cdot\nabla - \nabla{\bf \Upsilon}\cdot ]{\bf A} \nonumber\\
&-& 2i\nabla\Omega .
\end{eqnarray}
This allows us to use \eqref{Jsvsvn} and evaluate
\begin{eqnarray}
\label{dtvsvn}
\partial_t (\vs - \vn) &=& ({\bf \Upsilon}\cdot\nabla + \nabla{\bf \Upsilon}\cdot ) (\vs - \vn) \nonumber\\
&+&\frac{e^*}{m^*c} \big[ \nabla({\bf \Upsilon}\cdot{\bf A}) - {\bf \Upsilon}\times{\bf B} \big] \nonumber\\
&+& \frac{e^*}{m^*} \big[ {\bf E} + \nabla\phi + \frac{\hbar}{e^*} \nabla\Omega \big] \nonumber\\
&-&\frac{2}{n}\partial_t\Jn
\end{eqnarray}
where this time we have used the identity
\begin{equation}
\nabla({\bf v}\cdot{\bf V}) - {\bf v}\cdot\nabla{\bf V} =
{\bf v}\times \nabla\times {\bf V} + \nabla{\bf v}\cdot{\bf V}.
\end{equation}
Now, although ${\bf \Upsilon}$ is gauge-invariant, it may be seen from
\eqref{_Omega} or \eqref{dtpsi} that $\Omega$ must transform as
\begin{equation}
\Omega \rightarrow \Omega - \frac{e^*}{\hbar}[ {\bf \Upsilon}\cdot\nabla\theta - \partial_t\theta ]
\end{equation}
which determines
\begin{equation}
\Omega = - \frac{e^*}{\hbar}[ \frac{1}{c} {\bf \Upsilon}\cdot{\bf A} + \phi ].
\end{equation}
\def({\bf r}-\rl){({\bf r}-\rl)}
Substituting $\Omega$ and ${\bf \Upsilon}$ into \eqref{dtvsvn}, we find
\begin{eqnarray}
\label{dtvsvnsimp}
\partial_t (\vs - \vn) &=& \big[ \frac{\partial_t\lambda}{\lambda}({\bf r}-\rl)\cdot\nabla - \vl\cdot\nabla \big](\vs-\vn)\nonumber\\
&+& \big[ ({\bf r}-\rl)\partial_tS^T\nabla - \frac{\partial_t\lambda}{\lambda} + \partial_tS^T \big] (\vs-\vn) \nonumber\\
&+& \frac{e^*}{m^*}(1-\tau\partial_t){\bf E} - \frac{e^*}{m^*c}
\big[ \frac{\partial_t\lambda}{\lambda}({\bf r}-\rl) \nonumber\\
&+& \partial_tS ({\bf r}-\rl) - \vl \big]\times{\bf B}
\end{eqnarray}
where $\tau = m^*\sigma_n/(2e^2n)$. Finally we take the spatial average and find
\begin{eqnarray}
\label{EJRS}
\partial_t \langle\vs - \vn\rangle &=& \frac{e^*}{m^*}(1-\tau\partial_t) \langle{\bf E}\rangle
+ \frac{e^*}{m^*c}\vl\times\langle{\bf B}\rangle \nonumber\\
&+& \big[ \partial_tS^T - \frac{\partial_t B}{2B} \big] \big( \langle\vs-\vn\rangle + \frac{2}{n}\sigma_n \langle{\bf E}\rangle \big) \nonumber\\
&+& \frac{e^*}{m^*c}\big[ \frac{\partial_t B}{2B} \rl + \partial_tS\rl \big]\times\langle{\bf B}\rangle
\end{eqnarray}
where we have substituted for $\lambda$. We have used that $\vs$ is
periodic and $\vn$ is uniform, and for the sake of simplicity we have
taken the origin ${\bf r}=0$ to be at the centroid of the sample.
Equation \eqref{EJRS} is our Extended Josephson Relation (EJR), the main
result of this paper, and we pause to comment. The first line of
\eqref{EJRS} is simply the Inertial Josephson Relation of
\eqref{_IJR}, with the addition of the $\tau$ term accounting for the
floating-kernel normal current contribution. The remaining corrections
consist of parts which depend on the time derivative of the magnetic
field, and parts which depend on geometrical lattice deformation; when
deformation is absent, $S=0$.
In the following section we complete the expression of the EJR
\eqref{EJRS} by showing the explicit parametrisation of the
deformation matrix $S$.
\section{Lattice deformation}
\label{LatDef}
\def\TDM#1#2#3#4{\left[ \begin{tabular}{cc}$ #1$ &$ #2$ \\$ #3$ & $#4$ \end{tabular}\right]}
A two-dimensional space may be expanded by a scale factor $e^\sigma$ in the $x$ direction,
and $e^{-\sigma}$ in the $y$ direction by the matrix
\begin{equation}
\Sigma(0,\sigma) = \TDM{e^\sigma}{0}{0}{e^{-\sigma}}
\end{equation}
which has unit determinant.
Rotating so that the expansion is along a line at angle $\vartheta$ and the contraction
perpendicular,
\def{\boldsymbol \mu}{{\boldsymbol \mu}}
\def{\boldsymbol \nu}{{\boldsymbol \nu}}
\begin{eqnarray}
\Sigma(\vartheta,\sigma) &=& R(-\vartheta) \Sigma(0,\sigma) R(\vartheta) \nonumber\\
&=& \exp\left\{ \sigma \TDM{1-2\sin^2\vartheta}{2\sin\vartheta\cos\vartheta}{2\sin\vartheta\cos\vartheta}{1-2\cos^2\vartheta} \right\} \nonumber\\
&:=& \exp S_{{\boldsymbol \mu}}.
\end{eqnarray}
Here we have parametrised the deformation by ${{\boldsymbol \mu}} := (\sigma \cos \vartheta , \sigma \sin \vartheta) $.
This set of deformations is not closed; composition can produce rotations.
When combined with rotations all elements of $SL(2,{\mathbb R})$ can be constructed.
Some identities are $\Sigma(\vartheta,\sigma)^{-1}=\Sigma(\vartheta+\pi/2,\sigma)=\Sigma(\vartheta,-\sigma)$,
$\Sigma(\vartheta,0)=1$ and $\Sigma(\vartheta,\sigma_2)\Sigma(\vartheta,\sigma_1)=\Sigma(\vartheta,\sigma_1+\sigma_2)$.
If we define convenient matrices which depend on the deformation parameter ${\boldsymbol \mu}$,
\begin{equation}
\label{mandn}
m_{{\boldsymbol \mu}} := \TDM{\mu_x}{\mu_y}{\mu_y}{-\mu_x}\quad \textup{and}\quad n_{{\boldsymbol \mu}} := \TDM{\mu_x}{\mu_y}{-\mu_y}{\mu_x},
\end{equation}
we may write $ S_{{\boldsymbol \mu}} = {m_{{\boldsymbol \mu}} n_{{\boldsymbol \mu}}} / {\mu} $.
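As a quick numerical aside (not part of the derivation), the following Python sketch checks this parametrisation: it builds $S_{{\boldsymbol \mu}}$ from $m_{{\boldsymbol \mu}}$ and $n_{{\boldsymbol \mu}}$, exponentiates it, and verifies the unit determinant, the composition identity $\Sigma(\vartheta,\sigma_2)\Sigma(\vartheta,\sigma_1)=\Sigma(\vartheta,\sigma_1+\sigma_2)$ and the inverse identity $\Sigma(\vartheta,\sigma)^{-1}=\Sigma(\vartheta+\pi/2,\sigma)$; the numerical values of $\vartheta$, $\sigma_1$ and $\sigma_2$ are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def S(mu):                        # S_mu = m_mu n_mu / |mu|
    mx, my = mu
    m = np.array([[mx,  my], [my, -mx]])
    n = np.array([[mx,  my], [-my, mx]])
    return m @ n / np.hypot(mx, my)

theta, s1, s2 = 0.7, 0.3, 0.5     # arbitrary test values
mu = lambda s, th=theta: np.array([s*np.cos(th), s*np.sin(th)])

Sig1, Sig2 = expm(S(mu(s1))), expm(S(mu(s2)))
assert np.isclose(np.linalg.det(Sig1), 1.0)            # element of SL(2,R)
assert np.allclose(Sig2 @ Sig1, expm(S(mu(s1 + s2))))  # composition identity
assert np.allclose(np.linalg.inv(Sig1),
                   expm(S(mu(s1, theta + np.pi/2))))   # inverse identity
\end{verbatim}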
For a given time-dependent deformation parameter ${{\boldsymbol \mu}}(t)$, we may calculate
\begin{equation}
\label{dtSmu}
\partial_t S_{{{\boldsymbol \mu}}(t)} = \frac{m_{{\boldsymbol \mu}}}{\mu} \big[ 2n_{{{\boldsymbol \mu}}'} - \frac{{{\boldsymbol \mu}}\cdot{{\boldsymbol \mu}}'}{\mu^2} n_{{\boldsymbol \mu}} \big].
\end{equation}
Now, let us recall the $S$ matrix introduced in section \ref{secEJR},
equation \eqref{deformS}. Given some deformations parametrised by
${\boldsymbol \mu}$, ${\boldsymbol \nu}$, $\dots$, we would set $S = S_{{\boldsymbol \mu}} + S_{{\boldsymbol \nu}} +
\cdots$. In fact, it is possible immediately to write down several
plausible examples of physical sources of deformation.
Let us anticipate that the application of an electric field (in the
plane), or a time-derivative of this field may tend to deform the
vortex lattice. We can accommodate this with a deformation parameter
\begin{equation}
\label{def_field}
{\boldsymbol \nu} := E_0 {\bf E} + E_1 \partial_t {\bf E}.
\end{equation}
Another possibility is that the global inertia or acceleration of
the vortex lattice would coincide with a deformation; in this
case,
\begin{equation}
\label{def_iner}
{\boldsymbol \mu} := v_0 \vl + v_1 \partial_t\vl.
\end{equation}
Obviously, there is an approximation involved here in that our
consideration is limited to a global deformation.
Adding the above two deformation contributions together,
\begin{equation}
S(t) = S_{{\boldsymbol \nu}} + S_{{\boldsymbol \mu}}.
\end{equation}
$\partial_tS(t)$ may be calculated for each of the two terms by using \eqref{dtSmu};
\begin{eqnarray}
\partial_t S(t) &=&
\frac{m_{{\boldsymbol \mu}}}{\mu} \big[ 2n_{{{\boldsymbol \mu}}'} - \frac{{{\boldsymbol \mu}}\cdot{{\boldsymbol \mu}}'}{\mu^2} n_{{\boldsymbol \mu}} \big] \nonumber\\
&+&
\frac{m_{{\boldsymbol \nu}}}{\nu} \big[ 2n_{{{\boldsymbol \nu}}'} - \frac{{{\boldsymbol \nu}}\cdot{{\boldsymbol \nu}}'}{\nu^2} n_{{\boldsymbol \nu}} \big].
\end{eqnarray}
Upon substitution of $\partial_tS$ and $\partial_tS^T$ the Extended Josephson
Relation \eqref{EJRS} will contain the parameters $E_0$, $E_1$, $v_0$ and $v_1$.
In principle, these parameters could be fit to experimental data,
characterising response beyond the usual JR or IJR.
\section{Conclusions}
\label{secConc}
As we mentioned in the introduction, the Josephson Relation, though
simple in form, is often used by experimentalists to understand the
motion of vortices in the presence of an applied field. It has been
shown in the literature how to recover the Josephson
Relation\cite{Kopnin} and a form valid at higher frequencies, the
Inertial Josephson Relation,\cite{LipavskyIJR} in the context of TDGL
theory. We have taken this technique further, using the assumption of
a vortex lattice solution to TDGL theory, to extend the Josephson
Relation to a form covering a floating-kernel formulation of TDGL with
a normal current in the spirit of a two-fluid model. Additionally, we
have shown how to include effects of time-varying magnetic flux, and
also global vortex-lattice deformations; our considerations are
expected to be valid for small changes in magnetic field and small
lattice deformations. This is not due to any approximation in the
calculation itself; rather, it is due to the implicit assumption of the
characteristics of the vortex lattice dynamics, equation
\eqref{psiomegatilde}, which nevertheless represents a far weaker
assumption than that of perfectly rigid global motion
\eqref{shiftpsi2}.
\section*{Acknowledgements}
The author is grateful to P.-J.~Lin and P.~Lipavsk\'y for fruitful discussions
regarding Josephson Relations.
\subsection{Conditional Autoregressive Prior for Player-Specific Coefficients}
\label{subsec:CAR}
Sharing information between players is critical for our estimation problem, but standard hierarchical models encode an assumption of exchangeability between units that is too strong for NBA players, even between those who are classified by the league as playing the same position. For instance, LeBron James is listed at the same position (small forward) as Steve Novak, despite the fact that James is one of the NBA's most prolific short-range scorers whereas Novak has not scored a layup since 2012. To model between-player variation more realistically, our hierarchical model shares information across players based on a localized notion of player similarity that we represent as an $L \times L$ binary adjacency matrix $\mathbf{H}$: $H_{\ell k} = 1$ if players $\ell$ and $k$ are similar to each other and $H_{\ell k} = 0$ otherwise. We determine similarity in a pre-processing step that compares the spatial distribution of where players spend time on the offensive half-court; see Appendix \ref{subsec:H} for exact details on specifying $\mathbf{H}$.
Now let $\beta^{\ell}_{ji}$ be the $i$th component of $\boldsymbol{\beta}^{\ell}_j$, the vector of coefficients for the time-referenced covariates for player $\ell$'s hazard $j$ \eqref{hazard-equation}. Also let $\boldsymbol{\beta}_{ji}$ be the vector representing this component across all $L = 461$ players, $(\beta^{1}_{ji} \: \: \beta^{2}_{ji} \: \ldots \: \beta^{L}_{ji})'$. We assume independent conditional autoregressive (CAR) priors \cite{besag1974spatial} for $\boldsymbol{\beta}_{ji}$:
\begin{align}
\beta^{\ell}_{ji} | \beta^{(-\ell)}_{ji}, \tau_{\boldsymbol{\beta}_{ji}}^2 &\sim \mathcal{N} \left( \frac{1}{n_{\ell}} \sum_{k : H_{\ell k} = 1} \beta^{k}_{ji}, \frac{\tau_{\boldsymbol{\beta}_{ji}}^2}{n_{\ell}} \right) \nonumber \\
\tau^2_{\boldsymbol{\beta}_{ji}} &\sim \text{InvGam}(1, 1)
\label{car}
\end{align}
where $n_{\ell} = \sum_{k} H_{\ell k}$. Similarly, let $\boldsymbol{\beta}_{\textrm{s} i} = (\beta^1_{\textrm{s} i} \: \: \beta^2_{\textrm{s} i} \: \ldots \: \beta^L_{\textrm{s} i})'$ be the vector of the $i$th component of the shot probability model \eqref{shotprob} across players $1, \ldots, L$. We assume the same CAR prior \eqref{car} independently for each component $i$. While the inverse gamma prior for $\tau^2_{*}$ terms seems very informative, we want to avoid very large or small values of $\tau^2_{*}$, corresponding to 0 or full shrinkage (respectively), which we know are inappropriate for our model. Predictive performance for the 0 shrinkage model ($\tau^2_{*}$ very large) is shown in Table \ref{loglik-table}, whereas the full shrinkage model ($\tau^2_{*} = 0$) does not allow parameters to differ by player identity, which precludes many of the inferences EPV was designed for.
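To make the sampling mechanics concrete, the following Python sketch (illustrative only: it uses a tiny hypothetical adjacency matrix in place of the $\mathbf{H}$ of Appendix \ref{subsec:H}, fixes $\tau^2$, and ignores the likelihood term that a full Gibbs step would also condition on) performs one sweep over the CAR full conditionals in \eqref{car}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L = 6
H = np.zeros((L, L), dtype=int)        # toy player-similarity graph
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]:
    H[a, b] = H[b, a] = 1
n_nb = H.sum(axis=1)                   # n_ell = sum_k H_{ell k}

beta, tau2 = rng.standard_normal(L), 1.0
for l in range(L):                     # one sweep of the prior conditionals
    nbrs = np.flatnonzero(H[l])
    beta[l] = rng.normal(beta[nbrs].mean(), np.sqrt(tau2 / n_nb[l]))
print(beta)
\end{verbatim}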
\subsection{Spatial Effects $\xi$}
\label{subsec:spat_effects}
Player-tracking data is a breakthrough because it allows us to model the fundamental spatial component of basketball. In our models, we incorporate the properties of court space in spatial effects $\xi_j^{\ell}, \tilde{\xi}_j^{\ell}, \xi_\textrm{s}^{\ell}$, which are unknown real-valued functions on $\mathbb{S}$, and therefore infinite dimensional. We represent such spatial effects using Gaussian processes (see \citeasnoun{rasmussen2006gaussian} for an overview of modeling aspects of Gaussian processes). Gaussian processes are usually specified by a mean function and covariance function; this approach is computationally intractable for large data sets, as the computation cost of inference and interpolating the surface at unobserved locations is $\mathcal{O}(n^3)$, where $n$ is the number of different points at which $\xi_j^{\ell}$ is observed (for many spatial effects $\xi_j^{\ell}$, the corresponding $n$ would be in the hundreds of thousands). We instead provide $\xi$ with a low-dimensional representation using functional bases \cite{higdon2002space,quinonero2005unifying}, which offers three important advantages. First, this representation is more computationally efficient for large data sets such as ours. Second, functional bases allow for a non-stationary covariance structure that reflects unique spatial dependence patterns on the basketball court. Finally, the finite basis representation allows us to apply the same between-player CAR prior to estimate each player's spatial effects.
Our functional basis representation of a Gaussian process $\xi_j^{\ell}$ relies on $d$ deterministic basis functions $\phi_{j1}, \ldots, \phi_{jd}: \mathbb{S} \rightarrow \mathbb{R}$ such that for any $\mathbf{z} \in \mathbb{S}$,
\begin{equation}\label{GP-basis}
\xi_j^{\ell}(\mathbf{z}) = \sum_{i=1}^d w^{\ell}_{ji}\phi_{ji}(\mathbf{z}),
\end{equation}
where $\mathbf{w}^{\ell}_j = (w^{\ell}_{j1} \: \ldots \: w^{\ell}_{jd})'$ is a random vector of loadings, $\mathbf{w}^{\ell}_j \sim \mathcal{N}(\boldsymbol{\omega}^{\ell}_j, \boldsymbol{\Sigma}^{\ell}_j)$. Letting $\Phi_j(\mathbf{z}) = (\phi_{j1}(\mathbf{z}) \: \ldots \: \phi_{jd}(\mathbf{z}))'$, we can see that $\xi_j^{\ell}$ given by \eqref{GP-basis} is a Gaussian process with mean function $\Phi_j(\mathbf{z})'\boldsymbol{\omega}^{\ell}_j$ and covariance function $\text{Cov}[\xi_j^{\ell}(\mathbf{z}_1), \xi_j^{\ell}(\mathbf{z}_2)] = \Phi_j(\mathbf{z}_1)'\boldsymbol{\Sigma}^{\ell}_j\Phi_j(\mathbf{z}_2)$. Moreover, since the bases $\phi_{ji}$ are deterministic, each $\xi^{\ell}_j$ is represented as a $d$-dimensional parameter. Note that we also use \eqref{GP-basis} for the pass receiver spatial effects and the spatial effect term in the shot probability model, $\tilde{\xi}^{\ell}_j$ and $\xi_\textrm{s}^{\ell}$, respectively. For these terms we have associated bases $\tilde{\phi}_{ji}, \phi_{\textrm{s} i}$ and weights $\tilde{w}^{\ell}_{ji}, w^{\ell}_{\textrm{s} i}$. As our notation indicates, the basis functions $\Phi_j(\mathbf{z})$ differ for each macrotransition type but are constant across players, whereas the weight vectors $\mathbf{w}^{\ell}_j$ vary across both macrotransition types and players.
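As a quick illustration of the representation \eqref{GP-basis}, the sketch below (ours; the Gaussian-bump bases and all parameter values are stand-in assumptions, since our actual bases come from the pre-processing step of Appendix \ref{subsec:psi}) draws loadings $\mathbf{w} \sim \mathcal{N}(\boldsymbol{\omega}, \boldsymbol{\Sigma})$ and evaluates the induced mean and covariance functions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Stand-in basis: d Gaussian bumps on the offensive half court
# (the paper's bases come from a pre-processing step; these are toys)
d = 10
centers = rng.uniform([0.0, 0.0], [47.0, 50.0], size=(d, 2))

def Phi(z):
    # Evaluate the d basis functions at a 2D court location z
    z = np.asarray(z, dtype=float)
    return np.exp(-np.sum((centers - z) ** 2, axis=1) / (2 * 8.0 ** 2))

omega = rng.normal(size=d)          # prior mean of the loadings
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d                 # prior covariance of the loadings

z1, z2 = np.array([10.0, 25.0]), np.array([20.0, 30.0])
mean_z1 = Phi(z1) @ omega           # mean function at z1
cov_12 = Phi(z1) @ Sigma @ Phi(z2)  # Cov[xi(z1), xi(z2)]
w = rng.multivariate_normal(omega, Sigma)  # one draw of the loadings
print(mean_z1, cov_12, Phi(z1) @ w)        # field value xi(z1)
\end{verbatim}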
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{graphics/bplots_3}
\caption{The functional bases $\phi_{ji}$ for $i=1, \ldots, 10$ and $j$ corresponding to the shot-taking macrotransition, $j=5$. There is no statistical interpretation of the ordering of the bases; we have displayed them in rough order of the shot types represented, from close-range to long-range.}
\label{bases}
\end{figure}
Using $d=10$, we determine the functional bases in a pre-processing step, discussed in Appendix \ref{subsec:psi}. These basis functions are interpretable as patterns/motifs that constitute players' decision-making tendencies as a function of space; see Figure \ref{bases} for an example, or \citeasnoun{miller2013icml} for related work in a basketball context. Furthermore, we use a CAR model \eqref{car} to supply the prior mean and covariance matrix ($\boldsymbol{\omega}^{\ell}_j, \boldsymbol{\Sigma}^{\ell}_j$) for the weights:
\begin{align}
\mathbf{w}^{\ell}_j | \mathbf{w}^{(-\ell)}_j, \tau_{\mathbf{w}_j}^2 &\sim
\mathcal{N} \left( \frac{1}{n_{\ell}} \sum_{k : H_{\ell k} = 1} \mathbf{w}^{k}_j, \frac{\tau_{\mathbf{w}_j}^2}{n_{\ell}} \mathbf{I}_d\right) \nonumber \\
\tau^2_{\mathbf{w}_{j}} &\sim \text{InvGam}(1, 1).
\label{car3}
\end{align}
As with \eqref{car},
we also use \eqref{car3} for terms $\tilde{\mathbf{w}}_{j}$ and $\mathbf{w}_\textrm{s}$. Combining the functional basis representation \eqref{GP-basis} with the between-player CAR prior \eqref{car3} for the weights, we get a prior representation for spatial effects $\xi^{\ell}_j, \tilde{\xi}^{\ell}_j, \xi^{\ell}_\textrm{s}$ that is low-dimensional and shares information both across space and between different players.
\subsection{Parameter Estimation}
\label{subsec:estimation}
As discussed in Section \ref{sec:Macro}, calculating EPV requires the parameters that define the multiresolution transition models \ref{M1}--\ref{M4}---specifically, the hazards $\lambda_j^{\ell}$, shot probabilities $p^{\ell}$, and all parameters of the microtransition model \ref{M3}. We estimate these parameters in a Bayesian fashion, combining the likelihood of the observed optical tracking data with the prior structure discussed earlier in this section. Using our multiresolution models, we can write the likelihood for the full optical tracking data, indexed arbitrarily by $t$:
\begin{align} \label{partial}
\prod_{t} \mathbb{P}(Z_{t + \epsilon} | \mathcal{F}^{(Z)}_t) & =
\Bigg( \overbrace{\prod_{t} \mathbb{P}(Z_{t + \epsilon}|M(t)^c, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c]}}^{L_{\text{mic}}} \Bigg) \Bigg( \overbrace{\prod_{t}\prod_{j=1}^6 \mathbb{P}(Z_{t + \epsilon}|M_j(t), C_{\delta_t}, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}^{L_{\text{rem}}}\Bigg) \nonumber \\
& \hspace{-2.5cm} \times \Bigg( \underbrace{\prod_{t} \mathbb{P}(M(t)^c | \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c]} \prod_{j=1}^6 \mathbb{P}(M_j(t)|\mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}_{L_{\text{entry}}} \Bigg) \Bigg( \underbrace{\prod_t \prod_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}_{L_{\text{exit}}} \Bigg).
\end{align}
The factorization used in \eqref{partial} highlights data features that inform different parameter groups: $L_{\text{mic}}$ is the likelihood term corresponding to the microtransition model \ref{M3}, $L_{\text{entry}}$ the macrotransition entry model \ref{M1}, and $L_{\text{exit}}$ the macrotransition exit model \ref{M2}.
The remaining term $L_{\text{rem}}$ is left unspecified and is ignored during inference.
Thus, $L_{\text{mic}}, L_{\text{entry}},$ and $L_{\text{exit}}$ can be thought of as partial likelihoods \cite{cox1975partial}, maximization of which, under mild conditions, yields consistent and asymptotically well-behaved estimators \cite{wong1986theory}. When parameters in these partial likelihood terms are given prior distributions, as is the case for those comprising the hazards in the macrotransition entry model, as well as those in the shot probability model, the resulting inference is partially Bayesian \cite{cox1975note}.
The microtransition partial likelihood term $L_{\text{mic}}$ factors by player:
\begin{equation}
L_{\text{mic}} \propto \prod_t \prod_{\ell = 1}^L \mathbb{P}(\mathbf{z}_{\ell}(t + \epsilon) | M(t)^c, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c \text{ and } \ell \text{ on court at time } t]}.
\label{Lmic}
\end{equation}
Depending on whether player $\ell$ is on offense (whether or not he is handling the ball) or on defense, $\mathbb{P}(\mathbf{z}_{\ell}(t + \epsilon) | M(t)^c, \mathcal{F}^{(Z)}_t)$ is supplied by the offensive \eqref{micro} or defensive \eqref{micro-defense} microtransition model. Parameters for these models \eqref{micro}--\eqref{micro-defense} are estimated using R-INLA, where the spatial acceleration effects $\mu^{\ell}_x, \mu^{\ell}_y$ are represented using a Gaussian Markov random field approximation to a Gaussian process with Mat\'ern covariance \cite{lindgren2011explicit}. Appendix \ref{subsec:psi} provides the details on this approximation. We do not perform any hierarchical modeling for the parameters of the microtransition model---because this model only describes movement (not decision-making), the data for every player is informative enough to provide precise inference. Thus, microtransition models are fit in parallel using each player's data separately; this requires $L=461$ processors, each taking at most 18 hours at 2.50GHz clock speed, using 32GB of RAM.
For the macrotransition entry term, we can write
\begin{equation}
L_{\text{entry}} \propto \prod_{\ell=1}^L \prod_{j=1}^6 L_{\text{entry}_j}^{\ell} (\lambda_j^{\ell}(\cdot)),
\label{Lmac}
\end{equation}
recognizing that (for small $\epsilon$),
\begin{align}\label{everything}
L^{\ell}_{\text{entry}_j}(\lambda_j^{\ell}(\cdot))
&\propto
\left(\prod_{\substack{t \: : \: M_j(t) \\ t \in \mathcal{T}^{\ell}}} \lambda^{\ell}_j(t) \right) \exp \left( - \sum_{t \in \mathcal{T}^{\ell}} \lambda^{\ell}_j(t) \right) \nonumber \\
\text{where}\hspace{1cm} \log(\lambda^{\ell}_j(t)) &=
[\mathbf{W}_j^{\ell}(t)]'\boldsymbol{\beta}_j^{\ell} +
\boldsymbol{\phi}_j(\mathbf{z}_{\ell}(t))'\mathbf{w}_j^{\ell} + \left(\tilde{\boldsymbol{\phi}}_j(\mathbf{z}_{j}(t) )' \tilde{\mathbf{w}}_j^{\ell}\mathbf{1}[j \leq 4]\right)
\end{align}
and $\mathcal{T}^{\ell}$ is the set of times $t$ at which player $\ell$ possesses the ball.
Expression \eqref{everything} is the likelihood for a Poisson regression; combined with prior distributions \eqref{car}--\eqref{car3}, inference for $\boldsymbol{\beta}^{\ell}_j, \mathbf{w}_j^{\ell}, \tilde{\mathbf{w}}_j^{\ell}$ is thus given by a hierarchical Poisson regression. However, the size of our data makes implementing such a regression model computationally difficult as the design matrix would have 30.4 million rows and a minimum of $L(p_j + d) \geq 5993$ columns, depending on macrotransition type. We perform this regression through the use of integrated nested Laplace approximations (INLA) \cite{rue2009approximate}.
Each macrotransition type can be fit separately, and requires approximately 24 hours using a single 2.50GHz processor with 120GB of RAM.
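To see why \eqref{everything} is a Poisson regression, the sketch below (ours, on simulated data; the $\log(\epsilon)$ offset and all values are our assumptions) recovers hazard coefficients by maximizing a discretized version of the partial likelihood; R-INLA plays the analogous role on the full data, with the CAR priors added:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated frames for one player and one macrotransition type
eps = 0.04                   # frame length in seconds (25 Hz)
T = 5000                     # frames in which the player has possession
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
beta_true = np.array([-1.5, 0.4, -0.3])
lam = np.exp(X @ beta_true)  # hazard per second
y = rng.poisson(lam * eps)   # event indicators (Poisson approximation)

def negloglik(beta):
    # Negative Poisson log-likelihood with offset log(eps)
    eta = X @ beta + np.log(eps)
    return np.exp(eta).sum() - y @ eta

fit = minimize(negloglik, np.zeros(3), method="BFGS")
print(fit.x)                 # close to beta_true
\end{verbatim}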
Recalling Section \ref{macro_exit}, the macrotransition exit model \ref{M2} is deterministic for all macrotransitions except shot attempts ($j=5$). Thus, $L_{\text{exit}}$ only provides information on the parameters of our shot probability model \eqref{shotprob}. Analogous to the Poisson model in \eqref{everything}, $L_{\text{exit}}$ is the likelihood of a logistic regression, which factors by player. We also use INLA to fit this hierarchical logistic regression model, though fewer computational resources are required as this likelihood only depends on time points where a shot is attempted, which is a much smaller subset of our data.
\iffalse
\begin{algorithm}[h!]
\caption{Calculating EPV ($\nu_t$). }
\label{alg:EPV}
\begin{algorithmic}
\Require{Player $\ell$ possesses the ball at time $t$}
\bigskip
\Function{macro}{$\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta}$} \Comment{Simulates a possible macrotransition in $(s, s + \epsilon]$}
\For{$j$ in $1, \ldots, 6$}
\State Set $M_j(s) = 1$ with probability min$\{1, \lambda_j^{\ell}(s)\}$
\EndFor
\If{$\sum_j M_j(s) > 1$}
\State Keep only one $j$ such that $M_j(s) = 1$, choosing it proportional to $\lambda_j^{\ell}(s)$
\EndIf
\State \Return{$\{M_j(s), j=1, \ldots, 6\}$}
\EndFunction
\bigskip
\Function{EPVdraw}{$\mathcal{F}^{(Z)}_t, \boldsymbol{\Theta}$} \Comment{Gets EPV from single simulation of next macro}
\State Initialize $s \leftarrow t$
\State Initialize $M_j(s) \leftarrow \textsc{macro}(\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta})$
\While{$M_j(s) = 0$ for all $j$}
\State Draw $Z_{s+\epsilon} \sim \mathbb{P}(Z_{s + \epsilon} | M(s)^c, \mathcal{F}^{(Z)}_s)$
\State $\mathcal{F}^{(Z)}_{s + \epsilon} \leftarrow \{\mathcal{F}^{(Z)}_t, Z_{s + \epsilon}\}$
\State $s \leftarrow s + \epsilon$
\State $M_j(s) \leftarrow \textsc{macro}(\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta})$
\EndWhile
\State Draw $C_\delta \sim \mathbb{P}(C_\delta | M_j(s), \mathcal{F}^{(Z)}_s)$
\State $\nu_t \leftarrow \E[h(C_T)|C_\delta]$
\State \Return{$\nu_t$}
\EndFunction
\bigskip
\Function{EPV}{$N, \mathcal{F}^{(Z)}_t, \boldsymbol{\Theta}$} \Comment{Averages over simulations of next macrotransition}
\State Initialize $\nu_t \leftarrow 0$
\For{$i$ in $1, \ldots, N$}
\State $\nu_t \leftarrow \nu_t + \textsc{EPVdraw}(\mathcal{F}^{(Z)}_t, \boldsymbol{\Theta})$
\EndFor
\State \Return $\nu_t/N$
\EndFunction
\end{algorithmic}
\end{algorithm}
\fi
\subsection{Time-Varying Covariates in Macrotransition Entry Model}
\label{Covariates}
As revealed in \eqref{hazard-equation}, the hazards $\lambda_j^{\ell}(t)$ are parameterized by spatial effects ($\xi_j^{\ell}$ and $\tilde{\xi}_j^{\ell}$ for pass events), as well as coefficients for situation covariates, $\boldsymbol{\beta}_j^{\ell}$. The covariates used may be different for each macrotransition $j$, but we assume for each macrotransition type the same covariates are used across players $\ell$.
Among the covariates we consider, \texttt{dribble} is an indicator of whether the ballcarrier has started dribbling after receiving possession. \texttt{ndef} is the distance between the ballcarrier and his nearest defender (transformed to $\log(1 + d)$). \texttt{ball\_lastsec} records the distance traveled by the ball in the previous one second. \texttt{closeness} is a categorical variable giving the rank of the ballcarrier's teammates' distance to the ballcarrier. Lastly, \texttt{open} is a measure of how open a potential pass receiver is, based on a simple formula relating the positions of the defensive players to the vector connecting the ballcarrier with the potential pass recipient.
For $j \leq 4$, the pass event macrotransitions, we use \texttt{dribble}, \texttt{ndef}, \texttt{closeness}, and \texttt{open}. For shot-taking and turnover events, \texttt{dribble}, \texttt{ndef}, and \texttt{ball\_lastsec} are included. Lastly, the shot probability model (which, from \eqref{shotprob} has the same parameterization as the macrotransition model) uses \texttt{dribble} and \texttt{ndef} only. All models also include an intercept term. As discussed in Section \ref{subsec:CAR}, independent CAR priors are assumed for each coefficient in each macrotransition hazard model.
\subsection{Player Similarity Matrix $\mathbf{H}$ for CAR Prior}
\label{subsec:H}
The hierarchical models used for parameters of the macrotransition entry model, discussed in Section \ref{subsec:CAR}, are based on the idea that players who share similar roles for their respective teams should behave similarly in the situations they face. Indeed, players' positions (point guard, power forward, etc.) encode their offensive responsibilities: point guards move and distribute the ball, small forwards penetrate and attack the basket, and shooting guards get open for three-point shots. Such responsibilities reflect spatiotemporal decision-making tendencies, and are therefore informative for our macrotransition entry model \eqref{hazard-def}--\eqref{hazard-equation}.
Rather than use the labeled positions in our data, we define position as a distribution of a player's location during his time on the court. Specifically, we divide the offensive half of the court into 4-square-foot bins (575 total) and count, for each player, the number of data points for which he appears in each bin. Then we stack these counts together into a $L \times 575$ matrix (there are $L=461$ players in our data), denoted $\mathbf{G}$, and take the square root of all entries in $\mathbf{G}$ for normalization. We then perform non-negative matrix factorization (NMF) on $\mathbf{G}$ in order to obtain a low-dimensional representation of players' court occupancy that still reflects variation across players \cite{miller2013icml}. Specifically, this involves solving:
\begin{equation}\label{nmf}
\hat{\mathbf{G}} = \underset{\mathbf{G}^*}{\text{argmin}}\{D(\mathbf{G}, \mathbf{G}^*)\}, \text{ subject to } \mathbf{G}^* = \left(\underset{L \times r}{\mathbf{U}}\right)\left(\underset{r \times 575}{\mathbf{V}}\right) \text{ and } U_{ij},V_{ij} \geq 0 \text{ for all } i,j,
\end{equation}
where $r$ is the rank of the approximation $\hat{\mathbf{G}}$ to $\mathbf{G}$ (we use $r=5$), and $D$ is some distance function; we use the Kullback--Leibler-type divergence
$$D(\mathbf{G}, \mathbf{G}^*) = \sum_{i,j} G_{ij}\log \left( G_{ij}/G_{ij}^*\right) - G_{ij} + G_{ij}^*.$$ The rows of $\mathbf{V}$ are non-negative basis vectors for players' court occupancy distributions (plotted in Figure \ref{H_bases}) and the rows of $\mathbf{U}$ give the loadings for each player. With this factorization, $\mathbf{U}_i$ (the $i$th row of $\mathbf{U}$) provides player $i$'s ``position''---a $r$-dimensional summary of where he spends his time on the court. Moreover, the smaller the difference between two players' positions, $||\mathbf{U}_i - \mathbf{U}_j||$, the more alike are their roles on their respective teams, and the more similar we expect the parameters of their macrotransition models to be a priori.
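A minimal version of this factorization (ours; the occupancy matrix below is simulated rather than computed from tracking data) can be run with scikit-learn, whose multiplicative-update solver minimizes the same generalized Kullback--Leibler divergence as in \eqref{nmf}:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)

# Simulated stand-in for G: L players x 575 court bins of (sqrt) counts
L, bins, r = 461, 575, 5
G = np.sqrt(rng.poisson(5.0, size=(L, bins)).astype(float))

model = NMF(n_components=r, beta_loss="kullback-leibler", solver="mu",
            init="random", random_state=0, max_iter=500)
U = model.fit_transform(G)   # L x r loadings: each row is a "position"
V = model.components_        # r x 575 court-occupancy bases
print(U.shape, V.shape)
\end{verbatim}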
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{graphics/H_bases}
\caption{The rows of $\mathbf{V}$ (plotted above for $r=5$) are bases for the players' court occupancy distribution. There is no interpretation to the ordering.}
\label{H_bases}
\end{figure}
Formalizing this, we take the $L \times L$ matrix $\mathbf{H}$ to consist of 0s, then set $H_{ij} = 1$ if player $j$ is one of the eight closest players in our data to player $i$ using the distance $||\mathbf{U}_i - \mathbf{U}_j||$ (the cutoff of choosing the closest eight players is arbitrary). This construction of $\mathbf{H}$ does not guarantee symmetry, which is required for the CAR prior we use, thus we also set $H_{ji} = 1$ whenever $H_{ij} = 1$; because of this symmetrization, a player may end up with more than eight neighbors. For instance, LeBron James' ``neighbors'' are (in no order): Andre Iguodala, Harrison Barnes, Paul George, Kobe Bryant, Evan Turner, Carmelo Anthony, Rodney Stuckey, Will Barton, and Rudy Gay.
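A sketch of this construction (ours; it assumes the loadings matrix $\mathbf{U}$ from the NMF step above):
\begin{verbatim}
import numpy as np

def build_H(U, k=8):
    # Connect each player to his k nearest players in NMF-position
    # space, then symmetrize; rows may end up with more than k ones
    L = U.shape[0]
    D = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)      # a player is not his own neighbor
    H = np.zeros((L, L), dtype=int)
    for i in range(L):
        H[i, np.argsort(D[i])[:k]] = 1
    return np.maximum(H, H.T)        # set H_ji = 1 whenever H_ij = 1

# H = build_H(U)  # with U from the NMF sketch above
\end{verbatim}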
\subsection{Basis Functions for Spatial Effects $\xi$}
\label{subsec:psi}
Recalling \eqref{GP-basis}, for each player $\ell$ and macrotransition type $j$, we have $\xi_j^{\ell}(\mathbf{z}) = \sum_{i=1}^d w^{\ell}_{ji} \phi_{ji}(\mathbf{z})$, where $\{\phi_{ji}, i=1, \ldots, d\}$ are the basis functions for macrotransition $j$. During the inference discussed in Section \ref{sec:Computation}, these basis functions are assumed known. They are derived in a pre-processing step. Heuristically, they are constructed by approximately fitting a simplified macrotransition entry model with a stationary spatial effect for each player, then performing NMF to find a low-dimensional subspace (in this function space of spatial effects) that accurately captures the spatial dependence of players' macrotransition behavior. We now describe this process in greater detail.
Each basis function $\phi_{ji}$ is itself represented as a linear combination of basis functions,
\begin{equation}
\label{phi_basis}
\phi_{ji}(\mathbf{z}) = \sum_{k=1}^{d_0} v_{jik} \psi_k(\mathbf{z}),
\end{equation}
where $\{\psi_k, k=1, \ldots, d_0\}$ are basis functions (as the notation suggests, the same basis is used for all $j$, $i$). The basis functions $\{\psi_k, k=1, \ldots, d_0\}$ are induced by a triangular mesh of $d_0$ vertices (we use $d_0 = 383$) on the court space $\mathbb{S}$. In practice, the triangulation is defined on a larger region that includes $\mathbb{S}$, due to boundary effects. The mesh is formed by partitioning $\mathbb{S}$ into triangles, where any two triangles share at most one edge or corner; see Figure \ref{triangulation} for an illustration. With some arbitrary ordering of the vertices of this mesh, $\psi_k:\mathbb{S} \rightarrow \mathbb{R}$ is the unique function that takes the value 1 at vertex $k$, the value 0 at all other vertices $\tilde{k} \neq k$, and is linear within each triangle of the mesh. Thus, with this basis, $\phi_{ji}$ (and consequently, $\xi^{\ell}_j$) are piecewise linear over the triangles of the mesh.
\begin{figure}[h]
\centering
\includegraphics[width=0.33\linewidth]{graphics/triangulation_2}
\caption{Triangulation of $\mathbb{S}$ used to build the functional basis $\{\psi_k, k=1, \ldots, d_0\}$. Here, $d_0=383$.}
\label{triangulation}
\end{figure}
This functional basis $\{\psi_k, k=1, \ldots, d_0\}$ is used by \citeasnoun{lindgren2011explicit}, who show that it can approximate a Gaussian random field with Mat\'ern covariance. Specifically, let $x(\mathbf{z}) = \sum_{k=1}^{d_0} v_k\psi_k(\mathbf{z})$ and assume $(v_1 \: \ldots \: v_{d_0})' = \mathbf{v} \sim \mathcal{N}(0, \boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2})$. The form of $\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2}$ is such that the covariance function of $x$ approximates a Mat\'ern covariance:
\begin{equation}
\label{gmrf}
\text{Cov}[x(\mathbf{z}_1), x(\mathbf{z}_2)] = \boldsymbol{\psi}(\mathbf{z}_1)'\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2} \boldsymbol{\psi}(\mathbf{z}_2) \approx \frac{\sigma^2}{\Gamma(\nu)2^{\nu-1}}(\kappa ||\mathbf{z}_1 - \mathbf{z}_2||)^{\nu} K_{\nu}(\kappa ||\mathbf{z}_1 - \mathbf{z}_2||),
\end{equation}
where $\boldsymbol{\psi}(\mathbf{z}) = (\psi_1(\mathbf{z}) \: \ldots \: \psi_{d_0}(\mathbf{z}))'$. As discussed in Section \ref{subsec:spat_effects}, the functional basis representation of a Gaussian process offers computational advantages in that the infinite dimensional field $x$ is given a $d_0$-dimensional representation, as $x$ is completely determined by $\mathbf{v}$. Furthermore, as discussed in \citeasnoun{lindgren2011explicit}, $\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2}^{-1}$ is sparse (\eqref{gmrf} is actually a Gaussian Markov random field (GMRF) approximation to $x$), offering additional computational savings \cite{rue2001fast}.
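For reference, the Mat\'ern covariance on the right-hand side of \eqref{gmrf} can be evaluated directly; the sketch below (ours; the parameter values are arbitrary) uses SciPy's modified Bessel function $K_\nu$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, kv

def matern(dist, nu=1.0, kappa=0.5, sigma2=1.0):
    # Matern covariance as a function of distance, as in eq. (gmrf)
    d = np.asarray(dist, dtype=float)
    out = np.full_like(d, sigma2)    # limiting value at distance 0
    pos = d > 0
    kd = kappa * d[pos]
    out[pos] = sigma2 / (gamma(nu) * 2 ** (nu - 1)) * kd ** nu * kv(nu, kd)
    return out

print(matern(np.array([0.0, 1.0, 5.0, 20.0])))  # decays with distance
\end{verbatim}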
The GMRF approximation given by \eqref{phi_basis}--\eqref{gmrf} is also used in fitting the microtransition models for offensive players \eqref{micro}: we give the spatial innovation terms $\mu^{\ell}_x, \mu^{\ell}_y$ representations in the $\psi$ basis. Then, as mentioned in Section \ref{subsec:estimation}, \eqref{micro} is fit independently for each player in our data set using the software R-INLA.
We also fit simplified versions of the macrotransition entry model, using the $\psi$ basis, in order to determine $\{v_{jik}, k=1, \ldots, d_0\}$, the loadings of the basis representation for $\phi$, \eqref{phi_basis}. This simplified model replaces the macrotransition hazards \eqref{hazard-equation} with
\begin{equation}
\label{hazard_preprocess}
\log(\lambda^{\ell}_j(t)) = c_j^{\ell} + \sum_{k=1}^{d_0} u^{\ell}_{jk} \psi_k(\mathbf{z}^{\ell}(t))
+ \mathbf{1}[j \leq 4]\sum_{k=1}^{d_0}\tilde{u}_{jk}^{\ell}\psi_k\left(\mathbf{z}_{j}(t)\right),
\end{equation}
thus omitting situational covariates ($\boldsymbol{\beta}^{\ell}_j$ in \eqref{hazard-equation}) and using the $\psi$ basis representation in place of $\xi_j^{\ell}$. Note that for pass events, like \eqref{hazard-equation}, we have an additional term based on the pass recipient's location, parameterized by $\{\tilde{u}^{\ell}_{jk}, k=1, \ldots, d_0\}$. As discussed in Section \ref{subsec:estimation}, parameters in \eqref{hazard_preprocess} can be estimated by running a Poisson regression. We perform this independently for all players $\ell$ and macrotransition types $j$ using the R-INLA software. Like the microtransition model, we fit \eqref{hazard_preprocess} separately for each player across $L = 461$ processors (each hazard type $j$ is run in serial), each requiring at most 32GB RAM and taking no more than 16 hours.
For each macrotransition type $j$, point estimates $\hat{u}^{\ell}_{jk}$ are exponentiated\footnote{We exponentiate because the estimates $\hat{u}^{\ell}_{jk}$ inform the log-hazard, so exponentiation converts them to a more natural scale of interest. Strong negative signals among the $\hat{u}^{\ell}_{jk}$ move to 0 in the entries of $\mathbf{U}_j$ and are not very influential in the matrix factorization \eqref{U_factorization}, which is desirable for our purposes.}, so that $[\mathbf{U}_j]_{\ell k} = \exp(\hat{u}^{\ell}_{jk})$. We then perform NMF \eqref{nmf} on $\mathbf{U}_j$:
\begin{equation}
\label{U_factorization}
\mathbf{U}_j \approx \left(\underset{L \times d}{\mathbf{Q}_j}\right)\left(\underset{d \times d_0}{\mathbf{V}_j}\right).
\end{equation}
Following the NMF example in Section \ref{subsec:H}, the rows of $\mathbf{V}_j$ are bases for the variation in coefficients $\{u^{\ell}_{jk}, k=1, \ldots, d_0\}$ across players $\ell$. As $1 \leq k \leq d_0$ indexes points on our court triangulation (Figure \ref{triangulation}), such bases reflect structured variation across space. We furthermore use these terms as the coefficients for \eqref{phi_basis}, the functional basis representation of $\phi_{ji}$, setting $v_{jik} = [\mathbf{V}_j]_{ik}$. Equivalently, we can summarize our spatial basis model as:
\begin{equation}
\label{all_bases}
\xi^{\ell}_j (\mathbf{z}) = [\mathbf{w}^{\ell}_j]'\boldsymbol{\phi}_j(\mathbf{z}) = [\mathbf{w}^{\ell}_j]' \mathbf{V}_j \boldsymbol{\psi}(\mathbf{z}).
\end{equation}
The preprocessing steps described in this section---fitting a simplified macrotransition entry model \eqref{hazard_preprocess} and performing NMF on the coefficient estimates \eqref{U_factorization}---provide us with basis functions $\phi_{ji}(\mathbf{z})$ that we treat as fixed and known during the modeling and inference discussed in Section \ref{sec:Computation}.
Note that an analogous expression for \eqref{all_bases} is used for $\tilde{\xi}^{\ell}_j$ in terms of $\tilde{\mathbf{w}}^{\ell}_j$ and $\tilde{\mathbf{V}}_j$ for pass events; however, for the spatial effect $\xi^{\ell}_\textrm{s}$ in the shot probability model, we simply use $\mathbf{V}_5$. Thus, the basis functions for the shot probability model are the same as those for the shot-taking hazard model.
\subsection{Calculating EPVA: Baseline EPV for League-Average Player}
\label{subsec:EPVA}
To calculate the baseline EPV for a league-average
player possessing the ball in player $\ell$'s shoes, denoted $\nu_t^{r(\ell)}$ in \eqref{EPVA}, we start by considering an alternate version of the transition
probability matrix between coarsened states $\mathbf{P}$. For each
player $\ell_1, \ldots, \ell_5$ on offense, there is a disjoint subset
of rows of $\mathbf{P}$, denoted $\mathbf{P}_{\ell_i}$, that
correspond to possession states for player $\ell_i$. Each row of
$\mathbf{P}_{\ell_i}$ is a probability distribution over transitions
in $\mathcal{C}$ given possession in a particular state. Technically, since
states in $\mathcal{C}_{\text{poss}}$ encode player identities, players on
different teams do not share all states which they have a nonzero
probability of transitioning to individually. To get around this, we
remove the columns from each $\mathbf{P}_{\ell_i}$ corresponding to
passes to players not on player $\ell_i$'s team, and reorder the
remaining columns according to the position (guard, center, etc.) of
the associated pass recipient. Thus, the interpretation of transition
distributions $\mathbf{P}_{\ell_i}$ across players $\ell_i$ is as
consistent as possible.
We create a baseline transition profile of a hypothetical
league-average player by averaging these transition probabilities across all players: (with
slight abuse of notation) let $\mathbf{P}_r = \sum_{\ell=1}^L
\mathbf{P}_{\ell}/L$. Using this, we create a new transition
probability matrix $\mathbf{P}_r(\ell)$ by replacing player $\ell$'s
transition probabilities ($\mathbf{P}_{\ell}$) with the league-average
player's ($\mathbf{P}_r$). The baseline (league-average) EPV at time
$t$ is then found by evaluating $\nu^{r(\ell)}_t = \E_{\mathbf{P}_r(\ell)}[ X |
C_t]$. Note that $\nu^{r(\ell)}_t$ depends only on the coarsened state $C_t$ at time $t$, rather than the full history of the possession, $\mathcal{F}^{(Z)}_t$, as in $\nu_t$ \eqref{epveqn}. This ``coarsened'' baseline $\nu_t^{r(\ell)}$ exploits
the fact that, when averaging possessions over the entire season, the
results are (in expectation) identical to using a full-resolution baseline EPV that
assumes the corresponding multiresolution transition probability models
for this hypothetical league-average player.
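A sketch of the underlying computation (ours; the partition into transient and absorbing states and the point values are schematic): treating $\mathcal{C}_{\text{end}}$ as the absorbing states of the coarsened chain with point values $h$, the expected end-of-possession value from every state solves one linear system, and the baseline version simply swaps in the averaged rows $\mathbf{P}_r$ before solving:
\begin{verbatim}
import numpy as np

def expected_points(P, absorbing, h):
    # E[points | current coarsened state] for an absorbing Markov chain.
    # P: transition matrix; absorbing: boolean mask of end states;
    # h: point value (e.g., 0, 1, 2, 3) of each absorbing state.
    Q = P[~absorbing][:, ~absorbing]   # transient -> transient block
    R = P[~absorbing][:, absorbing]    # transient -> absorbing block
    v = np.linalg.solve(np.eye(Q.shape[0]) - Q, R @ h)
    out = np.zeros(P.shape[0])
    out[~absorbing] = v
    out[absorbing] = h
    return out

# For the league-average baseline, replace the block of rows P_l with
# the row-average P_r across players before calling expected_points.
\end{verbatim}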
\subsection{Player-Tracking Data}
In 2013 the National Basketball Association (NBA), in partnership with data provider STATS LLC, installed optical tracking systems in the arenas of all $30$ teams in the league. The systems track the exact two-dimensional locations of every player on the court (as well as the three-dimensional location of the ball) at a resolution of 25Hz, yielding over 1 billion space-time observations over the course of a full season.
Consider, for example, the following possession recorded using this player tracking system. This is
a specific Miami Heat possession against the Brooklyn Nets from the second quarter of a game on November 1, 2013, chosen arbitrarily among those during which LeBron James (widely considered the best NBA player as of 2014) handles the ball.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{graphics/poss_path}
\caption{Miami Heat possession against Brooklyn Nets. Norris Cole wanders into the perimeter (A) before driving toward the basket (B). Instead of taking the shot, he runs underneath the basket (C) and eventually passes to Rashard Lewis (D), who promptly passes to LeBron James (E). After entering the perimeter (F), James slips behind the defense (G) and scores an easy layup (H).}
\label{heat_poss}
\end{figure}
In this particular possession, diagrammed in Figure \ref{heat_poss}, point guard Norris Cole begins with possession of the ball crossing the halfcourt line (panel A). After waiting for his teammates to arrive in the offensive half of the court, Cole wanders gradually into the perimeter (inside the three point line), before attacking the basket through the left post. He draws two defenders, and while he appears to beat them to the basket (B), instead of attempting a layup he runs underneath the basket through to the right post (C). He is still being double teamed and at this point passes to Rashard Lewis (D), who is standing in the right wing three position. As defender Joe Johnson closes, Lewis passes to LeBron James, who is standing about 6 feet beyond the three point line and drawing the attention of Andray Blatche (E). James wanders slowly into the perimeter (F), until just behind the free throw line, at which point he breaks towards the basket. His rapid acceleration (G) splits the defense and gains him a clear lane to the basket. He successfully finishes with a layup (H), providing the Heat two points.
\subsection{Expected Possession Value}
Such detailed data hold both limitless analytical potential for basketball spectators and new methodological challenges to statisticians. Of the dizzying array of questions that could be asked of such data, we choose to focus this paper on one particularly compelling quantity of interest, which we call \textit{expected possession value} (EPV), defined as
the expected number of points the offense will score on a particular possession conditional on that possession's evolution up to time $t$.
For illustration, we plot the EPV curve corresponding to the example Heat possession in Figure \ref{heat_epv}, with EPV estimated using the methodology in this paper. We see several moments when the expected point yield of the possession, given its history, changes dramatically. For the first 2 seconds of the possession, EPV remains around 1. When Cole drives toward the basket, EPV rises until peaking at around 1.34 when Cole is right in front of the basket. As Cole dribbles past the basket (and his defenders continue pursuit), however, EPV falls rapidly, bottoming out at 0.77 before ``resetting'' to 1.00 with the pass to Rashard Lewis. The EPV increases slightly to 1.03 when the ball is then passed to James. As EPV is sensitive to small changes in players' exact locations, we see EPV rise slightly as James approaches the three point line and then dip slightly as he crosses it. Shortly afterwards, EPV rises suddenly as James breaks towards the basket, eluding the defense, and continues rising until he is beneath the basket, when an attempted layup boosts the EPV from 1.52 to 1.62.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{graphics/ticker_1_edit}
\caption{Estimated EPV over time for the possession shown in Figure \ref{heat_poss}. Changes in EPV are induced by changes in players' locations and dynamics of motion; macrotransitions such as passes and shot attempts produce immediate, sometimes rapid changes in EPV. The black line slightly smooths EPV evaluations at each time point (gray dots), which are subject to Monte Carlo error.}
\label{heat_epv}
\end{figure}
In this way, EPV corresponds naturally to a coach's or spectator's sense of how the actions that basketball players take in continuous time help or hurt their team's cause to score in the current possession, and quantifies this in units of expected points.
EPV acts like a stock ticker, providing an instantaneous summary of the possession's eventual point value given all available information, much like a stock price values an asset based on speculation of future expectations.
\subsection{Related Work and Contributions}
Concepts similar to EPV, where final outcomes are modeled conditional on observed progress, have had statistical treatment in other sports, such as in-game win probability in baseball \cite{bukiet1997markov,yang2004two} and football \cite{lock2014using}, as well as in-possession point totals in football \cite{burke2010,goldner2012markov}. These previous efforts can be categorized into either marginal regression/classification approaches, where features of the current game state are mapped directly to expected outcomes, or process-based models that use a homogeneous Markov chain representation of the game to derive outcome distributions. Neither of these approaches is ideal for application to basketball. Marginal regression methodologies ignore the natural martingale structure of EPV, which is essential to its ``stock ticker'' interpretation. On the other hand, while Markov chain methodologies do maintain this ``stock ticker'' structure, applying them to basketball requires discretizing the data, introducing an onerous bias-variance-computation tradeoff that is not present for sports like baseball that are naturally discrete in time.
To estimate EPV effectively, we introduce a novel multiresolution approach in which we model basketball possessions at two separate levels of resolution, one fully continuous and one highly coarsened. By coherently combining these models we are able to obtain EPV estimates that are reliable, sensitive, stochastically consistent, and computationally feasible (albeit intensive).
While our methodology is motivated by basketball, we believe that this research
can serve as an informative case study for analysts working in other application areas where continuous monitoring data are becoming widespread, including traffic monitoring \cite{ihler2006adaptive}, surveillance, and digital marketing \cite{shao2011data}, as well as other sports such as soccer and hockey \cite{thomas2013competing}.
Section \ref{sec:Multiresolution} formally defines EPV within the context of a stochastic process for basketball, introducing the multiresolution modeling approach that makes EPV calculations tractable as averages over future paths of a stochastic process. Parameters for these models, which represent players' decision-making tendencies in various spatial and situational circumstances, are discussed in Section \ref{sec:Macro}. Section \ref{sec:Computation} discusses inference for these parameters using hierarchical models that share information between players and across space. We highlight results from actual NBA possessions in Section \ref{sec:Results}, and show how EPV provides useful new quantifications of player ability and skill. Section \ref{sec:Discussion} concludes with directions for further work.
\iffalse
With its inherently discrete batter/pitcher dynamics, the sport of baseball has received tremendous focus from the statistics community over the past several decades. Because the pitches thrown in a baseball game define a natural set of outcomes, such as a ball, strike, or home run, the entire sequence of a given baseball game may be modeled based on these individual events. Because a batter faces a given set of pitches (or, more coarsely, a set of `at bats') each with its own outcome, understanding batter characteristics boils down to characterizing a set of discrete events. Early examples of statistics' prominent role in baseball include \citeasnoun{efron1975data}, who look at shrinkage estimators of player batting averages, \citeasnoun{albright1993a-statistical}, who call into question the notion of streakiness in batting, and \citeasnoun{albert1994exploring}, who looks at the effect of situational covariates on hitting efficiency. While these references highlight statisticians' early foray into baseball analysis, analyses have projected even further back in time: because basic statistics have been kept from historical games, recent work has applied modern statistical techniques to this historical data to determine, for example, that the best pitcher in 1876 was Albert Spalding, with a 47-12 win-loss record \cite{james2010the-new-bill}.
Baseball is not the only naturally discrete sport to attain significant attention from statisticians. Both golf \cite[e.g.][]{connolly2008skill, berry2001how-ferocious} and chess \cite[e.g.][]{draper1963does} have a history of being studied statistically. Another way in which sports have been studied more generally is to ignore within-game events, and focus solely on modeling the outcomes, or the winners, of the game. A common approach is that of paired comparisons \cite{bradley1952rank}, where each team has a latent skill parameter, and their odds of winning a game is proportional to their skill relative to their competitor's. This notion was later extended to ties \cite{davidson1970extending, rao1967ties} and subsequently to dynamic skill parameters \cite{glickman1999parameter}. While these results have had significant impact on our understanding and prediction of sports outcomes, this focus on discrete events has limited our understanding of within-game processes in dynamic sports such as soccer, hockey, and basketball, which have received considerably less attention from the statistics community. The historical neglect of these sports stems largely from their continuous nature, which is not naturally quantified by a boxscore, or play-by-play description, of the game. With players interacting in real-time, it is inherently difficult to represent the characteristics of a game, or of individual players, with a small set of statistics.
In contrast to baseball or chess, dynamic sports such as basketball consist of complex space-time interactions which \textit{create} discrete outcomes such as assists and points. In this way, basketball is like a high-speed chess match, where teams and players employ tactics which do not necessarily generate points immediately, but can yield higher-value opportunities several ``moves'' down the line. From this viewpoint, it becomes apparent that the decisive moment in a given possession may not have been the three-point shot which resulted in a point, but rather the pass that led to that shot, or even the preceding drive that scrambled the defense. This idea of players interacting and dynamically making decisions which result in creating value at the endpoint of a play differentiates dynamic sports, and hints at the need for more intricate statistical modeling which accounts for the micro-scale spatio-temporal flow inherent to these sports.
Unfortunately, contemporary basketball analyses fail to account for this core idea. Despite many recent innovations, most advanced metrics (PER \cite{hollinger2004pro} and +/- variations \cite{omidiran2011pm}, for example) remain based on simple tallies relating to the terminal states of possessions like points, rebounds, and turnovers. Similarly in hockey, \citeasnoun{thomas2013competing} have discretized the game according to player line changes, considering outcomes as points, censored by player substitutions. While these approaches have shed light on their respective games, they are akin to analyzing a chess match based only on the move that resulted in checkmate, leaving unexplored the possibility that the key move occurred several turns before. This leaves a major gap to be filled, as an understanding of how players contribute to the whole possession---not just the events that end it---can be critical in evaluating players, assessing the quality of their decision-making, and predicting the success of particular in-game tactics. The major obstacle to closing this gap is the current inability to evaluate the individual tactical decisions that form the substructure of every possession of every basketball game. For example, there is no current method to estimate the value of a drive to the basket, or to compare the option of taking a contested shot to the option of passing to an open teammate.
A major opportunity to fill this void is presented by recent data acquisition systems in sports arenas. In the NBA, for instance, starting with the $2013-2014$ season there are multiple cameras mounted in the rafters of every stadium, capturing video which is post-processed to find the spatial location of all players and the ball, $25$ times per second. Similar systems have existed in soccer and football stadiums for years. This optical player-tracking data opens up the opportunity to study the fine-scale spatio-temporal interactions of the players. However, because of the unique character of the data and underlying processes, new statistical methods are required to extract insight from these new data sources.
In this paper, we develop a coherent, quantitative representation of a whole possession that summarizes each moment of the possession in terms of the number of points the offense is expected to score---a quantity we call \textit{expected possession value}, or EPV. To capture the unique characteristics inherent in the endpoint-valued spatio-temporal process which characterizes basketball possessions, we employ a multiresolution semi-Markov process model to encode how ball handlers make decisions on the court based upon their unique skill, their location, and the locations of their team-mates and opponents. By assigning a point value to every tactical option available to a player in a given instant, EPV allows for a decomposition of player's decisions, differentiating for instance between good shots versus shots taken when a more favorable pass was available.
We define EPV in Section \ref{sec:EPV}, and subsequently represent it as a multiresolution semi-Markov process in Section \ref{sec:Multiresolution}. The subsequent two sections break apart the multiresolution structure, detailing the macro- and micro-transition aspects of the model, respectively. In Section \ref{sec:Results} we highlight details of the method's computational implementation, demonstrating the resulting output. Lastly, in Section \ref{sec:Discussion} we conclude the work and highlight a variety of possible uses of the method, several of which are included in the Appendix.
\fi
\section{Introduction}
\label{sec:Intro}
\subfile{EPV_intro.tex}
\section{Multiresolution Modeling}
\label{sec:Multiresolution}
\subfile{multires2.tex}
\section{Transition Model Specification}
\label{sec:Macro}
\subfile{EPV_macro.tex}
\section{Hierarchical Modeling and Inference}
\label{sec:Computation}
\subfile{EPV_computation.tex}
\section{Results}
\label{sec:Results}
\subfile{EPV_results.tex}
\section{Discussion}
\label{sec:Discussion}
\subfile{EPV_discuss.tex}
\newpage
\subsection{Microtransition Model}
The microtransition model describes player movement with the ballcarrier held constant. In the periods between transfers of ball possession (including passes, shots, and turnovers), all players on the court move in order to influence the character of the next ball movement (macrotransition). For instance, the ballcarrier might drive toward the basket to attempt a shot, or move laterally to gain separation from a defender, while his teammates move to position themselves for passes or rebounds, or to set screens and picks. The defense moves correspondingly, attempting to deter easy shot attempts or passes to certain players while simultaneously anticipating a possible turnover.
Separate models are assumed for offensive and defensive players, as we shall describe.
Predicting the motion of offensive players over a short time window is driven by the players' dynamics (velocity, acceleration, etc.). Let the location of offensive player $\ell$ ($\ell \in \{1, \ldots, L = 461\}$) at time $t$ be $\mathbf{z}^{\ell}(t) = (x^{\ell}(t), y^{\ell}(t))$. We then model movement in each of the $x$ and $y$ coordinates using
\begin{align}
x^{\ell}(t + \epsilon) &= x^{\ell}(t) + \alpha^{\ell}_x[x^{\ell}(t) - x^{\ell}(t - \epsilon)] + \eta^{\ell}_x(t)
\label{micro}
\end{align}
(and analogously for $y^{\ell}(t)$). This expression derives from a Taylor series expansion of each player's position in each coordinate, such that $\alpha^{\ell}_x[x^{\ell}(t) - x^{\ell}(t - \epsilon)] \approx \epsilon x^{\ell \prime}(t)$, and $\eta^{\ell}_x(t)$ provides stochastic innovations representing the contribution of higher-order derivatives (acceleration, jerk, etc.).
Because they are driven to score, players' dynamics on offense are nonstationary. When possessing the ball, most players accelerate toward the basket when beyond the three-point line, and decelerate when very close to the basket in order to attempt a shot. Players also accelerate away from the edges of the court as they approach them, in order to stay in bounds. To capture such behavior, we assume spatial structure for the innovations, $\eta^{\ell}_x(t) \sim \mathcal{N} (\mu^{\ell}_x(\mathbf{z}^{\ell}(t)), (\sigma^{\ell}_x)^2)$, where $\mu^{\ell}_x$ maps player $\ell$'s location on the court to an additive effect in \eqref{micro}, which has the interpretation of an acceleration effect; see Figure \ref{accelerations} for an example.
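The following sketch (ours; the acceleration field is a toy stand-in for the estimated surfaces $\mu^{\ell}_x, \mu^{\ell}_y$, and all constants are assumptions) simulates \eqref{micro} forward at 25Hz:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

def toward_basket(z, basket=np.array([5.25, 25.0]), scale=0.02):
    # Toy acceleration field pulling the player toward the basket;
    # the paper estimates these fields nonparametrically per player
    d = basket - z
    return scale * d / (1.0 + np.linalg.norm(d))

eps, alpha, sigma = 0.04, 0.98, 0.05
z_prev, z = np.array([40.0, 25.0]), np.array([39.8, 25.0])
for _ in range(100):             # simulate 4 seconds of movement
    mu = toward_basket(z)
    z_next = z + alpha * (z - z_prev) + mu + rng.normal(0.0, sigma, 2)
    z_prev, z = z, z_next
print(z)                         # ends closer to the basket
\end{verbatim}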
\begin{figure}[h]
\centering
\begin{tabular}{cccc}
\subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/parker_with}} &
\subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/parker_without}} &
\subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/howard_with}} &
\subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/howard_without}}
\end{tabular}
\caption{Acceleration fields $(\mu_x(\mathbf{z}(t)), \mu_y(\mathbf{z}(t)))$ for Tony Parker (a)--(b) and Dwight Howard (c)--(d) with and without ball possession. The arrows point in the direction of the acceleration at each point on the court's surface, and the size and color of the arrows are proportional to the magnitude of the acceleration. Comparing (a) and (c) for instance, we see that when both players possess the ball, Parker more frequently attacks the basket from outside the perimeter. Howard does not accelerate to the basket from beyond the perimeter, and only tends to attack the basket inside the paint.}
\label{accelerations}
\end{figure}
The defensive components of $\mathbb{P}(Z_{t+\epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$, corresponding to the positions of the five defenders, are easier to model conditional on the evolution of the offense's positions. Following \citeasnoun{franks2014defensive}, we assume each defender's position is centered on a linear combination of the basket's location, the ball's location, and the location of the offensive player he is guarding. \citeasnoun{franks2014defensive} use a hidden Markov model (HMM) based on this assumption to learn which offensive players each defender is guarding, such that conditional on defender $\ell$ guarding offender $k$ his location $\mathbf{z}^{\ell}(t) = (x^{\ell}(t), y^{\ell}(t))$ should be normally distributed with mean $\mathbf{m}^k_{\text{opt}}(t) = 0.62\mathbf{z}^k(t) + 0.11\mathbf{z}_{\text{bask}} + 0.27\mathbf{z}_{\text{ball}}(t)$.
Of course, the dynamics (velocity, etc.) of defensive players are still hugely informative for predicting their locations within a small time window. Thus our microtransition model for defender $\ell$ balances these dynamics with the mean path induced by the player he is guarding:
\begin{align}
x^{\ell}(t + \epsilon)|m^k_{\text{opt}, x}(t) & \sim \mathcal{N} \bigg( x^{\ell}(t) + a^{\ell}_x[x^{\ell}(t) - x^{\ell}(t-\epsilon)] \nonumber \\
& \hspace{-2cm} + b^{\ell}_x[m^k_{\text{opt}, x}(t + \epsilon) - m^k_{\text{opt}, x}(t)] + c_x^{\ell}[x^{\ell}(t) - m^k_{\text{opt}, x}(t + \epsilon)], (\tau^{\ell}_x)^2\bigg)
\label{micro-defense}
\end{align}
and symmetrically in $y$. Rather than implement the HMM procedure used in \citeasnoun{franks2014defensive}, we simply assume each defender is guarding at time $t$ whichever offensive player $j$ yields the smallest residual $||\mathbf{z}^{\ell}(t) - \mathbf{m}^j_{\text{opt}}(t)||$, noting that more than one defender may be guarding the same offender (as in a ``double team''). Thus, conditional on the locations of the offense at time $t+\epsilon$, \eqref{micro-defense} provides a distribution over the locations of the defense at $t + \epsilon$.
\subsection{Macrotransition Entry Model}
The macrotransition entry model $\mathbb{P}(M(t) | \mathcal{F}^{(Z)}_t)$ predicts ball movements that instantaneously shift the course of the possession---passes, shot attempts, and turnovers. As such,
we consider a family of macrotransition entry models $\mathbb{P}(M_j(t)
|\mathcal{F}^{(Z)}_t)$, where $j$ indexes the type of macrotransition corresponding
to $M(t)$. There are six such types: four pass options (indexed,
without loss of generality, $j \leq 4$), a shot attempt ($j = 5$), or
a turnover $(j=6)$. Thus, $M_j(t)$ is the event that a macrotransition
of type $j$ begins in the time window $(t, t + \epsilon]$, and $M(t) =
\bigcup_{j=1}^6 M_j(t)$. Since macrotransition types are disjoint, we
also know $\mathbb{P}(M(t) | \mathcal{F}^{(Z)}_t) = \sum_{j=1}^6 \mathbb{P}(M_j(t) | \mathcal{F}^{(Z)}_t)$.
We parameterize the macrotransition entry models as competing risks \cite{prentice1978analysis}: assuming player $\ell$ possesses the ball at time $t > 0$ during a possession, denote
\begin{equation}\label{hazard-def}
\lambda^{\ell}_j (t) = \lim_{\epsilon \rightarrow 0} \frac{\mathbb{P}(M_j(t) |\mathcal{F}^{(Z)}_t)}{\epsilon}
\end{equation}
as the hazard for macrotransition $j$ at time $t$.
We assume these are log-linear,
\begin{equation}\label{hazard-equation}
\log(\lambda^{\ell}_j(t)) = [\mathbf{W}_j^{\ell}(t)]'\boldsymbol{\beta}_j^{\ell} + \xi_j^{\ell}\left(\mathbf{z}^{\ell}(t)\right) + \left(\tilde{\xi}_j^{\ell}\left(\mathbf{z}_{j}(t)\right)\mathbf{1}[j \leq 4]\right),
\end{equation}
where $\mathbf{W}_j^{\ell}(t)$ is a $p_j \times 1$ vector of time-varying covariates, $\boldsymbol{\beta}_j^{\ell}$ a $p_j \times 1$ vector of coefficients, $\mathbf{z}^{\ell}(t)$ is the ballcarrier's 2D location on the court (denote the court space $\mathbb{S}$) at time $t$, and $\xi_j^{\ell}: \mathbb{S} \rightarrow \mathbb{R}$ is a mapping of the player's court location to an additive effect on the log-hazard, providing spatial variation. The last term in \eqref{hazard-equation} only appears for pass events $(j \leq 4)$ to incorporate the location of the receiving player for the corresponding pass: $\mathbf{z}_j(t)$ (which slightly abuses notation) provides his location on the court at time $t$, and $\tilde{\xi}_j^{\ell}$, analogously to $\xi_j^{\ell}$, maps this location to an additive effect on the log-hazard. The four different passing options are identified by the (basketball) position of the potential pass recipient; each of the ballcarrier's four teammates occupies one of point guard (PG), shooting guard (SG), small forward (SF), power forward (PF), and center (C).
The macrotransition model \eqref{hazard-def}--\eqref{hazard-equation} represents the ballcarrier's decision-making process as an interpretable function of the unique basketball predicaments he faces. For example, in considering the hazard of a shot attempt, the time-varying covariates ($\mathbf{W}_j^{\ell}(t)$) we use are the distance between the ballcarrier and his nearest defender (transformed as $\log(1+d)$ to moderate the influence of extremely large or small observed distances), an indicator for whether the ballcarrier has dribbled since gaining possession, and a constant representing a baseline shooting rate (this is not time-varying)\footnote{Full details on all covariates used for all macrotransition types are included in Appendix \ref{Covariates}.}. The spatial effects $\xi_j^{\ell}$ reveal locations where player $\ell$ is more/less likely to attempt a shot in a small time window, holding fixed the time-varying covariates $\mathbf{W}_j^{\ell}(t)$. Such spatial effects (illustrated in Figure \ref{fig:spatial_effects}) are well-known to be nonlinear in distance from the basket and asymmetric about the angle to the basket \cite{miller2013icml}.
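To illustrate how the competing risks \eqref{hazard-def}--\eqref{hazard-equation} translate into event draws over a short window, consider the following sketch (ours; the log-hazard values are arbitrary stand-ins for fitted model output):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def sample_macro(log_hazards, eps=0.04):
    # Decide whether any of the 6 macrotransition types begins in
    # (t, t + eps], and if so which; None means no macrotransition
    lam = np.exp(np.asarray(log_hazards))        # hazards per second
    total = lam.sum()
    if rng.random() > 1.0 - np.exp(-total * eps):
        return None
    return rng.choice(len(lam), p=lam / total)   # j w.p. lam_j / sum_k lam_k

print(sample_macro([-2.0, -3.0, -4.0, -3.5, -1.5, -4.5]))
\end{verbatim}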
\begin{figure}
\centering
\begin{tabular}{cccc}
\subfloat[$\xi_1, \tilde{\xi}_1$ (pass to PG)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass1_spatial_2}} &
\subfloat[$\xi_2, \tilde{\xi}_2$ (pass to SG)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass2_spatial_2}} &
\multirow{-2}[8]{*}{\subfloat[$\xi_5$ (shot-taking)]{\includegraphics[width=0.23\linewidth,height=0.17\textheight]{graphics/lebron_take_spatial_2}}} &
\multirow{-2}[8]{*}{\subfloat[$\xi_6$ (turnover)]{\includegraphics[width=0.23\linewidth,height=0.17\textheight]{graphics/lebron_TO_spatial_2}}}
\\
\subfloat[$\xi_3, \tilde{\xi}_3$ (pass to PF)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass3_spatial_2}} &
\subfloat[$\xi_4, \tilde{\xi}_4$ (pass to C)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass4_spatial_2}}
& &
\end{tabular}
\caption[Sample plots of spatial effects]{Plots of estimated spatial effects $\xi$ for LeBron James as the ballcarrier. For instance, plot (c) reveals the largest effect on James' shot-taking hazard occurs near the basket, with noticeable positive effects also around the three point line (particularly in the ``corner 3'' shot areas). Plot (a) shows that he is more likely (per unit time) to pass to the point guard when at the top of the arc---more so when the point guard is positioned in the post area.}
\label{fig:spatial_effects}
\end{figure}
All model components---the time-varying covariates, their coefficients, and the spatial effects $\xi, \tilde{\xi}$ differ across macrotransition types $j$ for the same ballcarrier $\ell$, as well as across all $L=461$ ballcarriers in the league during the 2013-14 season. This reflects the fact that players' decision-making tendencies and skills are unique; a player such as Dwight Howard will very rarely attempt a three point shot even if he is completely undefended, while someone like Stephen Curry will attempt a three point shot even when closely defended.
\subsection{Macrotransition Exit Model}
\label{macro_exit}
Using the six macrotransition types introduced in the previous subsection, we can express the macrotransition exit model \ref{M2} when player $\ell$ has possession as
\begin{align}
\mathbb{P}(C_{\delta_t} | M(t), \mathcal{F}^{(Z)}_t) &= \sum_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t) \mathbb{P}(M_j(t) | M(t), \mathcal{F}^{(Z)}_t) \nonumber \\
&= \sum_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t) \frac{\lambda_j^{\ell}(t)}{\sum_{k=1}^6 \lambda^{\ell}_k(t)},
\label{exitmodel}
\end{align}
using the competing risks model for $M_j(t)$ given by \eqref{hazard-def}--\eqref{hazard-equation}. As terms $\lambda^{\ell}_j(t)$ are supplied by \eqref{hazard-equation}, we focus on the macrotransition exit model conditional on the macrotransition type, $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$.
For each $j=1, \ldots, 4$, $M_j(t)$ represents a pass-type macrotransition, therefore $C_{\delta_t}$ is a possession state $c' \in \mathcal{C}_{\text{poss}}$ for the player corresponding to pass option $j$. Thus, a model for $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ requires us to predict the state $c' \in \mathcal{C}_{\text{poss}}$ the $j$th pass target will occupy upon receiving the ball. Our approach is to simply assume $c'$ is given by the pass target's location at the time the pass begins. While this is naive and could be improved by further modeling, it is a reasonable approximation in practice, because with only seven court regions and two defensive spacings comprising $\mathcal{C}_{\text{poss}}$, the pass recipient's position in this space is unlikely to change during the time the pass is traveling en route, $\delta_t - t$ (a notable exception is the alley-oop pass, which leads the pass recipient from well outside the basket to a dunk or layup within the restricted area). Our approach thus collapses $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ to a single state in $\mathcal{C}_{\text{poss}}$, which corresponds to pass target $j$'s location at time $t$.
When $j=5$, and a shot attempt occurs in $(t, t + \epsilon]$, $C_{\delta_t}$ is either a made/missed 2 point shot, or made/missed three point shot. For sufficiently small $\epsilon$, we observe at $Z_t$ whether the shot attempt in $(t, t + \epsilon]$ is a two- or three-point shot, therefore our task in providing $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ is modeling the shot's probability of success. We provide a parametric shot probability model, which shares the same form as the macrotransition entry model \eqref{hazard-def}--\eqref{hazard-equation}, though we use a logit link function as we are modeling a probability instead of a hazard. Specifically, for player $\ell$ attempting a shot at time $t$, let $p^{\ell}(t)$ represent the probability of the shot attempt being successful (resulting in a basket). We assume
\begin{equation}\label{shotprob}
\text{logit}(p^{\ell}(t)) = [\mathbf{W}_\textrm{s}^{\ell}(t)]'\boldsymbol{\beta}_\textrm{s}^{\ell} + \xi_\textrm{s}^{\ell}(\mathbf{z}^{\ell}(t))
\end{equation}
with components in \eqref{shotprob} having the same interpretation as their $j$-indexed counterparts in the competing risks model \eqref{hazard-equation}; that is, $\mathbf{W}_\textrm{s}^{\ell}$ is a vector of time-varying covariates (we use distance to the nearest defender---transformed as $\log(1 + d)$---an indicator for whether the player has dribbled, and a constant to capture baseline shooting efficiency) with $\boldsymbol{\beta}_\textrm{s}^{\ell}$ a corresponding vector of coefficients, and $\xi_\textrm{s}^{\ell}$ a smooth spatial effect, as in \eqref{hazard-equation}.
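As a minimal illustration of \eqref{shotprob}, the Python sketch below evaluates a shot probability for hypothetical inputs. The coefficient values and the value of the spatial effect $\xi_\textrm{s}$ at the shooter's location are invented for the example; only the covariate set (intercept, $\log(1+d)$ defender distance, dribble indicator) follows the text.
\begin{verbatim}
# A sketch of the shot probability model (shotprob), assuming the spatial
# effect xi_s has already been evaluated at the shooter's location.
import numpy as np

def shot_probability(defender_dist, has_dribbled, xi_s,
                     beta=(-0.8, 0.5, -0.2)):    # hypothetical coefficients
    """logit(p) = W'beta + xi_s(z), as in equation (shotprob)."""
    w = np.array([1.0, np.log(1.0 + defender_dist), float(has_dribbled)])
    logit_p = w @ np.array(beta) + xi_s
    return 1.0 / (1.0 + np.exp(-logit_p))

# Example: an open catch-and-shoot look with a favorable spatial effect.
print(shot_probability(defender_dist=8.0, has_dribbled=False, xi_s=0.3))
\end{verbatim}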
Lastly, when $j=6$ and $M_j(t)$ represents a turnover, $C_{\delta_t}$ is equal to the turnover state in $\mathcal{C}_{\text{end}}$ with probability 1.
Note that the macrotransition exit model is mostly trivial when no player has ball possession at time $t$, since this implies $C_t \in \mathcal{C}_{\text{trans}} \cup \mathcal{C}_{\text{end}}$ and $\tau_t = t$. If $C_t \in \mathcal{C}_{\text{end}}$, then the possession is over and $C_{\delta_t} = C_t$. Otherwise, if $C_t \in \mathcal{C}_{\text{trans}}$ represents a pass attempt or turnover in progress, the following state $C_{\delta_t}$ is deterministic given $C_t$ (recall that the pass recipient and his location are encoded in the definition of pass attempt states in $\mathcal{C}_{\text{trans}}$). When $C_t$ represents a shot attempt in progress, the macrotransition exit model reduces to the shot probability model \eqref{shotprob}. Finally, when $C_t$ is a rebound in progress, we ignore full-resolution information and simply use the Markov transition probabilities from $\mathbf{P}$\footnote{Our rebounding model could be improved by using full-resolution spatiotemporal information, as players' reactions to the missed shot event are informative of who obtains the rebound.}.
\subsection{Transition Probability Matrix for Coarsened Process}
The last model necessary for calculating EPV is \ref{M4}, the transition probability matrix for the embedded Markov chain corresponding to the coarsened process $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$. This transition probability matrix is used to compute the term $\E[X | C_{\delta_t} = c]$ that appears in Theorem \ref{epvtheorem}. Recall that we denote the transition probability matrix as $\mathbf{P}$, where $P_{qr} = \mathbb{P}(C^{(i+1)} = c_r | C^{(i)} = c_q)$ for any $c_q, c_r \in \mathcal{C}$.
Without any probabilistic structure assumed for $C^{(i)}$ beyond the Markov property, for all $q,r$ the maximum likelihood estimator of $P_{qr}$ is the observed transition frequency, $\hat{P}_{qr} = \frac{N_{qr}}{\sum_{r'}N_{qr'}}$, where $N_{qr}$ counts the number of transitions $c_q \rightarrow c_r$. Of course, this estimator performs poorly if the number of visits to any particular state $c_q$ is small (for instance, Dwight Howard closely defended in the corner 3 region), as the estimated transition probabilities from that state may be degenerate.
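The following sketch computes $\hat{P}_{qr}$ from a synthetic sequence of coarsened states; rows with few visits illustrate the degeneracy just described. The integer state labels are stand-ins for elements of $\mathcal{C}$.
\begin{verbatim}
# Maximum likelihood estimation of the transition matrix:
# P_hat_{qr} = N_{qr} / sum_{r'} N_{qr'}.
import numpy as np

def transition_mle(state_seq, n_states):
    counts = np.zeros((n_states, n_states))
    for q, r in zip(state_seq[:-1], state_seq[1:]):
        counts[q, r] += 1.0
    row_totals = counts.sum(axis=1, keepdims=True)
    # Rows never visited are left as zeros rather than 0/0.
    return np.divide(counts, row_totals,
                     out=np.zeros_like(counts), where=row_totals > 0)

seq = [0, 1, 0, 2, 1, 0, 1, 2, 2, 0]   # synthetic coarsened state sequence
print(transition_mle(seq, n_states=3))
\end{verbatim}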
Under our multiresolution model for basketball possessions, however,
expected transition counts between many coarsened states $C^{(i)}$ can
be computed as summaries of our macrotransition models \ref{M1}--\ref{M2}. To show this,
for any arbitrary $t > 0$ let $M_j^r(t)$ be the event
$$M_j^r(t) = \{\mathbb{P}(M_j(t) \text{ and } C_{t + \epsilon} = c_r | \mathcal{F}^{(Z)}_t) > 0\}.$$
Thus $M_j^r(t)$ occurs if it is possible for a macrotransition of type
$j$ into state $c_r$ to occur in $(t, t + \epsilon]$. When applicable, we can use this to get the expected number of $c_q
\rightarrow c_r$ transitions:
\begin{equation}
\label{eq:shrunken_tprob}
\tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_q} \lambda^{\ell}_j(t)\mathbf{1}[M_j^r(t)].
\end{equation}
When $c_q$ is a shot attempt state from $c_{q'} \in \mathcal{C}_{\text{poss}}$, \eqref{eq:shrunken_tprob} is adjusted using the shot probability model \eqref{shotprob}: $\tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_{q'}} \lambda^{\ell}_j(t)p^{\ell}(t)\mathbf{1}[M_j^r(t)]$ when $c_r$ represents an eventual made shot, and $\tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_{q'}} \lambda^{\ell}_j(t)(1 - p^{\ell}(t))\mathbf{1}[M_j^r(t)]$ when $c_r$ represents an eventual miss.
By replacing raw counts with their expectations conditional on higher-resolution data, leveraging the hazards as in \eqref{eq:shrunken_tprob} provides a Rao-Blackwellized (unbiased, lower variance) alternative to counting observed transitions. Furthermore, due to the hierarchical parameterization of the hazards $\lambda_j^{\ell}(t)$ (discussed in Section \ref{sec:Computation}), information is shared across space and player combinations so that estimated hazards are well behaved even in situations with little observed data. Thus, when $c_q \rightarrow c_r$ represents a macrotransition, we use $\tilde{N}_{qr}$ in place of $N_{qr}$ when calculating $\hat{P}_{qr}$.
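A short sketch of the expected-count computation in \eqref{eq:shrunken_tprob}, with placeholder hazard values standing in for the fitted $\lambda^{\ell}_j(t)$:
\begin{verbatim}
# Rao-Blackwellized transition counts: accumulate epsilon * lambda_j(t)
# over the time points with C_t = c_q at which a type-j macrotransition
# into c_r is possible (the indicator 1[M_j^r(t)]).
import numpy as np

epsilon = 1.0 / 25.0   # temporal resolution of the tracking data

def expected_count(hazards_at_cq, reachable):
    hazards_at_cq = np.asarray(hazards_at_cq, dtype=float)
    reachable = np.asarray(reachable, dtype=float)
    return epsilon * np.sum(hazards_at_cq * reachable)

lam = [0.8, 1.1, 0.9, 1.4]   # placeholder lambda_j(t) while in state c_q
ind = [1, 1, 0, 1]           # 1[M_j^r(t)] at the same time points
print(expected_count(lam, ind))   # N_tilde_{qr}
\end{verbatim}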
\subsection{Possession case study}\label{subsec:study}
After obtaining parameter estimates for the multiresolution transition models, we can calculate EPV using Theorem \ref{epvtheorem} and plot $\nu_t$ throughout the course of any possession in our data. We view such (estimated) EPV curves as the main contribution of our work; their behavior and potential inferential value were introduced in Section \ref{sec:Intro}. We illustrate this value by revisiting the possession highlighted in Figure \ref{heat_poss} through the lens of EPV. Analysts may also find meaningful aggregations of EPV curves that summarize players' behavior over a possession, game, or season in terms of EPV. We offer two such aggregations in this section.
\subsection{Predictive Performance of EPV}
Before analyzing EPV estimates, it is essential to check that such estimates are properly calibrated \cite{gneiting2007probabilistic} and accurate enough to be useful to basketball analysts. Our paper introduces EPV, so there are no existing results against which to benchmark the predictive performance of our estimates. We can, however, compare the proposed implementation for estimating EPV with simpler models based on lower-resolution information, to verify whether our multiresolution model captures meaningful features of our data. Assessing the predictive performance of an EPV estimator is difficult because the estimand is a curve whose length varies by possession. Moreover, we never observe any portion of this curve; we only know its endpoint. Therefore, rather than comparing estimated EPV curves between our method and alternatives, we compare estimated transition probabilities. For any stochastically consistent EPV estimator, if the predicted transitions are properly calibrated, then the derived EPV estimates should be as well.
For the inference procedure in Section \ref{sec:Computation}, we use only 90\% of our data set for parameter inference, with the remaining 10\% used to evaluate the out-of-sample performance of our model. We also evaluated the out-of-sample performance of simpler macrotransition entry/exit models, which use varying amounts of information from the data. Table~\ref{loglik-table} provides the out-of-sample log-likelihood of each model specification on the held-out 10\% of the data. In particular, we start with a simple model employing constant hazards for each player/event type, then successively add situational covariates, spatial information, and finally the full hierarchical priors. Without any shrinkage, our full model in some cases performs worse than a model with no spatial effects; with shrinkage, it consistently performs best (highest log-likelihood) among the configurations compared. This behavior justifies the prior structure introduced in Section \ref{sec:Computation}.
\begin{table}[ht]
\centering
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{4}{c}{Model Terms} \\
\midrule
Macro. type & Player & Covariates & Covariates + Spatial & Full \\
\midrule
Pass1 & -29.4 & -27.7 & -27.2 & -26.4 \\
Pass2 & -24.5 & -23.7 & -23.2 & -22.2 \\
Pass3 & -26.3 & -25.2 & -25.3 & -23.9 \\
Pass4 & -20.4 & -20.4 & -24.5 & -18.9 \\
Shot Attempt & -48.9 & -46.4 & -40.9 & -40.7 \\
Made Basket & -6.6 & -6.6 & -5.6 & -5.2 \\
Turnover & -9.3 & -9.1 & -9.0 & -8.4 \\
\bottomrule
\end{tabular}
\caption{Out-of-sample log-likelihood (in thousands) for macrotransition entry/exit models under various model specifications. ``Player'' assumes constant hazards for each player/event type combination. ``Covariates'' augments this model with the situational covariates $\mathbf{W}^{\ell}_j(t)$ given in \eqref{hazard-equation}. ``Covariates + Spatial'' adds a spatial effect, yielding \eqref{hazard-equation} in its entirety. Lastly, ``Full'' implements this model with the full hierarchical prior structure discussed in Section \ref{sec:Computation}.}
\label{loglik-table}
\end{table}
\subsection{Possession Inference from Multiresolution Transitions}
Understanding the calculation of EPV in terms of multiresolution transitions is a valuable exercise for a basketball analyst, as these model components reveal precisely how the EPV estimate derives from the spatiotemporal circumstances of the time point considered. Figure \ref{heat_detail} diagrams four moments during our example possession (introduced originally in Figures \ref{heat_poss} and \ref{heat_epv}) in terms of multiresolution transition probabilities. These diagrams illustrate Theorem \ref{epvtheorem} by showing EPV as a weighted average of the value of the next macrotransition. Potential ball movements representing macrotransitions are shown as arrows, with their respective values and probabilities graphically illustrated by color and line thickness (this information is also annotated explicitly). Microtransition distributions are also shown, indicating distributions of players' movement over the next two seconds. Note that the possession diagrammed here was omitted from the data used for parameter estimation.
\captionsetup[subfigure]{labelformat=empty}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/ticker_2}} &
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_1_2}} &
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/legend}} \\
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_2_2}} &
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_3_2}} &
\subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_4_2}}
\end{tabular}
\caption{Detailed diagram of EPV as a function of multiresolution transition probabilities for four time points (labeled A, B, C, D) of the possession featured in Figures \ref{heat_poss}--\ref{heat_epv}. Two seconds of microtransitions are shaded (with forecasted positions at short time horizons darker), while macrotransitions are represented by arrows, using color and line thickness to encode the value (V) and probability (P) of each macrotransition. The value and probability of the ``other'' category represent the case that no macrotransition occurs during the next two seconds.}
\label{heat_detail}
\end{figure}
Analyzing Figure \ref{heat_detail}, we see that our model estimates largely agree with basketball intuition. For example, players are quite likely to take a shot when they are near to and/or moving towards the basket, as shown in panels A and D. Additionally, because LeBron James is a better shooter than Norris Cole, the value of his shot attempt is higher, even though in the snapshot in panel D he is much farther from the basket than Cole is in panel A. While the value of the shot attempt averages over future microtransitions, which may move the player closer to the basket, when macrotransition hazards are high this average is dominated by microtransitions on very short time scales.
We also see Ray Allen, in the right corner 3, as consistently one of the most valuable pass options during this possession, particularly when he is being less closely defended as in panels A and D. In these panels, though, we never see an estimated probability of him receiving a pass above 0.05, most likely because he is being fairly closely defended for someone so far from the ball, and because there are always closer passing options for the ballcarrier. Similarly, while Chris Bosh does not move much during this possession, he is most valuable as a passing option in panel C where he is closest to the basket and without any defenders in his lane.
The estimated probabilities and values of the macrotransitions highlighted in Figure \ref{heat_detail} thus match well with basketball intuition.
The analysis presented here could be repeated on any of the hundreds of thousands of possessions available in a season of optical tracking data. EPV plots as in Figure \ref{heat_epv} and diagrams as in Figure \ref{heat_detail} provide powerful insight into how players' movements and decisions contribute value to their team's offense. With this insight, coaches and analysts can formulate strategies and offensive schemes that make optimal use of their players' abilities---or defensive strategies that best suppress the motifs and situations that generate value for the opposing offense.
\subsection{EPV-Added}
Aggregations of EPV estimates across possessions can yield useful summaries for player evaluation. For example, \textit{EPV-Added} (EPVA) quantifies a player's overall offensive value through his movements and decisions while handling the ball, relative to the estimated value contributed by a league-average player receiving ball possession in the same situations. The notion of \textit{relative} value is important because the martingale structure of EPV ($\nu_t$) prevents any meaningful aggregation of raw EPV across a specific player's possessions: $\E[\nu_{t + \epsilon} - \nu_t] = 0$ for all $t$, meaning that \textit{on average} EPV does not change during any specific player's ball handling. Thus, while we see EPV skyrocket as LeBron James receives the ball and eventually attacks the basket in Figure \ref{heat_epv}, the definition of EPV prevents such increases from being observed on average.
If player $\ell$ has possession of the ball starting at time $t_s$ and ending at $t_e$, the quantity $\nu_{t_e} - \nu_{t_s}^{r(\ell)}$ estimates the value contributed by player $\ell$ during his ball possession, relative to the hypothetical league-average player (represented by $\nu_{t_s}^{r(\ell)}$). We calculate EPVA for player $\ell$ (EPVA($\ell$)) by summing such differences over all of player $\ell$'s touches (and dividing by the number of games played by player $\ell$ to provide standardization):
\begin{equation}\label{EPVA}
\text{EPVA}(\ell) = \frac{1}{\# \text{ games for $\ell$}}\sum_{\{t_s, t_e\} \in \mathcal{T}^{\ell}} \nu_{t_e} - \nu_{t_s}^{r(\ell)}
\end{equation}
where $\mathcal{T}^{\ell}$ contains all intervals of form $[t_s, t_e]$ that span player $\ell$'s ball possession. Specific details on calculating $\nu_t^{r(\ell)}$ are included in Appendix \ref{subsec:EPVA}.
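For concreteness, the sketch below implements the aggregation in \eqref{EPVA} given touch-level inputs. The paired values $(\nu_{t_s}^{r(\ell)}, \nu_{t_e})$ are hypothetical; in practice they come from the fitted model and the replacement-player calculation in Appendix \ref{subsec:EPVA}.
\begin{verbatim}
# EPVA(l): sum of (end-of-touch EPV minus league-average-replacement EPV
# at the start of the touch), divided by games played.
def epva(touches, n_games):
    """touches: iterable of (nu_start_replacement, nu_end) pairs, one per
    interval [t_s, t_e] of ball possession."""
    return sum(nu_end - nu_repl for nu_repl, nu_end in touches) / n_games

touches = [(1.02, 1.10), (0.98, 0.95), (1.05, 1.21)]  # hypothetical touches
print(epva(touches, n_games=1))
\end{verbatim}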
Averaging over games implicitly rewards players who have high usage, even if their value added per touch might be low. Often, one-dimensional offensive players accrue the most EPVA per touch since they only handle the ball when they are uniquely suited to scoring; for instance, some centers (such as the Clippers' DeAndre Jordan) only receive the ball right next to the basket, where their height offers a considerable scoring advantage over other players in the league. Thus, averaging by game---not touch---balances players' efficiency per touch with their usage and importance in the offense. Depending on the context of the analysis, EPVA can also be adjusted to account for team pace (by normalizing per 100 possessions) or individual usage (by normalizing by player-touches).
Table~\ref{epv_tab} provides a list of the top and bottom 10 ranked players by EPVA using our 2013-14 data. Generally, players with high EPVA effectively adapt their decision-making process to the spatiotemporal circumstances they inherit when gaining possession. They receive the ball in situations that are uniquely suited to their abilities, so that on average the rest of the league is less successful in these circumstances. Players with lower EPVA are not necessarily ``bad'' players in any conventional sense; their actions simply tend to lead to fewer points than other players given the same options. Of course, EPVA provides a limited view of a player's overall contributions since it does not quantify players' actions on defense, or other ways that a player may impact EPV while not possessing the ball (though EPVA could be extended to include these aspects).
\begin{table}[ht]
\centering
\begin{tabular}{rlr}
\toprule
Rank & Player & EPVA \\
\midrule
1 & Kevin Durant & 3.26 \\
2 & LeBron James & 2.96 \\
3 & Jose Calderon & 2.79 \\
4 & Dirk Nowitzki & 2.69 \\
5 & Stephen Curry & 2.50 \\
6 & Kyle Korver & 2.01 \\
7 & Serge Ibaka & 1.70 \\
8 & Channing Frye & 1.65 \\
9 & Al Horford & 1.55 \\
10 & Goran Dragic & 1.54 \\
\bottomrule
\end{tabular}
\quad
\begin{tabular}{rlr}
\toprule
Rank & Player & EPVA \\
\midrule
277 & Zaza Pachulia & -1.55 \\
278 & DeMarcus Cousins & -1.59 \\
279 & Gordon Hayward & -1.61 \\
280 & Jimmy Butler & -1.61 \\
281 & Rodney Stuckey & -1.63 \\
282 & Ersan Ilyasova & -1.89 \\
283 & DeMar DeRozan & -2.03 \\
284 & Rajon Rondo & -2.27 \\
285 & Ricky Rubio & -2.36 \\
286 & Rudy Gay & -2.59 \\
\bottomrule
\end{tabular}
\caption[]{Top/bottom 10 players by EPVA per game in 2013-14, minimum 500 touches in season.}
\label{epv_tab}
\end{table}
As such, we stress that EPVA is not a ranking of the best and worst players in the NBA. Analysts should also be aware that the league-average player used as a baseline is completely hypothetical, and we heavily extrapolate our model output by assuming this nonexistent player possesses the ball in all the situations encountered by an actual NBA player. The extent to which such an extrapolation is valid is a judgment a basketball expert can make. Alternatively, one can consider EPV-added over \textit{specific} players (assuming player $\ell_2$ receives the ball in the same situations as player $\ell_1$), using the same framework developed for EPVA. Such a quantity may actually be more useful, particularly if the players being compared play similar roles on their teams and face similar situations, so that the degree of extrapolation is minimized.
\subsection{Shot Satisfaction}
Aggregations of the individual components of our multiresolution transition models can also provide useful insights. For example, another player metric we consider is called \textit{shot satisfaction}. For each shot attempt a player takes, we ask how satisfied the player should be with his decision to shoot: what was the expected point value of his passing options at the time of the shot? If, for a particular player, the EPV measured at his shot attempts is higher than the EPV conditioned on his possible passes at the same time points, then by shooting the player is usually making the best decision for his team. On the other hand, players with pass options at least as valuable as their shots should regret their shot attempts (we use ``satisfaction'' as the opposite of regret), as passes in these situations have higher expected value.
\begin{table}[h!]
\centering
\begin{tabular}{rlr}
\toprule
Rank & Player & Shot Satis. \\
\midrule
1 & Mason Plumlee & 0.35 \\
2 & Pablo Prigioni & 0.31 \\
3 & Mike Miller & 0.27 \\
4 & Andre Drummond & 0.26 \\
5 & Brandan Wright & 0.24 \\
6 & DeAndre Jordan & 0.24 \\
7 & Kyle Korver & 0.24 \\
8 & Jose Calderon & 0.22 \\
9 & Jodie Meeks & 0.22 \\
10 & Anthony Tolliver & 0.22 \\
\bottomrule
\end{tabular}
\quad
\begin{tabular}{rlr}
\toprule
Rank & Player & Shot Satis. \\
\midrule
277 & Garrett Temple & -0.02 \\
278 & Kevin Garnett & -0.02 \\
279 & Shane Larkin & -0.02 \\
280 & Tayshaun Prince & -0.03 \\
281 & Dennis Schroder & -0.04 \\
282 & LaMarcus Aldridge & -0.04 \\
283 & Ricky Rubio & -0.04 \\
284 & Roy Hibbert & -0.05 \\
285 & Will Bynum & -0.05 \\
286 & Darrell Arthur & -0.05 \\
\bottomrule
\end{tabular}
\caption[]{Top/bottom 10 players by shot satisfaction in 2013-14, minimum 500 touches in season.}
\label{satis_tab}
\end{table}
Specifically, we calculate
\begin{equation}\label{satisfaction}
\text{SATIS}(\ell) = \frac{1}{|\mathcal{T}^{\ell}_{\text{shot}}|} \sum_{t \in \mathcal{T}^{\ell}_{\text{shot}}} \nu_t - \E\left[X \mid \bigcup_{j=1}^4 M_j(t), \mathcal{F}^{(Z)}_t \right]
\end{equation}
where $\mathcal{T}^{\ell}_{\text{shot}}$ indexes the times, $\{t : M_5(t)\}$, at which player $\ell$ attempts a shot. Recalling that macrotransitions $j=1, \ldots, 4$ correspond to pass events (and $j=5$ to a shot attempt), $\bigcup_{j=1}^4 M_j(t)$ is the event that a pass occurs in $(t, t + \epsilon]$. Unlike EPVA, shot satisfaction SATIS($\ell$) is expressed as an average per shot (not per game), which favors players such as three-point specialists, who often take fewer shots than their teammates but do so in situations where their shot attempts are extremely valuable. Table \ref{satis_tab} provides the top/bottom 10 players in shot satisfaction for our 2013-14 data. While players who mainly attempt three-pointers (e.g., Miller, Korver) and/or shots near the basket (e.g., Plumlee, Jordan) have the most shot satisfaction, players who primarily take mid-range or long-range two-pointers (e.g., Aldridge, Garnett) or who are poor shooters (e.g., Rubio, Prince) have the least. However, because shot satisfaction numbers are mostly positive league-wide, players still shoot relatively efficiently---almost every player generally helps his team by shooting rather than passing in the same situations, though some players do so more than others.
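The sketch below mirrors \eqref{satisfaction} given precomputed inputs; both the EPV values at shot attempts and the pass-conditional expectations are hypothetical stand-ins for fitted model output.
\begin{verbatim}
# Shot satisfaction: per-shot average of nu_t minus the expected value
# conditional on a pass (the union of macrotransition types j = 1,...,4).
import numpy as np

def shot_satisfaction(nu_at_shots, pass_values):
    nu_at_shots = np.asarray(nu_at_shots, dtype=float)
    pass_values = np.asarray(pass_values, dtype=float)
    return np.mean(nu_at_shots - pass_values)

print(shot_satisfaction([1.20, 0.95, 1.40], [1.00, 0.97, 1.10]))
\end{verbatim}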
We stress that the two derived metrics given in this paper, EPVA and shot satisfaction, are simply examples of the kinds of analyses enabled by EPV. Conventional metrics currently used in basketball analysis do measure shot selection and efficiency, as well as passing rates and assists, yet EPVA and shot satisfaction are novel in analyzing these events in their spatiotemporal contexts.
\subsection{Estimator Criteria}
\label{subsec:multiCrit}
We have defined EPV in \eqref{epvdef} as an unobserved, theoretical quantity; one could thus imagine many different EPV estimators based on different models and/or information in the data. However, we believe that in order for EPV to achieve its full potential as a basis for high-resolution player and strategy evaluation, an EPV estimator should meet several criteria.
First, we require that the EPV estimator be stochastically consistent. Recognizing that EPV is simply a conditional expectation, it is tempting to estimate EPV using a regression or classification approach that maps features from $\mathcal{F}^{(Z)}_t$ to an outcome space, $[0, 3]$ or $\{0, 2, 3\}$. Setting aside the fact that our data associate each possession outcome $X$ with process-valued inputs $Z$, and thus do not conform naturally to the input/output structure of such models, such an approach cannot guarantee the estimator will have the (Kolmogorov) stochastic consistency inherent to theoretical EPV, which is essential to its ``stock ticker'' interpretation. Using a stochastically consistent EPV estimator guarantees that changes in the resulting EPV curve derive from players' on-court actions, rather than from artifacts or inefficiencies of the data analysis.
A stochastic process model for the evolution of a basketball possession guarantees such consistency.
The second criterion we require is that the estimator be sensitive to the fine-grained details of the data without incurring undue variance or computational complexity. Applying a Markov chain-based estimation approach would require discretizing the data by mapping the observed spatial configuration $Z_t$ into a simplified summary $C_t$, violating this criterion by trading potentially useful information in the player tracking data for computational tractability.
To develop methodology that meets both criteria, we note that the information-computation tradeoff in current process modeling strategies results from choosing a single level of resolution at which to model the possession and compute all expectations. In contrast, our method for estimating EPV combines models for the possession at two distinct levels of resolution: a fully continuous model of player movement and actions, and a Markov chain model for a highly coarsened view of the possession. This multiresolution approach leverages the computational simplicity of a discrete Markov chain model while conditioning on exact spatial locations and high-resolution data features.
\subsection{A Coarsened Process}\label{subsec:multiCoarse}
The Markov chain portion of our method requires a coarsened view of the data. For all time $0 < t \leq T$ during a possession, let $C(\cdot)$ be a coarsening that maps $\mathcal{Z}$ to a finite set $\mathcal{C}$, and call $C_t = C(Z_t)$ the ``state'' of the possession. To make the Markovian assumption plausible, we populate the coarsened state space $\mathcal{C}$ with summaries of the full resolution data so that transitions between these states represent meaningful events in a basketball possession---see Figure~\ref{fig:states} for an illustration.
\begin{figure}[!ht]
\input{EPV_coarsened_tikz.tex}
\caption{Schematic of the coarsened possession process $C$, with states (rectangles) and possible state transitions (arrows) shown. The unshaded states in the first row compose $\mathcal{C}_{\text{poss}}$. Here, states corresponding to distinct ballhandlers are grouped together (Player 1 through 5), and the discretized court in each group represents the player's coarsened position and defended state. The gray shaded rectangles are transition states, $\mathcal{C}_{\text{trans}}$, while the rectangles in the third row represent the end states, $\mathcal{C}_{\text{end}}$. Blue arrows represent possible macrotransition entrances (and red arrows, macrotransition exits) when Player 1 has the ball; these terms are introduced in Section \ref{sec:Macro}.}
\label{fig:states}
\end{figure}
First, there are three ``bookkeeping'' states, denoted $\mathcal{C}_{\text{end}}$, that categorize the end of the possession, so that $C_T \in \mathcal{C}_{\text{end}}$ and $C_t \not \in \mathcal{C}_{\text{end}}$ for all $t < T$ (shown in the bottom row of Figure~\ref{fig:states}). These are $\mathcal{C}_{\text{end}} = $\{made 2 pt, made 3 pt, end of possession\}.
These three states have associated point values of 2, 3, and 0, respectively (the generic possession end state can be reached by turnovers and defensive rebounds, which yield no points). This makes the possession point value $X$ a function of the final coarsened state $C_T$.
Next, whenever a player possesses the ball at time $t$, we assume $C_t = (\text{ballcarrier ID at }t) \times (\text{court region at }t) \times (\text{defended at }t)$, having defined seven disjoint regions of the court and classifying a player as defended at time $t$ by whether there is a defender within 5 feet of him. The possible values of $C_t$, if a player possesses the ball at time $t$, thus live in $\mathcal{C}_{\text{poss}} = \{\text{player ID}\} \times \{\text{region ID}\} \times \{\mathbf{1}[\text{defended}]\}$. These states are represented by the unshaded portion of the top row of Figure~\ref{fig:states}, where the differently colored regions of the court diagrams reveal the court space discretization.
Finally, we define a set of states to indicate that an annotated basketball action from the full resolution data $Z$ is currently in progress. These ``transition'' states encapsulate constrained motifs in a possession, for example, when the ball is in the air traveling between players in a pass attempt. Explicitly, denote $\mathcal{C}_{\text{trans}} = \{$shot attempt from $c \in \mathcal{C}_{\text{poss}}$, pass to $c' \in \mathcal{C}_{\text{poss}}$ from $c \in \mathcal{C}_{\text{poss}}$, turnover in progress, rebound in progress$\}$ (listed in the gray shaded portions of Figure~\ref{fig:states}). These transition states carry information about the possession path, such as the most recent ballcarrier and the target of the pass, while the ball is in the air during shot attempts and passes\footnote{The reason we index transition states by the origin of the pass/shot attempt (and destination of the pass) is to preserve this information under a Markov assumption, where generic ``pass'' or ``shot'' states would inappropriately allow future states to be independent of the players involved in the shot or pass.}. Note that, by design, a possession must pass through a state in $\mathcal{C}_{\text{trans}}$ in order to reach a state in $\mathcal{C}_{\text{end}}$. For simplicity, and due to limitations of the data, this construction of $\mathcal{C} = \mathcal{C}_{\text{poss}} \cup \mathcal{C}_{\text{trans}} \cup \mathcal{C}_{\text{end}}$ excludes several notable basketball events (such as fouls, violations, and other stoppages in play) and aggregates others (the data, for example, do not discriminate among steals, intercepted passes, and lost balls out of bounds).
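The coarsening $C(\cdot)$ for possession states can be written as a small deterministic function of the tracking data. The Python sketch below assumes a hypothetical data layout (coordinate tuples for the ballcarrier and defenders) and a placeholder court-region helper; only the player/region/defended structure and the 5-foot defended cutoff follow the text.
\begin{verbatim}
# A sketch of the coarsening C(Z_t) for states in C_poss.
import numpy as np

DEFENDED_RADIUS = 5.0   # feet, per the definition of "defended"

def court_region(xy):
    # Placeholder: map (x, y) to one of the seven court region labels.
    return "perimeter"

def coarsen(ballcarrier_id, ball_xy, defender_xys):
    dists = [np.hypot(ball_xy[0] - d[0], ball_xy[1] - d[1])
             for d in defender_xys]
    defended = min(dists) <= DEFENDED_RADIUS
    return (ballcarrier_id, court_region(ball_xy), defended)

print(coarsen("player_7", (23.0, 8.0), [(25.0, 9.0), (40.0, 20.0)]))
\end{verbatim}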
\subsection{Combining Resolutions}\label{subsec:multiTheory}
We make several modeling assumptions about the processes $Z$ and $C$, which allow them to be combined into a coherent EPV estimator.
\begin{enumerate}[label=(A\arabic*)]
\item $C$ is marginally semi-Markov.\label{A1}
\end{enumerate}
The semi-Markov assumption \ref{A1} guarantees that the embedded sequence of disjoint possession states $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$ is a Markov chain, which ensures that it is straightforward to compute $\E[X | C_t]$ using the associated transition probability matrix \cite{kemeny1976finite}.
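To illustrate how the embedded chain makes this computation straightforward, the sketch below solves the standard absorbing-chain linear system $(\mathbf{I} - \mathbf{Q})\mathbf{v} = \mathbf{R}\mathbf{x}$ for a toy transition matrix. The matrix entries are invented; only the point values $(2, 3, 0)$ attached to $\mathcal{C}_{\text{end}}$ follow the text.
\begin{verbatim}
# E[X | C = c] for transient states of an absorbing Markov chain:
# partition P into the transient block Q and absorbing block R, then
# solve (I - Q) v = R x.
import numpy as np

P = np.array([
    # two transient states, then made-2, made-3, end-of-possession
    [0.50, 0.20, 0.15, 0.05, 0.10],
    [0.30, 0.40, 0.05, 0.15, 0.10],
])
Q, R = P[:, :2], P[:, 2:]
x = np.array([2.0, 3.0, 0.0])            # point values of absorbing states
v = np.linalg.solve(np.eye(2) - Q, R @ x)
print(v)                                 # E[X | C = c], transient states
\end{verbatim}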
Next, we specify the relationship between coarsened and full-resolution conditioning. This first requires defining two additional time points which mark changes in the future evolution of the possession:
\begin{align}
\tau_t & = \begin{cases}
\text{min} \{ s : s > t, C_s \in \mathcal{C}_{\text{trans}}\} & \text{if } C_t \in \mathcal{C}_{\text{poss}} \\
t & \text{if } C_t \not \in \mathcal{C}_{\text{poss}}
\end{cases} \label{taudef} \\
\delta_t &= \text{min}\{s : s \geq \tau_t, C_s \not \in \mathcal{C}_{\text{trans}} \}
\label{deltadef}.
\end{align}
Thus, assuming a player possesses the ball at time $t$, $\tau_t$ is the first time after $t$ at which he attempts a shot/pass or turns the ball over (entering a state in $\mathcal{C}_{\text{trans}}$), and $\delta_t$ is the endpoint of this shot/pass/turnover (leaving a state in $\mathcal{C}_{\text{trans}}$). We assume that passing through these transition states, $\mathcal{C}_{\text{trans}}$, \textit{decouples} the future of the possession after time $\delta_t$ from its history up to time $t$:
\begin{enumerate}[label=(A\arabic*),resume]
\item For all $s > \delta_t$ and $c \in \mathcal{C}$, $\mathbb{P}(C_s =c | C_{\delta_t}, \mathcal{F}^{(Z)}_t) = \mathbb{P}(C_s =c | C_{\delta_t})$. \label{A2}
\end{enumerate}
Intuitively, assumption \ref{A2} states that for predicting coarsened states beyond some point in the future $\delta_t$, all information in the possession history up to time $t$ is summarized by the distribution of $C_{\delta_t}$. The dynamics of basketball make this assumption reasonable; when a player passes the ball or attempts a shot, this represents a structural transition in the basketball possession to which all players react. Their actions prior to this transition are not likely to influence their actions after this transition. Given $C_{\delta_t}$---which, for a pass at $\tau_t$ includes the pass recipient, his court region, and defensive pressure, and for a shot attempt at $\tau_t$ includes the shot outcome---data prior to the pass/shot attempt are not informative of the possession's future evolution.
Together, these assumptions yield a simplified expression for \eqref{epvdef}, which combines contributions from full-resolution and coarsened views of the process.
\begin{theorem}\label{epvtheorem}
Under assumptions \ref{A1}--\ref{A2}, the full-resolution EPV $\nu_t$ can be rewritten:
\begin{equation}\label{epveqn}
\nu_t = \sum_{c \in \mathcal{C}} \E[X | C_{\delta_t} = c]\mathbb{P}(C_{\delta_t} = c | \mathcal{F}^{(Z)}_t).
\end{equation}
\end{theorem}
\begin{remark}
Although we have specified this result in terms of the specific coarsening defined in Section~\ref{subsec:multiCoarse}, we could substitute any coarsening for which \ref{A1}--\ref{A2} are well-defined and reasonably hold. We briefly discuss potential alternative coarsenings in Section \ref{sec:Discussion}.
\end{remark}
The proof of Theorem~\ref{epvtheorem} follows immediately from \ref{A1}--\ref{A2} and is therefore omitted. Heuristically, \eqref{epveqn} expresses $\nu_t$ as the expectation given by a homogeneous Markov chain on $\mathcal{C}$ with a random starting point $C_{\delta_t}$, where only the starting point depends on the full-resolution information $\mathcal{F}^{(Z)}_t$. This result illustrates the multiresolution conditioning scheme that makes our EPV approach computationally feasible: the term $\E[X | C_{\delta_t} = c]$ is easy to calculate using properties of Markov chains, and $\mathbb{P}(C_{\delta_t} | \mathcal{F}^{(Z)}_t)$ only requires forecasting the full-resolution data for a short period of time relative to \eqref{epvdef}, as $\delta_t \leq T$.
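A schematic Monte Carlo evaluation of \eqref{epveqn} appears below: sample exit states $C_{\delta_t}$ and average the corresponding Markov chain values $\E[X | C_{\delta_t} = c]$. Both the sampler and the value lookup are placeholders for the fitted model components.
\begin{verbatim}
# nu_t approximated by (1/N) sum over sampled exit states c of
# E[X | C_{delta_t} = c].
import numpy as np

rng = np.random.default_rng(1)
states = ["pass_to_corner", "made_2", "missed_3", "turnover"]
value = {"pass_to_corner": 1.05, "made_2": 2.0,
         "missed_3": 0.30, "turnover": 0.0}   # stand-in E[X | C = c]

def sample_exit_state():
    # Stand-in for simulating from P(C_{delta_t} = c | F_t^{(Z)}).
    return rng.choice(states, p=[0.55, 0.20, 0.15, 0.10])

draws = [sample_exit_state() for _ in range(10000)]
nu_t = np.mean([value[c] for c in draws])
print(nu_t)
\end{verbatim}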
\iffalse
When the coarsened process $C_t$ transitions from a state in $\mathcal{C}_{\text{poss}}$ to one in $\mathcal{C}_{\text{trans}}$, we call this transition between coarsened states a \textit{macrotransition}.
\begin{definition}
If $C_t \in \mathcal{C}_{\text{poss}}$ and $C_{t + \epsilon} \in \mathcal{C}_{\text{trans}}$, then $C_t \rightarrow C_{t + \epsilon}$ is a \textit{macrotransition}.
\end{definition}
Macrotransitions, which include all ball movements (passes, shot attempts, turnovers), mark large-scale shifts that form the basis of offensive basketball play. The term carries a double meaning, as a macrotransition describes both a transition among states in our coarsened process, $C_t \rightarrow C_{t + \epsilon}$, and a transition of ballcarrier identity on the basketball court.
By construction, for a possession that is in a state in $\mathcal{C}_{\text{poss}}$ to proceed to a state in $\mathcal{C}_{\text{end}}$ or a state in $\mathcal{C}_{\text{poss}}$ corresponding to a different ballhandler, a macrotransition must occur as possession passes through a transition state in $\mathcal{C}_{\text{trans}}$ (see possible transition paths illustrated in Figure~\ref{fig:states}).
This structure reveals that at any time $t$ during a possession, we are guaranteed to observe the \textit{exit state} of a future (or current, if $C_t \in \mathcal{C}_{\text{trans}}$) macrotransition. Specifically, let $\delta = \min\{s: s > t, C_{s-\epsilon} \in \mathcal{C}_{\text{trans}} \text{ and } C_s \not \in \mathcal{C}_{\text{trans}} \}$ denote the time the possession reaches the state \textit{after} the next (or current, if $C_t \in \mathcal{C}_{\text{trans}}$) macrotransition after time $t$. Thus, if the possession is currently in a macrotransition, $\delta$ is the first time at which a new possession or end state is occupied (ending the macrotransition), while if a player currently possesses the ball, $\delta$ is the time at which the possession reaches the exit state of a future macrotransition. $\delta$ is a bounded stopping time, so we can condition on $C_{\delta}$ to rewrite EPV \eqref{epvdef} as
\begin{align}
\nu_t &= \sum_{c \in \mathcal{C}} \E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t] \mathbb{P}(C_\delta = c | \mathcal{F}^{(Z)}_t). \label{EPVdecomp}
\end{align}
It is helpful to expand the second term in \eqref{EPVdecomp}, $\mathbb{P}(C_\delta = c|\mathcal{F}^{(Z)}_t)$, by conditioning on the start of the macrotransition that corresponds to the exit state $C_\delta$. Denote $M(t)$ as the event that a macrotransition begins in $(t, t + \epsilon]$, and let $\tau = \text{min}\{s: s > t, M(s)\}$ be the time at which the macrotransition ending in $C_\delta$ begins. Thus, $\tau$ and $\delta$ bookend the times during which the possession is in the next (or current, but ongoing) macrotransition, with $C_{\tau}$ being the state in $\mathcal{C}$ immediately \textit{prior} to the start of this macrotransition and $C_{\delta}$ the state immediately succeeding it. Like $\delta$, at any time $t < T$, $\tau$ is a bounded stopping time; note, however, that if a macrotransition is in progress at time $t$, then its start time has already been observed, and $\tau$ has a degenerate distribution. Defining $\tau$ allows us to write:
\begin{align}
\mathbb{P}(C_\delta = c|\mathcal{F}^{(Z)}_t) = \int_{t}^{\infty} \int_{\mathcal{Z}}& \mathbb{P}(C_\delta = c | M(\tau), Z_\tau = z, \tau = s, \mathcal{F}^{(Z)}_t) \nonumber \\
& \times \mathbb{P}(M(\tau), Z_\tau = z, \tau = s | \mathcal{F}^{(Z)}_t) dz ds. \label{EPVmulti2}
\end{align}
We make one additional expansion to the terms we have introduced for calculating EPV. The second factor in \eqref{EPVmulti2}, $\mathbb{P}(M(\tau), Z_\tau=z,\tau=s|\mathcal{F}^{(Z)}_t)$, models the location and time of the next macrotransition---implicitly averaging over the intermediate path of the possession in the process. This is the critical piece of our multiresolution structure that connects the full-resolution process $Z$ to the coarsened process $C$, and the component of our model that fully utilizes multiresolution conditioning. We expand this term using our macro- and microtransition models.
\begin{definition}
The \textit{macrotransition model} is $\mathbb{P}(M(t)|\mathcal{F}^{(Z)}_t)$.
\end{definition}
\begin{definition}
The \textit{microtransition model} is $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$, where $M(t)^c$ is the complement of $M(t)$. \textit{Microtransitions} are instantaneous changes in the full resolution data $Z_t \rightarrow Z_{t + \epsilon}$ over time windows where a macrotransition is not observed; thus, only location components (and not event annotations) change from $Z_t$ to $Z_{t + \epsilon}$.
\end{definition}
Multiresolution transition models allow us to sample from $\mathbb{P}(\tau, Z_{\tau}|\mathcal{F}^{(Z)}_t)$, enabling Monte Carlo evaluation of \eqref{EPVmulti2}. The basic idea is that we use the macrotransition model to draw from $\mathbb{P}(M(t)|\mathcal{F}^{(Z)}_t)$; if $M(t)^c$ obtains, so that no macrotransition occurs in $(t, t+\epsilon]$, we use the microtransition model to draw from $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$. Iterating this process, we alternate draws from the macro- and microtransition models until observing $(\tau, Z_{\tau})$---of course, this also yields $M(\tau)$ as a consequence of our definition of $\tau$. Parametric forms for these macro- and microtransition models are discussed explicitly in Sections \ref{sec:Macro} and \ref{sec:Micro}, respectively, while Section \ref{sec:Computation} provides additional details on the Monte Carlo integration scheme.
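A schematic version of this alternating sampler, with placeholder macro- and microtransition draws standing in for the fitted models:
\begin{verbatim}
# Alternate macro- and microtransition draws until a macrotransition
# begins, yielding a sample of (tau, Z_tau).
import numpy as np

rng = np.random.default_rng(2)
epsilon = 1.0 / 25.0

def macro_event(z):    # stand-in draw from P(M(t) | F_t^{(Z)})
    return rng.random() < 0.05

def micro_step(z):     # stand-in draw from P(Z_{t+eps} | M(t)^c, F_t^{(Z)})
    return z + rng.normal(scale=0.5, size=z.shape)

def sample_tau_and_z(z0, t0=0.0, max_steps=2000):
    z, t = z0, t0
    for _ in range(max_steps):
        if macro_event(z):
            return t, z   # macrotransition begins in (t, t + epsilon]
        z, t = micro_step(z), t + epsilon
    return t, z           # truncation guard for the sketch

print(sample_tau_and_z(np.zeros(2)))
\end{verbatim}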
Expanding EPV by conditioning on intermediate values in principle does not ease the problem of its evaluation. However several of the components we have introduced motivate reasonable conditional independence assumptions that simplify their evaluation. Only by writing EPV as an average over additional random variables defined in the probability space of our possession can we articulate such assumptions and leverage them to compute EPV.
\subsection{Conditional Independence Assumptions}
Our expansions of $\nu_t = \E[h(C_T) | \mathcal{F}^{(Z)}_t]$ introduced in the previous subsection \eqref{EPVdecomp}--\eqref{EPVmulti2} express EPV in terms of three probability models:
\begin{align}
\nu_t &= \sum_{c \in \mathcal{C}} \E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t] \left( \int_{t}^{\infty} \int_{\mathcal{Z}} \mathbb{P}(C_\delta = c | M(\tau), Z_\tau = z, \tau = s, \mathcal{F}^{(Z)}_t) \right. \nonumber \\
& \hspace{2cm} \left. \times \mathbb{P}(M(\tau), Z_\tau = z, \tau = s | \mathcal{F}^{(Z)}_t) dz ds \right). \label{EPVmulti}
\end{align}
The multiresolution transition models sample from $\mathbb{P}(M(\tau), Z_{\tau}, \tau|\mathcal{F}^{(Z)}_t)$, eliminating the need to evaluate the third term in \eqref{EPVmulti} explicitly when computing $\nu_t$ via Monte Carlo. The second term in \eqref{EPVmulti} is actually quite easy to work with since $C_{\delta}$ is categorical, and given $Z_{\tau}$ the space of possible values it can take is relatively small. This is due to the manner in which macrotransitions constrain the spatiotemporal evolution of the possession. Given $Z_{\tau}$, we can obtain the location and separation from the defense of all four possible pass recipients given a pass in $(\tau, \tau + \epsilon]$, so only a subset of states in $\mathcal{C}_{\text{poss}}$ are possible for $C_\delta$. Similarly, if a shot attempt occurs in this time window, $Z_{\tau}$ indicates whether a successful shot would yield 2 or 3 points, further subsetting the possible values of $C_\delta$. Modeling $C_\delta$ thus reduces to predicting the type of macrotransition corresponding to $M(\tau)$---a pass, shot attempt, or turnover. We discuss this in Section \ref{sec:Macro} in the context of our macrotransition model.
The first term in \eqref{EPVmulti}, $\E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t]$, provides the expected point value of the possession given the (coarsened) result of the next macrotransition. Prima facie, this term seems as difficult to evaluate as EPV itself, since it has the same essential structure, requiring integration over the future trajectory of the possession after time $\delta$. However, we make a key assumption that frees the subsequent evolution of the possession, after time $\delta$, from dependence on the full-resolution history $\mathcal{F}^{(Z)}_t$:
\begin{equation}\label{decoupling}
\E[h(C_T)|C_\delta, \mathcal{F}^{(Z)}_t] = \E[h(C_T)|C_\delta]
\end{equation}
This assumption is intuitive for two reasons. First, by constraining the possession to follow a restricted spatiotemporal path, it is reasonable to assume that the macrotransition exit state itself contains sufficient information to characterize the future evolution of the system. Secondly, because macrotransitions play out over much longer timescales than the resolution of the data (i.e., several seconds, as opposed to 1/25th of a second), it is reasonable to assume that fine-scale spatial detail before the start of the macrotransition has been ``mixed out'' by the time the macrotransition ends.
An additional, reasonable conditional independence assumption is that the coarsened state sequence $C_t, t > 0$ is marginally a semi-Markov process; that is, denoting $\mathcal{F}^{(C)}_t = \sigma(\{C_s^{-1}, 0 \leq s \leq t\})$ as the history of the coarsened process, for all $t' > t$ and $c \in \mathcal{C}$, we assume $\mathbb{P}(C_{t'} = c | \mathcal{F}^{(C)}_t) = \mathbb{P}(C_{t'} = c | C_t)$. A semi-Markov process generalizes a continuous time Markov Chain in that sojourn times need not be exponentially distributed. We associate with this semi-Markov process an embedded discrete, homogeneous Markov Chain: denote $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$ as the sequence of consecutive states $c \in \mathcal{C}$ visited by $C_t$ during the possession $0 < t \leq T$. Thus, $C^{(K)} = C_T$, and $K$ records the length of the possession in terms of the number of transitions between states in $\mathcal{C}$, which like $T$ is random.
Combining these assumptions, the first term in \eqref{EPVmulti}, $\E[h(C_T)|C_\delta, \mathcal{F}^{(Z)}_t]$, can be computed easily from the transition probability matrix of the homogeneous Markov chain embedded in $C_t$. As $C^{(K)}$ is an absorbing state, ending the possession, we can rewrite \eqref{decoupling} as $\E[h(C^{(K)})|C_\delta]$. This is easily obtained by solving a linear system of equations deriving from the transition probability matrix of $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$. Estimating this transition probability matrix is also discussed in Section \ref{sec:Macro}, where we show that it actually derives from the macrotransition model.
Compared to using discrete, homogeneous Markov chains alone to calculate EPV, the multiresolution approach we take ultimately leverages many of the same computational advantages while remaining attuned to the full-resolution data, responding smoothly as the possession evolves over space and time.
\subsection{Consistency}
To be useful for decision-making, an EPV estimator requires several properties:
\begin{itemize}
\item \textbf{Coherence}: Our estimand of interest is not just the point-by-point EPV at any given time during a possession, but a joint estimate of the whole EPV curve as a possession unfolds. We expect that the martingale relationship described in Section~\ref{sec:EPV} should hold for EPV estimates as well, so that marginalizing over the conditional distributions of future EPV estimates yields the EPV for the current situation. This prevents contradictions (e.g. Simpson's Paradoxes) between EPV estimates provided for a single possession, which may arise from marginal estimates that do not enforce this coherence.
\item \textbf{Interpretability}: The model used to compute the expectation should have interpretable components. This aids in model checking, communication with interested end-users, and is useful for computing meaningful summaries.
\item \textbf{Estimability}: The model should have few enough degrees of freedom to estimate its parameters from real data.
\item \textbf{Tractability}: Given estimates of model parameters, the expectation in \eqref{epvdef} should be computationally tractable.
\item \textbf{Sensitivity}: Given estimates of model parameters, EPV should respond to full-resolution changes in spatial information.
\end{itemize}
The above properties can be difficult to obtain together. Note first that a coherent EPV estimator is not trivial to obtain. For example, regression or classification approaches taking player positions and momenta as features and predicting 0, 2, or 3 points as outcomes would provide no guarantees of coherence. A coherent EPV estimator requires the integral to be computed with respect to a Kolmogorov consistent stochastic process on $Z_t$---in other words, a distribution over the full path of states that a possession can take.
Given that the possession model is coherent, tractability trades off with interpretability and sensitivity. A coherent stochastic process is easiest to construct as a set of transition kernels between possession states in adjacent timesteps. While these transition kernels may be relatively simple on small timescales, a coherent estimator requires that the transition kernel for longer timescales be defined by the convolution of the transition kernels on its subintervals. This can complicate computation substantially because most realistic transition kernels do not have closed-form convolution distributions, and thus require manual computation of normalizing factors. This problem is compounded by the fact that the possession path is an exotic mixture of continuous (e.g., position) and discrete (e.g., ballhandler identity) components. In general, this computation requires summing across all nodes in a tree whose branches represent the distinct paths that the possession could take. The number of nodes in this tree scales exponentially in the number of possession states and is uncountably infinite if some aspect of the possession state is continuous. Thus, defining a possession model to compute EPV requires restrictive assumptions on the stochastic process to obtain tractability.
Previous work on win probability and point expectations in other sports has obtained computational tractability by modeling a lower-resolution summary of the possession state as a homogeneous Markov chain. This reduces the computational complexity of the integral by (a) reducing the breadth of the possession state tree by coarsening the state space, and (b) reducing the effective depth of the possession state tree using recursive symmetries in the transition kernels that make the convolution kernel simple to compute. In particular, the Markov chain formulation reduces the integration complexity from exponential in the size of the state space to cubic. In baseball and football, these assumptions have been applied successfully because the games have a natural discrete structure that makes an interpretable coarsening simple to define. In addition, the coarsened states are defined at natural breakpoints in play (at-bats in baseball, or downs in football), so while conditional expectations taken with respect to this coarsened state space may not obtain the maximum level of resolution, they are still considered sensitive enough to provide useful summaries.
Unfortunately, the homogeneous Markov chain approach is not as effective for basketball. Within a possession, there are no discretizations or breakpoints that define a natural coarsening, so there is no appropriate level of sensitivity coarser than full spatial resolution. This presents an untenable tradeoff. On one hand, any homogeneous Markov chain defined at a coarser resolution averages together irrelevant spatial situations in defining EPV at a given moment---for example, by averaging in a path that begins with a pass to a teammate far from where that teammate is currently standing. On the other hand, any homogeneous Markov chain defined at the appropriate level of resolution would not be feasible to estimate or tractable to integrate.
Modeling a coarsened possession process under less stringent assumptions, however, can be well motivated. Note that EPV is an expectation of a function $h(\cdot)$ that depends on only a few aspects of the possession state at the end time $T$---for example, if the ball goes through the hoop at this time, the point value $X$ does not depend on where any player but the shooter is standing. This suggests that we can obtain acceptable EPV estimates from a model that only describes the evolution of a coarsened possession process, so long as that coarsened process depends on a filtration that is richer than that generated by the homogeneous Markov process above.
To satisfy these competing priorities of coherence, tractability, and interpretability in one model, we simultaneously model the possession at two levels of resolution. We use a high-resolution model for $Z$ to capture fine details of the possession, and a low-resolution model for a coarsened process $C$ that we obtain by a deterministic simplification of the full-resolution process $Z$. The high-resolution model captures motifs that affect the path of the possession on short time horizons, while the low-resolution model captures less detailed aspects of the possession that are sufficient to learn the value of the possession on longer time horizons.
To make these simultaneous models coherent, we require an assumption that the full-resolution possession state becomes decoupled from the coarsened state at more distant timepoints by intervening ``large'' shifts in the possession state that we call \textit{macrotransitions}. Both the coarsening and the macrotransition set may be chosen to reflect the modeler's intuition about the structure of a possession. If the chosen coarsening and shifts respect the decoupling assumption, the methodology developed here computes EPV exactly. If the assumption only holds approximately for a given coarsening, the methodology here approximates the integral in \eqref{epvdef}.
For concreteness, we begin by describing a coarsening and macrotransition set that we consider to be the simplest non-trivial specification for modeling a basketball possession. We then describe the decoupling assumption that ensures consistency between the high-resolution model for $Z$ and the low-resolution model for $C$ under general specifications.
Thus, we can divide the computation of EPV into two parts: first, computing the distribution over the first macrotransition to be encountered after time $t$ using the macrotransition and microtransition models, and second, computing the conditional EPV given the coarsened endpoint of that macrotransition using a transition matrix that can be derived from the macrotransition model parameters. We now discuss in detail the specification and estimation of these models.
When a possession moves out of a macrotransition state, we call this next state the macrotransition's \textit{exit state}. These are illustrated by the endpoints of the red arrows in Figure~\ref{fig:states}. As shown in the figure, for any macrotransition $c \in \mathcal{C}_{\text{trans}}$, the set of possible exit states is restricted. For example, a possession in the shot attempt state must exit to either the ``made 2pt'' state in $\mathcal{C}_{\text{end}}$ or the ``rebound in progress'' state in $\mathcal{C}_{\text{trans}}$. In the scheme we employ here, the exit state of a shot or rebound macrotransition is treated as random (denoted by multiple red ``exit'' arrows in Figure~\ref{fig:states}), while the exit state of a pass macrotransition is set deterministically to the position of the receiver of the pass---note that by linking pairs of states in $\mathcal{C}_{\text{poss}}$, we assume the intended recipient of the pass is known in real time. In future work, pass macrotransitions could also be modeled with random exit states, representing the possibility of a pass being intercepted, but the annotations in the current data do not enable this modeling approach at present.
Macrotransitions induce a natural decomposition of a possession that conforms to basketball intuition. Coaches generally draw up offensive schemes that focus on sequences of macrotransitions, with all action in between designed to provide a more favorable context in which to make the next macrotransition decision. Here we introduce notation for macrotransitions and specify a similar decomposition of \eqref{epvdef} that splits the expectation across the result of the next macrotranstion.
\textbf{Macrotransition type.} At any given moment when a player possesses the ball, there are six possible categories of macrotransition, corresponding to 4 pass options, a shot attempt, or a turnover, which we index by $j \in \{1, \ldots, 6\}$ (See the six blue arrows in Figure~\ref{fig:states}.). Without loss of generality, assume $j \leq 4$ correspond to pass events, $j=5$ is a shot attempt and $j=6$ a turnover. For a fixed $\epsilon$ (chosen to be 1/25, the temporal resolution of our data), let $M_j(t)$ be the event that a macrotransition of type $j$ begins in the time window $(t, t + \epsilon]$. Also, denote $M(t) = \bigcup_{j=1}^6 M_j(t)$ and $M(t)^c$ its complement.
\textbf{Macrotransition times.} We write $\tau$ for the time at which the next macrotransition begins, $\tau = \inf\{s \geq t : M(s) \text{ occurs}\}$, and $\delta \geq \tau$ for the time at which that macrotransition resolves, so that $C_\delta$ is the exit state it produces.
Using this notation, a basic decomposition of \eqref{epvdef} conditions on the result of the next macrotransition to give
\begin{align}
\nu_t &= \sum_{c' \in \mathcal{C}} \E[h(C_T)|C_\delta = c', \mathcal{F}^{(Z)}_t] \mathbb{P}(C_\delta = c' | \mathcal{F}^{(Z)}_t), \label{EPVdecomp}
\end{align}
where the exit time $\delta$ is implicitly integrated out. For modeling purposes, it is useful to explicitly express the distribution of the next exit state $C_\delta$ in terms of the macrotransition event that preceded it, $M(\tau)$. For full generality, we average over the time ($\tau$), type ($M_j(\tau)$), and spatial context ($Z_s$) of the macrotransition yielding exit state $C_\delta$. This yields the expression
\begin{align}
\nu_t
&= \sum_{c' \in \mathcal{C}} \sum_{j=1}^6 \int_\mathcal{T} \int_{\mathcal{Z}} \E[h(C_T)|C_\delta = c', M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t] \nonumber \\
& \hspace{2cm} \times \mathbb{P}(C_\delta = c' | M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t)
\mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t) dzds. \label{EPVmulti}
\end{align}
Each factor in \eqref{EPVmulti} corresponds to an aspect of a possession that needs to be modeled under this decomposition. The third factor $\mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t)$ models the type, location, and time of the next macrotransition---implicitly averaging over the intermediate path of the possession in the process. This is the critical piece of our multiresolution structure that connects the full-resolution process $Z$ to the coarsened process $C$. The basic idea is that we use the \textit{macrotransition model} to draw from $\mathbb{P}(M_j(t)|\mathcal{F}^{(Z)}_t)$ (blue arrows in Figure~\ref{fig:states}); if $M(t)^c$ occurs, so that no macrotransition begins in $(t, t + \epsilon]$, we use the \textit{microtransition model} to draw from $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$ and simulate the player locations in the full-resolution data at time $t + \epsilon$ (black loop arrow in Figure~\ref{fig:states}). As discussed in Section~\ref{sec:Computation}, repeatedly drawing alternately from the macro- and microtransition models allows us to sample from $\mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t)$.
The second term in \eqref{EPVmulti} is the conditional distribution of the exit state $C_{\delta}$ given the macrotransition $M_j(s)$, which is in some cases degenerate (red arrows in Figure~\ref{fig:states}). Because there are different pass macrotransition events for each teammate, if $j$ corresponds to a pass, then $C_{\delta}$ is (with probability one) the possession state occupied by the player corresponding to the $j$th passing option at the time of the pass. Likewise, if $j$ is the turnover macrotransition, then $C_{\delta}$ is in the turnover state with probability one. Only if $j$ is a shot attempt is this distribution nontrivial; in the case of a shot attempt, $C_{\delta}$ may be a made 2 or 3 point basket, or a missed 2 or 3 point basket (the point value of the shot would be contained in the full-resolution data at the time of the shot attempt, $Z_s$). We thus require a model for shot success given the full resolution data prior to the shot. As the parametric form of this model is similar to that of our macrotransition model, it is discussed in the context of the macrotransition model in Section~\ref{sec:Macro}.
Lastly, the remaining factor $\E[h(C_T)|C_\delta = c', M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t]$ is potentially the most complex because, without additional assumptions, it has the same essential structure as $\nu_t$ in \eqref{epvdef} itself. However, the structure of macrotransitions motivates assumptions that simplify this factor to provide computational tractability while conforming to basketball intuition.
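Schematically, the alternating draw described above can be organised as a single Monte Carlo loop. The sketch below is purely illustrative and is not taken from our implementation: the objects \texttt{macro\_model}, \texttt{micro\_model} and \texttt{coarse\_chain}, and their method names, are hypothetical stand-ins for the fitted macrotransition model, microtransition model and coarsened Markov chain.
\begin{verbatim}
EPS = 1 / 25  # temporal resolution of the optical tracking data

def sample_end_value(state, macro_model, micro_model, coarse_chain,
                     eps=EPS, max_steps=10_000):
    """One Monte Carlo draw of h(C_T): alternate between the macro- and
    microtransition models until a macrotransition fires, then hand the
    exit state to the coarsened chain for the rest of the possession."""
    for _ in range(max_steps):
        event = macro_model.sample(state)  # draw from P(M_j(t) | F_t); None on M(t)^c
        if event is None:
            state = micro_model.step(state, eps)  # draw Z_{t+eps} | M(t)^c, F_t
        else:
            c_exit = event.exit_state(state)   # exit state C_delta (red arrows)
            return coarse_chain.value(c_exit)  # E[h(C_T) | C_delta = c']
    raise RuntimeError("possession did not terminate")

def estimate_epv(state, macro_model, micro_model, coarse_chain, n_draws=1000):
    """Average independent draws to approximate nu_t."""
    draws = (sample_end_value(state, macro_model, micro_model, coarse_chain)
             for _ in range(n_draws))
    return sum(draws) / n_draws
\end{verbatim}
Averaging many such draws approximates the integral in \eqref{EPVmulti} without ever enumerating full-resolution paths.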
\section{Introduction}
For a hypersurface $\Omega_0$ given by an embedding $\tilde{\bm{X}}_0:M^n\rightarrow\mathbb{R}^{n+1}$, where $M^n$ is an $n$-dimensional manifold, its weighted-volume preserving curvature flow is a family of hypersurfaces $\left\{\Omega_t:t\in[0,T)\right\}$, $T>0$, such that $\Omega_t=\bm{X}\left(M^n,t\right)$ and $\bm{X}$ satisfies
\begin{equation}\label{WVPCF}
\frac{\partial\bm{X}}{\partial t}=\left(\frac{1}{\int_{M^n}\Xi\left(\bm{\kappa}\right)\,d\mu_t}\int_{M^n}F\left(\bm{\kappa}\right)\Xi\left(\bm{\kappa}\right)\,d\mu_t -F\left(\bm{\kappa}\right)\right)\bm{\nu},\ \ \bm{X}\left(\cdot,0\right)=\tilde{\bm{X}}_0,
\end{equation}
where $F\left(\bm{\kappa}\right)$ and $\Xi\left(\bm{\kappa}\right)$ are smooth, symmetric functions of the principal curvatures, $\bm{\kappa}=(\kappa_1,\ldots,\kappa_n)$, $\bm{\nu}$ is a choice of unit normal, and $\,d\mu_t$ is the volume form of the hypersurface $\Omega_t$. Short time existence for initially near cylindrical hypersurfaces, under assumptions \descref{(A1)}-\descref{(A3)} below, was proved in \cite{HartleyPhD}.
This flow is a generalisation of the volume preserving mean curvature flow (VPMCF), which has $F\left(\bm{\kappa}\right)=\sum_{a=1}^n\kappa_a$ and $\Xi(\bm{\kappa})=1$. In the case of compact, convex hypersurfaces without boundary, the VPMCF has been shown to exist for all time and the hypersurfaces $\Omega_t$ converge to a sphere of the same enclosed volume as $\Omega_0$ as $t\rightarrow\infty$ \cite{Gage86,Huisken87}. The stability of spheres as stationary solutions to the VPMCF has previously been studied by Escher and Simonett. In \cite{Escher98A} Escher and Simonett consider graphs over spheres and prove that if the height function is small in the little-H\"older space $h^{1,\beta}$, then the VPMCF exists for all time and the hypersurfaces converge to a sphere. This result also proves the existence of non-convex hypersurfaces that converge to spheres. Generalisations of the VPMCF to speeds that are a power of an elementary symmetric function have been considered in \cite{Cabezas10}, where convergence to a sphere was obtained for convex hypersurfaces under an additional pinching assumption. When the weight function is an elementary symmetric function the flow is the mixed-volume preserving curvature flow, which has been studied by McCoy in \cite{McCoy04} for speed functions given by the mean curvature and in \cite{McCoy05} for a more general class of speed functions. In both cases it was proved that if the initial hypersurface is strictly convex it will converge to a sphere under the flow.
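For concreteness, in the VPMCF case mentioned at the start of this paragraph the flow (\ref{WVPCF}) becomes
\begin{equation*}
\frac{\partial\bm{X}}{\partial t}=\left(\frac{\int_{M^n}H\,d\mu_t}{\int_{M^n}\,d\mu_t}-H\right)\bm{\nu},\qquad H:=\sum_{a=1}^n\kappa_a,
\end{equation*}
so each point moves with speed given by the deviation of the mean curvature from its average over the hypersurface.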
In the present article we consider the setting introduced by Athanassenas in \cite{Athanassenas97}. That is, we consider hypersurfaces, $\Omega$, with non-empty boundary embedded in the domain
\begin{equation*}
W=\left\{\bm{x}\in\mathbb{R}^{n+1}:0<x_{n+1}<d\right\},\ d>0,
\end{equation*}
such that $\partial\Omega\subset\partial W$ and $\Omega$ meets $\partial W$ orthogonally. In the paper by Athanassenas it was shown that the volume, $Vol$, enclosed by $\Omega_0$ and $\partial W$ is preserved under the VPMCF and, assuming an axial symmetry, it was proved that if the initial hypersurface satisfies
\begin{equation}\label{MariaAssump}
\left|\Omega_0\right|:=\int_{M^n}\,d\mu_0\leq\frac{Vol}{d},
\end{equation}
then the VPMCF will exist for all time and the hypersurfaces will converge to a cylinder. In this case the assumption (\ref{MariaAssump}) was used to ensure that the hypersurfaces $\Omega_t$ do not touch the axis of rotation. In \cite{Hartley13} the author considered graphs over cylinders and proved that a cylinder of radius
\begin{equation*}
R>\frac{d\sqrt{n-1}}{\pi},
\end{equation*}
is stable under the VPMCF, in the same sense as in \cite{Escher98A}. In \cite{HartleyPhD} this result was extended to the flow (\ref{WVPCF}) subject to the condition:
\begin{equation}\label{RCond}
R>\frac{d}{\pi}\sqrt{\frac{(n-1)\frac{\partial F}{\partial\kappa_1}\left(\bm{\kappa}_{R}\right)}{\frac{\partial F}{\partial\kappa_n}\left(\bm{\kappa}_{R}\right)}},
\end{equation}
where $\bm{\kappa}_{R}=\left(\frac{1}{R},\ldots,\frac{1}{R},0\right)$.
\begin{theorem}[\cite{HartleyPhD}]\label{MainRCyl}
Let $\mathscr{C}_{R,d}^n$ represent a cylinder of radius $R$ and length $d$. Assuming \descref{(A1)}-\descref{(A3)} are satisfied for $\tilde{R}=R$, there exists a neighbourhood of zero, $O_c\subset h^{2,\alpha}_{\frac{\partial}{\partial z}}\left(\overline{\mathscr{C}}_{R,d}^n\right)$ with $0<\alpha<1$ (see Section \ref{SecRedEq}), such that if $\Omega_0$ is a graph over $\mathscr{C}_{R,d}^n$ with height function in $O_c$, then there exists $T>0$ such that the flow (\ref{WVPCF}) with orthogonal boundary condition exists for $t\in[0,T)$. Furthermore, if (\ref{RCond}) is satisfied, then $T=\infty$ and the hypersurfaces converge exponentially fast to a cylinder as $t\rightarrow\infty$, with respect to the $h^{2,\alpha}\left(\overline{\mathscr{C}}_{R,d}^n\right)$ topology.
\end{theorem}
It is important to note that in general both sides of (\ref{RCond}) depend on $R$; however, when the speed function is homogeneous in $\bm{\kappa}$ this is not the case and (\ref{RCond}) is true for $R$ large enough. For the axially symmetric VPMCF in two dimensions the stability result was also obtained by LeCrone in \cite{LeCronePre}. In that paper LeCrone also showed that the cylinder of radius $\frac{d}{\pi}$ is part of a continuous family of constant mean curvature (CMC) unduloids, which satisfy the orthogonal boundary condition. He then proved that the unduloids in this family close to the cylinder are unstable stationary solutions of the VPMCF. The aim of this paper is to generalise this final result to the weighted-volume preserving curvature flows in any dimension.
For the main analysis we make some assumptions on the form of $F$ and $\Xi$:
\begin{description}
\descitem{(A1)} $F$ and $\Xi$ are smooth, symmetric functions
\descitem{(A2)} $\frac{\partial F}{\partial \kappa_a}\left(\kapsurf{\tilde{R}}\right)>0$ for every $a=1,\ldots,n$ and for some $\tilde{R}\in\mathbb{R}^+$
\descitem{(A3)} $\Xi\left(\kapsurf{\tilde{R}}\right)>0$ for some $\tilde{R}\in\mathbb{R}^+$
\descitem{(A4)} $\Xi(\bm{\kappa})=\sum_{a=0}^nc_aE_a(\bm{\kappa})$
\descitem{(A5)} There exists $R_{crit}>0$ such that, in a neighbourhood of $R_{crit}$, (\ref{RCond}) is satisfied for $R>R_{crit}$ ($R<R_{crit}$) and strictly not satisfied for $R< R_{crit}$ ($R> R_{crit}$),
\end{description}
where
\begin{equation}\label{EleDef}
E_a\left(\bm{\kappa}\right)=\sum_{1\leq b_1<\ldots<b_a\leq n}\prod_{i=1}^a\kappa_{b_i},
\end{equation}
are the elementary symmetric functions. The first two assumptions ensure isotropy and local parabolicity respectively, while \descref{(A3)} ensures a valid flow. Assumption \descref{(A4)} ensures that a type of weighted-volume is preserved under the flow, see Appendix \ref{AppInv}, while the final assumption ensures a change in sign of the critical eigenvalue. Since \descref{(A5)} is not the most descriptive assumption, we note here that it is satisfied when $F$ is homogeneous, and we will often consider the revised assumption:
\begin{description}
\descitem{(A5)*} $F$ is homogeneous of degree $k$, i.e. $F(\alpha\bm{\kappa})=\alpha^kF(\bm{\kappa})$ for all $\alpha>0$.
\end{description}
We note that when this assumption is used, we have $R_{crit}=\frac{d}{\pi}\sqrt{\frac{(n-1)\frac{\partial F}{\partial\kappa_1}\left(\kapsurf{1}\right)}{\frac{\partial F}{\partial\kappa_n}\left(\kapsurf{1}\right)}}$ and if \descref{(A2)} is true for some $\tilde{R}>0$ then it is true for all $\tilde{R}>0$.
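Indeed, under \descref{(A5)*} each derivative $\frac{\partial F}{\partial\kappa_a}$ is homogeneous of degree $k-1$, so that
\begin{equation*}
\frac{\partial F}{\partial\kappa_a}\left(\bm{\kappa}_R\right)=R^{1-k}\frac{\partial F}{\partial\kappa_a}\left(\kapsurf{1}\right);
\end{equation*}
the right hand side of (\ref{RCond}) is then independent of $R$, so (\ref{RCond}) holds precisely for $R>R_{crit}$, which gives \descref{(A5)}.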
\begin{theorem}\label{MainThm}
For the flow (\ref{WVPCF}), with assumptions \descref{(A1)}-\descref{(A5)} satisfied for $\tilde{R}=R_{crit}$, the cylinder of radius $R_{crit}$ is part of a continuous family of stationary solutions that satisfy the orthogonal boundary condition. Furthermore, if (\ref{FCond}) holds then the stationary solutions close to the cylinder are stable under axially symmetric, weighted-volume preserving perturbations.
\end{theorem}
In (\ref{FCond}) we have used the functions $F_{a}(\eta)=\frac{\partial F}{\partial\kappa_a}\left(\kapsurf{\frac{n-1}{\eta}}\right)$, $a=\{1,n\}$, $F_{nn}(\eta)=\frac{\partial^2F}{\partial\kappa_n^2}\left(\kapsurf{\frac{n-1}{\eta}}\right)$ and $F_{nnn}(\eta)=\frac{\partial^3F}{\partial\kappa_n^3}\left(\kapsurf{\frac{n-1}{\eta}}\right)$ to simplify the notation. Also note that when \descref{(A5)*} is satisfied then these functions are homogeneous, so have the representations $F_a(\eta)=\eta^{k-1}F_a$, $F_{nn}(\eta)=\eta^{k-2}F_{nn}$ and $F_{nnn}(\eta)=\eta^{k-3}F_{nnn}$, where we define the constants $F_a:=F_a(1)$, $F_{nn}:=F_{nn}(1)$ and $F_{nnn}:=F_{nnn}(1)$.
\begin{remark}\label{ThmRem}
For $n=4$, an example of a non-homogeneous speed function that satisfies assumptions \descref{(A1)}, \descref{(A2)} (for all $R$) and \descref{(A5)} is
\begin{equation*}
F\left(\bm{\kappa}\right)=\kappa_1\kappa_2\kappa_3\kappa_4 +\frac{\pi^2}{12d^2}\left(\kappa_1^2+\kappa_2^2+\kappa_3^2+\kappa_4^2\right) +\frac{\pi^2}{18d^2}\left(\kappa_1^3+\kappa_2^3+\kappa_3^3+\kappa_4^3\right),
\end{equation*}
in which case $R_{crit}=1$. However if we change the speed function slightly to
\begin{equation*}
F\left(\bm{\kappa}\right)=\kappa_1\kappa_2\kappa_3\kappa_4 +\frac{\pi^2}{6d^2}\left(\kappa_1^2+\kappa_2^2+\kappa_3^2+\kappa_4^2\right) +\frac{\pi^2}{18d^2}\left(\kappa_1^3+\kappa_2^3+\kappa_3^3+\kappa_4^3\right),
\end{equation*}
then (\ref{RCond}) is never satisfied so assumption \descref{(A5)} is false (while both \descref{(A1)} and \descref{(A2)} are still satisfied for all $\tilde{R}$). This makes the classification of allowable speed functions difficult.
\end{remark}
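For the reader's convenience we record the computation behind these claims. At $\bm{\kappa}_R$ the first speed function has
\begin{equation*}
\frac{\partial F}{\partial\kappa_1}\left(\bm{\kappa}_R\right)=\frac{\pi^2}{6d^2}\left(\frac{1}{R}+\frac{1}{R^2}\right),\qquad \frac{\partial F}{\partial\kappa_4}\left(\bm{\kappa}_R\right)=\frac{1}{R^3},
\end{equation*}
so (\ref{RCond}) becomes $R^2>\frac{R^2+R}{2}$, i.e.\ $R>1$, giving $R_{crit}=1$. For the second speed function $\frac{\partial F}{\partial\kappa_1}\left(\bm{\kappa}_R\right)=\frac{\pi^2}{3d^2R}+\frac{\pi^2}{6d^2R^2}$, so (\ref{RCond}) becomes $R^2>R^2+\frac{R}{2}$, which fails for every $R>0$.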
\begin{corollary}\label{HomogCor}
For the flow (\ref{WVPCF}), with assumptions \descref{(A1)}-\descref{(A5)*} satisfied for $\tilde{R}=\frac{d}{m\pi}\sqrt{\frac{(n-1)F_1}{F_n}}$ with $m\in\mathbb{N}$, the cylinder of radius $\frac{d}{m\pi}\sqrt{\frac{(n-1)F_1}{F_n}}$ is part of a continuous family of stationary solutions that satisfy the orthogonal boundary condition. Furthermore, if $m=1$ and
\begin{align}\label{HomogCond}
&\frac{-6(n-1)\sum_{a=1}^n\frac{c_a\pi^a}{d^a}\left(\frac{F_n}{(n-1)F_1}\right)^{\frac{a}{2}}\left(\binom{n-2}{a-1} -\frac{F_1}{F_n}\binom{n-1}{a-1}\right)}{\sum_{a=0}^n\frac{c_a\pi^a}{d^a}\left(\frac{F_n}{(n-1)F_1}\right)^{\frac{a}{2}}\binom{n-1}{a}} -k^2+6n+6-\frac{3F_1^2F_{nnn}}{2F_n^3}\\
&\hspace{0.6cm} +\frac{2F_1^2F_{nn}^2}{F_n^4} +\frac{kF_1F_{nn}}{2F_n} +\frac{(n-1)F_1^2F_{nn}}{F_n^3}-\frac{(n-1)^2F_1^2}{F_n^2} -\frac{(n-1)(k-3)F_1}{F_n}<0,\nonumber
\end{align}
holds, then the stationary solutions close to the cylinder of radius $\frac{d}{\pi}\sqrt{\frac{(n-1)F_1}{F_n}}$ are stable under axially symmetric, weighted-volume preserving perturbations.
\end{corollary}
Note that the first result of this Corollary is slightly stronger than what is immediately obtained from Theorem \ref{MainThm} as it proves the existence of a sequence of bifurcation points, see Corollary \ref{StatSolHomog}. However, the remainder of Corollary \ref{HomogCor} follows straight from Theorem \ref{MainThm} by using the homogeneous representations of $F_a(\eta)$, $F_{nn}(\eta)$ and $F_{nnn}(\eta)$.
\begin{corollary}\label{MainCor}
The cylinder of radius $\frac{d\sqrt{n-1}}{\pi}$ is part of a continuous family of CMC unduloids that satisfy the orthogonal boundary condition. Furthermore, under the VPMCF along with orthogonal boundary condition, the unduloids close to the cylinder are unstable stationary solutions in dimensions $2\leq n\leq10$, while for $n\geq11$ they are stable under axially symmetric, volume preserving perturbations.
\end{corollary}
\begin{remark}\label{MainRem}
\begin{enumerate}
\item The first statement of the corollary is easily seen from the work of Delaunay \cite{Delaunay41} when $n=2$ and Hsiang and Yu \cite{Hsiang81} in higher dimensions. It was also shown in \cite{Hsiang81} that this family of CMC unduloids limits to spheres of radius $d$.
\item It can also be seen from Corollary \ref{HomogCor} that the cylinders of radius $\frac{md\sqrt{n-1}}{\pi}$, $m\in\mathbb{N}$ are also part of a continuous family of CMC unduloids that satisfy the orthogonal boundary condition. However these cylinders, and nearby unduloids, are known to be linearly unstable under the flows.
\item The stability of unduloids under VPMCF in high dimensions is an interesting result and it is not currently known why such a change in stability occurs at $n=11$.
\item Note that Corollary \ref{MainCor} does not contradict the theorem in \cite{Athanassenas97} since for a critical cylinder $\frac{Vol}{d\left|\Omega\right|}=\frac{\sqrt{n-1}}{n\pi}<\frac{1}{6}$, so that (\ref{MariaAssump}) is not satisfied and will not be satisfied for unduloids close to this cylinder, or the hypersurfaces close to them.
\end{enumerate}
\end{remark}
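While a geometric explanation for the change of stability is lacking, its numerical origin is transparent from (\ref{HomogCond}): for the VPMCF we have $k=1$, $F_1=F_n=1$, $F_{nn}=F_{nnn}=0$ and $c_a=\delta_{a0}$, so the left hand side of (\ref{HomogCond}) collapses to
\begin{equation*}
-1+6n+6-(n-1)^2+2(n-1)=-n^2+10n+2,
\end{equation*}
which is negative precisely when $n>5+3\sqrt{3}\approx10.2$, that is, for $n\geq11$.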
In Section \ref{SecRedEq} we introduce the axially symmetric flow and remove the boundary conditions by considering it as an even function on the circle of radius $\frac{d}{\pi}$. We then introduce an invariant parameter, based on the weighted-volume, into the flow that enables us to reduce the problem to one on even, mean zero functions on the circle. Section \ref{SecBifAnal} analyses the reduced equation and proves Theorem \ref{MainThm}, while Section \ref{SecMVPMCF} considers the specific case of mixed-volume preserving mean curvature flow and in doing so proves Corollary \ref{MainCor}. In Section \ref{SecGeoCons} we provide an alternate construction of the family of CMC unduloids, using their representation from \cite{Hsiang81}. This is then used to provide more insight into the results of Section \ref{SecMVPMCF}.
The author is thankful to the University of Queensland and Dr Artem Pulemotov along with Monash University and Dr Maria Athanassenas for their support while preparing this paper. Part of this work was initiated during the author's PhD candidature at Monash University.
\section{Reducing the Equation}\label{SecRedEq}
We consider normal graphs over cylinders, hence $M^n=\mathbb{S}^{n-1}\times(0,d)$, and embeddings that are axially symmetric:
\begin{equation*}
\bm{X}_{\rho}\left(\bm{\theta},z\right)=\left(\rho(z)Y^{n-1}_{1,1}\left(\bm{\theta}\right),\ldots,\rho(z)Y^{n-1}_{1,n}\left(\bm{\theta}\right),z\right)
\end{equation*}
where $\rho:[0,d]\rightarrow\mathbb{R}^+$ and $Y^{n-1}_{1,a}:\mathbb{S}^{n-1}\rightarrow\mathbb{R}$, $1\leq a\leq n$, are the first order spherical harmonics on $\mathbb{S}^{n-1}$. We also restrict to functions so that the orthogonal boundary condition is satisfied, i.e. $\left.\frac{d\rho}{dz}\right|_{z=0,d}=0$. Up to a tangential diffeomorphism the flow (\ref{WVPCF}) is equivalent to a PDE on the function $\rho:[0,d]\times[0,T)\rightarrow\mathbb{R}^+$:
\begin{equation}\label{WVPCFGraph}
\begin{array}{c}\frac{\partial \rho}{\partial t}=\sqrt{1+\left(\frac{\partial\rho}{\partial z}\right)^2}\left(\frac{1}{\int_{[0,d]}\Xi\left(\kapsurf{\rho}\right)\,d\musurf{\rho}}\int_{[0,d]}F\left(\kapsurf{\rho}\right) \Xi\left(\kapsurf{\rho}\right)\,d\musurf{\rho}-F\left(\kapsurf{\rho}\right)\right),\\ \left.\frac{\partial\rho}{\partial z}\right|_{z=0,d}=0,\hspace{0.7cm}\rho(\cdot,0)=\rho_0,\end{array}
\end{equation}
where $\tilde{\bm{X}}_{0}=\bm{X}_{\rho_0}$, $\,d\musurf{\rho}=\rho^{n-1}\sqrt{1+\left(\frac{\partial\rho}{\partial z}\right)^2}\,dz$ and $\kapsurf{\rho}$ is the principal curvature vector of the embedding $\bm{X}_{\rho}$.
Throughout this paper we will be considering functions in the little-H\"older spaces, which are defined for an open set $U\subset\mathbb{R}^n$, $k\in\mathbb{N}$ and $\alpha\in(0,1)$ by
\begin{align*}
&h^{0,\alpha}\left(\bar{U}\right)=\left\{f\in C^{0,\alpha}\left(\bar{U}\right):\lim_{r\rightarrow0}\sup_{\stackrel{x,y\in\bar{U}}{0<|x-y|<r}}\frac{|f(x)-f(y)|}{|x-y|^{\alpha}}=0\right\},\\
h^{k,\alpha}\left(\bar{U}\right)=&\left\{f\in C^{k,\alpha}\left(\bar{U}\right):D^{\beta}f\in h^{0,\alpha}\left(\bar{U}\right)\text{ for all multi-indices }\beta\text{ with }|\beta|=k\right\},
\end{align*}
where $C^{k,\alpha}\left(\bar{U}\right)$ are the H\"older spaces. These spaces are equipped with the same norm as the H\"older spaces and can be extended to a manifold by means of an atlas. Importantly these spaces interpolate between themselves when using the continuous interpolation functor $(\cdot,\cdot)_{\theta}$, $\theta\in(0,1)$, \cite{Lunardi95}. That is, for $k,l\in\mathbb{N}$, $\alpha,\beta\in(0,1)$ with $k+\alpha>l+\beta$ then
\begin{equation}\label{lHInterp}
\left(h^{l,\beta},h^{k,\alpha}\right)_{\theta}=h^{\theta k+(1-\theta)l+\theta\alpha+(1-\theta)\beta}
\end{equation}
for $\theta\in(0,1)$ provided $\theta k+(1-\theta)l+\theta\alpha+(1-\theta)\beta\notin\mathbb{Z}$, see \cite{Guenther02}. Note that here we use the notation $h^{\sigma}=h^{\lfloor\sigma\rfloor,\sigma-\lfloor\sigma\rfloor}$ for $\sigma\in\mathbb{R}$. To restrict to functions that satisfy our boundary condition we define:
\begin{equation*}
h^{k,\alpha}_{B}\left(\overline{M}^n\right)=\left\{f\in h^{k,\alpha}\left(\overline{M}^n\right):\left.B[f]\right|_{\partial M^n}=0\right\},
\end{equation*}
for $k\in\mathbb{N}_0$ and $\alpha\in(0,1)$.
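As an instance of (\ref{lHInterp}), taking $l=0$, $k=2$, $\beta=\alpha$ and $\theta=\frac{1}{2}$ gives $\left(h^{0,\alpha},h^{2,\alpha}\right)_{\frac{1}{2}}=h^{1+\alpha}=h^{1,\alpha}$, since $\frac{1}{2}(2+\alpha)+\frac{1}{2}\alpha=1+\alpha\notin\mathbb{Z}$.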
To remove the boundary condition we consider $h^{k,\alpha}$ functions on the circle of radius $\frac{d}{\pi}$, $\mathscr{S}_{\frac{d}{\pi}}^1$, with an even symmetry; we denote this space of functions by $h^{k,\alpha}_e\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$. Also, denote the even extension of a function $\rho$ from $[0,d]$ to $\mathscr{S}_{\frac{d}{\pi}}^1$ by $u_{\rho}$, i.e.
\begin{equation*}
u_{\rho}(z)=\left\{\begin{array}{ll} \rho(z) & z\in[0,d]\\ \rho(-z) & z\in(-d,0)\end{array}\right..
\end{equation*}
Lastly, for an even function $u$, define $\left.u\right|_{[0,d]}$ to be the function on $[0,d]$ such that its even extension to the circle is $u$. Using this notation we see that solving (\ref{WVPCFGraph}) is then equivalent to solving
\begin{equation}\label{WVPCFCircle}
\frac{\partial u}{\partial t}=\sqrt{1+u'^2}\left(\frac{1}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)\,d\musurf{u}}\int_{\mathscr{S}_{\frac{d}{\pi}}^1}F\left(\kapsurf{u}\right)\Xi\left(\kapsurf{u}\right)\,d\musurf{u}-F\left(\kapsurf{u}\right)\right),\hspace{0.5cm}u(\cdot,0)=u_{\rho_0},
\end{equation}
where $z$ is an arc-length variable, $\,d\musurf{u}=u^{n-1}\sqrt{1+u'^2}\,dz$ and $\kapsurf{u_{\rho}}$ is the even extension of $\kapsurf{\rho}$, i.e.
\begin{equation*}
\kapsurf{u}=\left(\frac{1}{u\sqrt{1+u'^2}},\ldots,\frac{1}{u\sqrt{1+u'^2}},-\frac{u''}{\left(1+u'^2\right)^{\frac{3}{2}}}\right),
\end{equation*}
where we use $'$ to denote derivatives with respect to $z$. The equivalence is easily seen because the right hand side of (\ref{WVPCFCircle}) is an even function when $u$ is even.
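In particular, for a constant function $u\equiv R$ this gives $\kapsurf{u}=\left(\frac{1}{R},\ldots,\frac{1}{R},0\right)=\bm{\kappa}_R$, so $F\left(\kapsurf{u}\right)$ is constant, the right hand side of (\ref{WVPCFCircle}) vanishes, and every cylinder is a stationary solution.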
We now consider the function $Q$:
\begin{equation}\label{defQ}
Q(u):=c_0u^n+\left(\sum_{a=1}^{n}\frac{nc_a}{a}E_{a-1}\left(\kapsurf{u}\right)\right)u^{n-1}\sqrt{1+u'^2},
\end{equation}
and define
\begin{equation*}
\tilde{Q}(\eta):=Q\left(\frac{n-1}{\eta}\right)=\sum_{a=0}^{n}c_a(n-1)^{n-a}\binom{n}{a}\eta^{-(n-a)}.
\end{equation*}
Note that $\tilde{Q}'(\eta)=-\frac{n(n-1)^n}{\eta^{n+1}}\Xi\left(\kapsurf{\frac{n-1}{\eta}}\right)$ so the function is decreasing at $\eta$ whenever \descref{(A3)} holds for $\tilde{R}=\frac{n-1}{\eta}$, in particular we have a local inverse of $\tilde{Q}$ at these points. Corollary \ref{CorInvar} gives us that the weighted-volume
\begin{align*}
WVol(u)=&\int_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u)\,dz\\
=&c_0\int_{\mathscr{S}_{\frac{d}{\pi}}^1}u^n\,dz+\sum_{a=1}^n\frac{nc_a}{a}\int_{\mathscr{S}_{\frac{d}{\pi}}^1}E_{a-1}\left(\kapsurf{u}\right)\,d\musurf{u}\\
=&\frac{2}{\omega_n}\left(c_0 V_{n+1}(u|_{[0,d]})+\sum_{a=1}^nc_a\binom{n+1}{a}V_{n+1-a}(u|_{[0,d]})\right)
\end{align*}
is an invariant of the flow. Here we use $\omega_n$ to denote the volume of a unit $n$-ball and $V_b(\rho)$ to denote the mixed-volume of the hypersurface defined by $\bm{X}_{\rho}$. The mixed-volumes are usually only defined for convex hypersurfaces, however they can be extended to all hypersurfaces by using the formula:
\begin{equation}\label{MixedV}
V_{b}=\left\{\begin{array}{ll}\frac{1}{(n+1)\binom{n}{n-b}}\int_{M^n}E_{n-b}\left(\bm{\kappa}\right)\,d\mu & b=1,\ldots,n,\\ Vol & b=n+1.\end{array}\right.
\end{equation}
Note that in the case $c_a=\delta_{ab}$ for some $0\leq b\leq n-1$, $WVol(u)$ is (up to a positive multiple) the $(n+1-b)$\textsuperscript{th} mixed volume $V_{n+1-b}(u|_{[0,d]})$, the $b=n$ case is excluded as then \descref{(A3)} is not satisfied for any $\tilde{R}$.
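In the simplest case $b=0$ (the VPMCF weight $\Xi\equiv1$), (\ref{defQ}) gives $Q(u)=u^n$ and hence $WVol(u)=\int_{\mathscr{S}_{\frac{d}{\pi}}^1}u^n\,dz=\frac{2}{\omega_n}V_{n+1}\left(u|_{[0,d]}\right)$, so the invariant is precisely the enclosed volume, as expected for the VPMCF.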
We now follow \cite{LeCronePre} in introducing the function $\psi_{\eta_0}$ from an open set of $h^{k,\alpha}_{e,0}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)\times\mathbb{R}^+$ to $h^{k,\alpha}_e\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$, where
\begin{equation*}
h^{k,\alpha}_{e,0}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)=\left\{f\in h^{k,\alpha}_{e}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right):\int_{\mathscr{S}_{\frac{d}{\pi}}^1}f\,dz=0\right\}.
\end{equation*}
To simplify the notation we define the projection $P_0:h^{0,\alpha}_e\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)\rightarrow h^{0,\alpha}_{e,0}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$:
\begin{equation*}
P_0[u]=u-\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}u\,dz,
\end{equation*}
and the operators $L(u):=\sqrt{1+u'^2}$ and
\begin{equation*}
G(u)=L(u)\left(\frac{1}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)\,d\musurf{u}}\int_{\mathscr{S}_{\frac{d}{\pi}}^1}F\left(\kapsurf{u}\right)\Xi\left(\kapsurf{u}\right)\,d\musurf{u} -F\left(\kapsurf{u}\right)\right).
\end{equation*}
\begin{lemma}\label{defpsi}
For each $\eta_0\in\mathbb{R}^+$ such that $\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)>0$ there exist $U_{\eta_0}$, $V_{\eta_0}$, neighbourhoods of $(0,\eta_0)\in h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)\times\mathbb{R}^+$ and $\frac{n-1}{\eta_0}\in h_e^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$ respectively, and a smooth diffeomorphism $\psi_{\eta_0}:U_{\eta_0}\rightarrow V_{\eta_0}$ such that:
\begin{itemize}
\item $P_0\left[\psi_{\eta_0}\left(\bar{u},\eta\right)\right]=\bar{u}$ for all $\left(\bar{u},\eta\right)\in U_{\eta_0}$,
\item $\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q\left(\psi_{\eta_0}\left(\bar{u},\eta\right)\right)\,dz=\tilde{Q}\left(\eta\right)$ for all $\left(\bar{u},\eta\right)\in U_{\eta_0}$,
\item For $u\in V_{\eta_0}$, $\psi_{\eta_0}^{-1}(u)=\left(P_0[u],\tilde{Q}^{-1}\left(\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u)\,dz\right)\right)$. In particular $\psi_{\eta_0}\left(0,\eta\right)=\frac{n-1}{\eta}$ for all $\left(0,\eta\right)\in U_{\eta_0}$,
\item $\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)>0$ for all $\left(\bar{u},\eta\right)\in U_{\eta_0}$.
\end{itemize}
Furthermore:
\begin{equation}\label{Dpsi}
D_1\psi_{\eta_0}\left(\bar{u},\eta\right)[\bar{v}]=\bar{v}-\frac{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}\left(\bar{u},\eta\right)^{n-1}\,dz}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}\left(\bar{u},\eta\right)^{n-1}\,dz}.
\end{equation}
\end{lemma}
\begin{proof}
We consider the operator
\begin{equation*}
\Phi\left(u,\bar{u},\eta\right)=\left(P_0[u]-\bar{u},\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u)\,dz-\tilde{Q}\left(\eta\right)\right),
\end{equation*}
which is a generalisation of the one in \cite{LeCronePre}, and note that $\Phi\left(\frac{n-1}{\eta_0},0,\eta_0\right)=(0,0)$. We linearise with respect to the $u$ function around the point $\left(\frac{n-1}{\eta_0},0,\eta_0\right)$ and act on $v\in h_e^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$:
\begin{align*}
D_1\Phi\left(\frac{n-1}{\eta_0},0,\eta_0\right)[v]=&\left(P_0[v],\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}DQ\left(\frac{n-1}{\eta_0}\right)[v]\,dz\right)\\
=&\left(P_0[v],n\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}v\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\left(\frac{n-1}{\eta_0}\right)^{n-1}\,dz\right),
\end{align*}
where we have used (\ref{QIdent}). Since $\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\neq0$ this operator has trivial kernel. Therefore by the implicit function theorem we obtain the existence of $\psi_{\eta_0}$ locally around $(0,\eta_0)\in h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)\times\mathbb{R}^+$ such that it satisfies the first three dot points and, if necessary, we shrink $U_{\eta_0}$ to ensure the fourth holds.
To obtain the linearisation of $\psi_{\eta_0}$ we consider
\begin{equation}\label{defphi}
\phi(\bar{u},\eta)=\Phi\left(\psi_{\eta_0}\left(\bar{u},\eta\right),\bar{u},\eta\right)=\left(P_0\left[\psi_{\eta_0}\left(\bar{u},\eta\right)\right]-\bar{u},\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q\left(\psi_{\eta_0}\left(\bar{u},\eta\right)\right)\,dz-\tilde{Q}\left(\eta\right)\right).
\end{equation}
By linearising with respect to $\bar{u}$ we obtain:
\begin{align*}
D_1\phi(\bar{u},\eta)[\bar{v}]=&\left(P_0\left[D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\right]-\bar{v},\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}DQ\left(\psi_{\eta_0}(\bar{u},\eta)\right)\left[D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\right]\,dz\right)\\
=&\left(P_0\left[D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\right]-\bar{v},n\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz\right),
\end{align*}
where we have used (\ref{QIdent}) again. By the properties of $\psi_{\eta_0}$ we have that $\phi(\bar{u},\eta)=(0,0)$ for all $\left(\bar{u},\eta\right)\in U_{\eta_0}$, hence we obtain:
\begin{equation*}
\left(P_0\left[D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\right]-\bar{v},n\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz\right)=(0,0).
\end{equation*}
From the first of these equations we obtain $D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]=\bar{v}+\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\,dz$, so by substituting this into the second equation we obtain:
\begin{equation*}
\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz +\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\,dz=0.
\end{equation*}
Therefore
\begin{equation*}
\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\,dz=-\frac{\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz}{\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{\psi_{\eta_0}(\bar{u},\eta)}\right)\psi_{\eta_0}(\bar{u},\eta)^{n-1}\,dz},
\end{equation*}
and the result follows by substituting this into $D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]=\bar{v}+\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\,dz$.
\end{proof}
We also note that from (\ref{defphi}) and $\phi(\bar{u},\eta)=(0,0)$ we have the representation $\psi_{\eta_0}(\bar{u},\eta)=\bar{u}+\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\psi_{\eta_0}(\bar{u},\eta)\,dz$; in particular this means that $\frac{d\psi_{\eta_0}(\bar{u},\eta)}{dz}=\frac{d\bar{u}}{dz}$. We are now able to introduce an equivalent equation on the reduced space $h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$.
\begin{corollary}\label{EquivEq}
Let $\eta_0\in\mathbb{R}^+$ be such that \descref{(A1)}-\descref{(A4)} are satisfied with $\tilde{R}=\frac{n-1}{\eta_0}$, then the flow
\begin{equation}\label{RedWVPCF}
\frac{\partial\bar{u}}{\partial t}=\bar{G}_{\eta_0}(\bar{u},\eta):=P_0\left[G\left(\psi_{\eta_0}(\bar{u},\eta)\right)\right],\ \ \bar{u}(\cdot,0)=\bar{u}_0,
\end{equation}
for some fixed $\eta$ such that $(\bar{u}_0,\eta)\in U_{\eta_0}$, is equivalent to the flow (\ref{WVPCFCircle}).
That is, if $\bar{u}$ is a solution to (\ref{RedWVPCF}) with initial condition $\bar{u}_0$ then $\psi_{\eta_0}(\bar{u},\eta)$ is a solution to (\ref{WVPCFCircle}) with initial condition $\psi_{\eta_0}(\bar{u}_0,\eta)$. Conversely, if $u$ is a solution to (\ref{WVPCFCircle}) with initial condition $u_0$ such that $u(t)\in V_{\eta_0}$ for each $t$, then $P_0[u]$ is a solution to (\ref{RedWVPCF}) with $\eta=\tilde{Q}^{-1}\left(\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u_0)\,dz\right)$.
\end{corollary}
\begin{proof}
Firstly let $u(t)$ be a solution to (\ref{WVPCFCircle}) in $V_{\eta_0}$ for all $t\in[0,\delta)$. Set $\eta=\tilde{Q}^{-1}\left(\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u_0)\,dz\right)$ and since $\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u)\,dz$ is an invariant of the flow we have
\begin{equation*}
\eta=\tilde{Q}^{-1}\left(\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q(u(t))\,dz\right),
\end{equation*}
for all $t\in[0,\delta)$. So by Lemma \ref{defpsi} we have that $u(t)=\psi_{\eta_0}\left(P_0[u(t)],\eta\right)$ for all $t\in[0,\delta)$. Substituting this into (\ref{WVPCFCircle}) gives
\begin{equation*}
\frac{\partial u}{\partial t}=G\left(\psi_{\eta_0}\left(P_0[u(t)],\eta\right)\right),\ t\in[0,\delta),
\end{equation*}
and by taking the projection of this we find that $P_0[u]$ is a solution to (\ref{RedWVPCF}), with parameter $\eta$ and initial condition $\bar{u}_0=P_0[u_0]$, for all $t\in[0,\delta)$.
Now let $\bar{u}(t)$, for $t\in[0,\delta)$, be a solution to (\ref{RedWVPCF}) with parameter $\eta$ and initial condition $\bar{u}_0$. We let $u(t)=\psi_{\eta_0}\left(\bar{u}(t),\eta\right)$ and compute its derivative using (\ref{Dpsi}):
\begin{align*}
\frac{\partial u}{\partial t}=&D_1\psi_{\eta_0}\left(\bar{u},\eta\right)\left[\frac{\partial\bar{u}}{\partial t}\right]\\
=&P_0\left[G(u)\right] -\frac{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}P_0\left[G(u)\right]\Xi\left(\kapsurf{u}\right)u^{n-1}\,dz}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)u^{n-1}\,dz}\\
=&G(u) -\frac{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}G(u)\Xi\left(\kapsurf{u}\right)u^{n-1}\,dz}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)u^{n-1}\,dz}\\
=&G(u) -\frac{1}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)u^{n-1}\,dz}\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\left(\frac{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}F\left(\kapsurf{u}\right) \Xi\left(\kapsurf{u}\right)\,d\musurf{u}}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)\,d\musurf{u}} -F\left(\kapsurf{u}\right)\right)\Xi\left(\kapsurf{u}\right)\,d\musurf{u}\\
=&G(u).
\end{align*}
Therefore $u(t)$ solves (\ref{WVPCFCircle}) for $t\in[0,\delta)$ with initial condition $u_0=\psi_{\eta_0}\left(\bar{u}_0,\eta\right)$.
\end{proof}
\section{Non-trivial Stationary Solutions}\label{SecBifAnal}
The aim of this section is to prove Theorem \ref{MainThm}. We do it in two parts. We will first prove the existence of the stationary solutions, then determine the criteria for their stability. We will perform the analysis on the reduced equation (\ref{RedWVPCF}) then transfer the results to the full flow in Corollaries \ref{StatSol} and \ref{GenStable}, which together give Theorem \ref{MainThm}. We now fix $\eta_0=\frac{n-1}{R_{crit}}$.
\begin{theorem}\label{BifPoints}
Assume \descref{(A1)}-\descref{(A5)} are satisfied for $\tilde{R}=R_{crit}$. Then $(0,\eta_0)$ is a bifurcation point of $\bar{G}_{\eta_0}\left(\bar{u},\eta\right)=0$.
That is, there exists a curve $\left\{\left(\tilde{u}(s),\eta(s)\right):|s|<\delta\right\}\subset U_{\eta_0}$, $\delta>0$, such that $\left(\tilde{u}(0),\eta(0)\right)=(0,\eta_0)$, $\tilde{u}(s)\neq0$ for $s\neq0$, and $\bar{G}_{\eta_0}\left(\tilde{u}(s),\eta(s)\right)=0$ for all $|s|<\delta$. These are also the only non-trivial solutions to $\bar{G}_{\eta_0}\left(\bar{u},\eta\right)=0$ in a neighbourhood of $(0,\eta_0)$.
\end{theorem}
\begin{proof}
We calculate the linearisation of $\bar{G}_{\eta_0}(\bar{u},\eta)=P_0\left[G\left(\psi_{\eta_0}(\bar{u},\eta)\right)\right]$ with respect to $\bar{u}$ and act on $\bar{v}\in h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$:
\begin{equation}\label{D1Gbar}
D_1\bar{G}_{\eta_0}(\bar{u},\eta)[\bar{v}]=P_0\left[DG\left(\psi_{\eta_0}(\bar{u},\eta)\right)\left[D_1\psi_{\eta_0}(\bar{u},\eta)[\bar{v}]\right]\right].
\end{equation}
To simplify the calculation of $DG(u)$ we set $W(u)=\frac{F\left(\kapsurf{u}\right)\Xi\left(\kapsurf{u}\right)u^{n-1}L(u)}{\int_{\mathscr{S}_{\frac{d}{\pi}}^1}\Xi\left(\kapsurf{u}\right)u^{n-1}L(u)\,dz}$ and use that $DL(u)[v]=\frac{u'v'}{L(u)}$:
\begin{align}\label{DG}
DG(u)[v]=&\frac{u'v'}{L(u)}\left(\int_{\mathscr{S}_{\frac{d}{\pi}}^1}W(u)\,dz-F\left(\kapsurf{u}\right)\right)+L(u)\left(\int_{\mathscr{S}_{\frac{d}{\pi}}^1}DW(u)[v]\,dz-D\left(F\left(\kapsurf{u}\right)\right)[v]\right)\nonumber\\
=&\frac{u'v'G(u)}{L(u)^2}+L(u)\left(\int_{\mathscr{S}_{\frac{d}{\pi}}^1}DW(u)[v]\,dz-D\left(F\left(\kapsurf{u}\right)\right)[v]\right).
\end{align}
If we linearise $W$ and evaluate at $u=\frac{n-1}{\eta}$ we obtain
\begin{equation*}
\int_{\mathscr{S}_{\frac{d}{\pi}}^1}DW\left(\frac{n-1}{\eta}\right)[v]\,dz=\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[v]\,dz.
\end{equation*}
Substituting this into (\ref{DG}):
\begin{equation}\label{DG0}
DG\left(\frac{n-1}{\eta}\right)[v]=\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[v]\,dz -\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[v].
\end{equation}
Therefore using (\ref{Dpsi}) and (\ref{D1Gbar}) the linearisation of $\bar{G}_{\eta_0}$ at the point $(0,\eta)$ is
\begin{align}\label{D1Gbar0}
D_1\bar{G}_{\eta_0}(0,\eta)[\bar{v}]=&DG\left(\frac{n-1}{\eta}\right)[\bar{v}]\\
=&\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[\bar{v}]\,dz -\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[\bar{v}]\nonumber.
\end{align}
To calculate the linearisation of $F\left(\kapsurf{u}\right)$ we use its symmetries:
\begin{equation}\label{DF}
D\left(F\left(\kapsurf{u}\right)\right)[v]=(n-1)\frac{\partial F}{\partial\kappa_1}\left(\kapsurf{u}\right)D\kappa_1(u)[v] +\frac{\partial F}{\partial\kappa_n}\left(\kapsurf{u}\right)D\kappa_n(u)[v],
\end{equation}
where
\begin{equation*}
\kappa_1(u)=\frac{1}{uL(u)},\ \ \ \kappa_n(u)=-\frac{u''}{L(u)^3}.
\end{equation*}
Taking the linearisation of the curvatures:
\begin{equation}\label{Dkap}
D\kappa_1(u)[v]=\frac{-v}{u^2L(u)}-\frac{u'v'}{uL(u)^3},\ \ \ D\kappa_n(u)[v]=-\frac{v''}{L(u)^3}+\frac{3u''u'v'}{L(u)^5},
\end{equation}
therefore
\begin{equation}\label{Dkap0}
D\kappa_1\left(\frac{n-1}{\eta}\right)[v]=-\left(\frac{\eta}{n-1}\right)^2v,\ \ \ D\kappa_n\left(\frac{n-1}{\eta}\right)[v]=-v''.
\end{equation}
Substituting this into (\ref{DF}) we find:
\begin{equation}\label{DF0}
\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta}}[v]=-\left(F_n(\eta)v''+\frac{\eta^2F_1(\eta)}{n-1}v\right).
\end{equation}
By (\ref{D1Gbar0}) this gives:
\begin{equation}\label{D1Gbar02}
D_1\bar{G}_{\eta_0}(0,\eta)[\bar{v}]=F_n(\eta)\bar{v}''+\frac{\eta^2F_1(\eta)}{(n-1)}\bar{v}.
\end{equation}
This map is a bijection from $h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$ to $h_{e,0}^{0,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$ except if $\eta=\frac{m\pi}{d}\sqrt{\frac{(n-1)F_n(\eta)}{F_1(\eta)}}$ for some $m\in\mathbb{N}$ or $F_1(\eta)=F_n(\eta)=0$. Thus bifurcation can only occur at these points, such as when $\eta=\frac{n-1}{R_{crit}}$.
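Indeed, on the basis functions $\cos\left(\frac{m\pi z}{d}\right)$, $m\in\mathbb{N}$, of $h_{e,0}^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$ the operator (\ref{D1Gbar02}) acts as multiplication by
\begin{equation*}
\lambda_m(\eta)=-F_n(\eta)\left(\frac{m\pi}{d}\right)^2+\frac{\eta^2F_1(\eta)}{n-1},
\end{equation*}
and these eigenvalues vanish exactly at the exceptional values of $\eta$ just listed.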
When $\eta=\frac{m\pi}{d}\sqrt{\frac{(n-1)F_n(\eta)}{F_1(\eta)}}$ it is easily seen that
\begin{align*}
N\left(D_1\bar{G}_{\eta_0}(0,\eta)\right)=&\text{span}\left(\cos\left(\frac{m\pi z}{d}\right)\right)\\
R\left(D_1\bar{G}_{\eta_0}(0,\eta)\right)=&h_{e,0}^{0,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)/\cos\left(\frac{m\pi z}{d}\right).
\end{align*}
Also, from (\ref{D1Gbar02}),
\begin{align}\label{D12Gbar0}
D_{12}^2\bar{G}_{\eta_0}\left(0,\eta\right)[\bar{v}]=&F_{n}'(\eta)\bar{v}''+\left(\frac{2\eta F_1(\eta)}{n-1}+\frac{\eta^2F_1'(\eta)}{n-1}\right)\bar{v}\nonumber\\
=&\frac{F_{n}'(\eta)}{F_n(\eta)}D_1\bar{G}_{\eta_0}(0,\eta)[\bar{v}]+\frac{\eta}{n-1}\left(2F_1(\eta)+\eta\left(F_1'(\eta)-\frac{F_1(\eta)F_{n}'(\eta)}{F_n(\eta)}\right)\right)\bar{v}.
\end{align}
Therefore when $\eta$ satisfies $\eta=\frac{m\pi}{d}\sqrt{\frac{(n-1)F_n(\eta)}{F_1(\eta)}}$ we have
\begin{equation*}
D_{12}^2\bar{G}_{\eta_0}\left(0,\eta\right)\left[\cos\left(\frac{m\pi z}{d}\right)\right]= \frac{\eta}{n-1}\left(2F_1(\eta)+\eta\left(F_1'(\eta)-\frac{F_1(\eta)F_{n}'(\eta)}{F_n(\eta)}\right)\right)\cos\left(\frac{m\pi z}{d}\right).
\end{equation*}
To apply Theorem I.5.1 in \cite{Kielhofer12} we require that $D_{12}^2\bar{G}_{\eta_0}\left(0,\eta\right)\left[\cos\left(\frac{m\pi z}{d}\right)\right]\notin R\left(D_1\bar{G}_{\eta_0}(0,\eta)\right)$, which is equivalent to
\begin{equation}\label{BifCond}
2F_1(\eta)+\eta\left(F_1'(\eta)-\frac{F_1(\eta)F_{n}'(\eta)}{F_n(\eta)}\right)\neq0.
\end{equation}
To prove this is the case when $\eta=\frac{n-1}{R_{crit}}$, we notice that by assumption \descref{(A5)} the function $f(\eta)=\frac{\eta^2}{n-1}F_1(\eta)-\frac{\pi^2}{d^2}F_n(\eta)$ changes sign at $\eta=\frac{n-1}{R_{crit}}$, and hence $f'\left(\frac{n-1}{R_{crit}}\right)\neq0$. Calculating this derivative gives:
\begin{align*}
f'(\eta)=&\frac{2\eta}{n-1}F_1(\eta)+\frac{\eta^2F_1'(\eta)}{n-1}-\frac{\pi^2}{d^2}F_{n}'(\eta)\\
=&\frac{\eta}{n-1}\left(2F_1(\eta)+\eta\left(F_1'(\eta)-\frac{(n-1)^2F_1\left(\frac{n-1}{R_{crit}}\right)}{\eta^2R_{crit}^2F_n\left(\frac{n-1}{R_{crit}}\right)}F_{n}'(\eta)\right)\right),
\end{align*}
so that (\ref{BifCond}) is satisfied when $\eta=\frac{n-1}{R_{crit}}$.
\end{proof}
\begin{corollary}\label{StatSol}
There exists a continuously differentiable family of nontrivial, axially symmetric hypersurfaces that are stationary solutions to the flow (\ref{WVPCF}), with assumptions \descref{(A1)}-\descref{(A5)} satisfied for $\tilde{R}=R_{crit}$, that includes the cylinder of radius $R_{crit}$; they are given by the profile curves $\tilde{\rho}(s):=\psi_{\eta_0}\left(\tilde{u}(s),\eta(s)\right)|_{[0,d]}$.
\end{corollary}
We now give a stronger corollary for when $F$ is a homogeneous function. This proves the first part of Corollary \ref{HomogCor}, the second part of which follows straight from Theorem \ref{MainThm}.
\begin{corollary}\label{StatSolHomog}
There exists a continuously differentiable family of nontrivial, axially symmetric hypersurfaces that are stationary solutions to the flow (\ref{WVPCF}), with assumptions \descref{(A1)}-\descref{(A5)*} satisfied at $\tilde{R}=\frac{n-1}{\eta_m}$ with $\eta_m:=\frac{m\pi}{d}\sqrt{\frac{(n-1)F_n}{F_1}}$, that includes the cylinder of radius $\frac{n-1}{\eta_m}$; they can each be represented by a profile curve: $\tilde{\rho}_m(s):=\psi_{\eta_m}\left(\tilde{u}_m(s),\eta_m(s)\right)|_{[0,d]}$.
\end{corollary}
\begin{proof}
Since $F$ is homogeneous, \descref{(A2)} reduces to $F_a>0$ for $a=1,n$. Therefore (\ref{D1Gbar02}) becomes
\begin{equation}
D_1\bar{G}_{\eta_0}(0,\eta)[\bar{v}]=\eta^{k-1}F_n\left(\bar{v}''+\frac{\eta^2F_1}{(n-1)F_n}\bar{v}\right),
\end{equation}
which is a bijection if and only if $\eta\neq\eta_m$. Thus bifurcation can only occur at $(0,\eta_m)$. Also, by using the relation $F_a'(\eta)=(k-1)\eta^{k-2}F_a$ for $a=1,n$, condition (\ref{BifCond}) reduces to $F_1\neq0$ and hence each of these points is a bifurcation point with bifurcation curve $(\tilde{u}_m(s),\eta_m(s))$.
\end{proof}
We will now consider the spectrum of $D_1\bar{G}_{\eta_0}(0,\eta)$. It is clear from (\ref{D1Gbar02}) that if $\eta<\eta_0$ ($\eta>\eta_0$, see \descref{(A5)}) then the spectrum of $D_1\bar{G}_{\eta_0}(0,\eta)$ is contained in the negative real axis; this leads to the stability of the null solution, which is a special case of Theorem \ref{MainRCyl}. However, when $\eta=\eta_0$ the first eigenvalue becomes zero; we now determine how this eigenvalue behaves as we move from linearising about $\left(0,\eta_0\right)$ to linearising about points on the bifurcation curve.
To analyse the curve we make the following definitions:
\begin{equation*}
\hat{v}:=A\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right),\ \ A:=\left\|\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right\|^{-1}_{h^{2,\alpha}},
\end{equation*}
\begin{equation*}
\tilde{v}:=B\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right),\ \ B:=\left\|\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right\|^{-1}_{h^{0,\alpha}},
\end{equation*}
and
\begin{equation}\label{defvtils}
\tilde{v}^*[\bar{v}]:=\frac{2}{B}\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\,dz,
\end{equation}
so that $\tilde{v}^*[\tilde{v}]=1$ and, by the self-adjointness of $D_1\bar{G}_{\eta_0}(0,\eta_0)$ with respect to the $L^2$ inner product, $\tilde{v}^*\left[D_1\bar{G}_{\eta_0}\left(0,\eta_0\right)[v]\right]=0$ for all $v\in h_e^{2,\alpha}\left(\mathscr{S}_{\frac{d}{\pi}}^1\right)$. For ease of notation we now drop all subscripts referencing $\eta_0$.
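Note also that, since $\eta_0=\frac{n-1}{R_{crit}}$ and $R_{crit}$ satisfies equality in (\ref{RCond}), we have $\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}=\frac{\pi}{d}$, so $\hat{v}$ and $\tilde{v}$ are simply normalised multiples of $\cos\left(\frac{\pi z}{d}\right)$, the kernel function found in the proof of Theorem \ref{BifPoints} with $m=1$.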
\begin{theorem}\label{BifShape}
Let \descref{(A1)}-\descref{(A5)} hold with $\tilde{R}=R_{crit}$. Then the bifurcation curve from Theorem \ref{BifPoints} has the following properties:
\begin{equation}\label{Deta}
\eta'(0)=0
\end{equation}
and
\begin{equation}\label{D2eta}
\eta''(0)=\frac{-\eta_0^3A^2}{12}\left(\frac{\mathscr{F}}{2F_1(\eta_0)+\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}} -\frac{6\sum_{a=1}^n\frac{c_a\eta_0^{a}}{(n-1)^{a}}\left(\binom{n-2}{a-1}-\frac{F_1(\eta_0)}{F_n(\eta_0)}\binom{n-1}{a-1}\right)}{(n-1)\sum_{a=0}^n\frac{c_a\eta_0^a}{(n-1)^a}\binom{n-1}{a}}\right),
\end{equation}
where
\begin{align}
\mathscr{F}=&\frac{3\eta_0^2F_{1}''(\eta_0)}{(n-1)^2}-\frac{9\eta_0^2F_1(\eta_0)F_{n}''(\eta_0)}{(n-1)^2F_n(\eta_0)}+\frac{9\eta_0^2F_1(\eta_0)^2F_{nn}'(\eta_0)}{(n-1)^2F_n(\eta_0)^2}-\frac{3\eta_0^2F_1(\eta_0)^3F_{nnn}(\eta_0)}{(n-1)^2F_n(\eta_0)^3} \nonumber\\
&+\frac{\eta_0^2F_{1}'(\eta_0)^2}{(n-1)^2F_1(\eta_0)}-\frac{7\eta_0^2F_{1}'(\eta_0)F_{n}'(\eta_0)}{(n-1)^2F_n(\eta_0)}+\frac{5\eta_0^2F_1(\eta_0)F_{1}'(\eta_0)F_{nn}(\eta_0)}{(n-1)^2F_n(\eta_0)^2}\nonumber\\
&+\frac{10\eta_0^2F_1(\eta_0)F_{n}'(\eta_0)^2}{(n-1)^2F_n(\eta_0)^2}-\frac{13\eta_0^2F_1(\eta_0)^2F_{n}'(\eta_0)F_{nn}(\eta_0)}{(n-1)^2F_n(\eta_0)^3}+\frac{4\eta_0^2F_1(\eta_0)^3F_{nn}(\eta_0)^2}{(n-1)^2F_n(\eta_0)^4} \nonumber\\
&+\frac{2(3n+8)\eta_0F_{1}'(\eta_0)}{(n-1)^2}-\frac{4\eta_0F_1(\eta_0)F_{1}'(\eta_0)}{(n-1)F_n(\eta_0)}-\frac{2(3n+13)\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{(n-1)^2F_n(\eta_0)}\nonumber\\
&+\frac{2\eta_0F_1(\eta_0)^2F_{n}'(\eta_0)}{(n-1)F_n(\eta_0)^2}+\frac{10\eta_0F_1(\eta_0)^2F_{nn}(\eta_0)}{(n-1)^2F_n(\eta_0)^2}+\frac{2\eta_0F_1(\eta_0)^3F_{nn}(\eta_0)}{(n-1)F_n(\eta_0)^3} +\frac{2(6n+5)F_1(\eta_0)}{(n-1)^2}\nonumber\\
&+\frac{4F_1(\eta_0)^2}{(n-1)F_n(\eta_0)}-\frac{2F_1(\eta_0)^3}{F_n(\eta_0)^2},\nonumber
\end{align}
and $F_{a}(\eta)=\frac{\partial F}{\partial\kappa_a}\left(\kapsurf{\frac{n-1}{\eta}}\right)$, $F_{nn}(\eta)=\frac{\partial^2 F}{\partial\kappa_n^2}\left(\kapsurf{\frac{n-1}{\eta}}\right)$ and $F_{nnn}(\eta)=\frac{\partial^3 F}{\partial\kappa_n^3}\left(\kapsurf{\frac{n-1}{\eta}}\right)$.
\end{theorem}
\begin{remark}\label{FderivRem}
Note that the derivatives that appear in equation (\ref{D2eta}) can be expanded in terms of the speed function as follows:
\begin{equation*}
F_1'(\eta)=\frac{1}{n-1}\left(\frac{\partial^2F}{\partial\kappa_1^2}\left(\kapsurf{\frac{n-1}{\eta}}\right)+(n-2)\frac{\partial^2F}{\partial\kappa_1\partial\kappa_2}\left(\kapsurf{\frac{n-1}{\eta}}\right)\right),
\end{equation*}
\begin{equation*}
F_1''(\eta)=\frac{1}{(n-1)^2}\left(\frac{\partial^3F}{\partial\kappa_1^3}\left(\kapsurf{\frac{n-1}{\eta}}\right) +2(n-2)\frac{\partial^3F}{\partial\kappa_1^2\partial\kappa_2}\left(\kapsurf{\frac{n-1}{\eta}}\right) +(n-2)(n-3)\frac{\partial^3F}{\partial\kappa_1\partial\kappa_2\partial\kappa_3}\left(\kapsurf{\frac{n-1}{\eta}}\right)\right),
\end{equation*}
\begin{equation*}
F_n'(\eta)=\frac{\partial^2F}{\partial\kappa_1\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta}}\right),\ \ F_n''(\eta)=\frac{1}{n-1}\left(\frac{\partial^3F}{\partial\kappa_1^2\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta}}\right) +(n-2)\frac{\partial^3F}{\partial\kappa_1\partial\kappa_2\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta}}\right)\right),
\end{equation*}
\begin{equation*}
F_{nn}'(\eta)=\frac{\partial^3F}{\partial\kappa_n^2\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta}}\right).
\end{equation*}
\end{remark}
\begin{proof}
These formulas come from standard calculations using equations (I.6.3), (I.6.8) and (I.6.11) from \cite{Kielhofer12}:
\begin{equation}\label{Deta1}
\eta'(0)=\frac{-\tilde{v}^*\left[D_{11}^2\bar{G}(0,\eta_0)\left[\hat{v},\hat{v}\right]\right]}{2\tilde{v}^*\left[D_{12}^2\bar{G}(0,\eta_0)\left[\hat{v}\right]\right]},
\end{equation}
\begin{equation}\label{D2eta2}
\eta''(0)=\frac{-\tilde{v}^*\left[D_{111}^3\bar{G}(0,\eta_0)\left[\hat{v},\hat{v},\hat{v}\right]+3D_{11}^2\bar{G}(0,\eta_0)\left[\hat{v},\tilde{w}\right]\right]}{3\tilde{v}^*\left[D_{12}^2\bar{G}(0,\eta_0)\left[\hat{v}\right]\right]},
\end{equation}
where $\tilde{w}$ is the solution to
\begin{equation}\label{defw}
D_1\bar{G}(0,\eta_0)[\bar{w}]=-D_{11}^2\bar{G}(0,\eta_0)\left[\hat{v},\hat{v}\right],
\end{equation}
such that $\tilde{v}^*[\tilde{w}]=0$. Note that (\ref{D2eta2}) is only the correct formula when $\eta'(0)=0$.
By linearising (\ref{D1Gbar}) we have
\begin{align}\label{D11Gbar}
D_{11}^2\bar{G}(\bar{u},\eta)[\bar{v},\bar{w}]=&P_0\left[D^2G\left(\psi(\bar{u},\eta)\right)\left[D_1\psi(\bar{u},\eta)[\bar{v}],D_1\psi(\bar{u},\eta)[\bar{w}]\right]\right]\\
&+P_0\left[DG\left(\psi(\bar{u},\eta)\right)\left[D_{11}^2\psi(\bar{u},\eta)[\bar{v},\bar{w}]\right]\right],\nonumber
\end{align}
and from (\ref{Dpsi}) we find that
\begin{align}\label{DDpsi0}
D_{11}^2\psi(0,\eta_0)[\bar{v},\bar{w}]=&\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)^{-1}\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\left(\frac{\eta_0^2}{n-1}\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\bar{w} +\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\bar{w}''\right)\,dz\\
& -\eta_0\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\bar{v}\bar{w}\,dz.\nonumber
\end{align}
However at this stage it is only important that this is a constant function, so we identify it with the corresponding real number; in fact it is clear from (\ref{Dpsi}) that $D_{11}^2\psi(\bar{u},\eta)[\bar{v},\bar{w}]$ will be a constant function for any $(\bar{u},\eta)$. Using this, (\ref{D11Gbar}) simplifies to
\begin{equation*}
D_{11}^2\bar{G}(0,\eta_0)[\bar{v},\bar{w}]=P_0\left[D^2G\left(\frac{n-1}{\eta_0}\right)[\bar{v},\bar{w}]+D_{11}^2\psi(0,\eta_0)[\bar{v},\bar{w}]DG\left(\frac{n-1}{\eta_0}\right)[1]\right].
\end{equation*}
From (\ref{DF0}) we find $\left.D\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[1]=-\frac{\eta_0^{2}}{n-1}F_1(\eta_0)$ and hence, by (\ref{DG0}), $DG\left(\frac{n-1}{\eta_0}\right)[1]=0$. Therefore
\begin{equation*}
D_{11}^2\bar{G}(0,\eta_0)[\bar{v},\bar{w}]=P_0\left[D^2G\left(\frac{n-1}{\eta_0}\right)[\bar{v},\bar{w}]\right].
\end{equation*}
Using (\ref{DG}) we have
\begin{align}\label{DDG}
D^2G(u)[v,w]=&\frac{w'v'G(u)+u'\left(v'DG(u)[w]+w'DG(u)[v]\right)}{L(u)^2} -\frac{3u'^2v'w'G(u)}{L(u)^4}\\
& +L(u)\left(\int_{\mathscr{S}_{\frac{d}{\pi}}^1}D^2W(u)[v,w]\,dz-D^2\left(F\left(\kapsurf{u}\right)\right)[v,w]\right),\nonumber
\end{align}
which reduces to
\begin{equation*}
D^2G\left(\frac{n-1}{\eta_0}\right)[v,w]=\int_{\mathscr{S}_{\frac{d}{\pi}}^1}D^2W\left(\frac{n-1}{\eta_0}\right)[v,w]\,dz-\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[v,w].
\end{equation*}
So taking the projection gives:
\begin{equation*}
D_{11}^2\bar{G}(0,\eta_0)[\bar{v},\bar{w}]=\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\bar{v},\bar{w}]\,dz -\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\bar{v},\bar{w}].
\end{equation*}
The second linearisation of $F\left(\kapsurf{u}\right)$ is given by:
\begin{align}\label{DDF}
D^2\left(F\left(\kapsurf{u}\right)\right)[v,w]=&(n-1)\left(\frac{\partial^2F}{\partial\kappa_1^2}\left(\kapsurf{u}\right) +(n-2)\frac{\partial^2F}{\partial\kappa_1\partial\kappa_2}\left(\kapsurf{u}\right)\right)D\kappa_1(u)[v]D\kappa_1(u)[w]\\
&+(n-1)\frac{\partial^2F}{\partial\kappa_1\partial\kappa_n}\left(\kapsurf{u}\right)\left(D\kappa_1(u)[v]D\kappa_n(u)[w]+D\kappa_n(u)[v]D\kappa_1(u)[w]\right)\nonumber\\
& +\frac{\partial^2F}{\partial\kappa_n^2}\left(\kapsurf{u}\right)D\kappa_n(u)[v]D\kappa_n(u)[w] +(n-1)\frac{\partial F}{\partial\kappa_1}\left(\kapsurf{u}\right)D^2\kappa_1(u)[v,w]\nonumber\\
& +\frac{\partial F}{\partial\kappa_n}\left(\kapsurf{u}\right)D^2\kappa_n(u)[v,w].\nonumber
\end{align}
From (\ref{Dkap}) we obtain
\begin{equation}\label{DDkap}
\begin{array}{c}\normalsize{D^2\kappa_1(u)[v,w]=\frac{2vw}{u^3L(u)}+\frac{u'(vw'+v'w)}{u^2L(u)^3}-\frac{v'w'}{uL(u)^3}+\frac{3u'^2v'w'}{uL(u)^5}},\\ D^2\kappa_n(u)[v,w]=\frac{3(u'v''w'+u'v'w''+u''v'w')}{L(u)^5}-\frac{15u''u'^2v'w'}{L(u)^7},\end{array}
\end{equation}
therefore
\begin{equation}\label{DDkap0}
D^2\kappa_1\left(\frac{n-1}{\eta_0}\right)[v,w]=\frac{2\eta_0^3vw}{(n-1)^3} -\frac{\eta_0v'w'}{n-1},\ \ D^2\kappa_n\left(\frac{n-1}{\eta_0}\right)[v,w]=0,
\end{equation}
and combining this with (\ref{Dkap0}), (\ref{DDF}) and Remark \ref{FderivRem} we have
\begin{align}\label{DDF0}
\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[v,w]=&\frac{\eta_0^{3}\left(\eta_0F_{1}'(\eta_0)+2F_1(\eta_0)\right)}{(n-1)^{2}}vw +\frac{\eta_0^{2}F_{n}'(\eta_0)}{n-1}\left(vw''+v''w\right)\\
& +F_{nn}(\eta_0)v''w'' -\eta_0F_1(\eta_0)v'w'.\nonumber
\end{align}
So by using the formula for $\hat{v}$ we obtain
\begin{align}\label{DDF0vv}
\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v}]=&\frac{\eta_0^{3}A^2}{n-1}\left(\mathscr{F}_1\cos^2\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.0cm}\left. -\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\sin^2\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right)\nonumber\\
=&\frac{\eta_0^{3}A^2}{2(n-1)}\left(\mathscr{F}_1-\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right.\nonumber\\
&\hspace{1.5cm}\left.+\left(\mathscr{F}_1+\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right)\cos\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right),
\end{align}
where
\begin{equation*}
\mathscr{F}_1:=\frac{\eta_0 F_{1}'(\eta_0)}{n-1} -\frac{2\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{(n-1)F_n(\eta_0)} +\frac{\eta_0F_1(\eta_0)^2F_{nn}(\eta_0)}{(n-1)F_n(\eta_0)^2}+\frac{2F_1(\eta_0)}{n-1}.
\end{equation*}
Hence
\begin{equation}\label{D11Gbar0vv}
D_{11}^2\bar{G}(0,\eta_0)[\hat{v},\hat{v}]=-\frac{\eta_0^{3}A^2}{2(n-1)}\left(\mathscr{F}_1+\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right)\cos\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right).
\end{equation}
From the formula for $\tilde{v}^*$ in (\ref{defvtils}) we easily see that $\tilde{v}^*\left[D_{11}^2\bar{G}(0,\eta_0)[\hat{v},\hat{v}]\right]=0$ and hence from (\ref{Deta1}) we get the first result.
We now turn our attention to the second derivative of $\eta(s)$. We see from (\ref{D1Gbar02}) and (\ref{D11Gbar0vv}) that the solution to (\ref{defw}) is of the form $\tilde{w}=C\cos\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)$ and in fact
\begin{equation*}
C=-\frac{\eta_0A^2}{6F_1(\eta_0)}\left(\mathscr{F}_1+\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right).
\end{equation*}
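To see this, note from (\ref{D1Gbar02}) that $D_1\bar{G}(0,\eta_0)$ acts on $\cos\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)$ as multiplication by $-\frac{4\eta_0^2F_1(\eta_0)}{n-1}+\frac{\eta_0^2F_1(\eta_0)}{n-1}=-\frac{3\eta_0^2F_1(\eta_0)}{n-1}$, so comparing coefficients in (\ref{defw}) with (\ref{D11Gbar0vv}) gives the stated value of $C$.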
Now if we define:
\begin{equation*}
\mathscr{F}_2=\frac{\eta_0F_{1}'(\eta_0)}{n-1} -\frac{5\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{(n-1)F_n(\eta_0)} +\frac{4\eta_0F_1(\eta_0)^2F_{nn}(\eta_0)}{(n-1)F_n(\eta_0)^2}+\frac{2F_1(\eta_0)}{n-1},
\end{equation*}
we can use (\ref{DDF0}) to obtain
\begin{align}
\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\tilde{w}]=& \frac{\eta_0^{3}AC}{n-1}\left(\mathscr{F}_2\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\cos\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.0cm}\left. -\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}\sin\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\sin\left(2\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right)\nonumber\\
=&\frac{\eta_0^{3}AC}{2(n-1)}\left(\left(\mathscr{F}_2-\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}\right)\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.3cm}\left. +\left(\mathscr{F}_2+\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}\right)\cos\left(3\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right).\nonumber
\end{align}
Thus $\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\tilde{w}]\,dz=0$, $D_{11}^2\bar{G}(0,\eta_0)[\hat{v},\tilde{w}]=-\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\tilde{w}]$ and
\begin{equation}\label{vD11Gbar0vw}
\tilde{v}^*\left[D_{11}^2\bar{G}(0,\eta_0)[\hat{v},\tilde{w}]\right]=\frac{\eta_0^{4}A^3}{12(n-1)F_1(\eta_0)B}\left(\mathscr{F}_1+\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right)\left(\mathscr{F}_2-\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}\right).
\end{equation}
From (\ref{D11Gbar}) it is easily seen that
\begin{align*}
D_{111}^3\bar{G}(0,\eta_0)[\hat{v},\hat{v},\hat{v}]=&P_0\left[D^3G\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]+3D^2G\left(\frac{n-1}{\eta_0}\right)\left[\hat{v},D_{11}^2\psi(0,\eta_0)[\hat{v},\hat{v}]\right]\right.\\
&\hspace{0.4cm}\left.+DG\left(\frac{n-1}{\eta_0}\right)\left[D_{111}^3\psi(0,\eta_0)[\hat{v},\hat{v},\hat{v}]\right]\right]\\
=&P_0\left[D^3G\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]+3D_{11}^2\psi(0,\eta_0)[\hat{v},\hat{v}]D^2G\left(\frac{n-1}{\eta_0}\right)\left[\hat{v},1\right]\right],
\end{align*}
where we have used that both $D_{11}^2\psi(0,\eta_0)[\hat{v},\hat{v}]$ and $D_{111}^3\psi(0,\eta_0)[\hat{v},\hat{v},\hat{v}]$ are constant functions, see the comment after (\ref{DDpsi0}), and also that $DG\left(\frac{n-1}{\eta_0}\right)\left[1\right]=0$. From (\ref{DDF0}) we see
\begin{align}
\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},1]=&\frac{\eta_0^{3}A\left(\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}+2F_1(\eta_0)\right)}{(n-1)^2}\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right),\nonumber
\end{align}
hence
\begin{align}
P_0\left[D^2G\left(\frac{n-1}{\eta_0}\right)[\hat{v},1]\right]=&\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},1]\,dz -\left.D^2\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},1]\nonumber\\
=&-\frac{\eta_0^{3}A\left(\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}+2F_1(\eta_0)\right)}{(n-1)^2}\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right).\nonumber
\end{align}
From (\ref{DDpsi0}) we also see that
\begin{align}
D_{11}^2\psi(0,\eta_0)[\hat{v},\hat{v}]=&\frac{\eta_0^2A^2}{(n-1)\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)}\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left(\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)-\frac{F_1(\eta_0)}{F_n(\eta_0)}\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\right)\cos^2\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\,dz\nonumber\\
&-\eta_0A^2\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\cos^2\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\,dz\nonumber\\
=&\frac{\eta_0^2A^2}{2(n-1)\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)}\left(\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)-\frac{F_1(\eta_0)}{F_n(\eta_0)}\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\right) -\frac{\eta_0A^2}{2}.\nonumber
\end{align}
Lastly, the third linearisation of $G$ can be calculated from (\ref{DDG}):
\begin{align}\label{DDDG0vvv}
D^3G\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]=&\int_{\mathscr{S}_{\frac{d}{\pi}}^1}D^3W\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]\,dz - \left.D^3\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v},\hat{v}],
\end{align}
where we have used (\ref{D1Gbar0}) and the definition of $\hat{v}$ as being in the null space of $D_1\bar{G}(0,\eta_0)$.
We calculate $\left.D^3\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v},\hat{v}]$ using Remark \ref{FderivRem} along with (\ref{Dkap0}), (\ref{DDkap0}) and
\begin{equation*}
D^3\kappa_1\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]=\frac{-6\eta_0^4}{(n-1)^4}\hat{v}^3+\frac{3\eta_0^2}{(n-1)^2}\hat{v}\hat{v}'^2,\ \ D^3\kappa_n\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]=9\hat{v}''\hat{v}'^2,
\end{equation*}
to obtain
\begin{align}
\left.D^3\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v},\hat{v}] =&-\frac{\eta_0^{6}F_{1}''(\eta_0)}{(n-1)^{3}}\hat{v}^3 -\frac{3\eta_0^{4}F_{n}''(\eta_0)}{(n-1)^{2}}\hat{v}^2\hat{v}'' -\frac{3\eta_0^{2}F_{nn}'(\eta_0)}{n-1}\hat{v}\hat{v}''^2 -F_{nnn}(\eta_0)\hat{v}''^3\nonumber\\
& -\frac{3\eta_0^{3}F_{1}'(\eta_0)}{n-1}\left(\frac{2\eta_0^2}{(n-1)^2}\hat{v}^2-\hat{v}'^2\right)\hat{v} -3\eta_0F_{n}'(\eta_0)\left(\frac{2\eta_0^2}{(n-1)^2}\hat{v}^2-\hat{v}'^2\right)\hat{v}''\nonumber\\
& +\frac{3\eta_0^{2}F_1(\eta_0)}{n-1}\left(\hat{v}\hat{v}'^2-\frac{2\eta_0^2}{(n-1)^2}\hat{v}^3\right) +9F_n(\eta_0)\hat{v}''\hat{v}'^2\nonumber\\
=&-\frac{\eta_0^{4}A^3}{(n-1)^2}\left(\mathscr{F}_3\cos^3\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.7cm} \left.+3\mathscr{F}_4\sin^2\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right)\nonumber\\
=&-\frac{\eta_0^{4}A^3}{4(n-1)^{2}}\left(3\left(\mathscr{F}_3+\mathscr{F}_4\right)\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.9cm} \left.+\left(\mathscr{F}_3-3\mathscr{F}_4\right)\cos\left(3\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right),\nonumber
\end{align}
where
\begin{align*}
\mathscr{F}_3:=&\frac{\eta_0^2F_{1}''(\eta_0)}{n-1} -\frac{3\eta_0^2F_1(\eta_0)F_{n}''(\eta_0)}{(n-1)F_n(\eta_0)} +\frac{3\eta_0^2F_1(\eta_0)^2F_{nn}'(\eta_0)}{(n-1)F_n(\eta_0)^2} -\frac{\eta_0^2F_1(\eta_0)^3F_{nnn}(\eta_0)}{(n-1)F_n(\eta_0)^3}\\
& +\frac{6\eta_0F_{1}'(\eta_0)}{n-1} -\frac{6\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{(n-1)F_n(\eta_0)} +\frac{6F_1(\eta_0)}{n-1}
\end{align*}
and
\begin{equation*}
\mathscr{F}_4:=\frac{-\eta_0F_1(\eta_0)F_{1}'(\eta_0)}{F_n(\eta_0)} +\frac{\eta_0F_1(\eta_0)^2F_{n}'(\eta_0)}{F_n(\eta_0)^2} +\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}.
\end{equation*}
Combining this with (\ref{DDDG0vvv}) we conclude
\begin{align}
P_0\left[D^3G\left(\frac{n-1}{\eta_0}\right)[\hat{v},\hat{v},\hat{v}]\right]=&\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}\left.D^3\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v},\hat{v}] \,dz -\left.D^3\left(F\left(\kapsurf{u}\right)\right)\right|_{u=\frac{n-1}{\eta_0}}[\hat{v},\hat{v},\hat{v}] \nonumber\\
=&\frac{\eta_0^{4}A^3}{4(n-1)^{2}}\left(3\left(\mathscr{F}_3+\mathscr{F}_4\right)\cos\left(\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right.\nonumber\\
&\hspace{1.7cm} \left.+\left(\mathscr{F}_3-3\mathscr{F}_4\right)\cos\left(3\eta_0\sqrt{\frac{F_1(\eta_0)}{(n-1)F_n(\eta_0)}}z\right)\right),\nonumber
\end{align}
and
\begin{align}
\tilde{v}^*\left[D_{111}^3\bar{G}(0,\eta_0)[\hat{v},\hat{v},\hat{v}]\right]=&\frac{3\eta_0^4A^3}{4(n-1)^2B}\left(\mathscr{F}_3+\mathscr{F}_4 +2\eta_0F_{1}'(\eta_0) -\frac{2\eta_0F_1(\eta_0)F_{1}'(\eta_0)}{F_n(\eta_0)} +4F_1(\eta_0)\right.\nonumber\\
&\hspace{1.7cm}\left.-\frac{2\eta_0\left(\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)-\frac{F_1(\eta_0)}{F_n(\eta_0)}\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\right)}{(n-1)\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)}\left(\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}+2F_1(\eta_0)\right)\right).\nonumber
\end{align}
Combining this with (\ref{D12Gbar0}) and (\ref{vD11Gbar0vw}) into equation (\ref{D2eta2}) gives
\begin{align}
\eta''(0)=& \frac{-\eta_0^3A^2}{12F_1(\eta_0)\left(2F_1(\eta_0)+\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}\right)}\left(\frac{3F_1(\eta_0)}{n-1}\left(\mathscr{F}_3+\mathscr{F}_4 +2\eta_0F_{1}'(\eta_0) -\frac{2\eta_0F_1(\eta_0)F_{1}'(\eta_0)}{F_n(\eta_0)} +4F_1(\eta_0)\right)\right.\\
&\hspace{4cm}\left.-\frac{6\eta_0F_1(\eta_0)\left(\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)-\frac{F_1(\eta_0)}{F_n(\eta_0)}\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{\frac{n-1}{\eta_0}}\right)\right)}{(n-1)^2\Xi\left(\kapsurf{\frac{n-1}{\eta_0}}\right)}\left(\eta_0F_{1}'(\eta_0)-\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}+2F_1(\eta_0)\right) \right.\nonumber\\
&\hspace{4cm} \left.+\left(\mathscr{F}_1+\frac{F_1(\eta_0)^2}{F_n(\eta_0)}\right)\left(\mathscr{F}_2-\frac{2F_1(\eta_0)^2}{F_n(\eta_0)}\right)\right).\nonumber
\end{align}
The formula (\ref{D2eta}) then follows by expanding and using that, due to axial symmetry, (\ref{EleDef}) can be rewritten as $E_b(\kapsurf{u})=\sum_{a=0}^nc_a\left(\binom{n-1}{a}\kappa_1^a+\binom{n-1}{a-1}\kappa_1^{a-1}\kappa_n\right)$, with the derivatives given by $\frac{\partial\Xi}{\partial\kappa_1}\left(\kapsurf{u}\right)=\sum_{a=1}^nc_a\left(\binom{n-2}{a-1}\kappa_1(u)^{a-1}+\binom{n-2}{a-2}\kappa_1(u)^{a-2}\kappa_n(u)\right)$ and $\frac{\partial\Xi}{\partial\kappa_n}\left(\kapsurf{u}\right)=\sum_{a=1}^nc_a\binom{n-1}{a-1}\kappa_1(u)^{a-1}$.
\end{proof}
\begin{remark}
In the case where $F$ is homogeneous we found a sequence of bifurcation points, $(0,\eta_m)$. In this case, by setting $\eta_0=\eta_m$, the same analysis shows that equations (\ref{Deta}) and (\ref{D2eta}) remain the correct formulas. However, the analysis is less relevant here since there is already a strictly positive eigenvalue for $D_1\bar{G}(0,\eta_m)$.
\end{remark}
We are now able to give a stability theorem for the full flow.
\begin{corollary}\label{GenStable}
Let \descref{(A1)}-\descref{(A5)} be satisfied with $\tilde{R}=R_{crit}$. If
\begin{equation}
\mathscr{F} -\frac{6\sum_{a=1}^n\frac{c_a\eta_0^{a}}{(n-1)^{a}}\left(\binom{n-2}{a-1}-\frac{F_1(\eta_0)}{F_n(\eta_0)}\binom{n-1}{a-1}\right)}{(n-1)\sum_{a=0}^n\frac{c_a\eta_0^a}{(n-1)^a}\binom{n-1}{a}}\left(2F_1(\eta_0)+\eta_0F_{1}'(\eta_0) -\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}\right)>0,
\end{equation}
then the stationary solutions to (\ref{WVPCF}) that are close to the cylinder of radius $R_{crit}$ are unstable equilibria.
Alternatively if
\begin{equation}\label{FCond}
\mathscr{F} -\frac{6\sum_{a=1}^n\frac{c_a\eta_0^{a}}{(n-1)^{a}}\left(\binom{n-2}{a-1}-\frac{F_1(\eta_0)}{F_n(\eta_0)}\binom{n-1}{a-1}\right)}{(n-1)\sum_{a=0}^n\frac{c_a\eta_0^a}{(n-1)^a}\binom{n-1}{a}}\left(2F_1(\eta_0)+\eta_0F_{1}'(\eta_0) -\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}\right)<0,
\end{equation}
then the stationary solutions to (\ref{WVPCF}) that are close to the cylinder of radius $R_{crit}$ are stable under axially symmetric, weighted-volume preserving perturbations. That is, there exists $\epsilon>0$ such that for any $s\in(0,\epsilon)$ there exists a neighbourhood, $Z_s\subset h_{\frac{d}{dz}}^{2,\alpha}\left([0,d]\right)$, of $\tilde{\rho}(s)$, such that for any $\rho_0\in Z_s$ with $WVol(\rho_0)=WVol(\tilde{\rho}(s))$, the flow (\ref{WVPCFGraph}), with orthogonal boundary condition, exists for all time and the solution $\rho(t)$ converges exponentially fast to $\tilde{\rho}(s)$ as $t\rightarrow\infty$.
\end{corollary}
\begin{proof}
We start by again noting that the eigenvalues of $D_1\bar{G}(0,\eta_0)$, except for the dominant one, lie in the open complex half-plane $\operatorname{Re}\left(\lambda\right)<0$. Through a perturbation argument this is also true for the operator $D_1\bar{G}(\tilde{u}(s),\eta(s))$ as long as $s$ is small. We now determine the sign of the dominant eigenvalue of $D_1\bar{G}(\tilde{u}(s),\eta(s))$ for small $s$. By Proposition I.7.2 in \cite{Kielhofer12}, there exists $\epsilon\in(0,\delta)$ and a continuously differentiable curve:
\begin{equation*}
\{\lambda(s):|s|<\epsilon,\ \lambda(0)=0\}\subset\mathbb{R},
\end{equation*}
such that
\begin{equation*
D_1\bar{G}(\tilde{u}(s),\eta(s))[\hat{v}+v(s)]=\lambda(s)(\hat{v}+v(s)),
\end{equation*}
where $v(s)$, for $|s|<\epsilon$, is a continuously differentiable curve in the range of $D_1\bar{G}(\tilde{u}(s),\eta(s))$ satisfying $v(0)=0$. Also, since $\eta'(0)=0$, we can use equations (I.7.34), (I.7.40) and (I.7.45) in \cite{Kielhofer12} to conclude that $\frac{d\lambda}{ds}(0)=0$ and
\begin{align}\label{D2lambda}
\frac{d^2\lambda}{ds^2}(0)=&-2\tilde{v}^*\left[D_{12}^2\bar{G}(0,\eta_0)\left[\hat{v}\right]\right]\eta''(0)\nonumber\\
=&\frac{\eta_0^4A^3}{6(n-1)B}\left(\mathscr{F} -\frac{6\sum_{a=1}^n\frac{c_a\eta_0^{a}}{(n-1)^{a}}\left(\binom{n-2}{a-1}-\frac{F_1(\eta_0)}{F_n(\eta_0)}\binom{n-1}{a-1}\right)}{(n-1)\sum_{a=0}^n\frac{c_a\eta_0^a}{(n-1)^a}\binom{n-1}{a}}\left(2F_1(\eta_0)+\eta_0F_{1}'(\eta_0) -\frac{\eta_0F_1(\eta_0)F_{n}'(\eta_0)}{F_n(\eta_0)}\right)\right).
\end{align}
In the first case we see from equation (\ref{D2lambda}) that $\lambda(0)=0$ is a local minimum of $\lambda(s)$ and hence, possibly making $\epsilon$ smaller, the eigenvalue $\lambda(s)$ is positive for $0<|s|<\epsilon$. We also note that $D_1\bar{G}(0,\eta_0)$ is the negative of an elliptic operator, so by Theorem 3.2.6 in \cite{HartleyPhD} it is a sectorial operator on the little-H\"older spaces. The perturbation result in Proposition 2.4.2 in \cite{Lunardi95} then ensures that $D_1\bar{G}(\tilde{u}(s),\eta(s))$ is sectorial on the little-H\"older spaces for all $|s|<\epsilon$ (again possibly making $\epsilon$ smaller).
From (\ref{lHInterp}) we know that the little-H\"older spaces are interpolation spaces and we can apply Theorem 9.1.7 in \cite{Lunardi95} to obtain a nontrivial backward solution, $\bar{u}(t)$, of (\ref{RedWVPCF}) with $\eta=\eta(s)$ such that:
\begin{equation*}
\left\|\bar{u}(t)-\tilde{u}(s)\right\|_{h^{2,\alpha}}\leq Ce^{\omega t},\ t\leq0,
\end{equation*}
where $C,\omega>0$. By setting $\rho(t):=\psi\left(\bar{u}(t),\eta(s)\right)|_{[0,d]}$ we obtain a nontrivial backward solution to (\ref{WVPCFGraph}) such that
\begin{align*}
\left\|\rho(t)-\tilde{\rho}(s)\right\|_{h^{2,\alpha}}=&\left\|\psi\left(\bar{u}(t),\eta(s)\right)|_{[0,d]}-\psi\left(\tilde{u}(s),\eta(s)\right)|_{[0,d]}\right\|_{h^{2,\alpha}}\\
\leq&\left\|\psi\left(\bar{u}(t),\eta(s)\right)-\psi\left(\tilde{u}(s),\eta(s)\right)\right\|_{h^{2,\alpha}}\\
\leq& C\left\|\bar{u}(t)-\tilde{u}(s)\right\|_{h^{2,\alpha}}\\
\leq &Ce^{\omega t},\ t\leq0,
\end{align*}
where we have used that $\psi$ is Lipschitz. Thus we have instability of the stationary solution.
However, in the second case we see that $\lambda(0)=0$ is a local maximum of $\lambda(s)$ and hence the eigenvalue $\lambda(s)$ is negative for $0<|s|<\epsilon$. We can therefore prove stability of the hypersurface defined by $\tilde{\rho}(s)$ by applying Theorem 9.1.7 in \cite{Lunardi95}. There exist $C,r,\omega>0$ such that if $\|\bar{u}_0-\tilde{u}(s)\|_{h^{2,\alpha}}<r$ then the solution, $\bar{u}(t)$, of (\ref{RedWVPCF}) with $\eta=\eta(s)$ and initial condition $\bar{u}_0$ is defined for all $t\geq0$ and satisfies
\begin{equation}\label{Fordecay}
\left\|\bar{u}(t)-\tilde{u}(s)\right\|_{h^{2,\alpha}}+\left\|\bar{u}'(t)\right\|_{h^{0,\alpha}}\leq Ce^{-\omega t}\left\|\bar{u}_0-\tilde{u}(s)\right\|_{h^{2,\alpha}},\ t\geq0.
\end{equation}
Now consider $\rho_0$ such that $\left\|\rho_0-\tilde{\rho}(s)\right\|_{h^{2,\alpha}}<\frac{r}{4}$ and $WVol(\rho_0)=WVol(\tilde{\rho}(s))$. Then we have
\begin{align*}
\left\|P_0[u_{\rho_0}]-\tilde{u}(s)\right\|_{h^{2,\alpha}}=&\left\|P_0\left[u_{\rho_0}-\psi\left(\tilde{u}(s),\eta(s)\right)\right]\right\|_{h^{2,\alpha}}\\
\leq&2\left\|u_{\rho_0}-\psi\left(\tilde{u}(s),\eta(s)\right)\right\|_{h^{2,\alpha}}\\
\leq&4\left\|\rho_0-\psi\left(\tilde{u}(s),\eta(s)\right)|_{[0,d]}\right\|_{h^{2,\alpha}}\\
<&r.
\end{align*}
So, by the above calculations, there is a solution of (\ref{RedWVPCF}) with $\eta=\eta(s)$ and $\bar{u}(0)=P_0[u_{\rho_0}]$, $\bar{u}(t)$, that satisfies (\ref{Fordecay}). By setting $\rho(t)=\psi\left(\bar{u}(t),\eta(s)\right)|_{[0,d]}$ we obtain a solution to (\ref{WVPCFGraph}) with $\rho(0)=\psi\left(P_0[u_{\rho_0}],\eta(s)\right)|_{[0,d]}=u_{\rho_0}|_{[0,d]}=\rho_0$ such that
\begin{align*}
\left\|\rho(t)-\tilde{\rho}(s)\right\|_{h^{2,\alpha}}=&\left\|\psi\left(\bar{u}(t),\eta(s)\right)|_{[0,d]}-\psi\left(\tilde{u}(s),\eta(s)\right)|_{[0,d]}\right\|_{h^{2,\alpha}}\\
\leq&\left\|\psi\left(\bar{u}(t),\eta(s)\right)-\psi\left(\tilde{u}(s),\eta(s)\right)\right\|_{h^{2,\alpha}}\\
\leq& C\left\|\bar{u}(t)-\tilde{u}(s)\right\|_{h^{2,\alpha}}.
\end{align*}
Therefore from (\ref{Fordecay}):
\begin{align*}
\left\|\rho(t)-\tilde{\rho}(s)\right\|_{h^{2,\alpha}}\leq& Ce^{-\omega t}\left\|P_0[u_{\rho_0}]-\tilde{u}(s)\right\|_{h^{2,\alpha}},\ t\geq0.
\end{align*}
Thus the hypersurface defined by $\tilde{\rho}(s)$ is a stable stationary solution of (\ref{WVPCF}) under axially symmetric, weighted-volume preserving perturbations.
\end{proof}
\section{Mixed-Volume Preserving Mean Curvature Flow}\label{SecMVPMCF}
In this section we consider the specific case of the mixed-volume preserving mean curvature flow (including the classical volume preserving mean curvature flow, VPMCF). In this case we have $F\left(\bm{\kappa}\right)=\sum_{a=1}^n\kappa_a$ and $\Xi\left(\bm{\kappa}\right)=E_b\left(\bm{\kappa}\right)$ (i.e. $c_a=1$ for $a=b$ and $c_a=0$ otherwise). Therefore $k=1$, $F_1=F_n=1$, and $F_{nn}=F_{nnn}=0$. Note that for assumption \descref{(A3)} to be satisfied we must exclude the case $b=n$, but in the other cases, $b=1,\ldots,n-1$, it is satisfied for any $\tilde{R}\in\mathbb{R}^+$. The stationary solutions to the flow are CMC hypersurfaces and the family of (mostly) non-cylindrical stationary solutions found in Corollary \ref{StatSol} represents the unduloids, with the cylindrical element of the family having radius $\frac{d\sqrt{n-1}}{\pi}$. In this case condition (\ref{HomogCond}) reduces to
\begin{equation}\label{FCondMV}
\frac{-\left(n^3-(b+10)n^2+2(5b-1)n-2b(3b-4)\right)}{(n-b)}<0.
\end{equation}
Further cancellation occurs in the volume-preserving ($b=0$) and surface-area-preserving ($b=1$) cases. In both situations the condition reduces to $n^2-10n-2>0$. For $b\geq2$ the cubic $n^3-(b+10)n^2+2(5b-1)n-2b(3b-4)$ has a single real root, which is also positive, and two complex roots. We now define this root: for $b=2,\ldots,n-1$ the real root of $n^3-(b+10)n^2+2(5b-1)n-2b(3b-4)$ is
\begin{align}
\gamma(b):=&\frac{1}{3}\left(b+10+\frac{b^2-10b+106}{\left(b^3+66b^2-249b+1090+9\sqrt{2b^5+40b^4-288b^3+1733b^2-2540b-36}\right)^{\frac{1}{3}}}\right.\nonumber\\
&\hspace{0.4cm}\left.+\left(b^3+66b^2-249b+1090+9\sqrt{2b^5+40b^4-288b^3+1733b^2-2540b-36}\right)^{\frac{1}{3}}\right)\nonumber
\end{align}
\begin{corollary}\label{MVStable}
The unduloids are stable, with respect to the $(n+1-b)$\textsuperscript{th} mixed-volume preserving mean curvature flow, under $(n+1-b)$\textsuperscript{th} mixed-volume preserving, axially symmetric perturbations in the following cases:
\begin{itemize}
\item For $b=0,\ldots,3$ and $n\geq11$
\item For $b=4,5$ and $n\geq12$
\item For $b=6,7,8$ and $n\geq b+7$
\item For $b\geq9$ and $n\geq b+6$.
\end{itemize}
Otherwise they are unstable.
\end{corollary}
\begin{proof}
This follows from Corollary \ref{GenStable}; we just check that (\ref{FCondMV}) is satisfied. Note that, from its structure, it is clear that the left hand side of (\ref{FCondMV}) will be negative for $n$ large enough. If $b=0,1$ then we have equality in (\ref{FCondMV}) when $n=5\pm3\sqrt{3}$, only one of which is positive, lying between $10$ and $11$. For the cases $b\geq2$ the condition is satisfied for $n>\gamma(b)$, which we evaluate for $b=2,\ldots,8$; we note that for $b\geq9$ we have $b+5<\gamma(b)<b+6$.
\end{proof}
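The case analysis above can be double-checked numerically. The following Python sketch (our own illustrative check, not part of the proof) evaluates the cubic $p(n)=n^3-(b+10)n^2+2(5b-1)n-2b(3b-4)$, whose positivity is equivalent to (\ref{FCondMV}) since $n>b$, and reports the smallest stable dimension for each $b$.
\begin{verbatim}
# Sanity check of the stability thresholds: stability holds iff
# p(n) = n^3 - (b+10)n^2 + 2(5b-1)n - 2b(3b-4) > 0   (recall n - b > 0).
def p(n, b):
    return n**3 - (b + 10) * n**2 + 2 * (5 * b - 1) * n - 2 * b * (3 * b - 4)

for b in range(13):
    n_min = next(n for n in range(max(b + 1, 2), 100) if p(n, b) > 0)
    print(f"b = {b:2d}: stable for n >= {n_min}")
\end{verbatim}
The printed thresholds ($n\geq11$ for $b=0,\ldots,3$, $n\geq12$ for $b=4,5$, and so on) agree with the list in Corollary \ref{MVStable}.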
\section{Geometric Construction}\label{SecGeoCons}
In this section we consider an alternative method for constructing the bifurcation curves of stationary solutions to the mixed-volume preserving mean curvature flow. We will use a representation of the axially symmetric CMC hypersurfaces to calculate the mixed-volume of such hypersurfaces and hence explicitly give a formula for $\eta(s)$.
The $n$-dimensional axially symmetric CMC hypersurfaces were studied in \cite{Hsiang81}, where the profile curve, $\rho(z)$, was shown to satisfy:
\begin{equation*}
z=\int_{\rho(0)}^{\rho} \frac{1}{\sqrt{\left(\frac{x^{n-1}}{C+\frac{H}{n}x^n}\right)^2-1}}\,dx,
\end{equation*}
where $C$ is a constant and $H$ is the mean curvature of the hypersurface. We note that for this representation the cylinders can only be treated through limits. Similarly, we can only treat the unduloids with half a period, i.e. when $m=1$. However, the formulas proved here can easily be extended to any number of periods.
To obtain the hypersurfaces that satisfy the orthogonal boundary conditions we set $\left.\frac{d\rho}{dz}\right|_{z=0}=\left.\frac{d\rho}{dz}\right|_{z=d}=0$ and we will also define $s:=\frac{\rho(d)-\rho(0)}{\rho(d)+\rho(0)}$. This leads to the formulas:
\begin{equation*}
z=\rho_{0,s}\int_{1}^{\frac{\rho_s}{\rho_{0,s}}} \frac{1}{\sqrt{\left(\frac{\bar{x}^{n-1}\left((1+s)^{n}-(1-s)^{n}\right)}{2s(1+s)^{n-1}+\left((1+s)^{n-1}-(1-s)^{n-1}\right)(1-s)\bar{x}^n}\right)^2-1}}\,d\bar{x},
\end{equation*}
where we have used the change of variables $x=\rho_s(0)\bar{x}$ and have set $\rho_{0,s}:=\rho_s(0)$, which is obtained by evaluating at $z=d$ and using $\frac{\rho_s(d)}{\rho_s(0)}=\frac{1+s}{1-s}$:
\begin{equation*}
\rho_{0,s}=d\left(\int_{1}^{\frac{1+s}{1-s}} \frac{1}{\sqrt{\left(\frac{\bar{x}^{n-1}\left((1+s)^{n}-(1-s)^{n}\right)}{2s(1+s)^{n-1}+\left((1+s)^{n-1}-(1-s)^{n-1}\right)(1-s)\bar{x}^n}\right)^2-1}}\,d\bar{x}\right)^{-1}.
\end{equation*}
The mean curvature of the hypersurface is also easily obtained:
\begin{equation*}
H=\left(\frac{(1+s)^{n-1}-(1-s)^{n-1}}{(1+s)^n-(1-s)^n}\right)\frac{n(1-s)}{\rho_{0,s}}.
\end{equation*}
\begin{lemma}\label{BifCurvForm}
The bifurcation curve $\eta(s)$ is given by the formula
\begin{equation*}
\eta(s)=\frac{n-1}{d}\left(\frac{\left(\int_1^{\frac{1+s}{1-s}}g_s(x)\,dx\right)^{n+1}}{\int_1^{\frac{1+s}{1-s}}x^ng_s(x)\,dx}\right)^{\frac{1}{n}},
\end{equation*}
for the VPMCF and by
\begin{equation*}
\eta(s)=\frac{n-1}{d}\left(\frac{\left(\int_1^{\frac{1+s}{1-s}}g_s(x)\,dx\right)^{n+1-b}}{\int_1^{\frac{1+s}{1-s}}\frac{x^{n-b}g_s(x)}{\left(1+g_s(x)^{-2}\right)^{\frac{b-2}{2}}}+\frac{(b-1)x^{n+1-b}g_s'(x)}{(n+1-b)g_s(x)^2\left(1+g_s(x)^{-2}\right)^{\frac{b}{2}}}\,dx}\right)^{\frac{1}{n-b}},
\end{equation*}
for the $(n+1-b)$\textsuperscript{th} mixed-volume preserving mean curvature flow, where
\begin{equation*}
g_s(x)=\frac{1}{\sqrt{\left(\frac{x^{n-1}\left((1+s)^n-(1-s)^n\right)}{2s(1+s)^{n-1}+\left((1+s)^{n-1}-(1-s)^{n-1}\right)(1-s)x^n}\right)^2-1}}.
\end{equation*}
\end{lemma}
\begin{proof}
We first note that for the mixed-volume preserving flows $\tilde{Q}(\eta)=\binom{n}{b}\left(\frac{n-1}{\eta}\right)^{n-b}$, so that
\begin{equation}\label{QtilInv}
\tilde{Q}^{-1}(x)=(n-1)\left(\frac{\binom{n}{b}}{x}\right)^{\frac{1}{n-b}}.
\end{equation}
Now we calculate $\fint Q(u)\,dz$ for the unduloids:
\begin{equation*}
\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q\left(u_{\rho_{s}}\right)\,dz=\fint_0^dQ\left(\rho_{s}\right)\,dz=\left\{\begin{array}{ll} \frac{1}{d}\int_0^d \rho_{s}(z)^n\,dz & b=0\\ \frac{n}{db}\int_0^d\frac{\binom{n-1}{b-1}\rho_{s}(z)^{n-b}}{\left(1+\rho_{s}'(z)^2\right)^{\frac{b-2}{2}}} -\frac{\binom{n-1}{b-2}\rho_{s}(z)^{n+1-b}\rho_{s}''(z)}{\left(1+\rho_{s}'(z)^2\right)^{\frac{b}{2}}}\,dz & b\neq0.\end{array}\right.
\end{equation*}
Using the substitution $z(x)=\rho_{0,s}\int_1^xg_s(y)\,dy$, we have that $\rho_{s}\left(z(x)\right)=\rho_{0,s}x$, $\frac{dz}{dx}=\rho_{0,s}g_s(x)$ and
\begin{equation*}
\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q\left(u_{\rho_{s}}\right)\,dz=\left\{\begin{array}{ll} \frac{\rho_{0,s}^{n+1}}{d}\int_1^{\frac{1+s}{1-s}}x^ng_s(x)\,dx & b=0\\ \frac{\binom{n}{b}\rho_{0,s}^{n+1-b}}{d}\int_1^{\frac{1+s}{1-s}}\frac{x^{n-b}}{\left(1+g_s(x)^{-2}\right)^{\frac{b-2}{2}}} +\frac{(b-1)x^{n+1-b}g_s'(x)}{(n+1-b)\left(1+g_s(x)^{-2}\right)^{\frac{b}{2}}g_s(x)^3}\,dx & b\neq0.\end{array}\right.
\end{equation*}
From the second point of Lemma \ref{defpsi} $\fint_{\mathscr{S}_{\frac{d}{\pi}}^1}Q\left(u_{\rho_{s}}\right)\,dz=\tilde{Q}(\eta_1(s))$, so by using (\ref{QtilInv}) and the fact that $\rho_{0,s}=d\left(\int_1^{\frac{1+s}{1-s}}g_s(x)\,dx\right)^{-1}$ we obtain the result.
\end{proof}
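To illustrate how the curves plotted below may be produced, the following Python sketch evaluates the VPMCF formula of Lemma \ref{BifCurvForm} by adaptive quadrature. The endpoint singularities of $g_s$ (where $\rho_s'=0$) are integrable, so an adaptive rule copes with them, although a substitution near the endpoints would improve accuracy. We normalise by the cylinder value $\eta(0)=(n-1)/R_{crit}=\pi\sqrt{n-1}/d$; that this is precisely the normalisation $\bar{\eta}(s)$ used in the figures is our assumption.
\begin{verbatim}
# Numerical evaluation of eta(s) for the VPMCF (b = 0 case of the lemma).
import numpy as np
from scipy.integrate import quad

def eta(s, n, d=np.pi):
    a = (1 + s)**n - (1 - s)**n
    c = (1 + s)**(n - 1) - (1 - s)**(n - 1)
    def g(x):   # the integrand g_s of the lemma
        r = x**(n - 1) * a / (2*s*(1 + s)**(n - 1) + c*(1 - s)*x**n)
        return 1.0 / np.sqrt(r**2 - 1.0)
    top = (1 + s) / (1 - s)
    I0 = quad(g, 1.0, top)[0]                      # integral of g_s
    I1 = quad(lambda x: x**n * g(x), 1.0, top)[0]  # integral of x^n g_s
    return (n - 1) / d * (I0**(n + 1) / I1)**(1.0 / n)

n, d = 11, np.pi
eta_cyl = np.pi * np.sqrt(n - 1) / d   # cylinder value (n-1)/R_crit
for s in (0.05, 0.2, 0.5, 0.8):
    print(s, eta(s, n) / eta_cyl)
\end{verbatim}
Scanning $s$ over a fine grid and locating sign changes of the difference quotient of $\eta$ recovers the turning points discussed after the figures.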
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n02.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=2$}
\label{fig:Vol2}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n03.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=3$}
\label{fig:Vol3}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n04.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=4$}
\label{fig:Vol4}
\end{subfigure}
\\%newline
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n05.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=5$}
\label{fig:Vol5}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n06.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=6$}
\label{fig:Vol6}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n07.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=7$}
\label{fig:Vol7}
\end{subfigure}
\\%newline
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n08.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=8$}
\label{fig:Vol8}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n09.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=9$}
\label{fig:Vol9}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n10.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=10$}
\label{fig:Vol10}
\end{subfigure}
\\%newline
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n11.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=11$}
\label{fig:Vol11}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n12.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=12$}
\label{fig:Vol12}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Etabar_b00_n13.pdf}
\caption{$\bar{\eta}(s)$ in dimension $n=13$}
\label{fig:Vol13}
\end{subfigure}
\caption{Normalised bifurcation parameter in different dimensions}\label{fig:UnduloidEtaBar}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.82\textwidth]{Etabar_b00_n10_zoom.pdf}
\caption{Close up of $\bar{\eta}(s)$ in dimension $n=10$}
\label{fig:Vol10zoom}
\end{subfigure}%
~
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.82\textwidth]{Etabar_b00_n11_zoom.pdf}
\caption{Close up of $\bar{\eta}(s)$ in dimension $n=11$}
\label{fig:Vol11zoom}
\end{subfigure}
\caption{Turning point of the normalised bifurcation parameter for $n=10,11$}\label{fig:UnduloidEtaBarZoom}
\end{figure}
Figure \ref{fig:UnduloidEtaBar} shows these bifurcation curves for the case of the VPMCF and various values of $n$. These plots confirm that the bifurcation parameter (the enclosed volume) attains a local maximum (minimum) at the cylinder if $n\leq10$, while for $n\geq11$ it attains a local minimum (maximum) there; see Figure \ref{fig:UnduloidEtaBarZoom} for a close-up of the turning point in dimensions ten and eleven. Interesting phenomena are also apparent in dimensions eight and higher, where additional turning points appear. In dimension eight, a local maximum and a local minimum of the enclosed volume occur within the family of unduloids. In dimensions nine and ten the turning points separate from each other, and these points give the global maximum and minimum volume of the family. In dimensions eleven and higher only the local minimum of the volume occurs, and it remains a global minimum for the family. This behaviour is very intriguing and it would be of interest to know what is special about these unduloids.
\section{Introduction}
Let $b_t(x)$ be a vector-valued function from $\mathbb{R}^{d}$ into $\mathbb{R}^d$ and $\sigma_{t}(x)=[\sigma_{t,1}(x),\ldots,\sigma_{t,r}(x)]$ be a matrix-valued function from $\mathbb{R}^{d}$ into $\mathbb{R}^{d\times r}$, for some parameters $d,r\geq 1$. Both functions will be assumed to be differentiable.
Let $W_t$ be an $r$-dimensional Brownian motion and denote by $ {\cal W }_{s,t}$ the $\sigma$-field generated by the increments $(W_u-W_v)$ of the Brownian motion, with $u,v\in [s,t]$.
For any time horizon $s\geq 0$ we denote by
$X_{s,t}(x)$ the stochastic flow defined for any $t\in [s,\infty[$ and any starting point $X_{s,s}(x)=x\in \mathbb{R}^d$ by the stochastic differential equation
\begin{equation}\label{diff-def}
d X_{s,t}(x)=b_t\left(X_{s,t}(x)\right)~dt+\sigma_{t}\left(X_{s,t}(x)\right)~
dW_t
\end{equation}
We assume that $ x\mapsto b_t(x)$ and $ x\mapsto \sigma_t(x)$ have continuous and uniformly bounded derivatives up to the third order. This condition is clearly met for linear Gaussian models as well as for the geometric Brownian motion. This condition ensures that the stochastic flow $x\mapsto X_{s,t}(x)$ is a twice differentiable function of the initialisation $x$. In addition, all absolute moments of the flow, and those of its first and second order derivatives, exist for any time horizon.
As is well known, dynamical systems, and hence stochastic models, involving drift functions with quadratic growth require additional regularity conditions to ensure non-explosion of the solution in finite time.
It is also implicitly assumed that all functions $(b_t,\sigma_t)$ are smooth functions w.r.t. the time parameter. The present article develops several constructive and stochastic analysis tools including Bismut-Elworthy-Li formulae, stochastic semigroup perturbation formulae, extended two-sided stochastic integration, Malliavin calculus, and gradient and Hessian semigroup processes estimates. We are also looking for useful quantitative and time-uniform estimates which are valid under a single set of easily checked conditions that only depend on the parameters of the model. Various techniques and results presented in the article can be separately and readily extended to more general models, under weaker and more abstract assumptions tailored to the different quantities to be handled.
Let $\overline{X}_{s,t}(x)$ be the stochastic flow associated with a stochastic differential equation
defined as (\ref{diff-def}) by replacing $(b_t,\sigma_t)$ by some drift and diffusion functions $(\overline{b}_t,\overline{\sigma}_t)$ with the same regularity properties. Constant diffusion functions $(\sigma_t,\overline{\sigma}_t)$ are defined by
\begin{equation}\label{constant-sigma}
\sigma_t(x)=\Sigma_t\quad \mbox{\rm and}\quad \overline{\sigma}_t(x)=\overline{\Sigma}_t\quad \mbox{\rm for some matrices $\Sigma_t$ and $\overline{\Sigma}_t$.}\quad
\end{equation}
In this context, we will assume that $\Sigma_t$ and $\overline{\Sigma}_t$ are uniformly bounded w.r.t. the time horizon.
The Markov transition semigroups associated with the flows $X_{s,t}(x)$ and
$\overline{X}_{s,t}(x)$ are defined for any measurable function $f$
on $\mathbb{R}^d$ by the formula
$$
P_{s,t}(f)(x):=\mathbb{E}\left(f(X_{s,t}(x))\right)\quad \mbox{\rm and}\quad \overline{P}_{s,t}(f)(x):=\mathbb{E}\left(f(\overline{X}_{s,t}(x))\right)
$$
In this paper we derive equations for the differences $(X_{s,t}-\overline{X}_{s,t})$ and $(P_{s,t}-\overline{P}_{s,t})$
in terms of the difference of their corresponding drifts and diffusion functions,
\begin{equation}\label{diff-functions}
\Delta a_t:=a_t-\overline{a}_t\qquad
\Delta b_t:=b_t-\overline{b}_t \quad \mbox{\rm and}\quad \Delta \sigma_t=\sigma_t-\overline{\sigma}_t
\end{equation}
where $a_t(x):=\sigma_{t}(x)~\sigma_{t}(x)^{\prime}$ and
$\overline{a}_t(x):=\overline{\sigma}_t(x)\,\overline{\sigma}_t^{\prime}(x)$.
In some applications the functions $ \overline{b}_t = b_t - \Delta b_t$ and $ \overline{\sigma}_t = \sigma_t - \Delta \sigma_t$ can be interpreted as a local perturbation of the drift and the diffusion of the stochastic flow ${X}_{s,t}$.
We also address the problem of finding time-uniform estimates for the difference between the stochastic flows $X_{s,t}$ and $\overline{X}_{s,t}$ and their corresponding Markov transition kernels $P_{s,t}$ and $\overline{P}_{s,t}$.
These important questions arise in a variety of domains including stochastic perturbation theory as well as in the stability and the qualitative theory of stochastic systems. Classical analytic estimates on the difference between the stochastic flows driven by different drift and diffusion functions are often much too large for most diffusion processes of practical interest. In some instances none of the diffusion flows are stable. In this context, any local perturbation of the stochastic model propagates so that any global error estimate eventually tends to $\infty$ as the time horizon $t\rightarrow\infty$.
Whenever one of the stochastic flows is stable, classical perturbation bounds combining Lipschitz type inequalities
with the Gronwall lemma~\cite{bellman-2,gronwall} yield exceedingly pessimistic global estimates that grow exponentially fast w.r.t. the time horizon.
Notice that
an exponential type estimate of the form $e^{\lambda t}$, for some parameter $\lambda>0$ and some time horizon $t$ s.t. $\lambda\,t\geq 199$, would induce an error bound larger than the estimated number $10^{86}$ of elementary particles of matter in the visible universe. As mentioned in~\cite{iserles} in the context of Euler scheme type approximations of deterministic dynamical systems, one may encounter situations where $\lambda=10^8$ and $t=10^2$, and the resulting exponential bounds are clearly impractical from a numerical perspective. \\
The statement of the main results of the article are presented in section~\ref{sec-smr-intro}:
\begin{enumerate}
\item[i.] Section~\ref{sec-1-biw-intro} presents a novel generalized backward It\^o-Ventzell formula (cf. theorem~\ref{biv}). The It\^o-Ventzell formula is a very important formula, arguably as useful as It\^o's change of variable formula, but surprisingly the backward version presented in this work has never been studied before. Theorem~\ref{biv} can be seen as a new generalized backward version of the generalized It\^o-Ventzell formula presented in~\cite{ocone-pardoux}.
\item[ii.] In section~\ref{sec-sfi-intro} we apply the backward It\^o-Ventzell formula to derive a forward-backward stochastic perturbation formula that expresses the difference between the stochastic flows $X_{s,t}$ and $\overline{X}_{s,t}$ in terms of first and second order derivatives of the flows, which we call the tangent and Hessian processes respectively, with respect to the space parameter (cf. theorem~\ref{theo-al-gr}).
\item[iii.] Section~\ref{sec-sfi-intro} also provides a novel forward-backward It\^o type differential formula for interpolating stochastic diffusion flows (cf. the change of variable formula (\ref{ref-interp-du})).
\item[iv.] In the beginning of section~\ref{sec-sfi-intro} we present a discrete time approach based on the pivotal interpolating telescoping sum formula (\ref{ref-telescoping-sum}). This interpolating stochastic semigroup technique can be seen as an extension to stochastic flows of the stochastic perturbation analysis developed in~\cite{dm-2000,d-2004,dm-g-99,guionnet} and in~\cite{mp-var-18,mp-var-19,bishop-19} in the context of discrete time models, matrix and nonlinear interacting processes (see also~\cite{mp-dualtiy,mp-var-19}). For a more thorough discussion on these models, we refer to section~\ref{sec-comments}.
This approach allows us to derive a stochastic interpolation formula (\ref{Alekseev-grobner}) with a fluctuation term (\ref{def-S-sk}) defined by an extended two-sided stochastic integral.
\item[v.] Section~\ref{sec-uewrtt} presents some natural spectral conditions on the gradients of $b_t(x), \sigma_t(x), \overline{b}_t(x)$ and $\overline{\sigma}_t(x)$ that allow us to derive in a direct way a series of realistic uniform estimates
with respect to the time horizon.
\end{enumerate}
The rest of the article is organized as follows:
Section~\ref{var-sec} provides some basic tools for the first and second variational equations associated with a diffusion flow.
We also present some quantitative estimates of the tangent and the Hessian processes. For a more thorough discussion on stochastic flows and their differentiability properties we refer to~\cite{carverhill,kunita,norris}.
Section~\ref{proof-theo-al-gr} is mainly concerned with the forward-backward stochastic interpolation formula (\ref{Alekseev-grobner}) stated in theorem~\ref{theo-al-gr}.
Two approaches are presented: The first one discussed in
section~\ref{two-sided} is based on an extension of the two-sided stochastic calculus introduced by Pardoux and Protter in~\cite{pardoux-protter} to stochastic interpolation flows. The second one discussed in section~\ref{ref-rig} is based on the generalized backward It\^o-Ventzell formula. This section also discusses a multivariate Skorohod-Alekseev-Gr\"obner formula. Apart from more complex and sophisticated tensor notation, the quantitative stochastic analysis of these
multivariate formulae follows the same arguments as the ones used in the proof of theorem~\ref{theo-al-gr-2}. Thus, we have chosen to concentrate this introduction on stochastic flows.
Some extensions of the stochastic interpolation formula
(\ref{Alekseev-grobner})
are discussed in section~\ref{sec-extensions}.
Section~\ref{sk-section} is dedicated to the analysis of the Skorohod fluctuation process introduced in (\ref{def-S-sk}).
Section~\ref{lem-var-proof} is dedicated to the analysis of an extended version of two-sided stochastic integrals and a generalized backward It\^o-Ventzell formula.
Section~\ref{sec-illustrations} presents some illustrations of the forward-backward interpolation formulae discussed in the present article in the context of diffusion perturbation theory, interacting diffusions and discrete time approximations.
The technical proofs of some results are housed in the appendix.
\subsection{Statement of some main results}\label{sec-smr-intro}
\subsubsection{A backward It\^o-Ventzell formula}\label{sec-1-biw-intro}
We represent the gradient of a real valued function of several variables as a column vector, while the gradient and the Hessian of a (column) vector valued function are represented as tensors of type $(1,1)$ and $(2,1)$, see for instance (\ref{grad-def}) and (\ref{Hessian-def}); in plainer terms, a $(1,1)$ tensor is a matrix, while a $(2,1)$ tensor can be visualized as a ``row of matrices'' $[A_1,\ldots, A_n]$ whose entries $A_i$ are matrices of a common dimension. We also use the tensor product and the transpose operator defined in (\ref{tensor-notation}), see also (\ref{tensor-product-2-2}).
We denote by $D_t$ the Malliavin
derivative from some dense domain $\DD_{2,1}\subset\mathbb{L}_2(\Omega)$ into the space $\mathbb{L}_2(\Omega\times\mathbb{R}_+;\mathbb{R}^r)$. For multivariate $d$-column vector random variables $F$ with entries $F^j$, we use the same rules as for the gradient and $D_t F$ is the $(r,d)$-matrix with entries $(D_t F)_{i,j}:=D_t^iF^j$. For $(p\times q)$-matrices $F$ with entries $F^j_{k}$ we let $D_tF$ be the tensor with entries
$
(D_tF)_{i,j,k}=D^i _tF^j_{k}
$.
For a more thorough discussion on Malliavin derivatives and Skorohod integration we refer to section~\ref{sec-malliavin}.
Let $F$ be some function from $\mathbb{R}^p$ into $\mathbb{R}^q$, and let $y\in\mathbb{R}^p$ be some given state, for some $p,q\geq 1$. Suppose we are given a forward $p$-dimensional continuous semi-martingale $Y_{s,t}$ and a backward random field $F_{s,t}$ from $\mathbb{R}^p$ into $\mathbb{R}^q$ with a column-vector type canonical representation of the following form:
\begin{equation}\label{backward-random-fields}
\left\{
\begin{array}{rcl}
\displaystyle Y_{s,t}&=&\displaystyle y+\int_s^t~B_{s,u}~du+\int_s^t~\Sigma_{s,u}~dW_u\\
\displaystyle F_{s,t}(x)&=&\displaystyle F(x)+\int_s^t~G_{u,t}(x)~du+\int_s^t~H_{u,t}(x)~dW_u
\end{array}\right.
\end{equation}
for some $ {\cal W }_{s,t}$-adapted functions
$B_{s,t},G_{s,t},H_{s,t},\Sigma_{s,t}$ with appropriate dimensions and satisfying
the following conditions:\\
{\it
$(H_1)$: The functions $F_{s,t}$, $G_{u,t}$ and $H_{u,t}$, as well as $\nabla H_{u,t}$, $\nabla^2F_{u,t}$ and the derivatives $ D_v\nabla F_{u,t}$ and $D_vH_{u,t}$, are continuous w.r.t. the state and the time variables for any given $\omega\in\Omega$.

$(H_2)$: The functions $G_{u,t},\nabla H_{u,t}, \nabla^2 F_{u,t}$, and the derivatives
$D_vH_{u,t},D_v\nabla F_{u,t}$ have at most polynomial growth w.r.t. the state variable, uniformly with respect to $\omega\in \Omega$.

$(H_3)$: The processes $B_{s,u},\Sigma_{s,u}$, as well as $D_v\Sigma_{s,u}$, are continuous and have moments of any order. }\\
In this notation, the first main result of this article is the following theorem.
\begin{theo}\label{biv}
Assume conditions $(H_i)_{i=1,2,3}$ are satisfied. In this situation, for any $s\leq u\leq v\leq t$ we have the generalized backward It\^o-Ventzell formula
\begin{equation}\label{backward-ito-v}
\begin{array}{l}
\displaystyle F_{v,t}(Y_{s,v})-F_{u,t}(Y_{s,u}) =\int_u^v( \nabla F_{r,t}(Y_{s,r})^{\prime}~B_{s,r} +\frac{1}{2}~\nabla^2 F_{r,t}(Y_{s,r})^{\prime}~\Sigma_{s,r}\Sigma_{s,r}^{\prime}-G_{r,t}(Y_{s,r}))~dr\\
\\
\hskip5cm+ \displaystyle\int_u^v~\left(
\nabla F_{r,t}(Y_{s,r})^{\prime}~\Sigma_{s,r}-H_{r,t}(Y_{s,r})\right)~dW_r
\end{array}
\end{equation}
The stochastic anticipating integral on the r.h.s. of~(\ref{backward-ito-v}) is understood as a Skorohod stochastic integral.
\end{theo}
The above theorem can be seen as the backward version of the generalized It\^o-Ventzell formula presented in~\cite{ocone-pardoux,pardoux-90}. The proof of the above theorem is provided in section~\ref{biv-proof} (see theorem~\ref{theo-iv}).
Conventional forward and backward It\^o stochastic integrals are particular instances of the two-sided stochastic integrals introduced by Pardoux and Protter in~\cite{pardoux-protter}. The terminology ``two-sided'' coined by the authors in~\cite{pardoux-protter} comes from the fact that the integrand of the Skorohod integral depends on the past as well as on the future of the history generated by the Brownian motion.
The stochastic anticipating integral on the r.h.s. of (\ref{backward-ito-v}) involves a backward random field and a forward semimartingale, thus it is tempting to interpret this integral as a two-sided integral. Unfortunately, this class of integrands is not considered in the construction of the two-sided stochastic integrals defined in~\cite{pardoux-protter}. In section~\ref{two-sided} and section~\ref{sec-extended-2-sided} we shall present an extended version of the two-sided stochastic integrals introduced in~\cite{pardoux-protter} that applies to integrands defined as compositions of backward and forward stochastic flows. This extended
version applies to backward stochastic flows but it does not encapsulate more general backward random fields.
We believe more general extensions of the two-sided integrals can be developed, but it is out of the scope of this article to develop a theory of generalized two-sided stochastic integrals. We finally mention that all two-sided stochastic integrals discussed in this article are particular instances of Skorohod integrals.
\subsubsection{A stochastic flow interpolation formula}\label{sec-sfi-intro}
The diffusion flow (\ref{diff-def}) is defined
in terms of a column vector with twice continuously differentiable entries.
For $h\simeq 0$ we use
the backward approximation:
\begin{equation}\label{Alekseev-grobner-intro-0}
\begin{array}{l}
X_{s,t}(x)- X_{s-h ,t}(x)
=X_{s,t}(x)-(X_{s,t}\circ X_{s-h ,s})(x)\\
\\
\simeq X_{s,t}(x)-X_{s,t}\left(x+b_s(x)~h +\sigma_s(x)~\left(W_{s}-W_{s-h }\right)\right)\\
\\
\displaystyle \simeq -\left[\left( \nabla X_{s,t}(x)^{\prime}~b_s(x)+\frac{1}{2}~\nabla^2 X_{s,t}(x)^{\prime}~a_s(x)\right)~h + \nabla X_{s,t}(x)^{\prime}\sigma_s(x)~(W_{s}-W_{s-h})\right]
\end{array}
\end{equation}
In the above display, $X_{s,t}\circ X_{s-h ,s}$ stands for the composition of the mappings $X_{s,t}$ and $X_{s-h ,s}$.
The above approximations are rigorously justified in section~\ref{two-sided} and lead to the backward stochastic flow evolution equation:
\begin{equation}\label{backward-synthetic}
\begin{array}{l}
d_s X_{s,t}(x)
=-\left[\left( \nabla X_{s,t}(x)^{\prime}~b_s(x)+\frac{1}{2}~\nabla ^2 X_{s,t}(x)^{\prime}~a_s(x)\right)~ds+\nabla X_{s,t}(x)^{\prime}\sigma_s(x)~dW_s\right]
\end{array}
\end{equation}
In the above display, $d_s X^i_{s,t}(x)$ represents the change of the $i$-th entry $X^i_{s,t}(x)$ w.r.t. the time variable $s$.
In the same vein, for any $s< u< t$ we have the interpolating semigroup decompositions
$$
\begin{array}{l}
X_{u+h,t}\circ \overline{X}_{s,u+h}-X_{u,t}\circ \overline{X}_{s,u}\\
\\
=(X_{u+h,t}-X_{u,t})\circ \overline{X}_{s,u}+\left(X_{u+h,t}\circ \overline{X}_{s,u+h}-X_{u+h,t}\circ\overline{X}_{s,u}\right)
\end{array}$$
as well as the forward approximations
\begin{equation}\label{Alekseev-grobner-intro-ref-2}
\begin{array}{l}
X_{u+h,t}\left(\,\overline{X}_{s,u}(x)+\left(\overline{X}_{s,u+h}(x)-\overline{X}_{s,u}(x)\right)\right)-X_{u+h,t}(\overline{X}_{s,u}(x))\\
\\
\displaystyle \simeq\left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~(\overline{X}_{s,u+h}(x)-\overline{X}_{s,u}(x))
+\frac{1}{2}\,\left(\nabla ^2X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\overline{a}_u(\overline{X}_{s,u}(x))~h
\end{array}
\end{equation}
The above approximations are rigorously justified in section~\ref{two-sided} and lead to the forward-backward stochastic interpolation equation
\begin{equation}\label{ref-interp-du}
\begin{array}{l}
d_u\left(X_{u,t}\circ \overline{X}_{s,u}\right)(x)\\
\\
\displaystyle=\left(d_uX_{u,t}\right)(\overline{X}_{s,u}(x))+\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~d_u\overline{X}_{s,u}(x)
+\frac{1}{2}\,\left(\nabla ^2X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\overline{a}_u(\overline{X}_{s,u}(x))~du
\end{array}
\end{equation}
The discrete time version of the forward-backward stochastic formula in the above display reduces to the telescoping sum formula (\ref{ref-telescoping-sum}) and the second order Taylor expansions discussed in section~\ref{two-sided}. We note already that (\ref{ref-telescoping-sum}) can be interpreted as a discrete time version of the Alekseev-Gr\"obner lemma~\cite{Alekseev,grobner}. The terminology {\em forward-backward} comes from the forward and backward nature of (\ref{ref-interp-du}) and the telescoping sum formula (\ref{ref-telescoping-sum}).
Notice that (\ref{backward-synthetic}) can also be deduced formally from (\ref{ref-interp-du}) by replacing $\overline{X}_{s,u}$ by the stochastic flow ${X}_{s,u}$ in (\ref{ref-interp-du}), and then letting $s=u$.
This yields the following interpolation theorem.
\begin{theo}\label{theo-al-gr}
We have the forward-backward stochastic interpolation formula
\begin{equation}\label{Alekseev-grobner}
X_{s,t}(x)-\overline{X}_{s,t}(x)=T_{s,t}(\Delta a,\Delta b)(x)+S_{s,t}(\Delta \sigma)(x)
\end{equation}
with the stochastic process
\begin{equation}\label{def-T-st}
\begin{array}{l}
\displaystyle
T_{s,t}(\Delta a,\Delta b)(x)\\
\\
\displaystyle := \int_s^t\left[\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta b_u (\overline{X}_{s,u}(x))+\frac{1}{2}~\left(\nabla ^2X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta a_u(\overline{X}_{s,u}(x))\right]~du
\end{array}
\end{equation}
and the fluctuation term given by the Skorohod stochastic integral
\begin{equation}\label{def-S-sk}
S_{s,t}(\Delta \sigma)(x):=\int_s^t~\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta\sigma_u(\overline{X}_{s,u}(x))~dW_u
\end{equation}
The fluctuation term in the above display can also be seen as the extended two-sided stochastic integral
defined in (\ref{sk-integral}) (see also proposition~\ref{k-prop}).
\end{theo}
These interpolation formulae combine the backward evolution (\ref{backward-synthetic}) with the conventional forward evolution of the perturbed flow.
The proof of the interpolation formula (\ref{Alekseev-grobner}) is provided in section~\ref{proof-theo-al-gr}.
We will present two different approaches:
The first one presented in section~\ref{two-sided} is rather elementary and very intuitive. It combines the conventional It\^o-type discrete time approximations of stochastic integrals
discussed above with the two-sided stochastic integration calculus introduced in~\cite{pardoux-protter}. Using this approximation technique the
fluctuation term is defined by the extended two-sided stochastic integral
defined in (\ref{sk-integral}).
In this interpretation, the equation (\ref{Alekseev-grobner}) can be seen as an extended version of the It\^o-type change rule formula stated in theorem 6.1 in the article~\cite{pardoux-protter} to the interpolating flow
\begin{equation}\label{interpolating-flow}
Z^{s,t}~:~ u\in [s,t]~\mapsto~ Z^{s,t}_u:=X_{u,t}\circ \overline{X}_{s,u}\quad\Longrightarrow\quad Z^{s,t}_{s}-Z^{s,t}_{t}= X_{s,t}-\overline{X}_{s,t}
\end{equation}
Roughly speaking, the increments of the interpolating path
are decomposed into two parts:
One comes from the backward increments of the flow $u\mapsto X_{u,t}$ given the past values of the stochastic flow $\overline{X}_{s,u}$.
The other one comes from the conventional It\^o increments of $u\mapsto \overline{X}_{s,u}$ given the future values of the stochastic flow $X_{u,t}$.
The second approach discussed in section~\ref{ref-rig} is based on the generalized backward It\^o-Ventzell formula stated in theorem~\ref{biv}.
More precisely we also recover (\ref{Alekseev-grobner}) from (\ref{backward-ito-v}) by choosing
\begin{eqnarray*}
(F_{s,t}(x),Y_{s,t}(y))&=&(X_{s,t}(x),\overline{X}_{s,t}(y))\qquad
(B_{s,t},\Sigma_{s,t})=\left(\overline{b}_t\left(\overline{X}_{s,t}(x)\right),\overline{\sigma}_{t}\left(\overline{X}_{s,t}(x)\right)\right)\\
G _{u,t}(x)&=&\nabla F_{u,t}(x)^{\prime}~b_u(x) +\frac{1}{2}~\nabla^2 F_{u,t}(x)^{\prime}~a_{u}(x)\quad \mbox{\rm and}\quad
H _{u,t}(x)=\nabla F_{u,t}(x)^{\prime}~\sigma_u(x)
\end{eqnarray*}
and letting $(u,v)=(s,t)$ in (\ref{backward-ito-v}). The regularity conditions on the drift and the diffusion function ensure that conditions $(H_i)_i$ with $i=1,2,3$ stated in section~\ref{sec-1-biw-intro} are satisfied.
We emphasize that the backward diffusion flow discussed in (\ref{backward-synthetic}) and (\ref{ref-backward-flow}) is essential to apply theorem~\ref{biv}. Section~\ref{ref-rig} also provides a multivariate version of (\ref{Alekseev-grobner}).
The interpolation formula (\ref{Alekseev-grobner}) with a fluctuation term given
by the Skorohod stochastic integral (\ref{def-S-sk}) can be seen as an Alekseev-Gr\"obner formula of Skorohod type.
In this context, the integrability of the fluctuation term and any quantitative type
estimates require a refined analysis of the Malliavin derivatives of the integrand. Under our regularity
conditions the stochastic flows $X_{s,t}(x)$ and $\overline{X}_{s,t}(x)$ are H\"older-continuous w.r.t. the time parameters
as well as twice differentiable w.r.t. the space variables, with almost sure uniformly bounded first and second order derivatives. In addition, for any $n\geq 1$
all the $n$-absolute moments of the stochastic flows
are finite with at most linear growth w.r.t. the initial values.
These properties ensure that the Skorohod stochastic integral (\ref{def-S-sk}) is well defined and they allow us to derive several quantitative estimates. Section~\ref{sk-section} provides a refined analysis of the fluctuation term; see for instance theorem~\ref{theo-quantitative-sko}.
When $\sigma_t=0$ the flow $X_{s,t}(x)$ is deterministic, so that the Skorohod fluctuation term (\ref{def-S-sk}) reduces to the traditional It\^o stochastic integral. In this context,
quantitative estimates of the fluctuation term are obtained by combining Burkholder-Davis-Gundy inequalities with the generalized Minkowski inequality.
The resulting interpolation formula (\ref{Alekseev-grobner}) can be seen as an Alekseev-Gr\"obner formula of It\^o type.
To distinguish these two classes of models, the
interpolation formula (\ref{Alekseev-grobner}) associated with the case $\sigma_t=0$
will be called an It\^o-Alekseev-Gr\"obner formula; the one associated with the case $\Delta\sigma_t\not=0$ will be called a Skorohod-Alekseev-Gr\"obner formula.
\subsubsection{Uniform estimates w.r.t. the time horizon}\label{sec-uewrtt}
The final objective of this article is to derive uniform estimates w.r.t. the time parameter.
Our methodology is mainly based on two different types of regularity conditions to be defined and discussed in detail in section~\ref{regularity-sec}:
$\bullet$ The first is a technical condition
that ensures that the
$n$-absolute moments of the flows $ X_{s,t}$ and $\overline{X}_{s,t}$ are uniformly bounded w.r.t. the time horizon; we call this condition $(M)_{n}$.
$\bullet$ The second is a spectral condition on the gradient of the drift and diffusion matrices of the stochastic flows, which we call condition $(T)_{n}$. Without going into details, we state one usual case of interest: for constant diffusion functions (\ref{constant-sigma})
the spectral condition $(T)_{n}$ holds for any $n\geq 2$ as soon as the following log-norm
conditions are met
\begin{equation}\label{T2-intro}
\nabla b_t+(\nabla b_t)^{\prime}\leq -2\lambda~I\quad \mbox{\rm and}\quad \nabla \overline{b}_t+(\nabla \overline{b}_t)^{\prime}\leq -2\overline{\lambda}~I
\quad\mbox{\rm
for some $\lambda\wedge\overline{\lambda}>0$.}
\end{equation}
To motivate the above condition consider a linear drift function of the form $b_t(x)=B_t~x$
and $\sigma=0$. In this case the tangent process $\nabla X_{s,t}(x)$ satisfies a time-varying deterministic linear dynamical system
$$
\partial_t \,\nabla X_{s,t}(x)=\nabla X_{s,t}(x)~ B_t^{\prime}
$$
The asymptotic behavior of this process cannot be characterized by the statistical properties of the spectral abscissa of the matrices $B_t$. Indeed, unstable semigroups associated with time-varying (deterministic) matrices $B_t$ with negative eigenvalues are exemplified in~\cite{coppel1978stability,wu1974note}. Conversely, stable semigroups with $B_t$ having positive eigenvalues are given by Wu in~\cite{wu1974note}.
In contrast, the uniform log-norm condition (\ref{T2-intro}) provides a readily verifiable condition.
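For a concrete drift the log-norm condition can be tested directly. The following Python sketch (an illustration of ours, with a hypothetical drift) estimates the best constant $\lambda$ in (\ref{T2-intro}) by sampling states and computing the largest eigenvalue of the symmetrised Jacobian, here approximated by central finite differences.
\begin{verbatim}
# Estimating lambda in the log-norm condition: we need
#   sup_x  lambda_max( J(x) + J(x)' )  <=  -2 * lambda  <  0,
# where J denotes the Jacobian of the drift b.
import numpy as np

def jacobian(b, x, h=1e-6):              # central finite differences
    d = x.size
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = h
        J[:, j] = (b(x + e) - b(x - e)) / (2 * h)
    return J

# hypothetical drift: b = -grad U with U(x) = |x|^2/2 + log cosh(x_1)
def b(x):
    v = x.copy(); v[0] += np.tanh(x[0])
    return -v

rng = np.random.default_rng(0)
rates = []
for x in rng.normal(scale=3.0, size=(1000, 3)):
    J = jacobian(b, x)
    rates.append(-0.5 * np.linalg.eigvalsh(J + J.T).max())
print("estimated lambda:", min(rates))   # close to 1, so the condition holds
\end{verbatim}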
To describe with some precision the second main result of the article, we need to introduce some additional terminology.
When there is no ambiguity, we denote by $\Vert\mbox{\LARGE .}\Vert$ any (equivalent) norm on some finite dimensional vector space.
For some multivariate function $f_t(x)$, for $(t,x)\in [0,\infty)\times\mathbb{R}^d$,
let $\Vert f(x)\Vert:=\sup_{t}\Vert f_t(x)\Vert$ and the uniform norm be $\Vert f\Vert:=\sup_{t,x}\Vert f_t(x)\Vert$.
For any $n\geq 1$ we also set
\begin{equation}\label{ref-vertiii}
\vertiii{f(x)}_n:=\sup_{s\geq 0}\sup_{t\geq s}\mathbb{E}\left(\Vert f_t (\overline{X}_{s,t}(x))\Vert^{n}\right)^{1/n}
\end{equation}
We denote by $\kappa_n$ and $\kappa_{\delta,n}$ some constants that depend on some parameters $n$ and $(\delta,n)$ but do not depend on the time horizon, nor on the space variable.
In this notation, the second main result of the article takes basically the following form.
\begin{theo}\label{theo-al-gr-2}
Assume conditions $(M)_{2n/\delta}$ and $(T)_{2n/(1-\delta)}$ are satisfied for some parameters $n\geq 2$ and $\delta\in ]0,1[$. In this situation, we have the time-uniform estimates
\begin{equation}\label{intro-inq-1}
\begin{array}{l}
\displaystyle
\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}\\
\\
\displaystyle\leq \kappa_{\delta,n}~\left(\vertiii{\Delta a(x)}_{2n/(1+\delta)}+\vertiii{\Delta b(x)}_{2n/(1+\delta)}
+\vertiii{\Delta\sigma(x)}_{2n/\delta}~(1\vee\Vert x\Vert)\right)\end{array}
\end{equation}
For constant diffusion functions (\ref{constant-sigma}), the estimate simplifies to
\begin{equation}\label{intro-inq-2}
\displaystyle (\ref{T2-intro})\Longrightarrow \forall n\geq 2\quad\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}
\leq \kappa_n~\left(\vertiii{\Delta b(x)}_{n}+\Vert \Sigma-\overline{\Sigma}\Vert\right)
\end{equation}
\end{theo}
The estimates (\ref{intro-inq-1})
come from (\ref{intro-inq-0}) and (\ref{intro-inq-s}). A more detailed proof is provided in the appendix, on page~\pageref{intro-inq-1-proof}.
The estimates (\ref{intro-inq-2})
are direct consequences of (\ref{intro-inq-0-0}) and (\ref{intro-inq-s-2-2}).
When $\sigma_t=\overline{\sigma}_t$ the Skorohod term is indeed absent and (\ref{Alekseev-grobner}) reduces to
\begin{equation}\label{Alekseev-grobner-sigma-d}
X_{s,t}(x)-\overline{X}_{s,t}(x)=\int_s^t~\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta b_u (\overline{X}_{s,u}(x))~du
\end{equation}
We recover the interpolation formula for nonlinear stochastic flows presented in section 3.1 in the article~\cite{mp-var-18}.
In this context the analysis of $\mathbb{L}_n$-errors will proceed via a two-step procedure. In section \ref{tangent-sec} we will derive the exponential bound
$$\sup_{x} \mathbb{E}(\Vert \left(\nabla X_{u,t}\right)(x) \Vert_2^n)^{1/n}\leq \kappa_n \exp(-\lambda(n)~(t-u))\quad\mbox{\rm for some} \quad \lambda(n)>0$$
Using the Minkowski integral inequality in (\ref{Alekseev-grobner-sigma-d}) yields
\begin{eqnarray*}
\mathbb{E} \left [\Vert X_{s,t}(x)-\overline{X}_{s,t}(x) \Vert^n \right]^{1/n}
&\leq &
\int_s^t~ \mathbb{E} \left [\Vert \left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x)) \Vert^n\times\Vert \Delta b_u (\overline{X}_{s,u}(x))\Vert^n \right]^{1/n}~du.
\end{eqnarray*}
A further conditioning argument and the above exponential bound on the tangent process yields
\begin{equation*}
\mathbb{E} \left [\Vert X_{s,t}(x)-\overline{X}_{s,t}(x) \Vert^n \right]^{1/n} \leq \kappa_n~\int_s^t \exp(-\lambda(n)~(t-u))du~~
\sup_{s\leq u} \mathbb{E} [\Vert \Delta b_u (\overline{X}_{s,u}(x))\Vert^n]^{1/n}.
\end{equation*} Replacing the term outside the time integral with $\vertiii{\Delta b (x)}_n$ yields the stated result in (\ref{intro-inq-1}) excluding the terms representing the difference in the diffusions.
We illustrate one use of theorem \ref{theo-al-gr} in the context of analyzing the error in discretising the diffusion $ X_{s,t}(x)$ for some initial time point $s\geq 0$.
Let $h>0$ denote the discretisation interval size and for any $ t\in [s+kh,s+(k+1)h[$ let
$$
d X_{s,t}^h(x)=Y^h_{s,t}(x)~dt+ \Sigma~ dW_t\quad \mbox{\rm with}\quad Y^h_{s,t}(x):=b\left(X^h_{s,s+kh}(x)\right)
$$
for a fixed diffusion matrix $\sigma_t(x)=\Sigma$. Here $X_{s,t}^h(x)$ is the discretisation of $X_{s,t}(x)$ with resolution $h$. Note that the drift at time $t$ is not a function of the instantaneous value of $X_{s,t}^h(x)$, but rather of the value the process took at the largest discrete time-point before $t$. In section \ref{sec-extensions} we discuss how the formula in (\ref{Alekseev-grobner}) also applies in this context and establish that
$$
X^h_{s,t}(x)-X_{s,t}(x)=\int_s^t~\left(\nabla X_{u,t}\right)(X^h_{s,u}(x))^{\prime}~
~\left[Y^h_{s,u}(x)-b(X^h_{s,u}(x))\right]~
du.
$$ This comparison result, when combined with the regularity assumptions (\ref{eq:lem_discretize}), yields the moment bound below.
\begin{prop}\label{lem:discretize}
Assume that
\begin{equation}\label{eq:lem_discretize}
\nabla b+(\nabla b)^{\prime}\leq -2\lambda~I\qquad
\Vert \nabla b\Vert:=\sup_{x}{\Vert \nabla b(x)\Vert}<\infty \quad \mbox{\rm and}\quad
\langle x,b(x)\rangle\leq -\beta~\Vert x\Vert^2
\end{equation}
for some $\lambda>0$, $\beta>0.$ In this situation, for any $n\geq 1$ we have the uniform estimates
$$
\mathbb{E}\left(\Vert X^h_{s,t}(x)-X_{s,t}(x)\Vert^n\right)^{1/n}\leq
\Vert \nabla b\Vert~\left(\left[\Vert b(0)\Vert+\widehat{m}_n(x)~\Vert \nabla b\Vert \right]~h+\sigma~\sqrt{h}\right)
/\lambda
$$ where $\widehat{m}_n(x) \leq \kappa_n~(1+\Vert x\Vert)$.
\end{prop}
Proposition~\ref{lem:discretize} is proved in section \ref{subsec:lemdiscretizeproof}. To apply proposition~\ref{lem:discretize} to a Langevin diffusion with a convex potential $U(x)$, the drift would be $b_t(x)=-\nabla U(x)$ and the corresponding assumptions on $U(x)$ in (\ref{eq:lem_discretize}) are standard.
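For concreteness, a minimal simulation sketch of the discretisation scheme above is given below (illustrative Python code; the function name \texttt{euler\_frozen\_drift} and all parameter choices are ours and purely hypothetical), here instantiated for the Langevin drift $b(x)=-\lambda\,x$ associated with the quadratic potential $U(x)=\frac{\lambda}{2}\Vert x\Vert^2$.
\begin{verbatim}
import numpy as np

def euler_frozen_drift(b, x0, Sigma, h, T, rng):
    # Simulates X^h_{0,T}(x0): on each interval [kh,(k+1)h) the drift is
    # frozen at the last grid point, as in the scheme above, while the
    # constant-diffusion term Sigma dW_t is integrated exactly.
    x = np.array(x0, dtype=float)
    for _ in range(int(T / h)):
        drift = b(x)                                   # Y^h on [kh,(k+1)h)
        dW = rng.normal(scale=np.sqrt(h), size=Sigma.shape[1])
        x = x + drift * h + Sigma @ dW
    return x

# Langevin example: U(x) = lam * ||x||^2 / 2, hence b(x) = -lam * x
lam, sig, d = 1.0, 0.5, 2
rng = np.random.default_rng(0)
xT = euler_frozen_drift(lambda x: -lam * x, np.ones(d),
                        sig * np.eye(d), h=1e-2, T=10.0, rng=rng)
\end{verbatim}
For this choice of drift the assumptions (\ref{eq:lem_discretize}) are met with $\beta=\lambda$.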
\subsection{Comments and comparisons with existing literature}\label{sec-comments}
The interpolation formula (\ref{Alekseev-grobner}) can be interpreted as an extension of the Alekseev-Gr\"obner lemma~\cite{Alekseev,grobner,jentzen}, as well as an extended version of the variation-of-constants formula and related
Gronwall-type lemmas~\cite{bellman-2,gronwall}, to diffusion processes. In this connection we underline that the forward-backward formula (\ref{Alekseev-grobner}) differs from the stochastic Gronwall lemma presented in~\cite{scheutzow}, which is based on particular classes of stochastic linear inequalities that do not involve Skorohod type integrals.
The forward-backward interpolation formula (\ref{Alekseev-grobner}) can also be seen as an extension of theorem 6.1 in~\cite{pardoux-protter} on two-sided stochastic integrals to diffusion flows. This interpolation formula can also be interpreted as a backward version of the generalized It\^o-Ventzell formula presented in~\cite{ocone-pardoux} (see also theorem 3.2.11 in~\cite{nualart}).
Stochastic interpolation formulae of the form (\ref{Alekseev-grobner}) and their discrete time version discussed in (\ref{ref-telescoping-sum}) are not really new.
To describe their origins, it is worth mentioning that the stochastic perturbations may come from auxiliary random sources, uncertainty propagation, as well as time discretization schemes and mean field type particle fluctuations.
The pivotal interpolating telescoping sum formula (\ref{ref-telescoping-sum}) and the second order forward-backward perturbation semigroup methodology discussed in the present article can also be found in chapter 7 in~\cite{d-2004} for discrete time models as well as in the series of articles~\cite{dm-g-99,guionnet,dm-2000} published at the beginning of the 2000s, see also chapter 10 in~\cite{d-2013}. In this context, the random perturbations come from the fluctuations of a genetic type particle interpretation of nonlinear Feynman-Kac semigroups.
The more recent articles~\cite{bishop-stab,bishop-18,bishop-19} also provide a series of backward-forward interpolation formulae of the same form as (\ref{Alekseev-grobner}) for stochastic matrix Riccati diffusion flows arising in data assimilation theory (cf. for instance theorem 1.3 in~\cite{bishop-19} as well as section 2.2 in~\cite{bishop-18} and the proof of theorem 2.3 in~\cite{bishop-stab}). In this context, the random perturbations come from the fluctuations of a mean field particle interpretation of a class of nonlinear diffusions equipped with an interacting sample covariance matrix functional.
We underline that the It\^o-Alekseev-Gr\"obner formula (4.6) discussed in~\cite{bishop-19}
is an extension of the interpolation formula (\ref{Alekseev-grobner}) to stochastic diffusion flows in matrix spaces. In this context the unperturbed model is given by the
flow of a deterministic matrix Riccati differential equation and the random perturbations are described by matrix-valued diffusion martingales. The corresponding It\^o-Alekseev-Gr\"obner formulae can be seen as a matrix version of theorem 1.2 in the present article when $\sigma=0$.
These stochastic interpolation formulae were used in~\cite{bishop-19} to quantify the fluctuation of the stochastic flow around the limiting deterministic Riccati equation, at any order. We will briefly discuss the analog of these Taylor type expansions in section~\ref{sec-perturbation} in the context of Euclidian diffusions.
The forward-backward perturbation methodology discussed in the present article has also been used in~\cite{mp-var-18,mp-var-19} in the context of nonlinear diffusions and their mean field type interacting particle interpretations, see for instance section 2.3 in~\cite{mp-var-19}.
In this context, the random perturbations come from the fluctuations of a mean field particle interpretation of a class of nonlinear diffusions.
The extended version of the It\^o-Alekseev-Gr\"obner formula (\ref{Alekseev-grobner-sigma-d}) to nonlinear diffusions is also discussed in section 3.1 in the article~\cite{mp-var-18}. In this situation, the time varying drift and diffusion functions of the stochastic flows
depend on some possibly different nonlinear measure valued semigroups which may start from two possibly different initial distributions. For a more thorough discussion on this class of nonlinear diffusions, we refer to the It\^o-Alekseev-Gr\"obner formula (3.2) and corollary 3.2 in the article~\cite{mp-var-18}. These It\^o-Alekseev-Gr\"obner formulae correspond to theorem 1.2 in the present article when $\sigma=0$.
The interpolating stochastic semigroup techniques discussed in the present article are also applied to mean field particle systems and deterministic nonlinear measure valued semigroups. In this context, the process $X_{s,t}$ is given a deterministic measure-valued process and $\overline{X}_{s,t}$ represents the evolution of the particle density profiles associated with an approximating mean field particle interpretation of $X_{s,t}$.
For instance, the article~\cite{mp-dualtiy} is concerned with interacting jump models on path spaces, while the second article~\cite{mp-var-19} discusses the propagation of chaos properties of mean field type interacting diffusions. The stochastic interpolation formulae discussed in~\cite{mp-dualtiy,mp-var-19} correspond to the case (\ref{Alekseev-grobner}) with $\sigma=0$ and/or $\overline{\sigma}\not=\sigma$ (see for instance the interpolation formula (3.5), theorem 2.6, theorem 2.7 and the interpolating telescoping sum in section 1.2 in~\cite{mp-var-19}).
In the series of articles discussed above, as in (\ref{ref-interp-du}), the central common idea is to analyse the evolution of the interpolating process (\ref{interpolating-flow}) between a given process $X_{s,t}$ and some stochastic flow $\overline{X}_{s,t}$ with an extra level of randomness. In discrete time settings, the differential interpolation formula (\ref{ref-interp-du}) can also be recast in terms of a telescoping sum of the same form as (\ref{ref-telescoping-sum}) combined with a second order Taylor expansion reflecting the differences between a stochastic semigroup and its perturbations, see for instance chapter 7 in~\cite{d-2004}.
In most of the application domains discussed above, this second order stochastic perturbation methodology has been developed to quantify, uniformly w.r.t. the time horizon, the propagation of stochastic perturbations entering {\em a deterministic and stable reference (unperturbed) process}. In the context of Euclidian diffusions, this corresponds to the situation where the diffusion function $\sigma=0$ (the case $\overline{\sigma}=0$ can be treated by symmetry arguments).
The It\^o-Alekseev-Gr\"obner type formulae discussed in section 3.1 in the article~\cite{mp-var-18} correspond to theorem 1.2 in the present article when $\sigma=\overline{\sigma}$.
The present article can be seen as a natural extension of the second order perturbation methodology developed in the above referenced articles to diffusion type perturbed processes when $\sigma\not=\overline{\sigma}$.
To the best of our knowledge, the first article considering the case $\sigma\not=\overline{\sigma}$ with $\sigma\not=0$ and $\overline{\sigma}\not=0$ is the independent work of Hudde-Hutzenthaler-Jentzen-Mazzonetto~\cite{hudde}. In this article,
the authors discuss an It\^o-Alekseev-Gr\"obner formula for abstract diffusion perturbation models of the form (\ref{X-Y-Z}).
Here again, as in the list of referenced articles discussed above, the common central idea is to use discrete time approximations and combine the pivotal interpolating telescoping sum formulae (\ref{ref-telescoping-sum}) with a second order Taylor expansion. Besides this fact and in contrast with our analysis, the fluctuation term (\ref{def-S-sk}) discussed in~\cite{hudde} cannot be interpreted in terms of the extended two-sided stochastic integral
defined in (\ref{sk-integral}) (see also proposition~\ref{k-prop}) but only in terms of a Skorohod stochastic integral.
The study~\cite{hudde} is also based on a series of specifically chosen, custom regularity conditions.
For instance, the authors assume that the abstract diffusion perturbation models are chosen so that the Skorohod fluctuation term exists
without providing any quantitative type estimate. This work is also not connected to
the two-sided stochastic integration calculus developed by Pardoux and Protter in~\cite{pardoux-protter} nor to any type of backward It\^o-Ventzell formula.
We feel that our approach is more direct and intuitive as it relies on an extended version
of It\^o's change rule formula (\ref{ref-interp-du}) for interpolating stochastic flows. It also allows us to interpret the fluctuation term (\ref{def-S-sk}) as an extended two-sided stochastic integral.
In section~\ref{sk-section} of the present article, we will also see that any quantitative analysis requires estimating the absolute moments of the Malliavin derivatives of the stochastic integrands of the Brownian motion arising in the Skorohod fluctuation term. In our framework, these Malliavin derivatives depend on the gradients of both diffusion functions $(\sigma,\overline{\sigma})$ as well as on the tangent process of the perturbed diffusion flow. The quantitative analysis developed in section~\ref{sk-section} can be extended without difficulty to abstract diffusion perturbation models satisfying appropriate differentiability and integrability conditions.
The article~\cite{hudde} also presents an application to tamed Euler type
discrete time approximations of a stochastic van-der-Pol process introduced in~\cite{tim},
simplifying the analysis provided in an earlier work~\cite{hutz-14}. In this situation, we underline that the Skorohod fluctuation term is null, so that the resulting Alekseev-Gr\"obner type formula reduces to the simple and elementary case discussed in (\ref{Alekseev-grobner-sigma-d}) and in the article~\cite{mp-var-18}.
As expected for this class of ``unstable processes'',
the authors recast a series of $\mathbb{L}_2$-estimates discussed in~\cite{hutz-14} into a series of estimates that grow exponentially fast with respect to the time horizon.
In contrast with the present work, the above article doesn't discuss any quantitative uniform estimates w.r.t. the time horizon. The analysis presented in~\cite{hudde} is mainly concerned with the proof of a Skorohod-Alekseev-Gr\"obner type formula for abstract diffusion perturbation models, and it does not yield estimates for general diffusion perturbation models without additional regularity conditions.
Besides its elegance, the forward-backward interpolation formula (\ref{Alekseev-grobner}) is of rather limited mathematical and numerical interest without a better understanding
of the variational processes and the Skorohod fluctuation term (\ref{def-S-sk}).
A crucial problem is to avoid exceedingly pessimistic estimates that grow exponentially fast w.r.t. the time horizon.
One advantage of the second order perturbation methodology developed in the present article is that it takes advantage of the stability properties of the
tangent and the Hessian flows in the estimation of the Skorohod fluctuation term, and this sharpens the analysis of the difference between the stochastic flows.
Our main contribution is to develop a refined analysis of these variational processes and the Skorohod fluctuation terms. We also deduce several uniform perturbation propagation estimates with respect to the time horizon, yielding what seems to be the first results of this type for this class of models.
The forward-backward stochastic interpolation formula (\ref{Alekseev-grobner}) can also be extended to more general classes of stochastic flows on abstract state spaces. For instance the recent article~\cite{jentzen} provides a deterministic first order version of (\ref{Alekseev-grobner}) on abstract Banach spaces. The stochastic perturbation analysis developed in the series of articles~\cite{mp-dualtiy,mp-var-19,bishop-stab,bishop-18,bishop-19,dm-g-99,guionnet,dm-2000} and the books~\cite{d-2004,d-2013} is applied to matrix-valued diffusions and measure valued processes, including mean field type interacting diffusions and Feynman-Kac type interacting jumps models.
The stability properties of these abstract models discussed above depend on the problem at hand.
To focus on the main ideas, without clouding the article with unnecessary technical details and sophisticated mathematical tools based on abstract ad hoc regularity conditions, we have chosen to concentrate on diffusion flows on Euclidian spaces with simple and easily checked regularity conditions.
\section{Preliminary results}
\subsection{Some basic notation}\label{notation-sec}
With a slight abuse of notation, we denote by $I$ the identity $(d\times d)$-matrix, for any $d\geq 1$. We also denote by $\Vert\mbox{\LARGE .}\Vert$ any (equivalent) norm on a finite dimensional vector space over $\mathbb{R}$. All vectors are column vectors by default.
We introduce some matrix notation needed from the onset.
We denote by $\mbox{\rm Tr}(A)$, $\Vert A\Vert_{2}:=\lambda_{\tiny max}(AA^{\prime})^{1/2}=\lambda_{\tiny max}(A^{\prime}A)^{1/2}$, resp. $\Vert A\Vert_{F}=\mbox{\rm Tr}(AA^{\prime})^{1/2}$ and $\rho(A)=\lambda_{\tiny max}((A+A^{\prime})/2)$ the trace, the spectral norm, the Frobenius norm, and the logarithmic norm of some matrix $A$. $A^{\prime}$ is the transpose of $A$ and $\lambda_{\tiny max}(\mbox{\LARGE .})$ the largest eigenvalue. The spectral norm is sub-multiplicative or $\Vert A B\Vert_{2}\leq \Vert A\Vert_{2} \Vert B\Vert_{2}$ and compatible with the Euclidean norm for vectors, by that we mean for a vector $x$ we have $\Vert A x \Vert \leq \Vert A\Vert_{2} \Vert x\Vert$.
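For readers who wish to check these conventions numerically, the following short sketch (illustrative Python code, not part of the formal development) computes the three matrix norms introduced above and verifies the compatibility with the Euclidean norm on a sample matrix.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -3.0]])

spec = np.linalg.norm(A, 2)                     # ||A||_2 = lambda_max(A A')^{1/2}
frob = np.linalg.norm(A, 'fro')                 # ||A||_F = Tr(A A')^{1/2}
rho  = np.linalg.eigvalsh((A + A.T) / 2).max()  # rho(A)  = lambda_max((A+A')/2)

x = np.array([1.0, -1.0])
assert np.linalg.norm(A @ x) <= spec * np.linalg.norm(x) + 1e-12
assert spec <= frob + 1e-12                     # spectral norm below Frobenius
\end{verbatim}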
Let $[n]$ be the set of multi-indices $i=(i_1,\ldots,i_n)\in {\cal I }^n$ over some finite set $ {\cal I }$.
We denote by $(A_{i,j})_{(i,j)\in [p]\times [q]}$ the entries of a $(p,q)$-tensor $A$ with index set $ {\cal I }$ for $[p]$ and $\mathcal{J}$ for $[q]$. For the sake of brevity, the index sets will be implicitly defined through the context.
For a given $(p_1,q)$-tensor $A$ and a given $(q,p_2)$-tensor $B$, $AB$ is a $(p_1,p_2)$-tensor and $B^{\prime}$ is a $(p_2,q)$-tensor, with entries
given by
\begin{equation}\label{tensor-notation}
\forall (i,j)\in [p_1]\times [p_2]\qquad
(AB)_{i,j}=\sum_{k\in [q]}A_{i,k}~B_{k,j}\quad \mbox{\rm and}\quad B_{j,k}^{\prime}:=B_{k,j}.
\end{equation}
The symmetric part $A_{\tiny sym}$ of a $(p,p)$-tensor is the $(p,p)$-tensor $A_{\tiny sym}$ with entries
$$\forall (i,j)\in [p]\times [p]\qquad(A_{\tiny sym})_{i,j}=(A_{i,j}+A_{j,i})/2$$
We consider the Frobenius inner product given for any $(p,q)$-tensors $A$ and $B$ by
$$
\langle A,B\rangle_F=\mbox{\rm Tr}(AB^{\prime})=\sum_{i}(AB^{\prime})_{i,i}\quad \mbox{\rm and the norm}\quad \Vert A\Vert_F=\sqrt{\mbox{\rm Tr}(AA^{\prime})}
$$
For any $(p,q)$-tensors $A$ and $B$ we also check the Cauchy-Schwarz inequality
$$
\langle A,B\rangle_F\leq \Vert A\Vert_F~\Vert B\Vert_F\quad\mbox{\rm and}\quad \Vert A\Vert_{2}\leq\Vert A\Vert_F\leq \mbox{\rm Card}( {\cal I })^p~ \Vert A\Vert_{2}
\quad\mbox{\rm with}\quad
\Vert A\Vert_2:=\lambda_{\tiny max}(AA^{\prime})^{1/2}
$$
For any tensors $A,B$ with appropriate dimensions we have the inequality
$$
\Vert AB\Vert_F\leq \Vert A\Vert_F~\Vert B\Vert_F
$$
Given some tensor valued function $T:(t,x)\mapsto T_t(x)$ we also set
$$
\Vert T\Vert_{F}:=\sup_{t,x}\Vert T_t(x)\Vert_F\qquad \Vert T\Vert_{2}:=\sup_{t,x}\Vert T_t(x)\Vert_2
\qquad\mbox{\rm and}\qquad \Vert T\Vert:=\sup_{t,x}\Vert T_t(x)\Vert
$$
Given some smooth function $h(x)$ from $\mathbb{R}^{p}$ into $\mathbb{R}^{q}$
we denote by
\begin{equation}\label{grad-def}
\nabla h=\left[\nabla h^1,\ldots,\nabla h^q\right]\quad \mbox{\rm with}\quad\nabla h^i=\left[\begin{array}{c}
\partial_{x_1}h^i\\
\vdots\\
\partial_{x_p}h^i
\end{array}\right]
\end{equation}
the gradient $(p,q)$-matrix associated with the column vector-valued
function $h=(h^i)_{1\leq i\leq q}$. Building on this notation, let $b:\mathbb{R}^n \rightarrow \mathbb{R}^p$ and consider the mapping $x \mapsto G(x) = h(b(x))$; then $\nabla G(x) = \nabla b(x)\, \nabla h (b(x))$. Let
\begin{equation}\label{Hessian-def}
\nabla^2 h=\left[\nabla^2 h^1,\ldots,\nabla^2 h^q\right]\quad \mbox{\rm with}\quad\nabla^2 h^i=\left[\begin{array}{ccc}
\partial_{x_1,x_1}h^i&\ldots&\partial_{x_1,x_p}h^i\\
\vdots&\ldots&\vdots\\
\partial_{x_p,x_1}h^i&\ldots&\partial_{x_p,x_p}h^i\
\end{array}\right]
\end{equation}
The Hessian $H=\nabla^2 h$ associated with the
function $h=(h^i)_{1\leq i\leq q}$ is a $(2,1)$-tensor with entries $H_{(i,j),k}=(\nabla^2 h^k)_{i,j}=\partial_{x_i,x_j}h^k$. In this notation we can compactly represent the second order term of the Taylor expansion of the vector valued function $h$. For a vector $y=(y_1,\ldots,y_p)'$
\[
\left [
\begin{array}{c}
y^{\prime}~\nabla^2 h^1(x)~y \\
\vdots \\
y^{\prime}~\nabla^2 h^q(x)~y \\
\end{array}
\right ] = \nabla^2 h(x)^{\prime}~yy^{\prime}
\]
where we have regarded the matrix $yy'$ as the $(2,1)$-tensor $Y$ with $Y_{(i,j),1}=y_iy_j$.
In the same vein, in terms of the tensor product (\ref{tensor-notation}), for any pair of column vector-valued
function $h=(h^k)_{1\leq k\leq q}$ and $b=(b^i)_{1\leq i\leq p}$ and any matrix function $a=(a^{i,j})_{1\leq i,j\leq p}$ from $\mathbb{R}^{p}$ into $\mathbb{R}^{q}$, for any parameter $1\leq k\leq q$ we also have
$$
\begin{array}{rcl}
\displaystyle\left( \nabla h(x)^{\prime}~b(x)\right)^k&=& \displaystyle\sum_{1\leq i\leq p}( \nabla h(x))^{\prime}_{k,i}~b^i(x)=\sum_{1\leq i\leq p}~\partial_{x_i} h^k(x)~b^i(x)=\langle \nabla h^k(x),b(x)\rangle\\
\\
\displaystyle\left(\nabla^2 h(x)^{\prime}~a(x)\right)^k&=& \displaystyle\sum_{1\leq i,j\leq p}( \nabla^2 h(x))^{\prime}_{k,(i,j)}~a^{i,j}(x)\\
&&\\
&=& \displaystyle\sum_{1\leq i,j\leq p}~ \partial_{x_i,x_j} h^k(x)~a^{i,j}(x)=\langle \nabla^2h^k(x),a(x)\rangle_{F}
\end{array}
$$
In a more compact form, the above formula takes the form
\begin{equation}\label{tensor-product-2-2}
\nabla h(x)^{\prime}~b(x)=\left[\begin{array}{c}
\langle\nabla h^1(x),b(x)\rangle\\
\vdots\\
\langle\nabla h^q(x),b(x)\rangle
\end{array}\right]\quad\mbox{\rm and}\quad
\nabla^2 h(x)^{\prime}~a(x)=\left[\begin{array}{c}
\langle\nabla^2h^1(x),a(x)\rangle_F\\
\vdots\\
\langle\nabla^2h^q(x),a(x)\rangle_F
\end{array}\right]
\end{equation}
For any $n\geq 1$ we let $ {\cal P }_n(\mathbb{R}^d)$ be the convex set of probability measures on $\mathbb{R}^d$ with finite absolute $n$-th moment,
equipped with the Wasserstein distance of order $n$ defined by
$$
\mathbb{W}_n(\mu_1,\mu_2):=\inf\mathbb{E}(\Vert X_1-X_2\Vert^n)^{1/n}
$$
In the above display the infimum is taken over all pairs of random variables $(X_1,X_2)$ with marginal distributions $(\mu_1,\mu_2)$.
The stochastic transition semigroups associated with the flows $X_{s,t}(x)$ and $\overline{X}_{s,t}(x)$ are defined for any measurable function $f$
on $\mathbb{R}^d$ by the formulae
$$
\mathbb{P}_{s,t}(f)(x):=f(X_{s,t}(x))\quad\mbox{\rm and}\quad \overline{\mathbb{P}}_{s,t}(f)(x):=f(\overline{X}_{s,t}(x))
$$
The corresponding Markov transition semigroups are given by $P_{s,t}(f)(x):=\mathbb{E}\left(f(X_{s,t}(x))\right)$ and $\overline{P}_{s,t}(f)(x):=\mathbb{E}\left(f(\overline{X}_{s,t}(x))\right)$.
Given some column vector-valued
function $f=(f^i)_{1\leq i\leq p}$, let $\mathbb{P}_{s,t}(f)$ and $P_{s,t}(f)$ denote the column vector-valued
functions with entries $\mathbb{P}_{s,t}(f^i)$ and $P_{s,t}(f^i)$. Building on the tensor notation, let $\mathbb{P}_{s,t}(\nabla f)$ and $\mathbb{P}_{s,t}(\nabla^2 f)$
respectively denote the $(1,1)$ and $(2,1)$-tensor valued functions with entries
$$
\mathbb{P}_{s,t}(\nabla f)(x)_{i,k}:=
\mathbb{P}_{s,t}(\partial_{x_i} f^k)(x)\quad\mbox{\rm and}\quad \mathbb{P}_{s,t}(\nabla^2 f)(x)_{(i,j),k} :=\mathbb{P}_{s,t}(\partial_{x_i,x_j} f^k)(x)
$$
We also consider the random $(2,1)$ and $(2,2)$-tensors
given by
\begin{eqnarray*}
\nabla^2\, X_{s,t}(x)_{(i,j),k}&=&\partial_{x_i,x_j}X^{k}_{s,t}(x)=\left[\nabla^2\, X_{s,t}(x)\right]^{\prime}_{k,(i,j)}\\
\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]_{(i,j),(k,l)}&=&\nabla X_{s,t}(x)_{i,k}\nabla X_{s,t}(x)_{j,l}=\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]^{\prime}_{(k,l),(i,j)}
\end{eqnarray*}
Throughout the rest of the article, unless otherwise stated $\kappa,\kappa_{\epsilon},\kappa_n,\kappa_{n,\epsilon}$ denote
constants whose values may vary from line to line but only depend on the parameters in their subscripts, i.e. $n\geq 0$ and $\epsilon>0$, as well as on the parameters of the model; that is, on the drift and diffusion functions.
We also use the letters
$c,c_{\epsilon},c_n,c_{n,\epsilon}$ to denote universal constants.
Importantly, these constants do not depend on the time horizon.
We also consider the uniform log-norm parameters
\begin{equation}\label{def-nabla-sigma}
\rho(\nabla\sigma)^2:=
\sum_{1\leq k\leq r}\rho(\nabla\sigma_{k})^2
\quad\mbox{\rm and}\quad \rho_{\star}(\nabla \sigma):=\sup_{1\leq k\leq r}\rho(\nabla \sigma_{k})\quad\mbox{\rm with}\quad \rho(\nabla\sigma_{k}):=\sup_{t,x}\rho(\nabla \sigma_{t,k}(x))
\end{equation}
and the parameters $\rchi(b,\sigma)$ defined by
\begin{equation}\label{def-chi-b}
\rchi(b,\sigma):=c+\Vert \nabla^2b\Vert+\Vert \nabla^2\sigma\Vert^2
+ \rho_{\star}(\nabla \sigma)^2
\end{equation}
\subsection{Regularity conditions and some preliminary results}\label{regularity-sec}
We consider two different types of regularity conditions ($ {\cal M }$)$_n$ and $( {\cal T})_n$, indexed by some parameter
$n\in [2,\infty[$, for the diffusion $(b_t,\sigma_t)$.
\\
\begin{description}
\item[$( {\cal M })_n$] There exists some parameter $\kappa_n\geq 0$ such that for any $x\in \mathbb{R}^d$ we have
$$
m_n(x):=\sup_{s\leq t}\mathbb{E}\left(\Vert X_{s,t}(x)\Vert^{n}\right)^{1/n}\leq \kappa_n~(1\vee \Vert x\Vert)
$$
\item[$( {\cal T})_n$] There exists some parameter $\lambda_A>0$ such that
\begin{equation}\label{ref-mat-Upsilon}
A_t:=\nabla b_t+(\nabla b_t)^{\prime}+\sum_{1\leq k \leq r}\nabla \sigma_{k,t}(\nabla\sigma_{k,t})^{\prime}\leq -2\lambda_A~I
\end{equation}
where $\sigma_{k,t}$ denotes the $k$-th column of $\sigma_{t}.$
In addition, the following condition is satisfied
\begin{equation}\label{ref-lambda-A-n}
\lambda_A(n):=\lambda_A-\frac{d(n-2)}{2}~ \rho_{\star}(\nabla \sigma)^2>0
\end{equation} \end{description}
We now define the corresponding assumptions for the diffusion $(\overline{b}_t,\overline{\sigma}_t)$.
\begin{description}
\item[$(\overline{ {\cal M }})_n$] The regularity condition defined as in $( {\cal M })_n$ for the diffusion $(\overline{b}_t,\overline{\sigma}_t)$.
\item[$(\overline{ {\cal T}})_n$] Let $\overline{A}_t$ be the symmetric matrix defined as $A_t$ in \eqref{ref-mat-Upsilon} when
$({b}_t,{\sigma}_t) = (\overline{b}_t,\overline{\sigma}_t)$. Assume there exists some $\lambda_{\overline{A}}>0$ such that $\overline{A}_t \leq -2 \lambda_{\overline{A}}~I $.
Furthermore, assume $ \lambda_{\overline{A}}(n)>0$ where $ \lambda_{\overline{A}}(n)$ is defined as $\lambda_A(n)$ when
$(\lambda_A,{\sigma}_t)=(\lambda_{\overline{A}},\overline{\sigma}_t)$.
\item[$(M)_n$] We write $(M)_n$ when both conditions $({ {\cal M }})_n$ and $(\overline{ {\cal M }})_n$ are satisfied.
\item[$(T)_n$] Both conditions $({ {\cal T}})_n$ and $(\overline{ {\cal T}})_n$ are met, and let
$$
\lambda_{A,\overline{A}}(n):=\lambda_{A}(n)\wedge \lambda_{\overline{A}}(n)
$$
\end{description}
In practice, the uniform moment condition $( {\cal M })_n$ is often checked using Lyapunov techniques. For example we can use the following
polynomial growth condition. \\
\begin{description}
\item[$( {\cal P })_n$] There exist some parameters $\alpha_i,\beta_i\geq 0$ with $i=0,1,2$ such that
for any $t\geq 0$
and any $x\in\mathbb{R}^d$ we have
\begin{equation}\label{def-alpha-beta}
\Vert \sigma_{t}(x)\Vert_F^2\leq \alpha_0+\alpha_1\Vert x\Vert+\alpha_2\Vert x\Vert^2
\quad\mbox{and}\quad
\langle x,b_t(x)\rangle\leq \beta_0+\beta_1\Vert x\Vert-\beta_2\Vert x\Vert^2
\end{equation}
for some norm $\Vert \sigma_{t}(x) \Vert$ of the matrix-valued diffusion function. In addition, we have
$$
\beta_2(n):= \beta_2-\frac{(n-1)}{2}~\alpha_2>0
$$
\end{description}
\begin{lem} For any $n\geq 2$ we have
\begin{equation}\label{moments-intro}
( {\cal P })_n\quad\Longrightarrow\quad ( {\cal M })_n\quad \mbox{\rm with}\quad \kappa_n=1+\frac{(\gamma_1+(n-2)\alpha_1)+(\gamma_0+(n-2)\alpha_0)^{1/2}}{2\beta_2(n)^{1/2}}
\end{equation}
\end{lem}
The proof of the above assertion follows standard stochastic calculations, thus it is housed in the appendix, on page~\pageref{moments-intro-proof}.
For one-dimensional geometric Brownian motions the condition
$( {\cal P })_n$ is a sufficient and necessary condition for the existence of uniformly bounded absolute $n$-moments. In this case $( {\cal T})_n$ coincides with
$( {\cal P })_n$ by setting $$\lambda_A=\beta_2-\alpha_2/2\quad \mbox{\rm and}\quad \alpha_2= \rho_{\star}(\nabla \sigma)^2$$
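Indeed, for the geometric Brownian motion $dX_t=-\beta_2~X_t~dt+\sqrt{\alpha_2}~X_t~dW_t$ the condition $( {\cal P })_n$ holds with $\alpha_0=\alpha_1=\beta_0=\beta_1=0$, and a direct computation gives
$$
X_t=X_0~\exp{\left(-(\beta_2+\alpha_2/2)\,t+\sqrt{\alpha_2}~W_t\right)}
\quad\Longrightarrow\quad
\mathbb{E}\left(\vert X_t\vert^n\right)^{1/n}=\vert X_0\vert~e^{-\beta_2(n)\,t}
$$
so that the absolute $n$-moments stay uniformly bounded w.r.t. the time horizon if and only if $\beta_2(n)\geq 0$, in line with the above discussion.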
Whenever condition $(M)_n$ is met for some $n\geq 2$, we also check the uniform estimates
\begin{equation}\label{ref-ui-m-over}
\mathbb{E}\left(\Vert [X_{u,t}\circ \overline{X}_{s,u}](x)\Vert^{n}\right)^{1/n}\leq \kappa_n~(1+\Vert x\Vert)
\end{equation}
with the same parameter $ \kappa_n$ as the one associated with the condition $(M)_n$.
Recalling that the functions $(b_t,\overline{b}_t)$ and $(\sigma_t,\overline{\sigma}_t)$ have at most linear growth,
with the $\mathbb{L}_n$-norms $\vertiii{\mbox{\LARGE .}}_n$ introduced in (\ref{ref-vertiii}) we also have that
\begin{equation}\label{ref-ui-m-over-delta}
\vertiii{\Delta b(x)}_n\leq \kappa_{1,n}(1\vee\Vert x\Vert)\quad \mbox{\rm and}\quad \vertiii{\Delta a(x)}_{n/2}\leq \kappa_{2,n}~(1\vee\Vert x\Vert)^2
\end{equation}
To give more insight into where these assumptions will be used, we now briefly state the stability results that stem from them. Condition $( {\cal T})_n$ ensures the exponential decay of the absolute and uniform $n$-moments of the tangent and the Hessian processes; that is, when $( {\cal T})_n$ is met for some $n\geq 2$ we have that
\begin{equation}\label{ref-tan-hess}
\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^{n}\right)^{1/n}\vee\mathbb{E}\left(\Vert \nabla^2 X_{s,t}(x)\Vert^{n}\right)^{1/n}\leq \kappa_n~e^{-\lambda(n) (t-s)}\quad \mbox{\rm for some}\quad\lambda(n)>0
\end{equation}
A more precise statement is provided in proposition~\ref{def-4th-prop} and proposition~\ref{eq-prop-nabla-2-estimate}.
These uniform estimates clearly imply, via a conditioning argument, that for any $n\geq 2$ and $s\leq u\leq t$ we have
\begin{equation}\label{intro-inq-nabla}
\mathbb{E}\left(\Vert (\nabla X_{u,t})(\overline{X}_{s,u}(x))\Vert^{n}\right)^{1/n}\vee\mathbb{E}\left(\Vert (\nabla^2 X_{u,t})(\overline{X}_{s,u}(x))\Vert^{n}\right)^{1/n}\leq \kappa_n~e^{-\lambda(n) (t-u)}
\end{equation}
with the same parameters $(\kappa_n,\lambda(n))$ as in (\ref{ref-tan-hess}).
The case $\nabla \sigma=0$ will also serve a useful purpose, for example in analysing the error of a numerical implementation as in proposition \ref{lem:discretize}. For instance whenever $( {\cal T})_2$ is met we have
the almost sure and uniform gradient estimates
\begin{equation}\label{ref-nablax-estimate-0-ae}
\Vert\nabla X_{s,t}\Vert_2:=\sup_x\Vert\nabla X_{s,t}(x)\Vert_2\leq e^{-\lambda_A(t-s)}
\end{equation}
In addition, we have the almost sure and uniform Hessian estimates
\begin{equation}\label{ref-nabla2x-estimate-0-ae}
\Vert\nabla^2 X_{s,t}\Vert_F:=\sup_x\Vert\nabla^2 X_{s,t}(x)\Vert_F\leq \frac{d}{\lambda_A}~\Vert\nabla^2b\Vert_{F}~e^{-\lambda_A(t-s)}
\end{equation}
A proof of the above estimates is provided in the beginning of section~\ref{tangent-sec} and section~\ref{hessian-sec}. In this situation, whenever $( {\cal T})_{2}$ is met we have
\begin{equation}\label{intro-inq-0-0}
\mathbb{E}\left[\Vert T_{s,t}(\Delta a, \Delta b)(x)\Vert^n\right]^{1/n} \leq \kappa~\left(\vertiii{\Delta b(x)}_{n}+\vertiii{\Delta a(x)}_{n}\right).
\end{equation}
In the above display, $T_{s,t}(\Delta a, \Delta b)(x)$ stands for the stochastic process discussed in (\ref{def-T-st}), and $\kappa$ stands for some finite constant that doesn't depend on the parameter $n$.
For instance, for a Langevin diffusion associated with some convex potential function $U$ we have $b=-\nabla U$ and $\nabla \sigma=0$. Then
assuming
\begin{equation}\label{ref-HA-Langevin}
\begin{array}{l}
\nabla^2 U\geq \lambda~I \quad \Longrightarrow \quad ( {\cal T})_2 \quad \mbox{\rm is met}\\
\\
\displaystyle \Longrightarrow \quad \Vert\nabla X_{s,t}\Vert_2\leq e^{-\lambda(t-s)}\quad\mbox{\rm and}\quad
\Vert\nabla^2 X_{s,t}\Vert_F\leq \frac{d}{\lambda}~\Vert\nabla^3U\Vert_{F}~e^{-\lambda(t-s)}
\end{array}
\end{equation} where the almost sure tangent and Hessian bounds follow from (\ref{ref-nablax-estimate-0-ae}) and (\ref{ref-nabla2x-estimate-0-ae}) respectively.
In practice, it is often easier to work with $a_t(x)=\sigma_t(x)\sigma_t(x)'$ than $\sigma_t(x)$ and we now discuss some ways of estimating $\Delta \sigma_t(x) = \sigma_t(x) -\overline{\sigma}_t(x)$
in terms of $\Delta a_t(x) =a_t(x) -\overline{a}_t(x)$ and in the reverse direction. The latter is straightforward:
$$
\Vert \Delta a_t(x) \Vert\leq \Vert \Delta \sigma_t (x)\Vert~\left[ \Vert \sigma_t(x)\Vert+ \Vert \overline{\sigma}_t(x)\Vert\right].
$$
To estimate $\Delta \sigma_t$ in terms of $\Delta a_t$, assume the following
ellipticity condition is satisfied
\begin{equation}\label{elip}
a_t(x)\geq \upsilon~I\quad \mbox{\rm and}\quad \overline{a}_t(x)\geq \upsilon~I
\quad\mbox{\rm
for some parameter $\upsilon>0$.}
\end{equation}
We recall the Ando-Hemmen inequality~\cite{hemmen} for any symmetric positive definite matrices $Q_1,Q_2$
\begin{equation}\label{square-root-key-estimate}
\Vert Q_1^{1/2}-Q_2^{1/2}\Vert \leq \left[\lambda^{1/2}_{min}(Q_1)+\lambda^{1/2}_{min}(Q_2)\right]^{-1}~\Vert Q_1- Q_2\Vert
\end{equation}
for any unitarily invariant matrix norm $\Vert . \Vert$. In the above display, $\lambda_{\tiny min}(\mbox{\LARGE .})$ stands for the minimal eigenvalue. We also have
the square root inequality
\begin{equation}\label{square-root-inq}
Q_1\geq Q_2\Longrightarrow Q_1^{1/2}\geq Q_2^{1/2}
\end{equation}
See for instance theorem 6.2 on page 135 in~\cite{higham}, as well as proposition 3.2 in~\cite{hemmen}. A proof of (\ref{square-root-inq}) can be found in~\cite{bellman}.
In this situation, interpreting $\sigma_t(x)$ as the symmetric square root $a_t(x)^{1/2}$ and using (\ref{square-root-key-estimate}) and (\ref{square-root-inq}) we check that
\begin{equation}\label{elip-ref-est}
\Vert \Delta \sigma_t (x)\Vert \leq \frac{1}{\sqrt{ \upsilon}}~\Vert \Delta a_t(x) \Vert\quad \mbox{\rm and}\quad
\Vert \sigma_t(x)\Vert\leq \Vert \sigma_t(0)\Vert+\frac{1}{\sqrt{ \upsilon}}~\left[\Vert a_t(x)\Vert+\Vert a_t(0)\Vert\right]
\end{equation}
This provides a way to estimate the growth of $\sigma_t(x)$ in terms of the one of $a_t(x)$.
For instance the estimate (\ref{intro-inq-1}) combined with (\ref{elip-ref-est}) implies that
$$
\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}\leq \kappa_{\delta,n}~\left(\vertiii{\Delta b(x)}_{2n/(1+\delta)}+\vertiii{\Delta a(x)}_{2n/\delta}(1\vee\Vert x\Vert)
\right)
$$
$\bullet$ Assume that $(\overline{ {\cal M }})_n$ is satisfied for some $n\geq 1$. Also let $f_t(x)$ be some multivariate function such that
$$\Vert f(0)\Vert:=\sup_t\Vert f_t(0)\Vert<\infty\quad\mbox{\rm and} \quad\Vert \nabla f\Vert:=\sup_{t,x}\Vert \nabla f_t(x)\Vert<\infty$$
In this situation, we have the estimates
$$
\vertiii{f(x)}_n\leq \Vert f(0)\Vert+\Vert \nabla f\Vert~\overline{m}_n(x)\quad \mbox{\rm and therefore}\quad
\vertiii{f(x)}_n\leq \kappa_n~(\Vert f(0)\Vert+\Vert \nabla f\Vert)~(1\vee \Vert x\Vert)
$$
\subsection{Some results on anticipating stochastic calculus}\label{sec-malliavin}
In this section we review some results on Malliavin derivatives and Skorohod integration calculus which will be needed below.
We restrict the presentation to unit time intervals.
Let $(\Omega, {\cal W })$ be the canonical space equipped with the Wiener measure $\mathbb{P}$ associated with the $r$-dimensional Brownian motion $W_t$ discussed in the introduction.
The Malliavin
derivative $D_t$ is a linear operator from some dense domain $\DD_{2,1}\subset\mathbb{L}_2(\Omega)$ into the space $\mathbb{L}_2(\Omega\times [0,1];\mathbb{R}^r)$ of $r$-dimensional processes
with square integrable states on the unit time interval. For multivariate $d$-column vector random variables $F$ with entries $F^i$, we use the same rules as for the gradient and we set
$$
D_tF=\left[D_tF^1,\ldots,D_tF^d\right]\quad \mbox{\rm with}\quad D_tF^i=\left[\begin{array}{c}
D^1_tF^i\\
\vdots\\
D^r_tF^i
\end{array}\right]
$$
For $(p\times q)$-matrices $F$ with entries $F^j_{k}$ we let $D_tF$ be the tensor with entries
$$
(D_tF)_{i,j,k}=D^i _tF^j_{k}
$$
It is clearly beyond the scope of this article to review the
analytical construction of Malliavin differential calculus. For a more thorough discussion we refer the reader to the seminal book by Nualart~\cite{nualart}, see also
the more synthetic presentation in the articles~\cite{nualart-pardoux,ocone-pardoux}.
Formally, one can think of the Malliavin derivative $D_{t}^iF$ of some $F\in \DD_{2,1}$ as a way to extract from the random variable $F$ the integrand of the Brownian increment $dW^i_t$.
For instance, when $s\leq t$ we have
\begin{eqnarray}
D_{t}^iX_{s,t}(x)&=&\sigma_{t,i}( X_{s,t}(x))\nonumber\\
(D_{t}\,\nabla X_{s,t}(x))_{i,j,k}&=& D_{t}^i\,(\nabla X_{s,t}(x))_{j,k}:=\left(\nabla X_{s,t}(x)~\nabla\sigma_{t,i}( X_{s,t}(x))\right)_{j,k}~
\label{first-Malliavin}
\end{eqnarray}
As conventional differentials, for any smooth function $G$ from $\mathbb{R}^d$ into $\mathbb{R}^{p\times q}$, Malliavin derivatives satisfy the chain rule properties
$$
D_t^i(G^j_{k}\circ F)=\sum_{1\leq l\leq d}\left(\partial_{x_l}G_{k}^j\right)(F)\times D_t^iF^l\quad\Longleftrightarrow\quad D_t(G\circ F)=D_tF~((\nabla G)\circ F)
$$
For instance, for any $s\leq u\leq t$ we have
\begin{equation}
D_u\left(X_{u,t}\circ X_{s,u}\right)=\left(D_u X_{s,u}\right)~\left[\left(\nabla X_{u,t}\right)\circ X_{s,u}\right]~~\mbox{\rm and}~~
D_{u}\left(\varsigma_{t}\circ X_{s,t}\right)=(D_u X_{s,t})~\left[\left(\nabla \varsigma_{t}\right)\circ X_{s,t}\right]\label{ref-chain-r}
\end{equation}
In the same vein, we have
\begin{equation}\label{s-tensor-diff-u-v}
\begin{array}{l}
D_{u}\left( \nabla X_{s,u}~\left[\left(\nabla X_{u,t}\right)\circ X_{s,u}\right]\right)\\
\\
\displaystyle=(D_{u}\nabla X_{s,u})
\left[\left(\nabla X_{u,t}\right)\circ X_{s,u}\right]+\left(D_{u}X_{s,u}\otimes \nabla X_{s,u}\right)\left[\left(\nabla^2X_{u,t}\right)\circ X_{s,u}\right]
\end{array}
\end{equation}
Let $\mathbb{L}_{2,1}(\mathbb{R}^r)\subset \mathbb{L}_2(\Omega\times [0,1];\mathbb{R}^r)$ be the Hilbert space of $r$-dimensional processes $U_t$ with Malliavin differentiable
entries $U^i_t\in\DD_{2,1}$, equipped with the norm
$$
\vertiii{U}:=\mathbb{E}\left(\int_{[0,1]}~\Vert U_t\Vert^2~dt \right)^{1/2}+\mathbb{E}\left(\int_{[0,1]^2}~\Vert D_sU_t\Vert^2~ds\,dt\right)^{1/2}
$$
The Skorohod integral w.r.t. the Brownian motion $W^i_t$ on the unit interval is defined as the linear and continuous mapping $$V\in \mathbb{L}_{2,1}(\mathbb{R})\mapsto \int_0^1V_t~dW_t^i\in \mathbb{L}_2(\Omega)$$ characterized by the
following two properties
\begin{eqnarray}
\mathbb{E}\left(\int_0^1V_t~dW_t^i\right)&=&0\nonumber\\
\mathbb{E}\left(\left(\int_0^1V_t~dW_t^i\right)^2\right)&=&\mathbb{E}\left(\int_{[0,1]}~V_t^2~dt \right)+\mathbb{E}\left(\int_{[0,1]^2}~D^i_sV_t~D^i_{t}V_s~ds\,dt \right)\label{isometry}
\end{eqnarray}
The above formula can be seen as an extended version of the It\^o isometry to Skorohod integrals, for instance~\cite{nualart-z}, as well as chapters 1.3 to 1.5 in the book by Nualart~\cite{nualart}.
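Note that when the process $V_t$ is adapted to the Brownian filtration we have $D^i_sV_t=0$ for any $s>t$, so that the product $D^i_sV_t~D^i_{t}V_s$ vanishes for almost every $(s,t)\in [0,1]^2$; in this case the second term on the r.h.s. of (\ref{isometry}) disappears, the Skorohod integral coincides with the It\^o integral, and we recover the conventional It\^o isometry.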
As for the It\^o integral, the Skorohod integral w.r.t. the $r$-dimensional Brownian motion $W_t$ of a matrix valued process with entries
$V^i_{k}\in \mathbb{L}_{2,1}(\mathbb{R})$ is defined by the column vector with entries
$$
\left( \int_0^1V_t~dW_t\right)^i:=
\int_0^1V_t^i~dW_t:=\sum_{1\leq k\leq r} \int_0^1V^i_{t,k}~dW_t^k
$$
\section{Variational equations}\label{var-sec}
\subsection{The tangent process}\label{tangent-sec}
In terms of the tensor product (\ref{tensor-product-2-2}), the gradient $\nabla X_{s,t}(x)$ of the diffusion flow $X_{s,t}(x)$ is given by the gradient $(d\times d)$-matrix
$$
d \,\nabla X_{s,t}(x)=\nabla X_{s,t}(x)~\left[\nabla b_t\left(X_{s,t}(x)\right)~dt+\sum_{1\leq k\leq r}\nabla \sigma_{t,k}\left(X_{s,t}(x)\right)~dW^k_t\right]
$$
where $W^k_t$ is the $k$-th component of the Brownian motion. After some calculations we check that
\begin{equation}\label{def-C}
\begin{array}{l}
\displaystyle d \,\left[\nabla X_{s,t}(x) \,\nabla X_{s,t}(x)^{\prime}\right]
=\nabla X_{s,t}(x) ~A_t\left(X_{s,t}(x)\right)~\nabla X_{s,t}(x)^{\prime}~dt+d M_{s,t}(x)
\end{array}
\end{equation}
with the matrix function $A_t(x)$ defined in (\ref{ref-mat-Upsilon}) and the symmetric matrix valued martingale
$$
d M_{s,t}(x):=\sum_{1\leq k\leq r}~\nabla X_{s,t}(x) \left[\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)+\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)^{\prime}\right]\nabla X_{s,t}(x)^{\prime}~dW^k_t
$$
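For the reader's convenience we sketch the computation behind (\ref{def-C}). By the It\^o product rule we have
$$
\begin{array}{l}
d \,\left[\nabla X_{s,t}(x) \,\nabla X_{s,t}(x)^{\prime}\right]\\
\\
\displaystyle=\left(d \,\nabla X_{s,t}(x)\right) \nabla X_{s,t}(x)^{\prime}+\nabla X_{s,t}(x) \left(d \,\nabla X_{s,t}(x)\right)^{\prime}\\
\\
\displaystyle\hskip3cm+\sum_{1\leq k\leq r}\nabla X_{s,t}(x)~\nabla \sigma_{t,k}\left(X_{s,t}(x)\right)\nabla \sigma_{t,k}\left(X_{s,t}(x)\right)^{\prime}~\nabla X_{s,t}(x)^{\prime}~dt
\end{array}
$$
Collecting the $dt$-terms produces the matrix $A_t$ defined in (\ref{ref-mat-Upsilon}) sandwiched between $\nabla X_{s,t}(x)$ and $\nabla X_{s,t}(x)^{\prime}$, while the $dW^k_t$-terms form the martingale increment $dM_{s,t}(x)$.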
These expansions, when combined with condition $( {\cal T})_2$, yield the following estimates of the difference between $X_{s,t}(x)$ and $X_{s,t}(y)$.
\begin{prop}
Assume $( {\cal T})_2$ is satisfied. Then
\begin{equation}\label{ref-nablax-estimate-0-bis}
\mathbb{E}\left(\Vert X_{s,t}(x)-X_{s,t}(y)\Vert^2\right)^{1/2}\leq \sqrt{d}~e^{-\lambda_A(t-s)}~~\Vert x-y\Vert.
\end{equation}
In addition, we have the almost sure estimate
\begin{equation}\label{ref-nablax-estimate-0-ae-bis}
\nabla\sigma=0\Longrightarrow \Vert X_{s,t}(x)-X_{s,t}(y)\Vert\leq e^{-\lambda_A(t-s)}~~\Vert x-y\Vert
\end{equation}
\label{prop:diff_innit}
\end{prop}
\begin{proof} [Proof of Prop.\ \ref{prop:diff_innit}]
Whenever $( {\cal T})_2$ is met, we have the following uniform estimate from (\ref{def-C})
\begin{equation}\label{ref-nablax-estimate-0}
( {\cal T})_2\Longrightarrow \mathbb{E}\left(\Vert\nabla X_{s,t}(x)\Vert_2^2\right)^{1/2}\leq \mathbb{E}\left(\Vert\nabla X_{s,t}(x)\Vert_F^2\right)^{1/2}\leq \sqrt{d}~e^{-\lambda_A(t-s)}
\end{equation}
where the $\sqrt{d}$ term arises from imposing the initial condition $\nabla X_{s,s}(x)=I$ (for which $\Vert I\Vert_F=\sqrt{d}$) on the resulting differential equation for
$\partial_t \mathbb{E}\left(\Vert\nabla X_{s,t}(x)\Vert_F^2\right)^{1/2}$.
In addition, when $\nabla \sigma=0$ the martingale $M_{s,t}(x)=0$ is null and as a consequence of (\ref{def-C}) we have the following almost sure estimate
\begin{equation}\label{ref-nablax-estimate-0-ae-again}
\Vert\nabla X_{s,t}\Vert_2:=\sup_x\Vert\nabla X_{s,t}(x)\Vert_2\leq e^{-\lambda_A(t-s)}
\end{equation}
The Taylor expansion
$$
\begin{array}{l}
\displaystyle X_{s,t}(x)-X_{s,t}(y)=\int_0^1~\nabla X_{s,t}(\epsilon x+(1-\epsilon)y)^{\prime}(x-y)~d\epsilon\\
\\
\displaystyle\Longrightarrow \Vert X_{s,t}(x)-X_{s,t}(y)\Vert^2\leq \left[\int_0^1~\Vert \nabla X_{s,t}(\epsilon x+(1-\epsilon)y)\Vert_2^2~d\epsilon\right]~\Vert x-y\Vert^2
\end{array}
$$
combined with (\ref{ref-nablax-estimate-0}) and (\ref{ref-nablax-estimate-0-ae-again}) completes the proof.
\end{proof}
These contraction inequalities quantify the stability of the stochastic flow $X_{s,t}(x)$ w.r.t. the initial state $x$.
For instance, the estimate (\ref{ref-nablax-estimate-0-bis}) ensures that the Markov transition semigroup is exponentially stable; that is, we have that
\begin{equation}\label{ref-eta-mu-cv}
\mathbb{W}_2\left(\mu_0 P_{s,t},\mu_1 P_{s,t}\right)\leq c~\exp{\left[-\lambda_A(t-s)\right]}~ \mathbb{W}_2\left(\mu_0 ,\mu_1 \right)
\end{equation}
For the Langevin diffusions discussed in (\ref{ref-HA-Langevin}) the stochastic flow is time homogeneous; that is we have that $X_{s,t}=X_{t-s}:=X_{0,(t-s)}$
and $P_{s,t}=P_{t-s}:=P_{0,(t-s)}$.
In addition, when $\sigma(x)=\sigma~I$, the diffusion flow $X_t(x)$ has a unique invariant measure on $\mathbb{R}^d$ given by
the Boltzmann-Gibbs measure
\begin{equation}\label{ref-gibbs}
\pi(dx)=\frac{1}{Z}~\exp{\left(-\frac{2}{\sigma^2}\,U(x)\right)}~dx\quad \mbox{\rm with}\quad Z:=\int~~ e^{-\frac{2}{\sigma^2}U(x)}~dx
\end{equation}
From (\ref{ref-HA-Langevin}), it follows that
$$
\nabla^2 U\geq \lambda~I\quad\Longrightarrow\quad
\mathbb{W}_n\left(\mu P_{s,t},\pi\right)\leq \exp{\left[-\lambda(t-s)\right]}~ \mathbb{W}_n\left(\mu ,\pi \right)
$$ for all $n \geq 1$.
Taking the trace in (\ref{def-C}) we also find that
$$
\begin{array}{l}
\displaystyle d \,\Vert \nabla X_{s,t}(x)\Vert^2_F
=\mbox{\rm Tr}\left[\nabla X_{s,t}(x) ~A_t\left(X_{s,t}(x)\right)~\nabla X_{s,t}(x)^{\prime}\right]~dt+d N_{s,t}(x)
\end{array}
$$
with the martingale
$$
d N_{s,t}(x)=\sum_{1\leq k \leq r} \mbox{\rm Tr}\left(\nabla X_{s,t}(x) \left[\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)+\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)^{\prime}\right]\nabla X_{s,t}(x)^{\prime}\right)~dW^k_t
$$
Observe that
$$
\partial_t\langle N_{s,\mbox{\LARGE .}}(x)\rangle_t=\sum_k~\mbox{\rm Tr}\left(\nabla X_{s,t}(x) \left[\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)+\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)^{\prime}\right]\nabla X_{s,t}(x)^{\prime}\right)^2
$$
This implies that
$$
\begin{array}{l}
\displaystyle \partial_t\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^4_F\right)
=2~\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^2_F~\mbox{\rm Tr}\left[\nabla X_{s,t}(x) ~A_t\left(X_{s,t}(x)\right)~\nabla X_{s,t}(x)^{\prime}\right]\right)\\
\\
\hskip3cm\displaystyle+\sum_{1\leq k\leq r}\mathbb{E}\left(\mbox{\rm Tr}\left(\nabla X_{s,t}(x) \left[\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)+\nabla\sigma_{t,k}\left(X_{s,t}(x)\right)^{\prime}\right]\nabla X_{s,t}(x)^{\prime}\right)^2\right)
\end{array}
$$
Whenever $( {\cal T})_2$ is met, we have the estimate
$$
\begin{array}{l}
\displaystyle \partial_t\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^4_F\right)
\leq -4\left[\lambda_A-\rho(\nabla\sigma)^2\right]~\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^4_F\right)
\end{array}
$$
with the uniform log-norm parameter $\rho(\nabla\sigma)$ defined in (\ref{def-nabla-sigma}).
This yields the estimate
$$
\partial_t\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^4_F\right)^{1/4}
\leq -\left[\lambda_A-\rho(\nabla\sigma)^2\right]~\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^4_F\right)^{1/4}
$$
More generally, we readily check the following result.
\begin{prop}\label{def-4th-prop}
When condition $( {\cal T})_n$ is met we have the following time-uniform bounds
\begin{equation}\label{def-4th}
\mathbb{E}\left(\Vert \nabla X_{s,t}(x)\Vert^{n}_F\right)^{1/n}\leq \sqrt{d}~e^{-\left[\lambda_A-(n-2)\rho(\nabla\sigma)^2/2\right](t-s)}
\end{equation}
\end{prop}
\subsection{The Hessian process}\label{hessian-sec}
In terms of the tensor product (\ref{tensor-notation}), we have the matrix diffusion equation
$$
\begin{array}{l}
d \, \nabla^2 X_{s,t}(x)\\
\\
=\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]\nabla^2b_t(X_{s,t}(x))+\nabla^2 X_{s,t}(x)\nabla b_t(X_{s,t}(x))\right]dt+d {\cal M }_{s,t}(x)
\end{array}$$
with the null matrix initial condition $\nabla^2 X_{s,s}(x)=0$ and the matrix-valued martingale
$$
d {\cal M }_{s,t}(x)=\sum_{1\leq k\leq r}\left(\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]\nabla^2\sigma_{t,k}(X_{s,t}(x))+\nabla^2 X_{s,t}(x)\nabla \sigma_{t,k}(X_{s,t}(x))\right)~dW^k_t
$$
Consider the tensor functions
\begin{equation}\label{tensor-functions-ref}
\upsilon_t:=
\sum_{1\leq k\leq r}(\nabla^2\sigma_{t,k})~(\nabla^2 \sigma_{t,k})^{\prime}\quad \mbox{\rm and}\quad \tau_t:=\nabla^2b_t+
\sum_{1\leq k\leq r}(\nabla^2\sigma_{t,k})~(\nabla\sigma_{t,k})^{\prime}
\end{equation}
After some computations, we check that
$$
\begin{array}{l}
\displaystyle d \, \left[\nabla^2 X_{s,t}(x)\nabla^2 X_{s,t}(x)^{\prime}\right]\\
\\
\displaystyle=\left\{\left[\nabla^2 X_{s,t}(x)~A_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]+2\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\tau_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]_{\tiny sym}\right.\\
\\
\displaystyle\hskip3cm\left.+\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]\upsilon_t(X_{s,t}(x))\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]^{\prime}\right]\right\}dt+d {\cal N }_{s,t}(x)
\end{array}$$
with the matrix function $A_t(x)$ defined in (\ref{ref-mat-Upsilon}) and the tensor-valued martingale
$$
\begin{array}{l}
\displaystyle d {\cal N }_{s,t}(x)=2~\sum_{1\leq k\leq r}
\left\{\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\nabla^2\sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right.\\
\\
\displaystyle\hskip6cm\left.+\nabla^2 X_{s,t}(x)~\nabla \sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right\}_{\tiny sym}~dW^k_t
\end{array}
$$
When $\nabla\sigma=0$ the above equation reduces to
$$
\begin{array}{l}
\displaystyle \partial_t\, \left[\nabla^2 X_{s,t}(x)\nabla^2 X_{s,t}(x)^{\prime}\right]\\
\\
\displaystyle=\left[\nabla^2 X_{s,t}(x)~A_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]+2\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\nabla^2b_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]_{\tiny sym}
\end{array}$$
Whenever $( {\cal T})_2$ is met, taking the trace in the above display we check that
$$
\partial_t\, \Vert\nabla^2 X_{s,t}(x)\Vert_F^2\leq -2\lambda_A~\Vert\nabla^2 X_{s,t}(x)\Vert_F^2+2\Vert\nabla^2b\Vert_{F}~\Vert\nabla X_{s,t}(x)\Vert_F^2~\Vert\nabla^2 X_{s,t}(x)\Vert_F
$$
This yields the estimate
$$
\partial_t\, \Vert\nabla^2 X_{s,t}(x)\Vert_F\leq -\lambda_A~\Vert\nabla^2 X_{s,t}(x)\Vert_F+\Vert\nabla^2b\Vert_{F}~\Vert\nabla X_{s,t}(x)\Vert_F^2
$$
Using (\ref{ref-nablax-estimate-0-ae}) this implies that
$$
\Vert\nabla^2 X_{s,t}(x)\Vert_F\leq \Vert\nabla^2b\Vert_{F}~e^{-\lambda_A(t-s)}~\int_s^t~e^{\lambda_A(u-s)}~\Vert\nabla X_{s,u}(x)\Vert_F^2~du\leq
\frac{d}{\lambda_A}~\Vert\nabla^2b\Vert_{F}~e^{-\lambda_A(t-s)}
$$
This ends the proof of the almost sure estimate (\ref{ref-nabla2x-estimate-0-ae}).
For more general models, we have that
$$
\begin{array}{l}
\displaystyle d \, \Vert \nabla^2 X_{s,t}(x)\Vert^2_F\\
\\
\displaystyle=\left\{\mbox{\rm Tr}\left[\nabla^2 X_{s,t}(x)~A_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]+2~\mbox{\rm Tr}\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\tau_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]\right.\\
\\
\displaystyle\hskip3cm\left.+\mbox{\rm Tr}\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]\upsilon_t(X_{s,t}(x))\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]^{\prime}\right]\right\}dt+d M_{s,t}(x)
\end{array}$$
with a continuous martingale $M_{s,t}(x)$ with angle bracket
$$
\begin{array}{l}
\displaystyle \partial_t\langle M_{s,\mbox{\LARGE .}}(x)\rangle_t\\
\\
\displaystyle=4~\sum_{1\leq k\leq r}\mbox{\rm Tr}
\left\{\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\nabla^2\sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right.\\
\displaystyle\hskip7cm\left.+\nabla^2 X_{s,t}(x)~\nabla \sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right\}^2
\end{array}
$$
\begin{prop}\label{prop-nabla-2-estimate}
Assume $( {\cal T})_n$ is met. In this situation, for any $\epsilon>0$ s.t. $\lambda_A(n)>\epsilon$ we have
\begin{equation}\label{eq-prop-nabla-2-estimate}
\mathbb{E}\left(\Vert \nabla^2 X_{s,t}(x)\Vert^{n}_F\right)^{1/n}\leq n~\epsilon^{-1}~\rchi(b,\sigma)~\exp{\left(-\left[\lambda_A(n)-\epsilon\right](t-s)\right)}
\end{equation}
with the parameters $\rchi(b,\sigma)$ and $\lambda_A(n)$ defined in (\ref{def-chi-b}) and (\ref{ref-lambda-A-n}).
\end{prop}
In the above display, $\rho_{\star}(\nabla \sigma)$ is defined in (\ref{def-nabla-sigma}).
The proof of the above estimate is technical and thus housed in the appendix on page~\pageref{prop-nabla-2-estimate-proof}.
\subsection{Bismut-Elworthy-Li formulae}
We further assume that the ellipticity condition (\ref{elip}) is met.
In this situation, we can extend gradient semigroup formulae to measurable functions using the Bismut-Elworthy-Li formula
\begin{equation}\label{bismut-omega}
\nabla P_{s,t}(f)(x)=
\mathbb{E}\left(f(X_{s,t}(x))~ \tau^{\omega}_{s,t}(x)\right) \end{equation}
with the stochastic process
$$
\tau^{\omega}_{s,t}(x):=\int_s^t~ \partial_u \omega_{s,t}(u)~
\nabla X_{s,u}(x)~ a_u(X_{s,u}(x))^{-1/2 }~dW_u
$$
The above formula is valid for any function $\omega_{s,t}:u\in [s,t]\mapsto \omega_{s,t}(u)\in \mathbb{R} $ of the following form
\begin{equation}\label{bismut-omega-varphi}
\omega_{s,t}(u)=\varphi\left((u-s)/(t-s)\right)~\Longrightarrow \partial_u \omega_{s,t}(u)=\frac{1}{t-s}~\partial\varphi\left((u-s)/(t-s)\right)~
\end{equation}
for some non-decreasing differentiable function $\varphi$ on $[0,1]$ with bounded continuous derivative and such that
$$
(\varphi(0),\varphi(1))=(0,1)\Longrightarrow \omega_{s,t}(t)-\omega_{s,t}(s)=1
$$
Whenever $( {\cal T})_2$ is met, combining (\ref{ref-nablax-estimate-0}) with (\ref{bismut-omega}), for any $f$ s.t. $\Vert f\Vert\leq 1$ we check that
\begin{eqnarray*}
\Vert \nabla P_{s,t}(f)\Vert^2&\leq&
\mathbb{E}\left( \Vert\tau^{\omega}_{s,t}(x)\Vert^2 \right)\\
&\leq& \kappa_1~ \int_s^t~e^{-2\lambda_A (u-s)}~
\Vert \partial_u \omega_{s,t}(u)\Vert^2~du=~\frac{ \kappa_1}{t-s}~ \int_0^1~e^{-2\lambda_A (t-s)v}~
\left( \partial\varphi(v)\right)^2~dv
\end{eqnarray*}
Let $\varphi_{\epsilon}$ with $\epsilon\in ]0,1[$ be some differentiable function on $[0,1]$, null on $[0,1-\epsilon]$, such that $\vert \partial\varphi_{\epsilon}(u)\vert\leq c/\epsilon$
and
$(\varphi_{\epsilon}(1-\epsilon),\varphi_{\epsilon}(1))=(0,1)$. For instance we can choose
$$
\varphi_{\epsilon}(u)=\left\{
\begin{array}{ccl}
0&\mbox{\rm if}&u\in [0,1-\epsilon]\\
\displaystyle1+\cos{\left(\left(1+\frac{1-u}{\epsilon}\right)\frac{\pi}{2}\right)}&\mbox{\rm if}&u\in [1-\epsilon,1]
\end{array}
\right.
$$
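A direct computation confirms the required properties: $\varphi_{\epsilon}(1-\epsilon)=1+\cos{(\pi)}=0$, $\varphi_{\epsilon}(1)=1+\cos{(\pi/2)}=1$, and on $[1-\epsilon,1]$ we have
$$
\partial\varphi_{\epsilon}(u)=\frac{\pi}{2\epsilon}~\sin{\left(\left(1+\frac{1-u}{\epsilon}\right)\frac{\pi}{2}\right)}\in \left[0,\frac{\pi}{2\epsilon}\right]
$$
so that $\varphi_{\epsilon}$ is non-decreasing with $\vert \partial\varphi_{\epsilon}(u)\vert\leq c/\epsilon$ for the choice $c=\pi/2$.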
In this situation, we check that
$$
\Vert \nabla P_{s,t}(f)\Vert^2\leq ~\frac{\kappa_2}{\epsilon^2}~\frac{1}{t-s}~ \int_{1-\epsilon}^1~e^{-2\lambda_A (t-s)v}~dv
$$
from which we find the rather crude uniform estimate
\begin{equation}\label{bismut-est}
\Vert \nabla P_{s,t}(f)\Vert\leq \frac{ \kappa}{\epsilon}~\frac{1}{\sqrt{t-s}}~e^{-\lambda_A(1-\epsilon) (t-s)}
\end{equation}
In the same vein, for any $s\leq u\leq t$ we have the formulae
\begin{eqnarray}
\nabla^2 P_{s,t}(f)(x)&=& \mathbb{E}\left(f(X_{s,t}(x))~\tau^{[2],\omega}_{s,t}(x)+\nabla X_{s,t}(x)\,\nabla f(X_{s,t}(x))~ \tau^{\omega}_{s,t}(x)^{\prime}\right)\label{bismut-omega-2-grad}\\
&=& \mathbb{E}\left(f(X_{s,t}(x))~ \left[\tau^{[2],\omega}_{s,u}(x)+\nabla X_{s,u}(x)~\tau^{\omega}_{u,t}(X_{s,u}(x))\,\tau^{\omega}_{s,u}(x)^{\prime}\right]\right) \label{bismut-omega-2} \end{eqnarray}
with the process
$$
\begin{array}{l}
\tau^{[2],\omega}_{s,t}(x)\\
\\
\displaystyle :=\int_s^t~ \partial_u \omega_{s,t}(u)~\left[\nabla^2 X_{s,u}(x)~ a_u(X_{s,u}(x))^{-1/2 }+
\left(\nabla X_{s,u}(x)\otimes \nabla X_{s,u}(x)\right)~ (\overline{\nabla} a_u^{-1/2})(X_{s,u}(x))\right]~dW_u
\end{array}$$
In the above display $\overline{\nabla} a^{-1/2}_u$ stands for the tensor function
$$
(\overline{\nabla} a^{-1/2}_u(x))_{(i,j),k}:= \partial_{x_i}a^{-1/2}_u(x)_{j,k}=-\left(a_u^{-1/2}(x)\left[ \partial_{x_i}a_u^{1/2}(x)\right]a_u^{-1/2}(x)\right)_{j,k}
$$
A detailed proof of the formulae (\ref{bismut-omega-2-grad}) and (\ref{bismut-omega-2})
in the context of nonlinear diffusion flows can be found in the appendix in~\cite{mp-var-19}.
Observe that
$$
(\ref{elip})\Longrightarrow
\sup_i \Vert \partial_{x_i}a^{-1/2}_u(x)\Vert\leq c~\Vert \nabla\sigma\Vert/ \upsilon
$$
Whenever $( {\cal T})_2$ is met,
using the estimate (\ref{eq-prop-nabla-2-estimate}), for any $\epsilon\in ]0,1[$ we check that
\begin{equation}\label{bismut-est-P2-grad}
\Vert \nabla^2 P_{s,t}(f)\Vert\leq\frac{\kappa}{\epsilon}~\frac{1}{\sqrt{t-s}}~e^{-\lambda_A (t-s) (1-\epsilon)}~\left(\Vert f\Vert+\Vert\nabla f\Vert\right)~\end{equation}
In the same vein, using (\ref{bismut-omega-2}) for any $u\in ]s,t[$ and any bounded measurable function $f$ s.t.
$\Vert f\Vert\leq 1$ we also check the rather crude uniform estimate
$$
\Vert \nabla^2 P_{s,t}(f)\Vert\leq \frac{\kappa_1}{\epsilon}~\frac{1}{\sqrt{u-s}}~e^{-\lambda_A (u-s) (1-\epsilon)}+\frac{\kappa_2}{\epsilon^2}~\frac{1}{\sqrt{(t-u)(u-s)}}~e^{-\lambda_A (u-s) }~e^{-\lambda_A (t-s) (1-\epsilon)}
$$
Choosing $u=s+(1-\epsilon)(t-s)$ in the above display, for any $\epsilon\in ]0,1[$ we obtain the uniform estimate
\begin{equation}\label{bismut-est-P2}
\Vert \nabla^2 P_{s,t}(f)\Vert\leq \frac{c_1}{\epsilon\sqrt{1-\epsilon}}~\frac{1}{\sqrt{t-s}}~e^{-\lambda_A (1-\epsilon)^2(t-s)}+\frac{c_2}{\epsilon^2}\frac{1}{\sqrt{\epsilon(1-\epsilon)}}~\frac{1}{t-s}~ e^{-2\lambda_A (1-\epsilon)(t-s)}
\end{equation}
The extended versions of the above formulae in the context of diffusions on differentiable
manifolds can be found in the series of articles~\cite{aht-03,bismut,Elworthy,xm-li,thompson}.
\section{Backward semigroup analysis}\label{proof-theo-al-gr}
\subsection{The two-sided stochastic integration}\label{two-sided}
For any given time horizon $s\leq t$ we have the rather well known backward stochastic flow equation
\begin{equation}\label{ref-backward-flow}
X_{s,t}(x)=x+\int_s^t\left[\nabla X_{u,t}(x)^{\prime}~b_u(x)+\frac{1}{2}~~ \nabla ^2X_{u,t}(x)^{\prime}~a_u(x)~\right]~du+\int_s^t\nabla X_{u,t}(x)^{\prime}\sigma_u(x)~dW_u
\end{equation}
The right hand side integral is understood as a conventional backward It\^o-integral.
In a more synthetic form, the above backward formula reduces to (\ref{backward-synthetic}).
An elementary proof of the above formula based on Taylor expansions is presented in~\cite{daprato-3}; different approaches can also be found in~\cite{kunita-2} and~\cite{krylov}. Extensions of the backward It\^o formula (\ref{ref-backward-flow}) to jump type diffusion models as well as nonlinear diffusion flows can also be found in~\cite{daprato-2} and in the appendix of~\cite{mp-var-18}.
Consider the discrete time interval $[s,t]_h:=\{u_0,\ldots,u_{n-1}\}$ associated with some refining time mesh $u_{i+1}=u_i+h$ from $u_0=s$ to $u_{n}=t$, for some time step $h>0$. In this notation, combining (\ref{Alekseev-grobner-intro-0}) with (\ref{Alekseev-grobner-intro-ref-2}) for any $u\in [s,t]_h$ we have the Taylor type approximation
$$
\begin{array}{l}
X_{u+h,t}\circ \overline{X}_{s,u+h}-X_{u,t}\circ \overline{X}_{s,u}\\
\\
\displaystyle\simeq -\left(\left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta b_u(\overline{X}_{s,u}(x))
+\frac{1}{2}\,\left(\nabla ^2X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta a_u(\overline{X}_{s,u}(x))~\right)~h\\
\\
\displaystyle\hskip3cm- \left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta\sigma_u(\overline{X}_{s,u}(x))~(W_{u+h}-W_u)
\end{array}
$$
This yields the interpolating forward-backward telescoping sum formula
\begin{equation}\label{ref-telescoping-sum}
\begin{array}{l}
X_{s,t}(x)-\overline{X}_{s,t}(x)\\
\\
\displaystyle=-\sum_{u\in [s,t]_h}\left[X_{u+h,t}(\overline{X}_{s,u+h}(x))-X_{u,t}\left(\overline{X}_{s,u}(x)\right)\right]\\
\\
\displaystyle\simeq\sum_{u\in [s,t]_h}\left(\left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta b_u(\overline{X}_{s,u}(x))
+\frac{1}{2}\,\left(\nabla ^2X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta a_u(\overline{X}_{s,u}(x))~\right)~h\\
\\
\hskip3cm\displaystyle+\sum_{u\in [s,t]_h}\left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~\Delta\sigma_u(\overline{X}_{s,u}(x))~(W_{u+h}-W_u)
\end{array}
\end{equation}
We obtain formally (\ref{Alekseev-grobner}) by summing the above terms and passing to the limit $h\downarrow 0$.
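Before turning to the rigorous construction, we illustrate the above telescoping decomposition on a toy model. The following Python sketch (an illustration only; the linear drifts, diffusion constants and time step are assumptions chosen for the example) compares two Euler chains driven by the same Brownian increments. For linear flows the tangent process $\nabla X_{u,t}$ is the deterministic product of the one-step factors and the Hessian term vanishes, so that the telescoping identity holds exactly at the discrete level.
\begin{verbatim}
import numpy as np

# Toy check of the forward-backward telescoping identity for the Euler
# discretizations of dX = -lam*X dt + sig dW and dXb = -lam_b*Xb dt + sig_b dW
# driven by the same Brownian path (all parameter values are assumptions).
rng = np.random.default_rng(0)
lam, sig = 1.0, 0.7
lam_b, sig_b = 1.5, 0.4
h, n, x0 = 1e-3, 1000, 1.0
dW = rng.normal(0.0, np.sqrt(h), n)

X = np.empty(n + 1); Xb = np.empty(n + 1)
X[0] = Xb[0] = x0
for k in range(n):                      # forward Euler chains
    X[k + 1] = X[k] - lam * X[k] * h + sig * dW[k]
    Xb[k + 1] = Xb[k] - lam_b * Xb[k] * h + sig_b * dW[k]

# tangent of the backward flow: grad X_{u+h,t} = (1 - lam*h)^(n-1-k)
grad = (1.0 - lam * h) ** np.arange(n - 1, -1, -1)

# telescoping sum with Delta b(y) = -(lam - lam_b)*y and Delta sigma = sig - sig_b
S = np.sum(grad * (-(lam - lam_b) * Xb[:-1] * h + (sig - sig_b) * dW))
print(X[-1] - Xb[-1], S)                # the two values agree up to round-off
\end{verbatim}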
To be more precise, we follow the two-sided stochastic integration calculus introduced by Pardoux and Protter in~\cite{pardoux-protter}. As mentioned by the authors, this methodology can be seen as a variation of It\^o's original construction of the stochastic integral.
In this framework,
the Skorohod stochastic integral (\ref{def-S-sk}) arising in (\ref{ref-interp-du}) is defined by the $\mathbb{L}_2$-convergence
\begin{equation}\label{sk-integral}
\begin{array}[b]{l}
S_{s,t}(\varsigma)(x)\\
\\
\displaystyle:=\lim_{h\rightarrow 0}\sum_{u\in [s,t]_h} \left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~
\varsigma_{u}(\overline{X}_{s,u}(x))~(W_{u+h}-W_{u})
\end{array}\quad \mbox{\rm with}\quad \varsigma_u=\Delta \sigma_{u}
\end{equation}
The proof of the above assertion is based on a slight extension of proposition 3.3 in~\cite{pardoux-protter} to Skorohod integrals of the form (\ref{def-S-sk}).
For the convenience of the reader, a detailed proof of the above assertion for one dimensional models is provided in section~\ref{sec-extended-2-sided}.
Using (\ref{sk-integral}), the complete proof of (\ref{ref-interp-du}) now follows the same line of arguments as the one used in the proof of the It\^o-type change of variable formula stated in theorem 6.1 in~\cite{pardoux-protter}; thus it is skipped.
\subsection{A multivariate stochastic interpolation formula}\label{ref-rig}
In terms of the tensor product (\ref{tensor-notation}), for any $p\geq 1$ and any twice differentiable function $f$ from $\mathbb{R}^d$ into $\mathbb{R}^p$ with at most polynomial growth
the function $ F_{s,t}:= \mathbb{P}_{s,t}(f)$ satisfies the backward formula (\ref{backward-random-fields})
with the random fields
$$
G _{u,t}(x):=\nabla F_{u,t}(x)^{\prime}~b_u(x) +\frac{1}{2}~\nabla^2 F_{u,t}(x)^{\prime}~a_{u}(x)\quad
\mbox{\rm and}\quad H _{u,t}(x):=\nabla F_{u,t}(x)^{\prime}~\sigma_u(x)~
$$
Using the quantitative estimates presented in section~\ref{q-sec}, we check that the regularity conditions $(H_1)$, $(H_2)$ and $(H_3)$ stated in section~\ref{sec-1-biw-intro} are satisfied.
Rewritten in terms of the stochastic semigroups $ \mathbb{P}_{s,t}$ and $\overline{\mathbb{P}}_{s,t}$ we obtain the forward-backward multivariate interpolation formula
\begin{equation}\label{Alekseev-grobner-sg-ae}
\mathbb{P}_{s,t}(f)(x)- \overline{\mathbb{P}}_{s,t}(f)(x)= \mathbb{T}_{s,t}(f,\Delta a,\Delta b)(x)+\mathbb{S}_{s,t}(f,\Delta \sigma)(x)
\end{equation}
with the stochastic integro-differential operator
\begin{equation}\label{Alekseev-grobner-sg-ae-int}
\begin{array}{l}
\displaystyle \mathbb{T}_{s,t}(f,\Delta a,\Delta b)(x)\\
\\
\displaystyle:=\int_s^t~\left[\nabla \mathbb{P}_{u,t}(f)(\overline{X}_{s,u}(x))^{\prime}~\Delta b_u(\overline{X}_{s,u}(x)) +\frac{1}{2}~\nabla^2 \mathbb{P}_{u,t}(f)(\overline{X}_{s,u}(x))^{\prime}~\Delta a_{u}(\overline{X}_{s,u}(x))\right]~du
\end{array}
\end{equation}
and the two-sided stochastic integral term given by
\begin{equation}\label{Alekseev-grobner-sg-ae-f}
\mathbb{S}_{s,t}(f,\Delta \sigma)(x):=\int_s^t~ \nabla \mathbb{P}_{u,t}(f)(\overline{X}_{s,u}(x))^{\prime}~\Delta\sigma_u(\overline{X}_{s,u}(x))~dW_u
\end{equation}
Using elementary differential calculus, for any twice differentiable (column vector-valued) function $f$ from $\mathbb{R}^d$ into $\mathbb{R}^p$ we readily check the gradient and the Hessian formulae
\begin{eqnarray}
\nabla \,\mathbb{P}_{s,t}(f)(x)&=&\nabla X_{s,t}(x)~\mathbb{P}_{s,t}(\nabla f)(x)\nonumber\\
\nabla^2 \mathbb{P}_{s,t}(f)(x)
&=&\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\mathbb{P}_{s,t}(\nabla^2 f)(x)+\nabla^2 X_{s,t}(x)~\mathbb{P}_{s,t}(\nabla f)(x)\label{grad-sg}
\end{eqnarray}
This shows that $\mathbb{T}_{s,t}(f,\Delta a,\Delta b)$ and $ \mathbb{S}_{s,t}(f,\Delta \sigma)$ have the same form as the integrals $T_{s,t}(\Delta a,\Delta b)$ and $S_{s,t}(\Delta \sigma)$ defined in (\ref{Alekseev-grobner}) and (\ref{def-T-st}) up to some terms involving the gradient and the Hessian of the function $f$. For instance, we have the two-sided stochastic integral formula
$$
\mathbb{S}_{s,t}(f,\Delta \sigma)(x)=\int_s^t~
\mathbb{P}_{u,t}(\nabla f)(\overline{X}_{s,u}(x))^{\prime}~
\nabla X_{u,t}(\overline{X}_{s,u}(x))^{\prime}~\Delta\sigma_u(\overline{X}_{s,u}(x))~dW_u
$$
Also observe that (\ref{Alekseev-grobner-sg-ae}) coincides with (\ref{Alekseev-grobner}) for the identity function; that is, we have that
$$
f(x)=x\Longrightarrow \mathbb{T}_{s,t}(f,\Delta a,\Delta b)=T_{s,t}(\Delta a,\Delta b)\quad \mbox{\rm and}\quad \mathbb{S}_{s,t}(f,\Delta \sigma)=S_{s,t}(\Delta \sigma)
$$
The above discussion shows that the analysis of the differences of the stochastic semigroups $(\mathbb{P}_{s,t}- \overline{\mathbb{P}}_{s,t})$ in terms of the tangent and the Hessian processes is essentially the same as the one of
the difference of the stochastic flows $(X_{s,t}-\overline{X}_{s,t})$. For instance, using the discussion provided in section~\ref{sec-sk-f}, when the gradient and the Hessian of the function $f$ are uniformly bounded the estimates stated in theorem~\ref{theo-al-gr-2} can be easily extended to the level of the stochastic semigroups.
The $\mathbb{L}_2$-norms of the two-sided stochastic integrals in (\ref{Alekseev-grobner}) and (\ref{Alekseev-grobner-sg-ae}) are uniformly estimated
as soon as the pairs of drift and diffusion functions $(b_t,\sigma_t)$ and $(\overline{b}_t,\overline{\sigma}_t)$ satisfy condition $( {\cal T})_2$.
For a more thorough discussion we refer to section~\ref{var-skorohod}, see for instance the $\mathbb{L}_n$-norm estimates presented in theorem~\ref{theo-quantitative-sko} applied to the difference function $\varsigma_t=\Delta\sigma_t$.
\subsection{Semigroup perturbation formulae}\label{sg-sect}
Although the Skorohod integral in the r.h.s. of (\ref{Alekseev-grobner-sg-ae}) is not a martingale (w.r.t. the Brownian motion filtration), it is centered (see for instance (\ref{isometry}) and the argument provided in the beginning of section~\ref{var-skorohod}). Thus, taking the expectation in the univariate version of (\ref{Alekseev-grobner-sg-ae}) we obtain the following interpolation semigroup decomposition.
\begin{cor}\label{weak-theo-al-gr}
For any twice differentiable function $f$ from $\mathbb{R}^d$ into $\mathbb{R}$ with bounded derivatives we have
the forward-backward semigroup interpolation formula
\begin{equation}\label{Alekseev-grobner-sg}
\begin{array}{l}
\displaystyle
P_{s,t}(f)(x)-\overline{P}_{s,t}(f)(x)
=\int_s^t~\mathbb{E}\left(
\langle \nabla P_{u,t}(f)(\overline{X}_{s,u}(x)),\Delta b_u(\overline{X}_{s,u}(x))\rangle
\right)~du\\
\\
\displaystyle\hskip5cm+\frac{1}{2}~\int_s^t~\mathbb{E}\left(\mbox{\rm Tr}\left[\nabla^2 P_{u,t}(f) (\overline{X}_{s,u}(x))~\Delta a_{u}(\overline{X}_{s,u}(x))\right]\right)~du
\end{array}
\end{equation}
In addition, under some appropriate regularity conditions for any differentiable function $f$ such that $\Vert f\Vert\leq 1$ and $\Vert\nabla f\Vert\leq 1$ we have
the uniform estimate
\begin{equation}\label{ref-P-2}
\vert P_{s,t}(f)(x)-\overline{P}_{s,t}(f)(x) \vert \leq \kappa~\left[\vertiii{\Delta a(x)}_{1}+\vertiii{\Delta b(x)}_{1}\right]
\end{equation}
\end{cor}
Rewritten in terms of the infinitesimal generators $(L_t,\overline{L}_t)$ of the stochastic flows $(X_{s,t},\overline{X}_{s,t})$ we recover the rather well known semigroup perturbation formula
$$
P_{s,t}=\overline{P}_{s,t}+\int_s^t~\overline{P}_{s,u} (L_u-\overline{L}_u)P_{u,t} ~du
\quad \Longleftrightarrow\quad (\ref{Alekseev-grobner-sg})
$$
The above formula can be readily checked using the interpolating formula given for any $s\leq u<t$ by the evolution equation
$$
\partial_u (\overline{P}_{s,u}P_{u,t})=(\partial_u \overline{P}_{s,u})P_{u,t}+\overline{P}_{s,u}(\partial_uP_{u,t})= \overline{P}_{s,u}\overline{L}_uP_{u,t}-
\overline{P}_{s,u}L_uP_{u,t}$$
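As a sanity check of the above equivalence, the following Python sketch (illustrative only; the drift parameters and the test function $f(x)=x$ are assumptions chosen so that everything is available in closed form) verifies the perturbation formula numerically for two one-dimensional Ornstein-Uhlenbeck semigroups with drifts $b(x)=-\lambda x$ and $\overline{b}(x)=-\overline{\lambda} x$ and a common diffusion coefficient. In this case $P_{u,t}(f)(x)=x\,e^{-\lambda(t-u)}$, and the difference $(L_u-\overline{L}_u)$ only involves the drift term.
\begin{verbatim}
import numpy as np

lam, lam_b, s, t, x = 1.0, 2.5, 0.0, 1.5, 0.8   # assumed toy parameters

# left hand side: P_{s,t}(f)(x) - bar P_{s,t}(f)(x) for f(x) = x
lhs = x * (np.exp(-lam * (t - s)) - np.exp(-lam_b * (t - s)))

# integrand u -> bar P_{s,u}[ (b - bar b) d/dx P_{u,t}(f) ](x)
u = np.linspace(s, t, 20001)
g = -(lam - lam_b) * np.exp(-lam * (t - u)) * x * np.exp(-lam_b * (u - s))
rhs = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(u)))  # trapezoidal rule
print(lhs, rhs)   # the two values coincide up to quadrature error
\end{verbatim}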
Now we come to the proof of (\ref{ref-P-2}).
Whenever $( {\cal T})_2$ is met, combining (\ref{bismut-est}) with (\ref{bismut-est-P2-grad}) for any differentiable function
$f$ s.t. $\Vert f\Vert\leq 1$ and $\Vert\nabla f\Vert\leq 1$ and for any $\epsilon\in ]0,1[$ we check that
$$
\vert P_{s,s+t}(f)(x)-\overline{P}_{s,s+t}(f)(x) \vert \leq \frac{ \kappa}{\epsilon} ~\left[\vertiii{\Delta a(x)}_{1}+\vertiii{\Delta b(x)}_{1}\right]~\int_0^t~
~\frac{1}{\sqrt{u}}~e^{-\lambda_A(1-\epsilon) u}~du
$$
This ends the proof of (\ref{ref-P-2}).\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
After some elementary manipulations the forward-backward interpolation formula (\ref{Alekseev-grobner-sg}) yields the following corollary.
\begin{cor}
Let $X_t$ and $\overline{X}_t$ be some ergodic diffusions associated with some time homogeneous drift and diffusion functions
$(b,\sigma)$ and $(\overline{b},\overline{\sigma})$.
The invariant probability measures $\pi$ and $\overline{\pi}$ of $X_t$ and $\overline{X}_t$
are connected for any twice differentiable function $f$ from $\mathbb{R}^d$ into $\mathbb{R}$ with bounded derivatives by the following interpolation formula
\begin{equation}\label{diff-pi}
(\pi-\overline{\pi})(f) =\int_0^{\infty}~\mathbb{E}\left(
\left\langle \nabla P_{t}(f)(\overline{Y}),\Delta b(\overline{Y})\right\rangle
+\frac{1}{2}~\mbox{\rm Tr}\left[\nabla^2 P_{t}(f) (\overline{Y})~\Delta a(\overline{Y})\right]\right)~dt
\end{equation}
In the above display $\overline{Y}$ stands for a random variable with distribution $\overline{\pi}$ and $P_t$ stands for the Markov transition semigroup of
the process $X_t$.
\end{cor}
The formula (\ref{diff-pi}) can be used to estimate the invariant measure of a stochastic flow associated with some perturbations of the drift and the diffusion function.
For instance, for homogeneous Langevin diffusions $X_{t}$ associated with some convex potential function $U$
we have
$$
b=-\nabla U\quad \mbox{\rm and}\quad\sigma=I\quad \Longrightarrow\quad \pi(dx)~\propto~\exp{\left(-2\,U(x)\right)}~dx
$$
In the above display, $dx$ stands for the Lebesgue measure on $\mathbb{R}^d$. In this situation, using (\ref{diff-pi}), for any ergodic
diffusion flow $\overline{X}_{t}$ with some drift $\overline{b}$ and a unit diffusion matrix we have
$$
\overline{\pi}(f)=\pi(f)+\int_0^{\infty}~\mathbb{E}\left(
\left\langle (\overline{b}+\nabla U)(\overline{Y}),\nabla P_{t}(f)(\overline{Y})\right\rangle\right)~dt
$$
Notice that the above formula is implicit as the r.h.s. term depends on $ \overline{\pi}$. By symmetry arguments, we also have
the following more explicit perturbation formula
$$
\overline{\pi}(f)=\pi(f)+\int_0^{\infty}~\mathbb{E}\left(
\left\langle (\overline{b}+\nabla U)(Y),\nabla \overline{P}_{t}(f)(Y)\right\rangle\right)~dt
$$
In the above display ${Y}$ stands for a random variable with distribution ${\pi}$ and $\overline{P}_t$ stands for the Markov transition semigroup of
the process $\overline{X}_t$.
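To illustrate the above perturbation formulae on a case where everything is explicit, consider the toy model $U(x)=x^2/2$, so that $b=-x$ and $\pi=\mathcal{N}(0,1/2)$, together with the shifted drift $\overline{b}(x)=-x+c$, for which $\overline{\pi}=\mathcal{N}(c,1/2)$. The following Python sketch (an illustration with assumed parameters, not part of the mathematical development) integrates the explicit formula numerically for $f(x)=x^2$, using the closed form gradient $\partial_x\overline{P}_t(f)(x)=2\,(c+(x-c)e^{-t})\,e^{-t}$ of the Ornstein-Uhlenbeck semigroup.
\begin{verbatim}
import numpy as np

c = 0.3                                   # assumed drift shift
t = np.linspace(0.0, 40.0, 200001)

# t -> E(<(bar b + grad U)(Y), d/dx bar P_t(f)(Y)>) with Y ~ pi = N(0,1/2),
# bar b + grad U = c and E(Y) = 0
g = 2.0 * c * c * (1.0 - np.exp(-t)) * np.exp(-t)
correction = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))

pi_f = 0.5                                # pi(f) for f(x) = x^2
print(pi_f + correction, c**2 + 0.5)      # both values equal bar pi(f)
\end{verbatim}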
\subsection{Some extensions}\label{sec-extensions}
Several extensions of the forward-backward stochastic interpolation formula (\ref{Alekseev-grobner}) to more general stochastic perturbation processes can be developed. For instance, suppose we are given some stochastic processes $\overline{Y}_{s,t}(x)\in\mathbb{R}^d$ and $\overline{Z}_{s,t}(x)\in \mathbb{R}^{d\times r}$ adapted to the filtration of the Brownian motion $W_t$, and let
$\overline{X}_{s,t}(x)$ be the stochastic flow defined by the stochastic differential equation
\begin{equation}\label{X-Y-Z}
d\overline{X}_{s,t}(x)=\overline{Y}_{s,t}(x)~dt+\overline{Z}_{s,t}(x)~dW_t
\end{equation}
In this situation, the interpolation formula (\ref{ref-interp-du}) remains valid when $\overline{a}_u(\overline{X}_{s,u}(x))$ is replaced by the stochastic matrices
$\overline{Z}_{s,u}(x)\overline{Z}_{s,u}(x)^{\prime}$. This yields without further work the forward-backward stochastic interpolation formula (\ref{Alekseev-grobner}) with the local perturbations
\begin{eqnarray*}
\Delta b_u (\overline{X}_{s,u}(x))&:=&b_u (\overline{X}_{s,u}(x))-\overline{Y}_{s,u}(x)\\
\Delta \sigma_u (\overline{X}_{s,u}(x))&:=&\sigma_u (\overline{X}_{s,u}(x))-\overline{Z}_{s,u}(x)
\quad \mbox{\rm and}\quad
\Delta a_u (\overline{X}_{s,u}(x)):=a_u (\overline{X}_{s,u}(x))-\overline{Z}_{s,u}(x)\overline{Z}_{s,u}(x)^{\prime}
\end{eqnarray*}
The corresponding interpolation formula should be used with some caution as the $\mathbb{L}_2$-norm of the two-sided stochastic integral (\ref{def-S-sk}) depends on the Malliavin differential of the integrand process w.r.t. the Brownian motion; see for instance the variance formula provided in lemma~\ref{lem-var}.
Assume that $\sigma=I$ and the regularity condition $( {\cal T})_2$ is met.
Also suppose
$\overline{X}_{s,t}(x)$ is given by a stochastic differential equation of the form (\ref{X-Y-Z}) with $r=d$ and $\overline{Z}_{s,t}(x)=I$.
Arguing as above, in terms of the tensor product (\ref{tensor-notation}) we have
\begin{equation}\label{X-Y-ref-ag}
X_{s,t}(x)-\overline{X}_{s,t}(x)=\int_s^t~\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~(b_u(\overline{X}_{s,u}(x))-\overline{Y}_{s,u}(x))~du
\end{equation}
Combining (\ref{ref-nablax-estimate-0-ae}) with the generalized Minkowski inequality, we check the following proposition.
\begin{prop}
Assume that $( {\cal T})_2$ is met for some $\lambda_A>0$. In this situation,
for any $1\leq n\leq \infty$ we have the estimates
\begin{equation}\label{Alekseev-grobner-2-n-XY}
\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}\leq \int_s^t~e^{-\lambda_A(t-u)}~\mathbb{E}\left[\Vert b_u(\overline{X}_{s,u}(x))-\overline{Y}_{s,u}(x)\Vert^{n}\right]^{1/n}~du
\end{equation}
\end{prop}
In the same vein, we have
\begin{equation}\label{Alekseev-grobner-2-weak}
P_{s,t}(f)(x)-\overline{P}_{s,t}(f)(x)
=\int_s^t~\mathbb{E}\left(
\langle \nabla P_{u,t}(f)(\overline{X}_{s,u}(x)),b_u(\overline{X}_{s,u}(x))-\overline{Y}_{s,u}(x)\rangle
\right)~du
\end{equation}
For instance, for the Langevin diffusion discussed in (\ref{ref-HA-Langevin}) and (\ref{ref-gibbs}) the weak expansion (\ref{Alekseev-grobner-2-weak})
implies that
\begin{equation}\label{ref-discrete-time}
[\pi\overline{P}_{s,t} -
\pi](f)=\int_s^t~\int\pi(dx)~\mathbb{E}\left(
\langle \nabla P_{t-u}(f)(\overline{X}_{s,u}(x)),\nabla U(\overline{X}_{s,u}(x))+\overline{Y}_{s,u}(x)\rangle
\right)~du
\end{equation}
This yields the $\mathbb{W}_1$-Wasserstein estimate
$$
\mathbb{W}_1(\pi\overline{P}_{s,t},
\pi)\leq \int_s^t~e^{-\lambda_A(t-u)}~\int\pi(dx)~\mathbb{E}\left(
\Vert \nabla U(\overline{X}_{s,u}(x))+\overline{Y}_{s,u}(x)\Vert
\right)~du
$$
Combining (\ref{bismut-est}) with (\ref{ref-discrete-time}), for any $\epsilon\in ]0,1[$ we also have the total variation norm estimate
\begin{equation}\label{bismut-est-overline}
\Vert \pi\overline{P}_{s,t} -
\pi\Vert_{\tiny tv}\leq \frac{c}{\epsilon}~\int_s^t~\frac{1}{\sqrt{t-u}}~e^{-\lambda_A(1-\epsilon) (t-u)}~\left[\int\pi(dx)~\mathbb{E}\left(
\Vert \nabla U(\overline{X}_{s,u}(x))+\overline{Y}_{s,u}(x)\Vert
\right)\right]~du
\end{equation}
\section{Skorohod fluctuation processes}\label{sk-section}
\subsection{A variance formula}\label{var-skorohod}
Let $\varsigma_t(x)$ be some differentiable $(d\times r)$-matrix valued function on $\mathbb{R}^d$ such that
\begin{equation}\label{hyp-varsigma}
\Vert\nabla \varsigma\Vert<\infty\quad \mbox{\rm and}\quad
\Vert \varsigma(0)\Vert:=\sup_t\Vert \varsigma_t(0)\Vert<\infty
\end{equation}
Recalling that $(W_{u+h}-W_{u})$ is independent of the flows $\overline{X}_{s,u}$ and $\nabla X_{u+h,t}$, the discrete time approximation (\ref{sk-integral}) shows that
the Skorohod stochastic integral is centered; that is, we have that
$\mathbb{E}(S_{s,t}(\varsigma)(x))=0$.
Following (\ref{sk-integral}), the variance can be computed using the following approximation formula
\begin{equation}\label{sk-integral-var-ref}
\begin{array}{l}
\displaystyle
\mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^2\right]
=\lim_{h\rightarrow 0}~\sum_{u,v\,\in [s,t]_h}~ \sum_{1\leq i\leq d}~\sum_{1\leq j,k\leq r} \\
\\
\hskip3cm\displaystyle\mathbb{E}\left\{
\left[\left(\nabla X_{u+h,t}\right)(\overline{X}_{s,u}(x))^{\prime}~ \varsigma_u(\overline{X}_{s,u}(x))\right]_{i,j}~
\left[\left(\nabla X_{v+h,t}\right)(\overline{X}_{s,v}(x))^{\prime}~ \varsigma_{v}(\overline{X}_{s,v}(x))\right]_{i,k}\right.\\
\\
\left.\hskip7cm
(W^j_{u+h}-W^j_{u})
(W^k_{v+h}-W^k_{v})~\right\}
\end{array}
\end{equation}
The proof of the above assertion is provided in section~\ref{sec-extended-2-sided}, see for instance proposition~\ref{k-prop}.
Consider the matrix valued function
\begin{equation}\label{defi-Sigma}
\Sigma_{s,u,t}(x):=\left[\left(\nabla X_{u,t}\right)^{\prime}\circ\overline{X}_{s,u}\right](x)~\varsigma_u(\overline{X}_{s,u}(x))
\end{equation}
In this notation, the limiting diagonal term $u=v$ in the r.h.s. of (\ref{sk-integral-var-ref}) is clearly equal to
$$
\int_s^t~ \mathbb{E}\left[ \sum_{i,j}\Sigma_{s,u,t}(x)_{i,j}~\Sigma_{s,u,t}(x)_{i,j}\right]~du= \int_s^t~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2_{\tiny F}\right]~du
$$
In addition, whenever condition $( {\cal T})_2$ is met and $\varsigma$ is bounded, (\ref{ref-nablax-estimate-0}) readily yields the estimate
\begin{equation}\label{ref-diagonal-Sigma}
\left[ \int_s^t~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2_{\tiny F}\right]~du\right]^{1/2}\leq \Vert\varsigma\Vert_2~\sqrt{d/(2\lambda_A)}
\end{equation}
More generally, using (\ref{def-4th}) whenever $(\overline{ {\cal M }})_{2/\delta}$ and $( {\cal T})_{2/(1-\delta)}$ are met for some $\delta\in ]0,1[$ we have the estimate
$$
\mathbb{E}\left[ \Vert \Sigma_{s,u,t}(x)\Vert^2\right]
\leq c_{1,\delta}~
\left[
\Vert \varsigma(0)\Vert^2+\Vert\nabla \varsigma\Vert^2~(1+\Vert x\Vert)^2\right]~e^{-2\lambda_A(2/(1-\delta))(t-u)}
$$
This implies that
\begin{equation}\label{ref-diagonal-Sigma-ref2}
\left[ \int_s^t~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2_{\tiny F}\right]~du\right]^{1/2}\leq~c_{2,\delta}~
\left[
\Vert \varsigma(0)\Vert+\Vert\nabla \varsigma\Vert~(1+\Vert x\Vert)\right]/\sqrt{\lambda_A}
\end{equation}
The non-diagonal term can be computed in a more direct way using Malliavin derivatives of the functions $\Sigma_{s,u,t}$.
For any $s\leq u\leq v\leq t$ we have
\begin{equation}\label{s-tensor-diff-u-v-ref}
D_v\left\{\left[\left(\nabla X_{u,t}\right)^{\prime}\circ\overline{X}_{s,u}\right]~\left[\varsigma_u\circ\overline{X}_{s,u}\right]\right\}=
\left[\left(D_v\left(\nabla X_{u,t}\right)^{\prime}\right)\circ\overline{X}_{s,u}\right]~\left[\varsigma_u\circ\overline{X}_{s,u}\right]
\end{equation}
As expected, observe that
$$
\nabla\sigma=0\quad\Longrightarrow\quad D_v \Sigma_{s,u,t}(x)=0
$$
In the reverse angle, whenever $s\leq v\leq u\leq t$ we have the chain rule formula
\begin{equation}\label{s-tensor-diff-u-v-2}
\begin{array}{l}
D_v\left(\left[\varsigma_u\circ\overline{X}_{s,u}\right]~\left[\left(\nabla X_{u,t}\right)\circ\overline{X}_{s,u}\right]\right)\\
\\
:=
\left[D_{v}\left(\varsigma_u\circ \overline{X}_{s,u}\right)\right]\left[\left(\nabla X_{u,t}\right)\circ\overline{X}_{s,u}\right]+
\left[D_v \overline{X}_{s,u}\otimes (\varsigma_u\circ \overline{X}_{s,u})\right] \left[\left(\nabla^2 X_{u,t}\right)\circ\overline{X}_{s,u}\right] \end{array}
\end{equation}
As above, the Malliavin differentials $D_{v}\left(\varsigma_u\circ \overline{X}_{s,u}\right)$ and $D_v \overline{X}_{s,u}$ can be computed using the chain rule formulae (\ref{ref-chain-r}).
A more detailed analysis of the chain rule formulae (\ref{ref-chain-r}), (\ref{s-tensor-diff-u-v}) and (\ref{s-tensor-diff-u-v-2}) for one dimensional models is provided in section~\ref{sec-extended-2-sided} (cf. lemma~\ref{lem-tex}).
Observe that
$$
\nabla \varsigma=0\quad\Longrightarrow\quad D_v \left[\Sigma_{s,u,t}^{\,\prime}\right]= \left[D_v \overline{X}_{s,u}\otimes (\varsigma_u\circ \overline{X}_{s,u})\right] \left[\left(\nabla^2 X_{u,t}\right)\circ\overline{X}_{s,u}\right]
$$
We consider the inner product
\begin{eqnarray*}
\left\langle D_u\Sigma_{s,v,t}(x),D_v\Sigma_{s,u,t}(x)\right\rangle
&:=& \sum_{i,j,k} \left( D_v\Sigma_{s,u,t}(x)\right)_{k,i,j}~
\left(D_u\Sigma_{s,v,t}(x)\right)_{j,i,k}\end{eqnarray*}
In this notation, an explicit description of the $\mathbb{L}_2$-norm of the two-sided stochastic integral in terms of
Malliavin derivatives is given below.
\begin{lem}\label{lem-var}
The $\mathbb{L}_2$-norm of the Skorohod integral
$S_{s,t}(\varsigma)(x)$ introduced in (\ref{sk-integral}) is given for any $x\in \mathbb{R}^d$ and $s\leq t$ by the formulae
$$
\mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^2\right]=\int_{[s,t]}~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2_{\tiny F}\right]~du
+\int_{[s,t]^2}~
\mathbb{E}\left[ \left\langle D_v\Sigma_{s,u,t}(x),
D_u\Sigma_{s,v,t}(x)\right\rangle\right]
~du~dv
$$
with the random matrix function $\Sigma_{s,u,t}$ defined in (\ref{defi-Sigma}) and
the Malliavin
derivative $D_v\Sigma_{s,u,t}$ given in formulae (\ref{s-tensor-diff-u-v-ref}) and (\ref{s-tensor-diff-u-v-2}). In addition, we have
$$
\nabla\sigma=0\quad\Longrightarrow\quad \mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^2\right]=\int_{[s,t]}~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2_{\tiny F}\right]~du
$$
\end{lem}
The above lemma can be interpreted as a matrix version of the isometry property (\ref{isometry}).
A proof of the above lemma based on the $\mathbb{L}_2$-approximation of two-sided stochastic integrals is provided in section~\ref{sec-extended-2-sided} (see for instance proposition~\ref{k-prop}).
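For constant diffusion coefficients the lemma can also be checked by a direct simulation. The following Python sketch (a toy linear model with assumed parameters) estimates the $\mathbb{L}_2$-norm of the discrete Riemann sums defining the two-sided integral for the flow $dX=-\lambda X\,dt+\sigma\,dW$, for which $\nabla X_{u,t}=e^{-\lambda(t-u)}$ is deterministic and $\nabla\sigma=0$, so that only the diagonal (isometry) term survives.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, vs, t, h = 1.0, 0.5, 1.0, 1e-2       # assumed toy parameters
n, n_mc = int(t / h), 100000

grad = np.exp(-lam * (t - (np.arange(n) + 1) * h))  # grad X_{u+h,t} on the mesh
dW = rng.normal(0.0, np.sqrt(h), (n_mc, n))
S = (vs * grad * dW).sum(axis=1)          # discrete two-sided Riemann sums

# Monte Carlo estimate of E[S^2] versus the diagonal (isometry) term,
# here equal to vs^2*(1 - exp(-2*lam*t))/(2*lam); the two values agree
# up to Monte Carlo and discretization errors.
print(np.mean(S**2), vs**2 * (1 - np.exp(-2 * lam * t)) / (2 * lam))
\end{verbatim}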
\subsection{Quantitative estimates}\label{q-sec}
For any $p>1$ and any tensor norm we also quote the rather well known $\mathbb{L}_p$-norm estimates
$$
\begin{array}{l}
\displaystyle \mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^{p}\right]^{2/p}\\
\\
\displaystyle\leq c_{1,p} \int_{[s,t]}~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2\right]~du
+c_{2,p}~\mathbb{E}\left[ \left(\int_{[s,t]^2}~
\Vert D_v\Sigma_{s,u,t}(x)\Vert^2
~du~dv\right)^{p/2}\right]^{2/p}
\end{array}
$$
for some finite constants $c_{i,p}$ whose values only depend on $p$.
A proof of these estimates can be found in~\cite{nualart-pardoux,watanabe}; see also~\cite{nualart-z} for multiple Skorohod integrals. By the generalized Minkowski inequality, for any $n\geq 2$ we also have the estimate
\begin{equation}\label{pre-theo-fluctuation}
\begin{array}{l}
\displaystyle \mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^{n}\right]^{2/n}\\
\\
\displaystyle\leq c_{1,n} \int_{[s,t]}~ \mathbb{E}\left[ \Vert\Sigma_{s,u,t}(x)\Vert^2\right]~du
+c_{2,n}~\int_{[s,t]^2}~\mathbb{E}\left[
\Vert D_v\Sigma_{s,u,t}(x)\Vert^n\right]^{2/n}
~du~dv
\end{array}
\end{equation}
Observe that for any $n\geq 2$ we have
$$
(\overline{ {\cal M }})_n\Longrightarrow \vertiii{\varsigma(x)}_n\leq \kappa_n~\left(\Vert\varsigma(0)\Vert+\Vert\nabla\varsigma\Vert\right) (1\vee\Vert x\Vert)
$$
The main objective of this section is to prove the following theorem.
\begin{theo}\label{theo-quantitative-sko}
Assume that $(M)_{2n/\delta}$ and $(T)_{2n/(1-\delta)}$ are satisfied for some parameter $n\geq 2$ and some $\delta\in ]0,1[$. In this situation, we have the uniform estimate
\begin{equation}\label{intro-inq-s}
\displaystyle \mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^{n}\right]^{1/n}
\leq \kappa_{\delta,n}~\vertiii{\varsigma(x)}_{2n/\delta}~(1\vee \Vert x\Vert)
\end{equation}
For uniformly bounded diffusion functions $(\varsigma,\sigma,\overline{\sigma})$
whenever
$(T)_{2n}$ is met for some $n\geq 2$ we have
\begin{equation}\label{intro-inq-s-2}
\displaystyle \mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^{n}\right]^{1/n}
\leq \kappa_{n}~\left(\Vert \varsigma\Vert+ \Vert\nabla\varsigma \Vert\right)
\end{equation}
In addition, for constant diffusion functions $(\varsigma,\sigma,\overline{\sigma})$ whenever $(T)_{2}$ is met, for any $n\geq 2$ we have the uniform estimate
\begin{equation}\label{intro-inq-s-2-2}
\mathbb{E}\left[\Vert S_{s,t}(\varsigma)(x)\Vert^{n}\right]^{1/n}
\leq \kappa_{n}~\Vert \varsigma\Vert
\end{equation}
\end{theo}
The proof of the above theorem, including a more detailed description of the parameters $\kappa_{\delta,n}$ and $\kappa_{n}$ is provided below.
Next, we estimate the $\mathbb{L}_n$-norm of the Malliavin differential $D_v\Sigma_{s,u,t}(x)$ in the two cases $(s\leq u\leq v\leq t)$ and $(s\leq v\leq u\leq t)$.
\subsubsection*{Case $(s\leq u\leq v\leq t)$:}
Using (\ref{s-tensor-diff-u-v-ref}) we have
$$
\Vert D_v\Sigma_{s,u,t}(x)\Vert\leq c~\Vert \varsigma_u(\overline{X}_{s,u}(x))\Vert~\Vert
(D_v\nabla X_{u,t})(\overline{X}_{s,u}(x))\Vert
$$
Using (\ref{ref-chain-r}) and (\ref{s-tensor-diff-u-v}) this yields the estimate
$$
\Vert D_v\Sigma_{s,u,t}(x)\Vert\leq c_1~ \mathbb{I}_{s,u,t}(x)+c_2~ \mathbb{J}_{s,u,t}(x)
$$
with the functions
$$
\begin{array}{l}
\displaystyle \mathbb{I}_{s,u,t}(x):= \Vert \nabla \sigma\Vert~\Vert \varsigma_u(\overline{X}_{s,u}(x))\Vert~
\Vert (\nabla X_{u,v})(\overline{X}_{s,u}(x))\Vert~\Vert(\nabla X_{v,t})(Z^{s,v}_u(x))\Vert\\
\\
\mathbb{J}_{s,u,t}(x):=\Vert \sigma_v(Z^{s,v}_u(x))\Vert~ \Vert \varsigma_u(\overline{X}_{s,u}(x))\Vert~\Vert(\nabla X_{u,v})(\overline{X}_{s,u}(x))\Vert~
\Vert(\nabla^2 X_{v,t})(Z^{s,v}_u(x))\Vert
\end{array}
$$
In the above display, $Z^{s,v}_u(x)$ stands for the interpolating flow defined in (\ref{interpolating-flow}).
\begin{itemize}
\item Firstly assume that $\Vert \varsigma\Vert\vee \Vert \sigma\Vert<\infty$ and $( {\cal T})_{2n}$ is satisfied for some parameter $n\geq 1$. In this situation,
applying proposition~\ref{def-4th-prop} and proposition~\ref{prop-nabla-2-estimate},
for any $\epsilon\in ]0,1[$
we have the uniform estimates
$$
\mathbb{E}\left( \Vert D_v\Sigma_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq \Vert \varsigma\Vert~\rchi_{n,\epsilon}(b,\sigma)~\exp{\left(-(1-\epsilon)\lambda_A(2n)(t-u)\right)}
$$
with the parameter $ \rchi_{n,\epsilon}(b,\sigma)$ given by
$$
\rchi_{n,\epsilon}(b,\sigma):= c~\left[ \Vert \sigma\Vert\vee \Vert \nabla \sigma\Vert\right]
\left[1+\frac{1}{\epsilon}~\frac{n}{\lambda_A(2n)}~\rchi(b,\sigma)\right]~\quad\mbox{\rm with $\rchi(b,\sigma)$ given in (\ref{def-chi-b}).}
$$
\item More generally, when $\Vert \nabla\varsigma\Vert\vee \Vert \nabla\sigma\Vert<\infty$ the functions $\varsigma_t(x)$ and $\sigma_t(x)$ may grow at most linearly with respect to $\Vert x\Vert$. Assume that conditions $(M)_{2n/\delta}$ and $( {\cal T})_{2n/(1-\delta)}$ are satisfied for some parameters $n\geq 1$ and $\delta\in ]0,1[$. In this situation, applying H\"older's inequality we check that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( \Vert \mathbb{I}_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq c~ \Vert \nabla \sigma\Vert~
\mathbb{E}\left( \Vert \varsigma_u(\overline{X}_{s,u}(x))\Vert^{n/\delta}\right)^{\delta/n}\\
\\
\hskip.3cm\displaystyle\times\mathbb{E}\left( \Vert (\nabla X_{u,v})(\overline{X}_{s,u}(x))\Vert^{2n/(1-\delta)}\right)^{(1-\delta)/(2n)}
\mathbb{E}\left( \Vert (\nabla X_{v,t})(Z^{s,v}_u(x))\Vert^{2n/(1-\delta)}\right)^{(1-\delta)/(2n)}
\end{array}$$
Applying proposition~\ref{def-4th-prop} we check that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( \Vert \mathbb{I}_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq c_{n,\delta}~ \Vert \nabla \sigma\Vert ~\vertiii{\varsigma(x)}_{n/\delta}~e^{-\lambda_A(2n/(1-\delta))(t-u)}
\end{array}$$
In the same vein, combining proposition~\ref{def-4th-prop} and proposition~\ref{prop-nabla-2-estimate} with the uniform moment estimates (\ref{ref-ui-m-over}) we check that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( \Vert \mathbb{J}_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq c_{n,\delta}~\left[\Vert \sigma(0)\Vert+\Vert\nabla \sigma\Vert\right]~\frac{1}{\epsilon}~\frac{\rchi(b,\sigma)}{\lambda_A(2n/(1-\delta))}\\
\\
\hskip3cm\displaystyle\times
~\vertiii{\varsigma(x)}_{2n/\delta}~~\left[ 1+\Vert x\Vert\right]~~e^{-(1-\epsilon)\lambda_A(2n/(1-\delta))(t-u)}
\end{array}$$
We conclude that
$$
\mathbb{E}\left( \Vert D_v\Sigma_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq
\rchi_{n,\delta,\epsilon}(b,\sigma)
~\vertiii{\varsigma(x)}_{2n/\delta}~\left[ 1+\Vert x\Vert\right]~~e^{-(1-\epsilon)\lambda_A(2n/(1-\delta))(t-u)}
$$
with the parameter
$$
\rchi_{n,\delta,\epsilon}(b,\sigma):=c_{n,\delta}~\left[\Vert \sigma(0)\Vert+\Vert\nabla \sigma\Vert\right]~\left(1+\frac{1}{\epsilon}~\frac{\rchi(b,\sigma)}{\lambda_A(2n/(1-\delta))}\right)
$$
\end{itemize}
\subsubsection*{Case $(s\leq v\leq u\leq t)$:}
We use (\ref{s-tensor-diff-u-v-2}) to check that
$$
\begin{array}{l}
\Vert D_v\Sigma_{s,u,t}(x)\Vert
\leq
\Vert [D_{v}\left(\varsigma_u\circ \overline{X}_{s,u}\right)](x)\Vert~\Vert \left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))\Vert\\
\\
\hskip3cm+
\Vert [D_v \overline{X}_{s,u}](x)\otimes \varsigma_u(\overline{X}_{s,u})(x)\Vert ~\Vert \left(\nabla^2 X_{u,t}\right)(\overline{X}_{s,u} (x))\Vert
\end{array}
$$
On the other hand, using the chain rules (\ref{ref-chain-r}) we have
\begin{eqnarray*}
D_v\overline{X}_{s,u}&:=&\left(D_v \overline{X}_{s,v}\right)~\left[\left(\nabla \overline{X}_{v,u}\right)\circ \overline{X}_{s,v}\right]\nonumber\\
D_{v}\left(\varsigma_{u}\circ \overline{X}_{s,u}\right)&=&(D_v \overline{X}_{s,u})~\left[\left(\nabla \varsigma_{u}\right)\circ \overline{X}_{s,u}\right]
\end{eqnarray*}
This yields the estimate
$$
\begin{array}{l}
\Vert D_v\Sigma_{s,u,t}(x)\Vert
\leq
c_1~\Vert\overline{\sigma}_v(\overline{X}_{s,v}(x))\Vert~\Vert\nabla \varsigma \Vert~\Vert (\nabla \overline{X}_{v,u})(\overline{X}_{s,v}(x))\Vert~\Vert \left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))\Vert\\
\\
\hskip.3cm+c_2~
\Vert\overline{\sigma}_v(\overline{X}_{s,v}(x))\Vert~\Vert\varsigma_{u}(\overline{X}_{s,u}(x)) \Vert~\Vert (\nabla \overline{X}_{v,u})(\overline{X}_{s,v}(x))\Vert~\Vert \left(\nabla^2 X_{u,t}\right)(\overline{X}_{s,u} (x))\Vert
\end{array}
$$
\begin{itemize}
\item Firstly assume that $\Vert \varsigma\Vert\vee \Vert \overline{\sigma}\Vert<\infty$ and condition $(T)_{2n}$ is satisfied for some $n\geq 1$.
In this situation, arguing as above for any $\epsilon\in ]0,1[$
we have the uniform estimates
$$
\mathbb{E}\left( \Vert D_v\Sigma_{s,u,t}(x)\Vert^{n}\right)^{1/n}\leq \left(\Vert \varsigma\Vert+ \Vert\nabla\varsigma \Vert\right)~\overline{\rchi}_{n,\epsilon}(b,\sigma)~\exp{\left(-(1-\epsilon)\lambda_{A,\overline{A}}(2n)(t-v)\right)}
$$
for some universal constant $c$ and the parameter $ \overline{\rchi}_{n,\epsilon}(b,\sigma)$ given by
$$
\overline{\rchi}_{n,\epsilon}(b,\sigma):=c~\Vert\overline{\sigma}\Vert~\left[1+\frac{1}{\epsilon}~\frac{n}{\lambda_{A,\overline{A}}(2n)}~\rchi(b,\sigma)\right]~\quad\mbox{\rm with $\rchi(b,\sigma)$ given in (\ref{def-chi-b}).}
$$
\item More generally assume that $\Vert \nabla\varsigma\Vert\vee \Vert \nabla\overline{\sigma}\Vert<\infty$. Also assume that conditions $(M)_{2n/\delta}$ and $(T)_{2n/(1-\delta)}$ are satisfied for some parameters $n\geq 1$ and $\delta\in ]0,1[$. In this situation, we have
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( \Vert D_v\Sigma_{s,u,t}(x)\Vert^{n}\right)^{1/n}\\
\\
\displaystyle \leq
\rchi_{n,\delta,\epsilon}(b,\sigma,\overline{\sigma})
~\vertiii{\varsigma(x)}_{2n/\delta}~\left[ 1+\Vert x\Vert\right]~~e^{-(1-\epsilon)\lambda_{A,\overline{A}}(2n/(1-\delta))(t-v)}
\end{array} $$
with the parameter
$$
\rchi_{n,\delta,\epsilon}(b,\sigma,\overline{\sigma}):=c_{n,\delta}~\left[\Vert \overline{\sigma}(0)\Vert+\Vert\nabla \overline{\sigma}\Vert\right]~\left(1+\frac{1}{\epsilon}~\frac{\rchi(b,\sigma)}{\lambda_{A,\overline{A}}(2n/(1-\delta))}\right)
$$
\end{itemize}
The end of the proof of theorem~\ref{theo-quantitative-sko} is now a direct consequence of the estimates discussed above, combined with (\ref{pre-theo-fluctuation}) and the diagonal estimates presented in (\ref{ref-diagonal-Sigma}). \hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\subsection{Some extensions}\label{sec-sk-f}
This section is concerned with the two-sided stochastic integral (\ref{Alekseev-grobner-sg-ae-f}). Using the gradient formula in (\ref{grad-sg})
the Skorohod stochastic integral in (\ref{Alekseev-grobner-sg-ae-f}) takes the form
$$
\mathbb{S}_{s,t}(f,\Delta \sigma)(x)=\int_s^t~ \Sigma_{s,u,t}(f)(x)~dW_u
$$
with the integrands
$$
\Sigma_{s,u,t}(f)(x):=\nabla f(Z^{s,t}_u(x))^{\prime}~ \Sigma_{s,u,t}(x)\quad \mbox{\rm and}\quad
\Sigma_{s,u,t}(x):=\left[\left(\nabla X_{u,t}\right)^{\prime}\circ\overline{X}_{s,u}\right]~\left[\Delta \sigma_u\circ\overline{X}_{s,u}\right]
$$
As in (\ref{s-tensor-diff-u-v}), using the chain rule properties of Malliavin derivatives we check that
$$
D^i_v \Sigma_{s,u,t}(f)=\left(D^i_v \nabla f(Z^{s,t}_u)^{\prime}\right)~ \Sigma_{s,u,t}+\nabla f(Z^{s,t}_u)^{\prime}~D^i_v \Sigma_{s,u,t}
$$
as well as
$$
D^i_v \nabla f(Z^{s,t}_u)^{\prime}=\nabla^2 f(Z^{s,t}_u)^{\prime}~D^i_v Z^{s,t}_u
$$
This yields the differential formula
$$
D^i_v \Sigma_{s,u,t}(f)=\nabla f(Z^{s,t}_u)^{\prime}~D^i_v \Sigma_{s,u,t}+\nabla^2 f(Z^{s,t}_u)^{\prime}~(D^i_v Z^{s,t}_u)~ \Sigma_{s,u,t}
$$
The Malliavin derivatives $D^i_v \Sigma_{s,u,t}$ are computed using formulae (\ref{s-tensor-diff-u-v-ref}) and (\ref{s-tensor-diff-u-v-2}); thus, it remains to
compute the Malliavin derivatives $D_v Z^{s,t}_u$ of the interpolating path.
$\bullet$ When $u\leq v$ we have
$$
Z^{s,t}_u=(X_{v,t}\circ X_{u,v})\circ\overline{X}_{s,u}=X_{v,t}\circ Z^{s,v}_u
$$
In this situation, as in (\ref{ref-chain-r}) using the chain rule properties of Malliavin derivatives we check that
$$
D_v Z^{s,t}_u=D_v Z^{s,v}_u~((\nabla X_{v,t})\circ Z^{s,v}_u )=((D_vX_{u,v})\circ \overline{X}_{s,u})~((\nabla X_{v,t})\circ Z^{s,v}_u )
$$
By (\ref{first-Malliavin}) we conclude that
$$
D_v Z^{s,t}_u=(\sigma_v \circ Z^{s,v}_u)~((\nabla X_{v,t})\circ Z^{s,v}_u )
$$
$\bullet$ When $v\leq u$ we have
$$
Z^{s,t}_u=X_{u,t}\circ (\overline{X}_{v,u}\circ\overline{X}_{s,v})=Z^{v,t}_u\circ \overline{X}_{s,v}
$$
In this situation, arguing as above we check that
$$
D_v Z^{s,t}_u=D_v \overline{X}_{s,v}~((\nabla Z^{v,t}_u)\circ\overline{X}_{s,v})=D_v \overline{X}_{s,v}~
((\nabla \overline{X}_{v,u})\circ\overline{X}_{s,v})~((\nabla X_{u,t})\circ \overline{X}_{s,u})
$$
By (\ref{first-Malliavin}) we conclude that
$$
D_v Z^{s,t}_u=(\overline{\sigma}_v \circ \overline{X}_{s,v})~((\nabla \overline{X}_{v,u})\circ\overline{X}_{s,v})~((\nabla X_{u,t})\circ \overline{X}_{s,u})
$$
\section{Some anticipative calculus}\label{lem-var-proof}
For clarity and to avoid unnecessarily sophisticated multi-index notation, we only consider one dimensional models. The proofs of the results presented in this section
can be reproduced word-for-word for multidimensional models.
To simplify the presentation,
we write $\partial^n f$ for the derivative of order $n\geq 1$ of a smooth function $f$. We also set $Y_{s,t}(x):=\overline{X}_{s,t}(x)$.
We also reduce the analysis to the unit interval.
In this context, for any $t\in [0,1]$
we set
\begin{equation}\label{def-XYt}
Y_{t}:=Y_{0,t}\quad \mbox{\rm and}\quad
X^t:=X_{t,1}
\end{equation}
\subsection{Extended two-sided stochastic integrals}\label{sec-extended-2-sided}
The aim of this section is to extend the two-sided stochastic integration introduced in~\cite{pardoux-protter} to Skorohod integrals of the form
(\ref{sk-integral}), for some time homogeneous function $\varsigma_u=\varsigma$ satisfying (\ref{hyp-varsigma}).
For any $t\in [0,1]$
we set
\begin{equation}\label{def-Phi}
\Phi( X^{t},Y_t(x)):=\partial X^{t}(Y_{t}(x))~ \varsigma(Y_{t}(x))
\end{equation}
In this notation the limiting integral in (\ref{sk-integral}) takes formally the following form
$$
S_{0,1}(\varsigma)(x):= \int_0^1~\Phi( X^{t},Y_t(x))~dW_t
$$
The existence of this two-sided stochastic integral is discussed below in (\ref{ref-2-sided-Phi}).
To simplify the presentation, we fix the state variable $x$ and we write $Y_{t}$ and $\Phi( X^{t},Y_t)$ instead of $Y_{t}(x)$ and $\Phi( X^{t},Y_t(x))$. The next technical lemma provides a more explicit description of the Malliavin derivatives of the processes $\Phi(X^t,Y_t)$.
\begin{lem}\label{lem-tex}
For any $s<t$ we have
$$
D_s\,\Phi(X^t,Y_t)=\left[
\partial((\partial X^{t})\circ Y_{s,t})(Y_s)~(\varsigma\circ Y_{s,t})(Y_{s})+((\partial X^{t})\circ Y_{s,t})(Y_{s})\times \partial(\varsigma\circ Y_{s,t})(Y_s)\right]~\overline{\sigma}(Y_{s})
$$
In addition, we have
$$
D_t\,\Phi(X^s,Y_s)=\left[\partial((\partial X^{t})\circ X_{s,t})(Y_s)~(\sigma\circ X_{s,t})(Y_s)+\partial(X^{t}\circ X_{s,t})(Y_s)~\partial\sigma(X_{s,t}(Y_s))~\right]\varsigma(Y_s)
$$
\end{lem}
\proof
Using the chain rule properties, for any $s<t$ we have
$$
\begin{array}{l}
\displaystyle
D_s\,\Phi(X^t,Y_t)=D_s\left((\partial X^t)(Y_{s,t}(Y_s))~(\varsigma\circ Y_{s,t})(Y_s)\right)\\
\\
=D_s\left(((\partial X^t)\circ Y_{s,t})(Y_s)\right)~(\varsigma\circ Y_{s,t})(Y_s)+((\partial X^t)\circ Y_{s,t})(Y_s)~D_s(\varsigma\circ Y_{s,t})(Y_s)\\
\end{array}$$
The end of the proof of the first assertion comes from the fact that
$$
D_s\left(((\partial X^t)\circ Y_{s,t})(Y_s)\right) =\partial((\partial X^{t})\circ Y_{s,t})(Y_s)~D_sY_s\quad
\mbox{\rm with}\quad D_sY_s=\overline{\sigma}(Y_{s})
$$
In the same vein, we have
$$
D_s(\varsigma\circ Y_{s,t})(Y_s)= \partial(\varsigma\circ Y_{s,t})(Y_s)~\overline{\sigma}(Y_{s})
$$
We also have that
$$
\begin{array}{l}
\displaystyle
D_t\,\Phi(X^s,Y_s)=D_t\left((\partial X^s)(Y_s)~\varsigma(Y_s)\right)\\
\\
=D_t\left(\partial (X^{t}\circ X_{s,t})(Y_s)\right)~\varsigma(Y_s)
=D_t\left(\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)~(\partial X _{s,t})(Y_s) \right)~\varsigma(Y_s)
\end{array}$$
The last assertion comes from the fact that
$$
\begin{array}{l}
\displaystyle
D_t\,\left(\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)~(\partial X _{s,t})(Y_s) \right)~\\
\\
=D_t\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)~(\partial X _{s,t})(Y_s) +\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)~D_t(\partial X _{s,t})(Y_s)
\end{array}$$
The r.h.s. term in the above display can be rewritten as follows
$$
\begin{array}{l}
\displaystyle
D_t(\partial X _{s,t})(Y_s)=\partial\sigma(X_{s,t}(Y_s))~(\partial X _{s,t})(Y_s)\\
\\
\Longrightarrow ((\partial X^{t})\circ X_{s,t})(Y_s)~D_t(\partial X _{s,t})(Y_s)=\partial(X^{t}\circ X_{s,t})(Y_s)~\partial\sigma(X_{s,t}(Y_s))~
\end{array} $$
In the same vein, we have
$$
\begin{array}{l}
\displaystyle D_t\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)=((\partial^2 X^{t})\circ X_{s,t})(Y_s)~D_tX_{s,t}(Y_s)=((\partial^2 X^{t})\circ X_{s,t})(Y_s)~\sigma(X_{s,t}(Y_s))\\
\\
\Longrightarrow D_t\left((\partial X^{t})\circ X_{s,t}\right)(Y_s)~(\partial X _{s,t})(Y_s) =\partial((\partial X^{t})\circ X_{s,t})(Y_s)~\sigma(X_{s,t}(Y_s))
\end{array} $$
This ends the proof of the second assertion. The proof of the lemma is now completed.
\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
From the above lemma, we also check that
all the $n$-absolute moments of the Malliavin derivatives $ D_s\,\Phi(X^t,Y_t)$
are finite with at most quadratic growth w.r.t. the initial values.
The next proposition extends proposition 3.3 in~\cite{pardoux-protter} to stochastic processes of the form (\ref{def-Phi}).
\begin{prop}\label{k-prop}
Let $ [0,1]_h$ be any refining sequence of partitions of the unit interval. For any $h>0$ we define
$$
S^{h}(\Phi):=\sum_{t\in[0,1]_h}
\Phi( X^{t+h},Y_t)~(W_{t+h}-W_t)
$$
Then $ S^{h}(\Phi)$ is a Cauchy sequence in $\mathbb{L}_2(\Omega)$. In addition, for any decreasing sequence of time steps
$h_1>h_2$
we have the formula
\begin{equation}\label{cov-cauchy}
\displaystyle\lim_{h_1\rightarrow 0}\mathbb{E}\left( S^{h_1}(\Phi)~S^{h_2}(\Phi)\right) =\mathbb{E}\left(
\int_0^1~\Phi( X^{t},Y_t)^2~dt+\int_{[0,1]^2}~ D_s\,\Phi(X^t,Y_t)~ D_t\,\Phi(X^s,Y_s)~ds~dt
\right)
\end{equation}
\end{prop}
Before entering into the details of the proof of the proposition, we give a couple of comments.
The hypothesis that $[0,1]_h$ is a refining sequence indexed by $h$ is not essential but it simplifies the proof of the proposition; see for instance lemma 3.1.1 in~\cite{nualart}.
Arguing as in the proof of theorem 3.3 and theorem 7.1 in~\cite{pardoux-protter} the above proposition ensures that the two-sided integral
defined by the $\mathbb{L}_2(\Omega)$-limit
coincides with the two-sided stochastic integral of the process $\Phi( X^{t},Y_t)$ over the unit interval; that is, we have that
\begin{equation}\label{ref-2-sided-Phi}
\int_0^1 \Phi( X^{t},Y_t)~dW_t:=\mathbb{L}_2-\lim_{h\rightarrow 0}\sum_{t\in[0,1]_h}
\Phi( X^{t+h},Y_t)~(W_{t+h}-W_t)
\end{equation}
In this context, proposition~\ref{k-prop} can be interpreted as a version of the isometry property (\ref{isometry}) for the generalized two-sided integral defined above.
{\bf Proof of proposition~\ref{k-prop}:}
We fix $h_1>h_2$ and we assume that $[0,1]_{h_2} $ is a refinement of $[0,1]_{h_1}$. For any $(s,t)\in ([0,1]_{h_1}\times [0,1]_{h_2})$ we also set
$$
\Pi_{s,t}^{h_1,h_2}:=\Phi( X^{s+h_1},Y_s)~\Phi( X^{t+h_2},Y_t)~(W_{s+h_1}-W_s)~(W_{t+h_2}-W_t)
$$
With a slight abuse of notation we set
$$
\Delta W_s:=(W_{s+h_1}-W_s)\quad \mbox{\rm and}\quad \Delta W_t:=(W_{t+h_2}-W_t)
$$
$\bullet$ For any overlapping pair $s<t<t+h_2<s+h_1$ using the decomposition
$$
\Delta W_s= (W_{s+h_1}-W_{t+h_2})+ \Delta W_t+(W_t-W_s)
$$
we have
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left(~\Phi( X^{s+h_1},Y_s)~\Phi( X^{t+h_2},Y_t)~\Delta W_t~ \Delta W_s~|~ {\cal W }_t\vee {\cal W }^{t+h_2}\right)=\Phi( X^{s+h_1},Y_s)~\Phi( X^{t+h_2},Y_t)~h_2
\end{array}$$
It follows from the continuity properties of the processes that
$$
\mathbb{E}\left(\sum_{s<t<t+h_2<s+h_1}~\Pi_{s,t}^{h_1,h_2}\right)\longrightarrow_{h_1\rightarrow 0}~\mathbb{E}\left(\int_0^1~\Phi( X^{t},Y_t)^2~~dt\right)
$$
$\bullet$ When $s+h_1<t$ we have
\begin{eqnarray*}
\partial X^{s+h_1}&=& \partial (X^t\circ X_{s+h_1,t})\\
&=&\partial (X^{t+h_2}\circ X_{s+h_1,t})+\left(\left(
\partial X^t-\partial X^{t+h_2}\right)\circ X_{s+h_1,t}\right)~\times~\partial X_{s+h_1,t}
\end{eqnarray*}
On the other hand, we have the decomposition
\begin{eqnarray*}
\partial X^t-\partial X^{t+h_2}&=&\left(( \partial X^{t+h_2})\circ X_{t,t+h_2}\right)\times \partial X_{t,t+h_2}-\partial X^{t+h_2}\\
&=&\left( (\partial X^{t+h_2})\circ (I+\Delta X_t)-\partial X^{t+h_2}\right)+\partial X^{t+h_2}\times \Delta X^{\prime}_t\\
&&\hskip3cm+\left(( \partial X^{t+h_2})\circ (I+\Delta X_t)-\partial X^{t+h_2}\right)\times \Delta X^{\prime}_t
\end{eqnarray*}
with the increment functions
$$
\Delta X^{\prime}_t:=\partial X_{t,t+h_2}-1\quad \mbox{\rm and}\quad \Delta X_t:=X_{t,t+h_2}-I
$$
With a slight abuse of notation, we shall
denote by $\mbox{\rm O}(h^p)$ a generic random variable whose $n$-absolute moments are of order $h^p$, for some $p>0$ and
$0<h<1$.
In this notation, we have
\begin{eqnarray*}
\Delta X^{\prime}_t(x)&=&\int_t^{t+h_2}~\partial\sigma (X_{t,u}(x))~\partial X_{t,u}(x)~dW_u+\mbox{\rm O}(h_2)=\mbox{\rm O}(h^{1/2}_2)\\
\Delta X_t(x)&=&\int_t^{t+h_2}~\sigma (X_{t,u}(x))~dW_u+\mbox{\rm O}(h_2)=\mbox{\rm O}(h^{1/2}_2)
\end{eqnarray*}
Given
a smooth function $\theta$, for any $n\geq 2$ we set
$$
\partial^n\theta(x,y):=\int_0^1~\frac{(1-\epsilon)^{n-2}}{(n-2)!}~\partial^n\theta(x+\epsilon y)~d\epsilon
$$
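For instance, for $n=2$ the above definition resumes to $\partial^2\theta(x,y)=\int_0^1\theta^{\prime\prime}(x+\epsilon y)~d\epsilon$, so that we have the first order Taylor expansion with integral remainder
$$
\partial\theta(x+y)-\partial\theta(x)=\partial^2\theta(x,y)~y
$$
This is precisely the form in which the two-argument notation is used below.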
In this notation, we have the first and second order decompositions
$$
\begin{array}{l}
\left(( \partial X^{t+h_2})\circ (I+\Delta X_t)-\partial X^{t+h_2}\right)(x)\\
\\
=(\partial^2 X^{t+h_2})(x,\Delta X_t(x))~\Delta X_t(x)
=(\partial^2 X^{t+h_2})(x)~\Delta X_t(x)+(\partial^3 X^{t+h_2})(x,\Delta X_t(x))~\Delta X_t(x)^2
\end{array}
$$
This implies that
$$
\begin{array}{l}
(\partial X^t-\partial X^{t+h_2})(x)\\
\\
=(\partial^2 X^{t+h_2})(x)~\Delta X_t(x)+\partial X^{t+h_2}(x)\times \Delta X^{\prime}_t(x)\\
\\
\hskip.3cm+(\partial^3 X^{t+h_2})(x,\Delta X_t(x))~\Delta X_t(x)^2+(\partial^2 X^{t+h_2})(x,\Delta X_t(x))~\Delta X_t(x)\times \Delta X^{\prime}_t(x)
\end{array}
$$
from which we conclude that
$$
\begin{array}{l}
\displaystyle\partial X^{s+h_1}=
\partial (X^{t+h_2}\circ X_{s+h_1,t})\\
\\
\displaystyle +\left[\partial((\partial X^{t+h_2})\circ X_{s+h_1,t})~\times~((\Delta X_t)\circ X_{s+h_1,t})+\partial (X^{t+h_2}\circ X_{s+h_1,t})~\times~( (\Delta X^{\prime}_t)\circ X_{s+h_1,t})\right]+\mbox{\rm O}(h_2)
\end{array}
$$
This yields the first order decomposition
$$
\begin{array}{l}
\displaystyle\Phi(X^{s+h_1},Y_s)\\
\\
\displaystyle=
\psi^{0}_{s,t}(Y_s) +\psi^{1}_{s,t}(Y_s)~((\Delta X_t)\circ X_{s+h_1,t})(Y_s)+\psi^{2}_{s,t}(Y_s)~( (\Delta X^{\prime}_t)\circ X_{s+h_1,t})(Y_s)+\mbox{\rm O}(h_2)
\end{array}
$$
with the functions
$$
\begin{array}{l}
\psi^{0}_{s,t}(Y_s):=
\partial (X^{t+h_2}\circ X_{s+h_1,t})(Y_s)~ \varsigma(Y_s)\\
\\
\psi^{1}_{s,t}(Y_s):=\partial((\partial X^{t+h_2})\circ X_{s+h_1,t})(Y_s)~ \varsigma(Y_s)~\quad \mbox{\rm and}\quad
\psi^{2}_{s,t}(Y_s):=\partial (X^{t+h_2}\circ X_{s+h_1,t})(Y_s)~ \varsigma(Y_s)
\end{array}
$$
Notice that, apart from the increment functions $(\Delta X_t)$ and $(\Delta X^{\prime}_t)$, none of the functions above
depends on $ {\cal W }_{t,t+h_2}$ nor on $ {\cal W }_{s,s+h_1}$.
In the reverse angle, we have
$$
\begin{array}{l}
\displaystyle(\partial X^{t+h_2})\circ Y_{t}\\
\\
\displaystyle=(\partial X^{t+h_2})\circ(Y_{s+h_1,t}\circ Y_{s})+\left[((\partial X^{t+h_2})\circ Y_{s+h_1,t})\circ (I+\Delta Y_s)-
((\partial X^{t+h_2})\circ Y_{s+h_1,t})\right]\circ Y_{s}
\end{array} $$
with
$$
\Delta Y_s:=(Y_{s,s+h_1}-I)\Longrightarrow Y_{s+h_1}=(I+\Delta Y_s)\circ Y_s
$$
Arguing as above, we have
$$
\begin{array}{l}
\displaystyle\left[((\partial X^{t+h_2})\circ Y_{s+h_1,t})\circ (y+\Delta Y_s(y))-
((\partial X^{t+h_2})\circ Y_{s+h_1,t})(y)\right]\\
\\
=\partial((\partial X^{t+h_2})\circ Y_{s+h_1,t})(y)~\Delta Y_s(y)+\partial^2((\partial X^{t+h_2})\circ Y_{s+h_1,t})(y,\Delta Y_s(y))~(\Delta Y_s(y))^2
\end{array} $$
We conclude that
$$
\begin{array}{l}
\displaystyle(\partial X^{t+h_2})\circ Y_{t}\\
\\
\displaystyle=(\partial X^{t+h_2})\circ(Y_{s+h_1,t}\circ Y_{s})+\partial((\partial X^{t+h_2})\circ Y_{s+h_1,t})(Y_s)~((\Delta Y_s)\circ Y_s)+\mbox{\rm O}(h_1)\end{array} $$
In the same vein, we have
$$
\displaystyle\varsigma\circ Y_{t}=(\varsigma\circ Y_{s+h_1,t}\circ Y_{s})+\partial(\varsigma\circ Y_{s+h_1,t})(Y_s)~((\Delta Y_s)\circ Y_s)+\mbox{\rm O}(h_1)
$$
Multiplying these terms, we check that
$$
\Phi(X^{t+h_2},Y_t)=\Psi^{0}_{s,t}(Y_s)
+\Psi^{1}_{s,t}(Y_s)~((\Delta Y_s)\circ Y_s)+\mbox{\rm O}(h_1)
$$
with the functions
$$
\begin{array}{l}
\Psi^{0}_{s,t}(Y_s):=((\partial X^{t+h_2})\circ Y_{s+h_1,t})(Y_{s})\times (\varsigma\circ Y_{s+h_1,t})(Y_{s})\\
\\
\Psi^{1}_{s,t}(Y_s):=\left[\partial((\partial X^{t+h_2})\circ Y_{s+h_1,t})(Y_s)\times(\varsigma\circ Y_{s+h_1,t})(Y_{s})\right.\\
\\
\hskip3cm+\left.((\partial X^{t+h_2})\circ Y_{s+h_1,t})(Y_{s})\times \partial(\varsigma\circ Y_{s+h_1,t})(Y_s)\right]
\end{array}
$$
Apart from the increment $\Delta Y_s$, none of the functions above depends on $ {\cal W }_{s,s+h_1}$ nor on $ {\cal W }_{t,t+h_2}$.
Recall that the functions $\Phi(X^{t+h_2},Y_t)$ and $\psi^{0}_{s,t}(Y_s)$ do not depend on $\Delta W_t$. In addition,
the functions $\Phi(X^{s+h_1},Y_s)$ and $\Psi^{0}_{s,t}(Y_s)$ do not depend on $\Delta W_s$. This yields the formula
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left(\Phi(X^{s+h_1},Y_s)~\Phi(X^{t+h_2},Y_t)~~\Delta W_s~\Delta W_t\right)\\
\\
= \displaystyle\mathbb{E}\left(\left[\Phi(X^{s+h_1},Y_s)-\psi^{0}_{s,t}(Y_s)\right]~\left[\Phi(X^{t+h_2},Y_t)-\Psi^{0}_{s,t}(Y_s)\right]~\Delta W_s~\Delta W_t\right)\\
\\
= \displaystyle\mathbb{E}\left(\Psi^{1}_{s,t}(Y_s)~ \psi^{1}_{s,t}(Y_s)~
\left[((\Delta Y_s)\circ Y_s)~~\Delta W_s\right]~\left[((\Delta X_t)\circ X_{s+h_1,t})(Y_s)~\Delta W_t\right]\right)\\
\\
+ ~\displaystyle\mathbb{E}\left( \Psi^{1}_{s,t}(Y_s)~\psi^{2}_{s,t}(Y_s)~\left[((\Delta Y_s)\circ Y_s)~\Delta W_s\right]~\left[
( (\Delta X^{\prime}_t)\circ X_{s+h_1,t})(Y_s)~\Delta W_t\right]\right)+\mbox{\rm O}\left(h_1^{2+1/2}\right)
\end{array}
$$
To take the final step, observe that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left(\Delta Y_s(y)~\Delta W_s\right)\\
\\
\displaystyle=\mathbb{E}\left(\int_s^{s+h_1}~\overline{b}(Y_{s,u}(y))~(W_{s+h_1}-W_u)~du\right)+\mathbb{E}\left(\int_s^{s+h_1}~\overline{\sigma}(Y_{s,u}(y))~du\right)\\
\\
\displaystyle=\mathbb{E}\left(\int_s^{s+h_1}~\overline{\sigma}(Y_{s,u}(y))~du\right)+\mbox{\rm O}\left(h_1^{1+1/2}\right)
\end{array}$$
In the same vein, we have
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( (\Delta X_t) (X_{s+h_1,t}(y))~\Delta W_t~|~ {\cal W }_{s+h_1,t}\right)\\
\\
\displaystyle=\mathbb{E}\left(\int_t^{t+h_2}~\sigma(X_{s+h_1,u}(y))~du~|~ {\cal W }_{s+h_1,t}\right)+\mbox{\rm O}\left(h_2^{1+1/2}\right)
\end{array}$$
and
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left( (\Delta X^{\prime}_t)(X_{s+h_1,t}(y))~\Delta W_t~|~ {\cal W }_{s+h_1,t}\right)\\
\\
\displaystyle=\mathbb{E}\left(\int_t^{t+h_2}~\partial\sigma(X_{s+h_1,u}(y))~(\partial X_{t,u})(X_{s+h_1,t}(y))~du\right)+\mbox{\rm O}\left(h_2^{1+1/2}\right)\\
\\
\displaystyle=\mathbb{E}\left(\int_t^{t+h_2}~\partial\left(\sigma\circ X_{t,u}\right)(X_{s+h_1,t}(y))~du\right)+\mbox{\rm O}\left(h_2^{1+1/2}\right)
\end{array}$$
This shows that
$$
\begin{array}{l}
\displaystyle h_1^{-1}~h_2^{-1}\mathbb{E}\left(\Phi(X^{s+h_1},Y_s)~\Phi(X^{t+h_2},Y_t)~~\Delta W_s~\Delta W_t\right)\\
\\
= \displaystyle\mathbb{E}\left(\Psi^{1}_{s,t}(Y_s)~ \psi^{1}_{s,t}(Y_s)~
h_1^{-1} \left[\int_s^{s+h_1}~\overline{\sigma}(Y_{u})~du\right]~h_2^{-1}\left[\int_t^{t+h_2}~\sigma(X_{s+h_1,u}(Y_s))~du\right]\right)\\
\\
+ ~\displaystyle\mathbb{E}\left( \Psi^{1}_{s,t}(Y_s)~\psi^{2}_{s,t}(Y_s)~h_1^{-1}\left[\int_s^{s+h_1}~\overline{\sigma}(Y_{u})~du\right]~h_2^{-1}\left[
\int_t^{t+h_2}~\partial\left(\sigma\circ X_{t,u}\right)(X_{s+h_1,t}(Y_s))~du\right]\right)+\mbox{\rm O}\left(h_1^{1/2}\right)
\end{array}
$$
It follows that
$$
\begin{array}{l}
\displaystyle\lim_{h_1\rightarrow 0}\mathbb{E}\left(\sum_{s+h_1<t}~\Pi_{s,t}^{h_1,h_2}\right)\\
\\
\displaystyle=\mathbb{E}\left(
\int_0^1\int_0^t
\left[
\partial((\partial X^{t})\circ Y_{s,t})(Y_s)~(\varsigma\circ Y_{s,t})(Y_{s})+((\partial X^{t})\circ Y_{s,t})(Y_{s})\times \partial(\varsigma\circ Y_{s,t})(Y_s)\right]~\overline{\sigma}(Y_{s})~\right.
\\
\\
\left.\hskip3cm\times\left[
\partial((\partial X^{t})\circ X_{s,t})(Y_s)~
\sigma(X_{s,t}(Y_s))+\partial (X^{t}\circ X_{s,t})(Y_s)~\partial\sigma(X_{s,t}(Y_s))\right] \varsigma(Y_s)~~ds~dt
\right)
\end{array}
$$
The end of the proof of (\ref{cov-cauchy}) now follows from lemma~\ref{lem-tex} and symmetry arguments. This ends the proof of the proposition.
\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\subsection{Generalized backward It\^o-Ventzell formula}\label{biv-proof}
This section is mainly concerned with the proof of theorem~\ref{biv}. Before entering into the details of the proof we discuss how it applies
to the process $(X^t,Y_t)$ introduced in (\ref{def-XYt}).
Consider the random fields
\begin{eqnarray}
F_{t}(x)&:=&X^t(x)\Longrightarrow
\partial F_t=\partial X^t\quad \mbox{\rm and}\quad
\partial^2 F_{t}=\partial^2 X^t\nonumber\\
G_t(x)&:=&
\partial X^t(x)~b(x)+\frac{1}{2}~\partial^2 X^t(x)~a(x)~
\quad \mbox{\rm and}\quad H_t(x):=\partial X^t(x)~\sigma(x)\label{def-rf}
\end{eqnarray}
In this notation, the backward random field formula (\ref{ref-backward-flow}) with $t\in [0,1]$ takes the form
\begin{equation}\label{back-random-field-def}
F_{t}(x):=F_{1}(x)+\int_t^1~G_s(x)~ds+\int_{t}^1~H_s(x)~dW_s\quad\mbox{\rm with}\quad F_1(x)=x
\end{equation}
We fix some given $Y_0=y\in\mathbb{R}$ and we write $Y_t$ instead of $Y_t(y)$ and set
$$
\left(A_u,B_u,\Sigma_u\right):=\left(\overline{a}(Y_u),\overline{b}(Y_u),\overline{\sigma}(Y_u)\right)
$$ In this notation, we have
\begin{equation}\label{Y0-ref}
Y_t=y+\int_0^tB_u~du+\int_0^t\Sigma_u~dW_u\end{equation}
Observe that $B_{u},\Sigma_{u}$ as well as the Malliavin derivatives $D_v\Sigma_{u}=\partial \overline{\sigma}(Y_u)~D_vY_u$ have moments of any order.
Consider the processes
\begin{eqnarray*}
U_t&:=&\partial F_t(Y_t)~B_t+\frac{1}{2}~
\partial^2 F_t(Y_t)~A_t-G_t(Y_t)=\partial X^t (Y_t)~(\overline{b}-b)(Y_t)+\frac{1}{2}~
\partial^2 X^t(Y_t)~(\overline{a}-a)(Y_t)
\\
V_t&:=&\partial F_t(Y_t)~\Sigma_t-H_t(Y_t)=\partial X^t(Y_t)~(\overline{\sigma}-\sigma)(Y_t)\quad\mbox{\rm with}\quad A_t:=\Sigma_t^2
\end{eqnarray*}
In this notation, up to a change of sign and replacing $x$ by $Y_0$ in (\ref{Alekseev-grobner}) the stochastic interpolation formula stated in theorem~\ref{theo-al-gr} on the unit interval takes the following form
$$
F_1(Y_1)-F_0(Y_0)=\int_0^1 U_s~ds+\int_0^1 V_s~dW_s
$$
More generally, suppose we are given a forward real valued continuous semi-martingale $Y_{t}$
of the form (\ref{Y0-ref}) for some $ {\cal W }_{0,t}$-adapted functions
$B_{t}$ and $\Sigma_{t}$,
and a backward random
field model of the form (\ref{back-random-field-def}) for some $ {\cal W }_{t,1}$-adapted functions
$F_t(x),G_t(x),H_t(x)$. \\
We consider the following conditions:\\
{\em
$(H_1)^{\prime}$: The functions $F_t(x)$, $G_t(x)$ and $H_t(x)$ as well as the differentials $\partial H_t(x)$ and $\partial^2F_t(x)$ are continuous w.r.t. $(t,x)$ for any given $\omega\in\Omega$. In addition, for any $n\geq 1$ we have
\begin{equation}\label{integral-H1}
\begin{array}{rclc}
\displaystyle\sup_{\vert y\vert\leq n}\left(\vert F_t(Y_t+y)\vert\vee \vert H_t(Y_t+y)\vert\vee \vert G_t(Y_t+y)\vert\right)& \leq& g_n(t)&
\\
\displaystyle\sup_{\vert y\vert\leq n}\left(\vert \partial H_t(Y_t+y)\vert\vee\vert \partial F_t(Y_t+y)\vert\vee\vert \partial^2 F_t(Y_t+y)\vert\right)& \leq& g_n(t)\quad \mbox{\it with}&\displaystyle\mathbb{E}\left(\int_0^1\,g^4_n(t)\,dt\right)<\infty
\end{array}
\end{equation}
$(H_2)^{\prime}$: The Malliavin derivatives $ D_s\partial F_t(x)$ and $D_sH_t(x)$ are continuous w.r.t. $x$ and $(s,t)$ for any given $\omega\in\Omega$. In addition, for any $n\geq 1$ we have
\begin{equation}\label{integral-H2}
\begin{array}{rclc}
\displaystyle\sup_{\vert y\vert\leq n}\left( \vert (D_sF_t)(Y_t+y)\vert\vee \vert (D_sH_t)(Y_t+y)\vert\right) &\leq& h_n(s,t)&\\
\sup_{\vert y\vert\leq n}\left( \vert (D_s\partial F_t)(Y_t+y)\vert\right) &\leq& h_n(s,t)\quad \mbox{\rm with}&\displaystyle\mathbb{E}\left(\int_{[0,1]^2}\,h^4_n(s,t)\,dsdt\right)<\infty
\end{array}
\end{equation}
$(H_3)$: The random processes $B_{u},\Sigma_{u}$ as well as $D_v\Sigma_{u}$ are continuous w.r.t. the time parameter and they have moments of any order. }\\
The next theorem is a slight extension of theorem~\ref{biv} applied to the semi-martingale and the random fields models discussed in
(\ref{Y0-ref}) and (\ref{def-rf}).
\begin{theo}\label{theo-iv}
Consider a backward random
field model of the form (\ref{back-random-field-def}) for some functions
$F_t(x),G_t(x),H_t(x)$ satisfying $(H_1)^{\prime}$ and $(H_2)^{\prime}$. Also let $Y_{t}$ be a continuous semi-martingale
of the form (\ref{Y0-ref}) with functions
$B_{t}$ and $\Sigma_{t}$ satisfying $(H_3)$.
In this situation, for any $t\in [0,1]$ we have the generalized backward It\^o-Ventzell formula
\begin{equation}\label{random-fields-version}
\begin{array}{l}
F_t(Y_t)-F_0(Y_0)\\
\\
\displaystyle=\int_0^t~\left(\partial F_s(Y_s)~B_s+\frac{1}{2}~
\partial^2 F_s(Y_s)~A_s-G_s(Y_s)\right)~ds+\int_0^t~\left(\partial F_s(Y_s)~\Sigma_s-H_s(Y_s)\right)~dW_s
\end{array}
\end{equation}
The stochastic integral on the r.h.s. of the above display is understood as a Skorohod integral.
\end{theo}
{\bf Proof:} We use the same approximation technique as in~\cite{bismut-2,ocone-pardoux} and~\cite{pardoux-90} (see also the proof of theorem 3.2.11 in~\cite{nualart}).
Consider a mollifier-type approximation of the identity given for any $\epsilon>0$ by the function
$$
\varphi_{\epsilon}(x):=\varphi(x/\epsilon)/\epsilon
\quad\mbox{\rm for some smooth compactly supported function $\varphi$ s.t. $\int_{-\infty}^{\infty}\varphi(x)dx=1$. }
$$
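For concreteness, one admissible choice for $\varphi$ (a standard bump function, given here purely as an illustration) is
$$
\varphi(x):=c~\exp{\left(-\frac{1}{1-x^2}\right)}~1_{]-1,1[}(x)
$$
with the normalizing constant $c$ chosen so that $\int_{-\infty}^{\infty}\varphi(x)dx=1$. In this case $\varphi_{\epsilon}$ is smooth, supported in $[-\epsilon,\epsilon]$ and concentrates at the origin as $\epsilon\rightarrow 0$.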
For any $x$, applying the It\^o-type change of variables formula stated in proposition 8.2 in~\cite{nualart-pardoux} to the product function
$$\Gamma(X^t(x),\varphi_{\epsilon}(Y_t-x)):=X^t(x)~\varphi_{\epsilon}(Y_t-x)
$$
we check that
\begin{equation}\label{ref-two-sided-ito}
\displaystyle(F_t(x)~\varphi_{\epsilon}(Y_t-x))-(F_0(x)~\varphi_{\epsilon}(Y_0-x))
\displaystyle=\int_0^t~u_s^{\epsilon}(x)~ds+\int_0^t~v_s^{\epsilon}(x)~dW_s
\end{equation}
with
\begin{eqnarray*}
u_s^{\epsilon}(x)&:=&F_s(x)~\partial \varphi_{\epsilon}(Y_s-x)~B_s+\frac{1}{2}~
F_s(x)~\partial^2 \varphi_{\epsilon}(Y_s-x) ~A_s-\varphi_{\epsilon}(Y_s-x)~G_s(x)\\
v_s^{\epsilon}(x)&:=&F_s(x)~\partial \varphi_{\epsilon}(Y_s-x)~ \Sigma_s-\varphi_{\epsilon}(Y_s-x)~H_s(x)
\end{eqnarray*}
The stochastic integral in the r.h.s. of (\ref{ref-two-sided-ito}) can be interpreted as a two-sided stochastic integral.
Recalling that
$$
D_t\varphi_{\epsilon}(Y_s-x)=D_tY_s~ \partial\varphi_{\epsilon}(Y_s-x)
$$
we check that
\begin{eqnarray*} D_tv_s^{\epsilon}(x)
&=&
D_tF_s(x)~\partial \varphi_{\epsilon}(Y_s-x)~ \Sigma_s+F_s(x)~D_tY_s~ \partial^2\varphi_{\epsilon}(Y_s-x)~ \Sigma_s\\
&&+F_s(x)~\partial \varphi_{\epsilon}(Y_s-x)~ D_t\Sigma_s-D_tY_s~ \partial\varphi_{\epsilon}(Y_s-x)~H_s(x)-\varphi_{\epsilon}(Y_s-x)~D_tH_s(x)
\end{eqnarray*}
Condition $(H_3)$ ensures that the processes $Y_t$ and $D_tY_s$ have moments of any order. In addition, under the regularity conditions $(H_1)^{\prime}$ and $(H_2)^{\prime}$ we check that
$$
\int~\mathbb{E}\left(\int_0^t~u_s^{\epsilon}(x)^2~ds\right)~dx<\infty\quad\mbox{\rm and}\quad
\int~\mathbb{E}\left(\left[\int_0^t~v_s^{\epsilon}(x)~dW_s\right]^2\right)~dx<\infty
$$
Applying the Fubini theorem for Skorohod and measure theory integrals (see for instance~\cite{kruk,nualart,purtu} and the work by Leon~\cite{leon}) we check that
$$
F_t^{\epsilon}(Y_t):=\int~F_t(x)~\varphi_{\epsilon}(Y_t-x)~dx=\int~F_0(x)~\varphi_{\epsilon}(Y_0-x)~dx
+\int_0^t~U^{\epsilon}_s~ds+\int_0^t~V^{\epsilon}_s~dW_s
$$
with
$$
U^{\epsilon}_s:=\int u_s^{\epsilon}(x)~dx\quad \mbox{\rm and}\quad
V^{\epsilon}_s:=\int v_s^{\epsilon}(x)~dx
$$
Integrating by parts where derivatives of $\varphi_{\epsilon}$ appear we check that
\begin{eqnarray*}
U^{\epsilon}_s&:=&\int~\left(\partial F_s(x)~B_s+\frac{1}{2}~
\partial^2 F_s(x)~A_s-G_s(x)\right)~\varphi_{\epsilon}(Y_s-x)~dx\\
V^{\epsilon}_s&:=&\int~\left(\partial F_s(x)~ \Sigma_s-H_s(x)\right)~ \varphi_{\epsilon}(Y_s-x)~dx
\end{eqnarray*}
From the a.s. continuity of $F_t(x)$ in $x$ for each $t\geq 0$, we have
$$
F_t^{\epsilon}(Y_t)-F_t(Y_t)=\int~(F_t(Y_t-\epsilon~x)-F_t(Y_t))~\varphi(x)~dx~\longrightarrow_{\epsilon\rightarrow 0}~0
$$
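As a simple numerical illustration of this convergence, one may take the illustrative choices $F(x)=\sin(x)$ and the bump function $\varphi$ given above; the following sketch evaluates the regularized field by direct quadrature.
\begin{verbatim}
# Numerical sketch: the mollified field F^eps(y) converges to F(y) as
# eps -> 0.  Illustrative choices: F(x) = sin(x), phi = normalized bump.
import numpy as np

def trap(y, x):
    # elementary trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

u = np.linspace(-1.0, 1.0, 4001)
phi = np.zeros_like(u)
inside = np.abs(u) < 1
phi[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
phi /= trap(phi, u)                      # normalize: int phi = 1

def mollified(F, y, eps):
    # F^eps(y) = int F(x) phi_eps(y - x) dx = int F(y - eps u) phi(u) du
    return trap(F(y - eps * u) * phi, u)

for eps in (0.5, 0.1, 0.02):
    print(eps, abs(mollified(np.sin, 0.3, eps) - np.sin(0.3)))
\end{verbatim}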
The functions $\partial F_t(x)$, $\partial^2 F_t(x)$ and $G_t(x)$ are almost surely continuous w.r.t. $x$ and uniformly locally bounded. In addition,
the random variables $A_t$ and $B_t$ are integrable at any order. Moreover, under $(H_1)^{\prime}$ there exists some parameter $n\geq 1$ depending on the support of $\varphi$ such that for any $\epsilon>0$ we have the estimate
\begin{eqnarray*}
\vert U^{\epsilon}_s\vert
&\leq& \sup_{\vert y\vert\leq n}{\vert \partial F_s(Y_s+y)\vert}~\vert B_s\vert+\frac{1}{2}~
\sup_{\vert y\vert\leq n}{\vert\partial^2 F_s(Y_s+y)\vert}~\vert A_s\vert+ \sup_{\vert y\vert\leq n}{\vert G_s(Y_s+y)\vert}\\&\leq& g_n(s)~(1+\vert A_s\vert+\vert B_s\vert)
\end{eqnarray*}
Thus, by the dominated convergence theorem on $( \Omega\times [0,1])$ equipped with the measure $(\mathbb{P}(d\omega)\otimes dt)$ we have
$$
\int_0^t~U^{\epsilon}_s~ds\longrightarrow_{\epsilon\rightarrow 0}~
\int_0^t~U_s~ds\quad \mbox{\rm as well as}\quad
F_t^{\epsilon}(Y_t)\longrightarrow_{\epsilon\rightarrow 0}F_t(Y_t)
$$
It remains to check that
\begin{equation}\label{V-cv}
\mathbb{E}(\int_0^t (V_s^{\epsilon}-V_s)^2~ds)+\mathbb{E}\left(\int_{[0,t]^2} (D_rV^{\epsilon}_s-D_rV_s)~(D_sV^{\epsilon}_r-D_sV_r)~dr~ds\right)\longrightarrow_{\epsilon\rightarrow 0}~0
\end{equation}
Observe that
$$
\begin{array}{l}
\displaystyle\int_0^t (V_s^{\epsilon}-V_s)^2~ds\\
\\
\displaystyle\leq 2~\int_0^t \int~(\partial F_s(x)-\partial F_s(Y_s))^2~\Sigma_s^2~\varphi_{\epsilon}(Y_s-x)~dx~ds+2~\int_0^t\int~(H_s(x)-H_s(Y_s))^2~\varphi_{\epsilon}(Y_s-x)~dx~ds
\end{array}$$
Using the chain rule property we have
$$
\begin{array}{l}
D_tV^{\epsilon}_s\\
\\
\displaystyle:=\int~D_t\left(\partial F_s(x)~ \Sigma_s-H_s(x)\right)~ \varphi_{\epsilon}(Y_s-x)~dx+
\int~\left(\partial F_s(x)~ \Sigma_s-H_s(x)\right)~ D_t\varphi_{\epsilon}(Y_s-x)~dx
\end{array}
$$
Integrating by parts, we check that
$$
D_tV^{\epsilon}_s=\int~\left[D_t\left(\partial F_s(x)~ \Sigma_s-H_s(x)\right)~ +
\left(\partial^2 F_s(x)~ \Sigma_s-\partial H_s(x)\right)~D_tY_s\right]~ \varphi_{\epsilon}(Y_s-x)~dx
$$
Observe that
$$
\begin{array}{l}
\displaystyle D_t\left(\partial F_s(x)~ \Sigma_s-H_s(x)\right)~ +
\left(\partial^2 F_s(x)~\Sigma_s-\partial H_s(x)\right)~D_tY_s\\
\\
\displaystyle =((D_t\partial F_s)(x)+\partial^2 F_s(x)~D_tY_s)~\Sigma_s+
\partial F_s(x)~D_t\Sigma_s-((D_tH_s)(x)+\partial H_s(x)~D_tY_s)
\end{array}
$$
On the other hand, we have
$$
\begin{array}{l}
D_tV_s
=D_t(\partial F_s(Y_s))~ \Sigma_s+\partial F_s(Y_s)~ D_t\Sigma_s-D_t(H_s(Y_s))\\
\\
~~~=\left((D_t\partial F_s)(Y_s)+\partial^2F_s(Y_s)~D_tY_s\right) \Sigma_s+\partial F_s(Y_s)~ D_t\Sigma_s
-\left((D_tH_s)(Y_s)+\partial H_s(Y_s)~D_tY_s\right)
\end{array}
$$
Arguing as above, we have the estimate
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left(\int_{[0,1]^2} (D_rV^{\epsilon}_s-D_rV_s)~(D_sV^{\epsilon}_r-D_sV_r)~dr~ds\right)
\\
\\
\displaystyle\leq 2~\mathbb{E}\left(\int_{[0,1]^2} (D_tV^{\epsilon}_s-D_tV_s)^2~ds~dt\right)\leq 2^4~\sum_{1\leq i\leq 5}~J_i(\epsilon)
\end{array}
$$
In the above display, $J_i(\epsilon)$ stands for the sequences
\begin{eqnarray*}
J_1(\epsilon)&:=&\mathbb{E}\left(\int_{[0,1]^2\times\mathbb{R}}~\left(\partial F_s(x)-\partial F_s(Y_s) \right)^2~\left(D_t\Sigma_s\right)^2~ \varphi_{\epsilon}(Y_s-x)~ds~dt~dx\right)\\
J_2(\epsilon)&:=&\mathbb{E}\left(\int_{[0,1]^2\times\mathbb{R}}~\left(\partial H_s(x)-\partial H_s(Y_s)\right)^2 (D_tY_s)^2~ \varphi_{\epsilon}(Y_s-x)~ds~dt~dx\right)\\
J_3(\epsilon)&:=&\mathbb{E}\left(\int_{[0,1]^2\times\mathbb{R}}~\left(\partial^2 F_s(x)-\partial^2 F_s(Y_s)\right)^2~ (D_tY_s)^2~A_s~ \varphi_{\epsilon}(Y_s-x)~ds~dt~dx\right)
\end{eqnarray*}
The last two terms depend on the Malliavin derivatives of $\partial F_s$ and $H_s$; they are given by
\begin{eqnarray*}
J_4(\epsilon)&:=&\mathbb{E}\left(\int_{[0,1]^2\times\mathbb{R}}~\left((D_t\partial F_s)(x)-(D_t\partial F_s)(Y_s)\right)^2~A_s~ \varphi_{\epsilon}(Y_s-x)~ds~dt~dx\right)\\
J_5(\epsilon)&:=&\mathbb{E}\left(\int_{[0,1]^2\times\mathbb{R}}~\left((D_tH_s)(x)-(D_tH_s)(Y_s)\right)^2~ \varphi_{\epsilon}(Y_s-x)~ds~dt~dx\right)
\end{eqnarray*}
Arguing as above, by the dominated convergence theorem we conclude that the Skorohod integral
$$
\int_0^t~V^{\epsilon}_s~dW_s\quad \mbox{\rm converges in $\mathbb{L}_2(\Omega)$ as $\epsilon\rightarrow 0$ to the Skorohod integral}~\int_0^t~V_s~dW_s
$$
This ends the proof of (\ref{V-cv}), and the proof of the theorem is now easily completed.
\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
We end this section with some comments.
\begin{rmk}\label{f-rmk}
Recalling that the diffusion flow $Y_t$ introduced in (\ref{def-XYt}) has finite absolute moments of any order, the integrability conditions stated in (\ref{integral-H1}) and (\ref{integral-H2}) are satisfied
as soon as the functions $F_t,G_t,H_t$, the differentials $\partial F_t, \partial^2 F_t, \partial H_t$, and the Malliavin derivatives
$D_sH_t,D_s\partial F_t$ have at most polynomial growth w.r.t. the state variable.
It is now readily checked that $(H_1)^{\prime}$ and $(H_2)^{\prime}$ are met for the random fields introduced in (\ref{def-rf}).
The proof can also be extended without difficulty to multivariate models. Following the proof of proposition 3.1 in~\cite{ocone-pardoux}, an alternative proof of theorem~\ref{theo-iv} based on the It\^o formula for Hilbert space valued processes can be developed. This elegant functional approach requires the introduction of a framework for Hilbert-space valued processes, but it avoids the explicit interchange of integration based on the Fubini theorem for Skorohod and measure theory integrals. As in the statement of proposition 3.1 in~\cite{ocone-pardoux}, the assumptions of theorem~\ref{theo-iv} can also be weakened when expressed in terms of this generalized stochastic calculus for Hilbert-space valued processes.
\end{rmk}
\section{Illustrations}\label{sec-illustrations}
\subsection{Perturbation analysis}\label{sec-perturbation}
Assume that $\overline{\sigma}=\sigma$ and the drift function $ \overline{b}_t$ is given by a first order
expansion
$$
\overline{b}_t(x)= b_{\delta,t}(x):= b_t(x)+\delta~ b^{(1)}_{\delta,t}(x)\quad \mbox{\rm with}\quad
b^{(1)}_{\delta,t}(x)= b^{(1)}_t(x)+\frac{\delta}{2}~ b^{(2)}_{\delta,t}(x)
$$
for some perturbation parameter $\delta\in [0,1]$ and some functions $ b^{(i)}_{\delta,t}(x)$ with $i=1,2$.
In this context, the stochastic flow
$
\overline{X}_{s,t}(x):=X^{\delta}_{s,t}(x)
$ can be seen as a $\delta$-perturbation of ${X}_{s,t}(x):=X^{0}_{s,t}(x)$.
We further assume that the unperturbed diffusion satisfies condition $( {\cal T})_2$.
To avoid unnecessary technical discussions on the existence of absolute moments of the flows we also assume that $ b^{(i)}_{\delta,t}(x)$
are uniformly bounded w.r.t. the parameters $(\delta,t,x)$. In addition, $b^{(1)}_t(x)$ is differentiable w.r.t. the coordinate $x$ and it has uniformly bounded gradients. In this situation, we set
$$
\Vert b^{(i)}\Vert:= \sup_{\delta,t,x}\Vert b^{(i)}_{\delta,t}(x)\Vert\quad \mbox{\rm and}\quad \Vert \nabla
b^{(1)}\Vert:= \sup_{t,x}\Vert \nabla b^{(1)}_{t}(x)\Vert
$$
With some additional work to estimate the absolute moments of the flows, the perturbation analysis presented below allows one to handle more general models.
The methodology described in this section can also be extended to expand the flow $X^{\delta}_{s,t}(x)$ at any order as soon as $\delta\mapsto
b_{\delta,t}(x)$ is sufficiently smooth.
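Before stating the result, it may help to check the expansion numerically in the simplest setting. The sketch below takes the noiseless case $\sigma=0$ with the purely illustrative choices $b(x)=-x$ and $b^{(1)}(x)=\sin(x)$, for which $\nabla X_{u,t}=e^{-(t-u)}$ and $X_{0,u}(x)=x\,e^{-u}$, and compares the divided difference $\delta^{-1}(X^{\delta}_{0,T}(x)-X_{0,T}(x))$ with the first order term of the theorem below.
\begin{verbatim}
# Check of the first order expansion in the noiseless case sigma = 0,
# with the illustrative choices b(x) = -x and b1(x) = sin(x).
import numpy as np

delta, T, dt, x0 = 1e-3, 1.0, 1e-4, 1.0
n = int(T / dt)

def flow(drift, x):
    # forward Euler approximation of the flow of dX/dt = drift(X)
    for _ in range(n):
        x += drift(x) * dt
    return x

X = flow(lambda x: -x, x0)                        # unperturbed flow
Xd = flow(lambda x: -x + delta * np.sin(x), x0)   # perturbed flow

# first order term: int_0^T e^{-(T-u)} sin(x0 e^{-u}) du, using
# grad X_{u,T} = e^{-(T-u)} and X_{0,u}(x0) = x0 e^{-u}
u = np.arange(n + 1) * dt
g = np.exp(-(T - u)) * np.sin(x0 * np.exp(-u))
first = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(u)))

print((Xd - X) / delta, first)   # agree up to O(delta) and O(dt)
\end{verbatim}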
The first order approximation is given by the following theorem.
\begin{theo}
For any $s\leq t$, $x\in \mathbb{R}^d$ and $\delta\geq 0$ we have the first order expansion
\begin{equation}\label{first-taylor-perturbation}
\begin{array}{l}
\displaystyle X^{\delta}_{s,t}(x)={X}_{s,t}(x)+\delta~\partial {X}_{s,t}(x)+\frac{\delta^2}{2}~ \partial^2_{\delta} {X}_{s,t}(x)\end{array}
\end{equation}
with the first order stochastic flow
$$
\partial {X}_{s,t}(x):= \int_s^t~\left(\nabla X_{u,t}\right)(X_{s,u}(x))^{\prime}~
b^{(1)}_u(X_{s,u}(x))~du
$$
The remainder second order term $ \partial^2_{\delta} {X}_{s,t}(x)$ in the above display is such that
for any $n\geq 2$ s.t. $\lambda_A(n)>0$ we have the uniform estimate
$$
\sup_{s,t,x} \mathbb{E}[\Vert \partial^{2}_{\delta} {X}_{s,t}(x)\Vert^n]^{1/n}
\leq c_n
$$
\end{theo}
\proof
Using
(\ref{X-Y-ref-ag}) we readily check that
\begin{eqnarray*}
DX^{\delta}_{s,t}(x)&:=&
\delta^{-1}[X^{\delta}_{s,t}(x)-{X}_{s,t}(x)]=\int_s^t~\left(\nabla X_{u,t}\right)(X^{\delta}_{s,u}(x))^{\prime}~
b^{(1)}_{\delta,u}(X^{\delta}_{s,u}(x))~du
\end{eqnarray*}
By proposition~\ref{def-4th-prop}
for any $n\geq 2$ we have
\begin{equation}\label{ref-lambda-plus}
\lambda_A^+(n):=\lambda_A-(n-2)\rho(\nabla\sigma)^2/2>0\Longrightarrow
\mathbb{E}\left(\Vert DX^{\delta}_{s,t}(x)\Vert^{n}\right)^{1/n}\leq c~ \Vert b^{(1)}\Vert/\lambda_A^+(n)
\end{equation}
This yields the first order Taylor expansion (\ref{first-taylor-perturbation})
with $$\partial^2_{\delta} {X}_{s,t}(x):= \partial^{(2,1)}_{\delta} {X}_{s,t}(x)+ \partial^{(2,2)}_{\delta} {X}_{s,t}(x)$$ and the second order remainder terms
\begin{eqnarray*}
\partial^{(2,2)}_{\delta} {X}_{s,t}(x)&:=&\int_s^t~\left(\nabla X_{u,t}\right)(X^{\delta}_{s,u}(x))^{\prime}~b^{(2)}_{\delta,u}(X^{\delta}_{s,u}(x))~du\\
\partial^{(2,1)}_{\delta} {X}_{s,t}(x)&:=&2\delta^{-1}\int_s^t~\left[\left(\nabla X_{u,t}\right)(X^{\delta}_{s,u}(x))-\left(\nabla X_{u,t}\right)(X_{s,u}(x))\right]^{\prime}~
b^{(1)}_u(X^{\delta}_{s,u}(x))~du\\
&&\hskip2cm+2\delta^{-1}\int_s^t~\left(\nabla X_{u,t}\right)(X_{s,u}(x))^{\prime}~
[b^{(1)}_u(X^{\delta}_{s,u}(x))-b^{(1)}_u(X_{s,u}(x))]~du
\end{eqnarray*}
Arguing as above, for any $n\geq 2$ s.t. $\lambda_A^+(n)>0$ we have the uniform estimate
$$
\mathbb{E}\left(\Vert \partial^{(2,2)}_{\delta} {X}_{s,t}(x)\Vert^{n}\right)^{1/n}\leq c~ \Vert b^{(2)}\Vert/\lambda_A^+(n)
$$
To estimate $ \partial^{(2,1)}_{\delta} {X}_{s,t}(x)$ we need to consider the second order decompositions
$$
\begin{array}{l}
\displaystyle 2^{-1}~ \partial^{(2,1)}_{\delta} {X}_{s,t}(x)\\
\\
\displaystyle
=~\int_0^1~\int_s^t
\left[\nabla^2 X_{u,t}\right]\left(X_{s,u}(x)+\epsilon(X^{\delta}_{s,u}(x)-X_{s,u}(x))\right)^{\prime}~\left[b^{(1)}_u(X^{\delta}_{s,u}(x))\otimes
DX^{\delta}_{s,u}(x)\right]~du~d\epsilon\\
\\
\displaystyle+\int_0^1\int_s^t~
\left(\nabla X_{u,t}\right)(X_{s,u}(x))^{\prime}~\nabla b_u^{(1)}\left(X_{s,u}(x)+\epsilon(X^{\delta}_{s,u}(x)-X_{s,u}(x))\right)^{\prime}~
DX^{\delta}_{s,u}(x)~du~d\epsilon
\end{array}$$
Combining proposition~\ref{prop-nabla-2-estimate} with the estimate (\ref{ref-lambda-plus}) for any $n\geq 2$ s.t. $\lambda_A(n)>0$ we check that
$$
\mathbb{E}[\Vert \partial^{(2,1)}_{\delta} {X}_{s,t}(x)\Vert^n]^{1/n}
\leq c~\left(1+n~\rchi(b,\sigma)/\lambda_A(n)\right)~\left(\Vert b^{(1)}\Vert/\lambda_A(n)\right)^2
$$
for some universal constant $c<\infty$ and the parameter $\rchi(b,\sigma)$ introduced in (\ref{def-chi-b}). This ends the proof of (\ref{first-taylor-perturbation}) and completes the proof of the theorem.
\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\subsection{Interacting diffusions}
Consider a system of $N$ interacting and $\mathbb{R}^d$-valued diffusion flows $X^{i}_{s,t}(x)$, with $1\leq i\leq N$ given by a stochastic differential equation of the form
$$
dX^{i}_{s,t}(x)=B_t\left(X^{i}_{s,t}(x),\frac{1}{N}\sum_{1\leq j\leq N}X^j_{s,t}(x)\right)~dt+\sigma_t\left(\frac{1}{N}\sum_{1\leq j\leq N}X^j_{s,t}(x)\right)~dW^i_t
$$
for some Lipschitz functions $B_t(x,y)$ and $\sigma_{t}(y)$ with appropriate dimensions. In the above display, $W^i_t$ stands for a collection of independent copies of $d$-dimensional Brownian motion $W_t$.
Assume that $B_t(x,y)$ is linear w.r.t. the first coordinate.
In this situation, up to a change of probability space, the empirical mean of the process
$$\overline{X}_{s,t}(x):=\frac{1}{N}\sum_{1\leq j\leq N}X^j_{s,t}(x)$$ satisfies the stochastic differential equation
$$
d\overline{X}_{s,t}(x)=b_t\left(\overline{X}_{s,t}(x)\right)~dt+\frac{1}{\sqrt{N}}~{\sigma}_t\left(\overline{X}_{s,t}(x)\right)~dW_t\quad\mbox{\rm with}\quad b_t(x):=B_t(x,x)
$$
Formally, the above diffusion converges as $N\rightarrow\infty$ to the flow ${X}_{s,t}(x)$ of the dynamical system defined by
$$
\partial_t{X}_{s,t}(x):=b_t\left({X}_{s,t}(x)\right)
$$
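A minimal simulation sketch of this mean field behaviour is given below, assuming, purely for illustration, the linear-in-$x$ drift $B(x,y)=-x+y/2$, the constant diffusion coefficient $\sigma=1/2$ and dimension $d=1$; the empirical mean then fluctuates around the limiting flow at the scale $1/\sqrt{N}$.
\begin{verbatim}
# Simulation sketch of the interacting diffusion model, with the
# illustrative choices B(x, m) = -x + m/2 and sigma = 1/2, d = 1.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 1000, 2.0, 1e-3
sigma, x0 = 0.5, 1.0
steps = int(T / dt)

X = np.full(N, x0)      # interacting particles X^i
Z = x0                  # limiting flow: dZ/dt = b(Z) = B(Z, Z) = -Z/2
for _ in range(steps):
    m = X.mean()        # empirical mean coupling
    X = X + (-X + 0.5 * m) * dt \
          + sigma * np.sqrt(dt) * rng.standard_normal(N)
    Z = Z - 0.5 * Z * dt

# the rescaled fluctuation sqrt(N)(mean - Z) stays of order one
print(X.mean(), Z, np.sqrt(N) * (X.mean() - Z))
\end{verbatim}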
More rigorously and without further work, the forward-backward interpolation formula (\ref{Alekseev-grobner}) yields directly the bias-variance error decomposition
$$
\begin{array}{l}
\displaystyle \overline{X}_{s,t}(x)-X_{s,t}(x) =\frac{1}{2N}~\int_s^t\left(\nabla ^2X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~{a}_u(\overline{X}_{s,u}(x))~du\\
\\
\hskip5cm\displaystyle+\frac{1}{\sqrt{N}}~\int_s^t~\left(\nabla X_{u,t}\right)(\overline{X}_{s,u}(x))^{\prime}~{\sigma}_u(\overline{X}_{s,u}(x))~dW_u
\end{array}
$$
This readily implies the a.s. convergence
$$
\overline{X}_{s,t}(x)\longrightarrow_{N\rightarrow\infty}X_{s,t}(x)
$$
After some elementary manipulations we check the bias formula
$$
\lim_{N\rightarrow\infty}~N~\left[\mathbb{E}(\overline{X}_{s,t}(x))-X_{s,t}(x)\right]=\frac{1}{2}~\int_s^t\left(\nabla ^2X_{u,t}\right)(X_{s,u}(x))^{\prime}~{a}_u(X_{s,u}(x))~du
$$
We also have the almost sure fluctuation theorem
$$
\lim_{N\rightarrow\infty}~\sqrt{N}~\left[\overline{X}_{s,t}(x)-X_{s,t}(x) \right]=\int_s^t~\left(\nabla X_{u,t}\right)(X_{s,u}(x))^{\prime}~{\sigma}_u(X_{s,u}(x))~dW_u
$$
\subsection{Time discretization schemes}\label{subsec:lemdiscretizeproof}
This section is mainly concerned with the proof of proposition~\ref{lem:discretize}.
We fix some parameter $h>0$ and some $s\geq 0$ and for any $ t\in [s+kh,s+(k+1)h[$ we set
$$
d X_{s,t}^h(x)=Y^h_{s,t}(x)~dt+ \sigma~ dW_t\quad \mbox{\rm with}\quad Y^h_{s,t}(x):=b\left(X^h_{s,s+kh}(x)\right)
$$
for some fluctuation parameter $\sigma\geq 0$.
For any $ s+kh\leq u<s+(k+1)h$ we have
$$
X^h_{s,u}(x)-X^h_{s,s+kh}(x)
\displaystyle=Y^h_{s,u}(x)~(u-(s+kh))+\sigma~(W_u-W_{s+kh})
$$
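The scheme above is an Euler--Maruyama type discretization in which the drift is frozen on the $h$-grid. The following sketch (with the illustrative drift $b(x)=-x$, so that $\langle x,b(x)\rangle=-\Vert x\Vert^2$, i.e. $\beta=1$) drives the discretized flow and a fine-grid reference flow with the same Brownian increments and reports their distance at the final time.
\begin{verbatim}
# Sketch of the frozen-drift scheme versus a fine-grid reference flow,
# with the illustrative drift b(x) = -x and constant sigma.
import numpy as np

rng = np.random.default_rng(1)
sigma, T, x0 = 0.3, 1.0, 1.0
h, dt = 0.05, 1e-4                 # coarse step h, fine grid dt
n, per_h = int(T / dt), round(h / dt)

def b(x):
    return -x

X_ref, X_h, Y_h = x0, x0, b(x0)    # Y_h is the frozen drift
for k in range(n):
    if k % per_h == 0:             # refresh the drift at times s + kh
        Y_h = b(X_h)
    dW = np.sqrt(dt) * rng.standard_normal()
    X_ref += b(X_ref) * dt + sigma * dW   # same Brownian increments
    X_h += Y_h * dt + sigma * dW

print(abs(X_h - X_ref))   # of order h + sigma sqrt(h), cf. the bound
\end{verbatim}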
Using (\ref{X-Y-ref-ag}), in terms of the tensor product (\ref{tensor-notation}) we
readily check that
$$
X^h_{s,t}(x)-X_{s,t}(x)=\int_s^t~\left(\nabla X_{u,t}\right)(X^h_{s,u}(x))^{\prime}~
~\left[Y^h_{s,u}(x)-b(X^h_{s,u}(x))\right]~
du
$$
Combining (\ref{ref-nablax-estimate-0-ae-again}) with the Minkowski integral inequality we check that
\begin{eqnarray*}
\mathbb{E} \left( \Vert X^h_{s,t}(x)-X_{s,t}(x)\Vert^n \right)^{1/n}& \leq&\int_s^t~\mathbb{E} \left( \Vert \left(\nabla X_{u,t}\right)(X^h_{s,u}(x))^{\prime}~
~\left[Y^h_{s,u}(x)-b(X^h_{s,u}(x)) \right] \Vert^n \right)^{1/n}~du\\
& \leq &\int_s^t e^{-\lambda(t-u)}~\mathbb{E} \left( \Vert Y^h_{s,u}(x)-b(X^h_{s,u}(x)) \Vert^n \right)^{1/n}~du
\end{eqnarray*}
where the second line follows from the exponential estimate of the tangent process from proposition \ref{prop:diff_innit}. The integrand will be bounded as follows: for any $ s+kh\leq u<s+(k+1)h$ and any $n\geq 1$ we have
$$
\mathbb{E}\left(\Vert b(X^h_{s,u}(x))-Y^h_{s,u}(x)\Vert^n\right)^{1/n}\leq \Vert \nabla b\Vert~\left(\left[\Vert b(0)\Vert+\widehat{m}_n(x)~\Vert \nabla b\Vert \right]~h+\sigma~\sqrt{h}\right)
$$
which then yields the stated result of the proposition.
We now prove the stated bound on the difference of the drift processes. For any $ s+kh\leq u<s+(k+1)h$ we have
\begin{eqnarray}
&& b(X^h_{s,u}(x))-Y^h_{s,u}(x)\nonumber
\\
&& =\left[\int_{0}^1
\nabla b\left(X^h_{s,s+kh}(x)+\epsilon(X^h_{s,u}(x)-X^h_{s,s+kh}(x))\right)^{\prime}~b\left(X^h_{s,s+kh}(x)\right)~d\epsilon\right]~(u-(s+kh))\nonumber
\\
&&\qquad\hskip1cm+\left[\int_{0}^1
\nabla b\left(X^h_{s,s+kh}(x)+\epsilon(X^h_{s,u}(x)-X^h_{s,s+kh}(x))\right)^{\prime}~
~d\epsilon\right]~\sigma~\left(W_{u}-W_{s+kh}\right)
\label{eq:dicrete_proof_eqn_bias}
\end{eqnarray}
The $\mathbb{L}_n$-norm of the second integral term is bounded by $\Vert \nabla b\Vert \sigma \sqrt{h}$.
The assumption $\langle x,b(x)\rangle\leq -\beta~\Vert x\Vert^2$, for some $\beta>0$,
implies that the stochastic flow $X_{s,t}(x)$ has uniform absolute moments of any order $n\geq 1$ w.r.t. the time horizon; that is, we have
$$
m_n(x)\leq \kappa_n~(1+\Vert x\Vert)\quad \mbox{\rm with $m_n(x)$ defined in (\ref{moments-intro})}.
$$
The stochastic flows $X_{s,t}^h(x)$ also obey a similar moment bound: observe that
for any $ t\in [s+kh,s+(k+1)h[$ we have
$$
\begin{array}{l}
d \Vert X_{s,t}^h(x)\Vert^2\\
\\
\leq \left[-2\beta~\Vert X_{s,t}^h(x)\Vert^2+2~\langle X_{s,t}^h(x), b(X^h_{s,s+kh}(x))-b(X^h_{s,t}(x))\rangle+\sigma^2d\right]~dt+ 2\sigma~X_{s,t}^h(x)^{\prime} dW_t
\end{array}$$
Thus, for any $\epsilon>0$ we have
$$
d \Vert X_{s,t}^h(x)\Vert^2
\leq \left[(-2\beta+\epsilon)\Vert X_{s,t}^h(x)\Vert^2+\epsilon^{-1}\Vert\nabla b\Vert+\sigma^2d\right]~dt+ 2\sigma~X_{s,t}^h(x)^{\prime} dW_t
$$
We can check that the stochastic flows $X_{s,t}^h(x)$ also have uniform moments w.r.t. the time horizon; that is, for any $n\geq 1$ we have that
$$
\widehat{m}_n(x):=\sup_{h\geq 0}\sup_{t\geq s}\mathbb{E}\left[\Vert X_{s,t}^h(x)\Vert^n\right]^{1/n}\leq c_n~(1+\Vert x\Vert)
$$ Using these bounds, we check that
$$
\mathbb{E} (\Vert b (X^h_{s,s+kh}(x))\Vert^n)^{1/n} \leq \Vert b(0) \Vert + \widehat{m}_n(x)~ \Vert \nabla b \Vert
$$
The end of the proof now follows from elementary manipulations and is therefore skipped.
The proof of proposition~\ref{lem:discretize} is now completed.\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\section*{Appendix}
In this appendix we prove
the estimates (\ref{intro-inq-1}) and (\ref{moments-intro}) and proposition~\ref{prop-nabla-2-estimate}.
\subsection*{Proof of (\ref{moments-intro})}\label{moments-intro-proof}
Whenever $( {\cal M })_n$ is satisfied, we have
$$
2\langle x,b_t(x)\rangle+\Vert \sigma_{t}(x)\Vert_F^2\leq \gamma_0+\gamma_1\Vert x\Vert-\gamma_2\Vert x\Vert^2
$$
with the parameters
$$
\gamma_0=\alpha_0+2\beta_0\qquad \gamma_1=\alpha_1+2\beta_1\quad\mbox{and}\quad\gamma_2=2\beta_2-\alpha_2
$$
Observe that
$$
\begin{array}{l}
d \Vert X_{s,t}(x)\Vert^2\\
\\
=\left[2\,\langle X_{s,t}(x),b_t(X_{s,t}(x))\rangle+\Vert \sigma_{t}(X_{s,t}(x))\Vert_F^2\right]~dt+2\sum_k\langle X_{s,t}(x),\sigma_{k,t}(X_{s,t}(x))\rangle~dW^k_t
\end{array}$$
After some elementary computations, for any $n\geq 1$ we check that
$$
\begin{array}{l}
n^{-1}\partial_t\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]
\leq -\left[\gamma_2-2(n-1)\alpha_2\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]\\
\\
\hskip3cm+\left[\gamma_1+2(n-1)\alpha_1\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n-1}\right]+\left[\gamma_0+2(n-1)\alpha_0\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2(n-1)}\right]
\end{array}$$
This implies that
$$
\begin{array}{l}
\partial_t\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}
\leq -\left[\gamma_2-2(n-1)\alpha_2\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}\\
\\
\hskip3cm\displaystyle+\left[\gamma_1+2(n-1)\alpha_1\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/(2n)}+\left[\gamma_0+2(n-1)\alpha_0\right]~\end{array}$$
from which we check that for any $\epsilon>0$ we have
$$
\begin{array}{l}
\partial_t\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}\\
\\
\displaystyle\leq -\left[\gamma_2-2(n-1)\alpha_2-2\epsilon\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}+\frac{1}{8\epsilon}\left[\gamma_1+2(n-1)\alpha_1\right]^2+\left[\gamma_0+2(n-1)\alpha_0\right]~\end{array}$$
This implies that
$$
\begin{array}{l}
\partial_t\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}\\
\\
\displaystyle\leq -2\left[\beta_2-(n-1/2)\alpha_2-\epsilon\right]~\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}+\frac{1}{8\epsilon}\left[\gamma_1+2(n-1)\alpha_1\right]^2+\left[\gamma_0+2(n-1)\alpha_0\right]~\end{array}$$
from which we check that
$$
\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{2n}\right]^{1/n}\leq e^{-2\left[\beta_2-(n-1/2)\alpha_2-\epsilon\right](t-s)}~\Vert x\Vert^2+
\frac{1}{8\epsilon}~\frac{\left[\gamma_1+2(n-1)\alpha_1\right]^2+\left[\gamma_0+2(n-1)\alpha_0\right]}{2\left[\beta_2-(n-1/2)\alpha_2-\epsilon\right]}
$$
as soon as $\epsilon<\beta_2-(n-1/2)\alpha_2$ and $n\geq 1$. Replacing $\epsilon$ by $\epsilon(\beta_2-(n-1/2)\alpha_2)$ and then $(2n)$ by $n$
we check that
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left[\Vert X_{s,t}(x)\Vert^{n}\right]^{1/n}\\
\\
\displaystyle\leq e^{-(1-\epsilon)\beta_2(n)(t-s)}~\Vert x\Vert+
\frac{1}{4\sqrt{\epsilon(1-\epsilon)}}~\frac{\gamma_1(n)+\gamma_0(n)^{1/2}}{\beta_2(n)^{1/2}}
\quad\mbox{\rm with}\quad
\gamma_i(n):=\gamma_i+(n-2)\alpha_i
\end{array}
$$
This ends the proof of (\ref{moments-intro}).\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\subsection*{Proof of proposition~\ref{prop-nabla-2-estimate}}\label{prop-nabla-2-estimate-proof}
The proof of the estimate (\ref{eq-prop-nabla-2-estimate}) is mainly based on the following technical lemma, which is of independent interest.
\begin{lem}
Let $Z_t$ be a nonnegative diffusion process satisfying, in the integral sense, an inequality of the following form
$$
dZ_t\leq (-\lambda Z_t+\alpha_t~\sqrt{Z_t}+\beta_t)~dt+dM_t\quad \mbox{\rm with}\quad \partial_t\langle M\rangle_t\leq (u_t \sqrt{Z_t}+v_tZ_t)^2
$$
for some parameter $\lambda>0$ and some nonnegative processes $(\alpha_t,\beta_t,u_t,v_t)$. In this situation, for any $\epsilon>0$ we have
\begin{equation}\label{key-est-y_n}
\mathbb{E}(Z^n_t)^{1/n}\leq e^{\int_0^t \lambda_{n,s}(\epsilon)ds}~\mathbb{E}(Z^n_0)^{1/n}+\int_0^te^{\int_s^t \lambda_{n,u}(\epsilon)du}~z^n_s(\epsilon)~ds
\end{equation}
with the parameters
\begin{eqnarray*}
\lambda_{n,t}(\epsilon)&:=&-\lambda+\frac{n-1}{2}~v^2_t+\frac{\epsilon}{2}
\\
\displaystyle z^n_t(\epsilon)&:=&
\mathbb{E}\left[\beta_t^n\right]^{1/n}+\frac{n-1}{2}~\mathbb{E}\left[ u^{2n}_t\right]^{1/n}+\frac{1}{\epsilon}~
\left(\mathbb{E}\left[ \alpha_t^{2n}\right]^{1/n}+(n-1)^2~\mathbb{E}\left[(u_tv_t)^{2n}\right]^{1/n}\right)
\end{eqnarray*}
\end{lem}
\proof
Applying It\^o's formula, for any $n\geq 2$, we have
$$
\begin{array}{l}
\displaystyle n^{-1}\partial_t\mathbb{E}(Z^n_t)\\
\\
\displaystyle \leq \mathbb{E}\left[Z^{n-1}_t(-\lambda Z_t+\alpha_t~\sqrt{Z_t}+\beta_t)+\frac{n-1}{2}~(u_t \sqrt{Z_t}+v_tZ_t)^2~Z^{n-2}_t\right]\\
\\
\displaystyle=\left(-\lambda+\frac{n-1}{2}~v^2_t\right)~\mathbb{E}(Z^n_t)+\mathbb{E}\left[\left(\beta_t+\frac{n-1}{2}~u^2_t\right)Z^{n-1}_t\right]+ \mathbb{E}\left(\left[\alpha_t+(n-1)u_tv_t\right]~Z^{n-1/2}_t\right)
\end{array}
$$
On the other hand, for any $\epsilon>0$ we have the almost sure inequality
$$
\left[\alpha_t+(n-1)u_tv_t\right]~Z^{(n-1)/2}_t~Z^{n/2}_t\leq \frac{1}{2\epsilon}~\left[\alpha_t+(n-1)u_tv_t\right]^2~Z^{n-1}_t+\frac{\epsilon}{2}~Z^{n}_t
$$
This implies that
$$
\begin{array}{l}
\displaystyle n^{-1}\partial_t\mathbb{E}(Z^n_t)\\
\\
\displaystyle \leq \lambda_{n,t}(\epsilon)~\mathbb{E}(Z^n_t)+\mathbb{E}\left[\left(\beta_t+\frac{n-1}{2}~u^2_t+\frac{1}{2\epsilon}~\left[\alpha_t+(n-1)u_tv_t\right]^2\right)Z^{n-1}_t\right]\end{array}
$$
Applying H\"older inequality we check that
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left[\left(\beta_t+\frac{n-1}{2}~u^2_t+\frac{1}{2\epsilon}~\left[\alpha_t+(n-1)u_tv_t\right]^2\right)~Z^{n-1}_t\right]\\
\\
\displaystyle\leq
\mathbb{E}\left[\left(\beta_t+\frac{n-1}{2}~u^2_t+\frac{1}{2\epsilon}~\left[\alpha_t+(n-1)u_tv_t\right]^2\right)^n\right]^{1/n}
~\mathbb{E}(Z^n_t)^{1-1/n}\leq
z^n_t(\epsilon)
~\mathbb{E}(Z^n_t)^{1-1/n}
\end{array}
$$
This yields the estimate
$$
\partial_t\mathbb{E}(Z^n_t)^{1/n}=\mathbb{E}(Z^n_t)^{-(1-1/n)}~n^{-1}\partial_t\mathbb{E}(Z^n_t)\leq \lambda_{n,t}(\epsilon)~\mathbb{E}(Z^n_t)^{1/n}+ z^n_t(\epsilon)
$$
This ends the proof of the lemma.
\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
We set
$$
Y_{s,t}(x):=\Vert \nabla^2 X_{s,t}(x)\Vert^2_F\quad \mbox{\rm and}\quad T_{s,t}(x):=\Vert \nabla X_{s,t}(x)\Vert_F
$$
and we also consider the collection of parameters
$$
\begin{array}{rclcrcl}
\Vert \tau\Vert_F&:=&\sup_{t,x}\Vert \tau_t(x)\Vert_F&&
\rho( \upsilon)&:=&\sup_{t,x}\lambda_{\max}(\upsilon_{t}(x))
\end{array}
$$
with the tensor functions $(\tau_t,\upsilon_t)$ introduced in (\ref{tensor-functions-ref}). Observe that
$$
\Vert \tau\Vert_F\leq \Vert \nabla^2b\Vert_F+d~ \Vert \nabla^2\sigma\Vert_{F}^2\quad \mbox{\rm and}\quad
\rho( \upsilon)\leq d~\Vert \nabla^2\sigma\Vert^2_{2}
$$
Whenever $( {\cal T})_2$ is met we have
$$
\mbox{\rm Tr}\left[\nabla^2 X_{s,t}(x)~A_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]\leq -2\lambda_A~Y_{s,t}(x)
$$
Also observe that
$$
\vert\mbox{\rm Tr}\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\tau_t(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right]\vert\leq \Vert \tau\Vert_F~
Y_{s,t}(x)^{1/2}~T_{s,t}(x)^2
$$
and
$$
\mbox{\rm Tr}\left[\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]\upsilon_t(X_{s,t}(x))\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]^{\prime}\right]\leq \rho( \upsilon)~T_{s,t}(x)^4
$$
In the same vein, we have
$$
\begin{array}{l}
\vert \mbox{\rm Tr}
\left\{\left[\nabla X_{s,t}(x)\otimes \nabla X_{s,t}(x)\right]~\nabla^2\sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right.\\
\displaystyle\hskip7cm\left.+\nabla^2 X_{s,t}(x)~\nabla \sigma_{t,k}(X_{s,t}(x))~\nabla^2 X_{s,t}(x)^{\prime}\right\}\vert\\
\\
\leq \Vert \nabla^2\sigma_k\Vert_F~T_{s,t}(x)^2~Y_{s,t}(x)^{1/2}+\rho(\nabla \sigma_k)~Y_{s,t}(x)
\end{array}
$$
We are now in position to prove proposition~\ref{prop-nabla-2-estimate}.
{\bf Proof of proposition~\ref{prop-nabla-2-estimate}:}
Applying the above lemma to the processes
$$
Z_t=Y_{s,t}(x)\qquad \lambda=2\lambda_A\qquad \alpha_t=2\Vert \tau\Vert_F~T_{s,t}(x)^2\qquad \beta_t=\rho( \upsilon)~T_{s,t}(x)^4
$$
and the parameters
$$
u_t=2\sqrt{d}~ \Vert \nabla^2\sigma\Vert_{F}~T_{s,t}(x)^2\quad\mbox{\rm and}\quad v_t=2\sqrt{d}~ \rho_{\star}(\nabla \sigma)
$$
we obtain the estimate (\ref{key-est-y_n}) with the parameters
\begin{eqnarray*}
\lambda_{n,t}(\epsilon)&:=&-2\left[\lambda_A-d(n-1) \rho_{\star}(\nabla \sigma)^2-\frac{\epsilon}{4}\right]
\\
\displaystyle z^n_t(\epsilon)&:=&\left\{\rho( \upsilon)~
+2d(n-1)~\Vert \nabla^2\sigma\Vert_{F}^2~\right.\left.+\frac{4}{\epsilon}~
\left(
\Vert \tau\Vert_F^2~
+4~d^2(n-1)^2~ \rho_{\star}(\nabla \sigma)^2~\Vert \nabla^2\sigma\Vert_{F}^2~\right)\right\}\\
&&\hskip3cm\times \mathbb{E}\left[\Vert \nabla X_{s,t}(x)\Vert_F^{4n}\right]^{1/n}
\end{eqnarray*}
Observe that
$$
z^n_t(\epsilon)
\leq c~ n^2~(1\vee\epsilon^{-1})~\rchi(b,\sigma)^2~\mathbb{E}\left[\Vert \nabla X_{s,t}(x)\Vert_F^{4n}\right]^{1/n}
$$
for some universal constant $c<\infty$ and the parameter $\rchi(b,\sigma)$ defined in (\ref{def-chi-b}).
Using (\ref{def-4th}) we check that
$$
\begin{array}{l}
\displaystyle\mathbb{E}\left(\Vert \nabla^2 X_{s,t}(x)\Vert^{2n}_F\right)^{1/n}\\
\\
\displaystyle\leq c n^2~(1\vee\epsilon^{-1})~\rchi(b,\sigma)^2~ \int_s^te^{-2\left[\lambda_A-d(n-1) \rho_{\star}(\nabla \sigma)^2-\frac{\epsilon}{4}\right](t-u)}~e^{-4\left[\lambda_A-(n-1)\rho(\nabla\sigma)^2\right](u-s)}~du\\
\\
\displaystyle= c n^2~(1\vee\epsilon^{-1})~\rchi(b,\sigma)^2~e^{-2\left[\lambda_A-d(n-1) \rho_{\star}(\nabla \sigma)^2-\frac{\epsilon}{4}\right](t-s)}~\\
\\
\hskip3cm\displaystyle \int_s^te^{-2\left[
\lambda_A-(n-1)\rho(\nabla\sigma)^2+(n-1) [d\rho_{\star}(\nabla \sigma)^2-\rho(\nabla\sigma)^2]+\frac{\epsilon}{4}\right](u-s)}~du
\end{array}$$
Assume that
$$
\lambda_A>d(n-1) \rho_{\star}(\nabla \sigma)^2
$$
In this case there exists some $0<\epsilon_n\leq 1$ such that for any $0<\epsilon\leq \epsilon_n$ we have
$$
\lambda_A-d(n-1) \rho_{\star}(\nabla \sigma)^2>\epsilon
$$
and therefore
$$
\displaystyle\mathbb{E}\left(\Vert \nabla^2 X_{s,t}(x)\Vert^{2n}_F\right)^{1/(2n)}\leq c ~n~\epsilon^{-1}~\rchi(b,\sigma)~\exp{\left(-\left[\lambda_A-d(n-1) \rho_{\star}(\nabla \sigma)^2-\epsilon\right](t-s)\right)}
$$
This ends the proof of the proposition.\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
\subsection*{Proof of (\ref{intro-inq-1})}\label{intro-inq-1-proof}
Using (\ref{intro-inq-nabla}), the generalized Minkowski inequality applied to (\ref{Alekseev-grobner}) whenever $( {\cal T})_{n/\delta}$ is met for some $\delta\in ]0,1[$ and $n\geq 2$ gives
\begin{equation}\label{intro-inq-0}
\begin{array}{l}
\displaystyle
\mathbb{E}\left[\Vert T_{s,t}(\Delta a,\Delta b)(x)\Vert^n\right]^{1/n}\\
\\
\displaystyle \leq \frac{\kappa_{n/\delta}}{\lambda(n/\delta)}~\left(\vertiii{\Delta b(x)}_{n/(1-\delta)}+\vertiii{\Delta a(x)}_{n/(1-\delta)}\right)\quad \mbox{\rm
with $(\kappa_n,\lambda(n))$ given in (\ref{ref-tan-hess}). }
\end{array}
\end{equation}
The Skorohod integral $S_{s,t}(\Delta \sigma)(x)$ is estimated using theorem~\ref{theo-quantitative-sko}.
Using (\ref{intro-inq-0}) and (\ref{intro-inq-s}) we check that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}\\
\\
\displaystyle\leq \kappa_{(\delta_1,\delta_2),n}~\left(\vertiii{\Delta a(x)}_{n/(1-\delta_1)}+\vertiii{\Delta b(x)}_{n/(1-\delta_1)}
+\vertiii{\Delta\sigma(x)}_{2n/\delta_2}~(1\vee\Vert x\Vert)\right)\end{array}
$$
as soon as the regularity conditions
$( {\cal T})_{n/\delta_1}$,
$( {\cal M })_{2n/\delta_2}$ and $( {\cal T})_{2n/(1-\delta_2)}$ are satisfied for some parameter $n\geq 2$ and some $\delta_1,\delta_2\in ]0,1[$. Choosing $\delta_1=(1-\delta_2)/2$ and setting $\delta=\delta_2$ we check that
$$
\begin{array}{l}
\displaystyle
\mathbb{E}\left[\Vert X_{s,t}(x)-\overline{X}_{s,t}(x)\Vert^{n}\right]^{1/n}\\
\\
\displaystyle\leq \kappa_{\delta,n}~\left(\vertiii{\Delta a(x)}_{2n/(1+\delta)}+\vertiii{\Delta b(x)}_{2n/(1+\delta)}
+\vertiii{\Delta\sigma(x)}_{2n/\delta}~(1\vee\Vert x\Vert)\right)\end{array}
$$
as soon as $( {\cal M })_{2n/\delta}$ and $( {\cal T})_{2n/(1-\delta)}$ are satisfied for some parameter $n\geq 2$ and some $\delta\in ]0,1[$.
For instance, $( {\cal M })_{2n/\delta}$ and $( {\cal T})_{2n/(1-\delta)}$ are satisfied as soon as
$$
\beta_2-\alpha_2/2>(n/\delta-1)~\alpha_2\quad\mbox{\rm and}\quad \lambda_A>d(n/(1-\delta)-1)~ \rho_{\star}(\nabla \sigma)^2
$$
This ends the proof of (\ref{intro-inq-1}).\hfill\hbox{\vrule height 5pt width 5pt depth 0pt}\medskip \\
Inclusive lepton scattering has for many decades been the most
important tool for probing the internal quark and gluon (or parton)
structure of nucleons and nuclei. Structure functions extracted
from inclusive deep-inelastic scattering (DIS) experiments display
the central features of quantum chromodynamics (QCD) ---
asymptotic freedom at short distances ({\it via} structure function
scaling and its violation) and confinement at large distance scales
({\it via} the momentum dependence of parton distributions).
Since the late 1960s, DIS experiments have yielded an impressive
data set that maps nucleon structure functions over several orders
of magnitude in the Bjorken scaling variable, $x$, and the squared
four-momentum transfer, $Q^2$. These data, supplemented by cross
sections from hadronic collisions and other high-energy processes,
have enabled a detailed picture of the parton distribution functions
(PDFs) of the nucleon to emerge through global QCD analyses
(see Refs.~\cite{Jimenez13, Forte13} and references therein).
At lower energies, where nonperturbative quark--gluon interactions
are important and the inclusive lepton--nucleon cross section is
dominated by nucleon resonances, the structure functions reveal
another intriguing feature of QCD, namely, quark--hadron duality.
Here, the low energy cross section, when averaged over appropriate
energy intervals, is found to resemble the high energy result,
whose $Q^2$ dependence is described by perturbative QCD.
In this context, quark--hadron duality provides a unique
perspective on the relationship between confinement
and asymptotic freedom, and establishes a critical link between
the perturbative and nonperturbative regimes of QCD.
In the framework of QCD, quark--hadron duality can be formally
interpreted in terms of structure function moments \cite{DeRujula75}.
From the operator product expansion (OPE), the moments can be
expressed as a series in $1/Q^2$, with coefficients given by
matrix elements of local quark--gluon operators of a given twist
\cite{Wilson69}.
The leading (twist 2) term corresponds to scattering from a single
parton, while higher twist terms correspond to multi--quark
and quark--gluon interactions.
At low $Q^2$ the resonance region makes a significant contribution
to the structure function moments, so that here one might expect
a strong $Q^2$ dependence of the moments arising from the higher
twist terms in the OPE.
In practice, however, the similarity of the structure function moments
at low $Q^2$ and the moments extracted from high energy cross sections
suggests the dominance of the leading twist contribution.
The combined higher twist, multi-parton contributions appear to play
a relatively minor role down to scales of the order
$Q^2 \sim 1$~GeV$^2$.
This nontrivial relationship between the low-energy cross section
and its deep-inelastic counterpart was first observed by Bloom and
Gilman \cite{Bloom70, Bloom71} in the early DIS measurements that
were instrumental in establishing structure function scaling.
More recently, the availability of extensive, precise structure
function data from Jefferson Lab and elsewhere, over a wide range
of kinematics, has opened up the possibility for in-depth studies
of quark--hadron duality.
Duality has now been observed in the proton $F_2$ and $F_L$
structure functions \cite{Niculescu00, Liang, Malace09, Bianchi04,
Melnitchouk05, Monaghan13}, the $F_2$ structure function of nuclei
\cite{Niculescu06}, the spin-dependent $g_1$ structure functions
of the proton and $^3$He \cite{HERMES03, Bosted07, Solvignon08},
the individual helicity-1/2 and 3/2 virtual photoproduction cross
sections for the proton \cite{Malace11}, and in parity-violating
electron--deuteron scattering \cite{PVDIS}.
To establish the dynamical origin of quark-hadron duality in the
nucleon requires one to also study the low-$Q^2$ structure of the
neutron. Models based on four-quark higher twist contributions to
DIS suggest that duality in the proton could arise from accidental
cancellations between quark charges, which would not occur for the
neutron \cite{Brodsky00}.
Unfortunately, the absence of high-density free neutron targets
means that essentially all information on the structure functions
of the neutron has had to be derived from measurements on deuterium.
Typically, the deuterium data are corrected for Fermi smearing and
other nuclear effects \cite{Malace10, Arrington12, Osipenko06,
Weinstein11, Hen11, Arrington09, CJ11},
which introduces an element of model
dependence into the extraction procedure.
This is particularly problematic in the nucleon resonance region,
where Fermi motion effects lead to significant smearing of the
resonant structures.
The existence of duality in the neutron $F_2$ structure function
was suggested recently \cite{Malace10} in an analysis which used
an iterative deconvolution method \cite{Kahn08} to extract neutron
resonance spectra from inclusive proton and deuteron $F_2$ data
\cite{Malace09}. A model-independent confirmation of duality in the
neutron, however, has to date not been possible.
Recently, a new experimental technique, based on spectator nucleon
tagging \cite{Fenker08}, has been used to extract the free neutron
$F_2$ structure function \cite{Baillie12}. By detecting low-momentum
protons at backward angles in electron deuteron scattering, the BONuS
experiment at Jefferson Lab measured $F_2$ for the neutron in both
the resonance and DIS regions, with minimal uncertainty from nuclear
smearing and rescattering corrections \cite{Tkachenko14}.
In the present work, we use the BONuS data to quantitatively measure
for the first time the degree to which duality holds for the $F_2$
structure function of the free neutron. Because the results reported
here use data from an experimentally--isolated neutron target,
one expects significantly reduced systematic uncertainties compared
with those in the model-dependent analysis of inclusive deuterium data
\cite{Malace10}.
For the theoretical analysis of duality we use the method of
truncated structure function moments \cite{Forte99, Forte01,
Piccione01, Kotlorz07}, which were applied to the resonance region
$F_2$ proton data by Psaker {\it et al.} \cite{Psaker08}.
Here, the $n$-th truncated moment of the $F_2$ structure function
is defined as
\begin{equation}
M_N(x_{\rm min},x_{\rm max},Q^2)
= \int_{x_{\rm min}}^{x_{\rm max}} dx\, x^{N-2} F_2(x,Q^2),
\label{eq:moments2}
\end{equation}
where the integration over $x$ is restricted to an interval
between $x_{\rm min}$ and $x_{\rm max}$.
This method avoids extrapolation of the integrand into poorly
mapped kinematic regions, and is particularly suited for the study
of duality where an $x$ interval can be defined by a resonance
width around an invariant mass $W^2 = M^2 + Q^2 (1-x)/x$,
where $M$ is the nucleon mass.
As the position of the resonance peak varies with $x$ for different
$Q^2$ values, the values for $x_{\rm min}$ and $x_{\rm max}$ evolve
to the appropriate invariant mass squared region.
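Explicitly, inverting the relation between $W^2$ and $x$ gives
\begin{equation}
x = \frac{Q^2}{Q^2 + W^2 - M^2},
\end{equation}
so that $x_{\rm min}$ ($x_{\rm max}$) corresponds to the upper (lower)
edge of the chosen $W^2$ interval. For example, at $Q^2 = 2$~GeV$^2$ the
total resonance region $1.3 \leq W^2 \leq 4.0$~GeV$^2$ maps onto
$0.39 \lesssim x \lesssim 0.83$.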
For the BONuS data, we consider four ranges in $W^2$, corresponding
to the three prominent resonance regions as well as the combined
resonance region,
\begin{eqnarray}
\label{eq:Wregions}
&& 1.3 \leq W^2 \le 1.9~{\rm GeV}^2\ \ \
\textrm{[1st (or $\Delta$) region]}, \nonumber\\
&& 1.9 \leq W^2 \le 2.5~{\rm GeV}^2\ \ \
\textrm{[2nd region]}, \nonumber\\
&& 2.5 \leq W^2 \le 3.1~{\rm GeV}^2\ \ \
\textrm{[3rd region]}, \\
&& 1.3 \leq W^2 \le 4.0~{\rm GeV}^2\ \ \
\textrm{[total resonance]}. \nonumber
\end{eqnarray}
After reviewing the BONuS experiment in Sec.~\ref{sec:experiment},
the results for several low truncated moments (corresponding to
$N=2$, 4 and 6) of the neutron $F_2$ structure function are
presented in Sec.~\ref{sec:truncated}.
The implications of the new data for local quark-hadron duality
and its violation are discussed by comparing with recent global
PDF parametrizations and previous model-dependent data analyses
(Sec.~\ref{ssec:n_duality}).
The isospin dependence of local duality is studied by comparing
the neutron moments with corresponding moments of the proton
$F_2$ structure function (Sec.~\ref{ssec:n_isospin}).
Finally, we summarize our results and conclusions in
Sec.~\ref{sec:conclusion}.
\section{The BONuS Experiment}
\label{sec:experiment}
The results reported here rely on a novel experimental technique aimed
at eliminating or substantially reducing the theoretical uncertainties
involved in extracting neutron data from nuclear targets. The BONuS
(Barely Off--shell Nucleon Structure) experiment at Jefferson Lab
\cite{Fenker08, Baillie12, Tkachenko14} used a Radial Time Projection
Chamber (RTPC) to detect low momentum spectator protons produced in
electron--deuterium scattering in conjunction with electrons
detected with CLAS \cite{Mecking03} in Hall B.
The tagging technique essentially eliminates effects of Fermi smearing
\cite{FS88}, while the restriction to backward low-momentum spectator
protons minimizes final state interactions \cite{cdg04, Cosyn11,
Cosyn14} and ensures that the neutron is nearly on-shell
\cite{MSS97, Baillie12}.
The BONuS experiment ran in 2005 and acquired electron--deuteron
scattering data at two electron beam energies, $E=4.223$ and 5.262~GeV.
The RTPC consisted of three layers of gas electron multipliers
surrounding a thin, pressurized gas deuterium target, and was able
to detect protons with momenta as low as 70~MeV.
The experiment and data analysis are described in detail in
Ref.~\cite{Tkachenko14}. Ratios of neutron to proton $F_2$ structure
functions and the absolute neutron $F_2$ structure function were
extracted over a wide kinematic range and for spectator proton momenta
between 70 and 100~MeV. The total systematic uncertainty in the
extracted neutron structure function was 8.7\% \cite{Tkachenko14},
with an overall 10\% scale uncertainty.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{bonus_q2x_coverage.pdf}
\end{center}
\caption{Kinematic coverage of the BONuS data.
The solid lines denote the fixed-$W^2$ thresholds for the
four final state mass regions in Eq.~(\ref{eq:Wregions}),
from $W^2=1.3$ to 4.0~GeV$^2$.}
\label{fig:bonus_kinematics}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{f2n_model_validation_overlay.pdf}
\end{center}
\caption{Representative neutron $F_2^n$ structure function spectra
from the BONuS experiment \cite{Tkachenko14} at
$Q^2=1.2$~GeV$^2$ (top panel) and
$Q^2=2.4$~GeV$^2$ (bottom panel). The open (filled)
circles represent data for a beam energy of $E=4.223$
(5.262)~GeV. The solid curve is computed from the ABKM
global PDF parametrization \cite{ABKM} including higher
twist effects and target mass corrections.}
\label{fig:spectra}
\end{figure}
The kinematic coverage, shown in Fig.~\ref{fig:bonus_kinematics}
(with the 4.223 and 5.262~GeV data combined), extends from the
threshold to the deep-inelastic region. The curves in
Fig.~\ref{fig:bonus_kinematics} represent the fixed-$W^2$
thresholds for the four mass regions considered.
Typical neutron $F_2^n$ spectra are shown in Fig.~\ref{fig:spectra}
for $Q^2=1.2$ and 2.4~GeV$^2$, with the data restricted to spectator
proton angles greater than 100$^\circ$ relative to the momentum
transfer, and proton momenta between 70 and 100~MeV.
The BONuS results are compared with the ABKM global fit \cite{ABKM}
to deep-inelastic and other high-energy scattering data, with the
inclusion of target mass corrections and higher twist effects.
The qualitative agreement between the parametrization and data
suggests evidence for quark--hadron duality, which we explore
more quantitatively in the following sections.
\section{Truncated Moments and Local Quark--Hadron Duality}
\label{sec:truncated}
Because the kinematic variables $Q^2$, $x$ and $W^2$ are not independent,
a range in $W^2$ at fixed $Q^2$ implies a corresponding range in $x$.
This makes possible a straightforward integration of the experimental
$F_2^n$ structure function data to obtain the truncated moments $M_n$
in Eq.~(\ref{eq:moments2}). To minimize the model dependence, we
evaluate the integrals based solely on the experimentally measured
data points, without using any interpolating or extrapolating function.
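As an illustration of this procedure, the following sketch evaluates truncated moments by direct trapezoidal summation of tabulated $(x, F_2)$ points over the $x$ interval fixed by each $W^2$ range; the $F_2$ grid shown is a purely illustrative placeholder, not the BONuS data.
\begin{verbatim}
# Sketch of Eq. (1): truncated moment from tabulated (x, F2) points,
# integrating only over measured points.  F2 below is a placeholder.
import numpy as np

M2 = 0.938272 ** 2                    # nucleon mass squared [GeV^2]

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def x_of_W2(W2, Q2):
    # invert W^2 = M^2 + Q^2 (1 - x)/x
    return Q2 / (Q2 + W2 - M2)

def truncated_moment(N, x, F2, W2_lo, W2_hi, Q2):
    x_lo, x_hi = x_of_W2(W2_hi, Q2), x_of_W2(W2_lo, Q2)
    sel = (x >= x_lo) & (x <= x_hi)
    return trap(x[sel] ** (N - 2) * F2[sel], x[sel])

Q2 = 1.2                              # GeV^2
x = np.linspace(0.25, 0.80, 56)       # illustrative x grid
F2 = 0.3 * (1.0 - x) ** 3             # placeholder shape for F2

for W2_lo, W2_hi in [(1.3, 1.9), (1.9, 2.5), (2.5, 3.1), (1.3, 4.0)]:
    print(W2_lo, W2_hi, truncated_moment(2, x, F2, W2_lo, W2_hi, Q2))
\end{verbatim}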
\subsection{Truncated neutron moments}
\label{ssec:n_duality}
The second ($N=2$) truncated neutron moments, $M_2^n$, obtained from
the BONuS data are shown in Fig.~\ref{fig:moments} as a function of
$Q^2$ for the four $W^2$ intervals defined in Eq.~(\ref{eq:Wregions}).
The numerical values for the moments are also listed in
Table~\ref{tab:moments}.
The quoted errors take into account the experimental statistical and
systematic uncertainties added in quadrature, but do not include
the 10\% scale uncertainty.
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{bonus_moments_from_data2_0scale.pdf}
\end{center}
\caption{Second ($N=2$) neutron truncated moments $M_2^n$ versus $Q^2$
for the four resonance regions in Eq.~(\ref{eq:Wregions})
[labeled as ``first'', ``second'', ``third'' and ``total''].
The moments obtained from the BONuS data (filled circles)
are compared with moments computed from the ABKM global PDF
parametrization \cite{ABKM} including target mass and
higher twist corrections (shaded rectangles).}
\label{fig:moments}
\end{figure}
The experimental moments are compared with the moments calculated
from the ABKM global PDF parametrization \cite{ABKM}, including
finite-$Q^2$ corrections from the target mass and an $x$-dependent
parameterization of the leading (${\cal O}(1/Q^2)$) higher twist
effects.
The latter are needed in order to obtain a more quantitative
description of duality in the low-$Q^2$ region, to which the
structure functions from the global fits (which are primarily
constrained by high energy data) are extrapolated.
The comparison shows generally very good agreement in the second
and third resonance regions, and in the total integrated $W^2$
interval, while the ABKM results underestimate the data somewhat
in the $\Delta$ resonance region.
The corresponding higher order truncated moments (for $N=4$ and $N=6$)
are listed in Tables~\ref{tab:moments4} and~\ref{tab:moments6},
respectively. Comparison with the ABKM fit (not shown) reveals a
similar pattern as for the $N=2$ moments, although the deviation in
the lowest-$W$ interval is more pronounced, especially at low $Q^2$,
because of the greater weighting given to the high-$x$ region in
the higher moments.
\begin{table}[t]
\begin{tabular}{c|c|c|c|c} \hline
& \multicolumn{4}{c} {$M_2\, [\times 10^{-3}]$} \\ \hline
$Q^2$ [GeV$^2$]& 1st & 2nd & 3rd & total \\ \hline
1.0 & 31.5 $\pm$ 1.1 & 16.4 $\pm$ 0.4 & 14.1 $\pm$ 0.3 & 76.7 $\pm$ 1.2 \\
1.2 & 23.5 $\pm$ 0.5 & 15.3 $\pm$ 0.3 & 13.0 $\pm$ 0.3 & 67.4 $\pm$ 0.6 \\
1.4 & 17.7 $\pm$ 0.4 & 13.5 $\pm$ 0.2 & 12.1 $\pm$ 0.3 & 57.7 $\pm$ 0.5 \\
1.7 & 12.3 $\pm$ 0.3 & 10.7 $\pm$ 0.2 & 10.5 $\pm$ 0.2 & 46.7 $\pm$ 0.5 \\
2.0 & 8.4 $\pm$ 0.2 & 8.5 $\pm$ 0.2 & 9.0 $\pm$ 0.2 & 38.4 $\pm$ 0.4 \\
2.4 & 5.8 $\pm$ 0.2 & 6.1 $\pm$ 0.1 & 7.1 $\pm$ 0.3 & 29.5 $\pm$ 0.4 \\
2.9 & 3.4 $\pm$ 0.1 & 4.5 $\pm$ 0.1 & 5.3 $\pm$ 0.3 & 21.5 $\pm$ 0.4 \\
3.4 & 2.1 $\pm$ 0.1 & 3.1 $\pm$ 0.1 & 4.0 $\pm$ 0.2 & 15.8 $\pm$ 0.3 \\
4.1 & 1.3 $\pm$ 0.1 & 1.7 $\pm$ 0.1 & 2.8 $\pm$ 0.1 & --- \\ \hline
\end{tabular}
\caption{Second ($N=2$) truncated moments (in units of $10^{-3}$)
of the neutron $F_2$ structure function from the BONuS data
for the $W^2$ regions in Eq.~(\ref{eq:Wregions}).
The errors are a quadrature sum of statistical and systematic
uncertainties, but do not include the overall 10\%
normalization uncertainty.}
\label{tab:moments}
\end{table}
\begin{table}[h]
\resizebox{8.5 cm}{!}{
\begin {tabular}{c|c|c|c|c}\hline
& \multicolumn{4}{c} {$M_4\, [\times 10^{-3}]$} \\ \hline
$Q^2$ [GeV$^2$]& 1st & 2nd & 3rd & total \\ \hline
1.0 & 11.58 $\pm$ 0.43 & 3.09 $\pm$ 0.08 & 1.69 $\pm$ 0.04 & 17.49 $\pm$ 0.44 \\
1.2 & 9.80 $\pm$ 0.21 & 3.51 $\pm$ 0.06 & 1.95 $\pm$ 0.04 & 16.78 $\pm$ 0.22 \\
1.4 & 8.11 $\pm$ 0.17 & 3.60 $\pm$ 0.06 & 2.17 $\pm$ 0.04 & 15.61 $\pm$ 0.19 \\
1.7 & 6.27 $\pm$ 0.14 & 3.40 $\pm$ 0.06 & 2.33 $\pm$ 0.05 & 14.01 $\pm$ 0.17 \\
2.0 & 4.67 $\pm$ 0.14 & 3.08 $\pm$ 0.06 & 2.36 $\pm$ 0.06 & 12.45 $\pm$ 0.17 \\
2.4 & 3.48 $\pm$ 0.11 & 2.54 $\pm$ 0.06 & 2.20 $\pm$ 0.08 & 10.59 $\pm$ 0.15 \\
2.9 & 2.22 $\pm$ 0.10 & 2.11 $\pm$ 0.07 & 1.93 $\pm$ 0.09 & 8.52 $\pm$ 0.16 \\
3.4 & 1.44 $\pm$ 0.09 & 1.58 $\pm$ 0.07 & 1.64 $\pm$ 0.08 & 6.72 $\pm$ 0.15 \\
4.1 & 0.95 $\pm$ 0.08 & 0.98 $\pm$ 0.07 & 1.29 $\pm$ 0.06 & --- \\ \hline
\end{tabular}}
\caption{As in Table~\ref{tab:moments}, but for the $N=4$ moment.}
\label{tab:moments4}
\end{table}
\begin{table}[b]
\resizebox{8.5 cm}{!}{
\begin {tabular}{c|c|c|c|c}\hline
& \multicolumn{4}{c} {$M_6\, [\times 10^{-3}]$} \\ \hline
$Q^2$ [GeV$^2$]& 1st & 2nd & 3rd & total \\ \hline
1.0 & 4.39 $\pm$ 0.18 & 0.60 $\pm$ 0.01 & 0.20 $\pm$ 0.01 & 5.28 $\pm$ 0.18 \\
1.2 & 4.19 $\pm$ 0.10 & 0.82 $\pm$ 0.01 & 0.30 $\pm$ 0.01 & 5.45 $\pm$ 0.10 \\
1.4 & 3.79 $\pm$ 0.09 & 0.98 $\pm$ 0.02 & 0.39 $\pm$ 0.01 & 5.38 $\pm$ 0.09 \\
1.7 & 3.24 $\pm$ 0.08 & 1.09 $\pm$ 0.02 & 0.52 $\pm$ 0.01 & 5.17 $\pm$ 0.08 \\
2.0 & 2.62 $\pm$ 0.08 & 1.13 $\pm$ 0.02 & 0.62 $\pm$ 0.02 & 4.82 $\pm$ 0.09 \\
2.4 & 2.12 $\pm$ 0.07 & 1.06 $\pm$ 0.02 & 0.68 $\pm$ 0.02 & 4.41 $\pm$ 0.08 \\
2.9 & 1.45 $\pm$ 0.07 & 1.00 $\pm$ 0.03 & 0.71 $\pm$ 0.03 & 3.77 $\pm$ 0.08 \\
3.4 & 0.99 $\pm$ 0.07 & 0.82 $\pm$ 0.04 & 0.67 $\pm$ 0.03 & 3.14 $\pm$ 0.09 \\
4.1 & 0.68 $\pm$ 0.06 & 0.56 $\pm$ 0.04 & 0.60 $\pm$ 0.03 & --- \\ \hline
\end{tabular}}
\caption{As in Table~\ref{tab:moments}, but for the $N=6$ moment.}
\label{tab:moments6}
\end{table}
Note that while early phenomenological analyses of quark--hadron
duality typically compared resonance region data at low $Q^2$ with
scaling functions extrapolated from fits to high-$W$ cross sections
\cite{Bloom70, Bloom71}, more recent quantitative analyses
\cite{Malace09, Malace10} have emphasized the need to take
into account the $Q^2$ dependence in the high-$W$ data,
including both leading and higher twist contributions.
This is especially important in the high-$x$ region, where the
separation between the leading and higher twists is more model
dependent due to the absence of high-$Q^2$ measurements, and
comparison of resonance region data with the total extrapolated
structure functions reveals an enhanced persistence of duality
down to lower values of $Q^2$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.42]{bonus_moments_f2ndth_ratio2_0scale_try.pdf}
\end{center}
\caption{Ratios of truncated moments of the neutron $F_2$ structure
function from the BONuS data to those computed from the ABKM
global PDF parametrization \cite{ABKM} including finite-$Q^2$
effects (filled circles) as a function of $Q^2$ for the four
$W^2$ intervals in Eq.~(\ref{eq:Wregions}).
The empirical moments are compared with the results of the
model-dependent analysis of inclusive DIS data \cite{Malace10}
(open circles), and with ratios computed from the CJ12
distributions \cite{CJ12}, with leading twist only
(dotted lines) and including finite-$Q^2$ effects
(solid lines).
All ratios are taken relative to the ABKM moments.}
\label{fig:data_theory_m2}
\end{figure}
To study local quark--hadron duality in detail, we form ratios of
the truncated moments of $F_2^n$ obtained from the BONuS data to the
moments computed from the ABKM reference structure function \cite{ABKM},
over the same range of $x$. The ratios for the $M_2^n$ moments are
shown in Fig.~\ref{fig:data_theory_m2} as a function of $Q^2$ for
the four invariant mass regions in Eq.~(\ref{eq:Wregions}).
The ratios for the second, third and total resonance regions are
close to unity, to within $\sim 10\%$ over nearly the entire range
of $Q^2=1-4$~GeV$^2$, and exhibit weak scale dependence.
This points to a dramatic confirmation of the validity of local
duality for the neutron in these regions.
In the first resonance region, the measured moment is
$\sim 20\%-30\%$ larger than the PDF-based fit, but still displays
a similar $Q^2$ behavior.
This could be interpreted as either a stronger violation of local
duality in the $\Delta$ region, which may be expected at lower $W$,
or possibly underestimated strength of the ABKM parametrization
in the large-$x$ regime, to which this $W$ region corresponds.
While this is difficult to disentangle experimentally, duality is
expected to hold to better accuracy for integrals obtained over
regions containing multiple final states.
The confirmation of the approximate validity of duality in $F_2^n$
from the BONuS data disfavors the suggestion \cite{Brodsky00} that
duality occurs in the proton because of accidental cancellations of
quark charges associated with higher twist, four-quark operators,
and disagrees with the prediction that duality should therefore
not be seen in the neutron.
This conclusion was also reached in the model-dependent analysis
by Malace {\it et al.} \cite{Malace10}, who studied duality in the
neutron by extracting the $F_2^n$ structure function from inclusive
DIS data using phenomenological deuteron wave functions and an
iterative deconvolution procedure \cite{Kahn08}.
Overall, the BONuS data are in good agreement with the earlier
results \cite{Malace10}, within the experimental uncertainties,
although they appear to lie systematically higher in the $\Delta$
region.
This may be associated with the nuclear corrections in the deuteron,
which are subject to greater uncertainties at the largest $x$
(smallest $W$) values, or a systematic bias of the subtraction
method in relation to the various theoretical assumptions and
models \cite{Arrington12}.
The relevance of large-$x$ uncertainties and finite-$Q^2$ corrections
in global PDF fits is illustrated in Fig.~\ref{fig:data_theory_m2},
where the experimental and computed ABKM moments are also compared
with the moments calculated from the CTEQ--Jefferson Lab (CJ) global
PDF parametrization \cite{CJ12} with and without higher twist
corrections.
While the ratio of the ABKM and CJ moments is close to unity over the
entire range of $Q^2$ considered when finite-$Q^2$ effects are included,
the deviation from unity of the ratio computed from only the leading
twist components of the CJ fit can be up to $30\%-40\%$ for the
integrated resonance region, and up to twice as much for the
$\Delta$ region.
This suggests an important role played by the finite-$Q^2$ corrections
to the scaling functions in effecting the cancellations between the
individual resonance regions necessary for the realization of
quark-hadron duality \cite{Isgur01, Close01, Close03}.
However, even incorporating finite-$Q^2$ corrections, global PDF
fits can differ significantly in the large-$x$ (low-$W$) regime.
Because of the $Q^2$ and $W^2$ cuts applied to the global data set,
PDFs in the largest-$x$ regions relevant for this analysis are
essentially unconstrained, resulting in large uncertainties in
the extrapolated $x \to 1$ behavior \cite{CJ10}.
\subsection{Isospin dependence}
\label{ssec:n_isospin}
The stronger violation of local duality in the $\Delta$ region is
also evident in the ratio of neutron to proton truncated moments,
shown in Fig.~\ref{fig:data_f2nf2p} compared with the reference
ABKM parametrization \cite{ABKM} that was used to compute both
the proton and neutron moments. To obtain the empirical proton
truncated moments in the resonance region, the Christy-Bosted
global fit \cite{Christy10} was utilized.
(Duality in the proton structure function moments themselves was
studied in detail in previous analyses \cite{Malace09}, and
generally confirmed at the $10\%-15\%$ level for the $N=2$ moment
when integrated over the entire resonance region.)
The significant duality violation in the neutron/proton ratio observed
in the $\Delta$ region can be understood from the isovector nature
of the $\Delta$-isobar and the relatively small nonresonant
background on which it sits.
In the limit of exact isospin symmetry, the transitions from a ground
state nucleon to an isospin-3/2 resonance would be identical for
protons and neutrons. Nonresonant background and isospin symmetry
breaking contributions give rise to differences between proton and
neutron moments, but these are typically very small in the $\Delta$
region.
In contrast, the proton and neutron deep-inelastic structure functions
(either leading twist only or with higher twist corrections) in the
$\Delta$ region are expected to be quite different, since at large $x$
the neutron structure function is strongly suppressed relative to the
proton, $F_2^n \ll F_2^p$ \cite{Close88, Meln96}.
The fact that the experimental $M_2^n/M_2^p$ ratio in the high-$x$
region lies somewhat higher than the DIS parametrization (even more
pronounced than in Ref.~\cite{Malace10}) is therefore consistent
with these expectations.
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{bonus_moments_f2nf2p_ratio2_0scale.pdf}
\end{center}
\caption{Ratios $M_2^n/M_2^p$ of neutron to proton truncated moments
of the $F_2$ structure function versus $Q^2$, for the four
$W$ regions in Eq.~(\ref{eq:Wregions}). The BONuS results
(filled circles) are compared with the moments computed from
the ABKM global PDF parametrization including target mass
and higher twist corrections (solid lines).
In both cases the proton moments are evaluated from the
same ABKM fit \cite{ABKM}.}
\label{fig:data_f2nf2p}
\end{figure}
A similar comparison of the neutron to proton moments in the
second and third resonance regions in Fig.~\ref{fig:data_f2nf2p}
shows significantly better agreement with the DIS parametrization.
Based on simple quark models and assuming magnetic coupling dominance,
one would expect the resonance contribution to the neutron moments
to underestimate the DIS moment in the second resonance region.
This is due to the small couplings to octet spin-1/2 states.
In contrast, according to Refs.~\cite{Close01, Close03, Close09}
the proton moments would overestimate the DIS results in the
second and third $W$ intervals in Eq.~(\ref{eq:Wregions}).
While there was some evidence for such a pattern from the earlier,
model-dependent analysis of inclusive data \cite{Malace10}, there
is no indication from the BONuS results of a suppression in the
second resonance region. The slightly larger overall magnitude
of the neutron moments compared with Ref.~\cite{Malace10} brings
the present results into excellent agreement with the DIS moments
in the second region, with a small enhancement in the third region.
The corresponding enhancement of the proton data in the
third resonance region relative to the ABKM fit \cite{Niculescu00,
Malace09} then results in essentially no deviation of the neutron
to proton ratio here, as illustrated in Fig.~\ref{fig:data_f2nf2p}.
Finally, for the total integrated region between threshold and
$W = 2$~GeV, the empirical $M_2^n/M_2^p$ ratio is slightly above
the DIS result mostly because of the large enhancement of the
data in the $\Delta$ region.
\section{Conclusion}
\label{sec:conclusion}
In this work we have investigated local quark--hadron duality in the
neutron structure function based on data from the BONuS experiment
at Jefferson Lab \cite{Baillie12, Tkachenko14}, which used a novel
experimental technique to create an effective neutron target by
tagging low momentum spectator protons in electron-deuterium
scattering. The spectator tagging technique provides smaller
systematic uncertainties compared with the traditional method of
subtracting smeared hydrogen data from inclusive deuterium
structure functions, which relies on model assumptions for the
nuclear corrections.
We have evaluated the $N=2$, 4 and 6 truncated moments of the neutron
$F_2^n$ structure function for the three standard nucleon resonance
regions and the total integrated resonance region up to $W = 2$~GeV,
over the range $Q^2 = 1.0$ to 4.1~GeV$^2$.
Comparison of the experimental moments with moments computed from
global parametrizations of PDFs fitted to deep-inelastic and other
high energy scattering data, as well as with the corresponding
truncated moments for the proton, reveals a dramatic confirmation
of local duality for the neutron in the second, third and total
resonance regions to better than 10\% for the lowest moment.
The stronger ($\sim 20\%-30\%$) violation of duality in the
$\Delta$ region is consistent with the expectations based on
isospin symmetry for the isovector transition amplitudes and the
behavior of the $F_2^n/F_2^p$ ratio at large $x$ \cite{CJ12, Meln96}.
The confirmation of local duality in the neutron disfavors the model
\cite{Brodsky00} in which duality in the proton arises through
accidental cancellations of quark charges associated with higher
twist, four-quark operators, which would predict strong duality
violations in the neutron. Rather, it suggests a dynamical origin
of duality in which cancellations among nucleon resonances produce
a higher degree of duality over the entire resonance region, with
stronger violations locally \cite{Close01, Close03, Close09}.
On the other hand, detailed comparisons between the empirical
truncated moments and DIS parametrizations in the individual
resonance regions suggest a pattern of duality violation that is
more involved than that predicted by simple spin-flavor symmetric
quark models with magnetic coupling dominance.
Our results also confirm and refine the findings of earlier
model-dependent studies \cite{Malace10} of duality in the neutron
in which the neutron structure was extracted from inclusive proton
and deuteron data assuming a model for the nuclear corrections
and an iterative deconvolution procedure \cite{Kahn08}.
In particular, the BONuS moments are found to lie slightly higher
than the earlier results, especially in the $\Delta$ region,
but with a similar $Q^2$ dependence.
In the future, the spectator tagging technique will be used at
Jefferson Lab with an 11~GeV electron beam to extend the kinematical
coverage of $F_2^n$ measurements to higher values of $x$ and $Q^2$
\cite{BONuS12}. As well as providing more stringent constraints on
the leading twist PDFs in the limit $x \to 1$, the new data will
allow more definitive tests of local quark-hadron duality for the
neutron over a greater range of $Q^2$.
\acknowledgments
This work was supported by the United States Department of Energy (DOE)
Contract No.~DE-AC05-06OR23177, under which Jefferson Science Associates,
LLC operates Jefferson Lab.
The JMU group was supported by the National Science Foundation (NSF)
under Grant No.~PHY--1307196.
S.K., J.A., S.T., and K.G. acknowledge support from the DOE under grants
DE-FG02-96ER40960,
DE-AC02-06CH11357,
DE-FG02-97ER41025, and
DE-FG02-96ER41003, respectively.
\section{Introduction}
Abundances of heavy elements (metallicities) in the interstellar medium (ISM) of galaxies are enriched by stellar nucleosynthesis and trace star formation histories and gas-flow processes that ultimately shape the galaxy population.
In particular, the spatial distribution of metallicity offers a powerful probe on the role of mergers, outflows, gas mixing, and gas accretion in transforming galaxies \citep[e.g.,][]{Edmunds95,Kewley10,Torrey12,Magrini16,Finlator17,Ma17, Bresolin19,Tissera19,Hemler20}.
Spatial distributions of metals are often summarised as radial abundance gradients and azimuthal variations \citep[e.g.,][]{Searle71,Vila-Costas92,Li13,Ho15,Ho19}, with both
negative and flat metallicity gradients widely observed in the Milky Way and other local galaxies \citep[e.g.,][]{Deharveng00, Bresolin04, Berg13, Berg20}.
Spatially resolved studies of galaxies are now far more accessible compared to a decade ago, thanks to the advent of integral-field unit (IFU) spectroscopy.
Multiplexed IFU surveys (e.g. CALIFA \citealt{Sanchez12}; SAMI \citealt{Bryant15}; MaNGA \citealt{Bundy15}) have afforded large samples of gradient measurements in the local Universe. Studies find a dependence on stellar mass: low-mass galaxies ($\sim$10$^9$ $M_\odot$) show almost flat gradients, with gradients becoming increasingly negative toward higher masses \citep{Belfiore17, Poetrodjojo18}.
Spatially resolved measurements become more challenging at high redshift and observations show a substantial amount of scatter \citep{Yuan11, Jones13, Leethochawalit16, Carton18, Wang19b, Curti20b}. One major caveat in using this broad range of observations to develop a coherent model of galaxy evolution is that different measurement techniques are often in disagreement.
A number of observational methods exist for determining the oxygen abundance (metallicity hereafter) of the ISM in galaxies from emission line spectroscopy (see \citealt{MaiolinoMannucci19, Kewley19} for recent reviews).
However, different techniques often show large offsets up to 0.7 dex \citep[e.g.][]{KewleyEllison08,Peimbert17}.
This stark disagreement between different metallicity measurement techniques presents an ongoing challenge for studying chemical evolution of galaxies.
Emission line strengths in the photoionised nebulae around hot O- and B-type stars (H\,{\sc ii}\rm\, regions) are sensitive to electron temperature ($T_e$), in addition to ionic abundances, ionisation parameter, and ISM pressure. Thus, a desirable approach to metallicity measurement is to use ratios of auroral emission lines and corresponding strong nebular emission lines to explicitly determine $T_e$, and subsequently metallicity (Direct Method; e.g., see \citealt{PerezMontero17} for an overview).
This ``direct method'' is traditionally considered the gold standard in abundance determination \citep[e.g.][]{MaiolinoMannucci19}, and underpins the calibration of many alternative techniques \citep[e.g.][]{PettiniPagel04, Curti20a}.
However, one major practical issue with the direct method is that the faintness of the optical auroral lines severely limits its application. An alternative $T_e$-based method outlined by \citet{Jones20} determines oxygen abundance based instead on far-infrared oxygen lines (\OIIIs52\micron\, or \OIIIs88\micron). This is expected to be favourable beyond $z\gtrsim5$ where these far-IR features can be observed with millimeter instruments such as \emph{ALMA}, but is difficult to apply at lower redshifts.
Due to the faintness of auroral lines required for the direct method, strong-line methods are widely adopted in observations. Strong-line methods use ratios of the brightest rest-frame ultra-violet and optical emission lines to empirically determine the metallicity with calibrations based on either direct-method observations \citep[e.g.][]{PettiniPagel04, Pilyugin05, Curti20a} or stellar population synthesis and photoionisation models \citep[e.g.][]{Kewley02, KobulnickyKewley04, Dopita16}. Strong-line methods vastly expand the redshift and mass range of galaxies for which metallicities can be derived.
However, it has been widely observed that metallicities measured with different methods often disagree \citep[e.g.][]{KewleyEllison08, Moustakas10, MoralesLuis14}.
In particular, theoretical methods are reliant on simple geometries, such as spherical or plane-parallel, and assume a constant temperature, density, or pressure.
Despite the baseline role of the direct method, it does have limitations beyond practical detection-rate issues \citep{Nicholls20, Yates20}.
H\,{\sc ii}\rm\, regions are complex structures and summarising their conditions with integrated measurements of emission line ratios carries many assumptions.
For example, H\,{\sc ii}\rm\, regions are known to have internal temperature variations \citep{Peimbert67, Kewley19}. An observed emission line ratio samples the luminosity-weighted average conditions of the emitting nebulae \citep{Nicholls20}.
The direct method is best applied by constructing a multi-zone temperature model using auroral lines from multiple ionic species \citep[e.g.][]{PerezMontero17, Berg20}. Commonly used auroral lines include those from O$^{2+}$, O$^{+}$, N$^{+}$ or S$^{2+}$ ions.\footnote{The Cl$^{2+}$ and Ar$^{3+}$ ions can provide similar temperature probes to complement O$^{2+}$ measurements, however are usually too faint to be detectable.} This allows internal temperature gradients to be sampled since ions with differing ionisation energies preferentially sample different sub-regions of the nebulae.
However, measuring auroral lines from multiple species in observations presents a difficult practical challenge. Even detection of a single auroral line, commonly {[O\,{\sc iii}]\,}$\lambda$4363, is generally considered a favourable outcome.
But since the {[O\,{\sc iii}]\,}$\lambda$4363 line is only produced in the hottest regions of a nebula, a resulting $T_e$-derived metallicity may be a lower limit to the true metallicity if there is a temperature gradient \citep{Kewley19}.
To overcome the lack of direct constraints on the multi-zone temperature structure, abundance measurements are often made adopting empirical relations between temperatures from different ions. For example, the {[O\,{\sc ii}]\,} temperature ($T_e$({[O\,{\sc ii}]})) is indirectly inferred from the {[O\,{\sc iii}]\,} temperature ($T_e$({[O\,{\sc iii}]}); based on {[O\,{\sc iii}]\,}$\lambda$4363) using the $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation (e.g. \citealt{Izotov06, LopezSanchez12, PerezMontero17}).
Recently, \citet{Yates20} show that at low O$^{2+}$/O$^{+}$, this approach can lead to large deficits in the measured O$^{+}$ abundance, causing total oxygen abundances to be underestimated by up to $\sim$0.6 dex.
Studying metallicity in spatially resolved detail exacerbates the practical limitations of the direct method. Indeed, direct method metallicities have been mapped only for the Milky Way and small samples of large nearby spiral galaxies \citep{Deharveng00, Bresolin04, Li13, Berg13, Berg15, Berg20, Croxall15, Croxall16, Ho19}, exploring only a very narrow subset of the galaxy population.
Here we leverage public release IFU data from the SAMI Galaxy Survey to expand spatially resolved direct method metallicity measurements to a new parameter space. From a search of auroral lines in SAMI Data Release 2 data cubes, we identify one particularly strong candidate: SAMI609396. This target is a minor-merger system in which one galaxy (SAMI609396B) is experiencing a burst of star formation. Given its low mass and high SFR, SAMI609396B is analogous to a high-redshift galaxy. We detect prominent, spatially resolvable emission of three auroral lines in SAMI609396B: {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76, {[O\,{\sc iii}]\,}$\lambda$4363 and {[S\,{\sc iii}]\,}$\lambda$6312.
In this contribution, we focus on this notable case to study direct method metallicity and electron temperature in a spatially resolved manner.
The presence of auroral lines from multiple ionic species allows us to investigate the common assumption of using an assumed temperature relation (e.g. $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation) on the spatial distribution of metallicity in galaxies.
Additionally, comparisons to strong-line metallicity trends provide further insight into possible systematic effects in samples of gradients measured in the local and high-redshift Universe.
Given the rarity of spatially resolved $T_e$ studies at low redshift, and the relevance of this object to high-redshift comparisons, it warrants a detailed study of its own.
This work is organised as follows. In Section \ref{sec:samidata} we briefly describe the SAMI DR2 public release data, general properties of the SAMI609396 system, and selection of SAMI609396B. Our methodology for deriving spatially resolved electron temperature measurements is outlined in Section \ref{sec:te}.
In Section \ref{sec:OH_trends} we derive metallicity maps from three different ``direct method'' approaches and four different strong-line methods and discuss the differences in spatial trends favoured by each. We discuss further caveats in Section \ref{sec:discussion} before summarising and presenting conclusions in Section \ref{sec:conclusion}.
Detailed descriptions of the derivation of global properties, spectral fitting and emission line measurements are deferred to the Appendix. We also include a list of SAMI galaxies with visually identifiable auroral line emission in the Appendix.
Throughout this paper we adopt the \citet{Planck16} cosmology: $\Omega_\Lambda = 0.692$, $\Omega_M = 0.308$, $\sigma_8 = 0.815$, and $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$. All magnitudes are quoted in the $AB$ magnitude system \citep{Oke&Gunn83}.
\section{The SAMI Galaxy Survey}
\label{sec:samidata}
We conducted a search for auroral lines in SAMI Galaxy Survey Public Data Release 2\footnote{\url{https://sami-survey.org/abdr}} \citep{Bryant15, Green18, Scott18}.
The SAMI Galaxy Survey \citep{Bryant15} is a large IFU survey targeting low-redshift ($z\lesssim0.1$) galaxies with the Sydney -- Australian Astronomical Observatory Multi-Object Integral Field Spectrograph \citep{Croom12}. Reduced SAMI data cubes are formed by sampling dithered hexabundle observations onto a regular grid (refer to \citealt{Allen15} and \citealt{Sharp15} for details). The SAMI aperture has a radius of approximately 7.5 arcsec, with a sampling of 0$\farcs$5 $\times$ 0$\farcs$5 spaxels. The true spatial resolution is limited by the seeing, recorded as FWHM$_\textit{PSF}$ = 2.07 arcsec ($\sim$790 pc) for SAMI609396. SAMI observes in two spectral bands. The blue arm covers the observed wavelength range from 3750-5750 \AA{} at low spectral resolution (R $\sim$ 1808, $\sigma v \sim 74$ km s$^{-1}$, at 4800 \AA{}), while the red arm covers from 6300-7400 \AA{} at medium resolution (R $\sim$ 4304, $\sigma v \sim 29$ km s$^{-1}$, at 6850 \AA{}) \citep[e.g.,][]{Zhou17}.
For more detailed information on the SAMI survey and data products, the reader is referred to the above references.
Among nine SAMI galaxies in which we visually identified the presence of up to three auroral lines ({[S\,{\sc ii}]}$\lambda\lambda$4069, 76, {[O\,{\sc iii}]}$\lambda$4363 and {[S\,{\sc iii}]}$\lambda$6312), we highlight one notable case, SAMI609396 -- a minor-merger system (Figure~\ref{fig:imaging}).
The remainder of this paper is focused on this object. The list of SAMI galaxies we compiled with identifiable auroral line emission can be found in Appendix~\ref{ap:auroral_list}.
\subsection{SAMI609396}
\label{sub:sami609396}
\begin{figure*}
\includegraphics[width=\textwidth]{figs/fig1_thumbs_sdss_highz.png}
\caption{Left panel: $g$-band imaging of the SAMI609396 merger system from SDSS. Middle panel: $ugi$ RGB composite of the system. Prominent auroral line emission is associated with SAMI609396B, the lower-left object exhibiting strong blue colour in the $ugi$ composite. The white dashed circle in the middle panel shows the field of view observed by the SAMI IFU. The 10\farcs0 scale given for the $g$-band image applies also for the middle panel and corresponds to approximately 3.8 kpc in physical distance.
Right panel: simulated rest-frame $ugi$ colour composite after artificially redshifting the $u$-, $g$- and $i$-band imaging to $z\sim1$. After redshifting, these bandpasses correspond approximately to \emph{HST} filters ACS/F606W, ACS/F814W, and WFC3/F160W. The pixel scale in the simulated image is 0\farcs1, similar to that of \emph{HST}/WFC3. The simulated depth of the image is similar to observations in 3D-HST \citep{Yuan20}.}
\label{fig:imaging}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/samiDR2_figure_2x2.pdf}
\caption{Publicly available value-added data products from SAMI DR2. Panel (a): gas velocity from 1-component fitting. Panel (b): gas velocity dispersion from 1-component fitting. Panel (c): per-spaxel star-formation rate \citep{Medling18}. Panel (d): star-forming mask. The large star-forming dominated region denoted with black `+' symbols in panel (d) is characterised by very high SFR, velocity dispersions of $\sim$30 -- 80 km s$^{-1}$, and relative velocities of $\sim$100 km s$^{-1}$ (on the scale of panel a). This region, designated SAMI609396B, is spatially associated with observed auroral lines and is the target of our investigation. The black dotted circle in panel (a) indicates the point-spread function measured for this SAMI observation and applies to all panels.}
\label{fig:sami_dr2}
\end{figure}
SAMI609396 (SDSS J114212.25+002004.0) is identified as a minor merger in the Sloan Digital Sky Survey (SDSS) images (Figure~\ref{fig:imaging}). The two merging galaxies are not deblended in the SDSS catalog with the merger system having a total $r$-band magnitude of 13.95. The SAMI input catalog gives the heliocentric redshift as $z=0.01824$.
The merger signatures are evident from the colour difference and tidal tails. A visual inspection of the system shows one smaller galaxy exhibiting a strong blue colour, with a larger companion that is significantly redder (see Figure~\ref{fig:imaging} middle panel).
Spatially resolved 1D spectra from the publicly available SAMI datacube show that the smaller galaxy in this system (SAMI609396B) is experiencing a burst of star formation, associated with strong {[O\,{\sc iii}]}$\lambda$5007 emission (equivalent width, EW, $\sim200$ \AA{}).
Several prominent auroral emission lines ({[S\,{\sc ii}]}$\lambda\lambda$4069, 76, {[O\,{\sc iii}]}$\lambda$4363, and {[S\,{\sc iii}]}$\lambda$6312) are detected in SAMI609396B.
Using spatially resolved star formation rate (SFR) maps and photometry from SAMI (Appendix~\ref{ap:global_properties}), we derive SFR and $M_*$ estimates for SAMI609396B of 4.21 $\pm$ 0.30 M$_\odot$ yr$^{-1}$ and log$(M_*/M_\odot)$ = 9.18 $\pm$ 0.05.
These values of SFR and $M_*$ place SAMI609396B 1.3 dex above the local star formation `main-sequence' \citep{RenziniPeng15}.
\subsubsection{SAMI609396B properties in the context of high-redshift galaxies}
A number of galaxy properties have been shown to evolve systematically with redshift including SFR \citep[e.g.][]{Speagle14}, metallicity \citep[e.g.][]{Zahid13, Sanders20b}, ionisation parameter \citep{Sanders16}, and nebular emission line ratios \citep[e.g.][]{Kewley13, Steidel14}.
Given that placing observational constraints on high-redshift galaxies is much more challenging than for local galaxies, there has been interest in obtaining observational constraints for ``high-redshift analogues'' \citep[e.g.][]{Heckman05, Cardamone09, Green14, Bian16}. These are galaxies at low redshift with properties that emulate those observed in high-redshift galaxies.
Given the rarity of auroral emission lines in IFU data, we consider that SAMI609396B is worthy of a detailed study on its own. However, we also consider how its properties compare to those seen in high-redshift galaxies.
As outlined above in \S~\ref{sub:sami609396}, the SFR and $M_*$ measurements for SAMI609396B are more than 1 dex above the local star-forming main sequence, more in line with values typical of galaxies at $z\gtrsim1$.
Global metallicity correlates positively with stellar mass at $z\sim0$ (Mass-Metallicity Relation; refer to \citealt{MaiolinoMannucci19} and references therein), and at fixed stellar mass, metallicity is seen to decrease with increasing redshift \citep{Zahid13, Sanders20b}.
According to a recent multi-diagnostic determination by \citet{Sanders20b}, galaxies of a mass comparable to SAMI609396B (log$(M_*/M_\odot) \approx 9.18$) would have a median metallicity of 12+log$(O/H)=8.55$ at $z\sim0$, 12+log$(O/H)=8.26$ at $z\sim2.3$, and 12+log$(O/H)=8.17$ at $z\sim3.3$.
Absolute metallicity values for individual galaxies are notoriously difficult to determine and depend strongly on the calibration used \citep[e.g.][]{KewleyEllison08}. Although we do not take the step of applying the same metallicity calibration used by \citet{Sanders20b}, according to the metallicities we derive for SAMI609396B in \S~\ref{sec:OH_trends} we expect that the metallicity of SAMI609396B would likely fall somewhere between the median values expected from the $z\sim0$ and $z\sim2.3$ samples.
Ionisation parameters and electron densities in $z\sim2.3$ galaxies have been shown to be systematically offset from local galaxies at fixed stellar mass \citep{Sanders16}.
Electron density is most commonly probed with the {[S\,{\sc ii}]\,}$\lambda$6716/$\lambda$6731 doublet ratio. MOSDEF galaxies at $z\sim2.3$ were found by \citet{Sanders16} to have a median {[S\,{\sc ii}]\,} doublet ratio of 1.13, corresponding to densities of around 290 cm$^{-3}$ in the S$^+$ zone of emitting nebulae, much higher than typical SDSS values ({[S\,{\sc ii}]\,}$\lambda$6716/$\lambda$6731 = 1.41; density of 26 cm$^{-3}$).
We measure a global {[S\,{\sc ii}]\,} ratio of 1.29 for SAMI609396B, corresponding to a density of 118 cm$^{-3}$, placing SAMI609396B between the low- and high-redshift sample medians. Given the scatter about those median values in both the MOSDEF and SDSS samples (Figures 4 \& 5 in \citealt{Sanders16}), it is difficult to draw conclusions about how SAMI609396B compares to the two populations based on density.
Using the $O_{32}$\footnote{$O_{32}$ = {[O\,{\sc iii}]\,} $\lambda\lambda$4959, 5007 / {[O\,{\sc ii}]\,} $\lambda\lambda$3726, 29 in this context} strong-line ratio as a tracer for ionisation parameter, \citet{Sanders16} found that, like SDSS galaxies, $z\sim2.3$ MOSDEF galaxies show a trend of decreasing ionisation parameter with increasing stellar mass. They find the slope of this relation to be very similar to that of SDSS galaxies, albeit with a $\sim$0.6 dex offset toward higher $O_{32}$ at fixed stellar mass in the $z\sim2.3$ sample (Fig.~8 in \citealt{Sanders16}).
Given the stellar mass derived for SAMI609396B (log$(M_*/M_\odot) \approx 9.18$), SDSS galaxies have a median value of $O_{32} = -0.25$, while MOSDEF galaxies with comparable mass have much higher values ($O_{32} = 0.28$). We measure $O_{32} = 0.14$ for SAMI609396B, 0.39 dex higher than the median SDSS value and 0.14 dex below the median of $z\sim2.3$ MOSDEF galaxies.
Recent studies have found an offset in the locus inhabited by high-redshift galaxies on the $N2$-BPT diagram \citep{BPT81} with high-redshift galaxies exhibiting higher {[O\,{\sc iii}]}/H$\beta$ at fixed {[N\,{\sc ii}]}/H$\alpha$\, \citep{Kewley13, Steidel14}. We find the BPT line ratios observed for SAMI609396B to be within the range of local galaxies. Our analysis of the spatially resolved BPT diagram of SAMI609396B is discussed in detail in \S~\ref{sec:discussion}.
To summarise, we find that the physical properties (SFR and ISM conditions) of SAMI609396B tend to be offset from median $z\sim0$ values, although are generally less extreme than $z\sim2$ galaxies.
In combination with the high EW({[O\,{\sc iii}]}), we consider that the physical properties of SAMI609396B might be analogous to intermediate-redshift ($0 < z \lesssim 1$) galaxies.
Low-mass galaxies like SAMI609396B are extremely difficult to resolve at high redshift.
To visually demonstrate what a system like SAMI609396 would look like at a higher redshift, we simulate the angular size and morphology of SAMI609396 at $z\sim1$ using similar techniques to those detailed in \citet{Yuan20}. The redshifted morphology is presented in the right panel of Figure~\ref{fig:imaging}. In order to resolve a low-mass system like SAMI609396B at $z\sim1$ with a physical resolution comparable to that of SAMI, a minimum angular resolution of 0\farcs1 is required. Such a fine resolution can be achieved either through ground-based adaptive optics or space instruments. The faintness of these low-mass systems also means that
next-generation facilities such as JWST/NIRSpec and ground-based ELTs will be needed.
\subsection{SAMI DR2: Value-added data products}
\label{sub:value_added}
SAMI DR2 includes a number of publicly available value-added data products, which we use to guide our initial understanding of the SAMI609396 system. Figure~\ref{fig:sami_dr2} shows publicly available maps for the gas velocity, gas velocity dispersion, and star-formation rate (Panels (a) -- (c)) derived from 1-component fits.
Panel (d) of Fig~\ref{fig:sami_dr2} shows a star-formation mask, determined according to \citet{Kewley06} based on BPT \& VO87 diagnostic diagrams \citep{BPT81, VO87}, with green denoting spaxels passing selection as ``star-formation dominated''. Figure~\ref{fig:sami_dr2} shows that much of the SAMI field-of-view is dominated by emission from non-star-forming sources (yellow spaxels; ``other''). The yellow spaxels have higher velocity dispersion compared with star-forming dominated regions, characteristic of emission from shock-heated gas. The BPT diagram and the origin of emission in these regions are discussed further in \S \ref{sub:bpt}.
The prominent auroral line emission we identify is spatially associated with the large star-formation dominated region in the left-hand (eastern) portion of the star-formation mask. This region has a median rest-frame gas velocity of $v_\textit{gas} \approx 100$ km s$^{-1}$ (refer to scale in Fig~\ref{fig:sami_dr2}), a velocity dispersion in the range $\sigma_\textit{gas} \approx 30 - 80$ km s$^{-1}$, and a high star-formation rate (median SFR surface density $\approx 0.97$ $M_\odot$ yr$^{-1}$ kpc$^{-2}$).
We designate this object as ``SAMI609396B'' and define its selection within the SAMI609396 datacube as including spaxels labelled as star-formation dominated with $v_\textit{gas} > 0$, denoted by black `+' symbols in panel (d) of Fig~\ref{fig:sami_dr2}. Global SFR and stellar mass estimates for SAMI609396B and its companion galaxy are provided in Table~\ref{tab:sami_properties}. Details of how these are derived are provided in Appendix~\ref{ap:global_properties}.
\begin{table}
\caption{Global properties of SAMI609396B and its companion.}
\label{tab:sami_properties}
\begin{tabular}{ll}
\hline
\hline
Right Ascension~~~~~~~~~~~ & 11$^{\rm h}$ 42$^{\rm m}$ 12\fs25 \\
Declination & +00\degr 20\arcmin 04\farcs04 \\
$z$ & 0.01824 \\
\hline
\hline
\multicolumn{2}{l}{SAMI609396B:}\\
\hline
SFR (M$_\odot$ yr$^{-1}$) $^\text{a}$ & 4.21 $\pm$ 0.30 \\
log$(M_*/M_\odot)$ & 9.18 $\pm$ 0.05 \\
\hline
\hline
\multicolumn{2}{l}{Companion:}\\
\hline
SFR (M$_\odot$ yr$^{-1}$) $^\text{a}$ & 0.32 $\pm$ 0.08 \\
log$(M_*/M_\odot)$ & 9.88 $\pm$ 0.07 \\
\hline
\hline
\multicolumn{2}{l}{$^\text{a}$SFR measurement for area within SAMI FoV (see Fig~\ref{fig:imaging}).}\\
\multicolumn{2}{l}{This is best considered as a lower bound.}
\end{tabular}
\end{table}
\section{Spatially Resolved Electron Temperature}
\label{sec:te}
The electron temperature ($T_e$) and electron density ($n_e$) are fundamental physical parameters in understanding the emission line physics of ionized nebulae. Abundance measurements from collisionally excited lines in H\,{\sc ii}\rm\, regions are very sensitive to these parameters. For this reason, chemical abundances derived following explicit measurements of $T_e$ and $n_e$ are generally used as a baseline calibration for understanding the chemistry of ionized nebulae \citep[e.g.][]{MaiolinoMannucci19}.
This is generally achieved with the so-called ``direct method'' via measurement of an auroral emission line and a strong nebular line of the same ionic species. This is most commonly applied to the O$^{2+}$ ion using the {[O\,{\sc iii}]\,}$\lambda$4363/$\lambda$5007 ratio, which is primarily sensitive to $T_e$ (its $n_e$ dependence is minimal over the density range of typical H\,{\sc ii}\rm\, regions). Within the typical rest-frame near-ultraviolet to near-infrared wavelength range observed for galaxies, auroral line ratios may be observable for a number of ionic species including
O$^{+}$, N$^{+}$, S$^{2+}$ and S$^{+}$,
each of which probe different zones within the emitting H\,{\sc ii}\rm\, regions according to the distribution of those ions within the nebular structure. Although we detect auroral lines from three ionic species in SAMI609396B ({[S\,{\sc ii}]}, {[O\,{\sc iii}]\,} and {[S\,{\sc iii}]}), we are able to derive electron temperature for only the {[O\,{\sc iii}]\,} and {[S\,{\sc ii}]\,} ionisation zones as we lack the spectral coverage to measure the {[S\,{\sc iii}]\,}$\lambda$9069 and {[S\,{\sc iii}]\,}$\lambda$9531 strong lines required to derive $T_e$({[S\,{\sc iii}]}).
\subsection{Auroral Emission Line Measurements}
\label{sub:auroral}
We derive flux maps for auroral lines from three ionic species ({[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76, {[O\,{\sc iii}]\,}$\lambda$4363, and {[S\,{\sc iii}]\,}$\lambda$6312) identified in the SAMI609396 data cube, as the SAMI DR2 value-added data products do not include emission line maps for these fainter lines. We concomitantly re-derive strong emission line fluxes, rather than use SAMI DR2 emission line maps, ensuring self-consistency in our line ratio measurements. These flux maps are generated by applying standard methods to each spaxel, first fitting the stellar continuum, and then simultaneously fitting profiles to each emission line included in our analysis. Details of this spectral fitting are provided in Appendix~\ref{ap:fitting}.
We obtain $S/N \sim 3-15$ in individual spaxels for each of {[O\,{\sc iii}]\,}$\lambda$4363, {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76 and {[S\,{\sc iii}]\,}$\lambda$6312 across the majority of the spatial region selected as SAMI609396B.
We identify from visual inspection some degree of blending between {[O\,{\sc iii}]\,}$\lambda$4363 and a neighbouring faint {[Fe\,{\sc ii}]\,} emission line at $\lambda$4360, similar to that observed in other recent studies \citep[e.g.][]{Curti17, Berg20, ArellanoCordova20}.
We find that the {[O\,{\sc iii}]\,}$\lambda$4363 line is brighter than the $\lambda$4360 feature by a factor of $\sim$2 and that with the spectral resolution of the blue arm of the SAMI spectrograph we are able to reliably recover the {[O\,{\sc iii}]\,}$\lambda$4363 flux. Our efforts to test the reliability of our {[O\,{\sc iii}]\,}$\lambda$4363 flux measurements are outlined in detail in Appendix \ref{apsub:FeOIIIblending}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/tOIII_map.pdf}
\caption{\textit{Top}: Electron temperature map derived from the {[O\,{\sc iii}]}$\lambda$4363 / {[O\,{\sc iii}]}$\lambda$5007 ratio (see \S \ref{sub:te_OIII}). \textit{Bottom}: Measured signal-to-noise of the {[O\,{\sc iii}]}$\lambda$4363 auroral line. The red circle depicts the FWHM PSF of this SAMI datacube and applies to both panels. The large-scale spatial variations in $T_e$ do not appear to correlate with {[O\,{\sc iii}]}$\lambda$4363 $S/N$.}
\label{fig:tOIII_map}
\end{figure}
\subsection{[OIII] Electron Temperature}
\label{sub:te_OIII}
The emission line ratio most widely used to determine the electron temperature with the direct method is the {[O\,{\sc iii}]\,}$\lambda$4363 / {[O\,{\sc iii}]\,}$\lambda$5007 ratio. Despite the primary dependence of this {[O\,{\sc iii}]\,} ratio on temperature, the residual density dependence is often accounted for by measurement of a density sensitive line ratio, typically {[S\,{\sc ii}]\,}$\lambda$6716 / {[S\,{\sc ii}]\,}$\lambda$6731. \citet{Izotov06} use relations derived for these aforementioned {[O\,{\sc iii}]\,} and {[S\,{\sc ii}]\,} line ratios (Equations (1) and (2) in that reference) in an iterative manner, solving simultaneously for $T_e$ and $n_e$. This iterative approach is shared by the \texttt{getCrossTemDen} routine in the \texttt{PyNeb} package \citep{PyNeb15}, which allows for a flexible array of temperature- and density-sensitive line ratios.
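As an illustration only, the same cross-iteration can be written by hand
with \texttt{PyNeb}'s atom-level routines; the sketch below uses
placeholder line ratios rather than measurements from SAMI609396:
\begin{verbatim}
import pyneb as pn

# Placeholder line ratios (illustrative values only)
r_O3 = 0.010   # [OIII] 4363/5007, temperature sensitive
r_S2 = 1.29    # [SII] 6716/6731, density sensitive

O3 = pn.Atom('O', 3)
S2 = pn.Atom('S', 2)

ne = 100.0  # initial density guess [cm^-3]
for _ in range(5):
    # temperature at the current density ...
    Te = O3.getTemDen(r_O3, den=ne, wave1=4363, wave2=5007)
    # ... then density at the current temperature
    ne = S2.getTemDen(r_S2, tem=Te, wave1=6716, wave2=6731)

print(Te, ne)  # converged Te [K] and ne [cm^-3]
\end{verbatim}
In practice the packaged \texttt{getCrossTemDen} routine performs an
equivalent iteration internally.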
However, it is important to consider that neither temperature nor density is expected to be constant throughout H\,{\sc ii}\rm\, regions. Additionally, emission from different ionic species may not be co-spatial. Certainly, {[S\,{\sc ii}]\,} emission is expected to arise from the outer regions of nebulae, thus densities measured from the {[S\,{\sc ii}]\,} line ratio do not necessarily provide a good indication of the density of the {[O\,{\sc iii}]\,} emission region (see Figure 2 in \citealt{Kewley19}).
Given these uncertainties, \citet{Nicholls20} instead propose a simplified approach in which $T_e$ is derived from an empirical relation in the auroral line ratio, calibrated on H\,{\sc ii}\rm\, region modelling, forgoing any attempt to account for $n_e$; they suggest that any improvement in temperature insight is outweighed by the uncertainties induced by density variations and the lack of co-spatiality.
Given the $\sim$1 kpc spatial resolution of SAMI, we are unable to resolve individual H\,{\sc ii}\rm\, regions, adding to the uncertainties described above.
Thus, we use this simplified approach to derive our $T_e$ from the {[O\,{\sc iii}]\,}$\lambda$4363 / {[O\,{\sc iii}]\,}$\lambda$5007 ratio according to the relation given in \citet{Nicholls20}. This relation is shown as Equation~\ref{eq:temp_OIII} here:
\begin{equation}
\label{eq:temp_OIII}
\text{log}_{10}(T_e(\text{[OIII]})) = \frac{3.3027 + 9.1917x}{1.0 + 2.092x - 0.1503x^2 - 0.0093x^3}
\end{equation}
where $x = \text{log}_{10}(f_{4363}/f_{5007})$, with $f_X$ referring to a line flux measurement of a collisionally excited line with rest-frame wavelength X \AA{}, and $T_e$ is in units of K.
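As a minimal sketch, Equation~\ref{eq:temp_OIII} transcribes directly
into code (the example ratio below is illustrative only):
\begin{verbatim}
import numpy as np

def te_oiii(f4363, f5007):
    """Electron temperature [K] from the [OIII] relation above."""
    x = np.log10(f4363 / f5007)
    log_te = ((3.3027 + 9.1917 * x)
              / (1.0 + 2.092 * x - 0.1503 * x**2 - 0.0093 * x**3))
    return 10.0 ** log_te

# e.g. a 4363/5007 ratio of 0.01 gives T_e ~ 11,600 K
print(te_oiii(0.01, 1.0))
\end{verbatim}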
The derived {[O\,{\sc iii}]\,} temperature map for spaxels with {[O\,{\sc iii}]\,}$\lambda$4363 of S/N $>3$ is shown in Figure~\ref{fig:tOIII_map}.
\subsection{[SII] Electron Temperature}
\label{sub:te_SII}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/tSII_vs_tOIII.pdf}
\caption{Comparison of $T_e$({[O\,{\sc iii}]}) and $T_e$({[S\,{\sc ii}]}) electron temperature values. \textit{Panel (a)}: map of $T_e$({[O\,{\sc iii}]}) values for spaxels with $S/N > 5$ for {[O\,{\sc iii}]\,}$\lambda$4363. Red circle labelled `PSF' has diameter equal to the FWHM of the SAMI PSF for this observation and applies to panels (a--d). \textit{Panel (b)}: electron density derived from the {[S\,{\sc ii}]\,}$\lambda$6716 / $\lambda$6731 ratio. \textit{Panel (c)}: map of $T_e$({[S\,{\sc ii}]}) values for spaxels with $S/N > 5$ for {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76. \textit{Panel (d)}: map of $T_e$({[S\,{\sc ii}]}) from panel (c) smoothed with a Gaussian filter. \textit{Panel (e)}: Brown points show values of $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) for individual spaxels with $S/N > 5$ on both auroral lines. Error bars shown reflect only measurement uncertainty and do not include associated modelling uncertainties. Temperatures derived for two mock apertures (indicated by blue and red dashed circles in panel a) are shown as the blue and red points in panel (e).}
\label{fig:tSII_v_tOIII}
\end{figure*}
In addition to $T_e$({[O\,{\sc iii}]}), spatially resolved measurements of the {[S\,{\sc ii}]\,} auroral lines allow us to measure $T_e$({[S\,{\sc ii}]}) from the {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76 / {[S\,{\sc ii}]\,}$\lambda\lambda$6716, 31 ratio.
Modelling indicates that in the low-density limit ($1<n_e<50$ cm$^{-3}$), the residual density dependence of the {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76 / {[S\,{\sc ii}]\,}$\lambda\lambda$6716, 31 ratio is minimal. In contrast to the {[O\,{\sc iii}]\,} case, this {[S\,{\sc ii}]\,} temperature diagnostic is co-spatial with the {[S\,{\sc ii}]\,} density diagnostic, meaning that we are able to make a more reliable estimate of the density.
The $n_e$ values for SAMI609396 obtained with the {[S\,{\sc ii}]\,}$\lambda$6716 / {[S\,{\sc ii}]\,}$\lambda$6731 ratio \citep[Eq. 3 in ][]{Proxauf14} are shown in Fig~\ref{fig:tSII_v_tOIII} Panel (b). We find the median electron density to be $\widetilde{n_e} = 92$ cm$^{-3}$. This value is above the {[S\,{\sc ii}]\,}\ low density limit, indicating that {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76 / {[S\,{\sc ii}]\,}$\lambda\lambda$6716, 31 will have a residual density dependence.
Nonetheless, we derive the {[S\,{\sc ii}]\,}\ temperature with a similar approach to that outlined in \S \ref{sub:te_OIII}, using a new rational polynomial fit to modelling data assuming a density of $n_e=100$ cm$^{-3}$. This fit is given in Equation~\ref{eq:temp_SII}, where $x = \text{log}_{10}[(f_{4069}+f_{4076})/(f_{6716}+f_{6731})]$ and $T_e$ is in units of K.
\begin{equation}
\label{eq:temp_SII}
\text{log}_{10}(T_e(\text{[SII]})) = \frac{-0.08891 + 2.06354x + 3.38680x^2 + 0.10754x^3}{0.1 + 0.78000x + 0.94404x^2}
\end{equation}
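A matching transcription of Equation~\ref{eq:temp_SII}, again as an
illustrative sketch only, is:
\begin{verbatim}
import numpy as np

def te_sii(f4069, f4076, f6716, f6731):
    """T_e [K] from the [SII] relation above (fit assumes n_e = 100 cm^-3)."""
    x = np.log10((f4069 + f4076) / (f6716 + f6731))
    log_te = ((-0.08891 + 2.06354 * x + 3.38680 * x**2 + 0.10754 * x**3)
              / (0.1 + 0.78000 * x + 0.94404 * x**2))
    return 10.0 ** log_te

# e.g. an auroral-to-nebular [SII] ratio of 0.063 gives T_e ~ 11,500 K
print(te_sii(0.040, 0.023, 0.55, 0.45))
\end{verbatim}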
$T_e$({[S\,{\sc ii}]}) values obtained for SAMI609396 are compared with $T_e$({[O\,{\sc iii}]}) values in Figure~\ref{fig:tSII_v_tOIII}. Panels (a) and (c) show maps of $T_e$({[O\,{\sc iii}]}) and $T_e$({[S\,{\sc ii}]}) respectively for spaxels where the relevant auroral line is detected with $S/N>5$. Panel (e) shows the direct comparison of $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) values on a spaxel-by-spaxel basis.
We observe that a majority of points in panel (e) of Fig~\ref{fig:tSII_v_tOIII} lie below the line of $T_e$({[S\,{\sc ii}]}) = $T_e$({[O\,{\sc iii}]}) (i.e. higher $T_e$({[O\,{\sc iii}]}) than $T_e$({[S\,{\sc ii}]})).
The large blue and red points in Fig~\ref{fig:tSII_v_tOIII} Panel (e) show derived $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) electron temperatures for two mock apertures which correspond to the regions shown as blue and red dashed circles in Panel (a).
These aperture temperatures appear to indicate that $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) do not exhibit strong positive correlation across different spatial regions of SAMI609396B. The implications of this temperature relation for metallicity measurement are discussed further in \S \ref{sec:OH_trends}.
\section{Spatial Trends In Metallicity}
\label{sec:OH_trends}
In Section~\ref{sec:te} we derived spatially resolved electron temperature ($T_e$) measurements.
Here we use these $T_e$ measurements to determine direct method oxygen abundances under three different sets of assumptions, showing that derived spatial variations in metallicity can be very sensitive to the assumed internal H\,{\sc ii}\rm\, region temperature structure.
Additionally, we derive spatially resolved strong-line metallicities and discuss differences in observed spatial trends.
\subsection{Direct Method Metallicity}
\label{sub:direct_logOH}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/direct_method_gradients.pdf}
\caption{
Observed spatial trends in direct method metallicity depend strongly on temperature structure assumptions.
Direct method metallicity maps (panels a-c) and spatial metallicity trends (d-f) are shown for SAMI609396B under three different $T_e$({[O\,{\sc ii}]}) temperature assumptions.
Panels (a, d) show $Z_\text{Te; LS12}$, for which $T_e$({[O\,{\sc ii}]}) is derived from $T_e$({[O\,{\sc iii}]}) via the relation of \citet{LopezSanchez12} (Eq~\ref{eq:tOII_tOIII}).
Panels (b, e) show $Z_\text{Te; Y20}$, derived as for $Z_\text{Te; LS12}$ with the additional step of applying the empirical correction of \citetalias{Yates20} based on $O32$.
Panels (c, f) show $Z_\text{Te; SII}$, for which metallicity is derived assuming $T_e$({[O\,{\sc ii}]}) = $T_e$({[S\,{\sc ii}]}).
See \S \ref{sub:direct_logOH} for details.
Maps in panels (a-b) include spaxels with $S/N_{\lambda4363} \geq 3$, while panel (c) additionally excludes spaxels with $S/N_{\lambda4069} < 3$. The red circle in panel (c) shows the FWHM of the SAMI PSF and applies to panels (a-c). The dashed red rectangles in panels (a-c) span the region of highest S/N for the {[O\,{\sc iii}]}$\lambda$4363 line and defines the spatial region examined in panels (d-f).
Panels (d-f) show individual points for which $S/N_{\lambda4363} > 5$. Trend lines indicate running medians of the points shown. Vertical error bars on individual points reflect only measurement uncertainties and are dominated by auroral line measurements.
}
\label{fig:direct_metal_map}
\end{figure*}
Since the abundance of neutral oxygen (O$^0$) and oxygen in ionization states higher than O$^{2+}$ is expected to be negligible in H\,{\sc ii}\rm\, regions, we assume that the total oxygen abundance can be approximated as Equation~\ref{eq:oxygen_abundance}:
\begin{equation}
\label{eq:oxygen_abundance}
\frac{O}{H} = \frac{O^+}{H^+} + \frac{O^{2+}}{H^+} .
\end{equation}
\smallskip
We derive abundances of these two ionisation states of oxygen using the following analytic relations set out in \citet{PerezMontero17}:
\begin{multline}
\label{eq:PM17_O3_abundance}
12+\text{log}\left(\frac{O^{2+}}{H^+}\right) = \text{log}\left(\frac{f_{4959}+f_{5007}}{f_{H\beta}}\right) + 6.1868 \\ + \frac{1.2491}{t(O^{2+})} - 0.5816\cdot\text{log}\Big(t(O^{2+})\Big)
\end{multline}
\begin{multline}
\label{eq:PM17_O2_abundance}
12+\text{log}\left(\frac{O^{+}}{H^+}\right) = \text{log}\left(\frac{f_{3726}+f_{3729}}{f_{H\beta}}\right) + 5.887 + \frac{1.641}{t(O^{+})} \\ - 0.543\cdot\text{log}\Big(t(O^{+})\Big) + 0.000114\cdot n_e
\end{multline}
where $t(O^{2+}) = T_e$({[O\,{\sc iii}]})$/10^4$ K, $t(O^{+}) = T_e$({[O\,{\sc ii}]})$/10^4$ K, $n_e$ is the electron density measured by the {[S\,{\sc ii}]\,} $\lambda$6716 / $\lambda$6731 ratio, and $f_X$ refers to a line flux measurement of the H$\beta$ Balmer line or a collisionally excited line with rest-frame wavelength X \AA{}. Deriving O$^{2+}$/H$^+$ in this way requires only {[O\,{\sc iii}]\,}$\lambda\lambda$4959, 5007 and H$\beta$ emission line fluxes in addition to the $T_e$({[O\,{\sc iii}]}) values derived in \S~\ref{sub:te_OIII}. On the other hand, the O$^+$/H$^+$ abundance from Eq.~\ref{eq:PM17_O2_abundance} calls for $T_e$({[O\,{\sc ii}]}), which we do not directly measure. Additionally, O$^+$/H$^+$ has residual dependence on $n_e$, although we simply adopt the same fixed density $n_e = 100$ cm$^{-3}$ used in the temperature calculations in \S \ref{sub:te_OIII}. Note that our derived metallicity values vary by less than 0.01 dex with changes in adopted density, provided $n_e < 200$ cm$^{-3}$.
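Collected into code, Equations~\ref{eq:oxygen_abundance}--\ref{eq:PM17_O2_abundance} can be sketched as follows; the flux arguments are placeholders for measured values, the temperatures are in units of $10^4$~K, and $t(O^{+})$ must be supplied externally (e.g. via an empirical temperature relation, as discussed below):
\begin{verbatim}
import numpy as np

def log_oh(f4959, f5007, f3726, f3729, f_hb, t_o3, t_o2, ne=100.0):
    # 12 + log(O2+/H+) from the first relation above
    o3 = (np.log10((f4959 + f5007) / f_hb) + 6.1868
          + 1.2491 / t_o3 - 0.5816 * np.log10(t_o3))
    # 12 + log(O+/H+) from the second relation, with a weak n_e term
    o2 = (np.log10((f3726 + f3729) / f_hb) + 5.887
          + 1.641 / t_o2 - 0.543 * np.log10(t_o2) + 0.000114 * ne)
    # total O/H: sum of the two ionic abundances
    return 12.0 + np.log10(10.0 ** (o3 - 12.0) + 10.0 ** (o2 - 12.0))
\end{verbatim}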
Unlike $T_e$({[O\,{\sc iii}]}), we do not directly measure $T_e$({[O\,{\sc ii}]}), since we are unable to detect either the {[O\,{\sc ii}]\,}$\lambda\lambda$7319, 30 or {[O\,{\sc ii}]\,}$\lambda\lambda$2470+ doublets.
A favourable alternative is to use temperatures derived from other ionic species, especially {[N\,{\sc ii}]\,} or {[S\,{\sc iii}]\,}, to probe the temperature structure \citep[e.g.][]{Berg20}.
However, given the faintness of auroral lines it is common that an observation may enable measurement of only the {[O\,{\sc iii}]\,} temperature zone.
In this scenario, a $T_e$({[O\,{\sc ii}]}) estimate can be obtained by adopting an empirical $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation, for which a number of calibrations exist \citep[e.g.][]{Izotov06, LopezSanchez12}.
Despite expanding the number of observations for which direct metallicities can be derived, \citet{Yates20} (\citetalias{Yates20} hereafter) find that using $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relations can underestimate the direct metallicity by more than 0.5 dex for low-ionisation systems, highlighting the importance of constraining the internal temperature structure of H\,{\sc ii}\rm\, regions where possible. Additionally, \citetalias{Yates20} provide an empirical correction for this effect based on the {[O\,{\sc iii}]}/{[O\,{\sc ii}]\,} strong line ratio.
For this analysis, we determine our total oxygen abundance maps in three ways. Each differs in its approach to handling the O$^{+}$/H$^+$ abundance, while in all three cases the O$^{2+}$/H$^+$ abundance is determined from Eq~\ref{eq:PM17_O3_abundance} and our direct measurement of $T_e$({[O\,{\sc iii}]}). For the remainder of this paper, metallicities derived in these three ways will be abbreviated as $Z_\text{Te; LS12}$, $Z_\text{Te; Y20}$ and $Z_\text{Te; SII}$ (where $Z=12+\text{log}(O/H)$), described as follows:
\begin{enumerate}
\item $Z_\text{Te; LS12}$: O$^{+}$/H$^+$ is determined using $T_e$({[O\,{\sc ii}]}) derived from $T_e$({[O\,{\sc iii}]}) using the relation outlined in \citet{LopezSanchez12} (Eq~\ref{eq:tOII_tOIII}).\footnote{We note that alternative $T_e$([OII]) -- $T_e$([OIII]) relations, including the equations from \citet{Izotov06}, do not significantly affect the metallicity morphology obtained for SAMI609396B.} This is the most commonly adopted method.
\smallskip
\item $Z_\text{Te; Y20}$: As for $Z_\text{Te; LS12}$, with the subsequent application of the \citetalias{Yates20} empirical correction, based on {[O\,{\sc iii}]}/{[O\,{\sc ii}]\,} strong-line ratio (Eq~\ref{eq:y20_correction}). This is a relatively new correction and has not been widely implemented in literature yet.
\smallskip
\item $Z_\text{Te; SII}$: O$^{+}$/H$^+$ is determined with $T_e$({[O\,{\sc ii}]}) derived instead from $T_e$({[S\,{\sc ii}]}) using the assumption $T_e$({[O\,{\sc ii}]}) = $T_e$({[S\,{\sc ii}]}). This is uniquely enabled by the detection of {[S\,{\sc ii}]}\ auroral lines in this study.
\end{enumerate}
\subsubsection{Empirical $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation}
For $Z_\text{Te; LS12}$ we adopt the $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation as calibrated by \citet{LopezSanchez12}, given in Equation \ref{eq:tOII_tOIII}:
\begin{equation}
\label{eq:tOII_tOIII}
T_e\text{{[O\,{\sc ii}]}} = T_e\text{{[O\,{\sc iii}]}} + 450 - 70 \cdot \text{exp}\big[(T_e\text{{[O\,{\sc iii}]}}/5000)^{1.22}\big]
\end{equation}
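In code, this relation is a one-line mapping that can supply the $t(O^{+})$ input of the abundance sketch above:
\begin{verbatim}
import numpy as np

def te_oii_ls12(te_oiii):
    """T_e([OII]) [K] from T_e([OIII]) [K], using the relation above."""
    return te_oiii + 450.0 - 70.0 * np.exp((te_oiii / 5000.0) ** 1.22)

# e.g. T_e([OIII]) = 12,000 K maps to T_e([OII]) ~ 11,200 K
print(te_oii_ls12(12000.0))
\end{verbatim}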
Deriving $T_e$({[O\,{\sc ii}]}) in this way and applying Equations~\ref{eq:PM17_O3_abundance}--\ref{eq:PM17_O2_abundance} we obtain the total oxygen abundance map shown in panel (a) of Fig~\ref{fig:direct_metal_map}. The spatial structure of this map reflects that of the temperature map derived in Fig~\ref{fig:tOIII_map} and favours a strong trend in metallicity across the region of the highest signal-to-noise (Fig~\ref{fig:direct_metal_map}, panel d).
The measurement uncertainty is dominated by the flux uncertainty of the {[O\,{\sc iii}]\,}$\lambda$4363 emission line to the point where the measurement uncertainty contribution from the high $S/N$ {[O\,{\sc iii}]}, {[O\,{\sc ii}]\,} and H$\beta$ strong lines can be ignored. We see no obvious correlation between the $S/N$ of {[O\,{\sc iii}]\,}$\lambda$4363 and $T_e$({[O\,{\sc iii}]}) (Fig~\ref{fig:tOIII_map}). Increasing the minimum $S/N$ cut on the {[O\,{\sc iii}]\,}$\lambda$4363 auroral line from $S/N>3$ to $S/N>8$ changes the median metallicity by less than 0.005 dex. Together, these give us confidence that observed spatial variations in metallicity are not artifacts from measurement noise, although the effects of modelling uncertainty are discussed over the coming sections.
\subsubsection{Empirical O$^{+}$ Abundance Correction}
\label{sub:y20_correction}
\citet{Yates20} provide an empirical correction based on the observed {[O\,{\sc iii}]}/{[O\,{\sc ii}]\,} line ratio given by Equation \ref{eq:y20_correction},
\begin{equation}
\label{eq:y20_correction}
Z_\text{Te; Y20} = Z_\text{Te; LS12} - 0.71 \cdot (O32 - 0.29)
\end{equation}
where $Z_\text{Te; Y20}$ and $Z_\text{Te; LS12}$ are corrected and uncorrected values of 12+log(O/H) respectively; $O32$ = log({[O\,{\sc iii}]\,}$\lambda\lambda$4959, 5007 / {[O\,{\sc ii}]\,}$\lambda\lambda$3726, 29) and the correction is applied only when $O32 \leq 0.29$.
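The correction itself is trivial to apply; a minimal sketch is:
\begin{verbatim}
def z_y20(z_ls12, o32):
    """Apply the Y20 correction where O32 <= 0.29."""
    if o32 <= 0.29:
        return z_ls12 - 0.71 * (o32 - 0.29)
    return z_ls12
\end{verbatim}
Since the $(O32 - 0.29)$ term is negative whenever the correction applies, the corrected metallicity is always revised upward, consistent with the underestimation described above.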
Values of $O32$ across SAMI609396B fall in the range for which this correction will be non-zero.
Our direct metallicity map after \citetalias{Yates20} correction is shown in Fig~\ref{fig:direct_metal_map} Panel \textit{(b)}.
Spatial variations in the $O32$ ratio result in a flattening of the spatial trend after application of this correction.
We note that, in addition to the empirical correction described here (``\citetalias{Yates20} correction''), \citet{Yates20} also outlined a novel method for determining semi-direct metallicities (``\citetalias{Yates20} method'') in which $T_e$({[O\,{\sc ii}]}) and metallicity are solved for simultaneously, rather than sequentially. This \citetalias{Yates20} method then also requires subsequent application of the \citetalias{Yates20} correction if $O32 \leq 0.29$, as above. Note that Figure 6 in \citet{Yates20} shows that the abundance deficit at low $O^{++}/O^+$, which the \citetalias{Yates20} correction adjusts for, is present to varying degrees for all $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relations considered in that work.
We find the \citetalias{Yates20} method gives a two-valued solution for SAMI609396B which may require an additional prior to select the best metallicity solution.
We found that applying the \citetalias{Yates20} method as originally outlined favoured the lower of the two solutions, which yielded a gradient comparable to that obtained from our $Z_\text{Te; Y20}$ approach here, albeit with a normalisation $\sim$0.3 dex lower.
We found that the normalisation of the upper-branch solution was in better agreement with our other determinations outlined here, however the spatial trend arising from this upper-branch solution is more difficult to interpret.
Discussion of our implementation of the \citet{Yates20} method and its two-valued nature is deferred to Appendix~\ref{ap:yates_method}.
\subsubsection{O$^{+}$ abundance with $T_e$({[S\,{\sc ii}]})}
\label{sub:ZSII}
The {[S\,{\sc ii}]\,} temperature samples a relatively narrow zone from the outer regions of nebulae and is consequently not widely used to constrain the temperature profile of emitting H\,{\sc ii}\rm\, regions. However, \citet{Croxall16} found general agreement of $T_e$({[S\,{\sc ii}]}) with $T_e$({[O\,{\sc ii}]}) and $T_e$({[N\,{\sc ii}]}) in H\,{\sc ii}\rm\, regions in NGC 5457. In the absence of the {[S\,{\sc iii}]\,} strong-lines, the {[N\,{\sc ii}]\,} auroral lines, or any other temperature probes, $T_e$({[S\,{\sc ii}]}) affords our only direct probe of the internal temperature structure of H\,{\sc ii}\rm\, regions in SAMI609396B.
We make the simplified assumption that $T_e$({[O\,{\sc ii}]}) = $T_e$({[S\,{\sc ii}]}) and update our total oxygen abundance using the measured $T_e$({[S\,{\sc ii}]}) map (Fig~\ref{fig:tSII_v_tOIII} panel c) to re-derive our O$^+$/H$^+$ values. These updated oxygen abundances are shown in Fig~\ref{fig:direct_metal_map} Panel (c), spanning a slightly smaller spatial extent due to the additional requirement of {[S\,{\sc ii}]\,} auroral line signal-to-noise. The spatial trend shown in Fig~\ref{fig:direct_metal_map} Panel (f) is seen to be opposite to that in Panel (d), where O$^+$/H$^+$ was derived using an empirical temperature relation, albeit with a larger scatter.
This stark reversal can be explained by the $T_e$({[S\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) trends observed in Figure~\ref{fig:tSII_v_tOIII}.
Deriving $T_e$({[O\,{\sc ii}]}) from a relation with $T_e$({[O\,{\sc iii}]}) assumes that such a relation is fixed across the spatial region covered. This would mean that regions with elevated $T_e$({[O\,{\sc iii}]}) would also show increased $T_e$({[O\,{\sc ii}]}).
However, the apertures plotted in panel (e) of Fig~\ref{fig:tSII_v_tOIII} (blue and red bold points) show that despite the increase in $T_e$({[O\,{\sc iii}]}) from the `blue' aperture to the `red' aperture, measured $T_e$({[S\,{\sc ii}]}) instead decreases (albeit with large uncertainties). This suggests the absence of a strong positive correlation between these temperatures across the spatial region and highlights the limitations of applying empirical temperature relations to measure spatial metallicity trends.
This is likely driven by variations in the ionisation structure (i.e. $O^{2+}/O^+$ abundance ratio) and also explains the observed variations in $O32$ ratio that lead to the flattening of the spatial trend observed after applying the \citetalias{Yates20} correction.
We discuss this further in \S~\ref{sub:gradient}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/strong_line_maps.pdf}
\caption{Direct method and strong-line oxygen abundance maps for the star-formation selected region corresponding to SAMI609396B. \textit{Panel (a)}: Direct method metallicity using $T_e$ values derived from {[O\,{\sc iii}]\,}$\lambda$4363 / $\lambda$5007 ratio (see \S \ref{sub:direct_logOH}).
\textit{Panel (b)}: Direct method metallicity after applying the empirical correction of \citet{Yates20} (see \S \ref{sub:y20_correction}).
\textit{Panel (c)}: metallicity and ionisation parameter solved for iteratively, using $N2O2$ for metallicity and $O32$ for ionisation parameter with calibrations from \citet{Kewley19}. \textit{Panel (d)}: metallicity from the $R_{23}$ strong-line diagnostic using the calibration from \citet{Curti20a}. \textit{Panel (e)}: metallicity derived from $O3N2$ using the calibration from \citet{Marino13}. \textit{Panel (f)}: metallicity derived from the $N2S2H\alpha$ diagnostic as outlined in \citet{Dopita16}. The peak $i$-band flux from SDSS imaging is marked in each panel with a white pentagon. The FWHM of the spatial PSF is shown by the red circle in panel (f). The slit shown in panel (f) spans the region of highest S/N for the {[O\,{\sc iii}]\,}$\lambda$4363 line and is examined in detail in Fig~\ref{fig:slit_gradient}.}
\label{fig:metallicity_maps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/logOH_slit.pdf}
\caption{Spatial trend in metallicity along a mock slit for seven different strong-line and direct method metallicity measurement techniques.
Panel (c) shows individual spaxels and running median trends measured with $N2O2$ (red), $O3N2$ (black), $R_{23}$ (blue) and $N2S2H\alpha$ (magenta). Colour coding is as indicated in the legend in panel (a). More details on these strong-line metallicities can be found in \S~\ref{sub:SL_logOH}.
Panel (b) reproduces trend lines for three different direct method assumptions from Fig~\ref{fig:direct_metal_map} for ease of comparison.
Panel (a) renormalises each of these seven trend lines to show metallicity deviation.
The horizontal axis is zeroed at the adopted core of SAMI609396B, taken as the location of the peak in $i$-band flux from SDSS imaging.
Vertical error bars show the measurement uncertainty carrying through from emission line measurements. Horizontal error bars indicate the FWHM of the spatial PSF of the SAMI observation in terms of physical distance.
}
\label{fig:slit_gradient}
\end{figure*}
\subsection{Strong-Line Metallicity}
\label{sub:SL_logOH}
In Figure~\ref{fig:metallicity_maps} we compare four different strong-line metallicity maps with the $Z_\text{Te; LS12}$ and $Z_\text{Te; Y20}$ direct method metallicity maps derived in \S \ref{sub:direct_logOH}. Strong-line metallicities are derived using a selection of widely used strong-line diagnostics, defined in Equations \ref{eq:line_ratio_N2O2} -- \ref{eq:line_ratio_D16}:
\begin{gather}
\label{eq:line_ratio_N2O2}
N2O2 = \text{log}_{10}\left(\text{{[N\,{\sc ii}]}/{[O\,{\sc ii}]}}\right)\\
\label{eq:line_ratio_O32}
O32 = \text{log}_{10}\left(\text{{[O\,{\sc iii}]}/{[O\,{\sc ii}]}}\right)\\
\label{eq:line_ratio_R23}
R_{23} = \text{log}_{10}\left(\frac{\text{{[O\,{\sc iii}]\,}}\lambda4959 + \text{{[O\,{\sc iii}]\,}}\lambda5007 + \text{{[O\,{\sc ii}]}}}{H\beta}\right)\\
\label{eq:line_ratio_N2}
N2 = \text{log}_{10}\left(\text{{[N\,{\sc ii}]}/H}\alpha\right)\\
\label{eq:line_ratio_O3N2}
O3N2 = \text{log}_{10}\left(\text{{[O\,{\sc iii}]}/H}\beta\right) - N2\\
\label{eq:line_ratio_D16}
N2S2H\alpha = \text{log}_{10}\left(\text{{[N\,{\sc ii}]}/{[S\,{\sc ii}]}}\right) - 0.264 \cdot N2
\end{gather}
\smallskip
where {[N\,{\sc ii}]\,} = {[N\,{\sc ii}]\,}$\lambda$6583, {[O\,{\sc ii}]\,} = ({[O\,{\sc ii}]\,}$\lambda$3726 + {[O\,{\sc ii}]\,}$\lambda$3729), {[S\,{\sc ii}]\,} = ({[S\,{\sc ii}]\,}$\lambda$6716 + {[S\,{\sc ii}]\,}$\lambda$6731), and {[O\,{\sc iii}]\,} = {[O\,{\sc iii}]\,}$\lambda$5007 unless otherwise specified.
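For concreteness, these definitions translate directly into operations on extinction-corrected flux maps. The following is a minimal Python sketch (not our reduction pipeline; the array names are hypothetical placeholders for per-spaxel flux maps):
\begin{verbatim}
import numpy as np

def diagnostics(NII, OII, OIII, OIII_4959, SII, Halpha, Hbeta):
    """Strong-line ratios of Eqs (N2O2)-(D16). Inputs are
    extinction-corrected flux maps (2D arrays), with
    OII = [OII]3726 + [OII]3729, SII = [SII]6716 + [SII]6731,
    OIII = [OIII]5007 and NII = [NII]6583."""
    N2O2 = np.log10(NII / OII)
    O32 = np.log10(OIII / OII)
    R23 = np.log10((OIII_4959 + OIII + OII) / Hbeta)
    N2 = np.log10(NII / Halpha)
    O3N2 = np.log10(OIII / Hbeta) - N2
    N2S2Ha = np.log10(NII / SII) - 0.264 * N2
    return {"N2O2": N2O2, "O32": O32, "R23": R23,
            "N2": N2, "O3N2": O3N2, "N2S2Ha": N2S2Ha}
\end{verbatim}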
We use strong-line calibrations based on a mixture of theoretical and observational calibrations, outlined as follows:
\begin{itemize}
\item \textbf{N2O2}: We use the theoretical calibration provided in \citet{Kewley19} to solve iteratively for metallicity and ionisation parameter using the $N2O2$ (Eq.~\ref{eq:line_ratio_N2O2}) and $O32$ (Eq.~\ref{eq:line_ratio_O32}) diagnostic line ratios (a schematic sketch of this iteration is given after this list).
\item \textbf{R}$_{23}$: We use the calibration provided by \citet{Curti20a} based on direct method measurements of stacked SDSS galaxies. The $R_{23}$ ratio (Eq.~\ref{eq:line_ratio_R23}) is two-valued with a turnover at around 12+log$(O/H)=8.1$. Using $N2$ (Eq.~\ref{eq:line_ratio_N2}) to distinguish between high- and low-metallicity branches, we find $N2>-1.0$ across the extent of SAMI609396B, prompting us to consider only the high-metallicity branch.
\item \textbf{O3N2}: Calibration based on a large compilation of $T_e$ measurements in H\,{\sc ii}\rm\, regions from \citet{Marino13}.
\item \textbf{N2S2H$\alpha$}: This diagnostic was proposed by \citet{Dopita16} based on predictions from photoionisation modelling. We adopt the calibration presented therein.
\end{itemize}
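The iterative $N2O2$--$O32$ solution referred to above has the structure of a simple fixed-point loop. In the sketch below the two calibration functions are illustrative placeholders only, \emph{not} the published \citet{Kewley19} fits, which are polynomial surfaces in both variables and would be substituted in practice:
\begin{verbatim}
def Z_from_N2O2(n2o2, logq):
    # Placeholder calibration surface (illustrative only).
    return 9.0 + 0.5 * n2o2 - 0.05 * (logq - 7.5)

def q_from_O32(o32, Z):
    # Placeholder calibration surface (illustrative only).
    return 7.5 + 0.8 * o32 + 0.1 * (Z - 8.7)

def solve_Z_q(n2o2, o32, Z=8.7, logq=7.5, tol=1e-4, max_iter=50):
    """Alternate between the two diagnostics until metallicity
    12+log(O/H) and log ionisation parameter q reach a joint
    fixed point."""
    for _ in range(max_iter):
        Z_new = Z_from_N2O2(n2o2, logq)   # metallicity at current q
        q_new = q_from_O32(o32, Z_new)    # ionisation parameter at new Z
        if abs(Z_new - Z) < tol and abs(q_new - logq) < tol:
            return Z_new, q_new
        Z, logq = Z_new, q_new
    return Z, logq
\end{verbatim}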
The colour maps in Fig~\ref{fig:metallicity_maps} are shown with different normalisations so as to visualise any spatial trends in metallicity in each diagnostic, setting aside the expected discrepancies in normalisation between alternative diagnostics
\citep[e.g.][]{KewleyEllison08}. Indeed, even after applying the \citetalias{Yates20} correction, the median direct method metallicity ($\widetilde{Z}_{\text{Te; Y20}}=8.40$) is still nearly 0.3 dex lower than that of the theoretically calibrated $N2O2$ diagnostic ($\widetilde{Z}_{\text{N2O2}}=8.68$). This difference is consistent with previous work, which has shown a systematic offset between metallicities derived from $N2O2$ using theoretical and empirical calibrations \citep{Bresolin09, Bresolin15}.
\subsection{Is The Metallicity Gradient Positive Or Negative?}
\label{sub:gradient}
While it is widely known that different metallicity measurement techniques often disagree in normalisation, one would hope that, at a minimum, any two methods would agree on the rank order of the metallicities they measure.
It is immediately striking from Fig~\ref{fig:metallicity_maps} that even qualitative spatial trends in metallicity are very sensitive to the adopted diagnostic.
Figure~\ref{fig:slit_gradient} illustrates these spatial trends as a 1D projection.
Given the disturbed morphology of SAMI609396B, we do not formally define a metallicity gradient, but instead examine 1D spatial trends along the mock slit shown in Fig~\ref{fig:direct_metal_map} panels (a-c) and Fig~\ref{fig:metallicity_maps} panel (f). This slit encompasses the region of highest emission line signal-to-noise and approximately corresponds to the region of highest $g$-band flux (Fig~\ref{fig:imaging}).
Panel (a) of Fig~\ref{fig:slit_gradient} shows the running medians in metallicity with projected distance along this mock slit for all four strong-line methods described in \S~\ref{sub:SL_logOH} as well as the three different direct method assumptions outlined in \S~\ref{sub:direct_logOH}. The distance axis has been zeroed at the location of peak $i$-band flux from SDSS imaging which we adopt as the core of SAMI609396B. Each trend line has been renormalised relative to the metallicity at $r=-1.3$ kpc. We renormalise at this projected distance rather than the core as the three direct method approaches show best agreement in this spatial region (Fig~\ref{fig:slit_gradient} Panel b). In particular, the \citetalias{Yates20} empirical corrections are smallest in this region.
Most striking in Fig~\ref{fig:slit_gradient} Panel (a) is the clear discrepancy between the $Z_\text{Te; LS12}$ direct method and all other methods.
The $Z_\text{Te; LS12}$ method favours a strong trend of decreasing metallicity left-to-right, from negative projected distance toward the core. Strong-line methods show the opposite trend, with metallicity increasing in the same direction, albeit with less overall deviation from uniformity.
As outlined in \S~\ref{sub:direct_logOH}, we find that the $Z_\text{Te; Y20}$ and $Z_\text{Te; SII}$ direct methods both show a much flatter metallicity trend than the $Z_\text{Te; LS12}$ method, and are in better agreement with strong-line methods.
Given that strong-line methods have their own unsettled systematic uncertainties (Section~\ref{sub:finer_trends}), we do not assess the absolute correctness of `gradients' derived from each method. Instead, we discuss below the physical reason for why the gradient from the $Z_\text{Te; LS12}$ method is at odds with $Z_\text{Te; Y20}$ and $Z_\text{Te; SII}$ and the strong line methods.
\subsection{O$^{2+}$/O$^{+}$ abundance ratio variation}
\label{sub:o32_variation}
We attribute the cause of the discrepancy between $Z_\text{Te; LS12}$ and other methods to variations in the O$^{2+}$/O$^{+}$ abundance ratio, causing deviations from the fixed $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation adopted by $Z_\text{Te; LS12}$.
Figure~\ref{fig:o32_abundance_ratio} shows separate O$^+$/H$^+$ and O$^{2+}$/H$^+$ abundance maps, derived using $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) respectively, with panel (a) showing elevated O$^+$/H$^+$ in the core region (lower-right; corresponding to Projected Distance $\approx0$ kpc on the horizontal scale of Fig~\ref{fig:slit_gradient}).
A bulk change in the ionisation structure of H\,{\sc ii}\rm\, regions across SAMI609396B such as this would cause measured temperatures to deviate from the $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation from \citet{LopezSanchez12} (Eq~\ref{eq:tOII_tOIII}).\footnote{Or, indeed, any fixed monotonic relation assumed between $T_e$({[O\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}).}
In \S~\ref{sub:te_SII} we noted that $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) derived for two mock apertures indicated the absence of a strong positive correlation between $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) (Fig~\ref{fig:tSII_v_tOIII} Panel e). In particular, the lower $T_e$({[S\,{\sc ii}]}) values obtained in the core region lead to systematically higher O$^{+}$ abundance measurements in $Z_\text{Te; SII}$ than in $Z_\text{Te; LS12}$, driving the apparent reversal in the measured total oxygen abundance gradient.
Recently, \citet{Yates20} observed that for log(O$^{2+}$/O$^{+}) \lesssim 0.0$, ``semi-direct'' metallicities (that is, metallicities in which $T_e$({[O\,{\sc iii}]}) has been directly measured, but $T_e$({[O\,{\sc ii}]}) has been indirectly determined using an assumed $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation) underestimated the total metallicity by up to $\sim$0.5 dex compared with metallicities derived using direct measurements of both $T_e$({[O\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}).
This effect also correlates with the {[O\,{\sc iii}]}/{[O\,{\sc ii}]\,} strong-line ratio, motivating the \citetalias{Yates20} correction for observations with log({[O\,{\sc iii}]\,}$\lambda\lambda$4959, 5007 / {[O\,{\sc ii}]\,}$\lambda\lambda3726, 9) \leq 0.29$.
Figure~\ref{fig:o32_abundance_ratio} shows that O$^{2+}$/O$^{+}$ abundance ratios in SAMI609396B largely fall below log(O$^{2+}$/O$^{+}) = 0$, inside the range highlighted in \citetalias{Yates20} as giving rise to deficits in the total oxygen abundance when ``semi-direct'' methods are used. Furthermore, a spatial trend in O$^{2+}$/O$^{+}$ abundance ratio can be seen in panel (c) of Figure~\ref{fig:o32_abundance_ratio}, with lower O$^{2+}$/O$^{+}$ in the lower-right regions of SAMI609396B. \citetalias{Yates20} found that the ``semi-direct'' abundance deficit is more pronounced at lower values of O$^{2+}$/O$^{+}$. From this, we reason that it is likely that $Z_\text{Te; LS12}$ underestimates the total oxygen abundance across the majority of SAMI609396B. In particular, the lower O$^{2+}$/O$^{+}$ values seen in the core of SAMI609396B indicate that the systematically lower metallicities obtained in the core relative to larger radii for $Z_\text{Te; LS12}$ (panel d of Figure~\ref{fig:direct_metal_map}) can be explained by an amplification of this semi-direct abundance deficit in the core region.
When this trend is not appropriately accounted for, as in the $Z_\text{Te; LS12}$ method, the O$^{2+}$/O$^{+}$ abundance ratio trend instead masquerades as the trend in total oxygen abundance seen in Figs~\ref{fig:direct_metal_map} \& \ref{fig:slit_gradient}.
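For reference, the ionic abundances discussed throughout combine into a total oxygen abundance in the usual way, neglecting higher ionisation states. A trivial sketch (with the ionic abundances as linear, not dex, quantities):
\begin{verbatim}
import numpy as np

def total_oxygen_abundance(O_plus, O_2plus):
    """12 + log(O/H) from the ionic abundances O+/H+ and O++/H+
    (both linear ratios), assuming O/H = O+/H+ + O++/H+."""
    return 12.0 + np.log10(O_plus + O_2plus)
\end{verbatim}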
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/O32_abundance_ratio.pdf}
\caption{
Map of derived O$^{++}$/O$^{+}$ abundance ratio for SAMI609396B.
Panel (a): O$^{+}$/H$^{+}$ abundance derived from Eq~\ref{eq:PM17_O2_abundance} using $T_e$[OII]=$T_e$[SII].
Panel (b): O$^{++}$/H$^{+}$ abundance derived from Eq~\ref{eq:PM17_O3_abundance} with direct $T_e$[OIII] measurement.
Note that, unlike in Figures \ref{fig:direct_metal_map} \& \ref{fig:slit_gradient}, abundance maps derived here use maps of $T_e$[SII] and $T_e$[OIII] that have been smoothed with a Gaussian filter (FWHM set to the measured PSF) to aid the visual representation of spatial trends.
Panel (c): O$^{++}$/O$^{+}$ abundance ratio. O$^{+}$ provides a larger contribution to the total oxygen abundance across the majority of SAMI609396B (log(O$^{++}$/O$^{+}) < 0$). Direct metallicities evaluated adopting an assumed $T_e$[OII] -- $T_e$[OIII] relation (e.g. $Z_\text{Te; LS12}$ in this paper) can underestimate the total oxygen abundance by up to $\sim$0.5 dex in this low ionisation regime (see Fig~7 in \citetalias{Yates20}).
Panel (d): Observed $O32$ strong-line ratios (Eq~\ref{eq:line_ratio_O32}) appear to correlate with the O$^{++}$/O$^{+}$ abundance ratio when O$^{+}$/H$^{+}$ abundance is derived in this way. Regions of lowest $O32$ correspond to the highest level of correction according to \citetalias{Yates20} correction (see \S~\ref{sub:y20_correction}).
}
\label{fig:o32_abundance_ratio}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/mixing_sequence.pdf}
\caption{BPT \& VO87 diagnostics diagrams for SAMI609396B. Line ratios for individual spaxels are shown as orange and purple points. \citet{Kewley01} and \citet{Kauffmann03} demarcation lines are shown as solid and dotted grey lines respectively. Purple points denote spaxels below these demarcation lines in each panel. Solid shapes are basis points predicted from photoionisation modelling for H\,{\sc ii}\rm\, regions (\citealt{Dopita13}; green circles), fast shocks (\citealt{Allen08}; blue triangle) and slow shocks (\citealt{Sutherland17}, \citealt{Dopita17}; yellow inverted triangle) according to the model parameters given in Table~\ref{tab:basis_points}. The DIG basis point (red stars) is adopted as the peak region of strong-line ratios from the 10\% lowest surface brightness spaxels in the \citet{Zhang17} MaNGA sample \citep{Sanders17}. Black dashed lines indicate fractional mixing sequences between these basis points.}
\label{fig:bpt}
\end{figure*}
\subsection{Finer metallicity trends from strong lines}
\label{sub:finer_trends}
The measurement uncertainties on direct method metallicities for SAMI609396B are too large to be used for anything more than the bulk trend.
While the strong-line methods show general agreement when considered in this bulk fashion, deviations exist in the finer details of their spatial trends (Fig~\ref{fig:metallicity_maps} panels c-f \& Fig~\ref{fig:slit_gradient} panel c).
Most notable is the tendency of $O3N2$ to continue to increase beyond the core ($r>0$ kpc in Fig~\ref{fig:slit_gradient}), out to the boundary of the star-formation selected region, while other strong-line methods, especially $N2S2H\alpha$ and $N2O2$, favour a peak in metallicity around $r=-0.6$ kpc with a decrease past the core and beyond.
In \S~\ref{sub:bpt} below, we explore the possibility that this tension arises from contaminating emission from non-star-forming sources.
\subsection{Dissecting the Emission Line Excitation Mechanisms on the BPT Diagram}
\label{sub:bpt}
\begin{table}
\caption{Input parameters for basis points shown in Figure~\ref{fig:bpt}.}
\label{tab:basis_points}
\begin{tabular}{lcccc}
\hline
\hline
& & $Z/Z_\odot$ & log($q$) & $\kappa$ \\
H\,{\sc ii}\rm\, region $^a$ & & 1.0 & 7.75 & 50 \\
\hline
& $v$ (km s$^{-1}$) & $Z/Z_\odot$ & $n$ (cm$^{-3}$) & $B$ ($\mu$G)\\
Fast shock $^{b, d}$ & 250 & 1.0 & 10 & 10\\
Slow shock $^{c, d}$ & 160 & 1.0 & 1000 & 6.1\\
\hline
\hline
\multicolumn{5}{l}{$^a$\citet{Dopita13}; $^b$\citet{Allen08}}\\
\multicolumn{5}{l}{$^c$\citet{Sutherland17}}\\
\multicolumn{5}{l}{$^d$Shock basis points include 50\% contribution from pre-cursor}
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/S2_O1_vs_VelDisp.pdf}
\caption{{[S\,{\sc ii}]}/H$\alpha$\, and {[O\,{\sc i}]}/H$\alpha$\, diagnostic line ratios plotted against velocity dispersion for the full SAMI609396 field of view. The positive correlation observed between each of these diagnostic line ratios and velocity dispersion indicates the presence of shocks. The line ratio shown on the horizontal axis is {[S\,{\sc ii}]\,}$\lambda\lambda$6716, 31 / H$\alpha$\, for the top panel and {[O\,{\sc i}]\,}$\lambda$6300 / H$\alpha$\, in the bottom panel. Colour coding is as for Figure \ref{fig:bpt}. Emission line fluxes and velocity dispersions shown in this figure are from 1-component fits provided in the SAMI DR2 value-added data products.}
\label{fig:s2o1_vd}
\end{figure}
Gas-phase metallicity studies such as this aim to determine abundances of nebulae photoionised by recently formed O- and B-type stars (H\,{\sc ii}\rm\, regions). However, emission from other sources including active galactic nuclei (AGN), shock-heated gas (shocks), and diffuse ionised gas (DIG), may contribute significantly to an observed extragalactic emission spectrum.
Since each of these sources exhibits a characteristically different emission spectrum, inference of the properties of ionised gas from an emission line spectrum requires knowledge (or an assumption) of the excitation mechanism causing the emission.
Different excitation sources are generally distinguished with BPT or VO87 diagnostic diagrams which compare {[O\,{\sc iii}]}/H$\beta$ to each of {[N\,{\sc ii}]}/H$\alpha$, {[S\,{\sc ii}]}/H$\alpha$, and {[O\,{\sc i}]}/H$\alpha$\, \citep{BPT81, VO87}. Demarcation lines that separate H\,{\sc ii}\rm\, regions from other sources of emission have been derived from photoionisation modelling \citep{Kewley01} and from large samples of observational data \citep{Kauffmann03}. These can be used to exclude observations which are dominated by emission sources other than H\,{\sc ii}\rm\, regions.
Of course, the presence of one emission source in an observation does not preclude the presence of any others. Indeed, a so-called ``mixing sequence'' is often observed on diagnostic diagrams, spanning the regions between the loci inhabited by H\,{\sc ii}\rm\, regions and those of other ionising sources.
Global spectra residing along this sequence are best explained as galaxies for which the global spectrum contains emission from both H\,{\sc ii}\rm\, regions and either AGN or shocks, with the position along this mixing sequence determined by the relative proportion of each of these sources of emission.
Further, when observations are made with IFU spectroscopy, mixing sequences can be spatially resolved within individual galaxies \citep{Ho14, Davies14a, Davies14b, Davies16, Davies17, Jones17, Zhang17, DAgostino18} due to differing spatial distributions of emission sources within these galaxies.
Figure~\ref{fig:bpt} shows diagnostic line ratios for individual spaxels from SAMI DR2 single component emission line fits over the full extent of the SAMI609396 merger system. Purple points are those which pass the H\,{\sc ii}\rm\, region \citet{Kewley01} selection criteria in all three panels. The spatial region selected as SAMI609396B analysed in this paper is a subset of these purple points (refer to Fig~\ref{fig:sami_dr2} for SAMI609396B spatial selection).
Overplotted on Fig~\ref{fig:bpt} are basis points predicted from photoionisation modelling for H\,{\sc ii}\rm\, regions (\citealt{Dopita13}; green circles), fast shocks (\citealt{Allen08}; blue triangle) and slow shocks (\citealt{Sutherland17}, \citealt{Dopita17}; yellow inverted triangle) as well as observed loci of DIG-dominated regions (\citealt{Zhang17, Sanders17}; red star).
The adopted model parameters for each of these basis points are summarised in Table~\ref{tab:basis_points}. Note that the shock model basis points include a contribution from precursor emission. We assume a 50:50 contribution from the shock and precursor. H\,{\sc ii}\rm\, region model parameters are based on metallicity and ionisation parameter values obtained from $N2O2$ and $O32$ line ratios (see \S~\ref{sub:SL_logOH}).
Shock model parameters are difficult to constrain as they are degenerate with fractional contribution and spatial variations, not to mention the large modelling uncertainties. Selected shock velocities (Table~\ref{tab:basis_points}) broadly reflect velocity dispersions observed in SAMI609396 (see Fig~\ref{fig:sami_dr2}~\&~\ref{fig:s2o1_vd}) and were chosen on the basis of how well they visually reproduced the individual points in Figure~\ref{fig:bpt}.
Black dashed lines show mixing models between H\,{\sc ii}\rm\, regions and each of these other emission sources. These lines indicate the sequence that arises from varying the fractional contribution of each of the two fixed basis points. The mid-point of each sequence is labelled with a black cross.
In addition to emission line ratios, velocity dispersion is a useful tool for identifying the presence of shocks.
Emission from shocks often shows a positive correlation between velocity dispersion and {[S\,{\sc ii}]}/H$\alpha$\, or {[O\,{\sc i}]}/H$\alpha$\, diagnostic line ratios \citep{Ho14}, while DIG emission will not yield such a correlation.
In Figure~\ref{fig:s2o1_vd}, {[S\,{\sc ii}]}/H$\alpha$\, and {[O\,{\sc i}]}/H$\alpha$\, emission line ratios from SAMI609396 are plotted against measured velocity dispersion, supplementing our BPT and VO87 diagrams. Figure~\ref{fig:s2o1_vd} shows that both {[S\,{\sc ii}]}/H$\alpha$\, and {[O\,{\sc i}]}/H$\alpha$\, ratios are positively correlated with velocity dispersion in SAMI609396. While emission line ratios alone cannot definitively distinguish between emission from shocks and DIG (Fig~\ref{fig:bpt}), on the basis of Figure~\ref{fig:s2o1_vd} we conclude that the dominant source of non-star-forming emission observed in the SAMI609396 data cube is shock-heated gas.
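The trend in Figure~\ref{fig:s2o1_vd} can be quantified with a simple rank statistic. A minimal sketch (assuming hypothetical per-spaxel arrays \texttt{ratio} and \texttt{veldisp}, with unmeasured spaxels as NaN):
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def shock_correlation(ratio, veldisp):
    """Spearman rank correlation between a diagnostic line ratio
    (e.g. [SII]/Halpha) and velocity dispersion. A significant
    positive correlation is the shock signature; DIG emission
    should show no such trend."""
    ok = np.isfinite(ratio) & np.isfinite(veldisp)
    rho, pval = spearmanr(ratio[ok], veldisp[ok])
    return rho, pval
\end{verbatim}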
\subsubsection{Effect of contaminating emission}
Given the limited ($\sim$kpc) spatial resolution of SAMI, some amount of contamination from non-star-forming emission sources is inevitable, despite limiting our analysis to the region where emission is nominally dominated by star formation.
\citet{Sanders17} showed that contamination from DIG can lead to discrepancies in measured metallicity of up to $\sim$0.3 dex.
In resolved studies, \citet{Poetrodjojo19} found that the inclusion of DIG in metallicity gradient measurements affects all diagnostics to varying degrees.
Of particular concern for establishing the robustness of gradient studies is the presence of significant systematic variation in the relative contributions of H\,{\sc ii}\rm\, region and non-star-forming emission. This has the potential to affect the inference of spatial metallicity trends.
Figure~\ref{fig:bpt} suggests that spaxels in this star-forming selected region may form the beginning of a spatial mixing sequence, perhaps indicating the existence of spatial variations in the fractional contribution of shock emission to the total emission.
Given the multiple ways metallicities from different diagnostics can be affected by contaminating emission, these variations could help to explain differences in the apparent metallicity trends recovered.
Line ratios plotted in Figures \ref{fig:bpt} \& \ref{fig:s2o1_vd} support our assumption that the ``star-forming'' selected spaxels associated with SAMI609396B are indeed dominated by emission from H\,{\sc ii}\rm\, regions. However, it should be considered that even in regions with emission ``dominated'' by H\,{\sc ii}\rm\, regions, some amount of non-star-forming emission will invariably be present. In particular, the mixing sequences shown as black dashed lines in Figure~\ref{fig:bpt} highlight that there is room for variation in the relative contribution of different emission sources without moving outside the scope of what can be considered ``dominated'' by H\,{\sc ii}\rm\, regions.
A quantitative assessment of this effect is beyond the scope of this paper, but we note that variable contributions of non-star-forming emission in IFU observations of galaxies has the potential to affect measured trends in gas-phase abundances.
In \S~\ref{sec:OH_trends} we showed that, aside from the $Z_\text{Te; LS12}$ application of the direct method, our metallicity measurements favour a flattened metallicity gradient.
This flat gradient is likely due to the effects of the merger; mergers are known to produce flattened metallicity gradients through strong inflows of pristine gas from the outskirts of galaxies \citep[e.g.][]{Kewley10}.
The measured gradient may be affected by the presence of shocks, however given that these metallicities were derived using a relatively small subset of the mixing sequence seen in Fig~\ref{fig:bpt} (i.e. the purple points) the effect of this contribution is likely not too significant.
\section{Conclusion} \label{sec:conclusion}
Following a search of the SAMI Galaxy Survey Data Release 2 Public Data, we identified SAMI609396B, an interacting galaxy showing high $S/N$, spatially-resolved detections of three auroral lines: {[O\,{\sc iii}]\,}$\lambda$4363, {[S\,{\sc ii}]\,}$\lambda\lambda$4069, 76 and {[S\,{\sc iii}]\,}$\lambda$6312. The source also has properties that make it a good candidate for a local analog of high redshift galaxies, in particular for its combination of moderate stellar mass, disturbed morphology and elevated specific star formation rate (see \S~\ref{sub:value_added} \& Appendix \ref{ap:global_properties}).
We use {[O\,{\sc iii}]\,} and {[S\,{\sc ii}]\,} auroral-to-strong line ratios to derive spatially resolved electron temperature measurements for two sub-regions within the emitting H\,{\sc ii}\rm\, regions ($T_e$({[O\,{\sc iii}]}) and $T_e$({[S\,{\sc ii}]})).
Our results indicate the absence of a strong positive correlation between the $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) temperatures across different spatial regions in SAMI609396B. Instead, Figure~\ref{fig:tSII_v_tOIII} shows $T_e$({[S\,{\sc ii}]}) and $T_e$({[O\,{\sc iii}]}) appearing to trend in opposite directions between two apertures. This deviates from the common assumption of a fixed positive monotonic relation between these different temperatures.
Our $T_e$({[O\,{\sc iii}]}) measurements allow for direct method O$^{2+}$/H$^+$ abundance measurements. We then derive direct method total oxygen abundances under three different treatments of the O$^{+}$/H$^+$ abundance:
\begin{enumerate}
\item $Z_\text{Te; LS12}$: $T_e$({[O\,{\sc ii}]}) is assumed from $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation \citep{LopezSanchez12}.
\item $Z_\text{Te; Y20}$: As for $Z_\text{Te; LS12}$, with additional \citetalias{Yates20} empirical correction, based on {[O\,{\sc iii}]}/{[O\,{\sc ii}]} strong-line ratio.
\item $Z_\text{Te; SII}$: $T_e$({[O\,{\sc ii}]}) adopted as $T_e$({[O\,{\sc ii}]}) = $T_e$({[S\,{\sc ii}]}).
\end{enumerate}
We show that the disagreement between spatial metallicity trends returned by these methods is pronounced. $Z_\text{Te; LS12}$ favours a strong spatial trend with much lower total oxygen abundances being measured in the core, while $Z_\text{Te; Y20}$ and $Z_\text{Te; SII}$ instead suggest a flatter spatial trend, if anything perhaps opposite to the $Z_\text{Te; LS12}$ trend.
We conclude that the cause of this disagreement is variation in the O$^{2+}$/O$^{+}$ abundance ratio causing deviations from the assumed $T_e$({[O\,{\sc ii}]}) -- $T_e$({[O\,{\sc iii}]}) relation. Accordingly, $Z_\text{Te; LS12}$ results in systematically lower O$^{+}$ abundances across the whole of SAMI609396B than those of $Z_\text{Te; SII}$.
This gives rise to an apparent metallicity gradient as the effect is not spatially uniform: O$^{+}$ abundance is particularly elevated in the core when probed by $Z_\text{Te; SII}$.
The measured variation in the O$^{2+}$/O$^{+}$ abundance ratio correlates with variations in the {[O\,{\sc iii}]}/{[O\,{\sc ii}]\,} strong line ratio.
Thus, applying the empirical correction from \citet{Yates20} ($Z_\text{Te; Y20}$) results in a trend more in line with $Z_\text{Te; SII}$.
Additionally, we derive metallicities with four strong-line diagnostics ($R_{23}$, $N2O2$, $O3N2$ and $N2S2H\alpha$) using a mixture of observation- and theory-based calibrations. Spatial trends recovered by these strong-line methods again run opposite to that of $Z_\text{Te; LS12}$, much more in line with those observed with $Z_\text{Te; SII}$ and $Z_\text{Te; Y20}$.
From diagnostic diagrams, we identify the presence of non-star-forming emission in the SAMI609396 system. We attribute this emission to shock-heated gas on the basis of the observed correlation between the {[S\,{\sc ii}]}/H$\alpha$\, emission line ratio and the measured velocity dispersion.
Despite applying our analysis to the star-forming selected region around SAMI609396B, we note that in reality each spaxel will contain some amount of contaminating, non-star-forming emission.
In particular, we show that spaxels in this star-forming selected region appear to form the beginning of a spatial mixing sequence, indicating spatial variations in the fractional contribution of non-star-forming emission to the total emission.
Given the different ways metallicities from different diagnostics can be affected by contaminating emission, these variations could help to explain differences in the apparent metallicity trends recovered.
Aside from the $Z_\text{Te; LS12}$ application of the direct method, our metallicity measurements favour a flat metallicity gradient for SAMI609396B. This flat gradient can be explained by the effects of the merger which are known to produce flattened metallicity gradients due to inflow of pristine gas from large radii \citep{Kewley10}. However, possible contamination from shock emission may affect the gradient measurement.
The direct method remains the main calibration baseline for studying the chemical evolution of galaxies. However, it is not immune to modelling uncertainties. This study highlights the importance of adequately constraining the internal ionisation and temperature structure within H\,{\sc ii}\rm\, regions when probing spatial variations of the metallicity across galaxies. We have shown here that abundance measurements based on $T_e$({[O\,{\sc iii}]}) alone are not a good indicator of the metallicity gradient in SAMI609396B due to their sensitivity to the ionisation parameter.
Spatially resolved applications of the direct method are currently limited even within the local Universe.
Low-mass galaxies ($<$10$^{9.5}$ $M_\odot$) contribute significantly to the stellar mass density and to the escape fraction of hydrogen ionising photons at high redshift. However, the internal chemical distribution of these low-mass galaxies is rarely constrained owing to spatial resolution and detection limits.
This situation will be improved by forthcoming facilities such as \emph{JWST}/NIRSpec and ground-based ELTs, which will push both the depth and the spatial resolution attainable for IFU observations. In-depth analyses of local objects like SAMI609396B thus set the stage for future detailed metallicity studies of low-mass galaxies at high redshift.
\section*{Acknowledgements}
We would like to thank Rob Yates for detailed discussions about the Yates et al. (2020) semi-direct method and for sharing their
insights on using the $O32$ ratio to distinguish between lower and upper branch values in Figure D1.
We are grateful to Tucker Jones for valuable discussions about this work and to Jesse van de Sande and Ned Taylor for sharing SAMI-related expertise. We would also like to thank the referee for their constructive suggestions which certainly improved this work.
This study was based on publicly released data from the SAMI Galaxy Survey.
The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory.
This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. AJC acknowledges support from an Australian Government Research Training Program (RTP) Scholarship.
\section*{Data Availability}
This paper uses data from the SAMI Galaxy Survey Public Data Release 2 \citep{Scott18} which are available at \url{https://sami-survey.org/abdr}. Those data products include strong emission line flux maps; the auroral emission line flux maps used here are available from the corresponding author (AJC) on reasonable request.
A list of SAMI galaxies identified in our search as showing auroral line emission is given in Appendix~\ref{ap:auroral_list}.
\bibliographystyle{mnras}
\section{Introduction}
Anomaly-free effective models of loop quantum gravity, derived for spherically
symmetric configurations \cite{JR,HigherSpatial} and cosmological
perturbations at high density \cite{ScalarHol,ScalarHolInv}, have revealed an
unexpected phenomenon: At large curvature, signature change appears to be a
generic feature of quantum space-time geometry as provided by this theory
\cite{Action}. Not only the general phenomenon but also the specific form of
signature change seems to be universal in these models, giving further support
of the genericness of the effect. For the typical form of ``holonomy
modifications'' used widely in models of loop quantum gravity, the speed of
physical modes differs from the classical speed of light by a factor of
$\beta(h,K):=\alpha(h) \cos(2\delta K)$ where $\alpha(h)>0$ is a function of
the spatial geometry (the metric $h$), $\delta$ is a quantization parameter
often assumed to be related to the Planck length, and $K$ is a measure for
extrinsic curvature (or the Hubble parameter in cosmological models). For
large curvature $K>\pi/(4\delta)$, the speed is negative and commonly
hyperbolic mode equations turn elliptic, which in these models are of the form
\begin{equation} \label{Wave}
\frac{\partial^2\phi}{\partial t^2}- \frac{\beta(h,K)}{a^2} \Delta\phi=S[\phi]
\end{equation}
with source and lower-derivative terms $S[\phi]$. All modes
$\phi$, gravitational as well as matter, are affected in the same way.
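The character of (\ref{Wave}) can be read off from its plane-wave dispersion relation: neglecting $S[\phi]$ and inserting $\phi\propto\exp(-i\omega t+i\vec{k}\cdot\vec{x})$ gives
\begin{equation}
 \omega^2=\frac{\beta(h,K)}{a^2}\,|\vec{k}|^2\,,
\end{equation}
so that $\omega$ is real and modes propagate for $\beta>0$, while for $\beta<0$ the frequency is purely imaginary and all modes grow or decay exponentially.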
The overall picture shows some relationships with other approaches to quantum
cosmology, mainly the no-boundary proposal of Hartle and Hawking
\cite{nobound}, and with other physical phenomena such as transonic flow or
phases of nano-wires. Related mathematical questions have been studied
since the 1930s, following seminal work by Tricomi
\cite{Tricomi}. Nevertheless, as a basis of effective space-time models,
equations of the form (\ref{Wave}) and the phenomenon of signature change show
several new and surprising features. In this article, we give a general
presentation of implications in cosmology.
Section~\ref{s:Def} gives a definition of signature change in the absence of a
classical space-time metric, and contains a brief comparison with classical
models of signature change. (A review of the formal origin and gauge
independence of equation (\ref{Wave}) and signature change can be found in
App.~\ref{s:SpaceTime}.) Section~\ref{s:Effects} provides several related
examples in different areas of physics together with the appropriate
mathematical formulation in terms of well-posed partial differential
equations. Section~\ref{s:Cosmo} applies these results to gravitational
questions, compares the pictures based on effective equations with
wave-function methods, draws cosmological implications, and ends with cautious
notes on global issues.
\section{Definitions}
\label{s:Def}
Despite first appearance, the wave equation (\ref{Wave}) is covariant under a
deformed algebra replacing classical coordinate or Poincar\'e
transformations. The corresponding quantum space-time structure is not
Riemannian, but has a well-defined canonical formulation using hypersurface
deformations, as reviewed briefly in the appendix.
A Riemannian manifold has a covariant metric tensor which transforms by
$g_{a'b'}=(\partial x^a/\partial x^{a'}) (\partial x^b/\partial x^{b'})
g_{ab}$ when coordinates are changed on the manifold. Accordingly, the line
element ${\rm d}s^2=g_{ab}{\rm d}x^a{\rm d}x^b=g_{a'b'}{\rm d}x^{a'}{\rm
d}x^{b'}$ is invariant. Covariance under coordinate transformations (of
solutions to the field equations) is canonically represented as
gauge-invariance under hypersurface deformations in space-time
\cite{Regained}. Since the latter, along with Poincar\'e transformations, are
modified in the effective space-time structures we are considering, they can
no longer correspond to coordinate transformations. As a consequence, the
effective ``metric'' does not give rise to an invariant line element. Instead
of using metric-space notions such as geodesics and invariant scalar products
of 4-vectors, in order to extract predictions there is only the possibility of
computing canonical observables invariant under the modified gauge
transformations. In this way, one still has access to all observable
information. However, the lack of a metric structure implies that many of the
convenient and well-known techniques of evaluating solutions of general
relativity are no longer available.
Although there is no standard notion of geodesics and light rays or null
lines, it remains straightforward to associate a causal structure to the
modified space-time structure underlying (\ref{Wave}), as long as
$\beta>0$. Instead of computing null lines for a metric, we just use
characteristics of the wave equation (\ref{Wave}), for instance for
gravitational waves to be specific. (In the presence of inverse-triad
corrections, scalar modes may propagate at speeds different from tensor modes
\cite{LoopMuk,ScalarHolInv}.) A characteristic is then a hypersurface which at
any point is normal to a vector $k^a$ satisfying $(k^t)^2-\beta
|\vec{k}|^2=0$. With these characteristics instead of null lines, we can
define light cones, a causal structure, and derived notions such as Penrose
diagrams.
Characteristics exist as long as $\beta>0$. For holonomy modifications,
$\beta$ changes sign at high density, and the characteristic equation no
longer has non-trivial solutions $k^a\not=0$. As a consequence, (\ref{Wave})
does not provide a causal structure in such a regime. Since the same condition
of $\beta<0$ makes the mode equation (\ref{Wave}) turn elliptic, we call this
regime ``Euclidean,'' while for $\beta>0$ we call it ``Lorentzian.'' In this
way, we generalize the two standard discrete choices of signature to a
continuous range of values taken by $\beta$ as it varies from the classical
limit $\beta=1$ at low curvature to a negative value at high curvature. Unless
$\beta=\pm1$, the canonical fields from which (\ref{Wave}) is derived do not
provide a classical notion of Lorentzian space-time or 4-dimensional Euclidean
space. But the most important properties regarding physical consequences,
including the existence of a causal structure and the type of initial or
boundary value problem required for reliable solutions, only depend on the
sign of $\beta$ rather than its precise value. For this reason, we still speak
of Euclidean or Lorentzian signature even if we have neither Euclidean nor
Lorentzian space(-time).
Signature change has been studied in quite some detail in classical general
relativity ($\beta=1$); see for instance \cite{SigChangeClass}. However, such
models are crucially different from what is considered here, in that they have
$\beta{\rm sgn}(\det h_{ab})$ (or $\beta{\rm sgn} N^2$) changing
discontinuously. As a consequence, in classical models the transition from
Euclidean to Lorentzian signature is always singular, which is not the case in
our effective models. (Note that for this reason signature change in the model
of \cite{SigChangeHybrid} is {\em not} analogous to deformed space-time
structures.) The subtleties and controversies related to the distributional
nature of solutions with classical signature change, discussed for instance in
\cite{ClassSigChange,ClassSigChange2,SigAbs,SigSmooth,SigChangeJunction,BoundarySigChange,GeometrySig},
do not play a role in our context. (The models we study could be considered as
a version of dynamical signature change as anticipated in the conclusions of
\cite{EuclLor}.)
It is necessary to consider inhomogeneity in order to see modified space-time
structures in which modes obey (\ref{Wave}). In particular the phenomenon of
signature change, which is the main topic of this article and is realized
because $\beta$ may change sign, can only be seen in inhomogeneous models: it
is manifested by the relative sign in space and time derivatives in field
equations. However, signature change (or a modified space-time structure) is
not a consequence of inhomogeneity, which in the perturbative context would be
dubious \cite{DeformedCosmo}. The origin of signature change lies in the
modification that one makes in the classical dynamics if one quantizes the
theory following loop quantum gravity, which provides operators for holonomies
(exponentiated and integrated connections) instead of ordinary connection
components. This modification appears already in the background dynamics of a
homogeneous minisuperspace model, but in this context one does not notice
signature change because all spatial derivatives vanish. Perturbative
inhomogeneity then makes the effect visible. Still, inhomogeneity or
perturbation theory is not the origin of signature change, which is easy to
see by the following two arguments: First, signature change persists in a
perturbative field equation no matter how small the inhomogeneity is, as long
as it is non-zero. The coefficients of space and time derivatives in
(\ref{Wave}) depend on background quantities, and for $\beta<0$ they have the
same sign for all values of inhomogeneity. Secondly, the same form of
signature change appears in spherically symmetric models in which
inhomogeneity need not be treated perturbatively.
\section{Related effects}
\label{s:Effects}
In the explicit form as provided by models of loop quantum gravity, signature
change is a new effect. But it has precursors in physics as well as
mathematics. Besides the examples discussed in detail below, the case of
helically symmetric binary systems \cite{Helical} is worth mentioning. The
configuration is described by partial differential equations which change
signature along a spacelike direction (far from the center), rather than a
timelike one as in our cosmological models.
\subsection{Hartle--Hawking wave function}
A Lorentzian space-time cannot be closed off at a finite time without
a boundary. As proposed in \cite{nobound}, however, one may postulate that
quantum gravity gives rise to a modified space-time structure that allows a
transition to Euclidean 4-dimensional space. A Euclidean cap can then be
attached to a space-time manifold which becomes Lorentzian at low
curvature. In \cite{nobound} and the literature based on it, this scenario has
been used mainly to specify the high-density asymptotics of wave functions
satisfying the Wheeler--DeWitt equation of homogeneous models.
The scenario suggested by effective constraints in models of loop quantum
gravity can be seen as a concrete realization of the quantum-gravity effects
that may give rise to signature change. The condition that
extrinsic curvature vanish at the interface between Euclidean and Lorentzian
parts of semiclassical solutions \cite{RealTunneling} is then replaced by
$\beta(h,K)=0$. Nevertheless, it is not guaranteed that the same
consequences are implied for wave functions, not the least because the
possibility of signature change relies on a quantum-geometry effect (based on
the use of holonomies in loop quantum gravity) which turns the Wheeler--DeWitt
equation into a difference equation \cite{cosmoIV,IsoCosmo}. The
minisuperspace discreteness implied by this difference equation is relevant
especially at high density, that is in the Euclidean phase where asymptotic
properties are discussed according to \cite{nobound}. Qualitatively different
conclusions for wave functions could then be reached, so that it is not clear
whether the close relationship in the space-time picture of \cite{nobound}
with the models discussed here implies a similar relationship in
predictions. Specific details of wave functions might well be closer to
\cite{tunneling} than to \cite{nobound}. We leave this question open in the
present article, in which we are concerned mainly with effective equations.
\subsection{Analog condensed-matter models}
\label{s:Analog}
In addition to related cosmological models in terms of space-time properties,
there are rather different physical phenomena which give rise to mathematical
descriptions similar to (\ref{Wave}). The main examples known to us are
transonic flow and hyperbolic metamaterials \cite{HypMM}. Especially the
latter phenomenon provides an interesting analog picture of the cosmological
effects.
An example of a hyperbolic metamaterial is given by an array of parallel
conducting nano-wires immersed in a dielectric medium. Such a material behaves
as a conductor in the direction parallel to the nano-wires while in the normal
one it exhibits dielectric properties \cite{HypMmSign}. Choosing the $z$-axis
to be parallel to the nano-wires, the wave equation for the component $E^z =:
\phi$ of the electromagnetic field can be written as \cite{HypMmSign}
\begin{equation}
\frac{\partial^2 \phi}{\partial t^2}-\frac{1}{\epsilon_{\bot}}
\frac{\partial^2 \phi}{\partial z^2}
-\frac{1}{\epsilon_{||}}\left(\frac{\partial^2 \phi}{\partial
x^2}+\frac{\partial^2 \phi}{\partial y^2}\right)=0\,.
\label{metamaterialEq}
\end{equation}
For the ``wired'' metamaterials the electric permittivity is
$\epsilon_{||}>0$, as for the standard dielectric medium. However, because the
metamaterial is conducting in the $z$-direction, $\epsilon_{\bot}$ will be
negative in a sufficiently low frequency range.\footnote{We assume here that
the magnetic properties of the metamaterial are as usual, that is magnetic
permeabilities in all directions are positive. However, it is worth
mentioning that metamaterials with both electric permittivity $\epsilon<0$
and magnetic permeability $\mu<0$ in some frequency band have been
constructed. An interesting property of such metamaterials is that the
refractive index is negative $n=-\sqrt{\epsilon \mu}$, leading to very
interesting and sometimes counterintuitive behavior \cite{negref}. The
relevance of this phenomenon in the context of analog models of quantum
gravity is an open issue.} In particular, for the Drude theory of a
conducting medium
\begin{equation}
\epsilon_{\bot}=1-\frac{\omega^2_{\rm P}}{\omega^2}\,,
\end{equation}
where $\omega_{\rm P}$ is the plasma frequency dependent on the geometry and
composition of the metamaterial. At high frequencies ($\omega>\omega_{\rm P}$),
$\epsilon_{\bot}>0$ and the spatial part of equation (\ref{metamaterialEq}) is
elliptic. However, for $\omega<\omega_{\rm P}$, we have $\epsilon_{\bot}<0$
and the sign in front of the second derivative with respect to $z$
changes. The spatial Laplace operator becomes hyperbolic, with the
$z$-component playing a role of the second time variable. The dispersion
relation
\begin{equation}
\omega^2- \frac{k_z^2}{\epsilon_{\bot}}-\frac{k^2_{||}}{\epsilon_{||}}=0\,,
\end{equation}
where $k^2_{||}:=k_x^2+k_y^2$, for fixed $\omega$ is in this regime no longer
represented by ellipsoids but hyperboloids. In consequence, the $k$-space
undergoes a topology change.
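As a simple numerical illustration (a sketch in arbitrary units, not tied to a specific material), the Drude form of $\epsilon_{\bot}$ classifies the two regimes as follows:
\begin{verbatim}
def eps_perp(omega, omega_P=1.0):
    """Drude permittivity along the wires: 1 - (omega_P/omega)^2."""
    return 1.0 - (omega_P / omega) ** 2

for omega in (0.5, 0.9, 1.1, 2.0):
    eps = eps_perp(omega)
    # eps_perp < 0: the z-term of the wave equation flips sign and
    # the spatial operator is hyperbolic; eps_perp > 0: elliptic.
    regime = "hyperbolic" if eps < 0 else "elliptic"
    print("omega/omega_P = %.1f: eps_perp = %+.2f (%s)"
          % (omega, eps, regime))
\end{verbatim}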
Passing between the elliptic and hyperbolic regions is not only a matter of
frequency dependence. For a fixed (sufficiently low) frequency, the transition
between the different signs of $\epsilon_{\bot}$ can be induced by a
rearrangement in the structural form of the ``wired'' metamaterial. In
particular, melting of the nano-wires to the liquid phase (associated
with a first-order phase transition) has been observed
experimentally. During the process the value of $\epsilon_{\bot}$ smoothly
changes its sign (from negative to positive) with increasing temperature.
Alternatively, one can imagine the transition to be of second order. This
idea has been discussed in the context of signature change due to holonomy
corrections in \cite{SigChange,Silence,Critical}. Let us imagine that the
nanowires are immersed in a dielectric fluid, having an ability to
rotate. Furthermore, let us introduce magnetic dipole-dipole type interactions
between the wires by attributing magnetic moments to them. Then, in the high
temperature unordered phase, with all nano-wires pointing in random
directions, there is hardly any electric conductivity. But when the phase
changes to an ordered one (lowering the temperature), a significant electric
current starts flowing, just as time starts flowing in our universe models
when the density is small enough to trigger signature change to Lorentzian
space-time.
In the gravitational case, the process can be modelled by a bi-metric gravity
theory, where the effective metric experienced by the fields is
\begin{equation}
g_{\mu\nu} = \delta_{\mu\nu}-2 \chi_{\mu} \chi_{\nu}\,.
\end{equation}
The metric may be viewed as a 4-dimensional analog of the inverse of the
electric permittivity tensor $\epsilon_{ij}$. The $\chi_{\mu}$ is an
effective mean field having the interpretation of an order parameter, defined
such that in the unordered phase $|\vec{\chi}|=0$ (here $|\vec{\chi}| =
\sqrt{\delta^{\mu\nu} \chi_{\mu} \chi_{\nu}}$). In this case the fields
experience the Euclidean metric $g_{\mu\nu} = \delta_{\mu\nu}$. In the fully
ordered phase $|\vec{\chi}|=1$, the Lorentzian signature emerges in a
spontaneously chosen direction in the 4-dimensional space.
In order to describe the transition from the unordered to the ordered state
more quantitatively let us postulate the following form of the free energy for
a model with a test field $\phi$:
\begin{eqnarray}
F = \int {\rm d}V \underbrace{ \left( \delta^{\mu\nu}+\frac{2 \chi^{\mu}
\chi^{\nu}}{1-2 |\vec{\chi}|^2} \right)}_{g^{\mu\nu}}
\partial_{\mu} \phi \partial_{\nu} \phi +\underbrace{C \left[ \left(
\frac{\rho}{\rho_{\rm QG}}-1\right)|\vec{\chi}|^2
+\frac{1}{2}|\vec{\chi}|^4 \right]}_{V(\chi^{\mu},\rho)}\,, \nonumber
\end{eqnarray}
where $C$ is a constant. Because of the $\chi^{\mu} \chi^{\nu} \partial_{\mu}
\phi \partial_{\nu} \phi$ contribution to the kinetic term, $F$ is not
SO(4)-invariant in the internal indices of $\chi^{\mu}$. The explicit
symmetry-breaking term can be treated perturbatively. Then, equilibrium
corresponds to a minimum of the SO(4)-invariant potential
$V(\chi^{\mu},\rho)$. In order to model signature change in loop
quantum cosmology with maximum energy density $\rho_{\rm QG}$, the
parameters of the potential are fixed such that at energy densities $\rho
> \rho_{\rm QG}$ the vacuum state maintains the SO(4) symmetry --- the order
parameter is $|\vec{\chi}|=0$. For $\rho \leq \rho_{\rm QG}$,
we choose the potential so that its minimum is located at
$|\vec{\chi}|=\sqrt{1-\rho/\rho_{\rm QG}}$, with some spontaneously
chosen direction in the 4-dimensional configuration space of the order
parameter. Without loss of generality, we may assume
$\vec{\chi}$ to point in the $0$-direction ($\chi_0=|\vec{\chi}|$ and
$\chi_i=0$ for $i=1,2,3$); thus, $g_{00} =
1-2\chi_0\chi_0=-1+2\rho/\rho_{\rm QG} =-\beta$ and $g_{ii} = 1$. The
corresponding wave equation for the test field is
\begin{equation}
-\frac{1}{\beta}\frac{\partial^2}{\partial t^2} \phi+\Delta \phi= 0\,,
\label{EQMv}
\end{equation}
which is of the form (\ref{Wave}) with $\beta=1-2\rho/\rho_{\rm QG}$ as in
models of loop quantum cosmology. At $|\vec{\chi}|=0$ ($\beta=-1$) the
equation exhibits the $R^4 \rtimes {\rm SO}(4)$ symmetry, while in the fully
ordered state $|\vec{\chi}|=1$ ($\beta=1$) the $R^{1,3} \rtimes {\rm SO}(1,3)$
Poincar\'e symmetry is satisfied.
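In this parametrization the signature is thus controlled entirely by the energy density; as a trivial numerical sketch (densities in units of $\rho_{\rm QG}$):
\begin{verbatim}
def beta_of_rho(rho, rho_QG=1.0):
    """Signature function beta = 1 - 2 rho / rho_QG of Eq. (EQMv):
    beta > 0 is Lorentzian, beta < 0 Euclidean, with the
    transition at rho = rho_QG / 2."""
    return 1.0 - 2.0 * rho / rho_QG

for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    b = beta_of_rho(rho)
    phase = ("Lorentzian" if b > 0 else
             "transition" if b == 0 else "Euclidean")
    print("rho/rho_QG = %.2f: beta = %+.2f (%s)" % (rho, b, phase))
\end{verbatim}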
It is worth stressing that the presented model, relating signature change to
the spontaneous symmetry breaking of SO(4), currently has no support from more
fundamental considerations. Therefore, its physical validity
requires further studies. In particular, a possible relation of the order
parameter $\chi_{\mu}$ with the elementary degrees of freedom of loop quantum
gravity remains unclear. Nevertheless, the model presents an interesting
example because of its formal resemblance to signature change in loop quantum
cosmology. There is, however, an important conceptual difference: while analog
models of signature change are fully well-defined and even have some
observable properties, quantum-cosmology effects on space-time itself (rather
than matter fields in space-time) may be subject to several global problems on
which we will comment in Sec.~\ref{s:Global}.
\subsection{Mixed-type characteristic problem and Tricomi equation}
Partial differential equations of mixed type have been studied in the
mathematics literature since the 1930s, beginning with the Tricomi problem for
the differential equation
\begin{equation}
\frac{\partial^2 u}{\partial y^2}+y\frac{\partial^2u}{\partial x^2}=0
\label{TricomiEQ}
\end{equation}
in the $(x,y)$-plane. This type of equation can be seen as an approximation to
our equation (\ref{Wave}) around one of the finite boundaries of the elliptic
regime, where $\beta=0$. Here, we recall relevant features of a suitable
characteristic problem \cite{Tricomi,TricomiGen}.
Let us suppose that we would like to solve equation (\ref{TricomiEQ}) in a
domain $\Omega=\Omega_1\cup \Omega_2$, where $\Omega_1$ and $\Omega_2$ are
elliptic and hyperbolic domains, respectively. For the Tricomi problem, the
sub-domain $\Omega_2$ cannot be chosen in an arbitrary manner. Namely, the
boundary $\partial \Omega_2$ is determined by characteristics of
(\ref{TricomiEQ}) attached to the endpoints of $\partial \Omega_1$ at the
interface between the hyperbolic and elliptic regions; see
Fig.~\ref{Tricomi1}. The characteristics in the hyperbolic region are
determined by the characteristic equation to (\ref{TricomiEQ}): $({\rm
d}x)^2+y({\rm d}y)^2=0$. This equation has two families of solutions with a
different sign, enabling us to introduce the characteristic coordinates
\begin{eqnarray}
\xi := x+\frac{2}{3}(-y)^{3/2}\,, \\
\eta := x-\frac{2}{3}(-y)^{3/2}\,.
\end{eqnarray}
(We have $y<0$ in the hyperbolic region.)
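One can check directly that these are characteristic coordinates: along a curve of constant $\xi$ one has ${\rm d}x=\sqrt{-y}\,{\rm d}y$, and along constant $\eta$, ${\rm d}x=-\sqrt{-y}\,{\rm d}y$; in either case
\begin{equation}
 ({\rm d}x)^2+y({\rm d}y)^2=(-y)({\rm d}y)^2+y({\rm d}y)^2=0\,,
\end{equation}
so both families solve the characteristic equation.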
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{Tricomi1}
\caption{Graphical representation of the characteristic problem for the
mixed-type Tricomi equation. Solutions to the partial
differential equation (\ref{TricomiEQ}) (hyperbolic for $y<0$ and elliptic
for $y>0$) exist uniquely and stably in the shaded region
$\Omega_1\cup\Omega_2$, provided boundary data are imposed on one
characteristic $C$ of the hyperbolic region together with a curve $A$ in
the elliptic region. Characteristic coordinates (\ref{charac}) are
indicated by $\xi$ and $\eta$.}
\label{Tricomi1}
\end{center}
\end{figure}
Solutions to the Tricomi equation (\ref{TricomiEQ}) are unique and stable if
adequate boundary conditions are imposed. Applying the notation from
Fig.~\ref{Tricomi1}, they are as follows: in region $\Omega_1$, the value of
$u$ has to be specified at the boundary curve $A$, while in region $\Omega_2$
the value of $u$ at the characteristic curve $C$ has to be given. The value
of $u|_{B}$ is a result of fixing the boundary condition $u|_{C\cup A}$.
The reason why the boundary condition in the hyperbolic region is imposed on
the characteristic curve becomes clear by analyzing the case of the standard
wave equation with one spatial dimension:
\begin{equation}
\frac{\partial^2 u}{\partial y^2}-\frac{\partial^2u}{\partial x^2}=0\,.
\label{WaveEQ}
\end{equation}
The corresponding characteristic equation $({\rm d}x)^2-({\rm d}y)^2=0$ has
solutions $x\pm y=C$, which allow us to introduce the characteristic (light
cone) variables
\begin{eqnarray} \label{charac}
\xi := x+y\,, \\
\eta := x-y\,.
\end{eqnarray}
In terms of these variables the wave equation (\ref{WaveEQ}) reduces to the
canonical form
\begin{equation}
\frac{\partial^2 u}{\partial \xi \partial \eta}=0\,,
\end{equation}
with general solution $u=f(\xi)+g(\eta)$. An important observation is that an
initial-value problem (Cauchy problem) in the variables $(x,y)$ translates
into a boundary-value problem in the characteristic variables
$(\xi,\eta)$. In particular, the Cauchy initial conditions $u|_{y=0}$ and
$\left. \partial u/\partial y \right|_{y=0}$ translate into Dirichlet boundary
conditions on the light cone, $u|_{\xi=0}$ and $u|_{\eta=0}$.
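The translation between the two formulations can be confirmed symbolically;
the short SymPy sketch below (illustrative only) checks that d'Alembert's
general solution indeed satisfies (\ref{WaveEQ}):
\begin{verbatim}
# u = f(x + y) + g(x - y) solves u_yy - u_xx = 0, which is the
# statement that the wave equation reduces to u_{xi eta} = 0 in
# characteristic coordinates.
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')

u = f(x + y) + g(x - y)
print(sp.simplify(sp.diff(u, y, 2) - sp.diff(u, x, 2)))   # 0
\end{verbatim}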
Using characteristics, it is therefore easier to combine hyperbolic and
elliptic regimes. In the case of the Tricomi equation, the boundary conditions
in the elliptic domain have to be smoothly extended into the hyperbolic
sector. This requires the introduction of characteristic variables in the
hyperbolic sector, imposing boundary conditions at either $\xi=$ constant or
$\eta=$ constant (but not both).
\section{Cosmology}
\label{s:Cosmo}
Signature change plays an important role in cosmological scenarios. In
practical terms, one of the most important consequences is the presence of
instabilities, as indicated formally by an imaginary speed of sound in mode
equations. The same kind of instability plays a role in the mathematical
discussion of equations of the form (\ref{Wave}): One of the conditions for a
well-posed initial/boundary-value problem of a partial differential equation
is the stable dependence of solutions on the chosen data, in addition to the
conditions that a solution exist and be unique for given data.
\subsection{Instability}
Stability is easily violated when one attempts to use an initial-value problem
for an elliptic equation. For instance, Fourier modes of solutions to the
standard wave equation would not oscillate as per $\exp(\pm i\omega t)$ but
change exponentially according to $\exp(\pm\omega t)$. The exponentially
growing mode implies instability in the sense that solutions, if they exist
and are perhaps unique, depend sensitively on the initial values. Choosing a
boundary-value problem in $t$ (as well as spatial directions), on the other
hand, ties down the solution at both ends of the $t$-range, so that the
growing branch is sufficiently restricted for solutions to be stable.
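This sensitivity is easy to exhibit numerically. The following Python sketch
(a toy example; the mode number, time range and perturbation size are
arbitrary choices for illustration) follows a single Fourier mode of an
elliptic equation treated, incorrectly, as an initial-value problem:
\begin{verbatim}
# For an elliptic equation, a Fourier mode obeys u'' = +k^2 u with
# general solution A e^{kt} + B e^{-kt}.  Perturbing the initial
# slope by eps admixes the growing branch with weight eps/(2k).
import numpy as np

k, eps = 4.0, 1e-8
t = np.linspace(0.0, 3.0, 301)

u_exact     = np.exp(-k * t)                       # u(0)=1, u'(0)=-k
u_perturbed = u_exact + (eps / (2 * k)) * (np.exp(k * t) - np.exp(-k * t))

print(np.abs(u_perturbed - u_exact)[-1])   # ~ eps e^{kt}/(2k): blow-up
\end{verbatim}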
Signature change implies instability of initial-value problems, but the
instability is stronger in two respects. First, signature change affects all
modes (of matter or gravity) equally; they all become unstable at the same
``time.'' It is
therefore a space-time effect, rather than an exotic matter
phenomenon. Secondly, it can be seen in space-time symmetries such as
hypersurface deformations or the Poincar\'e algebra. None of these effects
happen in known cases of instabilities in matter theories or higher-curvature
gravitational actions, such as \cite{CdD}. Signature change is therefore more
fundamental than other phenomena that might imply instability. An important
property is the fact that the theory does not provide any causal structure
whatsoever when the signature turns Euclidean.
\subsection{Scenarios}
When used in cosmological model-building, signature change implies a new and
interesting mixture of linear and cyclic models. One of its main consequences
can be described as a finite beginning of the universe. In the simplest
versions of existing models of loop quantum cosmology, effective equations do
not imply divergences at high density but instead extend solutions to a new
low-density regime \cite{QuantumBigBang}. The main example is a modified
Friedmann equation
\begin{equation} \label{ModFried}
\frac{\sin^2(\delta \bar{\cal H})}{\delta^2}= \frac{8\pi G}{3}\rho
\end{equation}
postulated for spatially flat isotropic models sourced by a free, massless
scalar $\phi$. The energy density $\rho$ is therefore of the form
$\rho=\frac{1}{2} p_{\phi}^2/a^6$ with the momentum $p_{\phi}$ of $\phi$ and
the scale factor $a$. In (\ref{ModFried}), $\delta$ is a parameter which
approaches $\delta\to0$ in the classical limit, and whose precise value
remains undetermined. (As a parameter motivated by quantum-gravity effects, it
is often assumed to be close to the Planck length.) Moreover, $\bar{\cal
H}=(2\delta)^{-1} \arcsin(2\delta\dot{a}/a)$ is a modified version of the
Hubble parameter. It is straightforward to see that the modified equation
(\ref{ModFried}) implies that the energy density, for fixed $\delta$, is
always bounded, $\rho \leq \rho_{\rm QG} := 3/(8\pi G \delta^2)$. The
parameter $\rho_{\rm QG}$ is the maximum kinetic energy density realized at
the bounce. There is then a good chance that cosmological singularities may be
resolved. Indeed, the quantum model with a free, massless scalar can be solved
completely \cite{BouncePert}, implying that the scale factor
\begin{equation} \label{at}
a(t)= a_0\left(1+ 24\pi G\rho_{\rm QG}\: t^2\right)^{1/6}
\end{equation}
never becomes zero. (See also \cite{BounceSols}.)
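For readers who wish to check this solution, the following Python sketch
(illustrative, in arbitrary units with $G=\rho_{\rm QG}=a_0=1$) verifies
numerically that (\ref{at}) satisfies (\ref{ModFried}) in the equivalent form
$H^2=(8\pi G/3)\rho(1-\rho/\rho_{\rm QG})$, which follows from
(\ref{ModFried}) together with the definition of $\bar{\cal H}$:
\begin{verbatim}
# Check that a(t) = a0 (1 + 24 pi G rho_QG t^2)^{1/6} solves
# H^2 = (8 pi G / 3) rho (1 - rho/rho_QG) with rho = rho_QG (a0/a)^6.
import numpy as np

G = rho_QG = a0 = 1.0
c = 24 * np.pi * G * rho_QG

t = np.linspace(-5.0, 5.0, 1001)
a = a0 * (1 + c * t**2)**(1.0 / 6.0)
H = c * t / (3 * (1 + c * t**2))          # analytic d(ln a)/dt
rho = rho_QG * (a0 / a)**6

residual = H**2 - (8 * np.pi * G / 3) * rho * (1 - rho / rho_QG)
print(np.max(np.abs(residual)))            # ~ machine precision
\end{verbatim}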
Equation (\ref{ModFried}), although it remains unclear how generically it is
realized for the background evolution \cite{BounceSqueezed} of general
homogeneous models, has suggested a picture in which our expanding universe
descends from a preceding collapse phase which produced large but not infinite
density at the big bang. However, when modes on such a bouncing background are
subject to equations of the form (\ref{Wave}), as required for a consistent
system of both background variables $(a,\phi)$ and inhomogeneous modes in
models of loop quantum gravity, the transition is not deterministic owing to
the lack of causal structure at high density. In the cosmological case, the
role of $K$ in (\ref{Wave}) is played by the modified Hubble parameter
$\bar{\cal H}$. With $\beta(\bar{\cal H})\propto \cos(2\delta\bar{\cal H})$,
one can easily see that signature change happens when the energy density is
half the maximum it can achieve (or half the bounce density of a homogeneous
model). Given the solution (\ref{at}), the mode equation (\ref{Wave}) is
elliptic for
\begin{equation} \label{t}
t^2 < t_{\rm max}^2:=\frac{1}{24\pi G \rho_{\rm QG}}\,.
\end{equation}
(For $\rho_{\rm QG}$ close to the Planck density, the $t$-range in which the
equation is elliptic is close to the Planck time, but it can be significantly
longer in models with a scalar potential or strong quantum fluctuations at
high density \cite{FluctEn}.) The high-density regime cannot be accessed
causally, and there is no deterministic transition from collapse to expansion.
For practical purposes, such a scenario therefore shares some features with
the traditional singular big-bang model, in which one must pose initial values
just after the singularity. Similarly, one must pose initial values ``after''
the Euclidean phase of a signature-change scenario. However, the relationship
between singular and signature-change models is subtle, as shown by the
detailed discussion of suitable data for mixed-type differential equations on
which we now embark.
\subsubsection{Gravitational waves}
\label{s:GW}
To be more specific, let us focus now on the case of gravitational waves with
holonomy corrections. In this case, the equation of motion for each component
of polarization of the gravitational waves is \cite{ScalarTensorHol}
\begin{equation}
\frac{\partial^2 \phi}{\partial
t^2}+\left(3H-\frac{\dot{\beta}}{\beta}\right)\frac{\partial \phi}{\partial
t}- \frac{\beta}{a^2}\Delta \phi =0\,,
\label{EOMphi}
\end{equation}
where $H :=\frac{\dot{a}}{a}$ is the Hubble parameter and the deformation
factor is
\begin{equation} \label{betarho}
\beta = \cos(2\delta \bar{\cal H}) = 1-2\frac{\rho}{\rho_{\rm QG}}\,.
\end{equation}
Depending on the value of the parameter $\beta$, equation (\ref{EOMphi})
can be either hyperbolic ($\beta > 0$), elliptic ($\beta < 0$) or parabolic
($\beta = 0$). The transition between elliptic and hyperbolic type (associated
with signature change) takes place at $\rho = \frac{1}{2} \rho_{\rm
QG}$. At this moment, the square of the Hubble factor $H$ reaches its maximal
value $H^2_{\rm max} = \frac{2}{3}\pi G \rho_{\rm QG}$. For the model with a
free scalar field introduced in the previous section, this takes place at $t =
\pm t_{\rm max} = \pm (24\pi G \rho_{\rm QG})^{-1/2}$.
Equation (\ref{EOMphi}) is precisely of the form (\ref{Wave}) with the source
term
\begin{equation}
S[\phi]=-\left(3H-\frac{\dot{\beta}}{\beta}\right)\frac{\partial
\phi}{\partial t}\, .
\end{equation}
Because generally $\dot{\beta}\neq 0$ at $\beta=0$, signature change is
associated with a divergence of the source. The divergence, however, does not
lead to pathological behavior at the level of solutions to equation
(\ref{EOMphi}) because, as we will see later, it is due to a regular pole,
which does not disturb regularity of the solution. In the vicinity of
$\beta=0$, equation (\ref{EOMphi}) can be approximated by
\begin{equation}
\frac{\partial^2 \phi}{\partial
t^2}+\left(3H-\frac{\dot{\beta}}{\beta}\right)\frac{\partial \phi}{\partial
t}\approx0\, ,
\end{equation}
which after double integration leads to the solution
\begin{equation}
\phi = c_1+c_2 \int^t \frac{\beta(t')}{a(t')^3}dt',
\label{ApproxSol1}
\end{equation}
where $c_1$ and $c_2$ are some functions of the spatial variables. Because
$\beta$ occurs in the numerator of the integrand, the approximate solution is
indeed regular.
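That (\ref{ApproxSol1}) solves the friction-dominated equation for arbitrary
$\beta(t)$ and $a(t)$ can be confirmed with a few lines of computer algebra
(an illustrative SymPy sketch):
\begin{verbatim}
# phi' = c2 beta/a^3 satisfies phi'' + (3H - beta'/beta) phi' = 0
# for generic beta(t) and a(t), with H = a'/a.
import sympy as sp

t, c2 = sp.symbols('t c2')
a, beta = sp.Function('a'), sp.Function('beta')

phi_prime = c2 * beta(t) / a(t)**3
H = sp.diff(a(t), t) / a(t)
friction = 3 * H - sp.diff(beta(t), t) / beta(t)

print(sp.simplify(sp.diff(phi_prime, t) + friction * phi_prime))   # 0
\end{verbatim}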
Let us now analyze the behavior of equation (\ref{EOMphi}) in the vicinity of
the instant of signature change for the model with a free scalar
field. Without loss of generality, we perform an expansion around $t=-t_{\rm
max}$, for which
\begin{equation}
\left(3H-\frac{\dot{\beta}}{\beta}\right) = \frac{t (t^2-5t^2_{\rm
max})}{t^4-t^4_{\rm max}} =
-\frac{1}{t+t_{\rm max}}-\frac{1}{t_{\rm max}}+\frac{t+t_{\rm max}}{4t^2_{\rm
max}}+\mathcal{O}\left((t+t_{\rm max})^2\right)\, .
\end{equation}
(In the first step we have used (\ref{at}), (\ref{t}) and (\ref{betarho}) in
order to write $a(t)=a_0(1+t^2/t_{\rm max}^2)^{1/6}$ and
$\beta(t)=1-2(1+t^2/t_{\rm max}^2)^{-1}$.) Because $(t+t_{\rm
max})\left(3H-\dot{\beta}/\beta\right)=-1+ \mathcal{O}\left(t+t_{\rm
max}\right)$, the pole at $t=-t_{\rm max}$ is of the regular
type. Furthermore
\begin{equation}
\frac{\beta}{a^2}= \frac{(t/t_{\rm max})^2-1}{(1+(t/t_{\rm max})^2)^{4/3} }
= - \frac{t+t_{\rm max}}{2^{1/3}t_{\rm max}} +\mathcal{O}\left((t+t_{\rm
max})^2\right)\, ,
\end{equation}
where we fixed $a_{0}=1$.
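Both expansions are straightforward to reproduce with computer algebra; the
following SymPy sketch (illustrative only, expanding in
$\epsilon=t+t_{\rm max}$) should recover the series above:
\begin{verbatim}
# Laurent expansions of the friction term and of beta/a^2 around
# t = -t_max, written in terms of epsilon = t + t_max.
import sympy as sp

eps = sp.Symbol('epsilon')
tm = sp.Symbol('t_max', positive=True)
t = -tm + eps

friction = t * (t**2 - 5 * tm**2) / (t**4 - tm**4)
print(sp.series(friction, eps, 0, 2))
# -1/epsilon - 1/t_max + epsilon/(4 t_max^2) + O(epsilon^2)

beta_over_a2 = ((t / tm)**2 - 1) / (1 + (t / tm)**2)**sp.Rational(4, 3)
print(sp.series(beta_over_a2, eps, 0, 2))
# -epsilon/(2^{1/3} t_max) + O(epsilon^2)
\end{verbatim}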
Applying the above expansions to the leading order and reducing to
the (1+1)-dimensional case\footnote{Of course this reduction
is formal only because gravitational waves do not occur in (1+1)-dimensions.
Alternatively, one can consider the reduction as being a result of
introducing translational symmetry in the $y$ and $z$ directions, leading
to $\phi(t,x,y,z) = \phi(t,x)$.}, we obtain
\begin{equation}
\frac{\partial^2 \phi}{\partial t^2}-\frac{1}{t+t_{\rm max}}\frac{\partial
\phi}{\partial t}
+\frac{t+t_{\rm max}}{2^{1/3}t_{\rm max}}\frac{\partial^2 \phi}{\partial x^2}=0\,.
\label{ReducedEOM1}
\end{equation}
Redefining the time variable to
\begin{equation}
y:= \frac{1}{\left(2^{1/3} t_{\rm max}\right)^{1/3}} (t+t_{\rm max})\,,
\end{equation}
equation (\ref{ReducedEOM1}) can be rewritten as
\begin{equation}
\frac{\partial^2 \phi}{\partial y^2}+y\frac{\partial^2\phi}{\partial
x^2}=\frac{1}{y} \frac{\partial \phi}{\partial y}\, .
\label{ALaTricomi}
\end{equation}
The left-hand side of this equation is identical to Tricomi's expression in
(\ref{TricomiEQ}) while the right-hand side can be treated as a source
term. Applying the characteristic Tricomi problem to obtain solutions to
the equation (\ref{ALaTricomi}) is therefore justified.
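The change of variables itself is readily checked: multiplying
(\ref{ReducedEOM1}) by $s^2$, where $s:=(2^{1/3}t_{\rm max})^{1/3}$ is our
shorthand for the rescaling factor, the friction and spatial coefficients must
reduce to $-1/y$ and $+y$, respectively. A minimal SymPy sketch (illustrative
only):
\begin{verbatim}
# Coefficient identities behind the substitution y = (t + t_max)/s:
# s^2/(t + t_max) = 1/y  and  s^2 (t + t_max)/(2^{1/3} t_max) = y.
import sympy as sp

t = sp.Symbol('t')
tm = sp.Symbol('t_max', positive=True)
s = (2**sp.Rational(1, 3) * tm)**sp.Rational(1, 3)
y = (t + tm) / s

print(sp.simplify(s**2 / (t + tm) - 1 / y))                            # 0
print(sp.simplify(s**2 * (t + tm) / (2**sp.Rational(1, 3) * tm) - y))  # 0
\end{verbatim}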
It is worth noting at this point that equation (\ref{ALaTricomi}) in the
vicinity of $y=0$ reduces to
\begin{equation}
\frac{\partial^2 \phi}{\partial y^2}-\frac{1}{y} \frac{\partial
\phi}{\partial y}=0\, ,
\label{ALaTricomiReduced}
\end{equation}
with solution $\phi = c_1+c_2y^2$ (in agreement with (\ref{ApproxSol1})). The
solution is perfectly regular across the signature change, in contrast to
classical signature change. In particular, signature change resulting from the
line element ${\rm d}s^2 = -t{\rm d}t^2 +{\rm d}x_a{\rm d}x^a$ has been widely
studied in the literature \cite{ClassSigChange,ClassSigChange2,
SigChangeJunction}. In this case, the analog of equation
(\ref{ALaTricomiReduced}) is $\frac{\partial^2 \phi}{\partial
y^2}-\frac{1}{2y} \frac{\partial \phi}{\partial y}=0$, with the metric
signature change at $y=0$. While both forms of the equation look very
similar, the extra factor of ``2'' plays a significant role, leading to
non-differentiability of solutions at the signature change: the local solution
becomes $\phi=c_1+c_2y^{3/2}$, whose second derivative diverges at $y=0$. For
a discussion of controversies around this issue, as well as some proposals for
dealing with the problem, we refer to \cite{SigChangeJunction}.
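The contrast can be made explicit by solving both equations near $y=0$; the
following SymPy sketch (illustrative only) should return the two local
solutions quoted above:
\begin{verbatim}
# Effective equation: phi'' - phi'/y = 0     -> C1 + C2 y^2   (smooth)
# Classical analog:   phi'' - phi'/(2y) = 0  -> C1 + C2 y^(3/2)
#                                               (not twice differentiable)
import sympy as sp

y = sp.Symbol('y', positive=True)
f = sp.Function('f')

print(sp.dsolve(f(y).diff(y, 2) - f(y).diff(y) / y, f(y)))
print(sp.dsolve(f(y).diff(y, 2) - f(y).diff(y) / (2 * y), f(y)))
\end{verbatim}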
Coming back to expression (\ref{ALaTricomi}), the corresponding equation for
the Fourier component $\phi_k = (2\pi)^{-1/2}\int {\rm d}x\, e^{-ikx}\phi(y,x)$ is
\begin{equation}
\frac{\partial^2 \phi_k}{\partial z^2}-\frac{1}{z}\frac{\partial
\phi_k}{\partial z}-z\phi_k=0\, ,
\label{EOMphik}
\end{equation}
where $z:=y|k|^{2/3}$ has been introduced to absorb the wave number $k$. In
order to facilitate solving this equation, we observe that by differentiating
the Airy equation
\begin{equation}
\frac{{\rm d}^2u}{{\rm d}z^2}-zu=0\,,
\label{Airy}
\end{equation}
and subsequently substituting for $u$ using again (\ref{Airy}), we find an
equation
\begin{equation}
\frac{{\rm d}^3u}{{\rm d}z^3}-\frac{1}{z}\frac{{\rm d}^2u}{{\rm d}z^2}
-z\frac{{\rm d}u}{{\rm d}z}=0
\end{equation}
identical to (\ref{EOMphik}) if $\phi_k={\rm d}u/{\rm d}z$ for a suitable
($k$-dependent) $u$ solving (\ref{Airy}). Solutions to (\ref{EOMphik}) are
therefore
\begin{equation}
\phi_k = A_k {\rm Ai}'(z)+B_k {\rm Bi}'(z)\,,
\end{equation}
where ${\rm Ai}'(z)$ and $ {\rm Bi}'(z)$ are the Airy prime functions
(derivatives of the Airy functions) and $A_k$ and $B_k$ are $k$-dependent
constants of integration. Because of the reality condition for the field
$\phi$ the following relations have to be fulfilled: $A^*_k=A_{-k}$ and
$B^*_k=B_{-k}$.
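These solutions can be checked numerically against (\ref{EOMphik}) by using
the Airy relation ${\rm Ai}''(z)=z\,{\rm Ai}(z)$ (and likewise for ${\rm
Bi}$); the Python sketch below uses the standard routine
\texttt{scipy.special.airy}, which returns ${\rm Ai}$, ${\rm Ai}'$, ${\rm Bi}$
and ${\rm Bi}'$:
\begin{verbatim}
# With phi = F' and F'' = z F (Airy equation), one has phi' = z F and
# phi'' = F + z F', so phi'' - phi'/z - z phi vanishes identically.
import numpy as np
from scipy.special import airy

z = np.linspace(0.1, 5.0, 200)
Ai, Aip, Bi, Bip = airy(z)

for F, Fp in ((Ai, Aip), (Bi, Bip)):
    phi, dphi, d2phi = Fp, z * F, F + z * Fp
    print(np.max(np.abs(d2phi - dphi / z - z * phi)))   # ~ 0
\end{verbatim}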
As an example, we will now apply the Tricomi boundary conditions to the
resulting 2-dimensional solution
\begin{equation}
\phi(y,x) = \int_{-\infty}^{+\infty} \frac{{\rm d}k}{\sqrt{2\pi}}
e^{ikx}\left[A_k {\rm Ai}'\left(y|k|^{2/3} \right)+B_k {\rm
Bi}'\left(y|k|^{2/3} \right)\right]\, .
\label{AlaTricomiSolution}
\end{equation}
First, let us choose the contour $A := \left\{ (x,y) \in \mathbb{R}^2, y=0 ,
-\pi<x<\pi \right\}$ and fix $\phi|_{A}= \cos{x}$. This condition reduces
(\ref{AlaTricomiSolution}) to
\begin{equation}
\phi(y,x) = \cos(x) \left[ \frac{1-c_0 {\rm Bi}'(0)}{ {\rm Ai}'(0)} {\rm
Ai}'(y)+c_0 {\rm Bi}'(y) \right]\,,
\label{AlaTricomiSolutionReduced1}
\end{equation}
with a parameter $c_0 \in \mathbb{R}$. Secondly, the endpoints of $A$
determine the bounding characteristics:
\begin{eqnarray}
C&:=& \left\{ (x,y) \in \mathbb{R}^2, x=\pi-\frac{2}{3}(-y)^{3/2}, -\left(
\frac{3\pi}{2}\right)^{2/3}<y<0 \right\}\, , \\
B&:=& \left\{ (x,y) \in \mathbb{R}^2, x=-\pi+\frac{2}{3}(-y)^{3/2}, -\left(
\frac{3\pi}{2}\right)^{2/3}<y<0 \right\}\, .
\end{eqnarray}
Finally, by choosing
\begin{equation}
\phi|_{C} = \cos\left(\pi-\frac{2}{3}(-y)^{3/2}\right) {\rm Ai}'(y)/{\rm Ai}'(0)
\end{equation}
the constant $c_0$ is fixed to equal zero, which leads to
\begin{equation}
\phi(y,x) = - 3^{1/3} \Gamma(1/3) \cos(x){\rm Ai}'(y)\,.
\label{AlaTricomiSolutionFinal}
\end{equation}
The solution together with the boundary $\partial \Omega = A\cup B\cup C$ is
shown in Fig.~\ref{Solution}. For all other values of $c_0$, the contribution
from ${\rm Bi}'(y)$ in (\ref{AlaTricomiSolutionReduced1}) does not vanish, and
therefore the solution is characterized by an exponential growth in the $y>0$
region. Nevertheless, in the domain enclosed by $\partial \Omega = A\cup B\cup
C$ the solution remains regular. It is apparent from Fig.~\ref{Solution}
that the monotonic solution in the elliptic region ($y>0$) transforms smoothly
into the oscillatory solution in the hyperbolic domain ($y<0$), as expected
intuitively.
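As a simple numerical sanity check (illustrative only), the prefactor in
(\ref{AlaTricomiSolutionFinal}) can be confirmed from ${\rm
Ai}'(0)=-3^{-1/3}/\Gamma(1/3)$, which guarantees $\phi(0,x)=\cos x$ as imposed
on the contour $A$:
\begin{verbatim}
# -3^{1/3} Gamma(1/3) Ai'(0) must equal 1.
import numpy as np
from scipy.special import airy, gamma

Aip0 = airy(0.0)[1]                               # Ai'(0)
print(-3**(1.0 / 3.0) * gamma(1.0 / 3.0) * Aip0)  # 1.0
\end{verbatim}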
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{Solution1}
\caption{Density plot of the solution (\ref{AlaTricomiSolutionFinal}) together
with the boundary $\partial \Omega = A\cup B\cup C$. This special solution
decreases exponentially for $y>0$, where the oscillations of the hyperbolic
region $y<0$ fade away.}
\label{Solution}
\end{center}
\end{figure}
\subsubsection{Characteristic problem}
\label{s:Char}
We are now in a position to adapt the Tricomi problem to
effective space-time models of loop quantum gravity. We
begin with a qualitative discussion which also provides
heuristic reasons for why the mixed-type initial-boundary
value problem must be used.
In a non-singular universe model in which collapse is joined by expansion, one
can start with initial values in the collapsing Lorentzian phase in the past
and evolve all the way to the boundary with the Euclidean phase (at $t=-t_{\rm
max}$ in the solvable model). For a second-order hyperbolic differential
equation in this regime, we need to fix initial values $\phi$ and $\phi'$,
where we denote by $\phi'$ the derivative of $\phi$ normal to constant-$t$
hypersurfaces. In this regime, $\phi'$ may therefore be considered as the time
derivative of $\phi$. When we reach the Euclidean phase, time no longer exists
in a causal sense and our coordinate $t$ is one of the four spatial ones. We
will continue denoting the $t$-derivative as $\phi'$, now strictly in the
sense of a normal derivative replacing the time derivative.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{InitialBoundary1}
\caption{Illustration of indeterministic behavior implied by the mixed-type
differential equations of signature-change cosmology. The boxed data for a
field $\phi$ --- the field and its first normal derivative $\phi'$ in the
past Lorentzian phase as well as the field $\phi$ (but not its normal
derivative) at the beginning of the expanding phase --- must be
prescribed. The other data then follow from a solution to the mixed-type
differential equation: $\phi$ and $\phi'$ at the left boundary of the
Euclidean region by an initial-value problem in the past Lorentzian phase
(left arrow); $\phi'$ at the right boundary of the Euclidean phase as the
limiting normal derivative of the solution in the Euclidean phase based on
the evolved $\phi$ on the left boundary and the prescribed boundary data
$\phi$ on the right boundary (center circle); and finally the future field
$\phi$ and $\phi'$ in the expanding phase evolved from initial values at the
right boundary of the Euclidean phase (right arrow). This
initial/boundary-value formulation is made more precise in the Tricomi
problem, shown in Fig.~\ref{Fig:Tricomi}. \label{Fig:IB}}
\end{center}
\end{figure}
Once we reach the Euclidean phase and enter the range between $t=-t_{\rm max}$
and $t=t_{\rm max}$ in the model, we must switch to a boundary-value problem
for well-posedness. We can use our field $\phi$ evolved up to the part of the
Euclidean boundary bordering the collapse phase, at $t=-t_{\rm max}$. But we
must complete it to a set of boundary values on a hypersurface enclosing the
Euclidean region in which we are looking for a solution. For a solution in all
of space(-time), this complete boundary includes asymptotic infinity in
directions other than $t$ as well as the boundary at $t=t_{\rm max}$ where the
Euclidean region borders on the expanding phase. We may take for granted that
asymptotic fall-off conditions for our fields are imposed at spatial infinity,
but we must choose a new and arbitrary function on the border with the
expanding phase. When this function is fixed, together with the evolved past
initial data, we obtain a solution in the Euclidean phase. We may then compute
normal derivatives on all constant-$t$ hypersurfaces in this region and take
the limit toward the border with the expanding phase. We obtain suitable data
for an initial-value problem in the latter. In summary, as illustrated in
Fig.~\ref{Fig:IB}, in addition to past initial data we must specify one
additional function for every mode at the beginning of the expansion
phase. Past initial data in the collapse phase do not determine a unique
solution in the expansion phase. There is no deterministic evolution across
the Euclidean high-density regime.
The intuitive description just given provides the correct picture of free and
determined data, but it is not the most suitable one for the problem. One
question one may pose is about the smoothness of solutions: A solution to
(\ref{Wave}) should be required to be smooth (or at least twice
differentiable) at the borders between Lorentzian and Euclidean regions. In
the picture of Fig.~\ref{Fig:IB}, it is not clear whether this requirement is
satisfied. We have to match two solutions, one in the past Lorentzian phase
and one in the Euclidean phase, so that the normal derivative $\phi'$
obtained from these solutions changes differentiably across the border. Since
the field $\phi$ on the border with the expanding phase is free and affects
the solution in the Euclidean region but not in the collapse phase,
differentiability seems in danger unless one severely restricts the boundary
data left free around the Euclidean region: If we keep past initial data fixed
but vary the boundary field $\phi$ at the border $t=t_{\rm max}$ with the
expansion phase, the limit of $\phi'$ at $t=-t_{\rm max}$ will change for
solutions of the elliptic equation. Only a narrow range of boundary fields
$\phi$ would seem to provide a $\phi'$ matching the evolved initial data
in the collapse phase. If this is so, one would essentially be led back to an
initial-value problem across the whole domain, because $\phi$ and $\phi'$
everywhere would be uniquely determined by initial values, a conclusion which
cannot be correct if there is an elliptic regime. By these considerations, for
a smooth solution we would have to give up some of the initial values in the
collapse phase as free data, but using just $\phi$ and letting $\phi'$ be
determined by smoothness cannot be correct either, because we would end up
with a boundary-value problem which is not well-posed in the Lorentzian phase.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{Tricomi3}
\caption{Well-posed characteristic problem for a mixed-type partial
differential equation: The function is specified on one characteristic arc
$C$ of the equation in the Lorentzian phase and on an arc $A$ in the
Euclidean phase connecting the endpoints of $C$ and of a second
characteristic arc starting at the same point as $C$ (dashed). The data on
the second characteristic arc are determined for solutions of (\ref{Wave})
and cannot be chosen freely. \label{Fig:Tricomi}}
\end{center}
\end{figure}
In order to address differentiability, a Tricomi-style characteristic problem
is more appropriate. In the Lorentzian phase, one specifies data not on
initial-value surfaces but on characteristics of the differential equation as
long as it is hyperbolic. Far away from the border with the Euclidean region,
these characteristics are standard light rays, but they approach the border
normal to constant-$t$ surfaces and end there. One example is shown as curve
$C$ in Fig.~\ref{Fig:Tricomi}. Considering just one spatial dimension, a
second characteristic starting from the same point as $C$ completes the first
one to a characteristic triangle with the two characteristics as well as the
enclosed border with the Euclidean region as its sides.
The problem is well-posed \cite{Tricomi,TricomiGen} if one specifies the field
$\phi$ on one of the two characteristics and on an arc $A$ which starts and
ends at the border-endpoints of the two characteristics but stays in the
Euclidean region in between. If these data are smooth on the curve obtained as
the union of $C$ and $A$ one obtains a unique, stable and smooth solution in
the interior of the union of the characteristic triangle and the region
enclosed by the curve $A$. Smoothness is guaranteed in this way, with data on
the curve $A$ being completely free except that they must extend the data on
$C$ smoothly. The indeterministic behavior (requiring future data) is
therefore confirmed. If one lets $A$ approach the other border of the
Euclidean region, one can proceed as in the intuitive description based on
Fig.~\ref{Fig:IB}. An initial-value problem starting at the beginning of the
expanding phase is then well-posed toward the future.
In four dimensions, the characteristic triangle is extended to the interior of
the future light cone starting somewhere in the collapse phase, and the region
enclosed by a curve $A$ is extended to a dome enclosed by a 3-dimensional
hypersurface in the Euclidean region. Although we are not aware of a complete
mathematical proof, one may expect the characteristic problem to be well-posed
with data given by the mode on half the light cone and on the hypersurface in
the Euclidean region.
The smoothness of solutions shows that there is still a well-defined manifold
which combines the two Lorentzian phases and the Euclidean phase, but this
manifold is no longer Riemannian. Standard obstacles to signature change in
general relativity therefore do not apply. The use of a finite region in the
characteristic formulation does not cause a problem in cosmology because we
always have observational access to a finite part of our universe. The
characteristic triangle can always be chosen large enough to include the whole
observational range.
\subsubsection{Boundary conditions at the hypersurface of signature change}
Besides the boundary-value, initial-value and characteristic problems
discussed in the previous subsections, we would like to present one more
possibility of posing the initial-value problem for evolution in the
Lorentzian domain. Let us start by finding the solution in the Euclidean
domain.
In this domain (which we call $\Omega_E$) the equations of motion are of the
elliptic type. Therefore, in order to find solutions in a subset $\Omega\subset
\Omega_E$, suitable boundary conditions have to be imposed. Specifying $\phi
|_{\partial \Omega}$ (where $\partial \Omega$ is the boundary of $\Omega$) is
sufficient to determine a solution $\phi_E$ to the elliptic equation in the
domain $\Omega$ uniquely. An interesting situation arises when the boundary
$\partial \Omega$ is expanded so that it encloses the whole Euclidean domain:
$\partial \Omega \rightarrow \partial \Omega_E$. Such a boundary can be
decomposed as $\partial \Omega_E = \partial \Omega_{+}\cup \partial
\Omega_{-} \cup \partial \Omega_{\infty}$. The $\partial
\Omega_{\pm} $ are boundaries between the Euclidean and Lorentzian domains at
$\pm t_{\rm max}$, respectively, while $\partial \Omega_{\infty}$ encloses the
Euclidean domain at spatial infinity (assuming that the space part is
unbounded). The solution $\phi_E$ at $\Omega_E$ obtained by imposing boundary
(Dirichlet or Neumann) conditions at $\partial \Omega_E$ can now be used
to determine initial (or final) conditions for the Lorentzian sectors
$\Omega_{\pm}$, where $``+"$ corresponds to $t>+t_{\rm max} $ and $``-"$ to
$t<-t_{\rm max}$. As before, $\pm t_{\rm max}$ are the times at which
signature change takes place.
Let us decide to impose the Dirichlet boundary condition $\phi_E |_{\partial
\Omega_E}$, leading to a solution $\phi_E$. Then, the Cauchy initial
conditions at $\pm t_{\rm max}$ for the solutions $\phi_{L\pm}$ in the
Lorentzian domains $\Omega_{L\pm}$ respectively are:
\begin{eqnarray}
\lim_{t\rightarrow \pm {t_{\rm max}}^{\pm}} \phi_{L\pm} &=& \phi_E|_{\partial
\Omega_{\pm}} \equiv \phi_{\pm}, \\
\lim_{t\rightarrow \pm {t_{\rm max}}^{\pm}}\frac{\partial \phi_{L\pm} }{\partial t} &=&
\lim_{t\rightarrow \pm {t_{\rm max}}^{\mp} } \frac{\partial \phi_E }{\partial
t} \, .
\end{eqnarray}
In this way, by solving the equation of motion in the whole Euclidean domain,
the initial (or final) data for evolution in the Lorentzian sectors can be
recovered. It is worth stressing that, as already discussed in the previous
subsection, the continuity of solutions up to at least second time derivatives
is required. (The equations are of second order.) The additional conditions
\begin{equation}
\lim_{t\rightarrow \pm {t_{\rm max}}^{\pm}}\frac{\partial^2 \phi_{L\pm} }{\partial t^2} =
\lim_{t\rightarrow \pm {t_{\rm max}}^{\mp} } \frac{\partial^2 \phi_E }{\partial t^2}
\label{SecondOrderCon}
\end{equation}
therefore have to be satisfied. In general, one can expect that only a subset
of all possible boundary values $ \phi_E|_{\partial \Omega_E}$ obeys this
condition. Such a possibility might be attractive because it may put
restrictions on the form of the allowed boundary conditions, based on the
requirement of mathematical consistency. For the typical equation of motion
under consideration, Eq.~(\ref{Wave}), the requirement (\ref{SecondOrderCon})
leads to the following condition of continuity of the source term
\begin{equation}
\lim_{t\rightarrow \pm {t_{\rm max}}^{\pm}}S[\phi_{L\pm}]=
\lim_{t\rightarrow \pm {t_{\rm max}}^{\mp} }S[\phi_E]\, .
\end{equation}
In order to illustrate some features of the method discussed above, let us
study the equation of motion
\begin{equation}
\frac{\partial^2 \phi}{\partial t^2}-\beta(t) \frac{\partial^2\phi}{\partial x^2}=0\, ,
\label{StepFunctionEq}
\end{equation}
with the step-like $\beta$-function
\begin{equation}
\beta(t) = \left\{ \begin{array}{ccc}
1 & {\rm for} & t > t_{\rm max}\, , \\
-1 & {\rm for} & t_{\rm max} \geq t \geq -t_{\rm max}\, , \\
1 & {\rm for} & t < -t_{\rm max}\, .
\end{array} \right.
\end{equation}
In this (1+1)-dimensional example, $\beta$ does not vary continuously across
the signature change; condition (\ref{SecondOrderCon}) therefore has no
chance of being fulfilled. Nevertheless, let us choose the following
boundary conditions:
\begin{equation}
\phi_{\pm} = c_{\pm} \cos(x)\, ,
\end{equation}
where $c_{\pm} \in \mathbb{R}$ are constants. (The boundary condition at
$\partial \Omega_{\infty}$ will not play any role.) With boundary values
specified, a solution to Eq.~(\ref{StepFunctionEq}) in the Euclidean domain
$\Omega_E= \left\{(t,x)\in \mathbb{R}^2, t_{\rm max} \geq t \geq -t_{\rm
max}\right\}$ can be found:
\begin{equation}
\phi_E = \cos(x) \frac{c_{+}{\rm sh}(t+t_{\rm max})-c_{-}{\rm sh}(t-t_{\rm
max})}{{\rm sh}(2t_{\rm max})}\, .
\end{equation}
Using the solution we can find that
\begin{equation}
\lim_{t\rightarrow \pm {t_{\rm max}}^{\mp} } \frac{\partial \phi_E }{\partial t} =
\cos(x) \frac{\mp c_{\mp}\pm c_{\pm}{\rm ch}(2t_{\rm max})}{{\rm sh}(2t_{\rm max})}\, ,
\end{equation}
which will be used to impose Cauchy initial conditions in the Lorentzian
domains. Taking this into account, solutions to Eq.~(\ref{StepFunctionEq}) in
the domains $\Omega_{\pm}= \left\{(t,x)\in \mathbb{R}^2, \pm t > t_{\rm
max}\right\}$ can be determined:
\begin{eqnarray}
\phi_{L+} &=& \cos(x) \frac{ c_{+}\left({\rm sh}(2t_{\rm max})\cos(t-t_{\rm max})
+{\rm ch}(2t_{\rm max})\sin(t-t_{\rm max})\right)-c_{-}\sin(t-t_{\rm max}) }{{\rm sh}(2t_{\rm max})}\, , \nonumber \\
&&\\
\phi_{L-} &=& \cos(x) \frac{ c_{-}\left({\rm sh}(2t_{\rm max})\cos(t+t_{\rm max})
-{\rm ch}(2t_{\rm max})\sin(t+t_{\rm max})\right)+c_{+}\sin(t+t_{\rm max}) }{{\rm sh}(2t_{\rm max})}\, . \nonumber \\
&&
\end{eqnarray}
The solutions $\phi_E$ and $\phi_{L\pm} $ are presented in
Fig.~\ref{EuclidSol1} in the form of a density plot for sample values of
$c_{\pm}$ and $t_{\rm max}$.
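The matching conditions can be verified symbolically; the following SymPy
sketch (illustrative; the symbol names are ours) checks continuity of the
value and the first $t$-derivative at $t=+t_{\rm max}$ and reproduces the jump
in the second derivative:
\begin{verbatim}
# phi_E and phi_L+ agree in value and first t-derivative at t = t_max,
# while their second t-derivatives differ by 2 c_+ cos(x).
import sympy as sp

t, x, tm, cp, cm = sp.symbols('t x t_max c_plus c_minus', real=True)

phiE = sp.cos(x) * (cp * sp.sinh(t + tm)
                    - cm * sp.sinh(t - tm)) / sp.sinh(2 * tm)
phiLp = sp.cos(x) * (cp * (sp.sinh(2 * tm) * sp.cos(t - tm)
                           + sp.cosh(2 * tm) * sp.sin(t - tm))
                     - cm * sp.sin(t - tm)) / sp.sinh(2 * tm)

for n in (0, 1, 2):
    jump = sp.simplify((phiLp.diff(t, n) - phiE.diff(t, n)).subs(t, tm))
    print(n, jump)   # 0, 0, -2*c_plus*cos(x)
\end{verbatim}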
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{SolEuclidBB1}
\caption{ Density plot of the solutions $\phi_E$ and $\phi_{L\pm}$ to the equation
(\ref{StepFunctionEq}). The observed asymmetry is due to the different boundary
conditions at $t_{\rm max}$ and $-t_{\rm max}$: $\phi_{\pm}=c_{\pm} \cos (x)$.
The values $c_{+}=1$, $c_{-}=2$ and $t_{\rm max}=4$ have been used.}
\label{EuclidSol1}
\end{center}
\end{figure}
The solutions obtained in this way are regular in the whole domain
$\Omega=\Omega_E\cup\Omega_{+} \cup \Omega_{-}$. However, in agreement with
the earlier expectations, the second derivatives are discontinuous across the
signature change:
\begin{equation}
\lim_{t\rightarrow \pm {t_{\rm max}}^{\mp} } \frac{\partial^2 \phi_E }{\partial t^2}
-\lim_{t\rightarrow \pm {t_{\rm max}}^{\pm}}\frac{\partial^2 \phi_{L\pm} }{\partial t^2} = 2 c_{\pm} \cos(x)\, .
\end{equation}
\subsubsection{Wave functions}
Mixed-type differential equations for modes are a consequence of properties of
effective equations in cosmological models of loop quantum gravity. These
properties follow from a general analysis of consistent and anomaly-free
realizations of field equations, so that physical predictions made in these
models are independent of coordinate or gauge choices. The same modifications
to space-time structure can be seen from commutator algebras at the quantum
level, but it is then more difficult to derive dynamical effects. It is
nevertheless interesting to discuss how signature change or its accompanying
instabilities could be seen in the evolution of wave functions.
As long as one has unitary evolution, as can easily be guaranteed in
deparameterized models which provide the majority of explicit wave-function
evolutions in quantum cosmology, the wave function cannot be subject to
instabilities. (However, we note that at a fundamental level it may be
difficult to have unitary evolution under all circumstances, owing to the
problem of time.) Its norm is preserved even throughout high
density. Accordingly, the Schr\"odinger or Klein--Gordon type equation which
the wave function is subject to does not change its form to become elliptic;
the relevant coefficients in the wave equation do not change sign. After all,
what changes is the signature of space(-time), not the signature of
(mini)superspace on which the wave function is defined. (See also
\cite{SigChangeMini}. In fact, depending on which representation one chooses
for the wave function, the equation it is subject to may be elliptic or
hyperbolic even for standard space-time.)
Signature change implies instabilities in an initial-value problem for the
modes, solving effective equations. The fields in effective equations are
expectation values of mode operators in a state obeying the equation for the
wave function. At the level of wave-function evolution, signature change would
therefore be recognized in an exponential change of expectation values of mode
operators. (If wave packets are used, their peak position would change
exponentially.) For these expectation values (or peak positions), the same
sensitivity to initial values as observed in solutions of effective equations
would occur, even if they are computed from an evolved wave function. These
observable quantities therefore evolve in an unstable way, causing the
same problem as seen in an initial-value formulation for effective
equations. Instability problems are merely more hidden in schemes that first
evolve wave functions and then compute expectation values, but they are just
as pressing as in effective equations.
In this respect, the situation is reminiscent of quantum chaos, which,
compared with the classical phenomenon, is more difficult to define and
discuss, but not absent, for wave functions subject to linear differential
equations. In both cases, the sensitivity of evolved expectation values to
initial choices is relevant. It has been discussed in detail in the context of
quantum chaos. Following \cite{StabilityQuantumChaos,EnvQuantumChaos}, we can
use the same arguments for states in the presence of signature change: Unitary
quantum evolution ensures that the overlap
$|\langle\psi_1(t),\psi_2(t)\rangle|^2$ of two different initial states
$\psi_1(t_0)$ and $\psi_2(t_0)$ is conserved in time, and therefore does not
indicate any sensitivity to choices of initial values. However, quantum
evolution of classically chaotic systems is very sensitive to perturbations of
the Hamiltonian operator, in the sense that the overlap
$|\langle\psi_1(t),\psi_2(t)\rangle|^2$ changes rapidly if $\psi_1(t)$ and
$\psi_2(t)$ are now defined as states evolved from the same initial wave
function $\psi_1(t_0)=\psi_2(t_0)$ but with different Hamiltonians
$\hat{H}_1=\hat{H}$ and $\hat{H}_2=\hat{H}+\epsilon\hat{V}$ with some small
$\epsilon$ and a perturbation potential $\hat{V}$. Similarly, the same
definition of the overlap in Euclidean signature leads to rapid (exponential)
change because the perturbed evolution by $\hat{H}_2$ contains a factor of
$\exp(\epsilon\hbar^{-1} \hat{V} t)$. In discussions of quantum chaos,
$\epsilon \hat{V}$ is usually thought of as coming from unknown interactions
with an environment hard to control. The same source of $\epsilon \hat{V}$
exists in quantum cosmology because the precise degrees of freedom and
interactions at high density are not well known. In addition to this, loop
quantum gravity is subject to a large set of quantization ambiguities, so that
the dynamics is not precisely determined and a second source of
$\epsilon\hat{V}$ results.
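The mechanism can be illustrated with a standard toy model (a rough numerical
sketch, not a quantum-cosmology computation; matrix size, perturbation
strength and the initial state are arbitrary choices): random Hermitian
matrices stand in for $\hat{H}$ and $\hat{V}$, and the overlap of states
evolved with and without the perturbation decays even though each evolution is
unitary.
\begin{verbatim}
# Loschmidt-echo toy model: overlap of e^{-iHt} psi0 and
# e^{-i(H + eps V)t} psi0 decays in time.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, eps = 60, 0.02

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

H, V = rand_herm(N), rand_herm(N)
psi0 = np.zeros(N, complex); psi0[0] = 1.0

for t in (0.0, 1.0, 5.0, 20.0):
    psi1 = expm(-1j * H * t) @ psi0
    psi2 = expm(-1j * (H + eps * V) * t) @ psi0
    print(t, abs(np.vdot(psi1, psi2))**2)   # decays from 1
\end{verbatim}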
Signature change in effective equations, in all existing models, is implied by
the requirement of anomaly freedom, so that predictions are guaranteed to be
independent of gauge choices. Most models in which one evolves wave functions
are based on deparameterization, in which one formulates ``evolution'' of a
wave packet in terms of a so-called internal time, which is not a coordinate
but one of the dynamical fields. The conditions on well-defined evolution
after deparameterization are less severe than the general covariance
conditions when one does not select a specific time, be it a coordinate or
internal. If one were to fix the gauge and work in one specific set of
coordinates, one could avoid having to use effective equations subject to
signature change. Similarly, the possibility of formulating stable
wave-function dynamics in one chosen internal time does not mean that wave
functions in general evolve in a stable manner. One would have to show, first,
that predictions of one's model do not depend on which field is chosen as
internal time (a task which has not been performed in deparameterized models
of quantum cosmology; see \cite{ReducedKasner,MultChoice}). After this, one
could reliably analyze evolution and test whether it is always meaningful or
has to be stopped when a Euclidean regime is reached. Performing this task
turns out to be much more complicated than analyzing anomaly freedom of
effective equations. But interestingly, even in the absence of such an
analysis there are hints that wave-function evolution becomes unstable at high
density, in the sense that expectation values of modes change exponentially
\cite{InhomThroughBounce}.
\subsubsection{Cosmological implications}
A well-posed treatment of initial and boundary values in loop quantum
cosmology implies significant departures from the scenarios commonly
constructed in
this setting. For instance, there has been some interest in a
``super-inflationary'' phase at high density, around a bounce, during which
the Hubble parameter grows quickly \cite{SuperInflLQC}. Although the rapid
change happens for background variables and is therefore not the same as the
instabilities of mode equations such as (\ref{Wave}), one can easily check in
explicit models that the super-inflationary phase falls within the elliptic
range of the partial differential equations for inhomogeneity
\cite{Action}. The rapid growth of background variables cannot be used for
cosmological effects because their values at the border of the Euclidean phase
must be prescribed as part of the boundary-value problem. Obtaining a large
parameter with an ill-posed initial-value problem does not have physical
significance.
A related effect has been seen in a combination of loop-modified background
equations with inflationary models: At high density, the modified equation for
the inflaton has an anti-friction term which can easily push the inflaton up
to a high value in its potential \cite{Inflation,BounceCMB}. Given the right
potential for suitable inflation, the correct initial conditions can therefore
be provided. This effect, too, is subject to the verdict of being based on
ill-posed data. An alteration may, however, still be realized if one can show
that for a given background value $\bar{\phi}$ at the beginning of the
expansion phase there must generically be a large $\bar{\phi}'$ according to
the well-posed characteristic formulation. Values of $\bar{\phi}$ far from the
minimum of a potential could then be achieved afterwards, using well-posed
evolution with the predicted initial values in the expansion phase.
With the discussion of initial and boundary values in Sec.~\ref{s:Char}, we
can make our expectations for cosmological scenarios more precise. Some data
must be chosen at the beginning of the expansion phase, even if the energy
density or other coefficients of the differential equation never become
infinite. This feature is shared with the singular big-bang model, in which
the main conceptual problem is caused not so much by divergences but rather by
the presence of unrestricted initial data. (In general relativity, it is
possible to extend solutions across singularities in a distributional sense,
but the extension is not unique \cite{HawkingEllis}.) If there were a
unique way of deriving initial values at the beginning of the expansion phase,
the singularity of the traditional big-bang model would not be so much of a
problem. One could make clear predictions about the initial state and further
evolution of the universe, as attempted in inflationary scenarios with their
(somewhat controversial) initial conditions for the inflaton. In practical
terms, the singularity problem is therefore one of indeterminedness, which is
implied by, but not necessarily equivalent to, the occurrence of divergences.
An example for the latter part of the preceding statement is given by
signature-change cosmology. There are no divergences in these models, and yet
an important part of initial values for the expansion phase remains free. The
resulting scenario, which we call a finite beginning to emphasize the absence
of divergences together with the requirement of choosing new initial values,
is rather close to the standard big-bang model. In particular, for practical
purposes the need for new initial values once the density is sufficiently low
to trigger signature change back to a Lorentzian structure makes the Euclidean
region appear as a singularity. We therefore speak of a finite beginning
instead of a non-singular one, keeping in mind that the main problem of a
singularity is the indeterminedness of initial data.
There may yet be a crucial difference with the standard big-bang model: What
we need to prescribe at the finite beginning, according to Fig.~\ref{Fig:IB},
is only the field $\phi$, not its normal derivative $\phi'$ if we start
evolution in the collapse phase. If one has some means to know $\phi$ in a
neighborhood of the border between the Euclidean region and the expansion
phase, for instance from some hypothetical observations, one can derive
$\phi'$ and draw conclusions about the field in the collapse phase. There is
therefore some connection between collapse and expansion, although it is not
causal and not fully deterministic. A thorough cosmological analysis of scalar
and tensor power spectra in signature-change models is required before one can
tell how much about the collapse phase could be deciphered in this indirect
way.
\subsubsection{Global issues}
\label{s:Global}
While effective methods for constrained systems provide reliable local
equations, as confirmed by the results of this paper, there may seem to be
several global problems related to the new type of partial differential
equations which imply the disappearance of time or causality at high
density.\footnote{``And well I know it is not right// to seek and stay Time in
his flight.'' \cite{Don}} One example has been discussed in \cite{Loss} in
the context of black holes, where the non-singular beginning in cosmology as
detailed here is replaced by a naked singularity (again in the sense of
indeterminism) with a Cauchy horizon. Even if local field equations are
regular, the lack of a causal structure in some regions may lead to
unacceptable indeterministic behavior at a global level.
A different kind of global problem is indicated by one feature of solutions to
Tricomi's problem which we have not mentioned so far. Again, locally the
equation and its solutions are well-defined. But generic solutions turn out to
have a pole at one point $A\cap B$ in Fig.~\ref{Fig:Tricomi}, where the
Euclidean arc ends \cite{Tricomi}: Although solutions are smooth in the
interior of the characteristic region (for smooth boundary data), most of them
have a pole at one endpoint of the boundary. (This result explains the
ubiquity of sonic booms in analogous acoustic models.) The solution may remain
finite, but not all of its derivatives do. Derived quantities such as the
energy density in a matter field could therefore diverge. If this happens, it
is not likely that cosmological perturbation theory gives reliable results,
even though the local perturbative equations are regular.
The acoustic analog illustrates a further point: One could think that
signature change in cosmology (or black-hole physics) is harmless because
analogous effects can appear in well-known systems such as transonic motion
(or the model discussed in Sec.~\ref{s:Analog}). The difference is that
transonic motion leads to signature change only for excitations in the fluid,
while all other propagating degrees of freedom including the bulk fluid motion
and space-time are still governed by deterministic equations. (The speed of
light is an absolute limit for any causal motion, very much unlike the speed
of sound in a fluid. If $\beta<0$ in (\ref{Wave}), all motion is eliminated.)
If the bulk of the fluid is forced to move faster than its own speed of sound,
it overtakes any sound wave generated in it, so that its density profile
provides future conditions for the wave. Mathematically, this is represented
by Tricomi's future data. However, the bulk fluid itself moves in a
deterministic (but not wave-like) way even if it is faster than its own speed
of sound, and no causality issues appear. The situation is very different when
signature change happens for space-time physics, in which case no reference
time remains to define a causal structure and no mode can evolve
deterministically. For this reason, signature change in effective space-time
models of loop quantum gravity has much more radical consequences than
analogous effects in matter systems, as discussed in the present paper as well
as \cite{Loss}.
\section{Conclusions}
We have described several fundamental properties of a new scenario, based on
signature change, that has emerged from spherically symmetric and cosmological
models of loop quantum gravity in recent years. We emphasize that this
scenario cannot be seen in the minisuperspace models traditionally studied in
loop quantum cosmology \cite{LivRev}. In fact, the possibility of signature
change casts significant doubt on the viability of minisuperspace models of
loop quantum cosmlogy because in such models one would not see any sign of a
disappearing causal structure at high density. Minisuperspace models of loop
quantum cosmology may be used for some estimates of background properties, but
they can no longer be considered as reliable sources of stand-alone
cosmological scenarios. One always has to go beyond homogeneity to make sure
that there is a well-defined space-time structure and to check whether mode
equations remain hyperbolic, or to exclude other exotic effects.
Signature-change cosmology is therefore a new scenario which crucially relies
on inhomogeneous features of loop quantum gravity (even though, needless to
say, it has not yet been derived from a full quantization of gravity). Its
details, regarding for instance power spectra, still have to be worked out,
but we believe that the more mathematical and conceptual properties discussed
in this article already show a large number of interesting features. The main
consequence in practical terms is the occurrence of instabilities, related to
the sensitive dependence of solutions on initial data. In some examples,
instabilities may be physical effects which imply rapid change but no
inconsistencies. In quantum gravity, however, instabilities of the kind
encountered in the presence of signature change are fatal: The theory remains
subject to a large number of quantization ambiguities in its equations, and
not much is known about suitable initial states for quantum space (and not
just quantum matter). With this inherent vagueness, one cannot afford any
instabilities that would magnify theoretical uncertainties in a short amount
of time, even if this time may be as small as the Planck time. No predictions
would be possible. We are therefore sympathetic with the verdict ``In light of
the fact that even this 'well-behaved' signature change system predicts its
own downfall, it may be prudent to reassess the inclusion of signature
changing metrics in quantum gravity theories.'' \cite{KleinSigChange}
obtained after a detailed analysis of initial-value problems in a model of
classical signature change. Our model is provided by effective equations of
loop quantum gravity in which signature change appears to be generic (and is
not included by choice). The verdict of \cite{KleinSigChange} can therefore be
applied to loop quantum gravity at least to the extent that extreme caution is
called for when one considers evolution through high density.
In the presence of signature change, instabilities can be avoided only if one
switches to a 4-dimensional boundary-value problem at high density, giving up
causality. Although ambiguities remain in the theory, there are several
qualitative effects with interesting implications. As one of the main
observations in this article, the mixed-type partial differential equations
for modes in this context strike a nice balance between deterministic cyclic
models and singular big-bang models. There are no divergences, and yet initial
data in the infinite past do not uniquely determine all of space(-time). For
every mode, one must specify one function at the beginning of the expansion
phase even if one has already chosen initial values for the contraction
phase. Still, the normal derivative of the field is not free and may carry
subtle but interesting information about the pre-big-bang phase.
\section*{Acknowledgements}
This work was supported in part by NSF grant PHY-1307408 to MB. JM is
supported by the Grant DEC-2014/13/D/ST2/01895 of the National Centre of
Science.
\begin{appendix}
\section{Space-time}
\label{s:SpaceTime}
We present here a somewhat technical discussion of fundamental properties of
space-times underlying (\ref{Wave}). Symmetries and gauge transformations are
especially important in this context, as well as properties of the canonical
formulation of gravity. In our following exposition, we also provide more
details on the derivation and reliability of equations such as (\ref{Wave}) in
effective models of loop quantum gravity.
\subsection{Gauge transformations}
In spite of its appearance, equation (\ref{Wave}) is covariant in a
generalized sense, according to an effective (and canonically defined)
non-Riemannian structure of quantum space-time. The usual Lorentz and
Poincar\'e symmetries, under which the classical version of (\ref{Wave}) with
$\beta=1$ (and constant $a$) would be invariant, are realized in canonical
gravity as a subalgebra of the infinite-dimensional algebra (or rather,
algebroid \cite{ConsAlgebroid}) of deformations of 3-dimensional spacelike
hypersurfaces in space-time. (See for instance \cite{DeformedRel,CUP}.) These
hypersurface deformations are gauge transformations of any generally covariant
theory, including gravity.
Hypersurface deformations \cite{Regained} are more suitable than Poincar\'e
transformations in situations in which no background space-time metric is
assumed. They provide the proper framework for a discussion of generalized
space-time structures as they may be implied by canonical quantum
gravity. Hypersurface deformations have an infinite set of generators, spatial
ones given by $D[N^a]$ with spatial vector fields $N^a$ tangential to the
hypersurface, and normal ones $H[N]$ with functions $N$ on the hypersurface so
that the deformation is by an amount $Nn^a$ along the unit normal $n^a$. The
classical space-time geometry \cite{Regained} (as well as the canonical form
of general relativity \cite{DiracHamGR}) implies that these generators have
commutators (or classical Poisson brackets)
\begin{eqnarray}
\{D[N_1^a],D[N_2^b]\} &=& D[N_1^b{\rm D}_bN_2^a-N_2^b{\rm D}_bN_1^a]\\
\{H[N],D[N^a]\} &=& H[N^a{\rm D}_aN]\\
\{H[N_1],H[N_2]\} &=& -D[h^{ab}(N_1{\rm D}_bN_2-N_2{\rm D}_bN_1)] \label{HH}
\end{eqnarray}
with the induced metric $h_{ab}$ and covariant derivative ${\rm D}_a$ on a
hypersurface.
When one tries to quantize the theory canonically, one should turn the gauge
generators into operators, so that they still have closed brackets given by
commutators. Otherwise, the classical gauge transformations would be broken
and the quantum theory would not be consistent; it would have gauge anomalies.
The anomaly problem of canonical quantum gravity is important, but also very
difficult and unresolved so far. (One reason is the presence of the metric in
(\ref{HH}), which at the quantum level would be an operator and give rise to
complicated ordering questions with a quantized $D[N^a]$.) Nevertheless, there
have been several independent indications in recent years which give some hope
that the problem can be solved. At the operator level, especially
$2+1$-dimensional models have been analyzed in quite some detail, paying
attention to the full algebra of gauge generators. Consistent versions have
been found in different ways
\cite{ThreeDeform,TwoPlusOneDef,TwoPlusOneDef2,AnoFreeWeak}, including also
spherically symmetric models \cite{SphSymmOp}.
Independently, effective calculations start from the observation that the
quantum operation of commutators together with a closed algebra of operators
$\hat{C}_I$, $[\hat{C}_I,\hat{C}_J]=\hat{f}^K_{IJ}\hat{C}_K$ with structure
constants (or functions/operators) $\hat{f}^K_{IJ}$, implies a closed algebra
under Poisson brackets of effective constraints
$\langle\hat{C}_I\rangle$. These effective constraints are defined as
expectation values $\langle\hat{C}_I\rangle$ of the constraint operators in an
arbitrary state \cite{EffCons,EffConsRel}. Effective constraints are therefore
functions on the quantum state space, which can conveniently be parameterized
by expectation values and moments with respect to a set of basic operators. In
addition to these effective constraints, obtained as direct expectation values
of constraint operators, there is an infinite set of independent ones, derived
from the same constraint operators as $\langle\widehat{\rm
pol}\hat{C}_I\rangle$ for all polynomials in basic operators. All these
functions on the space of states would be zero on the subspace annihilated by
the constraints $\hat{C}_I$, imposed following Dirac's prescription. The need
for an infinite set of effective constraints for every single constraint
operator follows from the requirement that a whole tower of infinitely many
moments must be constrained together with every constrained expectation value.
A Poisson bracket on the quantum state space of expectation values and moments
can be defined by $\{\langle\hat{A}\rangle,\langle\hat{B}\rangle\}=
\langle[\hat{A},\hat{B}]\rangle/i\hbar$, extended by the Leibniz rule to
products of expectation values (as they appear in quantum fluctuations and
higher moments). With these definitions, it follows that effective constraints
form a closed algebra under Poisson brackets if the constraint operators form
a closed algebra under commutators. Practically, it is easier to evaluate
Poisson brackets than commutators, an observation on which the idea of
canonical effective equations \cite{EffAc,Karpacz} and constraints
\cite{EffCons,EffConsRel} is based. Also, there are useful approximation
schemes, such as a semiclassical one in which one would do calculations order
by order in the moments, which are easier to implement for effective
constraints than for constraint operators and allow one to handle the unwieldy
space of all states more efficiently. For finite orders in the moments, there
is a finite number of independent effective constraints, and the task of
computing their algebra becomes feasible.
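As a schematic illustration of this bracket (for a single canonical pair, not any of the gravitational models discussed here), the basic expectation values obey
\[
 \{\langle\hat{q}\rangle,\langle\hat{p}\rangle\}=
 \frac{\langle[\hat{q},\hat{p}]\rangle}{i\hbar}=1\,,
\]
while the Leibniz rule implies for second-order moments, such as the fluctuation $(\Delta q)^2=\langle\hat{q}^2\rangle-\langle\hat{q}\rangle^2$, that
\[
 \{(\Delta q)^2,(\Delta p)^2\}=
 \frac{\langle[\hat{q}^2,\hat{p}^2]\rangle}{i\hbar}-4\langle\hat{q}\rangle\langle\hat{p}\rangle
 =2\langle\hat{q}\hat{p}+\hat{p}\hat{q}\rangle-4\langle\hat{q}\rangle\langle\hat{p}\rangle\,.
\]
In this way, expectation values and moments form a well-defined phase space on which constraint brackets can be evaluated order by order.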
If we have a closed algebra $[\hat{C}_I,\hat{C}_J]$ of operators, the
corresponding effective constraints truncated at some moment order form a
closed Poisson-bracket algebra up to this order, irrespective of the states
used. This formulation of effective theories is therefore much more general
than the usual idea of effective actions \cite{EffConsQBR}. (The latter are
often combined with further approximations such as derivative expansions, or
with restrictions of states such as near-vacuum states for the low-energy
effective action.) In our case, we need not make any assumption on the class
of states in order to obtain a closed algebra of effective constraints. One
may just have to include higher orders in the moments for a better
approximation to solutions corresponding to strong quantum states; but even at
lower orders, the algebra must be closed (to within the same order). Effective
constraints therefore provide a good test of possible anomaly-free quantum
constraint algebras. In particular, if one can show that no closed effective
constraint algebra exists to within some order in moments, using a
large-enough parameterization of quantum corrections, there cannot be a closed
algebra of constraint operators. And if closure of effective constraints can
be achieved only if the classical constraint algebra is modified, the full
quantum constraint algebra must be subject to quantum corrections which change
the form of gauge transformations. Signature change is just one of the
consequences found in this way: Instead of (\ref{HH}), we then have
\begin{equation} \label{HHbeta}
\{H[N_1],H[N_2]\} = -D[\beta h^{ab}(N_1{\rm D}_bN_2-N_2{\rm D}_bN_1)]
\end{equation}
with a phase-space function $\beta$ which may turn negative (while the other
brackets involving $D[N^a]$ remain unchanged). Canonical field equations
consistent with this modified bracket are of the form (\ref{Wave}), as shown
by \cite{Action} using the methods of \cite{Regained,LagrangianRegained}. (The
same modification of derivative terms is realized to higher orders in
derivatives \cite{ActionHigher}.)
So far, calculations in models of loop quantum gravity have been performed
only to zeroth order in the moments. But they still show interesting effects
because, in addition to moment terms, the theory implies further quantum
corrections. At high density, holonomy modifications are relevant, which are
implied by the basic assumption of loop quantum gravity
\cite{ThomasRev,Rov,ALRev} that the gravitational connection can be
represented as an operator only when it is integrated and exponentiated to a
holonomy. A second effect, inverse-triad corrections
\cite{InvScale,QuantCorrPert,LoopMuk}, is more indirect but also related to
the basic assumption. It implies corrections relevant at lower curvature
\cite{LoopMuk,InflTest,InflConsist}. Effective constraint algebras have been
computed in both cases and found to allow for consistent versions:
\cite{ConstraintAlgebra,LTBII,ModCollapse} as examples for inverse-triad
corrections, \cite{ScalarHol} for holonomy corrections, and
\cite{JR,ScalarHolInv,HigherSpatial} for combinations of both. Whenever
holonomy modifications are present, all generic consistent effective
constraints found so far imply mode equations of the form (\ref{Wave}).
The derivation shows that covariance is not broken because all classical gauge
generators have a valid analog as a constraint operator $\hat{C}_I$ or as an
effective constraint. For every classical gauge transformation, there is a
corresponding quantum or effective gauge transformation. No transformations
are violated, including the Lorentz and Poincar\'e ones that one obtains as a
special case of hypersurface deformations \cite{DeformedRel}. Nevertheless, it
is possible for (\ref{Wave}) to differ from standard covariant wave equations
because quantum gauge generators, while they must not be broken for an
anomaly-free theory, may be subject to quantum corrections. These corrections
can be seen in the structure operators or functions $\hat{f}^K_{IJ}$ of the
quantum or effective constraint algebras. For holonomy and inverse-triad
corrections, the classical structure functions are, as in (\ref{HHbeta}),
multiplied with a phase-space function $\beta$, which determines a new
consistent form of quantum space-time covariance. The same function appears in
mode equations (\ref{Wave}) derived from the effective constraints.
\subsection{Slicing independence}
Even though the speed in (\ref{Wave}) provided by quantum geometry depends on
the spatial metric and extrinsic curvature --- quantities that are not
covariant in classical space-time --- the effect is frame independent in a
subtle way. The models in which such modified speeds have been derived are
anomaly free, so that the classical set of gauge transformations (given by
coordinate changes) is not violated. However, not just the dynamics but also
the structure of space-time receives quantum corrections. There is a new set
of gauge transformations under which the quantum-corrected field equations are
invariant, and which has the full set of standard coordinate changes as its
classical limit. The effective theories, including modified speeds they
predict, are covariant under these deformed gauge
transformations. Accordingly, the space-time structure is no longer classical
or Riemannian, but it remains well-defined in canonical terms.
Put differently, one could worry that models in which waves propagate with
speeds depending on the spatial metric and extrinsic curvature of
constant-time hypersurfaces violate the slicing independence of the classical
theory. Classically, for any given space-time, one can, depending on one's set
of coordinates, choose equal-time slices with large or small extrinsic
curvature, even if the covariant curvature of space-time vanishes. A speed
depending on extrinsic curvature would then suggest that predictions depend on
the slicing or the choice of an initial-value surface, which would be
unacceptable. This worry is unjustified because the two initial-value
surfaces, one with small and one with large extrinsic curvature, would give
rise to different physical solutions of the effective theory even though they
would present the same classical solution. Predictions, for instance of the
speed, then differ simply because one would consider physically different
solutions. Transformations between different slicings are, just like
coordinate changes, gauge transformations which are modified in the effective
theory. Slicings of one and the same classical space-time, which are related
by classical gauge transformations, are not gauge related in the
quantum-corrected setting.
\end{appendix}
\section{Introduction}
\glsaddall
With the increasing integration of renewable sources into the electrical grid, accurate forecasting of renewable source generation has become one of the most important challenges across several applications. Among them, balancing the electrical grid via activation of reserves is arguably one of the most critical ones to ensure a stable system. In particular, due to their intermittent and unpredictable nature, the more renewables are integrated, the more complex the grid management becomes \cite{Lara-Fanego2012,Voyant2017}.
In this context, as solar energy is one of the most unpredictable renewable sources, the increasing use of solar power in recent years has led to an increasing interest in forecasting irradiance over short time horizons. In particular, in addition to activation of reserves to manage the grid stability, short-term forecasts of solar irradiance are paramount for operational planning, switching sources, programming backup, short-term power trading, peak load matching, scheduling of power systems, congestion management, and cost reduction \cite{Hammer1999,Reikard2009,Voyant2017}.
\subsection{Solar irradiance forecasting}
The forecasting of solar irradiance can typically be divided between methods for \textit{global horizontal irradiance (GHI)} and methods for \textit{direct normal irradiance (DNI)} \cite{Law2014}, with the latter being a component of the GHI (together with the diffuse solar irradiance). As this work forecasts GHI, the reader is referred to \cite{Law2014} for a complete review of methods for DNI. For the case of GHI, forecasting techniques are further categorized into two subfields according to the input data and the forecast horizon \cite{Diagne2013,Voyant2017}:
\begin{enumerate}
\item Time series models based on satellite images,
measurements on the ground level, or sky images. These methods are usually suitable for short-term forecasts up to 4-6 h. Within this field, the literature can be further divided into three groups.
\begin{enumerate}
\item Classical statistical models like ARMA models \cite{Ahmad2015}, ARIMA models \cite{Reikard2009}, the CARDS model \cite{Huang2013}, or the Lasso model \cite{Yang2015}.
\item Artificial intelligence models such as neural networks models \cite{Mellit2010,Lauret2015}, support vector machines \cite{Lauret2015}, decision trees-based models \cite{McCandless2015}, or Gaussian models \cite{Lauret2015}.
\item Cloud-moving vector models that use satellite images \cite{Lorenz2012}.
\end{enumerate}
\item \textit{Numerical weather prediction (NWP)} models that simulate weather conditions. These methods are suitable for longer forecast horizons, from 4-6 hours onward, time scales where they outperform the statistical models \cite{Perez2010}. As the goal of this work is short-term forecasting, the reader is referred to \cite{Diagne2013} for a more complete review of NWP methods.
\end{enumerate}
\noindent While the division in accuracy between NWP and time series models is given by the predictive horizon, establishing comparisons between time series models is more complex. In particular, while some authors have reported the superiority of statistical models over artificial intelligence methods \cite{Reikard2009}, others have obtained opposite results \cite{Sfetsos2000}.
\textred{The input features typically used in the literature to predict solar irradiance vary widely, e.g.~past irradiance values, satellite data, weather information, etc. In many cases, the inputs considered depend on the type of model used, e.g.~cloud moving vector models require satellite images. While a detailed review on the different methods and input features is outside the scope of this paper, \cite{Diagne2013} is a good source for a more thorough analysis.}
\subsection{Motivation}
To the best of our knowledge, due to the time series nature of the solar irradiance, the statistical and artificial intelligence methods proposed so far have considered past ground measurements of the solar irradiance as input regressors \cite{Diagne2013}. While this choice of inputs might be the most sensible selection to build time series models, it poses an important problem: local data is required at every site where a forecast is needed.
In particular, if the geographical dispersion of solar generators is considered, it becomes clear that forecasting solar irradiance is a problem that has to be resolved across multiple locations. If ground measurements of all these sites are required, the cost of forecasting irradiance can become very expensive. In addition to the cost, a second associated problem is the fact that obtaining local data is not always easy.
As a result, in order to obtain scalable solutions for solar irradiance forecasting, it is important to develop global models that can forecast without the need of local data. In this context, while current cloud-moving vectors might accomplish that, they are not always easy to deploy as they are complex forecasting techniques that involve several steps \cite{Diagne2013}.
\subsection{Contributions and Organization of the Paper}
In this paper, a novel forecasting technique is proposed that addresses the mentioned problem by providing a prediction model that, while being accurate and easy to deploy, forecasts solar irradiance without the need of local data. The prediction model is based on a \textit{deep neural network (DNN)} that, using SEVIRI\footnote{The SEVIRI (Spinning Enhanced Visible and InfraRed Imager) is a measurement instrument of the METEOSAT satellite.} satellite images and NWP forecasts, is as accurate as local time series models that consider ground measurements. Although the model uses satellite images just as cloud-moving vector models do, it is easier to deploy as it requires less complex computations. \textred{In addition, while obtaining satellite data might not be always easier or cheaper than installing local ground sensors, there are several locations where satellite data are available and the proposed model avoids going to the ground to install local measurements. An example of this is The Netherlands, where satellite data is provided by the national meteorological institute.}
\textred{It is important to note that, to the best of our knowledge, the proposed method is the first of its class that tries to remove the dependence on local telemetry even for training. Particularly, while other methods from the literature successfully remove the local data dependence during forecasting, e.g.~\cite{Larson2018}, they still require local telemetry at all sites of interest during training. While using local data from a small subset of sites during training, the proposed model successfully predicts the irradiance in a much larger set of locations without needing local telemetry from these sites at any stage of the estimation or the forecasting.
}
As a case study, 30 locations in The Netherlands are considered and the model is estimated using 5 of these locations. Then, for the remaining 25 locations, the performance of the estimated model is compared against individual time series models specifically trained for each site using ground data.
The remainder of the paper is organized as follows: Section \ref{sec:theory} introduces the preliminary concepts considered in this work. Next, Section \ref{sec:modelframe} presents the proposed general model for forecasting solar irradiance. Then, Section \ref{sec:casestudy} introduces the case study and discusses the performance of the proposed model when compared with local models. Finally, Section \ref{sec:conclusion} summarizes the main results and concludes the paper.
\section{Preliminaries}
\label{sec:theory}
In this section the concepts and algorithms that are used and/or modified in the paper are introduced.
\subsection{Deep Learning and DNNs}
\label{sec:deeplearning}
In the last decade, the field of neural networks has experienced several innovations that have led to what is known as \textit{deep learning (DL)} \cite{Goodfellow2016}. In particular, one of the traditional issues of neural networks had always been the large computational cost of training large models. However, that changed completely when \cite{Hinton2006} showed that a deep belief network could be trained efficiently using an algorithm called greedy layer-wise pretraining. As related developments followed, researchers started to be able to efficiently train complex neural networks whose depth was not just limited to a single hidden layer (as in the traditional multilayer perceptron). As these new structures systematically showed better results and generalization capabilities, the field was renamed as deep learning to stress the importance of the depth in the achieved improvements \cite[Section~1.2.1]{Goodfellow2016}.
While this success of DL models initiated in computer science applications, e.g.~image recognition \cite{Krizhevsky2012}, speech recognition \cite{Hinton2012}, or machine translation \cite{Bahdanau2014}, the benefits of DL have also spread in the last years to several energy-related applications \cite{Wang2016,Feng2017,Suryanarayana2018,Coelho2017,Fan2017,Lago2018,Lago2018a}. Among these areas, wind power forecasting \cite{Wang2016,Feng2017} and electricity price forecasting \cite{Lago2018,Lago2018a} are arguably the fields that have benefited the most.
While there are different DL architectures, e.g.~convolutional networks or recurrent networks, in this paper a DNN is considered, i.e.~a multilayer perceptron with more than a single hidden layer, in order to build the solar forecasting model. {The reason for this selection is twofold: (1) DNNs are less computationally intensive than the other DL architectures \cite{Goodfellow2016}; (2) DNNs have empirically outperformed the other DL architectures in a similar energy-based forecasting task \cite{Lago2018a}, i.e.~the forecast of day-ahead electricity prices.
}
\subsubsection{Representation}
Defining by $\mathbf{X}=[x_1,\ldots,x_{n}]^\top\in\mathbb{R}^{n}$ the input of the network, by $\mathbf{Y}=[y_{1},y_{2},\ldots,y_{m}]^\top\in\mathbb{R}^{m}$ the output of the network, by $n_k$ the number of neurons of the $k^\mr{th}$ hidden layer, and by $\mathbf{z}_k=[z_{k1},\ldots,z_{k n_k}]^\top$ the state vector in the $k^\mr{th}$ hidden layer, a general DNN with two hidden layers can be represented as in Figure \ref{fig:hiddnetexa}.
\begin{figure}[htb]
\begin{center}
\input{figs/neuralnetexample}
\caption{Example of a DNN.}
\label{fig:hiddnetexa}
\end{center}
\end{figure}
In this representation, the parameters of the model are represented by the set of parameters $\vc{W}$ that establish the mapping connections between the different neurons of the network \cite{Goodfellow2016}.
\subsubsection{Training}
The process of estimating the model weights $\vc{W}$ is usually called training. In particular, given a training set $\mathcal{S_T}=\bigl\{(\mathbf{X}_k,\vc{Y}_k)\bigr\}_{k=1}^N$ with $N$ data points, the network training is done by solving a general optimization problem with the following structure:
\begin{mini}
{\vc{W}}{\sum_{k=1}^{N}g_k\Bigl(\vc{Y}_k, F(\vc{X}_k,\vc{W})\Bigr),}
{\label{eq:prob1}}{}
\end{mini}
\noindent where $F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{{m}}$ is the neural network map, and $g_k$ is the problem-specific cost function, e.g.~the Euclidean norm or the average cross-entropy. Traditional methods to solve \eqref{eq:prob1} include the {gradient descent} or the {Levenberg–Marquardt} algorithm \cite{Weron2014}. However, while these methods work well for small-sized networks, they display computational and scalability issues for DNNs. In particular, for DNNs better alternatives are the {stochastic gradient descent algorithm} and all its variants \cite{Ruder2016}.
{It is important to note that \eqref{eq:prob1} is an approximation of the real problem one wishes to solve. Particularly, in an ideal situation, the cost function would be minimized w.r.t.~the underlying data distribution; however, as the distribution is unknown, the problem has to be approximated by minimizing the cost function over the finite training set. This is especially relevant for neural networks, where a model could be overfitted and have a good performance in the training set, but perform badly on data not seen during training. To avoid this situation, the network is usually trained in combination with regularization techniques, e.g.~early stopping, and using out-of-sample data to evaluate the performance \cite{Goodfellow2016}.}
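As a minimal illustration of these updates (a toy linear model on synthetic data, not the training setup of this paper), the stochastic gradient descent iteration for a mean-square-error cost of the form \eqref{eq:prob1} can be sketched as:
\begin{verbatim}
import numpy as np

# Toy data and a single linear layer standing in for F(X, W)
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
Y = rng.normal(size=(256, 1))
W = rng.normal(scale=0.1, size=(4, 1))
lr, batch = 1e-2, 32

for epoch in range(100):
    idx = rng.permutation(len(X))
    for k in range(0, len(X), batch):
        xb, yb = X[idx[k:k+batch]], Y[idx[k:k+batch]]
        # Gradient of the mean squared error on the mini-batch
        grad = 2.0 * xb.T @ (xb @ W - yb) / len(xb)
        W -= lr * grad
\end{verbatim}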
\subsubsection{Network Hyperparameters}
In addition to the weights, the network has several parameters that need to be selected before the training process. Typical parameters include the number of neurons of the hidden layers, the number of hidden layers, or the learning rate of the stochastic gradient descent method. To distinguish them from the main parameters, i.e.~the network weights, they are referred to as the network hyperparameters.
\subsection{Hyperparameter Optimization and Feature Selection}
\label{sec:hyper}
In this paper, to perform the hyperparameter selection, a Bayesian optimization algorithm that has been widely used for hyperparameter selection is considered: the \textit{tree-structured Parzen estimator (TPE)} \cite{Bergstra2011}, an optimization algorithm within the family of {sequential model-based optimization} methods \cite{Hutter2011}. The basic principle of a sequential model-based optimization algorithm is to optimize a black-box function, e.g.~the performance of a neural network as a function of the hyperparameters, by iteratively estimating an approximation of the function and exploring the function space using the local minimum of the approximation. At any given iteration $i$, the algorithm evaluates the black-box function at a new point $\bm{\theta}_i$. Next, it estimates an approximation $\mathcal{M}_i$ of the black-box function by fitting the previously sampled points to the obtained function evaluations. Then, it selects the next sample point $\bm{\theta}_{i+1}$ by numerically optimizing $\mathcal{M}_i$ and starts the next iteration. Finally, after a maximum number of iterations $T$ have been performed, the algorithm selects the best configuration.
Algorithm \ref{al:smbo} represents an example of a sequential model-based optimization algorithm for hyperparameter selection.
\begin{algorithm}
\caption{Hyperparameter Optimization}
\label{al:smbo}
\begin{algorithmic}[1]
\Procedure{SMBO}{$T,\bm{\theta}_1$}
\State $\mathcal{H} \gets \emptyset $
\For{$i=1,\ldots,T$}
\State $ p_i \gets$ TrainNetwork($\bm{\theta}_i$)
\State $\mathcal{H} \gets \mathcal{H} \cup \bigl\{(p_i,\bm{\theta}_i)\bigr\}$
\If{$i < T$}
\State $\mathcal{M}_i(\bm{\theta}) \gets \mr{EstimateModel}(\mathcal{H})$
\State $ \bm{\theta}_{i+1} \gets \mr{argmax}_{\bm{\theta}} ~\mathcal{M}_i(\bm{\theta})$
\EndIf
\EndFor
\State $\bm{\theta}^* \gets \mr{BestHyperparameters}(\mathcal{H})$
\State \Return $\bm{\theta}^*$
\EndProcedure
\end{algorithmic}
\end{algorithm}
{In addition to optimizing the hyperparameters, the TPE algorithm is also employed for optimizing the selection of input features. In particular, the feature selection method proposed in \cite{Lago2018} is considered, which selects the input features by first defining the input features as model hyperparameters and then using the TPE algorithm to optimally choose among them. More specifically, the method considers that each possible input feature can be either modeled as a binary hyperparameter representing its inclusion/exclusion or as an integer hyperparameter representing how many historical values of the specific input are used. In solar forecasting, an example of the former could be whether to consider the hour of the day as an input feature and an example of the latter could be the optimal number of past irradiance values.}
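As an illustration of this procedure, a minimal sketch using the \texttt{hyperopt} library, which implements the TPE algorithm, is given below; the search space is illustrative and the objective function is a placeholder for the actual network training and validation:
\begin{verbatim}
import numpy as np
from hyperopt import fmin, tpe, hp, Trials

def objective(theta):
    # Placeholder: train a network with configuration theta and
    # return its validation error; a random value stands in here
    return np.random.rand()

space = {
    'n_layers': hp.choice('n_layers', [1, 2, 3, 4]),
    'lr': hp.loguniform('lr', np.log(1e-4), np.log(1e-2)),
    'dropout': hp.uniform('dropout', 0.0, 1.0),
    # Binary hyperparameter: inclusion/exclusion of a feature
    'use_temperature': hp.choice('use_temperature', [False, True]),
    # Integer hyperparameter: number of past irradiance values
    'n_past_irradiance': hp.quniform('n_past_irradiance', 1, 6, 1),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
\end{verbatim}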
\subsection{Performance Metrics}
\label{sec:metrics}
\textred{In order to evaluate the accuracy of the proposed model, a performance metric is needed. In this paper, following the standards of the literature of solar irradiance forecasting, three different metrics are considered: the \textit{relative root mean square error (rRMSE)}, the \textit{mean bias error (MBE)}, and the forecasting skill $s$ as defined by \cite{Marquez2012}.
One of the most commonly used metrics for evaluating solar irradiance forecasting is the RMSE or rRMSE, which provide an assessment of the average spread of the forecasting errors. In particular, given a vector $\vc{Y}=[y_1,\ldots,y_N]^\top$ of real outputs and a vector $\vc{\hat{Y}}=[\hat{y}_1,\ldots,\hat{y}_N]^\top$ of predicted outputs, the rRMSE metric can be computed as:
\begin{equation}
\mathrm{rRMSE} = \frac{\sqrt{\frac{1}{N}\sum_{k=1}^{N}(y_k-\hat{y_k})^2}}{\frac{1}{N}\sum_{k=1}^{N} y_k}\cdot 100\,\,\%.
\end{equation}
A second metric that is widely used is the MBE, a measure of the overall bias of the model. Using the same definitions as before, the MBE metric can be computed as:
\begin{equation}
\mathrm{MBE} = \frac{1}{N}\sum_{k=1}^{N}\left(y_k-\hat{y}_k\right).
\end{equation}
While both metrics can properly assess and compare models on the same dataset, they are hard to interpret when it comes to making comparisons across multiple locations, climates, and times of the year \cite{Marquez2012}. A metric that tries to solve this issue is the forecasting skill $s$; particularly, it first defines a metric $V$ that accounts for the \textit{variability} of the solar irradiance, i.e.~the specific variability due to location, climate, and time. Next, it defines a second metric $U$ that accounts for the \textit{uncertainty}, i.e.~the errors, of the forecasting model. Finally, the forecasting skill $s$ is defined as:
\begin{equation}
s = 1-\frac{U}{V}.
\end{equation}
\noindent For the details on computing $U$ and $V$ as well as a detailed explanation of $s$, the reader is referred to \cite{Marquez2012}. The important aspect to consider for this study is that $s$ is a metric normalized w.r.t.~a simple persistence model (see Section \ref{sec:pers}) that permits the comparison of models across different conditions. A normal forecaster should be characterized by $s\in[0,1]$, with higher values indicating better forecasting; particularly, $s=1$ indicates that the solar irradiance is perfectly forecasted, and $s=0$ that the model is not better than a simple persistence model (by definition of $U$ and $V$, a persistence model will always have $s=0$). Negative values would then imply that the forecaster is worse than the simple persistence model.
}
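For concreteness, the two equations above translate directly into a few lines of \texttt{numpy}; the forecasting skill $s$ additionally requires the $U$ and $V$ statistics of \cite{Marquez2012} and is therefore omitted from this sketch:
\begin{verbatim}
import numpy as np

def rrmse(y, y_hat):
    # Relative root mean square error in percent
    return 100.0 * np.sqrt(np.mean((y - y_hat) ** 2)) / np.mean(y)

def mbe(y, y_hat):
    # Mean bias error in the units of the irradiance (W/m^2)
    return np.mean(y - y_hat)
\end{verbatim}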
\section{Prediction Model}
\label{sec:modelframe}
In this section, the proposed prediction model for solar irradiance forecasting is presented.
\subsection{Model Structure}
\label{sec:structure}
A key element to build a prediction model that can be used without the need of ground data is to employ a model whose structure is flexible enough to generalize across multiple geographical locations. As DNNs are powerful models that can generalize across tasks \cite{Goodfellow2016,Lago2018}, they are selected as the base model for the proposed forecaster. This concept of generalization is further explained in Section \ref{sec:modelGen}.
While the model is a DNN like the one illustrated in Figure \ref{fig:hiddnetexa}, the number of layers, the size of the output, and the type of inputs are specifically selected according to the application. In particular, considering that 6 hours is the limit predictive horizon before NWP forecasts outperform time series models \cite{Diagne2013}, the model consists of $6$ output neurons representing the forecasted hourly irradiance over the next 6 hours; this horizon is the standard choice for short-term irradiance forecasting \cite{Diagne2013}.
In terms of hidden layers, the model is not subject to any specific depth; instead, depending on the case study, i.e.~the geographical area where the forecasts are made, the number of hidden layers is optimized using hyperparameter optimization as explained in Section \ref{sec:hyper}. For the case study in this paper, i.e.~forecasting irradiance in the Netherlands, the optimal network depth is 2 hidden layers. To select the number of neurons per layer, the same methodology applies, i.e.~they need to be optimized for each geographical location.
\subsection{Model Inputs}
\label{sec:inputs}
As indicated in the introduction, the aim of the model is to forecast solar irradiance without the need of ground data. As a result, to perform the selection of model inputs, it is paramount to consider the subset of inputs that, while correlating with solar irradiance, are general enough so that they can be easily obtained for any given location. Given that restriction, the proposed model considers three types of inputs: NWP forecasts of the solar irradiance, the clear-sky irradiance, and satellite images representing maps of past solar irradiance.
\subsubsection{Numerical weather prediction forecast}
The first type of input consists of NWP forecasts of the solar irradiance obtained from the \textit{European Centre for Medium-Range Weather Forecasts (ECMWF)}. As indicated in the introduction, NWP forecasts of the solar irradiance are less accurate than time series models for short-term horizons. However, as they strongly correlate with the real irradiance, they are very useful regressors to build time series models.
For the proposed model, the input data consists of the 6 forecasted values for the next 6 hours given by the latest available ECMWF forecast (typically available every day around 08:00-09:00 CET).
\subsubsection{Clear-sky irradiance}
As a second input, the model considers the clear-sky irradiance ${\mathcal{I}}_\mathrm{c}$, i.e.~the GHI under clear-sky conditions, at every hour over the next 6 hours. The clear-sky irradiance is a deterministic input that is obtained using the clear-sky model defined in \cite{Ineichen2002}, which computes ${\mathcal{I}}_\mathrm{c}$ using the location and time of interest.
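As an illustrative sketch, the clear-sky irradiance can be computed with the \texttt{python} \texttt{PVLIB} library used later in the case study; the coordinates below (roughly De Bilt) are placeholders:
\begin{verbatim}
import pandas as pd
from pvlib.location import Location

site = Location(latitude=52.1, longitude=5.18,
                tz='Europe/Amsterdam')
times = pd.date_range('2017-06-01 05:00', '2017-06-01 21:00',
                      freq='1H', tz='Europe/Amsterdam')
# DataFrame with 'ghi', 'dni' and 'dhi' columns; 'ghi' is I_c
clear_sky = site.get_clearsky(times, model='ineichen')
\end{verbatim}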
\subsubsection{Satellite images}
\label{sec:sat}
The third input consists of satellite data representing the past irradiance values of a geographical area. In particular, the input data consists of images from the SEVIRI instrument of the METEOSAT satellite that are transformed to irradiance values using two different methods:
\begin{enumerate}
\item For data corresponding to solar elevation angles above 12$^\circ$, the SEVIRI-based images are mapped to irradiance values using the \textit{Surface insolation under clear and cloudy skies (SICSS)} algorithm \cite{Greuell2013}.
\item For data corresponding to solar elevation angles below 12$^\circ$, i.e.~very early in the morning and late in the evening, the irradiance values are extracted by considering the interpolation method described in \cite{Deneke2008} applied to the clear sky index.
\end{enumerate}
This distinction depending on the solar elevation angle is required because: (1) the SICSS method considers cloud properties; (2) at low solar elevation angles the uncertainty in the cloud properties increases strongly \cite{Deneke2008}.
Once the satellite images are mapped to irradiance values, the input data simply consists of the past irradiance values in the individual pixel where the forecasting site is located. Then, to select which past irradiance values, i.e.~which past images, are relevant for building the general model, the feature selection method defined in Section \ref{sec:hyper} is employed.
As a final remark, it is important to note that these irradiance values have a resolution that is limited by the resolution of the satellite images, which in the case of the SEVIRI instrument are pixels of $3\times 3$\,km.
As a result, to represent the solar irradiance in a specific location, the accuracy of satellite-based measurements cannot be better than that of ground measurements.
\textred{\subsubsection{Input selection}
\label{sec:inpselection}
The three input features that the proposed model considers were selected from a larger set of input features. In particular, in order to ensure that the proposed model included the most relevant input features, a feature selection process was performed. During this feature selection process, the three considered inputs, i.e.~the NWP forecasts, the clear-sky irradiance, and the satellite images were selected as the most important features. However, in addition to these three, four other features were also considered:
\begin{itemize}
\item Historical values of the temperature.
\item Historical values of the humidity.
\item Forecast of the temperature.
\item Forecast of the humidity.
\end{itemize}
To perform the feature selection between these 7 input features, the feature selection method described in \cite{Lago2018} was employed; i.e.~the 7 input features were modeled as binary hyperparameters and the selection was performed together with the hyperparameter optimization described in Section \ref{sec:modehyper}. This optimization resulted in the 3 selected inputs.
}
\subsection{Hyperparameter Optimization and Feature Selection}
\label{sec:modehyper}
As briefly introduced in Section \ref{sec:structure}, the proposed model needs to be tuned for the specific geographical area where it is applied. In order to tune the model structure, the following four DNN hyperparameters are optimized:
\begin{enumerate}
\item \textbf{Number of hidden layers}: the neural network depth is a parameter that needs to be tuned in order to obtain a model that can correctly generalize across multiple geographical locations.
\item \textbf{Number of neurons per layer}: besides the number of hidden layers, the size of each layer also plays an important role in the generalization capabilities of the DNN.
\item \textbf{General learning rate}: the initial learning rate used in the stochastic gradient descent method. {In particular, while the stochastic gradient descent method automatically adapts the learning rate at every iteration of the optimization process, the learning rate at the first iteration has to be selected.}
\item \textbf{Dropout}: Dropout \cite{Srivastava2014} is included as a possible regularization technique to reduce overfitting and to improve the training performance. To do so, at each iteration, dropout selects a fraction of the neurons and prevents them from training. This fraction of neurons is defined as a real hyperparameter between 0 and 1.
\end{enumerate}
\textred{As explained in Sections \ref{sec:hyper} and \ref{sec:inpselection}, in combination with the hyperparameter optimization, the proposed model also performs a feature selection. In particular, the feature selection method selects the most relevant inputs among a subset of 7 features and it also selects which past historical irradiance values are required.}
\subsection{Model Parameters}
The parameters of the DNN are represented by the set of weights that establish the mapping connections between the several neurons of the network:
\begin{itemize}
\item $\mathbf{W}_{\mr{i},i}$: the vector of weights between the input $\mathbf{X}$ and the neuron $i$ of the first hidden layer.
\item $\mathbf{W}_{{k, i}}$: the vector of weights between the $k^\mr{th}$ hidden layer and the neuron $i$ of the $(k+1)^\mr{th}$ hidden layer.
\item $\mathbf{W}_{\mr{o},i}$: the vector of weights between the last hidden layer and the output irradiance vector $\hat{\mathcal{I}}$.
\item $\mathbf{b}_k =[b_{k1},\ldots,b_{k{n_k}}]^\top$: the vector of bias weights in the ${k}^\mr{th}$ hidden layer, with $k=1,\,2$.
\item $\mathbf{b}_\mr{o}=[b_{\mr{o},1}\ldots,b_{\mr{o},6}]^\top$: the vector of bias weights in the output layer.
\end{itemize}
\subsection{Model Equations}
Using the above definitions, the equations of the DNN assuming two hidden layers can be defined as:
\begin{subequations}
\label{eq:deepnn}
\begin{alignat}{3}
\!\!\!\!z_{1i} &= f_{1i}\Bigl(\mathbf{W}_{\mr{i},i}^\top \cdot \mathbf{X}+b_{1i}\Bigr),\quad &&\mr{for~}i=1,\ldots,n_1,\\
z_{2i} &= f_{2i}\Bigl(\mathbf{W}_{1,i}^\top \cdot \mathbf{z}_1+b_{2i}\Bigr),\quad&&\mr{for~}i=1,\ldots,n_2,\\
\!\!\!\! \hat{\mathcal{I}}_{h+i} &= \mathbf{W}_{\mr{o},i}^\top \cdot \mathbf{z}_2+b_{\mr{o},i},\quad &&\mr{for~} i=1,\ldots,6, \label{eq:genoutDNN}
\end{alignat}
\end{subequations}
\noindent where $f_{k i}$ represents the activation function of neuron $i$ in the $k^\mr{th}$ hidden layer. In particular, for the proposed model, the \textit{rectified linear unit (ReLU)} \cite{Nair2010} is selected as the activation function of the two hidden layers. {This choice is made because this activation function has become a standard for hidden layers of DNNs \cite{Goodfellow2016}.} It is important to note that, as the irradiance is a real number, no activation function is used for the output layer.
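A minimal \texttt{Keras} sketch of \eqref{eq:deepnn}, using the optimal hyperparameters later reported in Table \ref{tab:hyper} (two ReLU hidden layers of 208 and 63 neurons, dropout of 0.14, and a linear 6-neuron output), could look as follows; the input size is a placeholder, as it depends on the selected input features:
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

n_inputs = 24  # placeholder: depends on the selected features

model = keras.Sequential([
    keras.Input(shape=(n_inputs,)),
    layers.Dense(208, activation='relu'),  # first hidden layer
    layers.Dropout(0.14),
    layers.Dense(63, activation='relu'),   # second hidden layer
    layers.Dropout(0.14),
    layers.Dense(6),  # linear output: hours h+1,...,h+6
])
\end{verbatim}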
\subsection{Training
}
\label{sec:singledetails}
The DNN is trained by minimizing the mean square error\footnote{Note that minimizing the mean square error is equivalent to minimizing the rRMSE metric used throughout the paper to evaluate and compare the models.}. In particular, given the training set $\mathcal{S_T}=\bigl\{(\mathbf{X}_k,\mathbf{I}_k)\bigr\}_{k=1}^N$ of inputs and measured irradiance values, the optimization problem that is solved to train the neural network is:
\begin{mini}
{\vc{W}}{\sum_{k=1}^{N}\|\mathbf{I}_k - F(\vc{X}_k,\vc{W}) \|_2^2,}
{}{}
\end{mini}
\noindent where $F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{6}$ is the neural network map and $\mathbf{W}$ is the set comprising all the weights and bias weights of the network.
\subsubsection{Generalizing across geographical sites}
\label{sec:modelGen}
A key element for the model to forecast without the need of ground data is to be able to generalize across locations. To do so, the proposed model is trained across a small subset of sites so that the model learns to generalize across geographical sites. It is important to note that, while ground data is required for this small subset of locations, the model generalizes across all other geographical locations where ground data is not needed. In particular, as it is shown in the case study for The Netherlands, the number of locations where ground data is required is relatively small, e.g.~3-5 sites.
\subsubsection{Generalizing across predictive horizons}
Enforcing generalization is not only good for obtaining a model that does not require ground data, but in general, it is also beneficial to obtain a DNN that does not overfit and that obtains more accurate predictions \cite{Goodfellow2016}. In particular, as it has been empirically shown in several studies \cite{Lago2018,Lago2018a}, by forcing the network to solve multiple related tasks, e.g.~forecasting multiple sites, the network might learn to solve individual tasks better.
Therefore, to further strengthen the generalization capabilities of the network, the DNN is trained to forecast over the next 6 hours but starting at any hour of the day. As with the geographical site generalization, the goal is to build a DNN that, by performing several related tasks, is able to learn more accurate predictions.
\subsubsection{Implementation details}
{The optimization problem is solved using multi-start optimization and Adam \cite{Kingma2014}, a version of stochastic gradient descent that computes adaptive learning rates for each model parameter. The use of adaptive learning rates is selected for a clear reason: as the learning rate is automatically computed, the time needed to tune the learning rate is smaller in comparison with other optimization methods. Together with Adam, the forecaster also considers early stopping \cite{Yao2007} to avoid overfitting.}
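Continuing the \texttt{Keras} sketch above, the training procedure described in this section (mean square error loss, Adam with the initial learning rate of Table \ref{tab:hyper}, and early stopping on a validation set) can be written as follows; the data arrays and the patience value are placeholders:
\begin{verbatim}
import numpy as np

# Dummy arrays standing in for the real training/validation data
X_train = np.random.rand(1000, n_inputs)
I_train = np.random.rand(1000, 6)
X_val = np.random.rand(200, n_inputs)
I_val = np.random.rand(200, 6)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1.16e-3),
              loss='mse')
stopper = keras.callbacks.EarlyStopping(patience=20,
                                        restore_best_weights=True)
model.fit(X_train, I_train, validation_data=(X_val, I_val),
          epochs=1000, callbacks=[stopper], verbose=0)
\end{verbatim}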
\textred{\subsection{Issues}
Note that the proposed model depends on the forecasts provided by NWP models. As a consequence, if the NWP models perform badly, they might degrade the final performance of the prediction model. For the proposed model, one of the most accurate and well-known NWP forecast models is considered: the ECMWF forecast \cite{ecmwf}. If other NWP models are employed instead, the performance of the model might vary w.r.t.~the results shown in this paper.
}
\subsection{Representation}
Defining by $h$ the current hour, by $\hat{\mathcal{I}}_\mathrm{E}$ the values of the ECMWF forecast, by ${\mathcal{I}}_\mathrm{S}$ the irradiance values obtained from the satellite image, by ${\mathcal{I}}_\mathrm{c}$ the clear-sky irradiance, and by $\hat{\mathcal{I}}$ the forecasted values of the proposed model, the forecasting model can be represented as in Figure \ref{fig:hiddnet}.
\begin{figure*}[htb]
\begin{center}
\input{figs/neuralnet}
\caption{DNN to forecast solar irradiance.}
\label{fig:hiddnet}
\end{center}
\end{figure*}
\noindent In this representation, it was assumed that the optimal depth was 2 hidden layers, and that the optimal past irradiance values are lags 0, 1, and 2 w.r.t.~the current hour $h$; i.e.~$\mathcal{I}_{\mathrm{S},h}$, $\mathcal{I}_{\mathrm{S},h-1}$, $\mathcal{I}_{\mathrm{S},h-2}$; and lag 24 w.r.t.~the 6 prediction hours $h+1,\,\ldots,\,h+6$; i.e.~$\mathcal{I}_{\mathrm{S},h-23},\,\ldots,\,\mathcal{I}_{\mathrm{S},h-18}$.
\section{Case study}
\label{sec:casestudy}
In order to evaluate the proposed model, 30 sites in the Netherlands are considered and the accuracy of the proposed model is compared with that of specific models individually trained using local data.
\subsection{Data description}
The dataset spans four years, i.e.~from 01/01/2014 until 31/12/2017, and comprises, for each of the 30 sites, the following four types of input data:
\begin{enumerate}
\item The historical ground data measured on site.
\item The satellite-based irradiance values.
\item The daily ECMWF forecasts.
\item The deterministic clear-sky irradiance.
\end{enumerate}
{In all four cases, these data represent hourly average values between two consecutive hours. In particular, a variable given at a time step $h$ represents the average variable between hours $h$ and $h+1$, e.g.~the irradiance $\mathcal{I}_{\mathrm{S},12}$ is the average irradiance obtained from satellite images between hours 12 and 13.}
\subsubsection{Data Sources}
For the irradiance values obtained from SEVIRI satellite images,
the processed irradiance values are directly obtained from the \textit{Royal Netherlands Meteorological Institute (KNMI)} via their Cloud Physical Properties model \cite{knmi}.
For the ground measurements, 30 of the meteorological stations in The Netherlands that are maintained by the KNMI \cite{knmi} and that measure irradiance values using pyranometers are considered. In particular, the following 30 stations are employed: Arcen, Berkhout, Cabauw, De Kooy, De Bilt, Deelen, Eelde, Eindhoven, Ell, Gilze-Rijen, Heino, Herwijnen, Hoek van Holland, Hoogeveen, Hoorn (Terschelling), Hupsel, Lauwersoog, Leeuwarden, Lelystad, Maastricht, Marknesse,
Nieuw Beerta, Rotterdam, Schiphol, Stavoren, Twenthe,
Vlissingen, Volkel, Westdorpe, and Wijk aan Zee. \textred{The geographical location of these 30 stations is illustrated in Figure \ref{fig:siteDistribution}.}
The ECMWF forecasts are directly obtained through the ECMWF website \cite{ecmwf}. Finally, for the clear-sky irradiance, the \texttt{python} \texttt{PVLIB} library \cite{Andrews2014} that implements the clear-sky model \cite{Ineichen2002} defined in Section \ref{sec:inputs} is used.
\subsubsection{Data division}
In order to perform the study, the data is divided into three subsets:
\begin{enumerate}
\item Training set (01/01/2014 to 31/12/2015): these 2 years of data are used for training and estimating the various models.
\item Validation set (01/01/2016 to 31/12/2016): a year of data is used to select the optimal hyperparameters and features, and to perform early-stopping when training the network.
\item Test set (01/01/2017 to 31/12/2017): a year of data, which is not used at any step during the model estimation process, is employed as the out-of-sample data to compare the proposed model against the local models.
\end{enumerate}
In addition to the time separation, the data is further divided according to the location:
\begin{enumerate}
\item Of the 30 sites, 5 are used to train the proposed models. In particular, the following 5 were randomly selected: Herwijnen, Wijk aan Zee, Schiphol, Twenthe, and Lelystad.
\item The remaining 25 act as out-of-sample data to show that the model can predict irradiance at any site without the need of local data.
\end{enumerate}
\textred{\noindent This separation is depicted in Figure \ref{fig:siteDistribution}, which represents the geographical distribution of the 30 sites distinguishing between training and test sites.}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.97\columnwidth]{figs/TrainingTestBetter.png}
\end{center}
\caption{Geographical distribution of the 30 sites in the case study. The blue dots are the 5 sites used for estimating the model. The red dots represent the 25 out-of-sample sites to evaluate the model.}
\label{fig:siteDistribution}
\end{figure}
\noindent In short, the proposed model is trained using data from 5 sites spanning three years and it is evaluated in 25 additional locations and using an additional year of data.
It is important to note that the above separation in 5+25 locations only applies for the proposed model. In particular, for the local models used as benchmark, the data division is only performed as a function of time as, by definition, each local model considers only local data.
\subsubsection{Data Preprocessing}
To evaluate the proposed models, the hours of the day for which the irradiance is very small are disregarded. In particular, those hours that correspond with solar elevation angles below $3^\circ$ are disregarded. This limitation on the solar elevation angles implies that the number of forecasts per day available to evaluate the model changes throughout the year; e.g.~while in June the model makes 11-12 forecasts per day, in January that number is reduced to 3-4.
In addition to the above preprocessing step, the hourly time slots that have missing values are also disregarded.
\subsection{Local models}
To compare the proposed forecaster, four types of local models are considered: a persistence model \cite{Diagne2013}, an \textit{autoregressive model with exogenous inputs (ARX)} \cite{Lauret2015}, a \textit{gradient boosting tree (GBT)} algorithm \cite{Chen2016}, and a local neural network \cite{Lauret2015}.
Moreover, in addition to the local models, the ECMWF forecast is also included in the benchmark. By doing so, the accuracy of the time series models and of the NWP forecast can be compared as a function of the prediction horizon.
\subsubsection{Persistence model}
\label{sec:pers}
When evaluating a new model, a standard approach in the literature of irradiance forecasting is to check whether the new model provides better predictions than a trivial model \cite{Diagne2013}. Moreover, the trivial model normally used is a persistence model, which assumes that the clear sky index $k_\mathrm{c}$ does not change from one time interval to the other \cite{Diagne2013}.
In particular, given the irradiance ${\mathcal{I}}_{h}$ at the current hour $h$, the clear sky index at $h$ is defined as the ratio of ${\mathcal{I}}_{h}$ to the clear sky irradiance ${\mathcal{I}}_{\mathrm{c},h}$, i.e.:
\begin{equation}
k_{\mathrm{c},h} = \frac{\mathcal{I}_h}{\mathcal{I}_{\mathrm{c},h}}.
\end{equation}
\noindent Then, defining by ${\mathcal{I}}_{\mathrm{c},h+p}$ the clear sky irradiance at the prediction time $h+p$,
the persistence model forecasts the irradiance ${\mathcal{{I}}}_{h+p}$ at the prediction time $h+p$ as follows:
\begin{equation}
\hat{\mathcal{I}}_{h+p} = k_{\mathrm{c},h}\,\mathcal{I}_{\mathrm{c},h+p}=\frac{\mathcal{I}_h}{\mathcal{I}_{\mathrm{c},h}}\,\mathcal{I}_{\mathrm{c},h+p}.
\end{equation}
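This model translates directly into a one-line function; a minimal sketch:
\begin{verbatim}
def persistence_forecast(I_h, I_c_h, I_c_hp):
    # Scale the clear-sky irradiance at the prediction time h+p
    # by the clear-sky index k_c observed at the current hour h
    return (I_h / I_c_h) * I_c_hp
\end{verbatim}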
\subsubsection{Linear model}
Another standard benchmark choice in the literature of irradiance forecasting is the autoregressive linear model \cite{Lauret2015,Diagne2013}; hence, the second model considered in the comparison is a linear autoregressive model that can optimally select its exogenous inputs. As the model is local, a different model per location, per hour of the day $h$, and per prediction time $h+p$ is considered. Therefore, as the proposed model is evaluated in 25 locations, 6 forecasts per day are made, and each forecast is made for $6$ prediction times, a total of $25\times6\times6=900$ models are estimated.
The exogenous inputs of these models are similar to the DNN, but instead of using the satellite irradiance maps $\mathcal{I}_{\mathrm{S}}$, the models consider the historical irradiance ground measurements $\mathcal{I}_{\mathrm{G}}$. In particular, the model for the prediction time $h+p$ considers the clear-sky irradiance ${\mathcal{I}}_{\mathrm{c},h+p}$ and the ECMWF forecast $\hat{\mathcal{I}}_{\mathrm{E},h+p}$ at the prediction time. For the historical irradiance values $\mathcal{I}_{\mathrm{G}}$, as with the global model and the satellite-based irradiance $\mathcal{I}_{\mathrm{S}}$, the specific lagged values are optimally selected using the feature selection method described in Section \ref{sec:hyper}. \textred{In addition, to ensure that the differences between models are not due to differences in input data, the model is allowed to choose satellite data through the feature selection method.
}
\subsubsection{Gradient boosting tree}
As a third model, the XGBoost algorithm \cite{Chen2016} is considered, a GBT model that predicts data by combining several regression trees. In particular, the model is based on the principle of boosting \cite[Chapter 10]{Hastie2001}, i.e.~combining models with high bias and low variance in order to reduce the bias while keeping a low variance.
{It is important to note that, while several models based on regression trees have been proposed in the literature for forecasting solar irradiance \cite{Voyant2017}, the XGBoost algorithm has, to the best of our knowledge, not yet been used. Nevertheless, including this model in the benchmark was decided for two reasons: (1) it has been shown to outperform other regression tree methods and has recently won several challenges on Kaggle, a site that hosts machine learning competitions \cite{Chen2016}; (2) it has been successfully used in other energy-based forecasting applications, e.g.~forecasting electricity prices \cite{Lago2018a}.}
As with the linear model, a different GBT per location, hour, and prediction time is estimated; i.e.~900 different models are estimated. Similarly, the model inputs are the same as the linear models, i.e.~the clear-sky irradiance ${\mathcal{I}}_{\mathrm{c},h+p}$ and the ECMWF forecast $\hat{\mathcal{I}}_{\mathrm{E},h+p}$ at the prediction time, and the historical irradiance values $\mathcal{I}_{\mathrm{G}}$ optimally selected using the feature selection method. \textred{In addition, to ensure that the differences between models are not due to differences in input data, the model is allowed to choose satellite data through the feature selection method.
}
It is important to note that, as done with the proposed DNN, all the GBT hyperparameters (see \cite{Chen2016}) are optimally selected using the hyperparameter optimization algorithm defined in Section \ref{sec:hyper}.
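A minimal sketch of one such local GBT model with the \texttt{xgboost} library is shown below; the hyperparameter values are illustrative mid-range choices within the search intervals reported in Section \ref{sec:hyperfeat}, and the arrays are dummy stand-ins for the local data of one site, hour, and prediction time:
\begin{verbatim}
import numpy as np
import xgboost as xgb

# Dummy local data for one site, hour, and prediction time
X_site = np.random.rand(500, 10)
I_site = np.random.rand(500)

gbt = xgb.XGBRegressor(n_estimators=300, max_depth=6,
                       learning_rate=0.05)
gbt.fit(X_site, I_site)
I_hat = gbt.predict(np.random.rand(50, 10))
\end{verbatim}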
\subsubsection{Neural network}
As a fourth model, a local DNN that considers very similar inputs, outputs, structure, and training algorithm as the proposed global DNN is considered. The main difference w.r.t.~the proposed DNN is that it considers the local measurements of the irradiance $\mathcal{I}_{\mathrm{G}}$ in addition to the satellite irradiance maps $\mathcal{I}_{\mathrm{S}}$. However, the type and number of hyperparameters that the model optimizes are the same as for the global DNN and they are also optimized using the hyperparameter optimization algorithm defined in Section \ref{sec:hyper}.
The reason for including this model in the case study is that, similar to the linear and the persistence models, neural networks are a standard choice in the literature of solar irradiance forecasting \cite{Deneke2008,Voyant2017}.
As the proposed DNN is evaluated in 25 sites and the model is local, 25 different local DNNs are estimated. Unlike the linear and GBT models, the same DNN is used for the different hours of the day; this was done because it was empirically observed that the distinction of a different DNN per hour of the day led to worse predictive accuracy.
\subsection{Hyperparameter Optimization and Feature Selection}
\label{sec:hyperfeat}
As defined in Section \ref{sec:modelframe}, the hyperparameters and input features of the global DNN are optimally selected according to the geographical location. {In this case study, the range of the hyperparameters considered in the optimization search and their obtained optimal values are listed in Table \ref{tab:hyper}.}
\begin{table}[h!]
\setlength{\abovecaptionskip}{-5pt}
\caption{Optimal hyperparameters for the global DNN.}
\label{tab:hyper}
\small
\renewcommand\arraystretch{1.3}
\begin{center}
\begin{tabular}{l c c}
\hline
\bfseries Hyperparameter &\bfseries Value &\bfseries Search Range\\
\hline
Number of hidden layers & 2 & \{1, 2, 3, 4\} \\
Neurons in $1^\mr{st}$ layer & 208 & [100, 400]\\
Neurons in $2^\mr{nd}$ layer & 63 & [50, 150]\\
Initial Learning Rate & $1.16\times10^{-3}$ & $[10^{-4}, 10^{-2}]$\\
Dropout & 0.14 & [0, 1]\\
\end{tabular}
\end{center}
\end{table}
\noindent In terms of the lagged satellite-based irradiance values, the optimal input features are defined by the irradiance values at lags 0, 1, 2, and 3 w.r.t.~the current hour $h$; i.e.~$\mathcal{I}_{\mathrm{S},h},\,\ldots,\,\mathcal{I}_{\mathrm{S},h-3}$; and at lag 24 w.r.t~the 6 prediction hours $h+1,\,\ldots,\,h+6$; i.e.~$\mathcal{I}_{\mathrm{S},h-23},\,\ldots,\,\mathcal{I}_{\mathrm{S},h-18}$.
For the local models, the hyperparameters and input features are also optimized. However, considering that 900 linear models, 900 GBT models, and 25 local DNNs are used, displaying all their optimal hyperparameters and input features is out of the scope of this paper. Nevertheless, the main results can be summarized as follows:
\begin{enumerate}
\textred{ \item In terms of input features, all the local models (except for the persistence model) performed two types of selection:
\begin{enumerate}
\item Use satellite data in addition to local data.
\item Choose the relevant historical irradiance values.
\end{enumerate}
The addition of satellite data did not improve the performance w.r.t.~using ground data only; therefore, none of the local models considered this information. In terms of ground irradiance values $\mathcal{I}_{\mathrm{G}}$, all the local models consider the irradiance values at lags 0 and 1 w.r.t.~the current hour $h$ and at lag 24 w.r.t.~the prediction hour $h+p$. Most of them also consider the irradiance values at lags 2 and 3 w.r.t.~the current hour $h$; the exceptions are models that predict the solar irradiance at early hours of the day, when lags of 2-3 hours represent irradiance values of 0.}
\item In the case of the local DNNs, the number of hidden layers is 2 for all 25 sites. Moreover, the number of neurons in the first (second) hidden layer varies from 95 to 242 (51 to 199) neurons depending on the site. Similarly, the dropout and the learning rate respectively oscillate between 0 and 0.45, and between $5.825\times10^{-4}$ and $5.800\times10^{-2}$.
\item In the case of the GBT models, the range of the hyperparameters values varies in a larger range, e.g.~the number of trees per model fluctuates between 10 and 1000 and the depth of each tree varies between 1 and 20.
\end{enumerate}
\textred{\subsection{Overall results}
After defining the setup of the case study and describing the selection of hyperparameters and features, in this section the average performance of the global DNN is compared against that of the local models. Particularly, the first metrics to consider when comparing the models are the average metrics, i.e.~rRMSE, forecasting skill $s$, and MBE, across the 25 sites and the 6 prediction times. These average metrics are listed in Table \ref{tab:gencom}, where the forecasting skill was computed using the same window length employed in \cite{Marquez2012}, i.e.~200 samples\footnote{As in \cite{Marquez2012}, the window length for which $s$ was stable was analyzed. Similar to \cite{Marquez2012}, 200 samples were found to be a reasonable value.}.
\begin{table}[h!]
\setlength{\abovecaptionskip}{-5pt}
\caption{Comparison of the average predictive accuracy across sites and prediction times by means of rRMSE, forecasting skill $s$, and MBE. }
\label{tab:gencom}
\small
\renewcommand\arraystretch{1.3}
\begin{center}
\begin{tabular}{l c c c}
\hline
\bfseries Model &\bfseries rRMSE [\%] & \bfseries s [\%] & \bfseries MBE [W/m$^2$] \\
\hline
Global DNN & 31.31 & 22.42 & -1.04\\
Linear & 32.01 & 21.22 & -1.07\\
Local DNN & 32.10 & 19.29 & -1.43\\
ECMWF& 34.94 & 9.75 & -2.52\\
GBT & 35.85 & 9.92 & 1.50\\
Persistence & 41.98 & 0 & 11.60\\
\end{tabular}
\end{center}
\end{table}
From Table \ref{tab:gencom}, several observations can be drawn:
\begin{enumerate}
\item In terms of square errors, i.e.~rRMSE, the predictive accuracy of the proposed global model is slightly better than all the local models and significantly better than some of them, in particular the GBT model or the persistence model. Among the local models, both the linear and local DNN perform the best and the persistence model the worst.
\item This same observation can be inferred from looking at the forecasting skill: the proposed global model performs similar to the linear model, slightly better than the local DNN, and much better than the other models. In addition, when compared across all sites and predictive horizons, all models perform better than the persistence model.
\item In terms of model bias, i.e.~MBE, all models show a very small bias, which indicates that they are essentially unbiased. Particularly, considering that the average irradiance of the dataset is approximately 350\,W/m$^2$, the bias of all the models is around 0.3-0.8\% of the average irradiance, which is negligible. The exception to this is the persistence model, whose bias of 3\% of the average irradiance is a bit larger, but still quite small.
\end{enumerate}
}
\textred{\subsection{Comparison with previously validated forecast models}
While the proposed global model seems to be a good replacement for the local models considered in this paper, it is also very important to establish its quality w.r.t.~previously validated forecast models from the literature. As explained in Section \ref{sec:metrics}, while this comparison cannot be done fairly using a metric like rRMSE, it can be roughly assessed using the forecasting skill $s$. In particular, using the results of \cite{Marquez2012}, we can establish a comparison between the proposed global model, the local NARX model proposed in \cite{Marquez2012}, and the cloud motion forecast of \cite{Perez2010}. As both models from the literature were originally only evaluated for 1-hour ahead forecasts, we also limit the comparison of the global model to that interval. The comparison is listed in Table \ref{tab:complit}.
\begin{table}[h!]
\setlength{\abovecaptionskip}{-5pt}
\caption{Comparison of the average predictive accuracy between the global model, a NARX model from the literature, and a cloud motion forecast from the literature. The comparison is done for 1-hour ahead forecasts by means of the forecasting skill $s$.}
\label{tab:complit}
\small
\renewcommand\arraystretch{1.3}
\begin{center}
\begin{tabular}{l c}
\hline
\bfseries Model & \bfseries s [\%] \\
\hline
Global DNN & 10 \\
NARX \cite{Marquez2012} & 12 \\
Cloud motion \cite{Perez2010} & 8 \\
\end{tabular}
\end{center}
\end{table}
What can be observed from these results is that the overall quality of the proposed global model for 1-hour ahead forecasts is very similar to that of the models from the literature. Therefore, as initially observed when comparing the average performance of the global model w.r.t.~the local models considered in this paper, the proposed global model seems to be an excellent candidate to save the operational costs of installing local sensors and collecting ground measurements.
}
\textred{
\subsection{Comparison across prediction horizons}
A third step required to analyze the performance of the proposed global model is to verify that its average performance holds across all prediction times. In particular, it is important to check whether the global model can build accurate predictions at all short-term horizons. To perform this comparison, the two metrics used for comparing predictive accuracy, i.e.~rRMSE and the forecasting skill $s$, are evaluated for each benchmark model and predictive horizon. This comparison is listed in Table \ref{tab:combySA} and illustrated in Figure \ref{fig:BySA}.
\begin{table}[h!]
\setlength{\abovecaptionskip}{-5pt}
\caption{Comparison of the predictive accuracy of the various forecasters across the 6 prediction times by means of rRMSE and forecasting skill $s$. The best model is marked with bold font.}
\label{tab:combySA}
\small
\setlength{\tabcolsep}{4pt}
\renewcommand\arraystretch{1.3}
\begin{center}
\begin{tabular}{l| c c c c c c}
\bfseries Horizon [h]& 1 & 2 & 3 &4 &5 &6 \\
\hline
\bfseries Model &\multicolumn{6}{c}{ rRMSE [\%]}\\
\hline
Global DNN & \bfseries 25.07& \bfseries30.18& \bfseries32.36& \bfseries34.19& \bfseries36.10& 38.71\\
Linear & 26.67& 31.36& 33.11& 34.63& 36.44& \bfseries 38.35\\
Local DNN & 26.82 & 30.90 & 32.91 & 34.67 & 36.68 & 39.88 \\
GBT & 30.05& 34.78& 36.95& 39.04& 40.67& 43.59 \\
Persistence & 28.74& 36.89& 42.29& 47.28& 52.05& 56.69\\
ECMWF & 35.91& 35.01& 35.12& 35.91& 37.45& 39.28 \\
\hline
\bfseries &\multicolumn{6}{c}{ $s$ [\%]}\\
\hline
Global DNN & \bfseries 9.98 & \bfseries18.38 & \bfseries23.40 & \bfseries27.04 & \bfseries 28.30 & 27.38 \\
Linear & 7.67 & 15.71 & 21.73 & 26.03 & 27.76 & \bfseries28.42 \\
Local DNN & 6.34 & 16.98 & 22.13 & 22.64 & 25.13 & 22.51 \\
GBT & -5.18 & 6.06 & 12.23 & 15.29 & 16.00 & 15.11 \\
Persistence & 0& 0& 0& 0& 0& 0\\
ECMWF & -29.07 & 4.68 & 16.77 & 22.74 & 23.23 & 20.19 \\
\end{tabular}
\end{center}
\end{table}
\setlength{\figW}{0.49\textwidth}
\setlength{\figH}{0.66\figW}
\begin{figure}[htb]
\begin{center}
\begin{subfigure}{\columnwidth}
\input{figs/BySA}
\caption{Comparison by means of rRMSE.}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\input{figs/FSkill_BySA}
\caption{Comparison by means of the forecasting skill $s$.}
\end{subfigure}
\caption{Comparison of the predictive accuracy of the various forecasters across the 6 prediction times.}
\label{fig:BySA}
\end{center}
\end{figure}
As can be seen from Table \ref{tab:combySA} and Figure \ref{fig:BySA}, the global model is the best model for the first 5 prediction horizons (both in terms of rRMSE and forecasting skill $s$), and the second best (very close to the best one) for the last prediction horizon. Based on these results, not only is the global model overall equal to or better than the local models, but it also performs equally well or better than them across all prediction horizons. As a result, the proposed model is a very promising candidate to replace the local models and to save operational costs without compromising the forecasting quality.
In addition to this analysis of the global model performance, four additional interesting observations can be made:
\begin{enumerate}
\item The persistence model is the worst across all prediction horizons except the first one. This result agrees with previous results from the literature \cite{Diagne2013} that stated that the persistence model only provides reasonable results for prediction horizons shorter than 1 hour.
\item Among the local models, the linear and DNN models show the best performance across all 6 prediction horizons.
\item The ECMWF forecast improves its accuracy relative to the other models as the prediction time increases. In particular, in the case of the last prediction time, the ECMWF forecast has almost the same performance as the global DNN and the linear models. Considering previous results from the literature \cite{Diagne2013}, this is to be expected, as NWP models start to perform better than time series models for prediction horizons larger than 4-6 hours.
\item For 1-hour ahead predictions, the ECMWF model is the worst; in particular, its $s$ value for the first prediction horizon shows that the weather-based model is much worse than a simple persistence model.
\end{enumerate}
}
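\noindent For reference, a naive persistence baseline for a horizon of $p$ hours can be sketched as follows; whether the persistence model of the study is this naive variant or a clear-sky-normalized one is an assumption here, made only for illustration.
\begin{verbatim}
import numpy as np

def persistence_forecast(y, p):
    # Naive persistence: the forecast for hour h + p is simply the
    # measured irradiance at the current hour h.
    y = np.asarray(y)
    return y[:-p] if p > 0 else y.copy()

# Example: 3-hour ahead forecasts and their aligned targets.
# y_pred, y_true = persistence_forecast(y, 3), y[3:]
\end{verbatim}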
\subsection{Comparison across geographical sites}
The final step to analyze the equal or better performance of the global model is to validate whether its quality is maintained across the 25 different sites. In particular, it is important to check whether the global model can generalize and build accurate predictions across all geographical locations. For the sake of simplicity, this comparison is only done in terms of the rRMSE metric; in particular, as was the case with all previous results, the values of the forecasting skill $s$ fully agree with the rRMSE across all locations, and they are thus a bit redundant. The comparison across the geographical locations is listed in Table \ref{tab:combySite} and illustrated in Figure \ref{fig:BySite}.
\newcommand{\cellcolor{greenPlots!60!white}}{\cellcolor{greenPlots!60!white}}
\newcommand{\cellcolor{redPlots!60!white}}{\cellcolor{redPlots!60!white}}
\begin{table*}[h]
\setlength{\abovecaptionskip}{-5pt}
\caption{Comparison of the predictive accuracy of the various forecasters by means of rRMSE. The best model is marked with bold font.}
\label{tab:combySite}
\footnotesize
\setlength{\tabcolsep}{3pt}
\renewcommand\arraystretch{1.3}
\begin{center}
\begin{tabular}{l| c c c c c c c c c c c c c}
&\multicolumn{13}{c}{\bfseries Site}\\
\hline
\bfseries Model &
Arcen & Berkhout & Cabauw & De Kooy & Lauwers. & Deelen & Maastric. & Eindhov. & Westdorpe & Gilze-R. & Heino & Hoek v.~H.&Ell \\
\hline
Global &\bfseries 32.39& \bfseries30.24& \bfseries30.75& \bfseries29.49&\bfseries30.32&\bfseries34.55&\bfseries30.82&32.11&32.07&\bfseries 32.37&32.80&\bfseries29.24&\bfseries32.42 \\
Linear & 33.03&31.05&31.01& 29.87&31.16&35.47&31.73&32.28&33.11& 32.89&\bfseries32.75&30.53&32.50 \\
DNN &33.43&32.77&31.27& 31.14&30.95&35.75&31.48&\bfseries32.03&\bfseries31.93& 33.04&32.89&29.66&32.77 \\
GBT & 35.80&35.41&35.20& 33.45&35.79&39.62&35.68&37.22&36.33& 36.30&36.88&33.61&37.35 \\
Persistence &43.63 &41.04 &41.51 & 41.18 &41.14 &45.47 &41.20 &43.20 &40.28 & 42.86 &43.80 &40.59 &42.65
\\
ECMWF & 35.21&34.09&33.95& 32.94&33.83&38.61&34.93&34.95&36.63& 35.73&35.32&33.39&36.12
\\
\hline
\hline
\bfseries Model &Hoorn & Hoogev. & Hupsel & De Bilt & Leeuward. & Eelde & Marknes. & Rotterd. & Stavoren & Vlissing. & Volkel & Nieuw B.\\
\hline
Global & \bfseries
29.63&\bfseries31.44&32.88&\bfseries31.68&\bfseries30.16&\bfseries31.58&\bfseries31.19&\bfseries30.21&\bfseries29.38&\bfseries30.81&\bfseries32.46&32.37 \\
Linear & 30.63&32.05&\bfseries32.82&32.11&30.51&32.20&31.30&31.54&30.51&32.23&33.04&33.52 \\
DNN & 30.24&33.05&32.83&32.02&31.97&31.62&31.72&31.25&29.85&31.92&34.68&\bfseries32.34 \\
GBT & 34.46&36.44&36.99&35.94&35.20&36.19&35.30&34.53&34.14&35.50&36.50&36.98\\
Persistence & 40.35 &42.51 &42.42 &43.61 &40.80 &42.04 &41.24 &40.92 &40.01 &41.11 &42.71 &43.58 \\
ECMWF & 33.62&35.19&35.11&34.69&34.17&35.34&34.85&34.92&33.35&35.05&35.33& 36.27 \\
\end{tabular}
\end{center}
\end{table*}
\setlength{\figW}{\textwidth}
\setlength{\figH}{0.4\figW}
\begin{figure*}[!htb]
\begin{center}
\input{figs/BySite}
\caption{Comparison of the predictive accuracy of the various forecasters across the 25 locations.}
\label{fig:BySite}
\end{center}
\end{figure*}
As can be seen from Table \ref{tab:combySite} and Figure \ref{fig:BySite}, the global model maintains its performance across all geographical locations. In particular, analyzing these results, it is clear that the global model performs equally well or better than the local models across all 25 sites. As listed in Table \ref{tab:combySite}, the global DNN is the best model for 20 of the 25 locations, and shows an rRMSE performance that is very similar to that of the best model in the remaining 5 locations. \textred{Therefore, it can again be concluded that the global model is a good replacement for the local models, as the performance of the former is, at least, equal to the performance of the latter.}
\subsubsection{Geographical dependences}
\textred{An interesting study is to analyze whether the rRMSE has any geographical dependence, i.e.~it might be possible that geography or climate have an effect on the rRMSE. To study this effect, a color map with the geographical distribution of the rRMSE can be used. Such a plot is represented in Figure \ref{fig:siteDistribution2}, which depicts the geographical distribution of the rRMSE for the 6 different models. As can be observed, there is a clear difference between coastal and inland sites, with the latter displaying rRMSEs that are consistently higher. While this difference is not large, it does seem to indicate that forecasting solar irradiance at inland locations is slightly harder than at coastal sites. While analyzing the causality behind this difference is out of the scope of this paper, it is worth noting possible reasons that might cause it; in particular, differences in climate, altitude, or simply differences in irradiance ranges might explain this effect.}
\begin{figure*}[h!]
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/GlobalBetter.png}
\caption{Global}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/LinearBetter.png}
\caption{Linear}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/DNNBetter.png}
\caption{Local DNN}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/ECMWFBetter.png}
\caption{ECMWF}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/GBTBetter.png}
\caption{GBT}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\columnwidth]{figs/PersistentBetter.png}
\caption{Persistent}
\end{subfigure}
\caption{Geographical distribution of the rRMSE based on the 25 out-of-sample sites. Across the 6 models, a clear difference can be observed between inland locations and coastal locations, with the latter having lower rRMSEs.}
\label{fig:siteDistribution2}
\end{figure*}
\subsubsection{rRMSE distribution}
\textred{A second interesting study is to analyze the rRMSE distribution across sites. In particular, while the variability of the rRMSE can be visually observed in Figure \ref{fig:BySite}, it is interesting to analyze its empirical distribution. To perform this analysis, the histogram of the rRMSE across the 25 sites is built for each of the 6 models. This is depicted in Figure \ref{fig:BySiteDist}, where each histogram bin represents a width of $0.5\%$ rRMSE. As can be observed, the rRMSE distribution is very similar across the 6 models, with an interval spanning a width of 3\%-4\% rRMSE where the distribution is quite homogeneous and uniform, and an outlier on the right side representing a location with a much worse rRMSE. As can be seen from Figures \ref{fig:BySite} and \ref{fig:siteDistribution2}, this worst-case site is the same for all models: Deelen. Based on this result it can be concluded that, while the rRMSE is site-dependent, its range of variability is small.
}
\setlength{\figW}{0.33\textwidth}
\setlength{\figH}{\figW}
\begin{figure*}[h!]
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistGlobal}
\caption{Global}
\end{subfigure} \hfill
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistLinear}
\caption{Linear}
\end{subfigure} \hfill
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistLocalDNN}
\caption{Local DNN}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistECMWF}
\caption{ECMWF}
\end{subfigure} \hfill
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistGBT}
\caption{GBT}
\end{subfigure} \hfill
\begin{subfigure}{0.32\textwidth}
\input{figs/accuracyDistPersistent}
\caption{Persistent}
\end{subfigure}
\caption{Distribution of the predictive accuracy of the global model across the 25 locations.}
\label{fig:BySiteDist}
\end{figure*}
\textred{\subsection{Discussion}
In the previous sections, the performance of the global model has been compared to that of the local models and to that of validated models from the literature. Based on the obtained results, one can conclude that: (1) the global model is slightly better than the best of the local models; (2) it performs similarly to other models from the literature; (3) it provides unbiased forecasts.
While based on these results it cannot be stated that the proposed model is significantly better than all other models, it is important to keep in mind that its main purpose is not to be the best, but to perform as well as local models so that the operational costs of installing and maintaining a wide sensor network are avoided. In that respect, it can be concluded that the proposed global model is an excellent replacement for the local models: the model is overall slightly better, and performs better or equally well across all individual geographical locations and prediction times.}
\section{Conclusion}
\label{sec:conclusion}
In this paper, a general model for short-term forecasting of the global horizontal irradiance has been proposed. The main features of the model are that it replaces ground measurements by satellite-based irradiance values and that, unlike local models previously proposed in the literature, it does not need local measurements in each location where a forecast is needed.
The proposed model was shown to be equal to or better than local models typically used in the literature and, in turn, to be an excellent replacement for these local models in order to save the operational costs of installing local sensors and gathering ground data.
In future research, the current work will be expanded with two further investigations. First, the model will be extended to larger regions to analyze whether it generalizes to larger geographical areas than The Netherlands. Second, the model accuracy will be improved by adding other relevant sources of input data, e.g.~weather-based input data like humidity levels or ambient temperature.
\section*{Acknowledgment}
This research has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 675318 (INCITE).
\section*{Copyright Information}
\noindent \copyright~2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license \url{http://creativecommons.org/licenses/by-nc-nd/4.0/}.
\vspace{0.25em}
\noindent \hspace{-0.4em}\doclicenseImage[imagewidth=6em]
\section*{References}
\section{Introduction}
\subsection{Selection-mutation-competition models}\label{subsec:sel-mut-comp}
\gr{The phenotypic diversity of a species impacts its ability to evolve. In particular, the importance of the variance of the population along a phenotypic trait is illustrated by the \emph{fundamental theorem of natural selection} \cite{Fisher}, and the \emph{breeder's equation} \cite{Lush}: the evolution speed of a population along a one-dimensional fitness gradient (or under artificial selection) is proportional to the variance of the initial population. Recently, the phenotypic variance of populations has also come to light as an important element to describe the evolutionary dynamics of ecosystems (where many interacting species are considered) \cite{Violle,Bolnick,Vellend}.}
\medskip
\gr{Over the last decade, the topic of \emph{Evolutionary Rescue} has emerged as an important question \cite{Bell,Carlson,Gonzales} (see also the seminal work of Luria and Delbr\"uck \cite{Luria}), and has led to a new interest in the phenotypic distribution of populations, beyond phenotypic variance}. Evolutionary Rescue is concerned with a population living in an environment that changes suddenly. The population will survive either if some individuals in the population carry an unusual trait that turns out to be successful in the new environment, or if new mutants able to survive in the new environment appear before the population goes extinct (see \cite{Martin} for a discussion on the relative effect of \emph{de novo mutations} and \emph{standing variance} in Evolutionary Rescue). In any case, the fate of the population will not be decided by the properties of the bulk of its density, but rather by the properties of the tail of the initial distribution of the population, close to the favourable traits for the new environment. A first example of such a problem comes from emerging diseases \cite{Gandon}: animal pathogens are sometimes able to infect humans. This phenomenon, called zoonosis, is the source of many human epidemics: HIV, SARS, Ebola, MERS-CoV, etc. A zoonosis may happen if a pathogen that reaches a human has the unusual property of being adapted \gr{to this new human host.} A second example comes from the emergence of microbes resistant to an antimicrobial drug that is suddenly spread in the environment of the microbe. This second phenomenon can easily be tested experimentally \cite{Bell,Toprak}, and \gr{has major public health implications} \gr{\cite{Canton}}.
\gr{Most papers devoted to the genetic diversity of populations structured by a continuous phenotypic trait describe the properties of mutation-selection equilibria. It is however also interesting to describe the genetic diversity of populations that are not at equilibrium (\emph{transient dynamics}):} pathogen populations, for instance, are often in transient situations, either invading a new host, or being eliminated by the immune system. We refer to \cite{Hastings} for a review on transient dynamics in ecology. For asexual populations \gr{structured by a continuous phenotypic trait}, several models exist, corresponding to different biological assumptions \cite{Champagnat}. If the mutations are modeled by a diffusion, the steady populations \gr{(for a model close to \eqref{eqq0}, but where mutations are modelled by a Laplacian)} are Gaussian distributions \cite{Kimura, Burger}. \gr{Furthermore, \cite{Alfaro,Coville} have considered some transient dynamics for this model. In the model that we will consider (see \eqref{eqq0}), the mutations are modelled by a non-local term. It was shown in \cite{Burger2} (see also \cite{Burger}) that mutation-selection equilibria are then Cauchy profiles (under some assumptions), and this result has been extended to more general mutation kernels in \cite{Calsina}, provided that the mutation rate is small enough.} Finally, let us notice that the case of sexual populations is rather different, since recombinations by themselves can imply that a \emph{mutation-recombination equilibrium} exists, even without selection. We refer to the infinitesimal model \cite{Bulmer}, and to \cite{Turelli} for some studies on the phenotypic distribution of sexual species in a context close to the one presented here for asexual populations.
\medskip
In this article, we consider a population consisting of individuals structured \gr{by} a quantitative phenotypic trait $x \in I$ ($I$ open interval of $\mathbb{R}$ containing $0$), and denote by $f : = f(t,x) \ge 0$ its density. Here, the trait $x$ is fully inherited by the offspring (if no mutation occurs), so that $x$ is indeed rather a breeding value than a phenotypic trait (see \cite{Mather}). We assume that the individuals reproduce with a rate $1$, and die at a rate
\[x^{2} + \int_{I}f(t,y)\,\mbox{d}y.\]
This means that the individuals with trait $x=0$ are those who are best adapted to their environment,
and that the fitness decreases like a parabola around this optimal trait (this is expected in the surroundings
of a trait of maximal fitness). It also means that the strength of the competition modeled by the logistic term
is identical for all traits. When an individual of trait $x\in I$ gives birth, we assume that the offspring will have the trait $x$ with probability $1-\varepsilon$, and a different trait $x'$ with probability $\varepsilon\in(0,1)$. The parameter $\varepsilon$ is then the probability that a mutation affects the phenotypic trait of the offspring. We can now define the growth rate of the population of trait $x$ (that is, the rate of \emph{births without mutation} minus the death rate) as
$$ r_\varepsilon(t,x) = 1-\varepsilon -x^{2} - \int_{I}f(t,y)\,\mbox{d}y. $$
When a mutation affects the trait of the offspring, we assume that the trait $x'$ of the mutated offspring is drawn from a law over the set of phenotypes $I\subset \mathbb R$ with a density $\gamma := \gamma (x)\in L^1(I)$. The function $\gamma$ then satisfies
\[\gamma(x)\ge 0,\quad \int_I \gamma(x)\, dx = 1,\]
and we assume moreover that $\gamma$ is bounded, $C^{1}$, with bounded derivative, and strictly positive on $I$. The main assumption here is that the law of the trait of a mutated offspring does not depend on the trait of its parent. This classical assumption, known as \emph{house of cards}, is not the most realistic, but it can be justified when the mutation rate is small \cite{Burger} (see also \cite{Calsina}\gr{)}. All in all, we end up with the following equation:
\begin{equation}
\label{eqq0}
\frac{\partial f_\varepsilon(t,x)}{\partial t}= r_\varepsilon(t,x) \, f_\varepsilon(t,x) +\varepsilon \,\gamma(x)\,\int_{I}f_\varepsilon(t,y)\,\mbox{d}y.
\end{equation}
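Let us note in passing an elementary identity which clarifies the structure of \eqref{eqq0}: integrating \eqref{eqq0} over $I$ (formally, assuming enough integrability of $f_\varepsilon$), the two mutation terms cancel, and the total population $\mathcal I_\varepsilon(t):=\int_{I}f_\varepsilon(t,y)\,\mbox{d}y$ satisfies
\[\frac{d}{dt}\,\mathcal I_\varepsilon(t)=\big(1-\mathcal I_\varepsilon(t)\big)\,\mathcal I_\varepsilon(t)-\int_{I}x^2\,f_\varepsilon(t,x)\,\mbox{d}x,\]
that is, a logistic equation for the total mass, corrected by a selection term penalizing maladapted traits.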
This paper is devoted to the study of the asymptotic behaviour of
the solutions of equation \eqref{eqq0} when $\varepsilon$ is small
and $t$ is large, and it is organized as follows. In the rest of Section
1, the main results are quoted, first in an informal way, and then as
rigorous statements. Section 2 contains the proof of Theorem
\ref{theorem1} and its corollary and, finally, in Section 3, Theorem
\ref{thm:smallt} is proved.
\subsection{Asymptotic study of the model}\label{subsec:asymptotics}
When we consider the solutions of \eqref{eqq0}, two particular profiles naturally appear:
\begin{itemize}
\item \emph{A Cauchy profile:} For a given mutation rate $\varepsilon>0$ small enough, one expects that $f_\varepsilon(t,x)$ will converge, as $t$ goes to infinity, to the unique steady-state of \eqref{eqq0}, which is the following Cauchy profile
\begin{equation}
\label{eqq2}
f_\varepsilon(\infty,x) := \frac{\varepsilon\, \gamma(x)\, \mathcal I_\varepsilon(\infty)}{\mathcal I_\varepsilon(\infty) - (1-\varepsilon) + x^2} ,
\end{equation}
where $\mathcal I_\varepsilon(\infty)$ is such that $\int_I f_\varepsilon(\infty,x)\,dx=\mathcal I_\varepsilon(\infty)$.
This steady-state of \eqref{eqq0} is the so-called \emph{mutation-selection equilibrium} of the \emph{House of cards} model \eqref{eqq0}, which has been introduced in \cite{Burger2} (we also refer to \cite{Burger} for a broader presentation of existing results).
\smallskip
\item \emph{A Gaussian profile:} If $\varepsilon=0$, the solution of (\ref{eqq0}) can be written
\begin{equation}
\label{eqq1}
f_0(t,x) = f(0,x) \, e^{- \int_0^t \mathcal I_0(s)\, ds + t - t\,x^2} ,
\end{equation}
where $\mathcal I_0(t):=\int_I f_0(t,x)\,dx$, so that a Gaussian-like behavior (with respect to $x$) naturally appears in this case. Surprisingly, we are not aware of any reference to this property in the population genetics literature.
\end{itemize}
\medskip
We will show that, as suggested by the above arguments, we can describe the phenotypic distribution of the population, that is $x\mapsto f_\varepsilon(t,x)$, when either $t\gg 1$ (large time for a given mutation rate $\varepsilon>0$), or $0\leq\varepsilon\ll 1$ (small mutation rate, for a given time interval $t\in [0,T]$). Before providing the precise statements of our results (see Subsection~\ref{subsec:rigorous}), we will briefly describe them here, and illustrate them with numerical simulations. The numerical simulations presented in Fig.~\ref{fig1} and Fig.~\ref{fig2} are obtained thanks to a finite difference scheme (explicit Runge-Kutta in time), and we illustrate our results with a single simulation of \eqref{eqq0} with $\varepsilon=10^{-2}$, $I=[-3/2,3/2]$, $\gamma(x)=\frac 1{40\pi}e^{\frac{-x^2}{20}}$ and $f_\varepsilon(0,x)=\Gamma_2(\varepsilon,x-1)$ (see the definition of $\Gamma_2$ in eq.~(\ref{eqq4}) below). The initial condition corresponds to a population at the mutation-selection equilibrium whose environment suddenly changes (the optimal trait, originally at $x=1$, moves to $x=0$ at $t=0$). This example is guided by the Evolutionary Rescue experiments described in Subsection~\ref{subsec:sel-mut-comp}, where the sudden change is obtained by the addition of, e.g., salt or antibiotic to a bacterial culture.
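For the reader's convenience, a minimal sketch of such a simulation is given below. The grid size, the time step, the trapezoidal quadrature of the non-local terms, and the classical fourth-order Runge-Kutta scheme are our own illustrative choices, not necessarily those used to produce Fig.~\ref{fig1} and Fig.~\ref{fig2}.
\begin{verbatim}
import numpy as np

eps = 1e-2
x = np.linspace(-1.5, 1.5, 601)          # trait grid on I = [-3/2, 3/2]
gamma = np.exp(-x**2 / 20.0) / (40.0 * np.pi)
g0 = 1.0 / (40.0 * np.pi)                # gamma(0)
f = eps * g0 / ((g0 * np.pi * eps)**2 + (x - 1.0)**2)  # Gamma_2(eps, x - 1)

def rhs(f):
    # Right-hand side of (eqq0): r_eps(t,x) f + eps gamma(x) int f.
    mass = np.trapz(f, x)
    return (1.0 - eps - x**2 - mass) * f + eps * gamma * mass

dt, T = 1e-2, 1e4                        # T can be pushed to larger times
for n in range(int(T / dt)):
    k1 = rhs(f)                          # classical RK4 time step
    k2 = rhs(f + 0.5 * dt * k1)
    k3 = rhs(f + 0.5 * dt * k2)
    k4 = rhs(f + dt * k3)
    f = f + (dt / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)

# The L1 distances plotted in Fig. 2 can then be computed as
# np.trapz(np.abs(f - profile), x) for profile = Gamma_1 or Gamma_2.
\end{verbatim}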
\medskip
We describe two phases of the dynamics of the population:
\begin{itemize}
\item \emph{Large time: Cauchy profile.} We show that $f_\varepsilon(t,x)$ is asymptotically (when the mutation rate $\varepsilon>0$ is small) close to
\begin{equation}
\label{eqq4}
\Gamma_2(\varepsilon,x) =\frac{\varepsilon\, \gamma(0)}{\gamma(0)^2\pi^2\,\varepsilon^2 + x^2},
\end{equation}
provided $t\gg \varepsilon^{-4}$\gr{. The population then follows a time-independent Cauchy distribution for large times}. This theoretical result is consistent with the \gr{numerical} results: we see in Fig.~\ref{fig1} that $f_\varepsilon(t,\cdot)$ is well described by $\Gamma_2(\varepsilon,\cdot)$ as soon as $t\geq 10^5$, which is confirmed by the value of $\|f_\varepsilon(t,\cdot)-\Gamma_2(\varepsilon,\cdot)\|_{L^1(I)}$ for $t\geq 10^5$ given by Fig.~\ref{fig2}.
\medskip
\item \emph{Short time: Gaussian profile.} We also show that $f_\varepsilon(t,x)$ is asymptotically (when the mutation rate $\varepsilon>0$ is small) close to
\begin{equation}
\label{eqq3}
\Gamma_1(t,\varepsilon,x) = \frac{f(0,x)\, \sqrt t }{f(0,0)\int_I e^{-x^2}\,dx}e^{-x^2 \,t},
\end{equation}
provided $1\ll t\ll \varepsilon^{-2/3}$\gr{. The population then has a Gaussian-type distribution for} short (but not too short) times. This theoretical result is consistent with the \gr{numerical} results: we see in Fig.~\ref{fig1} that $f_\varepsilon(t,\cdot)$ is well described by $\Gamma_1(t,\varepsilon,\cdot)$ for $t\in[10^2,10^4]$, which is confirmed by the value of $\|f_\varepsilon(t,\cdot)-\Gamma_1(t,\varepsilon,\cdot)\|_{L^1(I)}$ for $t\in[10^2,10^4]$ given by Fig.~\ref{fig2}.
\end{itemize}
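Let us point out a simple consistency check on the mass of the profile $\Gamma_2$: a direct computation over the whole line (a good approximation of the integral over $I$ when $\varepsilon\to 0$) gives
\[\int_{\mathbb R}\Gamma_2(\varepsilon,x)\,dx=\int_{\mathbb R}\frac{\varepsilon\,\gamma(0)}{\gamma(0)^2\pi^2\varepsilon^2+x^2}\,dx=\frac{\varepsilon\,\gamma(0)\,\pi}{\gamma(0)\,\pi\,\varepsilon}=1,\]
which is consistent with the fact that the total population at the mutation-selection equilibrium converges to $1$ as $\varepsilon\to 0$ (see Proposition \ref{proposition2} below).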
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.3]{b0.jpg}\quad \includegraphics[scale=0.3]{b10.jpg}\quad
\includegraphics[scale=0.3]{b100.jpg}\quad
\includegraphics[scale=0.3]{b1000.jpg}\quad \includegraphics[scale=0.3]{b10000.jpg}\quad\includegraphics[scale=0.3]{b30000.jpg}\quad
\includegraphics[scale=0.3]{b100000.jpg}\quad
\includegraphics[scale=0.3]{b175000.jpg}
\caption{The different graphs correspond to different time points, from $t=0$ to $t=175\,000$,
of the same simulation of \eqref{eqq0} for $\varepsilon=10^{-2}$ (see the text for a complete description). In each of these plots, the blue (resp. red, black) line represents $x\mapsto f_\varepsilon(t,x)$ (resp. $x\mapsto\Gamma_1(t,\varepsilon,x)$, $x\mapsto\Gamma_2(\varepsilon,x)$). Note that in this figure, the scales of both axes change from one graph to the other, to accommodate the dynamics of the solution $f(t,\cdot)$.}\label{fig1}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.4]{distances.jpg}
\caption{Simulation of \eqref{eqq0} with $\varepsilon=10^{-2}$ (see the text for a complete description). The red line represents $\|f_\varepsilon(t,\cdot)-\Gamma_1(t,\varepsilon,\cdot)\|_{L^1(I)}$, while the black line represents $\|f_\varepsilon(t,\cdot)-\Gamma_2(\varepsilon,\cdot)\|_{L^1(I)}$.}\label{fig2}
\end{center}
\end{figure}
Another way to look at these results is to \gr{consider} $t\geq 0$ and $\varepsilon>0$ as two parameters, and to see the approximations presented above as approximations of $f_\varepsilon(t,\cdot)$ for some set of parameters: $f_\varepsilon(t,\cdot)\sim_{\varepsilon\to 0} \Gamma_2(\varepsilon,\cdot)$ for $(t,\varepsilon)\in\{(\tilde t,\tilde\varepsilon);\tilde t\gg \tilde{\varepsilon}^{-4}\}$, while $f_\varepsilon(t,\cdot)\sim_{\varepsilon\to 0} \Gamma_1(t,\varepsilon,\cdot)$ for $(t,\varepsilon)\in\{(\tilde t,\tilde\varepsilon);1\ll\tilde t\ll \tilde{\varepsilon}^{-2/3}\}$. We have represented these sets in Fig~\ref{fig3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.4]{set3.jpg}
\caption{Representation of the set $\{(\tilde t,\tilde\varepsilon);\tilde t\gg \tilde{\varepsilon}^{-4}\}$ (in blue), where the approximation $f_\varepsilon(t,\cdot)\sim_{\varepsilon\to 0} \Gamma_2(\varepsilon,\cdot)$ holds provided that $\varepsilon>0$ is small enough; and of the set $\{(\tilde t,\tilde\varepsilon);1\ll\tilde t\ll \tilde{\varepsilon}^{-2/3}\}$ (in red), where the approximation $f_\varepsilon(t,\cdot)\sim_{\varepsilon\to 0} \Gamma_1(t,\varepsilon,\cdot)$ holds provided that $\varepsilon>0$ is small enough.}\label{fig3}
\end{center}
\end{figure}
As described in Subsection~\ref{subsec:sel-mut-comp}, the phenotypic distribution of species is involved in many ecological and epidemiological problems. Our study is a general analysis of this problem and we do not have a particular application in mind. An interesting and (to our knowledge) new feature described by our study is that the tails of the trait distribution in a population can change drastically between ``short times'', that is $1\ll t\ll \varepsilon^{-2/3}$, and ``large times'', that is $t\gg \varepsilon^{-4}$: the distribution is initially close to a Gaussian distribution, with thin tails, and then converges to a thick-tailed Cauchy distribution. \gr{This result could have significant consequences for \emph{evolutionary rescue}, where the tails of the distribution play an important role. Quantifying the effect of this property of the tails of the distributions would however require further work, in particular on the impact of stochasticity (the number of pathogens is typically large, but finite). The plasticity of the pathogen (see \cite{Chevin}) may also play an important role.}
\subsection{Rigorous statements} \label{subsec:rigorous}
Here we state the two main theorems of the paper\gr{, each of them followed by a corollary}. To do so we start by defining the linear operator
$$
(A_{\varepsilon}f) (x):=(1-\varepsilon)f(x)
-x^{2}\,f(x)+\varepsilon \gamma(x)\,\int_{I}f(y)\mbox{d}y
$$
and denoting by $\lambda_{\varepsilon}$ the dominant eigenvalue of $A_{\varepsilon}$ and by $\psi_{\varepsilon}(x)=\frac{\varepsilon \gamma(x)}{\lambda_{\varepsilon}-(1-\varepsilon)+x^2}$ the corresponding eigenvector (see Proposition \ref{proposition1}).
\begin{theorem}
\label{theorem1} Let us assume that the initial datum $f_0\ge 0$ is integrable
on $I$ ($I= ]a,b[$, $-\infty \le a < b \le +\infty$), and that $f_0$ is not identically (i.e.~a.e.) equal to $0$.
Then the initial value problem for (\ref{eqq0}) with
$f(0,x)=f_0(x)$ has a unique (global for positive times) mild
solution.
Moreover, for $\varepsilon>0$ small enough, and any $\rho_{\varepsilon} < (\gamma(0)\pi\varepsilon)^2$,
there exists a constant $C_{\varepsilon}>0$ (depending on
$f_0$ and $\varepsilon$) such that
$$ \left\|f(t,\cdot)-\lambda_{\varepsilon}\,\psi_{\varepsilon} \right\|_{L^1(I)} \leq C_{\varepsilon}\,
e^{-\rho_{\varepsilon}\,t}.
$$
Furthermore, taking
$\rho_{\varepsilon}=\frac{\alpha_{\varepsilon}}{2}=\frac{\lambda_{\varepsilon}-(1-\varepsilon)}{2}$, the following
more explicit (in terms of dependence w.r.t.~$\varepsilon$) estimate holds
$$ \left\|f(t,\cdot)-\lambda_{\varepsilon}\,\psi_{\varepsilon} \right\|_{L^1(I)} \leq
K\,
\varepsilon^{\frac{-\hat{K}}{\varepsilon^2}}\, e^{\frac{-\alpha_{\varepsilon}t}{2}},
$$
where $K, \hat{K}>0$ depend on $f_0$ but not on $\varepsilon$.
\end{theorem}
\begin{corollary}\label{cor1}
Under the same hypotheses, there exist positive constants $K$, $\hat{K}$ and $\tilde{K}$ (independent of $\varepsilon$) such that
$$
\left\|x \mapsto f(t,x) - \frac{\varepsilon
\gamma(0)}{(\gamma(0)\pi\varepsilon)^2+x^2}\right\|_{L^{1}(I)} \leq K\,
\varepsilon^{\frac{-\hat{K}}{\varepsilon^2}}\,
e^{-\hat{K}\varepsilon^2 t}+ \tilde{K}\varepsilon
\ln{\left(\frac{1}{\varepsilon}\right)}.
$$
\end{corollary}
\begin{theorem}\label{thm:smallt}
Let $\gamma\in C^0(I)\cap L^\infty(I)$ be such that $\int_I \gamma(x)\,dx=1$. Let $f(0,\cdot)\in W^{1,\infty}(I)$ satisfy $f(0,0)>0$ and $\int_I f(0,x)\,dx<1$. There exists a constant $C>0$ such that the solution $f\in C^1(\mathbb R_+\times I)$ of \eqref{eqq0} satisfies
\gr{\begin{equation}\label{est:final-smallt}
\forall t\geq 0,\quad \Bigg\| x \mapsto f(t,x)-\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\,\int_I e^{-y^2}\,dy}\Bigg\|_{L^1(I)}\leq
C \left(\frac 1{\sqrt t}+\varepsilon\, t^{\frac 3 2}\,e^{C\,\varepsilon\, t}\right).
\end{equation}}
\end{theorem}
\begin{remark}\label{rem:C}
As can be seen from the proof, the constant $C$ appearing in \eqref{est:final-smallt} indeed only depends on some upper bounds on $\|\gamma\|_{L^\infty}$, $\|f(0,\cdot)\|_{W^{1,\infty}}$ and a lower bound on $f(0,0)$, and on $|a-b|$.
\end{remark}
In particular, Theorem~\ref{thm:smallt} implies the following description of the population's phenotypic diversity during transitory times, that is times $t$ satisfying $1\ll t\ll\varepsilon^{-\frac 23}$:
\begin{corollary}\label{cor:smallt}
Let $\gamma\in C^0(I)\cap L^\infty(I)$ be such that $\int_I \gamma(x)\,dx=1$. Let $f(0,\cdot)\in W^{1,\infty}(I)$ satisfy $f(0,0)>0$ and $\int_I f(0,x)\,dx<1$. There exists $C>0$ such that for $\kappa>0$ small enough, as soon as $\varepsilon<\kappa$, the solution $f\in C^1(\mathbb R_+\times I)$ of \eqref{eqq0} satisfies
\[\forall t\in\left[\kappa^{-2},\kappa^{\frac 23}\varepsilon^{-\frac 23}\right],\quad\Bigg\|x \mapsto f(t,x)-\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\,\int_I e^{-y^2}\,dy}\Bigg\|_{L^1(I)}\leq C\kappa.\]
\end{corollary}
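Let us sketch how Corollary \ref{cor:smallt} follows from Theorem \ref{thm:smallt}: for $t\in\left[\kappa^{-2},\kappa^{\frac 23}\varepsilon^{-\frac 23}\right]$, one has $\frac 1{\sqrt t}\leq\kappa$, $\varepsilon\,t^{\frac 32}\leq\kappa$ and $\varepsilon\,t\leq\kappa^{\frac 23}\varepsilon^{\frac 13}\leq\kappa\leq 1$ for $\varepsilon<\kappa\leq 1$, so that the right-hand side of \eqref{est:final-smallt} is bounded by $C\left(1+e^{C}\right)\kappa$.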
These results hold for models which are slightly more general than equation (\ref{eqq0}). In fact, in both theorems one can assume that the competition term is a weighted population size instead of the total population size. In Theorem \ref{thm:smallt}, one could also assume that the mutation kernel depends on the parent's trait.
\section{Proof of Theorem \ref{theorem1} and of Corollary \ref{cor1}}
We start here the proof of Theorem \ref{theorem1}. We recall that
$I=]a,b[$, $-\infty \leq a <0< b \leq \infty$, and $\gamma: = \gamma(x)$ is a
bounded, $C^1$ function with bounded derivative, such that $\gamma(x) > 0$ and $\int_{I}
\gamma(x) \,dx=1$. We begin with the study of the linear
operator associated to eq. (\ref{eqq0}).
\subsection{Spectrum of the linear operator}
Let us recall that
\begin{equation}
\label{eq2} (A_{\varepsilon}f) (x):=(1-\varepsilon)f(x)
-x^{2}\,f(x)+\varepsilon \gamma(x)\,\int_{I}f(y)\,dy
\end{equation}
is the operator corresponding to the linear part in eq. (\ref{eqq0}). It
acts on functions of the variable $x \in I$.
We begin with a basic lemma which enables to define the semigroup
associated with this operator.
\medskip
\begin{lemma}\label{lemanou} The linear operator $A_{\varepsilon}$, defined
on $L^1(I)$ and with domain $D(A_{\varepsilon})=\{f \in
L^1(I) : \int_{I} x^{2}\, |f(x)|\, \,dx < \infty \}$, generates
an irreducible positive $C^0$-semigroup (denoted from now on by
$T_{\varepsilon}(t)$).
\end{lemma}
\begin{proof} The multiplication linear operator
$(A_{\varepsilon}^{0}f)(x) := (1-\varepsilon)f(x) -x^{2}\,f(x)$ is
the generator of a positive $C^0$-semigroup.
Since $\gamma$ is strictly positive,
$A_{\varepsilon}-A_{\varepsilon}^0 $ is a positive bounded perturbation whose
only invariant closed ideals are $\{0\}$ and the whole space $L^1(I)$.
So $T_{\varepsilon}(t)$ is irreducible (see \cite{clement},
Corollary 9.22).
\end{proof}
Next, we present a proposition which gives information about the
spectrum of $A_{\varepsilon}$.
\medskip
\begin{proposition}
\label{proposition1}
The linear operator $A_{\varepsilon}$ has only one eigenvalue. It is a strictly dominant algebraically simple eigenvalue $\lambda_{\varepsilon}
>1-\varepsilon$ and a pole of the resolvent, with corresponding normalized positive eigenvector $$ \psi_{\varepsilon}(x)=\frac{\varepsilon \gamma(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^{2})}. $$ Moreover, for $\varepsilon$ small enough, $\lambda_{\varepsilon}<1$.
\par
The rest of the spectrum of the linear operator $A_{\varepsilon}$ is
equal to the interval $J =
[\min(1-\varepsilon-a^2,1-\varepsilon-b^2), 1-\varepsilon].$
\end{proposition}
\begin{proof} In the sequel, $\|\cdot\|$ denotes the $L^1$ norm on $I$.
Let us first show that any $\lambda$ belonging to the set
$J=\text{Range}(1-\varepsilon -x^2)$ belongs to the spectrum of
$A_{\varepsilon}$. In order to do this, for $\lambda=1-\varepsilon
-x_0^2, x_0 \in \mathring{I}$, let us define
$f_n(x)=\frac{n}{2}\left(\chi_{[x_0,x_0+\frac{1}{n}]}(x)-\chi_{[x_0-\frac{1}{n},x_0]}(x)\right)$
for $n$ such that $[x_0-\frac{1}{n},x_0+\frac{1}{n}]\subset I$. We
then have $\|f_n\|=1$ and $\left\|(A_{\varepsilon}-\lambda Id)f_n\right\| =
\frac{n}{2}\int_{x_0-\frac{1}{n}}^{x_0+\frac{1}{n}}|x^2-x_0^2|dx
\rightarrow 0$. So $(\min(1-\varepsilon-a^2,1-\varepsilon-b^2),
1-\varepsilon]$ is contained in the spectrum of $A_{\varepsilon}$.
The claim follows from the fact that the spectrum is a closed set.
On the other hand, notice that (for $x_0 \in I$) $1-\varepsilon-x_0^2$ is not an eigenvalue, since the candidate
eigenfunction $\frac{\gamma(x)}{x_0^2-x^2}$ is not an integrable
function on $I$ (remember that $\gamma$ does not vanish).
Let us now compute the resolvent operator of $A_{\varepsilon}$, that is,
let us try to solve the equation
\begin{equation}
\label{eq3}
A_{\varepsilon}f-\lambda f= g \in L^1(I).
\end{equation}
For $\lambda \notin J$, defining $p:= \int_{I}f(y)\,dy$,
(\ref{eq3}) gives
\begin{equation}
\label{eq4} f(x)=\frac{\varepsilon
\gamma(x)p-g(x)}{\lambda-(1-\varepsilon-x^2)} .
\end{equation}
Integrating, we get
\begin{equation}
\label{eq5}
\bigg(1-\varepsilon
\int_{I}\frac{\gamma(x)}{\lambda-(1-\varepsilon-x^2)}\,dx
\bigg)\,p
=\int_{I}\frac{-g(x)}{\lambda-(1-\varepsilon-x^2)}\,dx ,
\end{equation}
and $\lambda$ belongs to the resolvent set unless the factor of $p$
on the left hand side vanishes. Therefore $\sigma(A_{\varepsilon})=J \cup \{
\lambda \in \mathbb{C} : \varepsilon
\int_{I}\frac{\gamma(x)}{\lambda-(1-\varepsilon-x^2)}\,dx=1
\}$.
\par
Since the function
$F_{\varepsilon}(\lambda):= \varepsilon
\int_{I}\frac{\gamma(x)}{\lambda-(1-\varepsilon-x^2)}\,dx$ is, for real
$\lambda>1-\varepsilon$, continuous and strictly decreasing, and satisfies $\lim_{\lambda \to
1-\varepsilon}F_{\varepsilon}(\lambda)=+\infty$ (recall that
$\gamma(0)>0$) and $\lim_{\lambda \to
+\infty}F_{\varepsilon}(\lambda)=0$, there is a unique
real solution of $F_{\varepsilon}(\lambda)=1$ in
$(1-\varepsilon,\infty)$. We denote it by $\lambda_{\varepsilon}$.
\par
Taking $g(x)=0$ in (\ref{eq3}), we see that $\lambda_{\varepsilon}$
is an eigenvalue with corresponding normalized strictly positive
eigenvector
$$\psi_{\varepsilon}=\frac{\varepsilon
\gamma(x)}{\lambda_{\varepsilon}-\left(1-\varepsilon-x^2\right)}.$$
\par
Taking $g(x)=\psi_{\varepsilon}$ and $\lambda = \lambda_\varepsilon$
we see that the left hand side in (\ref{eq5}) vanishes, whereas the
right hand side is strictly negative, so that
$A_{\varepsilon}f-\lambda_{\varepsilon}f=\psi_{\varepsilon}$ has no
solution and hence $\lambda_{\varepsilon}$ is algebraically simple.\\
\par
Indeed, it also follows from (\ref{eq5}) that the range of
$A_{\varepsilon}-\lambda_{\varepsilon}\, Id$ coincides with the
kernel of the linear form defined on $L^1(I)$ by the $L^\infty$
function $\frac{1}{\lambda_\varepsilon-(1-\varepsilon)+x^2}$ (which is the
eigenvector corresponding to the eigenvalue $\lambda_{\varepsilon}$
of the adjoint operator $A_{\varepsilon}^{*}$) and hence it is a
closed subspace of $L^1(I)$. Therefore, $\lambda_{\varepsilon}$ is a
pole of the resolvent (see Theorem A.3.3 of \cite{clement}).
Furthermore, since
$$
F_{\varepsilon}(1) = \varepsilon
\int_{I}\frac{\gamma(x)}{\varepsilon
+x^{2}}\,dx=\int_{I}\frac{\gamma(x)}{1
+\left(\frac{x}{\sqrt{\varepsilon}}\right)^{2}}\,dx
\stackrel{\scriptscriptstyle \varepsilon \rightarrow
0}{\displaystyle \longrightarrow}0,
$$
we see that $F_{\varepsilon}(1)< 1$ for $\varepsilon$ small enough,
and hence $\lambda_{\varepsilon}<1$.
\par
Substituting $\lambda$ by $a+bi$ in the characteristic equation
\begin{equation}
\label{characteristic} 1+\varepsilon
\int_{I}\frac{\gamma(x)}{(1-\varepsilon-x^2-\lambda)}\,dx=0
\end{equation}
we see that its imaginary part is $\varepsilon b
\int_{I}\frac{\gamma(x)}{(1-\varepsilon-x^2-a)^2+b^2}\,dx$.
Since $\gamma(x)>0$, this expression vanishes only for $b=0$, so that there are no non-real solutions of
\eqref{characteristic}.
\end{proof}
\begin{remark}
Note that $\lim_{\varepsilon \to 0} \lambda_{\varepsilon}=1.$
\end{remark}
We now write an expansion of the eigenvalue $\lambda_{\varepsilon}$.
\medskip
\begin{proposition}
\label{proposition2}
Let $\lambda_{\varepsilon}$ be the dominant eigenvalue of the operator $A_{\varepsilon}$. Then
$$
\left|\lambda_{\varepsilon}-(1-\varepsilon)-\gamma(0)^2\pi^{2}\varepsilon^{2}\right|=O\left(\varepsilon^3\ln{\frac{1}{\varepsilon}}\right)
$$
\end{proposition}
\begin{proof}
Let us consider the change of variable $x=\nu_{\varepsilon}z$ where $\nu_{\varepsilon}=\sqrt{\lambda_{\varepsilon}-(1-\varepsilon)}$. We have
$$
1=\varepsilon
\int_{a}^{b}\frac{\gamma(x)}{(\lambda_{\varepsilon}-(1-\varepsilon-x^2))}\,dx=\varepsilon
\int_{\frac{a}{\nu_{\varepsilon}}}^{\frac{b}{\nu_{\varepsilon}}}\frac{\gamma(\nu_{\varepsilon}z)}
{\nu_{\varepsilon}^{2}+(\nu_{\varepsilon}z)^2}
\nu_{\varepsilon}\,dz=\frac{\varepsilon}{\nu_{\varepsilon}}
\int_{\frac{a}{\nu_{\varepsilon}}}^{\frac{b}{\nu_{\varepsilon}}}\frac{\gamma(\nu_{\varepsilon}z)}
{1+z^2}\,dz.
$$
Then
$$
\begin{array}{rcl}
\Big| \frac{\nu_{\varepsilon}}{\varepsilon}- \gamma(0)\pi\Big| &=& \Big|\int_{\frac{a}{\nu_{\varepsilon}}}^{\frac{b}{\nu_{\varepsilon}}}\frac{\gamma(\nu_{\varepsilon}z)}
{1+z^2}\,dz-\gamma(0)\pi\Big|\\ \\
& \leq & \Big|\int_{\frac{a}{\nu_{\varepsilon}}}^{\frac{b}{\nu_{\varepsilon}}}\frac{\gamma(\nu_{\varepsilon}z)}
{1+z^2}\,dz- \int_{\mathbb{R}}\frac{\gamma(\nu_{\varepsilon}z)}
{1+z^2}\,dz\Big|+ \Big| \int_{\mathbb{R}}\frac{\gamma(\nu_{\varepsilon}z)-\gamma(0))}
{1+z^2}\,dz\Big|\\ \\
& \leq & 4 \|\gamma\|_{\infty}\int_{\frac{B}{\nu_{\varepsilon}}}^{+\infty}\frac{\,dz}{1+z^2}+2 \|\gamma'\|_{\infty}\nu_{\varepsilon}\int_{0}^{\frac{A}{\nu_{\varepsilon}}}\frac{z}{1+z^2}\,dz
\end{array}$$
where we have used
$$
|\gamma(\nu_{\varepsilon}z)-\gamma(0)|\leq \min \left(\|\gamma\|_{\infty},\|\gamma'\|_{\infty} \nu_{\varepsilon}|z|\right)
$$
and have denoted $A:= \frac{\|\gamma\|_{\infty}}{\|\gamma'\|_{\infty}}$ and $B:=\min (|a|, b, A)$.\\
Since
$$
4 \|\gamma\|_{\infty}\int_{\frac{B}{\nu_{\varepsilon}}}^{+\infty}\frac{\,dz}{1+z^2} = 4 \|\gamma\|_{\infty} \arctan{\left(\frac{\nu_{\varepsilon}}{B}\right)} \leq 4 \|\gamma\|_{\infty}\frac{\nu_{\varepsilon}}{B}
$$
and
$$
2 \|\gamma'\|_{\infty}\nu_{\varepsilon}\int_{0}^{\frac{A}{\nu_{\varepsilon}}}\frac{z}{1+z^2}\,dz= \|\gamma'\|_{\infty}\nu_{\varepsilon} \ln{\left(1+\frac{A^2}{\nu_{\varepsilon}^2}\right)}
$$
we obtain
\begin{equation}
\label{ine10}
\Big|\nu_{\varepsilon}-\varepsilon\gamma(0)\pi\Big| \leq \varepsilon \nu_{\varepsilon} \left(\frac{4 \|\gamma\|_{\infty}}{B} + \|\gamma'\|_{\infty}
\ln{\left(1+\frac{A^2}{\nu_{\varepsilon}^2}\right)} \right)
\end{equation}
which implies
\begin{align*}
&\varepsilon \left( \gamma(0)\pi - \nu_{\varepsilon}\left(\frac{4 \|\gamma\|_{\infty}}{B} + \|\gamma'\|_{\infty}
\ln{(1+\frac{A^2}{\nu_{\varepsilon}^2})} \right) \right) \\
&\quad \leq \nu_{\varepsilon} \leq \varepsilon \left( \gamma(0)\pi + \nu_{\varepsilon}\left(\frac{4 \|\gamma\|_{\infty}}{B} + \|\gamma'\|_{\infty}
\ln{\left(1+\frac{A^2}{\nu_{\varepsilon}^2}\right)} \right) \right).
\end{align*}
Since
$$
\nu_{\varepsilon}\left(\frac{4 \|\gamma\|_{\infty}}{B} + \|\gamma'\|_{\infty}
\ln{\left(1+\frac{A^2}{\nu_{\varepsilon}^2}\right)} \right)
\stackrel{\scriptscriptstyle \varepsilon \rightarrow
0}{\displaystyle \longrightarrow}0
$$
we have
\begin{equation}
\label{ineprop1}
\frac{\gamma(0)\pi\varepsilon}{2}\leq \nu_{\varepsilon} \leq 2\, \gamma(0)\pi\varepsilon
\end{equation}
for $\varepsilon$ small enough.\\
Therefore, using \eqref{ineprop1} in \eqref{ine10} we get
\begin{equation}
\label{ineprop2}
\Big|\nu_{\varepsilon}-\varepsilon\gamma(0)\pi\Big| \leq \varepsilon^2 2 \gamma(0) \pi \left(\frac{4 \|\gamma\|_{\infty}}{B} + \|\gamma'\|_{\infty}
\ln{\left(1+\frac{4A^2}{\gamma(0)^2 \pi^{2} \varepsilon^{2}}\right)} \right) \leq C \varepsilon^2 \ln{\left(\frac{1}{\varepsilon}\right)}.
\end{equation}
Finally, by \eqref{ineprop1} and \eqref{ineprop2},
$$
|\lambda_{\varepsilon}-(1-\varepsilon)-\gamma(0)^2\pi^{2}\varepsilon^{2}|=
|\nu_{\varepsilon}+\gamma(0)\pi\varepsilon|\;|\nu_{\varepsilon}-\gamma(0)\pi\varepsilon|\leq 3 \gamma(0) \pi C\varepsilon^3 \ln{\left(\frac{1}{\varepsilon}\right)}.
$$
\end{proof}
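To illustrate Proposition \ref{proposition2}, the expansion of $\lambda_{\varepsilon}$ can be checked numerically: since $F_{\varepsilon}$ is strictly decreasing on $(1-\varepsilon,\infty)$, the eigenvalue $\lambda_{\varepsilon}$ can be computed by bisection on $F_{\varepsilon}(\lambda)=1$. The following is a minimal sketch of such a check; the choice of $\gamma$, of the interval $I$ and of the quadrature are illustrative assumptions.
\begin{verbatim}
# Sketch: compute lambda_eps by bisection on F_eps(lambda) = 1
# and compare lambda_eps - (1 - eps) with gamma(0)^2 pi^2 eps^2.
import numpy as np

x = np.linspace(-1.5, 1.5, 20001)               # grid on I
gam = np.exp(-x**2 / 2.0)
gam = gam / np.trapz(gam, x)                    # normalize gamma on I
g0 = gam[len(x) // 2]                           # gamma(0) (x = 0 at center)

def F(lam, eps):
    return eps * np.trapz(gam / (lam - (1.0 - eps) + x**2), x)

for eps in [1e-1, 1e-2, 1e-3]:
    lo, hi = 1.0 - eps + 1e-14, 2.0             # F(lo) > 1 > F(hi)
    for _ in range(200):                        # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid, eps) > 1.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    print(eps, lam - (1.0 - eps), (g0 * np.pi * eps)**2)
\end{verbatim}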
\subsection{Asymptotic behavior of the nonlinear equation}
Let us start this subsection with a lemma in which properties of the
spectrum of $\tilde{A}_{\varepsilon}=A_{\varepsilon}-\lambda_{\varepsilon}Id$ are used to study the
asymptotic behavior of the semigroup $\tilde{T}_{\varepsilon}(t)$
generated by $\tilde{A}_{\varepsilon}$.
\begin{lemma}
\label{lemma2.0}
\begin{itemize}
\item[a)] The essential growth bound of the semigroup generated by $\tilde{A}_{\varepsilon}$ is $\omega_{ess}(\tilde{T}_{\varepsilon}) = 1-\varepsilon-\lambda_{\varepsilon}.$
\item[b)] The growth bound of the semigroup generated by $\tilde{A}_{\varepsilon}$ is $\omega_{0}(\tilde{T}_{\varepsilon}) = 0.$
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[a)] $\tilde{A}_{\varepsilon}$ is a compact (rank-one)
perturbation of
$\tilde{A}_{\varepsilon}^0f:=(1-\varepsilon-x^2-\lambda_{\varepsilon})f.$
Then
$\omega_{ess}\left(\tilde{T}_{\varepsilon}\right)=\omega_{ess}\left(\tilde{T}_{\varepsilon}^0\right)$
where $\tilde{T}_{\varepsilon}^0(t)$ is the semigroup generated by
$\tilde{A}_{\varepsilon}^0$ (see \cite{nagel}).
Since $\tilde{A}_{\varepsilon}^0$ is a multiplication operator,
$\omega_{ess}\left(\tilde{T}_{\varepsilon}^0\right) =
1-\varepsilon-\lambda_{\varepsilon}$ and the result follows.
\item[b)] By Proposition \ref{proposition1}, the spectral bound of $\tilde{A}_{\varepsilon}$ is $0$ and
the spectral mapping theorem holds for any positive
$C^{0}-$semigroup on $L^{1}$ (see \cite{clement}).
\end{itemize}
\end{proof}
Let us now write, for a positive and not identically zero $f_0$,
$\left(\tilde{T}_{\varepsilon}(t)\right)f_{0}(x)=c_{f_{0}}\psi_{\varepsilon}(x)+v(t,x)$
where $\psi_{\varepsilon}(x)$ is
the eigenvector corresponding to the eigenvalue 0 of $\tilde{A}_{\varepsilon}$ and
$c_{f_{0}}\psi_{\varepsilon}(x)$ is the spectral projection of $f_0$
on the kernel of $\tilde{A}_{\varepsilon}$ (Note that $c_{f_{0}}>0$
since $f_{0}$ is positive and $\tilde{A}_{\varepsilon}$ is the
generator of an irreducible positive semigroup). We also define
$\varphi(t) := \int_I v(t,x)\, dx$.
The following lemma gives the asymptotic behavior of $c_{f_0}$:
\begin{lemma}
\label{lemma4} Let us assume that $f_0$ is a positive integrable
function on $I$. Then there exist positive constants $K_1$, $K_2$
(independent of $\varepsilon$ but depending on $f_0$) such that
$K_1\,\varepsilon^2 \leq c_{f_0} \leq K_2.$ Moreover,
$\lim_{\varepsilon \to 0} c_{f_0} = 0.$
\end{lemma}
\begin{proof}
Recall that $c_{f_0}= \langle \psi_{\varepsilon}^{*},f_0 \rangle$
where $\psi_{\varepsilon}^{*}$ is the eigenvector of the adjoint
operator $A_{\varepsilon}^{*}$ corresponding to the eigenvalue
$\lambda_{\varepsilon}$, normalized such that $\langle
\psi_{\varepsilon}^{*},\psi_{\varepsilon} \rangle=1$. Since
$$\psi_{\varepsilon}^{*}= \frac{\gr{\left(\varepsilon
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-(1-\varepsilon-x^2))^2}\,dx\right)^{-1}}}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)},$$
we see that
$$
c_{f_0}=
\frac{\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}\,dx}{\varepsilon
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-(1-\varepsilon-x^2))^2}\,dx}.
$$
Let us start by bounding the denominator from above. Using that, by Proposition \ref{proposition2}, for
$\varepsilon$ small enough, $\lambda_{\varepsilon}-(1-\varepsilon)
\geq \frac{(\gamma(0)\pi\varepsilon)^2}{2},$ we obtain the bound
\begin{equation}
\label{ine1}
\begin{array}{rcl}
\varepsilon
\int_{I}\frac{\gamma(x)}{\left(\lambda_{\varepsilon}-(1-\varepsilon-x^2)\right)^2}\,dx
& \leq & \varepsilon \sup_{x}\gamma(x)
\,\int_{\mathbb{R}}\frac{1}{\left(\frac{(\gamma(0)\pi\varepsilon)^2}{2}+x^2\right)^2}\,dx
\\ \\ & = &\sup_{x}\gamma(x)\, \frac{\sqrt{2}}{\gamma(0)^3(\pi
\varepsilon)^2}=:\frac{K_0}{\varepsilon^2}.
\end{array}
\end{equation}
Similarly, since for $\varepsilon$ small enough,
$\lambda_{\varepsilon}-(1-\varepsilon) \leq
2(\gamma(0)\pi\varepsilon)^2$, so that
\begin{equation}
\label{ine2} \varepsilon
\int_{I}\frac{\gamma(x)}{\left(\lambda_{\varepsilon}-(1-\varepsilon-x^2)\right)^2}\,dx
\geq \varepsilon \min_{[-a,a]}\gamma(x)
\int_{-a}^{a}\frac{\,dx}{\left(2(\gamma(0)\pi\varepsilon)^2+x^2\right)^2}
\geq \frac{K_3}{\varepsilon^2} .
\end{equation}
For the numerator we have, on the one hand,
\begin{equation}
\label{ine3} \varepsilon^{2}
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}\,dx
\leq
\int_{I}\frac{\varepsilon^2}{\frac{(\gamma(0)\pi\varepsilon)^2}{2}+x^2}f_0(x)\,dx,
\end{equation}
where the right hand side tends to $0$ when $\varepsilon$ goes to
$0$ by an easy application of the Lebesgue dominated convergence
theorem (note that the integrand is bounded above by
$\frac{2}{(\gamma(0)\pi)^2}f_0(x)$).
On the other hand, notice that there exists an interval $J \subset I
$ which does not contain $0$ such that $\int_{J}f_0(x)\,dx>0$.
Then, since
$$
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-\left(1-\varepsilon-x^2\right)}\,dx
\geq
\int_{J}\frac{f_0(x)}{\lambda_{\varepsilon}-\left(1-\varepsilon-x^2\right)}\,dx
$$
and
$$
\lim_{\varepsilon \to 0}
\int_{J}\frac{f_0(x)}{\lambda_{\varepsilon}-\left(1-\varepsilon-x^2\right)}\,dx=\int_{J}\frac{f_0(x)}{x^2}\,dx
>0,
$$
there exists a constant $K_4>0$ such that
\begin{equation}
\label{ine4}
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-\left(1-\varepsilon-x^2\right)}\,dx
> K_4.
\end{equation}
By \eqref{ine2} and \eqref{ine3},
$$
c_{f_0}=\frac{\varepsilon^{2}
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}\,dx}{\varepsilon^{3}
\int_{I}\frac{\gamma(x)}{\left(\lambda_{\varepsilon}-(1-\varepsilon-x^2)\right)^2}\,dx}
\leq \frac{\varepsilon^{2}
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}\,dx}{K_3}
\stackrel{\scriptscriptstyle \varepsilon \rightarrow
0}{\displaystyle \longrightarrow}0
$$
and by \eqref{ine1} and \eqref{ine4}, and $\varepsilon$ small
enough,
$$
c_{f_0} \geq \frac{K_4}{\frac{K_0}{\varepsilon^2}}=:K_1
\varepsilon^2.
$$
This completes the proof.
\end{proof}
\begin{remark}
If $f_0(x)$ is bounded below by a positive number $c$ in a
neighbourhood $(-\delta, \delta)$ of $0$, then the lower estimate
can be improved using that
$$
\int_{-\delta}^{\delta}\frac{\varepsilon}{k^2
\varepsilon^2+x^2}\,dx = \frac{2}{k} \arctan\left(\frac{\delta}{k
\varepsilon}\right) \stackrel{\scriptscriptstyle \varepsilon \rightarrow
0^{+} }{\displaystyle \longrightarrow}\frac{\pi}{k}.
$$
Indeed, for $\varepsilon$ small enough
$$
\begin{array}{rcl}
\varepsilon
\int_{I}\frac{f_0(x)}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}\,dx
& \geq & \varepsilon \int_{I}\frac{f_0(x)}{2 (\gamma(0) \pi
\varepsilon)^2+x^2}\,dx \\ \\ & \geq &c
\int_{-\delta}^{\delta}\frac{\varepsilon}{(\sqrt{2} \gamma(0) \pi)^2
\varepsilon^2+x^2}\,dx \stackrel{\scriptscriptstyle \varepsilon
\rightarrow 0^{+}}{\displaystyle \longrightarrow}\frac{c}{\sqrt{2}
\gamma(0)}.
\end{array}
$$
So in this case, for $\varepsilon$ small enough,
$$
c_{f_0} \geq
\frac{\frac{c}{\sqrt{2}\gamma(0)\varepsilon}}{\frac{K_0}{\varepsilon^2}}=:K\varepsilon
$$
for some constant $K$ independent of $\varepsilon$.
\end{remark}
The next two lemmas enable us to estimate $\varphi(t)$ (defined above Lemma \ref{lemma4}).
In the first one, the dependence w.r.t.~$\varepsilon$ is not explicit.
\begin{lemma}
\label{lemma1} For $\varepsilon$ small enough and any
$\rho_{\varepsilon} < (\gamma(0)\pi\varepsilon)^2$ there exists
$K_{\varepsilon}>0$ such that $| \varphi(t)| \leq \|v(t,\cdot)\|
\leq K_{\varepsilon}\,e^{-\rho_{\varepsilon}t}\, \|f_{0}\|.$
\end{lemma}
\begin{proof}
Since by Lemma \ref{lemma2.0}
$\omega_{ess}(\tilde{A}_{\varepsilon})<\omega_{0}(\tilde{A}_{\varepsilon})$,
we can apply Theorem $9.11$ in \cite{clement}, and get the estimate
$$
\|
v(t,\cdot)\|=\|\tilde{T}_{\varepsilon}(t)f_{0}-c_{f_{0}}\psi_{\varepsilon}\|\leq
K_{\varepsilon}e^{-\eta t}\|f_{0}\| \quad \forall \eta <
\lambda_{\varepsilon}-(1-\varepsilon).
$$
Proposition \ref{proposition2} gives then the statement.
\end{proof}
We now give an estimate of the dependence of $K_{\varepsilon}$ on
$\varepsilon$, provided that $\rho_{\varepsilon}$ is chosen far
enough from its limit value. More precisely, we choose
$\rho_{\varepsilon}=\frac{\lambda_{\varepsilon}-(1-\varepsilon)}{2}=:\frac{\alpha_{\varepsilon}}{2}.$
\begin{lemma}
\label{lem6} For $\varepsilon$ small enough there exists a constant
$K$ independent of $\varepsilon$ and of $f_0$ such that
$$
\left\|\tilde{T}_{\varepsilon}(t)f_{0}-c_{f_{0}}\,\psi_{\varepsilon}\right\|\leq
K \,\varepsilon^{-4} \,e^{\frac{-\alpha_{\varepsilon}}{2}\, t}\,
\|f_0\| .
$$
\end{lemma}
\begin{proof}
Since the proof of this result is quite technical we delay it to the end of this section (subsection \ref{app}).
\end{proof}
We now rewrite equation (\ref{eqq0}) as
\begin{equation}
\label{eq6}
\frac{\partial f(t,x)}{\partial t}= \tilde{A}_{\varepsilon}f(t,x)+ \bigg(\lambda_{\varepsilon}-\int_{I}f(t,y)\,dy \bigg)\, f(t,x) .
\end{equation}
We look for solutions of (\ref{eq6}) (with positive initial
condition $f_{0} \in L^1(I)$) which can be written as
$f(t,x)=h(t)(\tilde{T}_{\varepsilon}(t)f_{0})(x)$, with $h := h(t)$
a function of time such that $h(0)=1$. Substituting in (\ref{eq6}),
it follows that $f$ is indeed a solution of eq. (\ref{eqq0}) if
$h(t)$ satisfies the following initial value problem for an ordinary
differential equation:
\begin{equation}
\label{eq7}
h'(t)=\Big(\lambda_{\varepsilon}-h(t)\int_{I}\left(\tilde{T}_{\varepsilon}(t)f_{0}\right)(x)\,dx\Big)h(t),
\qquad h(0)=1,
\end{equation}
or equivalently
\begin{equation}
\label{eq8}
h'(t)=\Big(\lambda_{\varepsilon}-(c_{f_{0}}+\varphi(t))\,
h(t)\Big)\, h(t), \qquad h(0)=1.
\end{equation}
The next two lemmas explain the asymptotic behavior of $h(t)$. In
the first one, the dependence w.r.t.~$\varepsilon$ of the constants is not
explicit.
\begin{lemma}
\label{lemma3} For $\varepsilon>0$ small enough and any
$\rho_{\varepsilon} < (\gamma(0)\pi\varepsilon)^2$, there exists a
positive constant $\hat{C}_{\varepsilon}>0$ such that
$\left|h(t)-\frac{\lambda_{\varepsilon}}{c_{f_{0}}}\right| \leq
\hat{C}_{\varepsilon}\, e^{-\rho_{\varepsilon}\, t}.$
\end{lemma}
\begin{proof}
The solution of \eqref{eq8} is explicitly given by
$$
h(t)=\frac{e^{\lambda_{\varepsilon}t}}{1+\int_{0}^{t}(c_{f_{0}}+\varphi(s))\,
e^{\lambda_{\varepsilon}s}\,
\,ds}=\frac{1}{e^{-\lambda_{\varepsilon}t}+\frac{c_{f_0}}{\lambda_{\varepsilon}}\,
(1-e^{-\lambda_{\varepsilon}t})+e^{-\lambda_{\varepsilon}t}\int_{0}^{t}
\varphi(s)\,e^{\lambda_{\varepsilon}s}\,{d}s}.
$$
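This expression can be obtained through the standard change of unknown $v:=1/h$, which turns \eqref{eq8} into the linear problem
$$
v'(t)=-\lambda_{\varepsilon}\,v(t)+c_{f_{0}}+\varphi(t), \qquad v(0)=1,
$$
whose solution $v(t)=e^{-\lambda_{\varepsilon}t}+\int_{0}^{t}\big(c_{f_{0}}+\varphi(s)\big)\,e^{-\lambda_{\varepsilon}(t-s)}\,ds$ is exactly the reciprocal of the right hand side above.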
Then
$$
\begin{array}{rcl}
\left|h(t)-\frac{\lambda_{\varepsilon}}{c_{f_0}}\right|&=&\left|\frac{1}{e^{-\lambda_{\varepsilon}t}+\frac{c_{f_0}}{\lambda_{\varepsilon}}(1-e^{-\lambda_{\varepsilon}t})+e^{-\lambda_{\varepsilon}t}\int_{0}^{t}
\varphi(s)e^{\lambda_{\varepsilon}s}\,{d}s}-\frac{1}{\frac{c_{f_0}}{\lambda_{\varepsilon}}}\right|\\ \\
&=&
\frac{\frac{\lambda_{\varepsilon}}{c_{f_0}}\left|e^{-\lambda_{\varepsilon}t}\big(1-\frac{c_{f_0}}{\lambda_{\varepsilon}}\big)+e^{-\lambda_{\varepsilon}t}\int_{0}^{t}
\varphi(s)e^{\lambda_{\varepsilon}s}\,ds\right|}{e^{-\lambda_{\varepsilon}t}+e^{-\lambda_{\varepsilon}t}\int_{0}^{t}(c_{f_{0}}+\varphi(s))e^{\lambda_{\varepsilon}s}\,{d}s}\\ \\
& \leq &\hat{C}_{\varepsilon}e^{-\rho_{\varepsilon}t},
\end{array}$$
where for the last inequality we have used that the denominator is a
positive continuous function bounded below (it takes the value $1$
for $t=0$ and its limit is $\frac{c_{f_0}}{\lambda_{\varepsilon}}$
when $t$ goes to infinity). We also used the following estimate for
the numerator: since, by Lemma \ref{lemma1}, $|\varphi(s)| \leq
K_{\varepsilon}\,e^{-\rho_{\varepsilon}s}\|f_0\|$, then
$$
\begin{array}{rcl}
\left|e^{-\lambda_{\varepsilon}t}\big(1-\frac{c_{f_0}}{\lambda_{\varepsilon}}\big)+e^{-\lambda_{\varepsilon}t}\int_{0}^{t}
\varphi(s)\,e^{\lambda_{\varepsilon}s}\,ds\right|&\leq&
e^{-\lambda_{\varepsilon}t}\left(\left|1-\frac{c_{f_0}}{\lambda_{\varepsilon}}\right|-\frac{K_{\varepsilon}\,\|f_0\|}{\lambda_{\varepsilon}-\rho_{\varepsilon}}\right)+\frac{K_{\varepsilon}
\,\|f_0\|}{\lambda_{\varepsilon}-\rho_{\varepsilon}}\, e^{-\rho_{\varepsilon}t}\\
\\ & \leq & 2K_{\varepsilon}\, e^{-\rho_{\varepsilon}t}\|f_0\| .
\end{array}
$$
\end{proof}
In order to give an estimate of the dependence of
$\hat{C}_{\varepsilon}$ w.r.t. $\varepsilon,$ we need to bound the
denominator more precisely and to take a value of
$\rho_{\varepsilon}$ separated from its limit value. As in Lemma
\ref{lem6}, we choose
$\rho_{\varepsilon}=\frac{\lambda_{\varepsilon}-(1-\varepsilon)}{2}=:\frac{\alpha_{\varepsilon}}{2}.$
\begin{lemma}
\label{lem5} For $\varepsilon>0$ small enough, there exist
constants $K_{7}$ and $K_{8}$ (independent of $\varepsilon$) such that
$$
\left|h(t)-\frac{\lambda_{\varepsilon}}{c_{f_0}}\right| \leq K_8\,
\varepsilon^{\frac{-K_7}{\varepsilon^2}}\,
e^{-\frac{\alpha_{\varepsilon} t}{2}}.
$$
\end{lemma}
\begin{proof}
Using Lemma \ref{lemma1} and the fact that the second term is
positive we see that
\begin{equation}
\begin{array}{rcl}
e^{-\lambda_{\varepsilon} t} + e^{-\lambda_{\varepsilon} t}
\int_0^t(c_{f_0}+\varphi(s))\,e^{\lambda_{\varepsilon} s} \, ds
&\geq& e^{-\lambda_{\varepsilon} t} + \max \left(0,
c_{f_0}(1-e^{-\lambda_{\varepsilon}
t})-K_{\varepsilon}e^{-\rho_{\varepsilon} t} \right) \\
\\ & \geq &
e^{-\lambda_{\varepsilon} t_{\varepsilon}}
\end{array}\label{eq666}
\end{equation}
for any $t_{\varepsilon}$ such that
\begin{equation}
\label{eq66} c_{f_0}\,(1-e^{-\lambda_{\varepsilon}
t_{\varepsilon}})-K_{\varepsilon}\, e^{-\rho_{\varepsilon}
t_{\varepsilon}} \geq e^{-\lambda_{\varepsilon} t_{\varepsilon}}.
\end{equation}
(Notice that the left hand side in \eqref{eq66} is an increasing
function of $t_{\varepsilon}$). This indeed happens if
$K_{\varepsilon}\, e^{-\rho_{\varepsilon} t_{\varepsilon}} \leq
\frac{c_{f_0}}{2}$ and $(1+c_{f_0})\,e^{-\lambda_{\varepsilon}
t_{\varepsilon}} \leq \frac{c_{f_0}}{2}.$ Since the second condition
is weaker than the first one for $\varepsilon$ small enough,
\eqref{eq66} holds whenever $t_{\varepsilon}$ is such that
$e^{-\rho_{\varepsilon} t_{\varepsilon}} \leq
\frac{c_{f_0}}{2K_{\varepsilon}}$, i.e., $e^{-\lambda_{\varepsilon}
t_{\varepsilon}} \leq \left( \frac{c_{f_0}}{2 K_{\varepsilon}}
\right)^{\frac{\lambda_{\varepsilon}}{\rho_{\varepsilon}}}$ and
$\varepsilon>0$ is sufficiently small. So, $\left( \frac{c_{f_0}}{2
K_{\varepsilon}}
\right)^{\frac{\lambda_{\varepsilon}}{\rho_{\varepsilon}}}$ is also
a lower bound in \eqref{eq666}, and we finally have
$$
\left|e^{-\lambda_{\varepsilon} t} + e^{-\lambda_{\varepsilon} t}
\int_0^t(c_{f_0}+\varphi(s))\,e^{\lambda_{\varepsilon} s}\, ds\right| \geq
\left( \frac{c_{f_0}}{2 K_{\varepsilon}}
\right)^{\frac{\lambda_{\varepsilon}}{\rho_{\varepsilon}}}.
$$
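For instance, the smallest admissible choice is
$$
t_{\varepsilon}:=\frac{1}{\rho_{\varepsilon}}\,\ln\frac{2K_{\varepsilon}}{c_{f_{0}}},
\qquad\text{for which}\qquad
e^{-\rho_{\varepsilon}t_{\varepsilon}}=\frac{c_{f_{0}}}{2K_{\varepsilon}}
\quad\text{and}\quad
e^{-\lambda_{\varepsilon}t_{\varepsilon}}=\left(\frac{c_{f_{0}}}{2K_{\varepsilon}}\right)^{\frac{\lambda_{\varepsilon}}{\rho_{\varepsilon}}},
$$
a time which is positive as soon as $2K_{\varepsilon}>c_{f_{0}}$.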
Using the bound on the numerator given in the proof of Lemma
\ref{lemma3}, the previous estimate and using also Lemma \ref{lem6},
Lemma \ref{lemma4} and Proposition \ref{proposition2}, we obtain
\medskip
\begin{equation}\label{bound1}
\begin{array}{rcl}
\left|h(t)-\frac{\lambda_{\varepsilon}}{c_{f_0}}\right|& \leq &
\frac{2K_{\varepsilon}\, e^{-\rho_{\varepsilon}t}\|f_0\|}{\left(
\frac{c_{f_0}}{2 K_{\varepsilon}}
\right)^{\frac{\lambda_{\varepsilon}}{\rho_{\varepsilon}}}}\\ \\
& \leq &
\frac{2K_{5}\,\varepsilon^{-4}\,e^{-\frac{\alpha_{\varepsilon}t}{2}}\,\|f_0\|}{\left(\frac{K_1
\varepsilon^{2}}{2K_{5}\, \varepsilon^{-4}}\right)^{K_{6}\,
\varepsilon^{-2}}}\\
\\
& = &
2K_{5}\,\left(\frac{2K_5}{K_1}\right)^{\frac{K_6}{\varepsilon^{2}}}\varepsilon^{-4-
\frac{6K_6}{\varepsilon^2}}\, e^{-\frac{\alpha_{\varepsilon}t}{2}}\,
\|f_0\|
\\
\\ &\leq& K_8 \,\varepsilon^{\frac{-K_7}{\varepsilon^2}}\,
e^{-\frac{\alpha_{\varepsilon}t}{2}}.
\end{array}
\end{equation}
\end{proof}
\medskip
We are now in position to conclude the proof of Theorem
\ref{theorem1}.
\medskip
We recall that $h$ satisfies the integral equation
$$h(t) = 1 +
\int_0^t
\bigg(\lambda_{\varepsilon}-h(s)\,\int_{I}\left(\tilde{T}_{\varepsilon}(s)f_0\right)(x)\,
dx \, \bigg)\, h(s)\, ds
$$
from which the following identity follows
$$h(t)\tilde{T}_{\varepsilon}(t)f_0 = \tilde{T}_{\varepsilon}(t)f_0 +
\int_0^t
\tilde{T}_{\varepsilon}(t-s)\bigg(\lambda_{\varepsilon}-h(s)\int_{I}\left(\tilde{T}_{\varepsilon}(s)f_0\right)(x)\,dx\bigg)h(s)\,\tilde{T}_{\varepsilon}(s)f_0\,ds,
$$
i.e., $f(t,x)$ is a solution of the variation of constants
equation.\\
On the other hand, the nonlinear part of the right hand side of
(\ref{eq6}) is a locally Lipschitz function of $f \in L^1(I)$. From
this, uniqueness follows, whereas global existence is clear from the
previous lemmas.
\medskip
Finally, a standard application of the triangular inequality and
Lemmas \ref{lemma4}, \ref{lemma1} and \ref{lemma3} gives
\begin{equation}
\label{triangle}
\begin{array}{rcl}\left\|f(t,\cdot)-\lambda_{\varepsilon}\psi_{\varepsilon} \right\| &\leq &
\left|h(t)-\frac{\lambda_{\varepsilon}}{c_{f_0}}\right|\;\left\|\tilde{T}_{\varepsilon}(t)f_{0}\right\|
+\frac{\lambda_{\varepsilon}}{c_{f_0}}\left\|\tilde{T}_{\varepsilon}(t)f_{0}-c_{f_0}\,\psi_{\varepsilon}\right\| \\ \\
& \leq & \hat{C}_{\varepsilon}\,e^{-\rho_{\varepsilon}t} \left(K_{2}+K_{\varepsilon}\,e^{-\rho_{\varepsilon}t}\, \|f_0\|\right)+\frac{1}{K_1 \varepsilon^2}K_{\varepsilon}e^{-\rho_{\varepsilon}t} \\ \\
& \leq & C_{\varepsilon} \,e^{-\rho_{\varepsilon}t}.
\end{array}
\end{equation}
Using Lemmas \ref{lem6} and \ref{lem5} in the second inequality of
\eqref{triangle}, the last statement of Theorem \ref{theorem1} follows.
\subsection{Proof of Corollary \ref{cor1}}
By the triangular inequality,
$$
\begin{array}{rcl}
\left\|f(t,\cdot) - \frac{\varepsilon \gamma(0)}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)} &\leq& \left\|f(t,\cdot)-\lambda_{\varepsilon}\psi_{\varepsilon}\right\|_{L^{1}(I)}\\ \\ & +& \left\|\lambda_{\varepsilon}\psi_{\varepsilon}(x)-\frac{\varepsilon \gamma(0)}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)}
\end{array} .
$$
Hence by Proposition \ref{proposition2} and Theorem \ref{theorem1}, we only need to estimate the last term, for which we have
\begin{align*}
&\left\|\frac{\lambda_{\varepsilon}\varepsilon \gamma}{\lambda_{\varepsilon}-(1-\varepsilon)+\cdot^2}- \frac{\varepsilon \gamma(0)}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)} \\
& \quad\leq \left\|\frac{\varepsilon (\lambda_{\varepsilon}-1)\gamma}{\lambda_{\varepsilon}-(1-\varepsilon)+ \cdot^2}\right\|_{L^{1}(I)}
+ \left\|\frac{\varepsilon \gamma}{\lambda_{\varepsilon}-(1-\varepsilon)+\cdot^2}- \frac{\varepsilon \gamma}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)}\\
&\qquad+\left\|\frac{\varepsilon (\gamma-\gamma(0))}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)}.
\end{align*}
Let us bound the three terms. For the first one we have, by Proposition \ref{proposition2},
$$
\begin{array}{rcl}
\left\|\frac{\varepsilon (\lambda_{\varepsilon}-1)\gamma}{\lambda_{\varepsilon}-(1-\varepsilon)+\cdot^2}\right\|_{L^{1}(I)}& \leq & |\lambda_{\varepsilon}-1|\,\|\gamma\|_{\infty}\int_{\mathbb{R}}\frac{\varepsilon \,dx}{\frac{(\gamma(0)\pi\varepsilon)^2}{2}+x^2} \\ \\
& = & |\lambda_{\varepsilon}-1|\,\|\gamma\|_{\infty}\frac{\sqrt{2}}{\gamma(0)}= O(\varepsilon).
\end{array}
$$
For the second one, by Proposition \ref{proposition2} and \eqref{ine1},
\begin{align*}
&\left\|\frac{\varepsilon \gamma}{\lambda_{\varepsilon}-(1-\varepsilon)+\cdot^2}- \frac{\varepsilon \gamma}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)} \\
&\quad \leq |(\gamma(0)\pi\varepsilon)^2-(\lambda_{\varepsilon}-(1-\varepsilon))| \varepsilon \|\gamma\|_{\infty}\int_{\mathbb{R}}\frac{\,dx}{\Big(\frac{(\gamma(0)\pi\varepsilon)^2}{2}+x^2\Big)^{2}}\\
&\quad = \left|(\gamma(0)\pi\varepsilon)^2-(\lambda_{\varepsilon}-(1-\varepsilon))\right| \frac{K_0}{\varepsilon^2}=O\left(\varepsilon \ln{\frac{1}{\varepsilon}}\right).
\end{align*}
For the third one, similarly to the proof of Proposition \ref{proposition2}, denoting by $A:= \frac{\|\gamma\|_{\infty}}{\|\gamma'\|_{\infty}}$,
\begin{eqnarray*}
\left\|\frac{\varepsilon (\gamma-\gamma(0))}{(\gamma(0)\pi\varepsilon)^2+\cdot^2}\right\|_{L^{1}(I)}&\leq& 2 \varepsilon \int_{0}^{A} \frac{\|\gamma'\|_{\infty}x}{(\gamma(0)\pi\varepsilon)^2+x^2}\,dx+ 2 \varepsilon\int_{A}^{+\infty} \frac{\|\gamma\|_{\infty}}{(\gamma(0)\pi\varepsilon)^2+x^2}\,dx\\
&=& \varepsilon \|\gamma'\|_{\infty} \ln{(1+\frac{A^2}{(\gamma(0)\pi\varepsilon)^2})}+2 \frac{\|\gamma\|_{\infty}}{\gamma(0)\pi} \arctan{\Big(\frac{\gamma(0)\pi \varepsilon}{A}\Big)}\\
& = & O\left(\varepsilon \ln{\frac{1}{\varepsilon}}\right).
\end{eqnarray*}
\subsection{Proof of Lemma \ref{lem6} }\label{app}
Let us consider the linear initial value problem
\begin{equation}
\left\{\begin{array}{rcl}
\frac{\partial u(t,x)}{\partial t}& = & \tilde{A}_{\varepsilon}u(t,x)=(a_{\varepsilon}(x)-\lambda_{\varepsilon})\, u(t,x) +\varepsilon \gamma(x)\,\int_{I}u(t,y)\,dy,\\
\\ u(0,x)&=&u_{0}(x),
\end{array}\right.
\end{equation}
where $a_{\varepsilon}(x):=1-\varepsilon-x^2$. Let us recall that
$s(\tilde{A}_{\varepsilon})=0$ and $\varepsilon\int_{I}
\frac{\gamma(x)}{\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx=1$ (see Proposition \ref{proposition1}).
Applying the Laplace transform with respect to $t$ to the previous
equation, we obtain the identity
$$
\mu \,\mathcal{L}[u](\mu, x)-u_{0}(x)=
(a_{\varepsilon}(x)-\lambda_{\varepsilon})\,\mathcal{L}[u](\mu,x)+\varepsilon\,
\gamma(x)\,\int_{I}\mathcal{L}[u](\mu,y)\, \,dy,
$$
that is
\begin{equation}
\label{eq13}
\mathcal{L}[u](\mu,x)=\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+\frac{\varepsilon
\,\gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\int_{I}\mathcal{L}[u](\mu,
y)\,dy.
\end{equation}
Integrating (with respect to $x$), we obtain
$$
\int_{I}\mathcal{L}[u](\mu,x)\,dx=\frac{\int_{I}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx}
{1- \int_{I}\frac{\varepsilon
\gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx}=\frac{\int_{I}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx}
{\varepsilon \mu
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx},
$$
where we have used, for the second equality, $\varepsilon\int_{I}
\frac{\gamma(x)}{\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx=1$.
Substituting in \eqref{eq13},
we get
\begin{equation}
\label{eq14}
\mathcal{L}[u](\mu,x)=\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+
\frac{\int_{I}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx}
{\mu
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx}\frac{\gamma(x)}{(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}.
\end{equation}
This Laplace transform is analytic for Re $\mu >0$ (note that
$\lambda_{\varepsilon}-a_{\varepsilon}(x)$ is positive and tends to
zero when $\varepsilon$ tends to zero). Then, for $s>0$, we know, by
the inversion theorem, that
$$
u(t,x)=\frac{1}{2\pi
i}\int_{s-i\infty}^{s+i\infty}\mathcal{L}[u](\mu,x)\,e^{\mu t}\,
\,d\mu.
$$
Using the residue theorem, we can shift the integration path to
the left in order to obtain, for any $s' \in
(1-\varepsilon-\lambda_{\varepsilon},0),$
$$
u(t,x)=\text{Res}_{\mu=0} \Big(\mathcal{L}[u](\mu, x)e^{\mu
t}\Big)+\frac{1}{2\pi
i}\int_{s'-i\infty}^{s'+i\infty}\mathcal{L}[u](\mu,x)e^{\mu\, t}\,
\,d\mu,
$$
where
$$
\begin{array}{rcl}
\text{Res}_{\mu=0}\Big(\mathcal{L}[u](\mu, x)e^{\mu
t}\Big)&=& \lim_{\mu \to 0} \mu \mathcal{L}[u](\mu, x)\\ \\
&=& \lim_{\mu \to 0} \left(\frac{\mu
u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+\frac{
\int_{I}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,{d}x}
{\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}
-a_{\varepsilon}(x))}\,{d}x}\,\frac{\gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\right)\\ \\
&=& \frac{\langle u_0, \psi_{\varepsilon}^{*}\rangle}{\langle
\psi_{\varepsilon},
\psi_{\varepsilon}^{*}\rangle}\,\psi_{\varepsilon}(x)=c_{u_0}\,\psi_{\varepsilon}(x)
\end{array}
$$
(let us recall that $\psi_{\varepsilon}(x)=\frac{\varepsilon
\gamma(x)}{\lambda_{\varepsilon}-a_{\varepsilon}(x)}$ and
$\psi_{\varepsilon}^{*}(x)= \frac{\left(\varepsilon
\int_{I}\frac{\gamma(x)\,dx}{(\lambda_{\varepsilon}-(1-\varepsilon-x^2))^2}\right)^{-1}}{\lambda_{\varepsilon}-(1-\varepsilon-x^2)}$).
Thus, we obtain that, for $s' \in
\left(1-\varepsilon-\lambda_{\varepsilon},0\right)$,
\begin{equation}
\label{eq14bis}
u(t,x)=c_{u_0}\psi_{\varepsilon}(x)+ \frac{1}{2\pi
}\int_{-\infty}^{+\infty}\mathcal{L}[u](s'+i\tau, x)\,
e^{(s'+i\tau)\, t}\, \,d\tau .
\end{equation}
We now define
$g_{\varepsilon}(\mu):=\frac{\int_{I}\frac{u_{0}(x)\,dx}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}}
{\mu
\int_{I}\frac{\gamma(x)\,dx}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}},
$ so that we can write
\begin{equation}
\label{eq14bisbis}
\begin{array}{rcl}
\frac{1}{2\pi
}\int_{-\infty}^{+\infty}\mathcal{L}[u](s'+i\tau,x)e^{(s'+i\tau) t}
\,d\tau&=& \frac{1}{2\pi}
u_0(x)e^{s't}\int_{-\infty}^{\infty}\frac{e^{i\tau
t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau\\ \\ &+&
\frac{1}{2\pi} \gamma(x)e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i \tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau}\,d\tau\\ \\
&=&e^{-(\lambda_{\varepsilon}-a_{\varepsilon}(x))t}u_{0}(x)\\ \\
&+&\frac{1}{2\pi}
\gamma(x)e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau,
\end{array}
\end{equation}
where we used the estimate
$s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)>0$ and the identity
$\int_{-\infty}^{\infty}\frac{e^{i \tau t}}{\alpha + i
\tau}\,d\tau=2 \pi e^{-\alpha t}$ (for $\alpha >0$).
\par
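The latter identity can itself be checked with the residue theorem: for $t>0$, the map $\tau\mapsto\frac{e^{i\tau t}}{\alpha+i\tau}=\frac{e^{i\tau t}}{i(\tau-i\alpha)}$ has a single simple pole at $\tau=i\alpha$, and closing the contour in the upper half-plane (where $e^{i\tau t}$ decays) gives
$$
\int_{-\infty}^{\infty}\frac{e^{i \tau t}}{\alpha + i \tau}\,d\tau
=2\pi i\,\operatorname{Res}_{\tau=i\alpha}\frac{e^{i\tau t}}{i(\tau-i\alpha)}
=2\pi i\,\frac{e^{-\alpha t}}{i}=2 \pi\, e^{-\alpha t}.
$$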
We now would like to find a bound for $\left\|\frac{1}{2\pi}
\gamma(x)e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau\right\|_{\infty}$. \\We see that
\begin{equation}
\label{eq15}
\begin{array}{rcl}
\left\|
\gamma(x)\,e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau\right\|_{\infty}&\leq& e^{s't}
\|\gamma\|_{\infty}\sup_{x}\Big|\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau \Big|
\end{array}
\end{equation}
and
\begin{equation}
\label{eq16}
\begin{array}{rcl}
\Big|\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i \tau)e^{i\tau
t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau}\,d\tau
\Big| &\leq & \int_{-\infty}^{\infty}\frac{|g_{\varepsilon}(s'+i
\tau)|}{|s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau|}\,d\tau\\ \\ & \leq &
\int_{-\infty}^{\infty}\frac{|g_{\varepsilon}(s'+i
\tau)|}{|s'+\lambda_{\varepsilon}-(1-\varepsilon)+i
\tau|}\,d\tau
\end{array}
\end{equation}
since $|s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau| \geq
|s'+\lambda_{\varepsilon}-(1-\varepsilon)+i \tau|$.
\par
Let us then find an upper bound for $g_{\varepsilon}(s'+i\tau)$. For
the numerator of $g_{\varepsilon}(s'+i\tau)$ we can estimate
$$
\left|\int_{I}\frac{u_{0}(x)}{s'+i\tau+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx\right|
\leq \frac{\|u_{0}\|_{1}}{|s'+i \tau +
\lambda_{\varepsilon}-(1-\varepsilon)|}.
$$
We now find a lower bound for the denominator of
$g_{\varepsilon}(s'+i\tau)$. We use the elementary estimate $|z|
\geq \max (|\text{Re}z|, |\text{Im}z|) $ and we start with the real
part.
$$
\begin{array}{rcl}
\Big|\text{Re}\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(s'+i\tau+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx\Big|&=&
\Big|\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))}\frac{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)}{|s'+i\tau+\lambda_{\varepsilon}-a_{\varepsilon}(x)|^2}\,dx\Big| \\ \\&=&
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))}\frac{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)}{|s'+i\tau+\lambda_{\varepsilon}-a_{\varepsilon}(x)|^2}\,dx
\\ \\&=&
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))\big(s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+\frac{\tau^2}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\big)}\,dx\\ \\
& \geq &
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon_{0}}-(1-{\varepsilon_{0}})+x^2)\big(\lambda_{\varepsilon_{0}}-(1-{\varepsilon_{0}})+x^2+\frac{\tau^2}{x^2}\big)}\,dx
\\ & = &
\int_{I}\frac{x^2\gamma(x)}{(\lambda_{\varepsilon_{0}}-(1-{\varepsilon_{0}})+x^2)((\lambda_{\varepsilon_{0}}-(1-{\varepsilon_{0}})+x^2)x^2+\tau^2)}\,dx\\
\\ & =:&F(\tau),
\end{array}
$$
where in the last inequality we used the estimates
$s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)<
\lambda_{\varepsilon}-a_{\varepsilon}(x)$,
$s'+\lambda_{\varepsilon}-(1-\varepsilon)>0$. We also used that,
since $\lambda_{\varepsilon}-(1-\varepsilon)$ is strictly positive
and tends to zero when $\varepsilon$ goes to zero,
there exists $\varepsilon_0$ such that $\forall \varepsilon < \varepsilon_0$ we have $\lambda_{\varepsilon_{0}}-(1-{\varepsilon_{0}})> \lambda_{\varepsilon}-(1-{\varepsilon})$.\\
In a similar way, for the imaginary part,
$$
\begin{array}{rcl}
\Big|\text{Im}\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(s'+i\tau+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx\Big|&=&
\Big|\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))}\frac{-\tau}{(s'+\lambda_{\varepsilon}-a_{\varepsilon}(x))^2+
\tau^2}\,dx\Big|\\ \\ &=& | \tau |
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))\big((s'+\lambda_{\varepsilon}-a_{\varepsilon}(x))^2+
\tau^2\big)}\,dx \\ \\ & \geq & | \tau |
\int_{I}\frac{\gamma(x)}{(\lambda_{\varepsilon_0}-(1-\varepsilon_0)+x^2)\big((\lambda_{\varepsilon_0}-(1-\varepsilon_0)+x^2)^2+
\tau^2\big)}\,dx \\ \\ &=:& G(\tau).
\end{array}
$$
Defining $H(\tau):=\max(F(\tau),G(\tau))$ we see that
\begin{equation}
\label{eq17}
|g_{\varepsilon}(s'+i \tau)| \leq \frac{\frac{\|u_0\|_{1}}{|s'+i \tau
+\lambda_{\varepsilon}-(1-\varepsilon)|}}{|s'+i \tau| H(\tau)},
\end{equation}
and then, using \eqref{eq15}, \eqref{eq16} and \eqref{eq17}
\begin{align*}
&\left\|
\gamma(x)e^{s't}\int_{-\infty}^{+\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau\right\|_{\infty} \\
&\quad \leq \|\gamma\|_{\infty}\, e^{s't}\int_{-
\infty}^{+\infty} \frac{\,d\tau}{\sqrt{s'^{2}+ \tau^{2}}\,|s'+i
\tau + \lambda_{\varepsilon}-(1-\varepsilon)|^2 H(\tau)}\,\|u_0\|_{1}.
\end{align*}
Now, since $F$ and $G$ are strictly positive continuous functions,
$F(0)>0$ and $\tau G(\tau)$ tends to a positive limit when $\tau$
goes to $\infty$, there exists a constant $C>0$ (independent of
$\varepsilon$) such that $H(\tau) \geq \frac{C}{1+\tau}$. Choosing
$s'= -\frac{\alpha_{\varepsilon}}{2}$, where $\alpha_{\varepsilon} =
\lambda_{\varepsilon}-(1-\varepsilon),$ we can write
\begin{eqnarray*}
\left\|\gamma(x)\,e^{s't}\,\int_{-\infty}^{+\infty}\frac{g_{\varepsilon}(s'+i
\tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i
\tau}\,d\tau\right\|_{\infty} &\leq& \frac{\|\gamma\|_{\infty}\,e^{-
\frac{\alpha_{\varepsilon}t}{2}}}{C}\int_{0}^{+\infty}\frac{2(1+\tau)}{\big((\frac{\alpha_{\varepsilon}}{2})^2+\tau^2\big)^\frac{3}{2}}\,d\tau\,\|u_{0}\|_{1}\\
&=&
\frac{\|\gamma\|_{\infty}\,e^{-\frac{\alpha_{\varepsilon}t}{2}}}{C}\left(\frac{8}{\alpha_{\varepsilon}^2}+\frac{4}{\alpha_{\varepsilon}}\right)
\|u_{0}\|_1.
\end{eqnarray*}
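The two elementary integrals behind the last equality are, with $b:=\frac{\alpha_{\varepsilon}}{2}$,
$$
\int_{0}^{+\infty}\frac{d\tau}{(b^2+\tau^2)^{3/2}}
=\left[\frac{\tau}{b^2\sqrt{b^2+\tau^2}}\right]_{0}^{+\infty}=\frac{1}{b^2},
\qquad
\int_{0}^{+\infty}\frac{\tau\,d\tau}{(b^2+\tau^2)^{3/2}}
=\left[-\frac{1}{\sqrt{b^2+\tau^2}}\right]_{0}^{+\infty}=\frac{1}{b},
$$
so that $2\left(\frac{1}{b^2}+\frac{1}{b}\right)=\frac{8}{\alpha_{\varepsilon}^2}+\frac{4}{\alpha_{\varepsilon}}$.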
Finally, going back to \eqref{eq14bis} and using \eqref{eq14bisbis},
we end up with
$$
\|u(t, \cdot)-c_{u_{0}}\psi_{\varepsilon}\| \leq
\left(1+\frac{\|\gamma\|_{\infty}}{\pi
C}\left(\frac{4}{\alpha_{\varepsilon}^2}+\frac{2}{\alpha_{\varepsilon}}\right)
\right)\, e^{-\frac{\alpha_{\varepsilon}t}{2}}\,\|u_{0}\|_1\leq
K_5\,\varepsilon^{-4}e^{-\frac{\alpha_{\varepsilon}t}{2}}\,\|u_{0}\|_1.
$$
\section{Proof of Theorem~\ref{thm:smallt}}
We start here the proof of Theorem~\ref{thm:smallt}. From now on, $C$ will designate a
strictly positive constant depending only on some upper bounds on $\|\gamma\|_{L^\infty}$, $\|f(0,\cdot)\|_{W^{1,\infty}}$, a lower bound on $f(0,0)$ (see Remark~\ref{rem:C}), and on $|b-a|$.
\medskip
Thanks to the variation of constants formula, the solution $f$ of (\ref{eqq0}) satisfies:
\begin{eqnarray}
f(t,x)&=&f(0,x)\, e^{(1-\varepsilon-x^2)\,t-\int_0^t\int_I f(s,y)\,dy\,ds}\nonumber\\
&&+\, \varepsilon \int_0^t\left(\gamma(x)\int_I f(s,y)\,dy\right)e^{(1-\varepsilon-x^2)(t-s)-\int_s^t\int_I f(\sigma,y)\,dy\,d\sigma}\,ds\nonumber\\
&=&f(0,x)\, e^{(1-\varepsilon-x^2)\,t-\int_0^t{\mathcal I}(s)\,ds}\nonumber\\
&&+\,\varepsilon \int_0^t\left(\gamma(x)\, \mathcal I(s) \right) e^{(1-\varepsilon-x^2)(t-s)-\int_s^t {\mathcal I}(\sigma)\,d\sigma}\,ds,\label{eq:varconst}
\end{eqnarray}
where
\begin{equation*}
\mathcal I(t):=\int_I f(t,y)\,dy.
\end{equation*}
Obtaining a precise estimate on $t\mapsto e^{(1-\varepsilon)(t-s)-\int_s^t\mathcal I(\sigma)\,d\sigma}$ is the key step in the proof of Theorem~\ref{thm:smallt}.
\subsection{Preliminary estimates}
If we integrate \eqref{eq:varconst} with respect to $x\in I$, we get, for $t\geq0$:
\begin{eqnarray}
\mathcal I(t)&=&\left(\int_I f(0,x)\, e^{-x^2t}\,dx\right) e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}\nonumber\\
&&+\,\varepsilon \int_0^t\left(\int_I\int_I\gamma(x) f(s,y)e^{-x^2(t-s)}\,dx\,dy\right)e^{(1-\varepsilon)(t-s)-\int_s^t\mathcal I(\sigma)\,d\sigma}\,ds\nonumber\\
&=&\frac{z_1(t)}{\sqrt{t}} e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}+\varepsilon \int_0^t\frac{z_2(s,t-s)}{\sqrt{t-s}}e^{(1-\varepsilon)(t-s)-\int_s^t\mathcal I(\sigma)\,d\sigma}\,ds,\label{eq:varconst2}
\end{eqnarray}
where
\[z_1(t):=\sqrt{t} \int_I f(0,x)\,e^{-x^2t}\,dx,\quad z_2(\sigma,\tau)=\sqrt{\tau}\int_I\int_I \gamma(x) \,f(\sigma,y)\, e^{-x^2\tau}\,dx\,dy.\]
If we differentiate $\mathcal I$ with respect to $t$, we get
\begin{eqnarray*}
\frac{\partial \mathcal I}{\partial t}(t)&=&\mathcal I(t)\left(1-\varepsilon-\mathcal I(t)\right)-\int_I x^2f(t,x)\,dx+\varepsilon\int_I\int_I \gamma(x)f(t,y)\,dx\,dy\\
&\leq&\mathcal I(t)\left(1-\varepsilon -\mathcal I(t)\right)+\varepsilon\, \mathcal I(t)\\
&\leq&\mathcal I(t)\left(1-\mathcal I(t)\right),
\end{eqnarray*}
which implies, since $\mathcal I(0)\leq 1$, that
\begin{equation}\label{alphabound}
0\leq \mathcal I(t)\leq 1.
\end{equation}
Thanks to \eqref{eq:varconst2}, \eqref{alphabound} and the nonnegativity of $z_1,\,z_2$, one gets
\begin{equation}\label{estz1}
\frac{z_1(t)}{\sqrt{t}} e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}\leq C,
\end{equation}
while for some constants $C,\,C'>0$,
\begin{eqnarray*}
z_1(t)&=&\int_I f\left(0,\frac x {\sqrt{t}}\right)e^{-x^2}\,dx\\
&\geq&\frac 1C\int_{-C'}^{C'}f\left(0,\frac x {\sqrt{t}}\right)\,dx\geq \frac 1{C},
\end{eqnarray*}
for $t\geq 1$. Note that here we used a lower bound on $f(0,\cdot)$ around $x=0$ (we have assumed that $f(0,0)>0$ and that $f(0,\cdot)$ is continuous). Thanks to this lower bound, \eqref{estz1} becomes
\begin{equation}\label{estexpalpha}
e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}\leq C\,\sqrt t .
\end{equation}
Thanks to \eqref{estexpalpha} and \eqref{alphabound}, we can estimate the second term of \eqref{eq:varconst2} as follows:
\begin{eqnarray}
w(t)&:=&\varepsilon \int_0^t\frac{z_2(s,t-s)}{\sqrt{t-s}}e^{(1-\varepsilon)\,t-\int_0^t\mathcal I(\sigma)\,d\sigma}e^{\varepsilon s+\int_0^s\left(\mathcal I(\sigma)-1\right)\,d\sigma}\,ds\nonumber\\
&\leq&C\,\varepsilon\,\sqrt t\, \|z_2\|_{L^\infty}
\int_0^t\frac{e^{C\varepsilon s}}{\sqrt{t-s}}\,ds\nonumber\\
&\leq&C\,\varepsilon\,\sqrt t\,\|z_2\|_{L^\infty}\, e^{C\varepsilon t} \int_0^t\frac{e^{-C\varepsilon s}}{\sqrt{s}}\,ds\leq C\, \varepsilon t\, \|z_2\|_{L^\infty}\, e^{C\varepsilon t}.\label{def:w}
\end{eqnarray}
In order to estimate $\|z_2\|_{L^\infty}$, we proceed as follows:
\begin{eqnarray*}
z_2(s,\tau)&=&\sqrt{\tau}\int_I\int_I \gamma(x)\, f(s,y)\,e^{-x^2\tau}\,dx\,dy\\
&\leq&C\,\mathcal I(s)\,\sqrt \tau\,\int_I e^{-x^2\,\tau}\,dx \leq C.
\end{eqnarray*}
This estimate combined with \eqref{def:w} implies that $w(t)\geq 0$ satisfies
\begin{equation}\label{eq:est4}
w(t)\leq C\,\varepsilon \,t \,e^{C\,\varepsilon\, t}.
\end{equation}
Since $f(0,\cdot)\in W^{1,\infty}(I)$, we can estimate
\begin{eqnarray}
z_1(t)&=&\int_I \left(f(0,0)+\int_0^{\frac x{\sqrt t}}\frac{\partial f}{\partial x}\left(0,z\right)\,dz\right)
\,e^{-x^2}\,dx\nonumber\\
&=&f(0,0)\, \int_I e^{-x^2}\,dx +\lambda(t),\label{def:lambda}
\end{eqnarray}
where
\begin{eqnarray}
|\lambda(t)|&\leq& \int_I \bigg|\int_0^{\frac x{\sqrt t}}\frac{\partial f}{\partial x}(0,z)\,dz\bigg|e^{-x^2}\,dx\leq\frac{C}{\sqrt t}\int_I |x|e^{-x^2}\,dx\nonumber\\
&\leq&\frac C{\sqrt t}.\label{estlambda}
\end{eqnarray}
\subsection{Estimation of $e^{(1-\varepsilon) t-\int_0^t\mathcal I(s)\,ds}$}
Thanks to \eqref{eq:varconst2} (and the definition of $\lambda$ and $w$: see \eqref{def:lambda} and \eqref{def:w} respectively), we see that
\begin{equation}\label{eq:alpha}
\mathcal I(t)=\frac{f(0,0)\, \int_I e^{-x^2}\,dx +\lambda(t)}{\sqrt t}e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}+w(t),
\end{equation}
so that
\begin{eqnarray}
e^{\int_0^t\mathcal I(s)\,ds}&=&e^{\int_0^1\mathcal I(s)\,ds}+\int_1^t\frac{d}{ds}\left(e^{\int_0^s\mathcal I(\sigma)\,d\sigma}\right)(s)\,ds\nonumber\\
&=&e^{\int_0^1\mathcal I(s)\,ds}+\int_1^t\frac{f(0,0)\, \int_I e^{-x^2}\,dx }{\sqrt s}e^{(1-\varepsilon)s}\,ds\nonumber\\
&&+\int_1^t\frac{\lambda(s)}{\sqrt s}e^{(1-\varepsilon)s}\,ds+\int_1^tw(s)e^{\int_0^s\mathcal I(\sigma)\,d\sigma}\,ds .
\label{eq:intalpha}
\end{eqnarray}
We will now estimate each of the terms on the right hand side of \eqref{eq:intalpha}. We start with the third term, thanks to \eqref{estlambda} and an integration by parts:
\begin{eqnarray}
\left|\int_1^t\frac{\lambda(s)}{\sqrt s}e^{(1-\varepsilon)s}\,ds\right|&\leq& C\int_1^t\frac{e^{(1-\varepsilon)s}}s\,ds\nonumber\\
&\leq& C\left[\frac{e^{(1-\varepsilon)t}}{(1-\varepsilon)t}+\int_1^t\frac{e^{(1-\varepsilon)s}}{(1-\varepsilon)s^2}\,ds\right]\nonumber\\
&\leq& \frac{C}{1-\varepsilon}\left[\frac{e^{(1-\varepsilon)t}}{t}+t\max_{s\in [1,t]}\frac{e^{(1-\varepsilon)s}}{s^2}\right]\nonumber\\
&\leq& \frac{2C}{(1-\varepsilon)t}e^{(1-\varepsilon)t},\label{eq:esttruc}
\end{eqnarray}
provided $t>0$ is large enough, and $\varepsilon>0$ is small enough (to ensure that $\max_{s\in [1,t]}\frac{e^{(1-\varepsilon)s}}{s^2}=\frac{e^{(1-\varepsilon)t}}{t^2}$).
\par
We now estimate the second term on the right hand side of \eqref{eq:intalpha}, using an integration by parts:
\begin{align*}
&\int_1^t\frac{f(0,0)\,\int_I e^{-x^2}\,dx }{\sqrt{s}}e^{(1-\varepsilon)s}\,ds \\
&\quad=f(0,0)\,\left(\int_I e^{-x^2}\,dx\right)\,\left(\frac{e^{(1-\varepsilon)t}}{(1-\varepsilon)\sqrt{t}}-\frac {e^{1-\varepsilon}}{1-\varepsilon}+\int_1^t\frac{e^{(1-\varepsilon)s}}{2(1-\varepsilon)s^{3/2}}\,ds\right),
\end{align*}
and then, applying an estimate similar to the one used to obtain \eqref{eq:esttruc}, we get, provided that $t>0$ is large enough, and that $\varepsilon>0$ is small enough,
\begin{equation}\label{eq:est1st-term}
0\leq \int_1^t\frac{e^{(1-\varepsilon)s}}{2\,(1-\varepsilon)s^{3/2}}\,ds\leq \int_1^t\frac{e^{(1-\varepsilon)\,s}}{2\,(1-\varepsilon)\,s}\,ds
\leq\frac{1}{(1-\varepsilon)^2\,t}e^{(1-\varepsilon)\,t}.
\end{equation}
Finally, we estimate the last term of the right hand side of \eqref{eq:intalpha}, thanks to estimates \eqref{eq:est4} and \eqref{alphabound}:
\begin{eqnarray}
0\leq \int_1^tw(s)e^{\int_0^s\mathcal I(\sigma)\,d\sigma}\,ds&\leq& \int_1^t |w(s)| e^{\|\mathcal I\|_{L^\infty(\mathbb R_+)} s}\,ds\nonumber\\
&\leq&C\,\varepsilon\int_1^t s\, e^{C\varepsilon s}\,e^{s}\,ds\nonumber\\
&\leq& C\,\varepsilon\frac{t\,e^{\left(1+C\,\varepsilon\right)\,t}}{1+C\,\varepsilon},\label{eq:est3rd-term}
\end{eqnarray}
where we have used an integration by parts to obtain the last inequality.
\par
Combining these estimates, estimate \eqref{eq:intalpha} becomes:
\begin{equation}\label{eq:expalpha1}
e^{\int_0^t\mathcal I(s)\,ds-(1-\varepsilon)t}=\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t),
\end{equation}
or
\begin{equation}\label{eq:expalpha2}
e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}=\left(\frac{f(0,0)\, \int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)\right)^{-1},
\end{equation}
where, thanks to \eqref{eq:intalpha}, \eqref{eq:esttruc}, \eqref{eq:est1st-term} and \eqref{eq:est3rd-term}, for $t\geq 1$,
\begin{equation}\label{eq:estmu}
-\frac C t\leq\mu(t)\leq C\left(\frac 1t+\varepsilon t e^{C\varepsilon t}\right).
\end{equation}
\medskip
\subsection{Estimation of $\left|e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}-\frac{\sqrt t}{f(0,0)\, \int_I e^{-x^2}\,dx}\right|$}
Thanks to \eqref{eq:expalpha2},
\begin{align}
&\left|e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}-\frac{\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}\right|\nonumber\\
&\quad =\left|\left(\frac{f(0,0)\, \int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)\right)^{-1}-\frac{\sqrt t}{f(0,0)\, \int_I e^{-x^2}\,dx}\right|\nonumber\\
&\quad=\frac{\sqrt t}{f(0,0)\, \int_I e^{-x^2}\,dx}\left|\left(\frac 1{1-\varepsilon}+\frac{\mu(t)\,\sqrt t}{f(0,0)\, \int_I e^{-x^2}\,dx}\right)^{-1}-1\right|.\label{eq:expalpha3}
\end{align}
We notice that thanks to estimate \eqref{eq:estmu},
\begin{equation}\label{eq:est1}
f(0,0)\,\int_I e^{-x^2}\,dx+(1-\varepsilon)\mu(t) \sqrt t\geq \frac {f(0,0)\,\int_I e^{-x^2}\,dx}2,
\end{equation}
as soon as $t\geq T$, for some large time $T>0$. Also, for $t\geq 1$,
\begin{equation}\label{eq:est2}
|\mu(t)|\leq C\left(\frac 1t+\varepsilon \,t\,e^{C\,\varepsilon\, t}\right).
\end{equation}
Using the bounds \eqref{eq:est1} and \eqref{eq:est2}, we can show that as soon as $t\geq T$,
\begin{eqnarray}
\left|\left(\frac 1{1-\varepsilon}+\frac{\mu(t)\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}\right)^{-1}-1\right|
&=&\left|\frac{-\varepsilon f(0,0)\,\int_I e^{-x^2}\,dx-(1-\varepsilon)\mu(t) \sqrt t}{f(0,0)\, \int_I e^{-x^2}\,dx+(1-\varepsilon)\mu(t) \sqrt t}\right|\nonumber\\
&\leq&C\left(\frac 1{\sqrt t}+\varepsilon t^{\frac 32}\,e^{C\,\varepsilon \, t}\right),\label{eq:est3}
\end{eqnarray}
so that identity \eqref{eq:expalpha3} leads to the bound
\begin{equation}
\left|e^{(1-\varepsilon)t-\int_0^t\mathcal I(s)\,ds}-\frac{\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}\right|\leq C\left(1+\varepsilon \, t^{2}\, e^{C\,\varepsilon\, t}\right). \label{eq:est6}
\end{equation}
Notice also, as this is going to be useful further on, that for $s\geq 1$, thanks to \eqref{eq:expalpha1} and \eqref{eq:est2},
\begin{eqnarray}
\left|e^{\int_0^s\mathcal I(\sigma)\,d\sigma-(1-\varepsilon)s}-\frac{f(0,0)\, \int_I e^{-x^2}\,dx}{\sqrt s}\right|&=&\left|\mu(s)+\varepsilon\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt s}\right|\nonumber\\
&\leq& C\left(\frac 1{s}+\varepsilon \,s\,e^{C\,\varepsilon\, s}\right).\label{eq:est5}
\end{eqnarray}
\medskip
\subsection{Conclusion of the proof of Theorem~\ref{thm:smallt}}
In this last part of the proof, we consider times $t\geq T$. We estimate
\begin{align}
&\left\|f(t,x)-\frac{f(0,x)\,\sqrt t \,e^{-x^2 t}}{f(0,0)\,\int_I e^{-x^2}\,dx}\right\|_{L^1(I)}\nonumber\\
&\quad\leq \left\|f(t,x)-\frac{f(0,x)\,e^{-x^2 t}}{\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\, \sqrt t}+\mu(t)}\right\|_{L^1(I)}\nonumber\\
&\qquad +\, \Bigg\|\frac{f(0,x)\,e^{-x^2 t}}{\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\,\sqrt t}+\mu(t)}-\frac{f(0,x)\,\sqrt t\, e^{-x^2 t}}{f(0,0)\,\int_I e^{-x^2}\,dx}\Bigg\|_{L^1(I)}.\label{eq:diff}
\end{align}
Let us start by estimating the second term on the right hand side of \eqref{eq:diff}, thanks to
estimate \eqref{eq:est3}:
\begin{align}
&\Bigg\|\frac{f(0,x)e^{-x^2 t}}{\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)}-\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\,\int_I e^{-x^2}\,dx}\Bigg\|_{L^1(I)}\nonumber\\
&\quad\leq \left\|\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\,\int_I e^{-x^2}\,dx}\left|\left(\frac 1 {1-\varepsilon}+\frac{\mu(t)\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}\right)^{-1}-1\right|\;\right\|_{L^1(I)}\nonumber\\
&\quad\leq \frac{\|f(0,\cdot)\|_{L^\infty}\sqrt t }{f(0,0)\,\int_I e^{-x^2}\,dx}\left|\left(\frac 1 {1-\varepsilon}+\frac{\mu(t)\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}\right)^{-1}-1\right|\int_I e^{-x^2 t}\,dx\nonumber\\
&\quad \leq C \left(\frac 1{\sqrt t}+\varepsilon t^{\frac 32}\,e^{C\,\varepsilon \,t}\right).\label{eq:estprofil1}
\end{align}
We now rewrite the first term on the right hand side of \eqref{eq:diff}, using formula \eqref{eq:varconst} and \eqref{eq:expalpha2}:
\begin{align*}
&\left\|f(t,x)-\frac{f(0,x)e^{-x^2 t}}{\frac{f(0,0)\, \int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)}\right\|_{L^1(I)}\\
&\quad= \left\|\varepsilon \int_0^t\left(\int_I \gamma(x)f(s,y)\,dy\right)e^{(1-\varepsilon-x^2)(t-s)-\int_s^t\mathcal I(\sigma)\,d\sigma}\,ds\right\|_{L^1}\\
&\quad\leq C\varepsilon \int_I\int_0^t\mathcal I(s)e^{-x^2(t-s)}e^{(1-\varepsilon)t-\int_0^t\mathcal I(\sigma)\,d\sigma} e^{\int_0^s\mathcal I(\sigma)\,d\sigma-(1-\varepsilon)s}\,ds\,dx
\end{align*}
and then, thanks to \eqref{alphabound}, \eqref{eq:est6} and \eqref{eq:est5},
\begin{align}
&\left\|x \mapsto f(t,x)-\frac{f(0,x)e^{-x^2 t}}{\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)}\right\|_{L^1(I)}\nonumber\\
&\quad \leq C\,\varepsilon \int_0^1\left(\int_I e^{-x^2(t-s)}\,dx\right) \left(\frac{\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}+1+\varepsilon\, t^{2}\, e^{C\,\varepsilon\, t}\right)\,ds\nonumber\\
&\qquad +C\varepsilon \int_1^t\left(\int_I e^{-x^2(t-s)}\,dx\right) \left(\frac{f(0,0)\, \int_I e^{-x^2}\,dx}{\sqrt s}+\frac 1{ s}+\varepsilon\, s \,
e^{C\,\varepsilon\, s}\right)\nonumber\\
&\qquad\qquad\left(\frac{\sqrt t}{f(0,0)\,\int_I e^{-x^2}\,dx}+1+\varepsilon \,t^2\, e^{C\,\varepsilon \,t}\right)\,ds\nonumber\\
&\quad\leq C\,\varepsilon \,\frac 1{\sqrt t}\left(\sqrt t+1+\varepsilon\, t^{2}\, e^{C\,\varepsilon \,t}\right)\nonumber\\
&\qquad+C\varepsilon \int_1^t\frac 1{\sqrt {t-s}} \left(\frac 1{ \sqrt s}+\varepsilon\, s\, e^{C\,\varepsilon\, s}\right)\,\left(\sqrt t+1+\varepsilon \,t^2\, e^{C\,\varepsilon\, t}\right)\,ds.\nonumber
\end{align}
We estimate
\[\int_1^t\frac{s e^{C\varepsilon s}}{\sqrt{t-s}}\,ds\leq t e^{C\varepsilon t}\int_1^t\frac{ds}{\sqrt{t-s}}\leq Ct^{\frac 32} e^{C\varepsilon t},\]
and then
\begin{align}
&\left\| x \mapsto f(t,x)-\frac{f(0,x)e^{-x^2 t}}{\frac{f(0,0)\,\int_I e^{-x^2}\,dx}{(1-\varepsilon)\sqrt t}+\mu(t)}\right\|_{L^1(I)}\nonumber\\
&\quad\leq C\varepsilon \left(1+\frac 1{\sqrt t}+\varepsilon\, t^{\frac 3 2}\, e^{C\,\varepsilon\, t}\right)\nonumber\\
&\qquad+C\,\varepsilon \,\left(1+\varepsilon t^{\frac 32} e^{C\varepsilon t}\right)\, \left(\sqrt t+1+\varepsilon\, t^2 \,e^{C\,\varepsilon\, t}\right)\nonumber\\
&\quad\leq C\left(\varepsilon+\varepsilon\sqrt t+\frac \varepsilon{\sqrt t}+ \left(\varepsilon t^{\frac 32}+\varepsilon^2 t^2+\varepsilon^2t^{\frac 32}+\varepsilon^3t^{\frac 72}\right)e^{C\varepsilon t}\right)\nonumber\\
&\quad\leq C\left(\frac \varepsilon{\sqrt t}+ \varepsilon t^{\frac 32}e^{C\varepsilon t}\right),\label{eq:estprofil2}
\end{align}
where we have used the fact that $\varepsilon t\leq Ce^{C\varepsilon t}$. Thanks to \eqref{eq:estprofil1} and \eqref{eq:estprofil2}, \eqref{eq:diff} becomes:
\begin{equation*}
\Bigg\|x \mapsto f(t,x)-\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\,\int_I e^{-y^2}\,dy}\Bigg\|_{L^1(I)}\leq C \left(\frac 1{\sqrt t}+\varepsilon\, t^{\frac 3 2}\,e^{C\,\varepsilon\, t}\right).
\end{equation*}
Theorem~\ref{thm:smallt} follows from this estimate.
\subsection{Proof of Corollary~\ref{cor:smallt}}
If we assume that $t\in\left[\frac 1{\kappa^2}, \kappa^{\frac 23} \varepsilon^{-\frac 23}\right]$, then \eqref{est:final-smallt} becomes
\begin{eqnarray*}
\Bigg\| x \mapsto f(t,x)-\frac{f(0,x)\sqrt t e^{-x^2 t}}{f(0,0)\, \int_I e^{-y^2}\,dy}\Bigg\|_{L^1(I)}&\leq& C \left(\kappa+\kappa\,e^{C\, \kappa^{\frac 23} \varepsilon^{\frac 13}}\right),
\end{eqnarray*}
and if furthermore $\varepsilon\leq \kappa\leq 1$, then
\begin{eqnarray*}
\Bigg\|x \mapsto f(t,x)-\frac{f(0,x)\,\sqrt t\, e^{-x^2 t}}{f(0,0)\, \int_I e^{-y^2}\,dy}\Bigg\|_{L^1(I)}&\leq& C\,\kappa,
\end{eqnarray*}
which proves Corollary~\ref{cor:smallt}, provided that $\kappa>0$ is small enough.
\medskip
{\bf{Acknowledgement}}: The research leading to this paper was funded by
the French ``ANR blanche'' project Kibord: ANR-13-BS01-0004, and ``ANR JCJC'' project MODEVOL ANR-13-JS01-0009. A.C. and S.C. were partially supported by the Ministerio de Ciencia e Innovación,
Grants MTM2011-27739-C04-02 and MTM2014-52402-C3-2-P.
Wave breaking occurs at the ocean surface at moderate to high wind speed, with significant impacts on the transfer of momentum, energy, and mass between the ocean and the atmosphere \citep{MELVILLE1996,DEIKE2022}. When waves break, the water surface overturns, which generates sea spray and greatly enhances the gas exchange. Visually it manifests as white-capping, widely observable at sea above a certain wind speed. Breaking acts as an energy sink for the waves: it limits the wave height by transferring the excess wave energy into underwater turbulence and currents, therefore influencing the upper-ocean dynamics as well \citep{MCWILLIAMS2016,ROMERO2017}.
Describing breaking waves analytically and numerically has been challenging due to their nonlinear nature and the fact that the interface becomes multi-valued. For a single breaker, scaling analyses have been successfully proposed for the energy dissipation and validated by laboratory experiments \citep{DRAZEN2008,PERLIN2013}; and thanks to advances in numerical methods and increasing computational power, high fidelity simulations of single 3D breakers have emerged \citep{WANG2016,DEIKE2016,MOSTERT2022,GAO2021}.
\citet{PHILLIPS1985} introduced the $\Lambda(\boldsymbol{c_b})$ distribution to describe the statistics of breaking waves, where $\Lambda(\boldsymbol{c_b})d\boldsymbol{c_b}$ is the expected length per unit sea surface area of breaking fronts propagating with speeds in the range of $(\boldsymbol{c_b}, \boldsymbol{c_b}+d\boldsymbol{c_b})$. The breaking front propagation speed $\boldsymbol{c_b}$ is chosen as the independent variable in place of the wavenumber $\boldsymbol{k}$ because it is more readily observable. The link to the wave spectrum is made through the core assumption that $c_{b}$ is proportional to the wave phase speed $c$, which in turn relates to $k$ by the linear dispersion relation $c=\sqrt{g/k}$. The omni-directional $\Lambda(c)$ distribution is predicted to have a $c^{-6}$ shape. The moments of the distribution have a physical interpretation, with the second moment related to the whitecap coverage, the third to mass exchange, the fourth to momentum flux and the fifth to energy dissipation by breaking \citep{PHILLIPS1985,KLEISS2010,ROMERO2019,DEIKE2018,DEIKE2022}.
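As an illustration of how these moments are used (a standard form following \citet{PHILLIPS1985}, with $\rho$ the water density and $b$ an empirical non-dimensional breaking strength parameter), the energy dissipated by breaking per unit ocean surface area is written as the fifth moment,
$$
\epsilon_{\mathrm{brk}} = \frac{b\,\rho}{g}\int c^{5}\,\Lambda(c)\,dc ,
$$
so that a $c^{-6}$ behaviour of $\Lambda(c)$ makes the integrand scale as $c^{-1}$: each decade of breaking front speeds then contributes comparably to the total dissipation.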
Several observational studies have provided measurements of the $\Lambda(c)$ distribution and its moments \citep{GEMMRICH2008,KLEISS2010,SUTHERLAND2013,BANNER2014,SCHWENDEMAN2015}, made possible by technical advances including ship-borne and air-borne visible and infrared imagery. Scaling relations have been proposed to describe the breaking statistics for a wide range of conditions, but they face the usual challenges of scatter in field data \citep{SUTHERLAND2013,DEIKE2018}, combined with ongoing discussions about the interpretation of the original framework of \citet{PHILLIPS1985} \citep{BANNER2014}.
Beyond the single breaker description, numerical methods have so far been unable to describe the breaking statistics emerging from an ensemble of propagating surface waves. We propose a numerical framework, leveraging a novel multi-layer formulation of the Navier-Stokes equations and its numerical implementation \citep{POPINET2020}, which is able to capture the multi-scale nonlinear wave field, together with the intermittent incidences of breaking. The wave field is initialised using characteristic wind wave spectra based on field observations. We report the kinematics of the breaking statistics, $\Lambda(c)$, and its scaling with the mean-square slope and discuss how to link our results to field measurements.
\section{Numerical method} \label{sec:multilayer}
\subsection{The multi-layer framework}
We introduce the modelling framework (sketched in figure \ref{fig:layers_illustration}) proposed by \citet{POPINET2020}, based on a vertically-Lagrangian discretisation of the Navier-Stokes equations. It extends the shallow-water single-layer Saint-Venant model to include multiple layers.
We solve a weak form of the equations (vertically-integrated conservation laws) in a generalised vertical coordinate.
Given $N_L$ layers in total, for layer number $l$ the mass and the momentum conservation equations are \citep{POPINET2020}:
\begin{eqnarray}
\frac{\partial h_l}{\partial t} + \nabla_H \cdot (h\boldsymbol{u})_l &=& 0\label{eqn:numerical1} \\
\frac{\partial (h\boldsymbol{u})_l}{\partial t} + \nabla_H \cdot (h\boldsymbol{u}\boldsymbol{u})_l &=& -g h_l \nabla_H \eta - \nabla_H(hp_{nh})_l + [p_{nh}\nabla_H z ]_l\label{eqn:numerical2} \\
\frac{\partial (hw)_l}{\partial t} + \nabla_H \cdot(hw \boldsymbol{u})_l &=& -[p_{nh}]_l\label{eqn:numerical3} \\
\nabla_H \cdot (h\boldsymbol{u})_l + [w-\boldsymbol{u}\cdot \nabla_H z ]_l &=& 0 \label{eqn:numerical4}
\end{eqnarray}
with $l$ the index of the layer, $h$ its thickness, $\boldsymbol{u}$, $w$ the horizontal and vertical components of the velocity, $p_{nh}$ the non-hydrostatic pressure (divided by the density). The surface elevation $\eta = z_b + \sum_{l=0}^{N_L} h_l$, and the $[\;]_l$ operator denotes the vertical difference, i.e. $[f]_l = f_{l+1/2} - f_{l-1/2}$.
There are four unknowns $h_l$, $\boldsymbol{u_l}$, $w_l$ and $p_{nhl}$ for each layer. Equation \ref{eqn:numerical1} represents conservation of volume in each layer for layer thicknesses $h_l$ following material surfaces (i.e. the discretisation is vertically Lagrangian). Equations \ref{eqn:numerical2} and \ref{eqn:numerical3} are the horizontal and vertical momentum equations. Equation \ref{eqn:numerical4} is the mass conservation equation. The time integration includes an `advection' step and a `remapping' step. In the `advection' step, equations \ref{eqn:numerical1} to \ref{eqn:numerical4} are advanced in time. In the `remapping' step, the layers are remapped, if necessary, onto a prescribed coordinate to prevent any severe distortion of the layer interface.
Note that this set of equations does not make any assumption on the slope of the layers, which explains the $\nabla_H z$ `metric' terms appearing in the horizontal momentum equation (\ref{eqn:numerical2}) and incompressibility condition (\ref{eqn:numerical4}). This is particularly important in the context of steep breaking waves. One can further demonstrate that this set of semi-discrete equations is a consistent discretisation of the incompressible Euler equations with a free-surface and bottom boundary \citep{POPINET2020}.
Note that, in the hydrostatic and small-slope limit, generalised vertical coordinates are widely used in ocean models \citep{GRIFFIES2020}, due to the anisotropic nature of geophysical flows. The choice of the target remapped discretisation is flexible and reflects physical considerations. Here, the remapping step uses a geometric progression of the layer thicknesses which ensures higher vertical resolution of the boundary layer under the free-surface.
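As a concrete illustration, the target thicknesses of such a geometric progression can be generated as follows (a minimal NumPy sketch, not the actual implementation; the convention placing the thinnest layer at the free surface is our assumption):
\begin{verbatim}
import numpy as np

def target_thicknesses(H, n_layers=15, ratio=1.29):
    # geometric progression of layer thicknesses, normalised
    # so that they sum to the local water depth H; ordered from
    # the free surface (thinnest) down to the bottom (thickest)
    w = ratio**np.arange(n_layers)
    return H*w/w.sum()

print(target_thicknesses(1.0))  # e.g. unit depth, 15 layers
\end{verbatim}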
The numerical schemes (spatial and temporal discretisations, field collocation, grid remapping, etc.) are described in detail in \citet{POPINET2020}, and ensure accurate dispersion relations and momentum conservation.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/illustration.pdf}
\caption{The layers in the multilayer model, and the fields of each layer. All the fields are functions of horizontal position $\boldsymbol{x}=(x,y)$ and time $t$. Due to the geometric progression choice, there is a fixed depth ratio between two adjacent layers.}
\label{fig:layers_illustration}
\end{figure}
\subsection{Numerical model for breaking}
The dissipation due to breaking is modelled with a simple, \emph{ad-hoc} approximation which can be related to the dissipation due to hydraulic jumps in the Saint-Venant system, known to be a surprisingly good first-order model for (shallow-water) breaking waves \citep{BROCCHINI2008}. The horizontal slope for any layer $\partial z/\partial x$ in equations \ref{eqn:numerical2} and \ref{eqn:numerical4} is limited by a maximum value $s_\text{max}$:
\begin{equation}
\partial z/ \partial x = \left\{
\begin{array}{ll}
\partial z/\partial x, & |\partial z/\partial x| \leq s_\text{max} \\[2pt]
\text{sign}(\partial z/\partial x)s_\text{max}, & |\partial z/\partial x| > s_\text{max}.
\end{array} \right.
\end{equation}
The maximum slope $s_\text{max}$ is set to 0.577. The same applies to $\partial z/\partial y$. The slope limiter acts to stabilise the solver, and dissipates some amount of energy.
We have tested that altering the value of $s_\text{max}$ between 0.4 and 0.6 does not change the numerical results significantly. Note that, given enough horizontal resolution and vertical layers, and with added viscous diffusion terms, the multilayer model converges to the full Navier-Stokes equations and resolves the underwater turbulence; the dissipation rate obtained from breaking is then close to that obtained with direct numerical simulations.
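In discrete form, the limiter is simply a clamp of the computed interface slopes. A minimal NumPy sketch (illustrative only, not the actual solver code) could read:
\begin{verbatim}
import numpy as np

def limit_slope(dzdx, s_max=0.577):
    # clamp |dz/dx| at s_max while keeping the sign, as in the
    # breaking model above; the same treatment applies to dz/dy
    return np.clip(dzdx, -s_max, s_max)
\end{verbatim}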
In the rest of the paper, we analyse the \emph{occurrence} of breaking fronts as geometric features of the surface height $\eta$, and investigate the relation between the wave statistics (wave spectrum) and breaking statistics (distribution of length of breaking crest).
\subsection{Numerical simulations of actively breaking wave fields}
We initialise the wave field with an azimuth-integrated wavenumber spectrum of the following shape (inspired by field measurements such as \citet{ROMERO2010} and \citet{LENAIN2017}; see the discussion in \citet{DEIKE2022}),
\begin{equation}\label{eqn:spectrum_init}
\phi(k) = Pg^{-1/2}k^{-2.5}\exp[-1.25(k_p/k)^2].
\end{equation}
The value of $P$ controls how energetic the wave field is, and has the dimension of a velocity, while $k_p$ is the peak wavenumber of the wave spectrum. Variations of the spectral parameters $k_p$ and $P$ can be summarised into a single non-dimensional global effective slope $k_p H_s$, where $H_s = 4\langle \eta^2 \rangle ^ {1/2}$ is the significant wave height. The global slope $k_p H_s$ is varied from 0.1 to 0.32 (almost no breaking waves to strongly breaking field).
The ratio $k_pL_0$, with $L_0$ the domain size, is kept constant at a sufficiently large value ($k_pL_0=10\pi$) to avoid confinement effects, and we have verified that the results are independent of this ratio. The total water depth is chosen to be $2\pi/k_p$ to ensure a deep water condition.
The directional spectrum is $F(k,\theta) = (\phi(k)/k){\text{cos}^{N}(\theta)}/{\int_{-\pi/2}^{\pi/2} \cos^N(\theta) d\theta}$, with $\theta \in [-\pi/2, \pi/2]$.
The directional spreading is controlled by $N$, with $N=5$ for most cases, and we have tested $N=2$ (more spreading) and $N=10$ (less spreading).
The initial wave field is a superposition of linear waves: $\eta = \sum_{i,j} a_{ij}\cos(\psi_{ij})$, with the amplitude $a_{ij} = [2F(k_{xi},k_{yj})dk_xdk_y]^{1/2}$, and the phase $\psi_{ij} = k_{xi} x + k_{yj} y + \psi_{\text{rand}}$, where $\psi_{\text{rand}}$ is a random initial phase.
The corresponding orbital velocity is initialised similarly according to the linear wave relation. We use a uniformly spaced initial 32 $\times$ 33 array of ($k_{xi}$,$k_{yj}$). The wavenumbers are truncated, and chosen at discrete values of $k_{x} = ik_p/5$ for $i \in [1,32]$, and $k_{y} = jk_p/5$ for $j \in [-16,16]$, respectively. The horizontal resolution is $N_x = N_y= 1024$, and the layer number is $N_L = 15$, with a geometric progression common ratio of 1.29. We have verified that the results presented here are numerically converged in terms of layer number (by running cases with 30 vertical layers), as well as horizontal resolution (see \S\ref{sec:Lambda_c}).
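For concreteness, this initialisation can be sketched as follows (a minimal NumPy sketch, not the actual solver setup; the values of $g$, $k_p$, $P$ and the random seed are placeholders):
\begin{verbatim}
import numpy as np

g, kp, P, N = 9.81, 1.0, 0.1, 5
i, j = np.arange(1, 33), np.arange(-16, 17)
KX, KY = np.meshgrid(i*kp/5, j*kp/5, indexing='ij')
K, TH = np.hypot(KX, KY), np.arctan2(KY, KX)

phi = P*g**-0.5*K**-2.5*np.exp(-1.25*(kp/K)**2)  # spectrum phi(k)
th = np.linspace(-np.pi/2, np.pi/2, 201)
norm = np.sum(np.cos(th)**N)*(th[1] - th[0])     # int cos^N dtheta
F = phi/K*np.cos(TH)**N/norm                     # F(k, theta)

dk = kp/5
a = np.sqrt(2*F*dk*dk)                           # amplitudes a_ij
psi_rand = 2*np.pi*np.random.default_rng(0).random(a.shape)

def eta(x, y):
    # initial elevation as a superposition of linear waves
    return np.sum(a*np.cos(KX*x + KY*y + psi_rand))
\end{verbatim}
The orbital velocities would be filled in the same way from the linear wave relations.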
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figures/figure3_new.png}
\caption{(a) Snapshots of the wave field development for the case of effective slope $k_pH_s=0.233$. Breaking statistics are collected between $\omega_p t=124$ and $\omega_p t=149$ (indicated by the red box). (b) The wave energy spectrum on the frequency-wavenumber plane. The dotted white line is the linear dispersion relation of surface gravity waves $k=\omega^2/g$. (c) Time evolution of the omni-directional wave spectrum $\phi(k)$, corresponding to the snapshots in (a). (d) Energy evolution of the wave field. Purple: potential energy $E_p$; blue: kinetic energy $E_k$; pink: total energy. (e) 3D rendering of the breaking wave field with the colour indicating the surface layer flow velocity. Inset shows the curvature of the breaking fronts as the detection criterion. (f) A more focused view taken from the dotted white square in (a); The arrows are showing the velocity magnitude and direction of each length element of the breaking fronts.}
\label{fig:spectrum_evo}
\end{figure}
Figure \ref{fig:spectrum_evo}(a) shows the time evolution of the wave field and the corresponding wave spectrum. The wave field is visually realistic. The space-time wave elevation spectrum is shown in figure \ref{fig:spectrum_evo}(b) and the energy is localised around the curve given by the gravity wave linear dispersion relation $\omega=\sqrt{gk}$, together with an extra branch corresponding to bound waves.
Since we start with a truncated spectrum, the initial wave field is smooth while the small scale features develop over time. There is an energy transfer into the higher wavenumbers which eventually leads to a stable spectrum shape. It is not a Kolmogorov--Zakharov type wave energy cascade as described by wave turbulence theory \citep{ZAKHAROV1992}, since the weak non-linearity assumption does not hold and the small features are mostly generated by the breaking events.
A quasi-steady spectrum is obtained typically for $\omega_p t>100$, as seen in figure \ref{fig:spectrum_evo}(c), with the wave statistics independent of the initial conditions, and we measure the breaking statistics between $\omega_p t=124$ and $\omega_p t=149$. Since there is no forcing mechanism, the total wave energy is slowly decaying (as shown in figure \ref{fig:spectrum_evo}(d)). The dissipation is primarily due to the slope-limiter, and is of the order of magnitude of the known dissipation due to breaking \citep{DRAZEN2008}. We have also verified that the spatially and temporally averaged statistics are a good representation of the ensemble average.
\subsection{Procedure of breaking front detection and velocity measurement}
The wave field evolves and breaking occurs intermittently in space and time. We detect the breaking fronts and their velocity, and construct the length of breaking crest distribution. The breaking fronts are defined geometrically as sharp enough ridges of the surface, as illustrated in figure \ref{fig:spectrum_evo}(e). Given a surface elevation $\eta(x,y)$ at one time instance, we find its principal curvatures $\kappa_1$ and $\kappa_2$, and determine the location of the breaking fronts by the threshold $\kappa_2 < -3k_p$ (`ridges' of the $\eta$ surface), which works well across the different scales.
After the breaking regions (areas) are detected, we extract the breaking fronts (lines), shown in figure \ref{fig:spectrum_evo}(f). Then we use the surface layer Eulerian velocity ($\boldsymbol{u_{l-1}}$ in figure \ref{fig:layers_illustration}) as an estimate of the Lagrangian velocity of the breaking fronts $\boldsymbol{c_b}$. The velocity is mapped on each discretised cell on the lines, which represents an element of length $L_0/N_x$. Figure \ref{fig:spectrum_evo}(f) shows the mapped velocity magnitude and direction with arrows. The directionality of $\boldsymbol{c_b}$ is not discussed in this work, i.e. we only consider the magnitude $c_b=|\boldsymbol{c_b}|$. We have tested an alternative velocity mapping method by computing the correlation function between two consecutive images of the detected crests (similar to particle tracking velocimetry), and found no significant difference in the velocity magnitude detected or the resulting $\Lambda(c_b)$ distribution.
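A minimal sketch of this detection step (NumPy; we use the eigenvalues of the Hessian of $\eta$ as small-slope stand-ins for the principal curvatures, an approximation of our own rather than necessarily the exact post-processing used here):
\begin{verbatim}
import numpy as np

def breaking_mask(eta, dx, kp):
    # first and second derivatives of the surface elevation
    ex, ey = np.gradient(eta, dx)
    exx, exy = np.gradient(ex, dx)
    _, eyy = np.gradient(ey, dx)
    # smaller Hessian eigenvalue ~ smaller principal curvature
    mean = 0.5*(exx + eyy)
    diff = np.hypot(0.5*(exx - eyy), exy)
    kappa2 = mean - diff
    # breaking fronts: sharp ridges with kappa_2 < -3 kp
    return kappa2 < -3.0*kp
\end{verbatim}
The connected regions of the mask are then thinned into front lines before the velocity mapping described above.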
We follow \citet{PHILLIPS1985,KLEISS2010,SUTHERLAND2013} and assume that $c=c_b$; and we use a correspondence between the breaking front velocity and the underlying wavenumber through the dispersion relation $c=\sqrt{g/k}$.
We note that observations have shown that $c_{b} = \alpha c$ where $\alpha$ is between 0.7 and 0.95 \citep{RAPP1990,BANNER2014,ROMERO2019}, at least for large breakers. In the processing we filter out the smaller scale breakers by imposing a filter $\eta(x,y) > 2.5 \langle \eta^2 \rangle^{1/2}$, so that only the large breakers with surface elevation above 2.5 times the rms value are included. As a result, no further corrections for the underlying long wave orbital velocity are needed.
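Given the detected front elements and their mapped speeds, the distribution itself is a normalised histogram. Schematically (a sketch under the definitions above, with \texttt{speeds} collecting $|\boldsymbol{c_b}|$ of every length element $dl=L_0/N_x$ over all analysed frames):
\begin{verbatim}
import numpy as np

def lambda_of_c(speeds, dl, area, n_frames, bins=50):
    # Lambda(c) dc = expected length of breaking fronts per
    # unit surface area with speed in (c, c + dc)
    counts, edges = np.histogram(speeds, bins=bins)
    dc = np.diff(edges)
    Lam = counts*dl/(area*n_frames*dc)
    c = 0.5*(edges[1:] + edges[:-1])
    return c, Lam
\end{verbatim}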
\section{Statistics of wave breaking} \label{sec:Lambda_c}
\subsection{Wave statistics}
We study the relation of the breaking statistics with the wave spectrum. Figure \ref{fig:spectra+lambdac}(a) shows the non-dimensional wave spectra for the various conditions, with variations in spectrum maxima larger than one order of magnitude, and described by power laws ranging from $\phi(k)\propto k^{-2.5}$ to $\phi(k)\propto k^{-3}$. Although the energy close to the peak frequency varies, a fixed level of saturation seems to be reached for the steeper cases, with overlapping spectra in the $k^{-3}$ range.
Together with the global slope $k_pH_s$, wave statistics can be characterised by the root mean square slope $\sigma$ \citep{MUNK2009}, which is more sensitive to high frequencies. The low-pass filtered steepness parameter $\mu(k)$ is defined as the cumulative root mean square slope: $\mu^2(k) = \int_0^k k'^2\phi(k') dk'$, and $\sigma$ is the asymptotic value of $\mu$ with a cutoff at the highest wavenumber we can numerically resolve $k_{max}$: $\sigma^2 = \mu^2(k\to k_{max})$. As we see in figure \ref{fig:spectra+lambdac}(a), the value of $\mu(k)$ plateaus due to the drop-off of the spectrum. In weakly nonlinear theories (such as wave-turbulence theory), $\mu < 0.1$ is used to justify the asymptotic expansions, at least for the range of $k$ considered \citep{ZAKHAROV1992}. All the breaking cases in our simulation have $\mu$ close to or higher than 0.1, underlining the strong nonlinearity of the breaking wave field. The correlation between the two global slope parameters $k_pH_s$ (zeroth moment of the spectrum) and $\sigma$ (second moment of the spectrum) is shown in figure \ref{fig:spectra+lambdac}(b), which we caution is specific to the spectrum shape.
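A minimal numerical recipe for $\mu(k)$ is sketched below (illustrative, not the analysis code); it assumes an omnidirectional spectrum $\phi$ sampled on a wavenumber grid $k$ and accumulates the integral with the trapezoidal rule.
\begin{verbatim}
import numpy as np

def mu_of_k(k, phi):
    """Cumulative rms slope: mu^2(k) = int_0^k k'^2 phi(k') dk'."""
    f = k**2 * phi
    mu2 = np.concatenate(([0.0],
          np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(k))))
    return np.sqrt(mu2)

# sigma is the plateau value at the highest resolved wavenumber:
# sigma = mu_of_k(k, phi)[-1]
\end{verbatim}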
\subsection{$\Lambda(c)$ distribution}
Figure \ref{fig:spectra+lambdac}(c) shows the breaking distribution $\Lambda(c)$ for increasing $k_p H_s$ (and $\sigma$) values and the various directionalities.
There is a clear peak indicating the most probable smaller breakers, increasing from $c= 0.2c_p$ to $0.3c_p$ when the slope increases. There is no breaking for the smallest $\sigma=0.065$ case ($k_p H_s= 0.117$) (not shown in figure \ref{fig:spectra+lambdac}(c)), while an increase in slope to $\sigma=0.085$ ($k_p H_s=0.150$) and $\sigma=0.101$ ($k_p H_s= 0.169$) starts to generate breakers. The extent of breaking speeds is further increased for the steeper cases with $\sigma>0.101$, with a clear $\Lambda(c)\propto c^{-6}$ scaling up to around $0.9c_p$. This indicates that there exists a critical value of $\sigma$, below which the breaking wave field is not saturated, with the threshold expected to depend on the spectrum shape.
The shaded area in figure \ref{fig:spectra+lambdac}(c) spans the range of breaker velocities between the peak $\Lambda_{max}$ and $\Lambda(c)=0.01\Lambda_{max}$ (for the case of $\sigma=0.153$), where a $\Lambda(c)\propto c^{-6}$ scaling can be clearly observed. The same range in the $k$-space is shaded in figure \ref{fig:spectra+lambdac}(a) as well. The upper limit of $\Lambda(c)=0.01\Lambda_{max}$ corresponds to a lower limit of $k\approx4k_p$. Above the corresponding velocity, breakers near $k_p$ are very rare. We note that removing the filter of $\eta(x,y) > 2.5 \langle \eta^2 \rangle^{1/2}$ only changes the part of the $\Lambda(c)$ distribution left of the peak, but does not affect the part with $c$ larger than the peak. Similarly, further increasing the horizontal resolution would extend the distribution toward even smaller $c$ (shifting the peak accordingly), but the presented $\Lambda(c)\propto c^{-6}$ scaling is unchanged.
$\Lambda(c)$ distributions from spectra of different directionality $N$ are also shown with different lines (dashed lines indicate more spreading ($N=2$) and dotted lines less spreading ($N=10$)). For steep enough cases ($\sigma > 0.101 $), there is little difference in the $\Lambda(c)$ distribution between cases with different $N$, while for intermediate steepness ($\sigma = 0.085$ and $0.101$), there is a notable sensitivity to $N$. For the $N=10$ cases with more concentrated wave energy, there are overall more breaking events.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figures/figure4_new-eps-converted-to.pdf}
\caption{\label{fig:spectra+lambdac} (a) The wave energy spectra in non-dimensional form; the vertical gray line is $k_pL_0 = 10\pi$. Darker colour indicates larger global slope $k_pH_s$ (see (b) for the values). (b) The correlation of root mean square slope $\sigma$ and the global slope $k_pH_s$ in the simulated cases. (c) The non-dimensional breaking distribution $\Lambda(c)$ normalised by $c_p$ and $g$. Solid lines: directional spreading parameter $N=5$; dashed lines: $N=2$; dotted lines: $N=10$. (d) Proposed scaling for the $\Lambda(c)$ distribution using $\sigma$ and $c_p$.}
\end{figure}
Non-dimensional scalings of the $\Lambda(c)$ distribution have been proposed, with \citet{PHILLIPS1985} using only the wind speed, while papers based on field data used a combination of wind speed, wave spectrum peak speed and significant wave height \citep{SUTHERLAND2013,DEIKE2018}. Since we have no wind forcing in the simulations, the breaking statistics is expected to scale only with the non-dimensional slope $\sigma$ and spectrum peak speed $c_p$. By rescaling $c$ using $\hat{c} = c/(\sigma c_p)$ and $\Lambda(c)$ using $\hat{\Lambda}(c) = \Lambda(c)c_p^3g^{-1}\sigma^{-2}$, we obtain a normalised distribution:
\begin{equation}
\Lambda(c)c_p^3g^{-1}\sigma^{-2} \propto \left(c/(\sigma c_p)\right)^{-6} \label{eqn:lambdac_scaling}
\end{equation}
shown in figure \ref{fig:spectra+lambdac}(d), which collapses not only the $\hat{c}^{-6}$ power law region but also the peak location well (for the steep enough cases).
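The rescaling itself is a one-line operation; a hypothetical helper (the function name and default gravity value are ours) reads:
\begin{verbatim}
import numpy as np

def rescale_lambda(c, lam, sigma, cp, g=9.81):
    """c_hat = c/(sigma*cp), Lambda_hat = Lambda*cp^3/(g*sigma^2)."""
    return c / (sigma * cp), lam * cp**3 / (g * sigma**2)
\end{verbatim}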
Alternatively, since the effective slope $k_pH_s$ and $\sigma$ are correlated, a scaling using $(k_pH_s)^{1/2}$ could be proposed. However, we have found that $\sigma$ works better than $k_pH_s$ as a scaling parameter in this case.
\subsection{Comparison to the \citet{PHILLIPS1985} theory and observations}
\citet{PHILLIPS1985} predicted a purely wind-based scaling $\Lambda(c)\propto u_*^3 g c^{-6}$ through an energy balance argument. The wave action balance equation $d[g\phi(k)/\omega]/dt = S_{nl}(k)+S_{in}(k)+S_{diss}(k)$ involves the following source terms: divergence of the nonlinear energy flux $S_{nl}$, wind input $S_{in}$, and dissipation due to breaking $S_{diss}$, written as \citep{PHILLIPS1985}
\begin{equation}
S_{nl} \propto gk^{-3}B^3(k), \; S_{in} \propto gk^{-3}(\frac{u_*}{c})^2 B(k), \; \text{and} \; S_{diss} \propto gk^{-3} f(B(k))
\end{equation}
with the saturation $B(k)=k^3\phi(k)$, and $f(B(k))$ a functional dependence solely on $B(k)$ (assuming that breaking and the consequent dissipation `\textit{are the result of local excesses, however these excesses are produced}'). The balance between $S_{nl}$ and $S_{diss}$ leads to $f(B)\propto B^3$, and therefore $S_{diss} \propto gk^{-3}B^{3}(k)$.
The breaking front distribution $\Lambda(c)$ is then obtained by writing the equality between dissipation in the $k$-space and the $c$-space: $\epsilon(k)dk = \epsilon(c)dc$. The LHS is $\epsilon(k)dk = (S_{diss}\omega) dk$; the RHS can be related to the fifth moment of $\Lambda(c)$ through a scaling argument $\epsilon(c)dc = bg^{-1}c^5\Lambda(c)dc$, where $b$ is a non-dimensional breaking parameter \citep{DUNCAN1981,PHILLIPS1985}. Substituting a spectral shape of $\phi(k)\propto k^{-5/2}$ into $S_{diss}$ then leads to $\Lambda(c)\propto c^{-6}$. Considering the equilibrium range, where $S_{nl} \propto S_{in}$ \citep{PHILLIPS1985}, gives $\phi(k) \propto u_*g^{-1/2}k^{-2.5}$, which leads to $\Lambda(c) \propto u_*^3gc^{-6}$.
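For completeness, the chain of substitutions can be written out. With $\phi(k)\propto u_*g^{-1/2}k^{-5/2}$,
\begin{equation}
B(k) \propto u_*g^{-1/2}k^{1/2}, \quad S_{diss} \propto gk^{-3}B^3 \propto u_*^3g^{-1/2}k^{-3/2}, \quad \epsilon(k) = S_{diss}\,\omega \propto u_*^3k^{-1},
\end{equation}
and changing variables with $k=g/c^2$, $|dk/dc|=2g/c^3$, gives $\epsilon(c)\propto u_*^3c^{-1}$, so that $\Lambda(c) = g\epsilon(c)/(bc^5) \propto u_*^3gc^{-6}$.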
Several field campaigns have observed the $c^{-6}$ power-law, despite also finding that the purely wind-based prefactor $u_*^3$ does not describe the data well. Empirical modifications have been proposed \citep{SUTHERLAND2013,DEIKE2018} using $\sqrt{gH_s}$ and $c_p$ in addition to $u_*$, in the form of $\Lambda(c)c_p^3 g^{-1} (c_p/u_*)^{1/2}\propto (c/\sqrt{g H_s})^{-6}$, which significantly improved the collapse between data sets.
To better interpret the empirical scaling found in field observations, we perform a re-analysis of the numerical data together with the field data.
We examine the slope-based scaling in the present work and the mixed-wind-slope-based scaling found in field data. Since there is no explicit wind forcing in our simulations, the wind speed and fetch/duration information is encoded in the spectrum. We use the empirical (but very robust) fetch-limited relationships \citep{TOBA1972}, which link the non-dimensional wave energy $gH_s u_*^{-2}$ and the non-dimensional frequency $\omega_p g^{-1}u_*$ (wave age $u_*/c_p$) by
\begin{equation}
gH_s/u_*^2 = C (u_*/c_p)^{-3/2} \label{eqn:fetch_limited}
\end{equation}
where $C$ is an order 1 constant. Using (\ref{eqn:fetch_limited}) it is straightforward to show that the scaling (\ref{eqn:lambdac_scaling}) with $\sigma \propto \sqrt{k_pH_s}$ is equivalent to the scaling from \citet{SUTHERLAND2013}, since $k_pH_s \propto (u_*/c_p)^{1/2}$. Figure \ref{fig:data}(a) shows the breaking distributions scaled following \citet{SUTHERLAND2013} (the wind speed $u_*$ is inferred using (\ref{eqn:fetch_limited}) in our modelled cases), and good agreement is observed, indicating that the slope-based scaling is fully compatible with the mixed-wind-slope-based scalings in the field. Note that the observational data are obtained from complex sea states, and do not necessarily have the same spectrum shape as the current numerical data.
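The intermediate algebra is short. Using $k_p=g/c_p^2$ together with (\ref{eqn:fetch_limited}),
\begin{equation}
k_pH_s = \frac{g}{c_p^2}H_s \propto \frac{g}{c_p^2}\,\frac{u_*^2}{g}\left(\frac{u_*}{c_p}\right)^{-3/2} = \left(\frac{u_*}{c_p}\right)^{1/2},
\end{equation}
so that $\sigma^{-2}\propto(k_pH_s)^{-1}\propto(c_p/u_*)^{1/2}$ and $\sigma c_p\propto(k_pH_s)^{1/2}c_p = \sqrt{gH_s}$, reproducing both sides of the \citet{SUTHERLAND2013} form.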
\begin{figure}
\centering
\includegraphics[height=0.27\linewidth]{figures/figure4a_level10.pdf}
\includegraphics[height=0.27\linewidth]{figures/figure4b_level10.pdf}
\caption{\label{fig:data} Comparison with observational data. (a) Rescaled $\Lambda(c)$ distribution following \cite{SUTHERLAND2013} with simplifications \citep{DEIKE2022}. (b) Whitecap coverage $W$ as a function of 10-meter wind speed $U_{10}$.
}
\end{figure}
The wave field at a given time and location is the result of the wind forcing history, and breaking is caused by excessive energy in the wave spectrum. For a mature and steep enough wave field, breaking (particularly at large scale) is primarily dictated by the wave spectrum itself, while for younger and less steep wave fields, breaking can be more closely coupled to the wind forcing. This explains why the slope-based scaling or a mixed-wind-slope-based scaling can better fit data from various sea states.
Finally, we can infer classic breaking metrics such as the whitecap coverage from our simulations and compare with more field data sets. The whitecap coverage $W$ quantifies the fraction of the wave surface covered by white foam, and can be estimated through the second moment of $\Lambda(c)$ as $W=2\pi\gamma g^{-1}\int c^2\Lambda(c) \:d{c}$, where $\gamma$ is a dimensionless constant representing the ratio of breaking time to wave period (here $\gamma = 0.56$ following \citet{ROMERO2019}). Figure \ref{fig:data}(b) shows $W$ as a function of the 10-meter wind speed $U_{10}$ (estimated from $u_*$ for our data using the COARE parameterisation \citep{EDSON2013}). The modelled whitecap coverage falls within the scatter of recent data sets \citep{CALLAGHAN2008,KLEISS2010,SCHWENDEMAN2015,BRUMER2017}.
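The second-moment estimate of $W$ is straightforward to evaluate numerically; a minimal sketch (with $\gamma=0.56$ as above) is:
\begin{verbatim}
import numpy as np

def whitecap_coverage(c, lam, gamma=0.56, g=9.81):
    """W = (2*pi*gamma/g) * int c^2 Lambda(c) dc, trapezoidal
    rule over the resolved range of breaking front speeds."""
    return 2.0 * np.pi * gamma / g * np.trapz(c**2 * lam, c)
\end{verbatim}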
\section{Conclusion}
We demonstrate that a novel multilayer model \citep{POPINET2020} can be used to study the breaking statistics associated with an ensemble of phase-resolved surface waves simulated in the physical space. We analyse the breaking front distribution introduced by \cite{PHILLIPS1985}, and find good agreement with field observations. The breaking distribution follows $\Lambda(c) \propto c^{-6}$ even in the absence of wind input, and can be scaled by the mean square slope, indicating that the universal breaking kinematics is primarily governed by the wave field itself, while the wind controls the development of the wave spectrum. The proposed scaling in terms of the mean square slope is fully compatible with empirical relationships used to describe field data.
Our approach provides an unprecedented numerical framework to study breaking statistics for complex wave spectra, which could help to understand the breaking distribution in complex seas (in the presence of swell or currents) and complement existing modelling approaches such as \citet{ROMERO2019}. In addition to the physical discussion of breaking statistics, we demonstrate the capability of the multi-layer approach to solve highly nonlinear geophysical flows with strong vertical-horizontal anisotropy.
\bibliographystyle{jfm}
\section{Introduction}
Following the pioneering work of Gupta and Kumar \cite{Gupta00}, scaling laws for \emph{throughput} have been extensively studied in the literature, e.g., \cite{Gamal06a, Gamal06b, Grossglauser02, Neely05, Sharma06}, which studied dense and extended networks considering static and mobile nodes. This line of work, which started with the Gupta--Kumar multi-hop scheme achieving a total throughput of $O(\sqrt{n})$ for the network, and hence $O(\frac{1}{\sqrt{n}})$ throughput per user, has culminated in the seminal papers of Ozgur et al. \cite{Ayfer07, Ayfer10}, which achieved a total throughput of $O(n)$ for the network, and hence $O(1)$ throughput per user, by making use of \emph{hierarchical cooperation} between nodes. In this paper, we study the scaling of \emph{age of information} in large wireless networks.
Age of information is a recently proposed metric that measures the freshness of the received information. A typical model to study age of information includes a source which acquires time-stamped status updates from a physical phenomenon. These updates are transmitted over a network to the receiver(s) and age of information in this network or simply the $age$ is the time elapsed since the most recent update at the receiver was generated at the transmitter. In other words, at time $t$, age $\Delta(t)$ of a packet which was generated at time $u(t)$ is $\Delta(t) = t-u(t)$. Age of information as the freshness metric of a system has been extensively studied in a queueing-theoretic setting in references \cite{Kaul12b, Costa14, Huang15, Yates15, Bedewy16, Sun16b, Najm17, Kadota16} and in an energy harvesting setting in references \cite{Arafa17b, Arafa17a, Bacinoglu15, Wu18, Arafa18a, Arafa18b, Arafa18d, Baknina18a, Baknina18b}.
Considering dense IoT deployments and the increase in the number of users in networks supplying time-sensitive information, the scalability of age as a function of the number of nodes has become a critical issue. What makes age analysis in a large network challenging is the fact that good age performance corresponds to neither high throughput nor low delay. As argued in references \cite{Kadota16} and \cite{Pappas15}, for optimized timeliness of the updates, we need regular packet delivery with low delay. Maximum throughput can be achieved by sending as many updates as possible from the source. However, this may cause congestion in the system resulting in stale packet deliveries at the destination. Likewise, delay in the network can be reduced by decreasing the update rate which results in outdated information at the destination since the update delivery frequency is low. In this paper, we balance these two opposing objectives, and develop an achievable scheme that strikes a balance between the two in large networks.
References that are most closely related to our work are \cite{Ioannidis09, Zhong17a, Buyukates18, Jiang18a}. Reference \cite{Ioannidis09} studies a mobile social network with a single service provider and $n$ communicating users, and shows that under Poisson \emph{contact processes} among users and uniform rate allocation from the service provider, the average age of the content at the users is $O(\log n)$. They also show that without the contact process between the users, age grows linearly in $n$, as the service provider serves only one user at a time. Reference \cite{Zhong17a} studies the age scaling in a multicast setting and shows that if the source waits for the earliest $k$ of the total $n$ nodes to successfully receive the update during the multicast before sending the next one, a constant age scaling can be achieved. In \cite{Buyukates18}, we extend this result to two-hop multicast networks showing that if earliest $k_1$ and earliest $k_2$ schemes are adapted in the first and second hops, respectively, we can again achieve a constant average age scaling at the end nodes. Note that all these works focus on a setting in which there is a single source that updates multiple destination nodes.
In this work, we focus on a multiple source-multiple destination setting and study a network of $n$ nodes on a fixed area that want to communicate with each other. Each node serves both as a source and a destination. In other words, nodes are paired randomly and we have $n$ source-destination (S-D) pairs to serve. Our goal is to find a transmission scheme which allows all $n$ S-D pairs to communicate and achieves the smallest average age scaling at the destination nodes.
As studied in \cite{Jiang18a}, a straightforward way to achieve successful communication between all S-D pairs is to use a round-robin policy such that at each turn only one source transmits to its destination and stays silent while all other sources transmit successively during their respective turns. This direct method achieves an age scaling of $O(n)$, i.e., age increases linearly in $n$: under this policy the average inter-update time at a destination node grows linearly in $n$, making the updates less frequent and causing the age to increase.
As in the setting of \cite{Gupta00}, a multihop scheme that involves successive transmissions between the source and destination nodes can be utilized. In that work the network is divided into cells such that each cell contains at least one node. Their scheme involves hops between these cells so that $O(\sqrt{n})$ messages are carried by each cell. Each of these cells can be considered a queue with multiple sources. As studied in \cite{Yates12}, the age of a single update packet that is served by a queue with $O(\sqrt{n})$ different packet streams is also $O(\sqrt{n})$ under the LCFS with preemption policy. This means that in a multihop scheme, after one hop the age of an update packet becomes $O(\sqrt{n})$ since the queue is shared by many other packets. Considering the fact that the number of hops needed is $O(\sqrt{n})$, using a multihop scheme the average age scales as $O(n)$, as in \cite{Jiang18a}.
As studied in reference \cite{Ayfer07}, a more complicated scheme involving hierarchical cooperation between the users locally and MIMO transmissions across the network can be employed. Although this approach gives the optimal throughput scaling in a large network, in \cite{Ayfer10} the authors also show that their original hierarchical scheme has a poor delay performance, in that it scales as $O(n^{\frac{h}{h+1}})$, where $h$ is the number of hierarchy levels. When we have just one hierarchy level, i.e., $h=1$, the delay scaling is $O(\sqrt{n})$, which is the same as that of a multihop scheme, as shown in \cite{Gamal06a}. However, as we increase $h$ to achieve better throughput, the delay performance tends to $O(n)$.
In this paper, considering all these previous results, we propose a three-phase transmission scheme to serve all $n$ S-D pairs such that the time average age of each node is small. Our scheme utilizes local cooperation between the users as in \cite{Ayfer07}. We divide the network into cells of equal number of users. In the first phase, nodes from the same cell communicate to create a \emph{mega update packet}, which contains the updates of all nodes from that cell. In the second phase, inter-cell communication takes place and each cell sends its mega packet to the corresponding destination cells. The main idea behind the mega update packets is to serve many nodes at once to decrease the inter-update time. In the third and final phase, the received mega update packet is relayed to the intended recipient node in the cell. During all these phases, we make use of the spatial separation of the nodes to allow multiple simultaneous transmissions provided that there is no destructive interference caused by others. Using this scheme, we achieve an age scaling of $O(n^{\frac{1}{4}})$ per user, which, to the best of our knowledge, is the best age scaling result in a multiple source-multiple destination setting. We note that we do not utilize any hierarchy as \cite{Ayfer10} shows that it gives a poor delay performance.
\section{System Model and Age Metric} \label{model}
We consider $n$ nodes that are uniformly and independently distributed on a square of fixed area $S$. Every node is both a source and a destination. These sources and destinations are paired randomly irrespective of their locations. Thus, for a total of $n$ nodes in the network, we have $n$ S-D pairs. Sources create update packets and transmit them to their respective destinations using the common wireless channel. Each source has the same traffic rate and transmit power level. Each source wants to keep its destination as up-to-date as possible. This makes each S-D pair need regular updates sent with low transmission delay and hence brings up the concept of age of information. Age is measured for each destination node and for node $i$ at time $t$ age is the random process $\Delta_i(t) = t - u_i(t)$ where $u_i(t)$ is the timestamp of the most recent update at that node. The metric we use, time averaged age, is given by,
\begin{align}
\Delta_i = \lim_{\tau\to\infty} \frac{1}{\tau} \int_{0}^{\tau} \Delta_i(t) dt \label{avg_age}
\end{align}
for node $i$. We will use a graphical argument similar to \cite{Buyukates18} to derive the average age for a single S-D pair.
Inspired by \cite{Ayfer07}, we will propose a scheme based on clustering nodes and making use of what we call \emph{mega update packets} to increase the spatial reuse of the common wireless channel. This entails dividing $n$ users into $\frac{n}{M}$ cells with $M$ users in each cell with high probability. The users communicate locally within cells to form the mega update packets. We model the delay in these intra-cell communications as i.i.d.~exponential with parameter $\lambda$. Then, mega packets are transmitted between the cells. We model the delay in these inter-cell communications as i.i.d.~exponential with parameter $\tilde{\lambda}$. Finally, the individual updates are extracted from mega updates and distributed to the intended destinations within cells again via intra-cell communications. While intra-cell communications occur simultaneously in parallel across the cells (see Section~\ref{note_phaseI} for details), inter-cell updates occur sequentially one-at-a-time.
Due to i.i.d.~nature of service times in intra- and inter-cell communications, all destination nodes experience statistically identical age processes and will have the same average age. Thus, we will drop user index $i$ in (\ref{avg_age}) and use $\Delta$ instead of $\Delta_i$ in the ensuing analysis.
Finally, we denote the $k$th order statistic of random variables $X_1, \ldots ,X_n$ as $X_{k:n}$. Here, $X_{k:n}$ is the $k$th smallest random variable, e.g., $X_{1:n}=\min\{X_i\}$ and $X_{n:n}=\max\{X_i\}$. For i.i.d.~exponential random variables $X_i$ with parameter $\lambda$,
\begin{align}
E[X_{k:n}] =& \frac{1}{\lambda}(H_n - H_{n-k}) \\
Var[X_{k:n}] =& \frac{1}{\lambda^2}(G_{n} - G_{n-k})
\end{align}
where $H_n = \sum_{j=1}^{n} \frac{1}{j}$ and $G_n = \sum_{j=1}^{n} \frac{1}{j^2}$. Using these,
\begin{align}
E[X_{k:n}^2] =& \frac{1}{\lambda^2}\left((H_n - H_{n-k})^2 + G_{n} - G_{n-k} \right)
\end{align}
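These moments are easy to verify by Monte Carlo; the snippet below (parameter values are arbitrary) compares sample statistics of the $k$th order statistic against the formulas above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, n, k = 2.0, 10, 4
H = lambda m: (1.0 / np.arange(1, m + 1)).sum() if m > 0 else 0.0
G = lambda m: (1.0 / np.arange(1, m + 1)**2).sum() if m > 0 else 0.0

# k-th smallest of n i.i.d. Exp(lam) samples, 2x10^5 repetitions
x = np.sort(rng.exponential(1/lam, size=(200000, n)), axis=1)[:, k-1]
print(x.mean(), (H(n) - H(n-k)) / lam)       # E[X_{k:n}]
print(x.var(),  (G(n) - G(n-k)) / lam**2)    # Var[X_{k:n}]
\end{verbatim}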
\section{Age Analysis of a Single S-D Pair} \label{subsection:age}
The network operates in sessions such that during each session all $n$ sources successfully send their update packets to their corresponding destinations. Each session lasts for $Y$ units of time. Here, we calculate the age of a single S-D pair $(s_i,d_i)$ considering a generic \emph{session time} $Y$. In the next section when the proposed scheme is presented, a specific session time will be characterized. As explained in Section~\ref{model}, it is sufficient to analyze the age of a single S-D pair since each pair experiences statistically identical ages.
Session $j$ starts at time $T_{j-1}$ and all sources including $s_i$ generate their respective $j$th update packets. This session lasts until time $T_j = T_{j-1}+Y_j$, at which, all $n$ destinations successfully receive their updates from their corresponding sources. In other words, a session ends when the last S-D pair finishes the packet transmission. Thus, a destination can receive its update packet before the session ends. Fig.~\ref{fig:ageEvol} shows the evolution of the age at a destination node over time. It is in the usual sawtooth shape with the age increasing linearly over time and dropping to a smaller value as the updates are received at the destination. The process repeats itself at time $T_j$ when all sources including $s_i$ generate the next update packet, namely update $j+1$.
Using Fig.~\ref{fig:ageEvol} the average age for an S-D pair is given by
\begin{align}
\Delta =& \frac{E[A]}{E[L]} \label{age_formula}
\end{align}
where $A$ denotes the shaded area and $L$ is its length. From the figure, we observe that $E[L] = E[Y]$ and $E[A] = \frac{E[Y^2]}{2}+E[D]E[Y]$. Using these in (\ref{age_formula}) we obtain,
\begin{align}
\Delta =& E[D]+ \frac{E[Y^2]}{2E[Y]} \label{age}
\end{align}
where $D$ denotes the time from the generation of an update till its arrival at the destination. Note that in some systems $D$ may be directly equal to the link delay. However, as in our model here, $D$ may capture some additional delays that may occur during the service time of an update. This will be further clarified in the next section.
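Equation (\ref{age}) can also be checked with a short simulation. The sketch below assumes, purely for illustration, that the delays $D_j$ are i.i.d.~and independent of the session durations; the $j$th delivery then occurs at $T_{j-1}+D_j$ and resets the age to $D_j$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = 500000
Y = rng.exponential(1.0, m) + 0.2   # session durations (>= 0.2)
D = rng.uniform(0.0, 0.2, m)        # generation-to-delivery delays

L = Y[:-1] + D[1:] - D[:-1]         # spacing between deliveries
A = D[:-1] * L + 0.5 * L**2         # sawtooth area per delivery gap
print(A.sum() / L.sum())                          # simulated age
print(D.mean() + (Y**2).mean() / (2 * Y.mean()))  # formula (age)
\end{verbatim}
The two printed values agree as $m$ grows, as expected from the renewal-reward argument.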
\begin{figure}[t]
\centering \includegraphics[width=0.885\columnwidth]{ageEvol_shaded3.eps}
\caption{Sample age $\Delta(t)$ evolution for a single S-D pair. Update deliveries are shown with $\bullet$. Session $j$ starts at time $T_{j-1}$ and lasts until $T_j = Y_j + T_{j-1}$.}
\label{fig:ageEvol}
\vspace{-0.5cm}
\end{figure}
\section{Proposed Transmission Scheme} \label{section:scheme}
The proposed scheme involves clustering nodes and making use of \emph{mega update packets} to serve many S-D pairs at once to reduce the session time. In this section, we describe the proposed three-phase transmission scheme and define mega update packets. As in \cite{Ayfer07}, we divide the square network area into $\frac{n}{M}$ cells of equal area such that each cell includes $M$ nodes with high probability, which tends to $1$ as $n$ increases. The transmission delays between nodes belonging to the same cell are denoted by $X_i$, whereas the transmission delays between nodes from different cells are denoted by $\tilde{X}_i$. Note that $X_i$ and $\tilde{X}_i$ are independent; $X_i$ are i.i.d.~exponential with parameter $\lambda$ and $\tilde{X}_i$ are i.i.d.~exponential with parameter $\tilde{\lambda}$.
{\bf Phase~I. Creating mega update packets:} In a cell, each one of the $M$ nodes distributes its current update packet to remaining $M-1$ nodes. This operation resembles the wait-for-all scheme studied in \cite{Zhong17a} since each node keeps transmitting until all $M-1$ nodes receive its packet. Thus, the time needed for each node to distribute its update packet to other nodes in the cell is $U = X_{M-1:M-1}$. Considering $M$ successive transmissions for each node in the cell, this phase is completed in $V = \sum_{i=1}^{M} U_i$ units of time. By the end of this phase in a cell, each one of the $M$ nodes has $M$ different update packets one from each other node in that cell. Each node combines all these $M$ packets to create what we call a \emph{mega update packet} (see Fig.~\ref{fig:Phase1}). In order to reduce the session time, cells work in parallel during Phase~I (see Section~\ref{note_phaseI} for a detailed description of this operation). This phase ends when the slowest cell among simultaneously operating cells finishes creating its mega update packet. Phase~I takes $Y_I = V_{\frac{n}{M}:\frac{n}{M}}$ units of time, where $Y_I$ denotes the duration of Phase~I.
{\bf Phase~II. MIMO-like transmissions:} In this phase, each cell successively performs MIMO-like transmissions using the mega update packets created in Phase~I. In each cell, all $M$ source nodes send the mega update packet through the channel at the same time to the respective destination cells in which the destination nodes are located. Since every node sends the same mega packet which includes all $M$ packets to be transmitted from that cell, this does not create interference. Thus, this is equivalent to sending update packets of all $M$ sources with $M$ copies each all at once (see Fig.~\ref{fig:Phase2}). Hence, this significantly reduces the time needed to transmit updates of all $M$ sources from that cell to their respective destinations. Note that this stage does not require the destination nodes to be in the same cell. In fact, considering that we have $M$ nodes in a cell, each cell can at most have $M$ different destination cells. Since we are sending $M$ copies of each update to a destination cell in which there are $M$ receiver nodes, only the earliest successful transmission is important. Thus, it takes $\tilde{U} = \tilde{X}_{1:M^2}$ units of time for a source node $s$ from cell $j$ to send its update to the destination cell where the destination node $d$ lies in. This MIMO-like transmissions of cell $j$ continues until the slowest source from that cell transmits its update. Hence, for each cell, this phase lasts for $\tilde{V} = \tilde{U}_{M:M}$. We repeat this for each cell, making the session time of this phase $Y_{II} =\sum_{i=1}^{\frac{n}{M}} \tilde{V}_i$.
{\bf Phase~III. In-cell relaying to the destination nodes:} By the end of Phase~II, each cell receives a mega packet for each one of its nodes. These packets may be received directly by their intended destination nodes. However, considering the worst case where they are received by any other node, we need to relay them to their actual designated recipient nodes. Thus, in this phase, all $M$ mega update packets received during Phase~II are sent to their recipients one at a time. Since this phase has intra-cell transmissions, it is performed in parallel across cells. For a single node this takes $X$ units of time, consequently we need $\hat{V} = \sum_{i=1}^{M} X_i$ to finish this process in a cell. As in Phase~I, we need to wait for the slowest cell to finish this phase. Then, $Y_{III} = \hat{V}_{\frac{n}{M}:\frac{n}{M}}$. Once the destination node receives the mega update packet it extracts the actual update sent from its own source.
\begin{figure}[t]
\centering \includegraphics[width=0.8\columnwidth]{Phase1_v2.eps}
\caption{Cell formation for $M=4$ and $n=100$. Simultaneous intra-cell transmissions are depicted for three S-D pairs from cells $P$, $Q$, $R$ and $S$.}
\label{fig:Phase1}
\vspace{-0.5cm}
\end{figure}
The total session time of the proposed scheme is,
\begin{align}
Y = Y_I + Y_{II} + Y_{III} = V_{\frac{n}{M}:\frac{n}{M}}+ \sum_{i=1}^{\frac{n}{M}} \tilde{V}_i + \hat{V}_{\frac{n}{M}:\frac{n}{M}} \label{sessiontime}
\end{align}
where $V$, $\tilde{V}$, and $\hat{V}$ are defined above. Note that in our proposed scheme we have,
\begin{align}
D = Y_I + Y_{II} + Z \label{D}
\end{align}
where, as noted earlier, $D$ denotes the time between generation of an update at certain source node till its arrival at the corresponding destination node. Assuming no S-D pair is in the same cell, in our scheme, arrivals to destination nodes occur in Phase~III. This assumption is not critical because an S-D pair being in the same cell leads to a smaller $D$ and consequently to a smaller age. Therefore, by assuming no S-D pair is in the same cell, we essentially consider the worst case. Thus, any successful packet delivery will be no earlier than the duration of the first two phases $Y_I + Y_{II}$. On top of that, Phase~III involves $M$ successive in-cell transmissions for each node of a particular cell. Hence, depending on the cell that the source node lies in, as well as the realization of the transmission delay $X$, the corresponding destination node may receive the packet some time after Phase~III starts. For example, if a packet is the $j$th to be transmitted in Phase~III, then delivery will be at $Y_I+ Y_{II} + \sum_{i=1}^{j} {X}_i$. Then, the random variable $Z$ is of the form $Z=\sum_{i=1}^{j} {X}_i$.
Substituting (\ref{sessiontime})-(\ref{D}) in (\ref{age}) we obtain,
\begin{align}
\Delta =& E[Y_I] + E[Y_{II}] + E [Z] + \frac{E[Y^2] }{2E[Y]} \label{age_propScheme}
\end{align}
which is the average age of an S-D pair under the proposed transmission scheme.
Before we perform the explicit age calculation using (\ref{age_propScheme}), we make some observations to simplify our analysis. First, we note that, when the transmission delays $\tilde{X}$ are i.i.d.~exponential with rate $\tilde{\lambda}$, then $\tilde{U} = \tilde{X}_{1:M^2}$ is also exponential with rate $M^2\tilde{\lambda}$ \cite{Yates07}. Second, we have the following upper bound for the duration of Phase~I.
\begin{figure}[t]
\centering \includegraphics[width=1\columnwidth]{Phase2_v3.eps}
\caption{In Phase~II cells take turns to perform inter-cell transmissions. These inter-cell transmissions are shown for the same three S-D pairs depicted in Fig.~\ref{fig:Phase1}.}
\label{fig:Phase2}
\vspace{-0.5cm}
\end{figure}
\begin{lemma} \label{lemma1}
$Y_I$ satisfies the following inequality,
\begin{align}
Y_I \leq \bar{V} \label{age_ineq}
\end{align}
where $\bar{V} = \sum_{i=1}^{M}\bar{U}_i$ and $\bar{U} = X_{n:n}$.
\end{lemma}
\begin{Proof}
Recall that $Y_I = V_{\frac{n}{M}:\frac{n}{M}}$, where $V = \sum_{i=1}^{M} U_i$ and $U = X_{M-1:M-1}$. To show the inequality we make the following observation: In Phase~I, $\frac{n}{M}$ cells operate simultaneously. First nodes of each of these cells start transmitting their packets to all other $M-1$ nodes of their cell at the same time. Since intra-cell transmission delays are all i.i.d.~across cells and packets, what we essentially have in this case is simultaneous transmission to $\frac{n}{M}(M-1)\approx n$ nodes, and therefore all first nodes will be done in $X_{n:n}$ units of time.
We repeat this for the second nodes of each cell and so on to get $\bar{V} = \sum_{i=1}^{M}(X_{n:n})_i = \sum_{i=1}^{M}\bar{U}_i$. In this way of operation, a cell waits for all other cells to finish distributing the update packet of the first node and then continues with the second node and so on. In a way, for each of its nodes it waits for the slowest cell to finish. However, in our constructed scheme during Phase I, inside a cell, nodes distribute their packets to the other nodes of that cell without considering other cells, and the phase ends when all cells finish this process for all their $M$ nodes. Thus, $\bar{V}$ is an upper bound on $Y_I$.
\end{Proof}
Although our proposed Phase~I lasts for a shorter time than the scheme described in Lemma~\ref{lemma1}, for tractability and ease of calculation we worsen our scheme in terms of session time, and take the upper bound in Lemma~\ref{lemma1} as our Phase~I duration, such that from now on $Y_I = \bar{V}$. Third, we have the following upper bound for the duration of Phase~III.
\begin{lemma} \label{lemma2}
$Y_{III}$ satisfies the following inequality,
\begin{align}
Y_{III} \leq \bar{\bar{V}} \label{age_ineq2}
\end{align}
where $\bar{\bar{V}} = \sum_{i=1}^{M}\bar{\bar{U}}_i$ and $\bar{\bar{U}} = X_{\frac{n}{M}:\frac{n}{M}}$.
\end{lemma}
We omit the proof of Lemma~\ref{lemma2} since it follows similarly to the proof of Lemma~\ref{lemma1}. Due to the same tractability issues, we worsen Phase~III as well in terms of duration and take $Y_{III} = \bar{\bar{V}}$ from now on.
As a result of the above lemmas, (\ref{sessiontime}) becomes
\begin{align}
Y = \bar{V}+ \sum_{i=1}^{\frac{n}{M}} \tilde{V}_i + \bar{\bar{V}}\label{sessiontime2}.
\end{align}
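This session time is straightforward to sample. The following illustrative Monte Carlo (parameter values are arbitrary) draws the order statistics in (\ref{sessiontime2}) together with the in-cell waiting term $Z$ in its worsened form $\sum_{i=1}^{j}\bar{\bar{U}}_i + X$ used in the proof of Theorem~\ref{thm1} below, and evaluates (\ref{age_propScheme}) numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, M, lam, lamt, reps = 256, 4, 1.0, 1.0, 5000
cells = n // M

Y1 = rng.exponential(1/lam, (reps, M, n)).max(2).sum(1)        # bar V
Y2 = rng.exponential(1/(M**2*lamt), (reps, cells, M)).max(2).sum(1)
Y3 = rng.exponential(1/lam, (reps, M, cells)).max(2).sum(1)    # barbar V
Y = Y1 + Y2 + Y3

j = rng.integers(0, M, reps)      # mega packets relayed before ours
U = rng.exponential(1/lam, (reps, M, cells)).max(2)
Z = np.where(np.arange(M) < j[:, None], U, 0.0).sum(1) \
    + rng.exponential(1/lam, reps)

print((Y1 + Y2).mean() + Z.mean() + (Y**2).mean() / (2*Y.mean()))
\end{verbatim}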
Now, we are ready to derive an age expression using the above lemmas in (\ref{age_propScheme}). This is stated in the following theorem.
\begin{theorem} \label{thm1}
Under the constructed transmission scheme, the average age of an S-D pair is given by,
\begin{align}
\Delta =& \frac{M}{\lambda} H_n + \frac{n}{M^3 \tilde{\lambda}} H_M + \frac{M-1}{2\lambda}H_{\frac{n}{M}} +\frac{1}{\lambda} \nonumber\\ &+ \frac{\frac{M^2}{\lambda^2}H_n^2+\frac{M}{\lambda^2}G_{n}}{2\left(\frac{M}{\lambda}H_n+\frac{n}{M^3\tilde{\lambda}}H_M +\frac{M}{\lambda}H_{\frac{n}{M}}\right)} \nonumber \\ &+ \frac{\frac{n^2}{M^6\tilde{\lambda}^2}H_M^2+\frac{n}{M^5\tilde{\lambda}^2}G_{M}}{2\left(\frac{M}{\lambda}H_n+\frac{n}{M^3\tilde{\lambda}}H_M + \frac{M}{\lambda} H_{\frac{n}{M}}\right)} \nonumber \\ &+ \frac{\frac{M^2}{\lambda^2}H_{\frac{n}{M}}^2+\frac{M}{\lambda^2}G_{\frac{n}{M}}}{2\left(\frac{M}{\lambda}H_n+\frac{n}{M^3\tilde{\lambda}}H_M +\frac{M}{\lambda}H_{\frac{n}{M}}\right)} \nonumber \\ &+
\frac{\frac{n}{M^2\lambda \tilde{\lambda}}H_n H_M + \frac{M^2}{\lambda^2}H_n H_{\frac{n}{M}} + \frac{n}{M^2 \lambda \tilde{\lambda}}H_M H_{\frac{n}{M}} }{\left(\frac{M}{\lambda}H_n+\frac{n}{M^3\tilde{\lambda}}H_M +\frac{M}{\lambda}H_{\frac{n}{M}}\right)} \label{thm1res}
\end{align}
\end{theorem}
\begin{Proof}
The proof follows upon substituting (\ref{sessiontime2}) back in (\ref{age_propScheme}) and taking expectations of order statistics of exponential random variables as in Section~\ref{model}. Doing so, we obtain
\begin{align}
E[Y_I] =& \frac{M}{\lambda}H_n \label{thm1_eq1st} \\
E[Y_{II}] =& \frac{n}{M^3\tilde{\lambda}}H_M \\
E[Y_{III}] =& \frac{M}{\lambda}H_{\frac{n}{M}} \\
E[Y_I^2] =& \frac{M^2}{\lambda^2}H_n^2 + \frac{M}{\lambda^2}G_{n} \label{E[Y_I^2]} \\
E[Y_{II}^2] =& \frac{n^2}{M^6\tilde{\lambda}^2}H_M^2 + \frac{n}{M^5\tilde{\lambda}^2}G_{M} \\
E[Y_{III}^2] =& \frac{M^2}{\lambda^2}H_{\frac{n}{M}}^2 + \frac{M}{\lambda^2}G_{\frac{n}{M}}
\end{align}
Lastly, we need to calculate $E[Z]$ where the random variable $Z$ is the additional amount of time after Phase~II ends until the destination node receives the update. Let us take an S-D pair $(s,d)$ where source node $s$ is from cell $j+1$. In Phase~III, $d$ has to wait for all other $j$ mega packets from the first $j$ cells to be distributed among the nodes. When its turn comes, $d$ just needs $X$ amount of time to get its packet. Then, $d$ has $Z = \sum_{i=1}^{j} \bar{\bar{U}}_i + X$. Taking expectation on $\bar{\bar{U}}$, $j$ and $X$ by noting their mutual independence we get
\begin{align}
E[Z] \!=\! \left(\frac{1}{M}\! \sum_{j=0}^{M-1}j \right)\!E[\bar{\bar{U}}] +E[X] = \frac{M-1}{2\lambda}H_{\frac{n}{M}} + \frac{1}{\lambda} \label{thm1_eqlast}
\end{align}
Using (\ref{thm1_eq1st})-(\ref{thm1_eqlast}) in (\ref{age_propScheme}) yields the expression.
\end{Proof}
Having derived the expression for the average age $\Delta$ of an S-D pair, we are now ready to work with large $n$.
\begin{theorem} \label{thm2}
For large $n$ and with $M = n^b$, where $0 < b \leq 1$, average age $\Delta$ in Theorem~\ref{thm1} approximately becomes,
\begin{align}
\Delta \approx& \frac{n^b}{\lambda} \log n + \frac{n}{n^{3b} \tilde{\lambda}} b\log n+ \frac{n^b-1}{2\lambda}(1-b)\log n+\frac{1}{\lambda} \nonumber \\ &+
\frac{\frac{n^{2b}}{\lambda^2}(\log n)^2 +\frac{n^b}{\lambda^2}\frac{\pi^2}{6}}{2\left(\frac{n^b}{\lambda}\log n+\frac{n}{n^{3b}\tilde{\lambda}}b\log n + \frac{n^b}{\lambda}(1-b)\log n\right)} \nonumber\\ &+ \frac{\frac{n^2}{n^{6b}\tilde{\lambda}^2}b^2(\log n)^2+ \frac{n}{n^{5b}\tilde{\lambda}^2}\frac{\pi^2}{6}}{2\left(\frac{n^b}{\lambda}\log n+\frac{n}{n^{3b}\tilde{\lambda}}b\log n + \frac{n^b}{\lambda}(1-b)\log n\right)} \nonumber\\ &+
\frac{\frac{n^{2b}}{\lambda^2}(1-b)^2(\log n)^2 +\frac{n^b}{\lambda^2}\frac{\pi^2}{6}}{2\left(\frac{n^b}{\lambda}\log n+\frac{n}{n^{3b}\tilde{\lambda}}b\log n + \frac{n^b}{\lambda}(1-b)\log n\right)} \nonumber \\ &+
\frac{\frac{n}{n^{2b}\lambda\tilde{\lambda}}b(\log n)^2 + \frac{n^{2b}}{\lambda^2}(1-b)(\log n)^2}{\left(\frac{n^b}{\lambda}\log n+\frac{n}{n^{3b}\tilde{\lambda}}b\log n + \frac{n^b}{\lambda}(1-b)\log n\right)} \nonumber\\ &+
\frac{\frac{n}{n^{2b}\lambda\tilde{\lambda}}b(1-b)(\log n)^2}{\left(\frac{n^b}{\lambda}\log n+\frac{n}{n^{3b}\tilde{\lambda}}b\log n + \frac{n^b}{\lambda}(1-b)\log n\right)}
\label{thm2_res}
\end{align}
\end{theorem}
\begin{Proof}
The expression follows upon substituting $M = n^b$ in (\ref{thm1res}) and noting that for large $n$, we have $H_n \approx \log n$. Further, $G_{n}$ is monotonically increasing and converges to $\frac{\pi^2}{6}$. Since we have $M=n^b$, as $n$ grows large $M$ does too, resulting in $H_M \approx b\log n$ and $G_{M}$ converging to $\frac{\pi^2}{6}$.
\end{Proof}
\begin{theorem} \label{thm3}
For large $n$, and for $\frac{1}{4} \leq b \leq 1$, the average age of an S-D pair $\Delta$ given in (\ref{thm2_res}) reduces to,
\begin{align}
\Delta \approx c n^b\log n
\end{align}
with a constant $c$. That is, age is $O(n^b\log n)$, for $\frac{1}{4} \leq b \leq 1$.
\end{theorem}
\begin{Proof}
By analyzing the result of Theorem~\ref{thm2} we note that the first and third terms are $O(n^b\log n)$, the second term is $O(n^{1-3b}\log n)$, and the fourth term is a constant independent of $n$. Assuming $b \geq 1-3b$, the fifth term becomes
\begin{align}
\frac{n^{2b}(\log n)^2\left(\frac{1}{\lambda^2}+\frac{\pi^2}{6n^b\lambda^2(\log n)^2}\right)}{n^b\log n\left(\frac{2}{\lambda}+\frac{2b}{\tilde{\lambda}n^{4b-1}} + \frac{2(1-b)}{\lambda}\right)}
\end{align}
which is $O(n^b\log n)$. Continuing similarly for the remaining terms yields the result.
\end{Proof}
Thus, the proposed transmission scheme, which involves intra-cell cooperation and inter-cell MIMO-like transmissions, allows the successful communication of $n$ S-D pairs, and achieves an average age scaling of $O(n^{\frac{1}{4}}\log n)$ per user. Since the $\log n$ factor is subdominant to the $n^{\frac{1}{4}}$ growth, we state our result more succinctly as $O(n^{\frac{1}{4}})$.
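As a quick numerical sanity check (illustrative only, with $\lambda=\tilde{\lambda}=1$ and the large-$n$ asymptotics $H_m\approx\log m+\gamma$), one can evaluate the dominant phase-duration terms $E[Y_I]+E[Y_{II}]+E[Z]$ of the age at $M=n^{1/4}$ and confirm that their ratio to $n^{1/4}\log n$ settles toward a constant; the omitted $E[Y^2]/(2E[Y])$ term is of the same order.
\begin{verbatim}
import numpy as np

H = lambda m: np.log(m) + np.euler_gamma   # harmonic number, large m
lam = lamt = 1.0

for n in [1e4, 1e6, 1e8, 1e10]:
    M = n**0.25
    lead = M/lam*H(n) + n/(M**3*lamt)*H(M) + (M-1)/(2*lam)*H(n/M)
    print(f"n={n:.0e}  ratio={lead/(n**0.25*np.log(n)):.3f}")
\end{verbatim}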
\section{Note on Phases~I and~III} \label{note_phaseI}
We use the protocol model introduced in \cite{Gupta00} to model the interference such that two nodes can be active if they are sufficiently spatially separated from each other. In other words, we allow simultaneous transmissions provided there is no destructive interference caused by other active nodes. Suppose that node $i$ transmits its update to node $j$. Then, node $j$ can successfully receive this update if the following is satisfied for any other node $k$ that is simultaneously transmitting,
\begin{align}
d(j,k) \geq (1+\gamma)d(j,i) \label{prot_model}
\end{align}
where function $d(x,y)$ denotes the distance between nodes $x$ and $y$ and $\gamma$ is a positive constant determining the guard zone.
In the proposed scheme, we assume that the intra-cell transmissions that take place in Phases~I and~III work in parallel across the cells, whereas inter-cell transmissions that take place during MIMO-like transmissions in Phase~II work in sequence. In order to implement parallel intra-cell communications in Phases~I and~III, we follow a 9-TDMA scheme as in \cite{Ayfer07}. Specifically, $\frac{n}{9M}$ of the total $\frac{n}{M}$ cells work simultaneously so that Phases~I and~III are completed in 9 successive sub-phases. Using the protocol model, cells that are at least $(1+\gamma)r\sqrt{2}$ away from a cell can operate simultaneously during these phases, where $r=\sqrt{SM/n}$ is the length of each square cell and $S$ is the network area. Noting that there are at least two inactive cells in between two active cells under a 9-TDMA operation, this scheme satisfies (\ref{prot_model}) if the guard zone parameter $\gamma \leq \sqrt{2}-1$. Since 9 here is constant and valid for any $n$, it does not affect the scaling results.
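The bound on $\gamma$ follows from the worst-case geometry: an intra-cell link has length at most the cell diagonal, $d(j,i)\leq r\sqrt{2}$, while with at least two inactive cells between any two simultaneously active ones the nearest interfering transmitter satisfies $d(j,k)\geq 2r$. Condition (\ref{prot_model}) is then guaranteed whenever $2r \geq (1+\gamma)r\sqrt{2}$, i.e., when $\gamma \leq \sqrt{2}-1$.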
\section{Conclusions} \label{conc}
We have studied the scalability of age of information in a large wireless network of fixed area with randomly paired $n$ source-destination pairs that want to update each other. We have proposed a three-phase transmission scheme which uses local cooperation between nodes and mega update packets to achieve an average age scaling of $O(n^{\frac{1}{4}})$.
Our scheme divides the network into $\frac{n}{M}$ cells of $M$ nodes each. The first and third phases include intra-cell transmissions and take place simultaneously across all cells. The second phase includes inter-cell transmissions and therefore during this phase cells operate one at a time. We create mega update packets in Phase I such that each mega packet includes all $M$ update packets to be transmitted from that cell. In the second phase, all $M$ nodes of a cell transmit this mega update packet to the respective destination cells. Thus, by utilizing these mega update packets, we serve all $M$ source-destination pairs of a cell at once. Finally, in the third phase, the received mega update packet is relayed to its actual intended destination node, which then extracts its own update from the mega packet.
\section{Introduction}
\subsection{Gravity and renormalizability}
It is a well-known fact that general relativity is not a renormalizable theory. This means that, successful as it may be as a classical theory of gravity, it should be viewed as an effective theory that breaks down at some scale. Beyond that scale general relativity is not enough to describe the gravitational interaction or spacetime itself (depending on the perspective) and one cannot construct its quantum counterpart using conventional quantization techniques.
On the other hand, given that within this perspective general relativity is an effective theory, the Einstein-Hilbert action contains only the lowest order terms in a curvature expansion. A reasonable question is then whether a renormalizable theory can be obtained by including higher order curvature terms. Intuitively this could work, as such terms would modify the propagator of the graviton at high energies. Indeed, in 1977 Stelle showed that theories whose action includes invariants quadratic in the curvature are renormalizable \cite{stelle}. However, unfortunately this comes at a very high price, as such theories contain ghost degrees of freedom and are, therefore, not unitary. This is because by adding higher order curvature invariants in the action, one adds higher order time derivatives.
One could attempt to modify the propagator by adding higher order spatial derivatives without adding higher order time derivatives. This could presumably lead to a theory with improved ultraviolet (UV) behaviour without having to face the dreadful consequences of having higher order time derivatives. Obviously, following such a prescription requires treating spatial and time derivatives on a different footing and as such is in clash with Lorentz invariance. On the other hand, since the modified behaviour of the propagator is strictly needed in the UV, it is conceivable that Lorentz invariance could be recovered in the infrared (IR), or at least that Lorentz violations in the IR could stay below current experimental constraints.
Ho\v{r}ava's proposal \cite{Horava:2008ih,Horava:2009uw}, which has led to a spree of publications recently and is now commonly referred to as Ho\v{r}ava--Lifshitz gravity, can be seen as an attempt to put these heuristic arguments in a rigorous framework where one can test whether they are indeed valid when it comes to gravity theories. In what comes next an overview of this framework is given and its major developments to date are briefly discussed. This is far from an exhaustive account of an already very extensive literature, but hopefully it covers the most critical topics and provides a concise introduction to the subject.\footnote{See also Ref.~\cite{Weinfurtner:2010hz} for an early review of the projectable Ho\v{r}ava--Lifshitz gravity.}
\subsection{Lorentz violations as a field theory regulator}
What has been discussed above is essentially the idea of giving up Lorentz invariance in order to keep a quantum field theory finite. Clearly, this idea goes beyond gravity theories and it has been considered in the past for other fields, see for example Ref.~\cite{anselmi}. In fact, it would be best to start from a simpler example, such as a scalar field. This section will closely follow Ref.~\cite{Visser:2009fg}.
Let us first of all point out that there is nothing wrong with using Lorentz violations as a field theory regulator from the perspective of logical consistency, as there is no reason why a theory should necessarily exhibit Lorentz symmetry in the far UV. On the other hand, Lorentz violations are severely constrained over a wide range of energies and especially in the IR. Therefore, as mentioned before, the real question is whether one can construct a field theory that exhibits Lorentz violations which in the far UV lead to renormalizability, but remain suppressed below experimental accuracy at lower energies. This is difficult to answer without concrete models, and it is enough to motivate further investigation.
Consider the scalar field action
\begin{equation}
S_\phi=\int dt dx^d\left\{ \dot{\phi}^2-\sum_{m=1}^z a_m\phi(-\Delta)^m \phi+\sum_{n=1}^M g_n \phi^n \right\},
\end{equation}
where $\dot{}\equiv \partial/\partial t$, $\Delta=\vec{\nabla}^2$ is the spatial Laplacian and $z$ and $M$ are positive integers still to be specified. A theory is said to be ``power counting renormalizable'' when all of its interaction terms scale like energy to some non-positive power, as in this case Feynman diagrams are expected to be convergent (or have at most a logarithmic divergence). To check whether this is true or not we first have to choose the engineering dimensions of space and time. The engineering dimensions
\begin{equation}
[dt]=[\kappa]^{-z},\qquad [dx]=[\kappa]^{-1},
\end{equation}
where $\kappa$ is a place holder symbol with dimensions of momentum, are compatible with the choice $a_z=1$ and the requirement that the action has to be dimensionless. This choice of engineering dimensions, therefore, appears to be natural in the UV. The dimensions for the scalar field then turn out to be
\begin{equation}
[\phi]=[\kappa]^{(d-z)/2}.
\end{equation}
What remains is to check the scaling of the rest of the terms. Requiring that an interaction term scales like energy to some non-positive power is equivalent to the requirement that the coupling of this interaction scales like energy to some non-negative power (since the action is dimensionless). Also, we find it convenient to work in terms of momenta instead of energies. So, our theory will be power counting renormalizable if all couplings have non-negative momentum dimensions.
It is straightforward to verify that
\begin{equation}
[a_m]=[\kappa]^{2(z-m)}, \qquad [g_n]=[\kappa]^{d+z-n(d-z)/2}.
\end{equation}
It is then obvious that $a_m$ has non-negative momentum dimension for all values of $m$. It is also easy to see that, for $z\geq d$, $g_n$ has non-negative momentum dimension for all values of $n$. When $z<d$, $g_n$ has non-negative momentum dimension only when $n\leq 2(d+z)/(d-z)$. One can then easily recover well-known results. For example, one can verify that the case $d=3$, $z=1$, $M=4$, which corresponds to the usual relativistic $\phi^4$ theory in $3+1$ dimensions, is a power counting renormalizable theory. The $d=z=1$ case, {\em i.e.}~the relativistic scalar in $1+1$ dimensions, is power counting renormalizable for all values of $M$. In fact this case belongs to the class $z=d$ and is directly analogous to the case $d=z=3$, which will also have the same property and is of interest to us. In fact one can argue that any theory with $z=d$ is actually finite with normal ordering, while theories with $z>d$ are finite even without normal ordering, see Ref.~\cite{Visser:2009fg} for a more detailed discussion using the ``superficial degree of divergence''.
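These dimension counts are mechanical and can be scripted; a small illustrative helper (the naming is ours) checks power counting renormalizability for a given $(d,z,M)$:
\begin{verbatim}
def coupling_dims(d, z, M):
    """[a_m] = 2(z-m) for m=1..z and [g_n] = d+z-n(d-z)/2 for
    n=1..M; power counting renormalizable iff all non-negative."""
    a = [2*(z - m) for m in range(1, z + 1)]
    g = [d + z - n*(d - z)/2 for n in range(1, M + 1)]
    return a, g, all(x >= 0 for x in a + g)

print(coupling_dims(3, 1, 4))   # phi^4 in 3+1: renormalizable
print(coupling_dims(3, 3, 10))  # z = d = 3: renormalizable for any M
\end{verbatim}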
How would things go if instead of a scalar field we wanted to consider a graviton? The main difference lies in the graviton self-interaction vertices. In the case of the scalar field considered above, momenta did not enter these interactions. For a graviton, however, one would have to deal with self-interaction vertices which contain spatial derivatives \cite{Visser:2009ys}. This introduces some further complication, but it is not enough to spoil the power-counting renormalizability, as long as $z\geq d$, {\em i.e.}~as long as the action contains operators with at least $2d$ spatial derivatives. The presence of self-interaction vertices which contain spatial derivatives is enough though to argue that, unlike the scalar theory consider above, its graviton analogue will not be finite (even with normal ordering) \cite{Visser:2009ys}.
All of the arguments presented here support the statement that field theories that contain at least $2d$ spatial derivatives in $d+1$ dimensions are power counting renormalizable. Power counting renormalizability is a strong indication that a theory is indeed renormalizable. Little progress has been made so far in the case of gravity to check renormalizability beyond power counting for such theories.
\section{From the Lifshitz scalar to non-relativistic gravity}
We proceed to consider the exact structure of a gravity theory with the characteristics mentioned in the previous section. We will work in $3$ spatial dimensions from now on, but generalizing to more dimensions is straightforward. The basic requirement is that the theory should have only two time derivatives but at least $2d$ spatial derivatives, that is $6$ in our case. The fact that we want to have more spatial than time derivatives implies that we need to treat space and time on a different footing. This is naturally achieved by working in the Arnowitt--Deser--Misner (ADM) decomposition of spacetime
\begin{equation}
\label{admmetric}
d s^2 = - N^2 c^2 d t^2 + g_{ij}(d x^i + N^i d t) (d x^j + N^j d t).
\end{equation}
One is essentially forced to pick a preferred foliation of spacetime in order to write down the action according to the prescription laid out in the previous section. Consequently, such an action cannot be invariant under standard diffeomorphisms, as general relativity is. It can, however, be invariant under a more restricted set: foliation preserving diffeomorphisms, {\em i.e.}~space-independent time reparametrization together with time-dependent spatial diffeomorphisms
\begin{equation}
t\rightarrow \tilde{t}(t),\qquad x^i\rightarrow \tilde{x}^i(t,x^i).
\end{equation}
This is the symmetry that the action will have to respect.
It is straightforward to realize that the only covariant quantity under spatial diffeomorphisms that contains a time derivative of the spatial metric is the extrinsic curvature
\begin{equation}
K_{ij} = {1\over2N} \left\{ \dot g_{ij} - \nabla_i N_j - \nabla_j N_i \right\},
\end{equation}
which also transforms like a scalar under time reparametrization. A dot denotes differentiation with respect to the time coordinate and $\nabla_i$ is the covariant derivative associated with the spatial metric $g_{ij}$. We want the theory to be second order in time derivatives, so we need to consider terms quadratic in the extrinsic curvature. Note that there are no invariants under the symmetry mentioned above that one can construct with time derivatives of the lapse $N$ or the shift $N_j$, without including higher order time derivatives of $g_{ij}$ as well. Therefore, the most general action one can write is
\begin{equation}
S=\frac{M_{\rm pl}^2}{2}\int d^3x d t N \sqrt{g} \left\{ K^{ij} K_{ij} - \lambda K^2 -V(g_{ij},N)\right\}\, ,
\end{equation}
where $M_{\rm pl}$ is a constant which, with some foresight, we identify with the Planck mass, $g$ is the determinant of the spatial metric $g_{ij}$, and $\lambda$ is a dimensionless running coupling. $V$ generically can depend on $g_{ij}$ and $N$ and their spatial derivatives. It does not contain time derivatives, nor does it depend on the shift $N_i$, as the symmetry of the theory does not allow one to construct any suitable invariants. Finally, power counting renormalizability requires $V$ to contain terms which are at least sixth order in spatial derivatives. From now on we will restrict ourselves to theories that contain sixth order spatial derivatives but not higher, as these are the simplest ones which satisfy the power counting renormalizability requirement.
\section{Potential, constraints and the various versions of the theory}
Clearly there are numerous terms one could include in $V$. Different choices have led to different versions of the theory. In this section we list these versions, briefly presenting their basic characteristics and differences. We refrain from discussing or commenting upon their viability or consistency, as this, together with their phenomenology, will be the subject of the coming section.
\subsection{Detailed balance}
Ho\v{r}ava proposed a symmetry that $V$ would have to satisfy, which would drastically reduce the number of invariants one should consider \cite{Horava:2009uw}. This symmetry is dubbed {\em detailed balance} and it is inspired by condensed matter systems. It sums up to the requirement that $V$ should be derivable from a superpotential $W$:
\begin{equation}
V=E^{ij}{\cal G}_{ijkl}E^{kl},
\end{equation}
where
\begin{equation}
E^{ij}=\frac{1}{\sqrt{g}}\frac{\delta W}{\delta g_{ij}},
\end{equation}
and ${\cal G}_{ijkl}$ is the inverse of the generalized DeWitt metric
\begin{equation}
{\cal G}^{ijkl}=\frac{1}{2}\left(g^{ik}g^{jl}+g^{il}g^{jk}\right)-\lambda g^{ij}g^{kl}.
\end{equation}
The most general action one can write with $V$ satisfying the conditions above is
\begin{eqnarray}
\label{dbaction}
S_{db}=\frac{M_{\rm pl}^2}{2}\int d^3x d t N \sqrt{g} \Bigg\{ &&\!\!\!\!\!\!\!\!\!\!K^{ij} K_{ij} - \lambda K^2-\frac{\alpha^4}{M_{\rm pl}^4} C_{ij} C^{ij}+\frac{2\alpha^2\beta}{M_{\rm pl}^3}\frac{\epsilon^{ijk}}{\sqrt{g}}R_{il}\nabla_j R^l_{\phantom{a}k}-\frac{\beta^2}{M_{\rm pl}^2}R^{ij}R_{ij}\nonumber\\
&&+\frac{\beta^2}{4}\frac{1-4\lambda}{1-3 \lambda} R^2+\frac{\beta^2 \,\zeta}{1-3\lambda} R-\frac{3\beta^2 \, \zeta^2}{1-3\lambda} M^2_{\rm pl}\Bigg\}\, ,
\end{eqnarray}
where $\epsilon^{ijk}$ is the Levi-Civita symbol,
\begin{equation}
C^{ij}=\frac{\epsilon^{ikl}}{\sqrt{g}}\nabla_k\left(R^j_{\phantom{a}l}-\frac{1}{4} R \delta^j_{\phantom{a}l}\right)
\end{equation}
is the Cotton tensor, which in 3 dimensions plays the role of the Weyl tensor, and $\alpha$, $\beta$ and $\zeta$ are dimensionless couplings. Notice that there are only 3 new couplings for a total of 6 terms in $V$.
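For orientation, the superpotential that generates the action above in Ho\v{r}ava's original construction is \cite{Horava:2009uw}
\begin{equation}
W=\frac{1}{w^2}\int_\Sigma \omega_3(\Gamma)+\mu\int d^3x \sqrt{g}\left(R-2\Lambda_W\right),
\qquad
\omega_3(\Gamma)={\rm Tr}\left(\Gamma\wedge d\Gamma+\frac{2}{3}\Gamma\wedge\Gamma\wedge\Gamma\right),
\end{equation}
where $\omega_3(\Gamma)$ is the gravitational Chern--Simons form: its variation produces the Cotton tensor, while the variation of the Einstein--Hilbert-like piece produces the curvature terms, the couplings $w$, $\mu$ and $\Lambda_W$ mapping onto $\alpha$, $\beta$ and $\zeta$ above after suitable rescalings (we do not track the precise dictionary here).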
The advantage of detailed balance is that it drastically reduces the number of terms one needs to consider, and that it introduces a superpotential, which might simplify quantization. On the other hand, the action above contains a term that violates parity (the fifth order operator). Additionally, as is obvious, the last term in the action, which plays the role of a cosmological constant, is restricted to have the wrong sign with respect to the one suggested by observations when $\lambda>1/3$, as pointed out in Refs.~\cite{Sotiriou:2009gy,Sotiriou:2009bx} ($\lambda$ is supposed to be close to unity, if severe Lorentz violations are not to occur).\footnote{This is a bare cosmological constant. Also, one could analytically continue the parameters as explained in Ref.~\cite{Lu:2009em}, but in this case $W$ would cease to be real.} Note that there is nothing fundamental about detailed balance, so it is safe to consider it just a simplicity assumption.
\subsection{Projectable Ho\v{r}ava--Lifshitz gravity}
Another possible simplifying restriction proposed by Ho\v{r}ava in Ref.~\cite{Horava:2009uw} is that of {\em projectability}, which is the assumption that the lapse is just a function of time. That is $N=N(t)$. Again there is no fundamental principle behind such an assumption. Ho\v{r}ava's main motivation for initially considering it was that under this assumption one has enough gauge freedom to set $N=1$, as in general relativity. This is not possible without projectability, as the symmetries of the action allow only space independent time reparametrizations, as explained earlier.
In any case, projectability drastically reduces the number of invariants $V$ can include even if detailed balance is not imposed, as all terms with spatial derivatives of $N$ would now vanish. Let us see this in more detail, following the lines of Refs.~\cite{Sotiriou:2009gy,Sotiriou:2009bx}. Since $N$ is now just a function of time, there is no invariant one can build out of it without using time derivatives. So $V$ will only depend on the metric and its spatial derivatives. Essentially, this means that the action should include all of the curvature invariants one can construct with $g_{ij}$ with up to six spatial derivatives.
An important simplification comes from the fact that we are in 3 spatial dimensions. Therefore, the Weyl tensor vanishes identically and the Riemann tensor can be expressed in terms of the Ricci tensor. Taking also into account the Bianchi identities and ignoring surface terms, one arrives at the conclusion that the most general action is
\begin{eqnarray}
\label{paction}
S_p&=&\frac{M_{\rm pl}^2}{2}\int d^3x d t N \sqrt{g} \Bigg\{ K^{ij} K_{ij} - \lambda K^2 -g_0 \, M_{\rm pl}^2 -g_1 R - g_2 \,M_{\rm pl}^{-2}\,R^2 - g_3 \, M_{\rm pl}^{-2}\, R_{ij} R^{ij} \nonumber\\
&&\qquad\qquad\qquad\qquad- g_4 \, M_{\rm pl}^{-4}\,R^3 - g_5 \,M_{\rm pl}^{-4}\, R (R_{ij} R^{ij})- g_6 \,M_{\rm pl}^{-4}\, R^i{}_j R^j{}_k R^k{}_i \nonumber\\
&&\qquad\qquad\qquad\qquad
- g_7\,M_{\rm pl}^{-4}\, R \nabla^2 R - g_8 \,M_{\rm pl}^{-4}\, \nabla_i R_{jk} \, \nabla^i R^{jk}\Bigg\}\, ,
\end{eqnarray}
where the $g_i$ are dimensionless couplings. As long as we have not coupled matter to the theory, we are free to rescale the coordinates in order to set $g_1=-1$, which is the value it has in general relativity. We then have 8 remaining couplings $g_i$, which can be used to tune the scales suppressing the various operators. The following remarks are in order:
\begin{itemize}
\item We have suppressed parity violating terms.
\item Enforcing projectability on top of detailed balance (without suppressing parity violating terms) would leave action (\ref{dbaction}) unaffected, apart from the fact that $N$ would become just a function of time. As we will see later, this is quite an important difference.
\item There are just 3 more operators in the most general projectable action than in the one with detailed balance. In this sense, detailed balance does not bring significant simplification once projectability has been assumed. For this reason, and for the drawbacks listed at the end of the previous subsection, we will not consider detailed balance further when assuming projectability.
\item $g_0$ controls the value of the cosmological constant, which, unlike the case with detailed balance, is not restricted.
\item There are two types of Lorentz violating terms in the action. The ones in $V$, which are suppressed by some scale that can be determined by tuning the couplings $g_2$ to $g_8$, and one that comes from the kinetic part due to the fact that $\lambda$ is not necessarily equal to 1. This will be a generic feature of all versions of the theory as we will see. Clearly, the term of the second kind is far more dangerous, as it introduces Lorentz violations at low energy scales.
\item All couplings are running. One can, therefore, hope that $\lambda$ will run to 1 in the IR (or sufficiently close to it for experimental constraints to be satisfied), the rest of the operators will be heavily suppressed, and the theory will effectively reduce to general relativity. Diffeomorphism invariance and Lorentz invariance would emerge as IR (approximate) symmetries. Whether or not this will indeed be the case requires a study of the renormalization group flow which is still pending.
\end{itemize}
\subsection{Non-projectable Ho\v{r}ava-Lifshitz gravity}
We now proceed to consider the case where neither detailed balance nor projectability is enforced. This version of the theory is called non-projectable. To avoid confusion it is worth pointing out explicitly that the version with detailed balance can be a non-projectable version. However, in this section we wish to go beyond that. Once one abandons detailed balance, adding just some specific choice of extra terms is not really an option. Radiative corrections will generate all possible terms compatible with the symmetries of the theory and, thus, all such terms should be taken into account, much like in the projectable case treated above. However, as first pointed out in Ref.~\cite{Blas:2009qj}, it is not only curvature invariants of $g_{ij}$ that one can include in $V$ in this case. One can also use the quantity
\begin{equation}
a_i=\partial_i \ln{N}\,,
\end{equation}
as contractions of it with itself or curvature terms also lead to invariants. The lowest order invariant one can construct with $a_i$ is $a^i a_i$, which comes at the same order as $R$. The action will then be of the form
\begin{eqnarray}
\label{npaction}
S_{np}&=&\frac{M_{\rm pl}^2}{2}\int d^3x d t N \sqrt{g} \Bigg\{ K^{ij} K_{ij} - \lambda K^2 +\xi R + \eta \, a_ia^i+\frac{1}{M_A^2}L_4+\frac{1}{M_B^4}L_6\Bigg\}\, ,
\end{eqnarray}
where $L_4$ and $L_6$ include all possible 4th and 6th order operators respectively that one can construct using $a_i$ and $g_{ij}$. For example, $R^2$, $(a^ia_i)^2$, $\nabla^i\nabla_i R$ and $a_ia_j R^{ij}$ are some of the terms that are included in $L_4$.\footnote{$\nabla^i\nabla_i R$ and other similar terms are not surface terms, as the $N$ in the 4-volume is space dependent.} As was the case before for $g_1$, $\xi$ can now be set to unity by a coordinate rescaling when matter is not coupled to the theory. This version of the theory was first studied in Ref.~\cite{Blas:2009qj} in order to resolve some inconsistencies in the dynamics of the non-projectable version with detailed balance, as we will discuss shortly. Although it is often referred to as an ``extension'' of Ho\v{r}ava--Lifshitz gravity, this terminology is probably unfortunate: this is Ho\v{r}ava--Lifshitz gravity, {\em i.e.}~a theory constructed consistently according to the prescription described in Ho\v{r}ava's paper, Ref.~\cite{Horava:2009uw}, without any restriction such as detailed balance or projectability.
The following points are worth stressing for the non-projectable version (in addition to those remarks already made for the projectable case that apply here as well).
\begin{itemize}
\item For simplicity the cosmological constant term has been suppressed, but it can be straightforwardly restored.
\item The scales $M_A$ and $M_B$ suppressing the higher order operators have been left arbitrary. This is in analogy with having arbitrary dimensionless couplings $g_i$ in the projectable case.
\item The number of operators in the non-projectable case is an order of magnitude larger than in the projectable case.
\item If general relativity were to be recovered in the IR, not only would $\lambda$ have to run to 1, but $\eta$ would also have to run to 0.
\end{itemize}
\subsection{Imposing further symmetries}
All of the actions presented above, which correspond to different versions of Ho\v{r}ava--Lifshitz gravity, are invariant under foliation preserving diffeomorphisms only, and not the full set of diffeomorphisms. So, even though the action is in all cases quadratic in the time derivatives of $g_{ij}$, less symmetry generically implies that more degrees of freedom will be excited. We will see explicitly in the next section that this is indeed the case, and that this can, in most cases, have undesirable consequences for the consistency and viability of the theory.
A way to do away with the extra degrees of freedom is to attempt to nontrivially extend the gauge symmetry of the theory, so that it has as many generators per spacetime point as general relativity. It was pointed out in Ref.~\cite{Horava:2010zj} that this could be done if the action were suitably modified to acquire an extra local $U(1)$ symmetry. More precisely, in Ref.~\cite{Horava:2010zj} a step by step construction of the action led to
\begin{eqnarray}
\label{hmt}
S_{{\rm extra}\, U(1)}&=&\frac{M_{\rm pl}^2}{2}\int d^3x d t N \sqrt{g} \Big\{ K^{ij} K_{ij} - K^2 -V(g_{ij},N)\nonumber\\&&\qquad\qquad+\nu \Theta^{ij}(2 K_{ij}+\nabla_i\nabla_j \nu)-A(R-2 \Omega)/N\Big\}\, ,
\end{eqnarray}
where $\Omega$ is a constant, $A$ acts as a Lagrange multiplier, $\nu$ is an auxiliary scalar field,
\begin{equation}
\Theta^{ij}\equiv R^{ij}-\frac{1}{2} g^{ij} R+\Omega g^{ij},
\end{equation}
and $V$ should as usual contain 6th order operators and may or may not satisfy detailed balance. Note that $A$ transforms as a spatial scalar and a time vector under foliation preserving diffeomorphisms, that is
\begin{equation}
A \rightarrow A+ \dot{f} A+f \dot{A}+\xi^i\partial_i A,
\end{equation}
where $\xi^i$ is the generator. On the other hand, under a local $U(1)$ gauge transformation with generator $\varphi$, $A$ and $\nu$ transform as
\begin{eqnarray}
&&A\rightarrow A+\dot{\varphi}-N^i\nabla_i \varphi,\\
&& \nu\rightarrow \nu+\varphi.
\end{eqnarray}
See Ref.~\cite{Horava:2010zj} for more details on constructing the action, as well as for discussions on anisotropic scaling, Hamiltonian formulation, linearization, etc. It was argued there that in the IR this theory has very good chances to reduce to general relativity, as the higher order operators in $V$ would be suppressed (as in the previous versions of the theory), the extra fields in the action are just auxiliary fields, and the extra gauge symmetry leads to just the usual spin 2 graviton excitation. Moreover, it was claimed in Ref.~\cite{Horava:2010zj} that the $U(1)$ symmetry forces the coefficient $\lambda$ of $K^2$ in the action, which was a running coupling in all other versions of the theory, to be equal to 1 here, as in general relativity.
This last claim has recently been contested in Ref.~\cite{daSilva:2010bm}, where it has been argued that the extra $U(1)$ symmetry does not necessarily require $\lambda=1$ and that even for arbitrary $\lambda$ the theory propagates only a spin 2 graviton. If this is true, $\lambda$ has to remain a running coupling and flow close to $1$ in the IR for general relativistic phenomenology to be recovered, according to Ref.~\cite{daSilva:2010bm}. So, at the moment, whether or not the value of $\lambda$ is fixed to 1 can be considered a matter of debate.
Another potential difficulty with this version of Ho\v{r}ava--Lifshitz gravity has to do with matter coupling. It is straightforward to notice that the Lagrange multiplier in action (\ref{hmt}) forces $R$ to be constant. Even though this is not necessarily a problem in vacuum, it cannot generically be the case once matter is coupled to the theory. This implies that matter would have to be coupled to $A$ if the theory is to resemble general relativity at low energies, and that a suitable coupling that leads to standard phenomenology in the IR should exist. Whether this is indeed the case requires further investigation (some first steps were taken in Ref.~\cite{daSilva:2010bm}).
It is also worth pointing out that action (\ref{hmt}) is susceptible to radiative corrections which could modify its form.
Since this last version of Ho\v{r}ava--Lifshitz gravity has been proposed very recently and has not yet been studied extensively, we will not consider it further. Its cosmological aspects were recently studied in Ref.~\cite{Wang:2010wi}.
\section{Projectable version: Dynamics, consistency and low energy behaviour}
\subsection{Degrees of freedom and propagators}
We return now to the projectable version of the theory and take a closer look at its dynamics. We will consider the most general action in this version, action (\ref{paction}), {\em i.e.}~we will not enforce detailed balance, but our results will include it as a subcase. The field equations for this action have been derived in detail in Ref.~\cite{Sotiriou:2009bx}. We will refrain from rewriting them here due to space limitations, but we will make the following remark: since $N=N(t)$, the variation with respect to $N$ will not lead to the usual local super-Hamiltonian constraint, but to a global Hamiltonian constraint. That is, the constraint will come in the form of an integral over space. This is a major difference with respect to the non-projectable case.
We now move straight to the linearization. If for simplicity one sets the cosmological constant term to 0, then flat space is a suitable background. After suitable gauge choices, see Ref.~\cite{Sotiriou:2009bx}, one finds that the theory propagates a spin-2 mode, the usual graviton, that now satisfies a modified dispersion relation, {\em i.e.}
\begin{equation}
\ddot{ \widetilde H}_{ij} = -\left[ g_1 \partial^2 + g_3 M_{\rm pl}^{-2} \partial^4 + g _8 M_{\rm pl}^{-4} \partial^6 \right] \widetilde H_{ij},
\end{equation}
where ${\widetilde H}_{ij}$ is transverse and traceless. Notice that the couplings $g_3$ and $g_8$ control the scales at which the Lorentz violating terms become important in the dispersion relation. Recall also that in vacuum one has the freedom to rescale the coordinates and set $g_1=-1$.
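In momentum space, with $\partial^2\rightarrow -k^2$ and the choice $g_1=-1$, this corresponds to the modified dispersion relation
\begin{equation}
\omega^2=k^2+g_3\frac{k^4}{M_{\rm pl}^2}-g_8\frac{k^6}{M_{\rm pl}^4},
\end{equation}
which reduces to the standard relativistic one for momenta well below the scales set by $g_3$ and $g_8$.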
As mentioned earlier, less symmetry generically means more degrees of freedom, so since the theory is invariant under foliation preserving diffeomorphisms only, we expect to find extra excitations. This is indeed the case. There is an extra scalar degree of freedom, whose linearized dynamics are governed by the action \cite{Sotiriou:2009bx}
\begin{equation}
S^p_2=- M_{\rm pl}^2 \int d^3x d t \left[{1 \over c_h^2}\dot{h}^2 + h\partial^2 h+\frac{8g_2+3g_3}{M_{\rm pl}^2} h\partial^4 h -\frac{8 g_7-3 g_8}{M_{\rm pl}^4} h \partial^6 h \right ] , \label{quad}
\end{equation}
where
\begin{equation}
c_h^2=\frac{1-\lambda}{3 \lambda -1},
\end{equation}
and we have set $g_1=-1$ by rescaling the coordinates, in order to guarantee the stability of the spin 2 graviton.
Given the overall minus sign in eq.~(\ref{quad}), the scalar mode is a ghost whenever $1>\lambda>1/3$. On the other hand, the scalar is classically unstable at low energies whenever $\lambda>1$ or $\lambda<1/3$ \cite{Sotiriou:2009bx}. It is conceivable that the instability can be cut off by the higher order derivatives. If we call $M_\star$ the scale at which the higher order terms take over, then the time scale of the instability would be at least $1/|c_h| M_\star$. The relation between $M_\star$ and $M_{\rm pl}$ is clearly governed by the $g_i$ coefficients in eq.~(\ref{quad}). On the other hand, if the instability is not to develop within the lifetime of the universe, then $|c_h| M_\star< H_0$, where $H_0$ is the Hubble parameter \cite{Koyama:2009hc}. Experiments rule out modifications of Newton's law at scales above $10\,\mu{\rm m}$, which corresponds roughly to the constraint $M_\star\gtrsim 0.1\, {\rm eV}$. In turn, this implies $|1-\lambda|\lesssim 10^{-61}$. This value is clearly very small and it seems unlikely that the renormalization group flow could drive $\lambda$ this close to 1.
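A rough order-of-magnitude sketch of how such a tiny number arises (the exact figure depends on the precise values used for $M_\star$ and $H_0$): near $\lambda=1$ one has $|c_h^2|\simeq|1-\lambda|/2$, so
\begin{equation}
|c_h|\lesssim\frac{H_0}{M_\star}\qquad\Longrightarrow\qquad
|1-\lambda|\lesssim 2\left(\frac{H_0}{M_\star}\right)^2,
\end{equation}
and with $H_0\sim 10^{-33}\,{\rm eV}$ and $M_\star$ in the range discussed above, the right hand side is indeed of the extreme order of magnitude quoted.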
In Refs.~\cite{Huang:2010rq,Wang:2010ug} linearization around a de Sitter, instead of Minkowski, background was considered. As expected, the large scale modes exhibit better behaviour. However, the result is qualitatively the same overall: $\lambda$ needs to be sufficiently close to $1$ for the instability not to be a concern. Note that requiring $\lambda$ to be very close to 1 is not only a potential fine tuning problem: $c_h\rightarrow 0$ as $\lambda\rightarrow 1$, which means that the low momentum phase velocity of the scalar mode will be much smaller than the speed of light for such values of $\lambda$. When the velocity of a certain mode is smaller than the speed of light in a medium, relativistic particles traveling in this medium decay via the Cherenkov process \cite{Elliott:2005va}. The fact that we observe high energy cosmic rays, which need to travel a significant distance to reach us, is evidence enough that such slow modes do not exist.
\subsection{Strong coupling}
In the previous section we only considered the linearized dynamics as described by the part of the perturbative action quadratic in $h$. However, it has been pointed out by several authors that such a perturbative treatment breaks down when $\lambda$ approaches $1$ and the scalar mode gets strongly coupled \cite{Charmousis:2009tc,Blas:2009yd,Koyama:2009hc}. To see how this comes about we give below the cubic interactions of $h$,
\begin{equation}
S^p_3=M_{\rm pl}^2\int d t d^3x \left\{h (\partial h)^2 - {2 \over c_h^4} \dot{h} \partial_i h {\partial^i \over \partial^2} \dot{h}
+ \frac{3}{2}\left[{1 \over c_h^4} h \left( {\partial_i \partial_j \over \partial^2}\dot{h}\right)^2 - {(2 c_h^2+1) \over c_h^4} h \dot{h}^2 \right]
\right\} \label{cubic},
\end{equation}
where we have neglected cubic interactions coming from higher order operators. The redefinitions $\hat{t}=|c_h| t$ and $\hat{h}=|c_h|^{-1/2} M_{\rm pl}\, h$ canonically normalize the lowest order part of the quadratic action (\ref{quad}) (first two terms). After these redefinitions the cubic action reads
\begin{equation}
S^p_3=\frac{1}{|c_h|^{3/2} M_{\rm pl}}\int d \hat{t} d^3x \left\{
c_h^2\,\hat{h} (\partial \hat{h})^2 - 2 \hat{h}' \partial_i \hat{h} {\partial^i \over \partial^2}\hat{h}'
+ \frac{3}{2}\left[ \hat{h} \left( {\partial_i \partial_j \over \partial^2}\hat{h}'\right)^2 - (2 c_h^2+1) \hat{h}(\hat{h}')^2 \right]
\right\} \label{cubicred},
\end{equation}
where now $'=\partial/\partial\hat{t}$. As is obvious from the equation, there are cubic interactions which are suppressed by the scale $|c_h|^{3/2} M_{\rm pl}$ with respect to the quadratic ones. Therefore, the theory gets strongly coupled at the scale $M_{sc}=|c_h|^{3/2} M_{\rm pl}$.
Given the constraints on $\lambda$ from stability discussed previously, the strong coupling scale is phenomenologically unacceptably low, as we know that we can treat gravity perturbatively at low energies. There is also the issue of renormalizability: our arguments for power-counting renormalizability given above were based on the validity of the perturbative treatment at all energies. If there is strong coupling, such arguments simply fail.
The only subtlety here is the fact that the strong coupling arguments presented so far neglect the higher order operators. However, for such operators to become important and invalidate our treatment, the strong coupling scale $M_{sc}$ would have to be higher than the scale $M_\star$ that suppresses them. Given how low $M_{sc}$ is here, this would imply that higher order operators modify the graviton dynamics at very low energies, which would be in conflict with current experimental evidence.
Since the analysis presented here is linear, one could ask whether including non-linear effects could resolve the strong coupling problem, by an analogue of the Vainshtein mechanism in massive gravity \cite{vainshtein}, {\em i.e.}~a non-perturbative restoration of the $\lambda\rightarrow 1$ limit. Recently, in Ref.~\cite{Mukohyama:2010xz} it has been claimed that this is indeed the case in spherically symmetric, static configurations. This matter deserves further attention.
\section{Non-projectable version: Dynamics, consistency and low energy behaviour}
\subsection{Degrees of freedom and propagators}
We now turn our attention to the non-projectable version of the theory. As before, we will not enforce detailed balance, in order to obtain the most general results possible. However, we will explicitly discuss that case as well.
The dynamics of the spin 2 graviton are essentially the same as in the projectable version so we will not consider them again in more detail.
As in the projectable case, also here there is an extra scalar degree of freedom, and we will focus on that.
The low energy dynamics of this scalar graviton are governed by the action
\begin{equation}
S^{np}_2=- M_{\rm pl}^2 \int d^3x d t \left[{1 \over c_h^2}\dot{h}^2 + \frac{\eta-2}{\eta} h\partial^2 h \right ] , \label{quadnp}
\end{equation}
which one obtains by linearizing action (\ref{npaction}) around flat space and considering only the lowest order operators. $\xi$ has been set to $1$ by a suitable coordinate rescaling. Note that $c_h$ is defined in the same way as above and, thus, it is {\em not} the low momentum phase velocity of the scalar in the non-projectable version. The square of the latter is now
\begin{equation}
c^{\prime 2}_h= c^2_h\frac{\eta-2}{\eta}.
\end{equation}
For $h$ not to be a ghost one needs $c_h^2<0$, and for $h$ to be classically stable one needs $c_{h}^{\prime 2}>0$. These conditions are satisfied whenever \cite{Blas:2009qj}
\begin{equation}
\label{npcon}
\lambda>1, \quad 0<\eta<2,
\end{equation}
or
\begin{equation}
\lambda<1/3, \quad 0<\eta<2.
\end{equation}
In the second region in the parameter space $\lambda$ is far from $1$, which is the value it has in general relativity, so we will not consider this option further.
The presence of the operator $a_i a^i$ in action (\ref{npaction}), which is a lowest order operator and contributes to the quadratic action, has drastically improved the behaviour of the scalar graviton with respect to the projectable case we examined previously. In particular, there is now a region in parameter space, described by the constraints (\ref{npcon}), for which $h$ is neither a ghost nor classically unstable \cite{Blas:2009qj}. Even though we have neglected higher order operators here, the behaviour we have found would remain qualitatively the same if we had included them.
Let us examine what would happen if we had enforced detailed balance. The $a^i a_i$ term in the action would not be allowed, which means $\eta=0$. In this case, the coefficient of the second term in (\ref{quadnp}) blows up and the scalar field appears to freeze. Things, however, are actually more complicated, as a more detailed analysis shows \cite{Blas:2009yd}. When detailed balance is enforced, the scalar does propagate around backgrounds that are neither static nor homogeneous. In fact it appears to satisfy a first order differential equation, which constitutes a serious problem for the theory; see Ref.~\cite{Blas:2009yd} for more details. Similar conclusions have been obtained by considering the Hamiltonian formulation of the theory with detailed balance \cite{Li:2009bg,Henneaux:2009zb}. In particular, the Hamiltonian constraint is not automatically preserved by time evolution (it is a second class constraint) and the theory propagates $5/2$ degrees of freedom, the $1/2$ corresponding to the scalar mode which satisfies a first order equation and the rest to the two graviton polarizations. Such a theory, though mathematically consistent, exhibits no time evolution and is, therefore, physically meaningless \cite{Henneaux:2009zb}.
\subsection{Strong coupling}
We just saw that, judging from the quadratic action, the non-projectable version exhibits improved scalar dynamics with respect to the projectable version, as long as detailed balance is not enforced. However, we found earlier that the scalar mode in the projectable version gets strongly coupled at unacceptably low energies, so one has to check whether this is the case in the non-projectable version as well. To do so, we write down the cubic action for the scalar \cite{Papazoglou:2009fj}:
\begin{eqnarray}
S^{np}_3&=&M_{\rm pl}^2\int d t d^3x \left\{\left(1-\frac{4(1-\eta)}{\eta^2}\right)h (\partial h)^2 - {2 \over c_h^4} \dot{h} \partial_i h {\partial^i \over \partial^2} \dot{h} \right. \nonumber \\&& \left.~~~~~~~~~~~~~~~~~~~
+ \left(\frac{3}{2}+\frac{1}{\eta}\right)\left[{1 \over c_h^4} h \left( {\partial_i \partial_j \over \partial^2}\dot{h}\right)^2 - {(2 c_h^2+1) \over c_h^4} h \dot{h}^2 \right]
\right\} \label{cubicnp}.
\end{eqnarray}
As before in the projectable case, we canonically normalize the low energy quadratic action. This requires the redefinitions
\begin{equation}
t=\sqrt{\frac{\eta}{2-\eta}}\frac{\hat{t}}{|c_h|}, \qquad h=\left(\frac{\eta}{2-\eta}\right)^{1/4} \sqrt{|c_h|} \frac{\hat{h}}{M_{\rm pl}}.
\end{equation}
Recall that we are only interested in the region of the parameter space where conditions (\ref{npcon}) are satisfied. Under the redefinitions the cubic action reads
\begin{eqnarray}
S^{np}_3&=&\frac{(2-\eta)^2}{\eta^{1/2} c^{\prime\,3/2}_h M_{\rm pl}}\int d \hat{t} d^3x \left\{
c_h^2\,\left(1-\frac{8(1-\eta)}{(2-\eta)^2}\right)\hat{h} (\partial \hat{h})^2 - 2 \hat{h}' \partial_i \hat{h} {\partial^i \over \partial^2}\hat{h}' \right. \nonumber \\&& \left.~~~~~~~~~~~~~~~~~~~
+ \left(\frac{3}{2}+\frac{1}{\eta}\right)\left[ \hat{h} \left( {\partial_i \partial_j \over \partial^2}\hat{h}'\right)^2 - \left(\frac{2\, \eta\, c_h^{\prime\,2}}{2-\eta}+1\right) \hat{h}(\hat{h}')^2 \right]
\right\} \label{cubicrednp},
\end{eqnarray}
where we have expressed everywhere $c_h$ in terms of the physical quantity $c'_h$. From eq.~(\ref{cubicrednp}) one directly infers that the cubic interactions are suppressed with respect to the quadratic ones by various scales $f(|\lambda-1|,\eta) M_{\rm pl}$, where $f$ is an algebraic function whose functional form depends on which terms one considers. The scalar mode becomes strongly coupled at the lowest of these scales.
Both $|\lambda-1|$ and $\eta$ essentially measure deviations from Lorentz invariance which are present at arbitrarily low scales. They have to be small for the theory to avoid experimental constraints on Lorentz violations. It is rather straightforward to verify that in this case the scale at which the theory gets strongly coupled also has to be low. Instead of performing a general analysis leaving both $|\lambda-1|$ and $\eta$ small but arbitrary, we will focus on the most interesting case, where $c'_h\sim 1$. This value of $c'_h$ is the one dictated by Cherenkov radiation constraints \cite{Elliott:2005va}, see the discussion above. We then have $\eta\sim|\lambda-1|$ and the strong coupling scale turns out to be $M_{sc}\sim\sqrt{\eta} M_{\rm pl}\sim\sqrt{|\lambda-1|} M_{\rm pl}$ \cite{Papazoglou:2009fj,Kimpton:2010xi}.
Since $\eta, |\lambda-1|\ll 1$, the strong coupling scale will be much lower than the Planck scale. Absence of preferred frame effects in Solar system observations requires $\eta, |\lambda-1| \lesssim 10^{-7}$ \cite{Will:2005va}, which in turn implies that $M_{sc} \lesssim 10^{15} {\rm GeV}$. Therefore, the strong coupling scale is too high to be phenomenologically accessible from gravitational experiments. In this sense, strong coupling in non-projectable Ho\v{r}ava--Lifshitz gravity would pose no threat if the theory were to be treated as an effective theory describing gravity at low energies. On the other hand, the mere presence of strong coupling, even at relatively high energies as in this case, is enough to seriously question the power counting renormalizability of the theory \cite{Papazoglou:2009fj}. Indeed, as mentioned also earlier when considering the projectable case, renormalizability arguments were based on the assumption that a perturbative treatment can be used at arbitrarily high energies.
However, the strong coupling arguments given here neglect the role of the higher order operators in the action. As proposed in Ref.~\cite{Blas:2009ck}, a way to avoid strong coupling is to lower the scale that suppresses the higher order operators in action (\ref{npaction}) below $M_{sc}$, {\em i.e.}~impose that $M_A\sim M_B\sim M_\star$ and $M_\star<M_{sc}$. This clearly requires the introduction of a second scale, different from $M_{\rm pl}$, as well as a peculiar hierarchy of scales. Also, it does seem to involve some large dimensionless couplings as $M_{\rm pl} \gg M_{\star}$. However, as it has been argued in Refs.~\cite{Blas:2009ck,Blas:2010hb} this hierarchy of scales is technically natural.
An important consequence of this effective constraint on $M_\star$, coming from the requirement to avoid strong coupling and first pointed out in Ref.~\cite{Papazoglou:2009fj}, is that $M_\star$ now becomes bounded both from above and from below. Being the scale that suppresses higher order derivatives in the dispersion relations, one would expect that the larger $M_\star$ is, the better. Assuming that this same scale suppresses higher order derivatives in the matter dispersion relations (which would also be modified in a way similar to gravity), and in particular for photons, places a strong lower bound on $M_\star$: the absence of a time delay between different frequency $\gamma$-rays coming from distant astrophysical sources implies that \cite{Albert:2007qk,:2009zq} $M_\star \gtrsim 10^{11}{\rm GeV}$. However, one now also has an upper bound if strong coupling is not to be a problem, and the combined set of constraints yields
\begin{equation}
10^{15} {\rm GeV} \gtrsim M_\star \gtrsim 10^{11}{\rm GeV}.
\end{equation}
There seems to be a comfortable window for $M_\star$ within which non-projectable Ho\v{r}ava--Lifshitz gravity (without detailed balance or other restrictions) avoids strong coupling without exhibiting detectable Lorentz violations, at least with current experimental accuracy. Improving the accuracy with which we can constrain both preferred frame effects in the Solar system and modifications of the dispersion relations could potentially close this window.
\subsection{Relation to Einstein-Aether theory}
Non-projectable Ho\v{r}ava--Lifshitz gravity is a theory with a preferred foliation, which will exhibit Lorentz invariance violations even at low energies. This should be clear by now by inspecting the action (\ref{npaction}) and taking into account that, given the dynamics of the scalar discussed previously, one can no longer hope that $\lambda$ and $\eta$ will run to $1$ and $0$ respectively in the IR, in order for Lorentz symmetry to emerge. Note that the operators related to these couplings are of lowest order, {\em i.e.}~dimension 2.
A model theory for gravity with preferred frame effects is Einstein-aether theory \cite{Jacobson:2000xp,Jacobson:2008aj}. It is the most general theory for a unit timelike vector field coupled to gravity (but not to matter), which is second order in derivatives.
The most general action for Einstein-aether theory, up to total derivative terms and setting aside matter coupling, is
\begin{equation} \label{S}
S_{\ae} = \frac{1}{16\pi G_{\ae}}\int \sqrt{-\bar{g}}~ (-\bar{R} + L_{\ae})
~d^{4}x
\end{equation}
where $\bar{g}_{\mu\nu}$ is the 4-dimensional metric, $\bar{g}$ its determinant, $\bar{R}$ the Ricci scalar of this metric, $\bar{\nabla}_\mu$ is the covariant derivative associated with $\bar{g}_{\mu\nu}$,
\begin{equation} \label{Lae}
L_{\rm \ae} = -M^{\alpha\beta\mu\nu} \bar{\nabla}_\alpha u_\mu \bar{\nabla}_\beta u_\nu,
\end{equation}
with $M^{\alpha\beta\mu\nu}$ defined as
\begin{eqnarray} M^{\alpha\beta\mu\nu} = c_1 \bar{g}^{\alpha\beta}\bar{g}^{\mu\nu}+c_2\bar{g}^{\alpha\mu}\bar{g}^{\beta\nu}
+c_3 \bar{g}^{\alpha\nu}\bar{g}^{\beta\mu}+c_4 u^\alpha u^\beta \bar{g}^{\mu\nu}.
\end{eqnarray}
Greek indices run from $0$ to $3$, the $c_i$ are dimensionless coupling constants,
and it is assumed that $u_\mu$ is constrained to be a unit timelike
vector, $\bar{g}^{\mu\nu}u_\mu u_\nu=1$. This constraint can be explicitly imposed by the use of a Lagrange multiplier, or implicitly taken into account in the variation of the action by allowing only variations that respect it.
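Explicitly, the Lagrange multiplier option amounts to adding a term of the schematic form (with $\ell$ the multiplier field; the overall normalization is a matter of convention)
\begin{equation}
S_{\ae}\rightarrow S_{\ae}+\frac{1}{16\pi G_{\ae}}\int \sqrt{-\bar{g}}\;\ell\left(\bar{g}^{\mu\nu}u_\mu u_\nu-1\right)d^4x,
\end{equation}
whose variation with respect to $\ell$ enforces the unit constraint, while its variation with respect to $u_\mu$ contributes to the aether equation of motion a term along $u^\mu$.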
One may wonder whether there is any relation between Einstein-aether theory and the (naive) IR limit of non-projectable Ho\v{r}ava--Lifshitz gravity (that is, a theory described by action (\ref{npaction}) without the higher order operators suppressed by $M_A$ and $M_B$). It was shown in Ref.~\cite{Jacobson:2010mx} that the latter is actually a limiting case of the former. Let us briefly go through the arguments presented there.
Starting from the action (\ref{S}), impose the restriction that the aether be hypersurface orthogonal:
\begin{equation}
\label{ho}
u_\alpha=\frac{\partial_\alpha T}{\sqrt{\bar{g}^{\mu\nu}\partial_\mu T \partial_\nu T}}\,,
\end{equation}
where the unit constraint on the aether has been taken into account. $T$ will satisfy a dynamical equation. However, since it is a scalar, on shell its dynamical equation will be implied by the contracted Bianchi identity, provided that $T$ is not a constant (this can be shown for a general scalar field coupled to gravity, provided that the theory is manifestly covariant and the other fields are assumed to satisfy their field equations). However, $T$ cannot be a constant, as it now defines the foliation. Thus, we are allowed not to impose the equation of motion for $T$ explicitly, which in turn implies that $T$ itself can be chosen as the time coordinate. We then have
\begin{equation}
\label{ut}
u_\alpha=\delta_{\alpha T} (\bar{g}^{TT})^{-1/2}=N\delta_{\alpha T} \,.
\end{equation}
This choice selects a preferred foliation with $N$ being the lapse function of this foliation. Substituting eq.~(\ref{ut}) into the Einstein-aether theory action (\ref{S}), one gets after some algebra
\begin{equation}\label{SBPSH}
S= \frac{M_{\rm pl}^2}{2}\!\int dt d^3x \, N\sqrt{g}(K_{ij}K^{ij} - \lambda K^2
+ \xi {}^{(3)}\!R + \eta\, a_ia^i),
\end{equation}
with the following correspondence of parameters:
\begin{equation}
\label{HLpar}
\frac{1}{8\pi M_{\rm pl}^2 \,G_{\ae}}=\xi=\frac{1}{1-c_{13}}, \quad \lambda=\frac{1+c_2}{1-c_{13}},\quad \eta=\frac{c_{14}}{1-c_{13}},
\end{equation}
where we use the convention $c_{ij}=c_i+c_j$.\footnote{Note that there is a typo in the formula relating $\xi$ and $c_{13}$ in eq.~(10) of Ref.~\cite{Jacobson:2010mx}.} Clearly, action (\ref{SBPSH}) is the same as action (\ref{npaction}) without the higher order operators. Therefore, the IR limit of non-projectable Ho\v{r}ava-Lifshitz gravity is equivalent to Einstein-aether theory with a hypersurface orthogonal aether.
This equivalence provides new insights into both theories. For Einstein-aether theory it implies that there may exist a UV completion, at least for some subclasses. For non-projectable Ho\v{r}ava--Lifshitz gravity it tells us that it can be viewed as a theory where a preferred foliation is dynamically defined by a scalar field. Note that one would expect to be able to supplement Einstein-aether gravity (with a hypersurface orthogonal aether) with suitable higher order operators and construct a covariant equivalent of the full non-projectable Ho\v{r}ava--Lifshitz gravity (see Ref.~\cite{Germani:2009yt} for an early attempt).
Clearly, on account of the equivalence, some results obtained for Einstein-aether theory carry over to the IR limit of Ho\v{r}ava--Lifshitz gravity and vice versa. A characteristic example is spherically symmetric solutions, as the aether is always hypersurface orthogonal in spherical symmetry \cite{Jacobson:2010mx,Blas:2010hb}. For example, some spherically symmetric solutions found in Ref.~\cite{Kiritsis:2009vz} for Ho\v{r}ava--Lifshitz gravity had actually already been found earlier for Einstein-aether theory \cite{Eling:2006df}. On the other hand, some important results do not carry over from general Einstein-aether theory, as they hinge on the exact form of the field equations, and specifically on whether the vector is a gradient of a scalar or not. For instance, though qualitatively similar, quantitatively the PPN constraints on non-projectable Ho\v{r}ava--Lifshitz gravity cannot simply be inferred from those of Einstein-aether theory via a parameter mapping, and they have been obtained independently in Ref.~\cite{Blas:2010hb}.
\section{Cosmology}
As already mentioned, all versions of Ho\v{r}ava--Lifshitz gravity presented above are written in a preferred foliation, and, therefore, they do not constitute diffeomorphism invariant theories, but they are invariant under diffeomorphisms that preserve the foliation. This symmetry provides enough gauge freedom to choose
\begin{equation}
N=1,\quad N^i=0,\quad g_{ij}=a(t)^2\delta_{ij},
\end{equation}
and bring the ADM line element (\ref{admmetric}) into the usual Friedmann--Lema\^{i}tre--Robertson--Walker (FLRW) form
\begin{equation}
ds^2=-dt^2+a(t)^2 \left[\frac{dr^2}{1-k r^2}+r^2\left(d\theta^2+\sin^2 \theta d\phi^2\right)\right],
\end{equation}
under the usual cosmological assumptions of homogeneity and isotropy. Here, $a(t)$ is the scale factor and $k=+1, 0, -1$ according to whether the universe is hyperspherical, spatially flat, or hyperbolic.
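It is useful to record how the kinetic terms evaluate on this ansatz. With $N=1$, $N^i=0$ and $g_{ij}=a^2(t)\gamma_{ij}$, where $\gamma_{ij}$ is the constant curvature 3-metric,
\begin{equation}
K_{ij}=\frac{1}{2}\dot{g}_{ij}=\frac{\dot a}{a}\,g_{ij},\qquad K=3\frac{\dot a}{a},\qquad
K_{ij}K^{ij}-\lambda K^2=3(1-3\lambda)\frac{\dot a^2}{a^2},
\end{equation}
which is the origin of the $(3\lambda-1)$ factors appearing in the Friedmann equations below.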
Clearly, in this setting $N$ has no space dependence and its spatial derivatives vanish. This makes the actions of the projectable and the non-projectable versions of the theory, eqs.~(\ref{paction}) and (\ref{npaction}), coincide (assuming that all operators allowed by the symmetries of the theory have been consistently taken into account). The field equations do not coincide completely, however, as it does make a difference whether $N$ is assumed to be space-independent {\em a priori} or whether the latter is enforced as a gauge choice. The difference lies in the Hamiltonian constraint; the rest of the equations coincide. The former is global in the projectable theory and local in the non-projectable theory. So, when it comes to background cosmology, one can study both versions of the theory at the same time, just taking into account the subtle issue of the Hamiltonian constraint. The dynamics of perturbations around the background will, of course, differ in the two versions, but we will not discuss perturbations here.
Cosmology in Ho\v{r}ava--Lifshitz gravity was first studied in Refs.~\cite{Kiritsis:2009sh,Sotiriou:2009bx}. A review of cosmology in the projectable version has recently been given in Ref.~\cite{Mukohyama:2010xz} and, as explained above, most of the results summarized there apply also to the non-projectable version. Here we will only give a very brief overview of background cosmology and mention in passing some key advantages of Ho\v{r}ava--Lifshitz cosmology. We refer the reader to the relevant literature for more details.
We start by presenting the field equations in an FLRW background. The supermomentum constraint is trivially satisfied, so we are left with two equations, corresponding to the first and second Friedmann equations. In the projectable version, the Hamiltonian constraint yields
\begin{equation}
\label{f1p}
\int d^3 x a^3 \left\{\frac{3\lambda-1}{2}\; {\dot a^2\over a^2} - {V(a)\over 6} - {8\pi G_N \rho\over 3}\right\}=0,
\end{equation}
where $8\pi G_N\equiv M_{\rm pl}^{-2}$, $\rho\equiv-g^{-1/2} \delta S_M/\delta N$, $S_M$ is the matter action, and
\begin{eqnarray}
V(a) = g_0\, M_{\rm pl}^2 + {6 g_1 k \over a^2} + {12(3g_2+g_3) k^2 \over M_{\rm pl}^2\,a^4}
+ {24(9 g_4 + 3g_5+ g_6) k^3\over M_{\rm pl}^4 \, a^6}.
\end{eqnarray}
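As a short consistency check of the coefficients in $V(a)$: the constant curvature spatial slices have $R_{ij}=(2k/a^2)g_{ij}$ and $R=6k/a^2$, so
\begin{equation}
R^2=\frac{36k^2}{a^4},\quad R_{ij}R^{ij}=\frac{12k^2}{a^4},\quad
R^3=\frac{216k^3}{a^6},\quad R\, R_{ij}R^{ij}=\frac{72k^3}{a^6},\quad
R^i{}_jR^j{}_kR^k{}_i=\frac{24k^3}{a^6},
\end{equation}
while the $g_7$ and $g_8$ operators vanish on the homogeneous background; collecting terms reproduces the combinations $12(3g_2+g_3)$ and $24(9g_4+3g_5+g_6)$ above.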
In the non-projectable case, the Hamiltonian constraint is local and one can do away with the integral:
\begin{equation}
\label{f1np}
\frac{3\lambda-1}{2}\; {\dot a^2\over a^2} - {V(a)\over 6} = {8\pi G_N \rho\over 3}.
\end{equation}
The dynamical equations yield in both cases
\begin{eqnarray}
\label{f2}
- \frac{3\lambda-1}{2} \; {\ddot a\over a} &=& {1\over2}\frac{3\lambda-1}{2} {\dot a^2\over a^2}
- {1\over12 a^2} {d[V(a)\, a^3]\over d a}
+ 4\pi G_N p,
\end{eqnarray}
where $p\equiv -g^{ij}(2/N\sqrt{g})\delta S_M/\delta g^{ij}$.
Eqs.~(\ref{f1p}) and (\ref{f2}), and Eqs.~(\ref{f1np}) and (\ref{f2}), govern background cosmology in the projectable and non-projectable versions respectively. Notice the following: in the non-projectable version one can eliminate $\dot{a}$ from eq.~(\ref{f2}) by using eq.~(\ref{f1np}) to get
\begin{eqnarray}
\label{f3np}
- \frac{3\lambda-1}{2} \; {\ddot a\over a} &=& -{1\over 12 a} {d[V(a) a^2] \over d a} +{4\pi G_N\over 3} (\rho+3p).
\end{eqnarray}
Differentiating eq.~(\ref{f1np}) and subtracting from eq.~(\ref{f3np}) yields the standard conservation law
\begin{equation}
\label{mc}
\dot{\rho}+3\frac{\dot{a}}{a} (\rho+p)=0.
\end{equation}
This means that the conservation law for matter is implied by the two (modified) Friedmann equations and, thus, imposing it separately is neither needed nor does it constrain the dynamics. This is in direct analogy with general relativity.
Things are quite different in the projectable case, given the global nature of the Hamiltonian constraint. It has been argued in Ref.~\cite{Mukohyama:2009mz} that a global constraint, such as eq.~(\ref{f1p}), is irrelevant for the local patch of the universe inside the Hubble horizon which the FLRW spacetime is supposed to approximate. Now, instead of differentiating eq.~(\ref{f1p}), we could integrate eq.~(\ref{f2}) and ignore eq.~(\ref{f1p}) completely. One then gets
\begin{equation}
\label{f1p2}
\frac{3\lambda-1}{2}\; {\dot a^2\over a^2} - {V(a)\over 6} = \frac{8\pi G_N}{3} \left(\rho+\frac{C(t)}{a^3}\right),
\end{equation}
where the functional form of $C(t)$ depends on the conservation law matter satisfies. If we eliminate $\dot{a}$ from eq.~(\ref{f2}) by using eq.~(\ref{f1p2}) we get
\begin{eqnarray}
\label{f3p}
- \frac{3\lambda-1}{2} \; {\ddot a\over a} &=& -{1\over 12 a} {d[V(a) a^2] \over d a} +{4\pi G_N\over 3} \left(\rho+\frac{C(t)}{a^3}+3p\right).
\end{eqnarray}
Now, the conservation law for matter has become essential to the dynamics via the presence of $C(t)$. The terms containing $C(t)$ are the only difference between eqs.~(\ref{f1p2}) and (\ref{f3p}), which describe the background dynamics in the projectable case, and eqs.~(\ref{f1np}) and (\ref{f3np}), which describe the background dynamics in the non-projectable case.\footnote{This analysis, leading to the same conclusions, can be performed also before imposing the FLRW ansatz and the relevant symmetries, see Ref.~\cite{Mukohyama:2009mz}.}
Now that we have the field equations in a suitable form for both cases, we can proceed to discuss phenomenology. Suppose that matter is coupled in such a way that at low energies $\rho$ and $p$ have the usual meaning and satisfy the standard conservation law of eq.~(\ref{mc}) (this is a strong assumption, see also the next section). For the projectable case this implies that $C(t)=C_0$, where $C_0$ is a constant. The $C_0$ related terms then play the role of a dark matter component \cite{Mukohyama:2009mz}. This dark matter component will not exist in the non-projectable case.
Setting aside the role of matter, the main difference between Ho\v{r}ava--Lifshitz gravity and general relativity in background cosmology is the presence of the last two terms in $V$. The first one is what is known as a dark radiation component, as it scales as $a^{-4}$. The second is what one could call a stiff matter component, as it scales as $a^{-6}$. Note that the sign of the $a^{-6}$ term depends on the sign of $k$ and that both terms vanish when $k=0$. An interesting feature related to the presence of these terms is that they can lead to solutions with classical cosmological bounces, provided that their coefficients have appropriate signs \cite{Sotiriou:2009bx,Kiritsis:2009sh}.
One last important property of Ho\v{r}ava--Lifshitz cosmology worth mentioning before closing this brief overview is that it seems to lead to a scale invariant spectrum of cosmological perturbations without the need for inflation \cite{Mukohyama:2009gg}. This property is related to the anisotropic scaling between space and time that the theory exhibits at high energies, which is also the key ingredient for power counting renormalizability.
\section{Conclusions and future perspectives}
A brief overview of Ho\v{r}ava--Lifshitz gravity was given, and special attention was paid to distinguishing between the various different versions. After that, the main focus of this review was on the dynamics and the viability of these versions. In summary, the projectable version has serious viability problems, because it harbours a scalar mode which is either classically or quantum mechanically unstable (depending on the position in parameter space) and also exhibits strong coupling at low energies (whether or not the Vainshtein mechanism can alleviate the latter is not yet clear, see above). The non-projectable version, on the other hand, seems to have similar (and other) problems if detailed balance is imposed. These problems, however, appear to be overcome once detailed balance is abandoned and all operators allowed by the symmetries of the theory are consistently taken into account. Alleviating the strong coupling problem requires the introduction of a second scale in the theory, which should be parametrically smaller than the Planck scale and should be the scale at which higher order operators become important. The dynamics of the version exhibiting the extra $U(1)$ symmetry, which was proposed very recently, have not been adequately studied yet.
There are many open issues regarding Ho\v{r}ava--Lifshitz gravity, the two most important ones being the following. Firstly, in the introduction we argued that the theory is power-counting renormalizable, as is usually done in the literature. Even though this is a strong indication for UV completeness, renormalizability beyond power counting has not been explicitly shown. Additionally, the renormalization group flow of the various couplings has not been studied, which implies that, for the time being, one does not really know whether the theory approaches general relativity in the IR ($\lambda\to 1$, $\eta\to 0$) or not. Secondly, the role of matter and its coupling to gravity have not been fully clarified yet. The matter action will have to include higher order spatial derivatives, which implies that there will be modifications in the dispersion relations of matter fields that can lead to serious constraints. Additionally, couplings between the matter and the scalar graviton could lead to violations of the equivalence principle. Even if initially omitted, such couplings would be generically generated by radiative corrections, unless some symmetry prohibits them ({\em e.g.}~supersymmetry \cite{GrootNibbelink:2004za}). Hopefully, future work will shed some light on these issues.
\ack
I am indebted to my collaborators Antonios Papazoglou, Matt Visser and Silke Weinfurtner for their hard work and invaluable input in our joint projects on Ho\v{r}ava--Lifshitz gravity. I would also like to thank Ted Jacobson, Ian Kimpton and Tony Padilla for numerous stimulating and enlightening discussions. Many thanks also to the participants of the Peyresq 2010 meeting. This work was supported by a Marie Curie Incoming International Fellowship.
\section*{References}
\section{Introduction}
Deep neural networks (DNNs, e.g., \cite{lecun2015deep,He2016DeepRL}), properly trained via empirical risk minimization (ERM), have been demonstrated to significantly improve benchmark performances in a wide range of application domains.
However, minimizing empirical risk over finite or biased datasets often results in models latching on to \emph{spurious correlations} that do not reflect a robust relationship between the input data and the output labels.
Moreover, benchmark evaluations based solely on average accuracy may overlook these critical issues. For instance, Fig.~\ref{fig:teaser} shows that on CIFAR10, even though a standard ERM model reaches human-level test accuracy (\textit{red line}), the worst test accuracies of each class, stratified by background colors, are inconsistent across the ten classes, and for some classes the degradation from total accuracy is huge (\textit{black line}). Such inconsistency and discrepancy have huge real-world implications, suggesting DNN models may make biased decisions against or in favor of specific spurious factors, such as certain background colors.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/worstgroups_2layer.pdf}
\caption{\textbf{FlowAug reduces subgroup degradation.}
\textbf{CIFAR10-B} enables us to observe the worst test time subgroup accuracy in each class. Standard ERM shows \textit{subgroup degradation}, uneven subgroup performances across all classes, and a huge gap between total accuracy (\textit{red line}) and the worst subgroup accuracy (\textit{black line}). This issue persists even after common DAs are used (\textit{top}). Our proposed \textbf{FlowAug} mitigates this issue (\textit{bottom}) and also reports improved overall performance.}
\label{fig:teaser}
\vspace{-4mm}
\end{figure*}
Researchers have been working in different directions to understand the effect of spurious correlations, including model over-parameterization~\cite{sagawa2020investigation}, causality~\cite{Arjovsky2019InvariantRM} and information theory~\cite{lovering2020predicting,zhou2021examining}.
Various techniques have emerged over the years to address this challenge, among which DA~\cite{shorten2019survey} has stood out for its simplicity and effectiveness.
DA is an essential tool that adds examples to the training set by applying label-preserving transformations to the data, and it has shown better generalization results in various machine learning tasks than other approaches~\cite{Zhang2018mixupBE, Yun2019CutMixRS, wei-zou-2019-eda, Guo2019AugmentingDW, Wang2021MultiFormatCL, Shen2020ASB}.
These augmentation methods, however, are often based on heuristic and coarse image processing techniques such as flipping, rotating, blurring,
or slightly manipulating images by mixing attributes from other inputs \cite{Zhang2018mixupBE, Yun2019CutMixRS, Devries2017ImprovedRO, Hendrycks2020AugMixAS} (Fig.~\ref{fig:augmentations}); therefore, they can only address limited aspects of spurious correlations, for which we will show an example in \S~\ref{sec:case}. To address this limitation, instead of mixing low-level features, we seek to augment the training set by learning \emph{semantic} deep representations and then using them to generate new images.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/augmentations.pdf}
\caption{\textbf{Examples of different augmentation methods.} }
\label{fig:augmentations}
\vspace{-3mm}
\end{figure*}
\input{subgroups_car}
In this paper, as a first step towards scalable and comprehensive evaluation of subgroup performance against semantically meaningful and realistic spurious correlations, we conduct a case study experiment to investigate background colors as spurious features (\S\ref{sec:case}), chosen for their commonality in image classification and immediate implications for trustworthiness \cite{10.1145/2939672.2939778}. To directly quantify the results, we manually split the test data of CIFAR10 and CIFAR100 into subgroups based on natural image background colors, yielding two benchmark datasets \textbf{CIFAR10-B}ackground (CIFAR10-B) and \textbf{CIFAR100-B}ackground (CIFAR100-B).
Specifically, we split the images in the test sets into eight different subgroups of background colors: \{green, blue, gray, brown, white, red, black, others\} (Fig.~\ref{fig:subgroups_car}).
Equipped with the new datasets, we can investigate the reliance of deep neural models on background color in a multi-class multi-subgroup setup.
We observe that even though standard DNNs have achieved human-level performance in image classification tasks, the accuracies vary considerably across different subgroups. This phenomenon demonstrates the reliance on background colors as spurious features.
Moreover, applying some popular DA methods does not prevent the models from producing uneven accuracies across subgroups, as shown in Fig.~\ref{fig:teaser}~\&~\ref{fig:case_study}, which further demonstrates that low-level feature manipulations are not sufficient to address spurious correlations. To quantify our observations, we propose \textit{MacroStd}, a metric that quantifies subgroup performance discrepancy and indicates reliance on spurious correlations (\S~\ref{sec:metric}).
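The precise definition of \textit{MacroStd} is given in \S~\ref{sec:metric}; purely as an illustration of the kind of quantity we have in mind (this sketch is not necessarily the exact definition used later), one natural instantiation is the per-class standard deviation of subgroup accuracies, macro-averaged over classes:
\begin{verbatim}
import numpy as np

def macro_std(subgroup_acc):
    # subgroup_acc: dict mapping each class to a list of its
    # per-subgroup test accuracies. Returns the class-averaged
    # ("macro") standard deviation; 0 means perfectly even
    # subgroup performance. Illustrative sketch only.
    return float(np.mean([np.std(a) for a in subgroup_acc.values()]))
\end{verbatim}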
To enable semantic data augmentations and address the issue of uneven accuracies, we propose \textbf{FlowAug}, a novel DA method that is capable of manipulating images semantically via decoupled representations learned from invertible generative flows~\cite{decoupling2021} (\S\ref{sec:flowaug}). Concretely, our deep generative augmentation approach incorporates a novel flow-based generative model that encourages disentanglement of global and local representations of images, which arguably correspond to the image ``style'' and ``content'' \cite{Gatys2015ANA, CycleGAN2017}, respectively.
By operating on the global representation, which is disentangled from the image class label, FlowAug semantically creates new images for DA.
More consistent performance across subgroups demonstrates the effectiveness of FlowAug. Furthermore, though not our main focus, superior experimental results on various in-distribution (ID) and out-of-distribution (OOD) benchmarks, including CIFAR10, CIFAR100~\cite{Krizhevsky2009LearningML}, CIFAR10.1\cite{recht2018cifar10.1, torralba2008tinyimages}, CIFAR10-C and CIFAR100-C~\cite{hendrycks2019robustness}, further bolster our belief that low-level manipulations are not sufficient. To the best of our knowledge, we are the first to leverage normalizing flows for image classification data augmentation.
To summarize, our contributions are four-fold,
\vspace{-1mm}
\begin{itemize}
\vspace{-1mm}
\item We curate \textbf{CIFAR10-B} and \textbf{CIFAR100-B}, new test sets based on CIFAR10 and CIFAR100, that enable us to study the impact of semantically meaningful and realistic spurious correlations in a multi-class multi-subgroup setup.
\vspace{-1mm}
\item We propose \textbf{FlowAug}, a novel augmentation method that leverages ``expert knowledge'' of the deep generative model to change semantic attributes of images.
\vspace{-1mm}
\item We propose a generic metric that captures the subgroup degradation phenomenon of ERM and common DA methods and measures the sensitivity of model performances to spurious correlations, and demonstrate FlowAug's effectiveness in this regard.
\vspace{-1mm}
\item As an additional benefit, we conduct experiments on CIFAR10/100, CIFAR10.1, and CIFAR10/100-C, and show FlowAug can further provide superior or competitive performances on ID and OOD datasets.
\end{itemize}
\input{labelingMethod_fig}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figures/cifar10_bestworst_standard_autoaug_flowaug.png}
\caption{\textbf{Class average performances (dark bar) and their worst subgroup performances (light bar).} Although a standard CNN model can reach human-level accuracy (\textit{red line}), we find that the subgroup performances can be surprisingly low. Even after data augmentation, the same phenomenon remains (\textit{middle}). FlowAug can mitigate this phenomenon \textit{(right)}.}
\label{fig:case_study}
\vspace{-4mm}
\end{figure*}
\section{A Motivating Example of Subgroup Degradation}\label{sec:case}
We investigate background color as the spurious correlation with our CIFAR10-B (\S~\ref{sec:philosophy}) by first training a standard Resnet18 for 250 epochs with weight decay $5\times 10^{-4}$, initial learning rate 0.1, and learning rate decay at [100, 150] epochs by a factor of 0.1. We observe significant performance degradation in the subgroups of some classes, for example ``airplane'', ``bird'', and ``deer'' (left plot in Fig.~\ref{fig:case_study}). Even after applying DA methods such as AutoAug \cite{Cubuk2018AutoAugmentLA}, the same phenomenon remains; for instance, observe the ``bird'' class in the middle plot of Fig.~\ref{fig:case_study}.
In summary, while a standard Resnet model or a Resnet with popular DAs can achieve more than 90\% class-average accuracy on CIFAR10, the corresponding background subgroup performances can be surprisingly low. This phenomenon, which we call ``subgroup degradation'' or ``in-class variability,'' shows that background colors play a role in the performance of a standard DNN model and constitute spurious correlations; otherwise the performances should be relatively consistent. This motivates us to mitigate the performance variability across subgroups, i.e., the reliance on background attributes, and we show in Fig.~\ref{fig:teaser}~\&~Fig.~\ref{fig:case_study} that FlowAug achieves more uniform subgroup results.
\section{Methods}\label{sec:flowaug}
In principle, DA takes the form of a set of transformation functions $T$, where each $t\sim T$ transforms an input $x$ in a particular fashion. Moreover, an expert may have the knowledge to design label-preserving transformations $T$ such that the $t(x)$'s leave the label unchanged.
After the transformations, the dataset $\mathcal{D}$ is augmented to $\{(x_i^{1:K},y_i)\}_{i=1}^{n}$, where $K$ is the number of times each $x_i$ is transformed. From a frequentist point of view, we can apply any MLE algorithm to the augmented dataset, with the hope that the learned model better estimates the true model since we have more data.
In this section, we discuss our generative flow model, present our datasets CIFAR10-B and CIFAR100-B for studying spurious correlation, and detail our augmentation algorithms. Lastly, we introduce two metrics to quantify the effect of spurious correlation.
\subsection{Decoupling representations with Flow-based Generative Models}\label{sec:flow_property}
\cite{decoupling2021} has shown that embedding an invertible normalizing flow model as a decoder in a variational autoencoder (VAE) can decouple global ($z$) and local ($\nu$) representations of images in an unsupervised fashion. This approach provides state-of-the-art sample quality and, more importantly, can \emph{switch} the decoupled representations of different images to alter their semantic attributes (see Fig.~\ref{fig:switch}). We presume that the global information corresponds to the \textit{style} of the image and the local information leans toward the \textit{content}, in the sense of the neural style transfer literature \cite{Gatys2015ANA, CycleGAN2017}. In this work, we use the flow model $\mathcal{F}$ in an off-the-shelf manner, with which we can encode images into global and local representations and also decode them back to image space like VAEs,
\vspace{-2mm}
\begin{equation}\label{eq:encoder_decoder}
z,\nu \leftarrow \mathcal{F}_{enc} (x);~~x' \leftarrow \mathcal{F}_{dec} (z,\nu)
\end{equation}
where $z \sim \mathcal{N}(\mu(x), \sigma(x))$, $\mu(x)$ and $\sigma(x)$ are neural networks learned from the data, and $\nu \sim \mathcal{N}(0, I)$. Here $z$ is a $d_z$-dimensional vector, where $d_z$ is the dimension of the latent space, and $\nu$ has the same size as the input image $x$.
We further hypothesize that $z$ carries more information about the colors, which are spuriously correlated with the ground truth, and that $\nu$ bears more information about the shape, object, etc., which is more indicative of the labels. In \S~\ref{sec:ablation_loc_v_global}, we conduct an ablation study to test this hypothesis.
\subsection{Datasets quantifying spurious background correlations }\label{sec:philosophy}
We curate CIFAR10-B \& CIFAR100-B to identify and study spurious information in images by labeling the major background colors of the CIFAR10 and CIFAR100 validation sets. By examining the subgroup performances, we can measure the sensitivity of a model to different spuriously correlated colors. As shown in Fig.~\ref{fig:subgroups_car}, we manually label the background colors of CIFAR10 and CIFAR100 and split them into eight separate groups. We understand that people have different criteria for determining the background color; therefore, we provide our four main labeling principles as follows,
\vspace{-3mm}
\begin{enumerate}
\item We label the color that has the most coverage around the object. In Fig.~\ref{fig:label_phil}(a), one may argue that the red patch or the blue ocean takes up most of the background, but the ``baby'' is surrounded completely by the green area, and the ``flatfish'' is in the red area.
\vspace{-3mm}
\item When two colors cover almost the same area around the object, we choose the color that appears farther away. In Fig.~\ref{fig:label_phil}(b), black is farther away from the ``bowl'', and so is the blue sky for the ``can''.
\vspace{-3mm}
\item When two colors cover almost the same area and appear to be at a similar distance, we make a judgment call on the color that has more coverage (Fig.~\ref{fig:label_phil}(c)).
\vspace{-3mm}
\item When multiple colors appear in the background and none is significantly larger than the rest (Fig.~\ref{fig:label_phil}(d)), or when the object takes up almost all the space in the picture so that we cannot judge the color in the background, or when the perceived color does not belong to our categories, we put it in the ``others'' category.
\end{enumerate}
\begin{algorithm}[t]
\caption{FlowAug-Gaussian Global $z$}\label{alg:gaussian}
\label{alg}
\KwInput{Flow: $\mathcal{F}$, Dataset: $X$, $L$, $\mu, \sigma, b$}
\For{$l = 1, ..., L$}
{
$x\sim X$\tcp*{Sample image}
$z,\nu \leftarrow \mathcal{F}_{enc}(x)$ \tcp*{Encode the image}
$\epsilon \sim \mathcal{N}_{trunc}(\mu,\sigma^2;b) $\tcp*{Sample perturbations}
$z \leftarrow z + \epsilon $\tcp*{Perturb the global code}
$x_{aug} \leftarrow \mathcal{F}_{dec}(z,\nu)$ \tcp*{Decode global and local back to image space}
}
\KwOutput{$X_{aug}$}
\end{algorithm}
\vspace{-5mm}
\subsection{Algorithms}
Knowing the properties of $\nu$ and $z$ discussed in \S\ref{sec:flow_property}, we design two families of transformations that operate on the global $z$: (1) $T_1$: we add perturbations to $z$, and (2) $T_2$: we interpolate global information extracted from different images.
The overarching rationale is that, by equipping models with label-preserving images under diverse environments (i.e., backgrounds), the model should learn more robust correlations~\cite{Arjovsky2019InvariantRM}. The second and third rows of Fig.~\ref{fig:augmentations} demonstrate our method's ability in this regard.
More specifically, in $T_1$ we add a truncated Gaussian perturbation $\epsilon$ to $z$,
\vspace{-3mm}
\begin{multline*}
T_1 :=\{t(x)=\mathcal{F}_{dec}(z+\epsilon,\nu)|(z,\nu)=\mathcal{F}_{enc}(x), \\ \epsilon\sim\mathcal{N}_{trunc}(\mu,\sigma^2; b),~~\forall x\}.
\end{multline*}
We use a truncated Gaussian rather than an ordinary Gaussian, since an unbounded Gaussian may sample large values that can destroy the decoding $\mathcal{F}_{dec}(z,\nu)$.
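For concreteness, the following is a minimal NumPy/SciPy sketch of a $T_1$ transformation. The functions \texttt{enc} and \texttt{dec} are hypothetical wrappers around the pretrained $\mathcal{F}_{enc}$ and $\mathcal{F}_{dec}$ of \cite{decoupling2021}, and reading the truncation $b$ as a symmetric bound $\pm b$ is an assumption of this sketch.
\begin{verbatim}
# Sketch of T_1: perturb the global code z with truncated Gaussian noise.
# `enc` and `dec` are hypothetical handles to the pretrained flow model.
import numpy as np
from scipy.stats import truncnorm

def t1_augment(x, enc, dec, mu=0.0, sigma=0.1, b=4.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    z, nu = enc(x)                          # global and local codes
    # truncnorm takes its bounds in standard units: (bound - mu) / sigma.
    lo, hi = (-b - mu) / sigma, (b - mu) / sigma
    eps = truncnorm.rvs(lo, hi, loc=mu, scale=sigma,
                        size=np.shape(z), random_state=rng)
    return dec(z + eps, nu)                 # decode the perturbed pair
\end{verbatim}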
For $T_2$, we encode two random images $x_1, x_2$ to retrieve $z_1, z_2$ and then interpolate $z_1$ and $z_2$ stochastically with a parameter $m$ drawn from a Beta distribution,
\vspace{-3mm}
\begin{multline*}
T_2:=\{t(x_i)=\mathcal{F}_{dec}(z_{new},\nu_i)| z_{new} = m z_i + (1-m) z_j, \\
m\sim Beta(\alpha,\alpha), (z_i,\nu_i)=\mathcal{F}_{enc}(x_i),~(z_j,\nu_j)=\mathcal{F}_{enc}(x_j), ~~\forall i\neq j\}.
\end{multline*}
Detailed transformations are elaborated in Algorithm~\ref{alg:gaussian} \&~\ref{alg:mix}.
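A corresponding sketch of $T_2$ is given below; as before, \texttt{enc} and \texttt{dec} are hypothetical handles to the pretrained flow, and the thresholding step mirrors Algorithm~\ref{alg:mix}.
\begin{verbatim}
# Sketch of T_2: stochastically interpolate the global codes of two
# images while keeping the first image's local (content) code.
import numpy as np

def t2_augment(x1, x2, enc, dec, alpha=1.0, tr=0.5, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    z1, nu1 = enc(x1)
    z2, _ = enc(x2)
    m = rng.beta(alpha, alpha)
    if m < tr:                  # avoid a drastic change in style
        m = 1.0 - m
    return dec(m * z1 + (1.0 - m) * z2, nu1)
\end{verbatim}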
\begin{algorithm}[t]
\caption{FlowAug-Mix Global $z$}\label{alg:mix}
\label{alg}
\KwInput{Flow: $\mathcal{F}$, Dataset: $X$, Threshold: $tr$, $L$, $\alpha$}
\For{$l = 1, ..., L$}
{ $x_1, x_2 \sim X$\tcp*{Sample images}
$z_1,\nu_1 \leftarrow \mathcal{F}_{enc}(x_1)$ \tcp*{Encode $x_1$}
$z_2,\nu_2 \leftarrow \mathcal{F}_{enc}(x_2)$ \tcp*{Encode $x_2$}
$m\sim Beta(\alpha,\alpha)$\tcp*{Sample interpolation parameter}
\If{$m < tr$}{
$m\leftarrow1-m$ \tcp*{Avoid a drastic change in style}
}
$z_1 \leftarrow m z_1 + (1-m) z_2$\tcp*{Interpolate the global codes}
$x_{aug} \leftarrow \mathcal{F}_{dec}(z_1,\nu_1)$ \tcp*{Decode global and local}
}
\KwOutput{$X_{aug}$}
\end{algorithm}
We train our models with the following learning objectives: (1) training only with images transformed by $T_1$ or $T_2$ instead of the original examples, (2) adding the original dataset in addition to the transformed images, and (3) combining the two algorithms and the original dataset:
\vspace{-1mm}
\begin{equation}\label{eq:single}
\mathcal{L}_{FlowAug} = \mathcal{L}(f(t(x)),y; \theta),\quad t\sim T_1 \text{ or } t \sim T_2,
\end{equation}
\vspace{-3mm}
\begin{multline}\label{eq:single_std}
\mathcal{L}_{FlowAug+std} = \mathcal{L}(f(t(x)),y; \theta) + \lambda\mathcal{L}(f(x),y; \theta), \\ \quad t\sim T_1 \text{ or } t \sim T_2,
\end{multline}
\vspace{-3mm}
\begin{multline}\label{eq:gauss_std_mix}
\mathcal{L}_{combine} = \mathcal{L}(f(t_1(x)),y; \theta) + \lambda_1 \mathcal{L}(f(t_2(x)),y; \theta) + \\
\lambda_2 \mathcal{L}(f(x),y; \theta), \quad t_1\sim T_1 \text{ and } t_2 \sim T_2.
\end{multline}
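As an illustration, the combined objective of Eq.~\eqref{eq:gauss_std_mix} can be sketched in PyTorch as below; \texttt{model}, \texttt{x\_t1}, and \texttt{x\_t2} are assumed to be a standard classifier and batches pre-augmented by $T_1$ and $T_2$, and the default $\lambda$ values follow our tuning grid.
\begin{verbatim}
# Minimal sketch of L_combine = L(f(t1(x)), y) + lam1 * L(f(t2(x)), y)
#                               + lam2 * L(f(x), y).
import torch.nn.functional as F

def combined_loss(model, x, x_t1, x_t2, y, lam1=1.0, lam2=0.05):
    loss = F.cross_entropy(model(x_t1), y)                 # T_1 branch
    loss = loss + lam1 * F.cross_entropy(model(x_t2), y)   # T_2 branch
    loss = loss + lam2 * F.cross_entropy(model(x), y)      # standard branch
    return loss
\end{verbatim}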
\subsection{Quantifying subgroup degradation}\label{sec:metric}
To quantify the reliance on background attributes, we first propose using the weighted standard deviation,
\vspace{-1mm}
\begin{equation}\label{eq:wstd}
\sigma_w = \sqrt{\frac{\sum_{i=1}^G w_i (s_i-\Bar{s}^*)^2}{\sum_{i=1}^G w_i -DOF}},
\end{equation}
where the $s_i$'s are the subgroup accuracies, $\Bar{s}^*$ is the weighted mean, the $w_i$'s are weights determined by the number of examples in each subgroup, $G$ is the number of groups, and \textit{DOF} is the degrees of freedom. The weighted Std can be applied to subgroup performances within a class (as in Fig.~\ref{fig:cat_acc}) or across all accuracies from different classes and subgroups.
The second metric we propose is macro standard deviation (\emph{MacroStd}),
\vspace{-1mm}
\begin{equation}\label{eq:macrostd}
\sigma_{Macro}= \sqrt{\frac{1}{C}\sum_{i=1}^{C}{\sigma_{w}^{(i)}}^2},
\end{equation}
where $\sigma_w^{(i)}$ is the weighted standard deviation for each class, and $C$ is the number of classes.
\emph{MacroStd} treats each class equally and measures the sensitivity of a model's performance across subgroups, averaged over classes. A high \emph{MacroStd} suggests that the model has uneven performances across subgroups and could be affected by background colors.
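For reference, a small sketch of both metrics is given below; the per-subgroup weights are the subgroup sizes, and setting $DOF=1$ is an assumption of this sketch (the definition above leaves it generic).
\begin{verbatim}
# Sketch of Eq. (wstd) and Eq. (macrostd).
import numpy as np

def weighted_std(acc, weights, dof=1):
    acc, w = np.asarray(acc, float), np.asarray(weights, float)
    mean = np.average(acc, weights=w)      # weighted mean s-bar*
    return np.sqrt(np.sum(w * (acc - mean) ** 2) / (w.sum() - dof))

def macro_std(per_class_acc, per_class_weights):
    # Root mean square of the per-class weighted standard deviations.
    sw = [weighted_std(a, w)
          for a, w in zip(per_class_acc, per_class_weights)]
    return float(np.sqrt(np.mean(np.square(sw))))
\end{verbatim}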
\input{cifar10_subgroup_fig}
\section{Experiments}\label{sec:experiment}
In this section, we discuss our empirical results on the study of spurious correlations with CIFAR10-B \& CIFAR100-B. Second, although not our primary focus, we present ID and OOD image classification experiments on five datasets --- CIFAR10, CIFAR100, CIFAR10.1, CIFAR10-C, and CIFAR100-C --- to test the generalization capabilities of FlowAug. Lastly, we analyze and provide intuitions on why our approach is superior. The baselines we compare against and the implementation details are also provided.
\subsection{Datasets}
CIFAR10 and CIFAR100 \cite{Krizhevsky2009LearningML} are benchmark datasets for image classification. They contain 10 and 100 classes, respectively, with 5000/500 training examples and 1000/100 test images per class.
CIFAR10.1 \cite{recht2018cifar10.1} is a test set consisting of 2000 images collected from TinyImages \cite{torralba2008tinyimages}, with the same class labels as CIFAR10.
Finally, CIFAR10-C \& CIFAR100-C \cite{hendrycks2019robustness} are benchmark datasets for measuring model generalization in the presence of 18 shallow semantic corruptions, including blurring, contrast, shift, etc.
\subsection{Baselines}\label{sec:baselines}
We compare our proposed method with four types of low-level DA methods: (1) mixing by interpolation, (2) fill-in-with-blank, (3) mixing by fill-in-the-blank, and (4) combinations of image manipulations. In our experiments, we compare with the best setups reported in their papers.
\vspace{-5mm}
\paragraph{Mixup}~\cite{Zhang2018mixupBE} linearly interpolates two random images $x_1, x_2$ and mixes them as $x_{new}=\lambda x_1 + (1-\lambda) x_2$, where $\lambda\sim Beta(\alpha,\alpha)$; the same applies to the label, $y_{new}=\lambda y_1 + (1-\lambda) y_2$.
\vspace{-5mm}
\paragraph{Cutout}~\cite{Devries2017ImprovedRO} randomly crops out a portion of an image and fills it with a specific color, and the label remains unchanged.
\vspace{-5mm}
\paragraph{Cutmix}~\cite{Yun2019CutMixRS} crops out an area of an image but fills the area with a patch of the same size from another image. The label of the augmented image is adjusted according to the area proportions of the two participating examples.
\vspace{-5mm}
\paragraph{Autoaug}~\cite{Cubuk2018AutoAugmentLA} uses reinforcement learning to optimize a pre-defined set of policies, combinations of low-level image manipulation, and then learns the best policy for DA.
\vspace{-5mm}
\paragraph{Standard} refers to the models trained on the original datasets, without using any DA methods.
\subsection{Implementation Details}
\paragraph{Generative models} We pre-train the normalizing flow models as in \cite{decoupling2021}; they achieve negative log-likelihood scores of 3.27 and 3.31 bits/dim (BPD) on CIFAR10 and CIFAR100, respectively.
\vspace{-5mm}
\paragraph{Hyperparameters} In Algorithm~\ref{alg:gaussian}, we simply set $\mu=0$ and $\sigma=0.1$ for the truncated Gaussian distribution. As for truncation $b$, we empirically find that $z$ has an average maximum value around 4 and so we set $b=4$. In Algorithm~\ref{alg:mix}, we simply set $\alpha=1$ and $tr=0.5$. For all models reported in Table~\ref{tab:acc}, we train Resnet18 for 250 epochs with weight decay 0.0005. Also, the learning rate starts at 0.1 and is divided by 10 at [100, 150] epochs. For our learning objectives, we lightly fine-tune $\lambda$ in Eq.~\eqref{eq:single_std} with values of $\{0.01, 0.05, 0.1\}$, and $\lambda_1, \lambda_2$ in Eq.~\eqref{eq:gauss_std_mix} with $\lambda_1=1$ and $\lambda_2 \in\{0.01, 0.05, 0.1\}$.
The generative flow models are trained on two NVIDIA A40 GPUs, while the Resnet18 are trained on one NVIDIA A40 GPU.
\subsection{Empirical Results}
\vspace{-1mm}
\paragraph{MacroStd and WeightedStd} Table~\ref{tab:var} reports the \emph{MacroStd} and the weighted standard deviation of subgroup performances over the whole dataset. Our approach consistently achieves both lower \emph{MacroStd} and lower WeightedStd than the baselines. Moreover, in Fig.~\ref{fig:cat_acc}, our approach also achieves lower WeightedStd at the class level. These results provide evidence that our approach is less affected by background colors and hence more robust.
\input{table_cifarSeries_std}
\vspace{-3mm}
\paragraph{CIFAR10 and CIFAR100} Although ID and OOD generalization performances are not our main focus, FlowAug demonstrates significant gains on CIFAR10 and CIFAR100; we report the experimental results in Table~\ref{tab:acc}. Algorithm~\ref{alg:gaussian} by itself achieves better results than all the baselines. Algorithm~\ref{alg:mix} also performs better than the \emph{Standard} baseline and is competitive with the other methods.
When Algorithms~\ref{alg:gaussian} and \ref{alg:mix} are combined with the \emph{Standard} loss (Eq.~\eqref{eq:gauss_std_mix}), performance is further enhanced. The best improvements on CIFAR10 and CIFAR100 are 1.3\% and 1.4\%, respectively. The superior results of our deep generative augmentation approach with decoupled representations show greater generalization potential.
\input{table_cifarSeries_acc}
\vspace{-3mm}
\paragraph{CIFAR10.1} On the OOD dataset CIFAR10.1, we also observe significant performance improvements from FlowAug over the baselines (up to +1\% better than the best of all other baselines). These results again demonstrate that FlowAug is more robust and generalizes better.
\vspace{-3mm}
\paragraph{CIFAR10-C and CIFAR100-C}
Aside from generalization on \emph{i.i.d.} data, we are interested in FlowAug's ability to generalize to out-of-distribution (OOD) data, which is another aspect of robustness. We use the last-epoch models from Table~\ref{tab:acc} to test on CIFAR10-C and CIFAR100-C. Although FlowAug does not explicitly add corruptions such as blurring or contrast changes to the training data, we observe performances (Fig.~\ref{fig:cifarc}) comparable with augmentation methods that have corruption effects, such as \emph{MixUp} and \emph{AutoAug}, and better than \emph{Cutout} and \emph{Cutmix} (+1.35\% and +1.26\%, respectively).
\input{cifar10-c_fig}
Our experiments on CIFAR10.1, CIFAR10-C, and CIFAR100-C show FlowAug's generalizability to OOD data and validate our direction of using deep decoupled representations for DA.
\subsection{Analysis}
Our two FlowAug algorithms (Algorithms~\ref{alg:gaussian} \& \ref{alg:mix}) both improve over the baselines, and combining the two yields even better results. Conceptually, the flow model $\mathcal{F}$ maps $X$ to a Gaussian prior distribution (cf. Eq.~\eqref{eq:encoder_decoder}), but not all points in the Gaussian distribution necessarily map back through $\mathcal{F}_{dec}$ to a realistic image. Intuitively, given that $z_1, z_2$ come from real images, interpolating $z_1, z_2$ as in Algorithm~\ref{alg:mix} can be interpreted as finding a point between two points known to decode well, i.e., Algorithm~\ref{alg:mix} explores the Gaussian space efficiently.
On the other hand, adding a sampled perturbation to $z$ as in Algorithm~\ref{alg:gaussian} can stretch the search space beyond the Gaussian, which yields good performance. This also explains why the combined approach of Eq.~\eqref{eq:gauss_std_mix} generally achieves superior performance, since Algorithms~\ref{alg:gaussian} and \ref{alg:mix} are complementary.
\vspace{-1mm}
\section{Ablation studies}
To further study the global and local representations, we examine design choices applied to $z$ and $\nu$. In \S~\ref{sec:flowaug}, we assume that $z$ and $\nu$ carry information about the background and the ground truth, respectively; here we test these assumptions and the generality of Algorithm~\ref{alg:gaussian}.
\vspace{-1mm}
\subsection{Perturbing Local ($\nu$) or Global ($z$)?} \label{sec:ablation_loc_v_global}
\S~\ref{sec:experiment} has shown that perturbing $z$ improves generalization and model robustness. On the other hand, we can also decode realistic images by perturbing $\nu$ (Fig.~\ref{fig:augmentations}(h)), which we assume affects the prediction. We apply Algorithms~\ref{alg:gaussian} \& \ref{alg:mix} with the same parameters to $\nu$, and the results deteriorate on all datasets by at least 0.5\% and up to 7\%, suggesting that our assumption about $\nu$'s correspondence to the ground truth label is reasonable.
\vspace{-1mm}
\subsection{Does perturbation type matter? A case study on Gaussian vs.\ Uniform.} Algorithm~\ref{alg:gaussian} uses a truncated Gaussian perturbation, but we can also add noise sampled from other distributions, such as a uniform perturbation. To have a comparable spread over the same range as $\mathcal{N}(\mu=0,\sigma=0.1)$, we choose $\mathcal{U}(-0.2,0.2)$ for our study. Table~\ref{tab:acc} shows that adding uniform noise is comparable to adding truncated Gaussian noise; when combined with Algorithm~\ref{alg:mix} and \emph{Standard}, the improvements are also similar, achieving over a 1\% gain on CIFAR10 and CIFAR100 and more than a 2\% gain on CIFAR10.1. This study suggests that FlowAug generalizes across symmetric noise distributions.
\section{Related Works}
\paragraph{Representation Learning.} Deep learning models' success is generally attributed to their ability to learn complex and meaningful representations~\cite{6472238}, and most attempts to learn quality representations require certain inductive biases, for instance, the spatial invariance of CNNs \cite{726791}. Of particular interest to our work, generative models such as VAEs \cite{Burgess2018UnderstandingDI,Mathieu2018DisentanglingD, Chen2018IsolatingSO, Ding2020GuidedVA} enforce constraints such as an independent multivariate Gaussian in the latent layers to learn disentangled representations. Our work leverages a model that learns two decoupled representations instead of factorial ones.
\vspace{-5mm}
\paragraph{Data Augmentation.}
Image DA often helps achieve improved generalization. One line of work performs low-level basic image operations such as mixing examples \cite{Zhang2018mixupBE} or random erasing \cite{Yun2019CutMixRS, Devries2017ImprovedRO}. Another uses reinforcement learning to learn the best policy over basic image operations \cite{Cubuk2018AutoAugmentLA, Cubuk2020RandaugmentPA}. \cite{Mao2021GenerativeIF} use causal inference to guide their method and add interventions during the generation process. Our work uses decoupled global representations to isolate spurious correlations and then learn robust correlations to the objects. We refer readers to \cite{Feng2021ASO} for a recent survey.
\vspace{-5mm}
\paragraph{Robustness.} Robustness issues in deep learning have drawn the attention of the community largely since \cite{Goodfellow2015ExplainingAH,Kurakin2017AdversarialML}. Since then, multiple lines of research have been proposed to study robustness, including Distributionally Robust Optimization \cite{duchi2020learning,10.2307/23359484}, Adversarial Training \cite{Madry2018TowardsDL, chen2021depois,DBLP:journals/corr/abs-2003-01908,NEURIPS2018_358f9e7b}, and certifiable bounds \cite{DBLP:journals/corr/abs-1906-06316,pmlr-v97-cohen19c}. \cite{Arjovsky2019InvariantRM} proposed a scenario where learned representations should be robust across different environments, and \cite{Scholkopf2021TowardCR, Scholkopf2022FromST} suggest that learning causal representations can be an ultimate approach to robustness in deep learning. Our work is in line with the idea of \cite{Arjovsky2019InvariantRM}.
\section{Conclusion and Future Work}
In this work, we contributed two datasets to study the effect of spurious correlations and proposed \textbf{FlowAug}, a semantic DA method using pretrained flow-based generative models. FlowAug is less sensitive to spurious correlations according to our datasets and proposed metrics, meaning FlowAug helps learn robust correlations. We also showed the potential of using disentangled representations for DA by achieving superior generalization performance on both ID and OOD datasets.
Additionally, our datasets can be combined with CIFAR10/100-C for further studies on robustness, and FlowAug can be used with other types of DA, such as AutoAug, to further boost performance.
We believe our work serves as a step toward advancing studies of robustness, fairness, and even causality in DNNs, as we can now quantify the effect of spurious correlations in a stronger setting, and we understand that higher-level DA is needed to overcome uneven subgroup performances. Last but not least, due to limited computing resources, we did not pre-train the flow model on larger datasets such as ImageNet~\cite{5206848}, but we believe doing so would further enhance performance and is a promising direction.
{\small
\bibliographystyle{ieee_fullname}
\section{Appendix}
\begin{table*}
\centering
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lcc}
\toprule
& CIFAR10-C & CIFAR100-C \\
\midrule
Baseline & 90.44/90.38 &67.84/67.84\\
Mixup & 92.71/92.54 &69.47/68.69\\
Cutout & 90.27/90.15 &65.32/65.29 \\
Cutmix & 90.31/90.23 &67.05/66.61 \\
AutoAug & 92.87/92.81 &71.8/71.25 \\
\midrule
Ours (Trunc Gaussian on $\nu$) & 90.77/90.77 & 68.03/67.93 \\
Ours (Mix $\nu$) & 87.94/87.13 & 63.15/60.06 \\
Ours (Mix $z$) & 90.79/90.42 &66.15/65.56 \\
+ Std & 90.45/90.47 &67.29/67.16 \\
Ours (Uniform on $z$) & 91.33/91.3 &69.36/69.38 \\
Ours (Trunc Gaussian on $z$) & 91.52/91.49 &69.24/69.25\\
+ Std & 91.1/91.27 &69.2/69.07 \\
+ Std + Mix $z$ & 91.31/91 &69.01/69.05\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{\textbf{CIFAR10-C and CIFAR100-C.}}
\label{tab:cifarc}
\end{table*}
\section{Introduction}
Modeling count data is an interesting topic in a variety of fields of
applied science, such as actuarial science, economics, sociology, and
engineering. In many practical situations the popular classical Poisson
regression model fails to fit count data which exhibit overdispersion
(i.e., the variance of the response variable exceeds its mean).
Moreover, the strict assumptions of the Poisson distribution make it
less applicable in situations where such assumptions cannot be
verified. The Negative Binomial distribution/regression has become
more and more popular as a flexible alternative to the Poisson
distribution/regression. In situations where the strict requirements
of the Poisson distribution cannot be verified, the Negative Binomial
distribution is an appropriate choice (Johnson et al., 2005). Moreover,
the Negative Binomial is an appropriate choice for overdispersed count
data that are not necessarily heavy--tailed (Aryuyuen \& Bodhisuwan,
2013).
For count data, overdispersed behavior arises {\it either} from
observing an excess of a single value beyond the number expected under
the model {\it or} from a target population consisting of several
sub-populations. k-Inflated models and mixture models are two popular
statistical approaches for dealing with overdispersion. Simar (1976)
and Laird (1978) were among the first authors to employ Poisson mixture
models to accommodate overdispersed behavior. Lambert (1992) considered
a zero-inflated Poisson regression model to take overdispersion into
account. Wedel et al. (1993), Br\"anna\"as \& Rosenqvist (1994),
Wang et al. (1996), Alf\'o \& Trovato (2004), and Wang et al. (1998),
among others, developed the idea of using a finite mixture Poisson
regression model to handle overdispersion.
Greene (1994) and Hall (2000) were pioneering authors who employed
zero-inflated Negative Binomial regression to model overdispersion.
The ordinary Negative Binomial distribution can be viewed as a mixture
of Poisson and gamma distributions (Simon, 1961). To handle
overdispersion, several extensions of the Negative Binomial
distribution have been introduced, for instance the Negative Binomial
exponential distribution (Panjer \& Willmot, 1981), the Negative
Binomial Pareto distribution (Meng et al., 1999), the Negative Binomial
Inverse Gaussian distribution (G\'omez-D\'eniz et al., 2008), the
Negative Binomial Lindley distribution (Zamani \& Ismail, 2010), the
Negative Binomial Beta Exponential distribution (Pudprommarat, 2012),
and the Negative Binomial Generalized Exponential distribution
(Pudprommarat et al., 2012).
In 2014, Lim et al. considered a k-Inflated Poisson mixture model
which simultaneously employs both the inflation and mixture
approaches to handle overdispersion. Moreover, Tzougas et al. (2014)
introduced a Negative Binomial mixture model for overdispersion. This
article follows Lim et al. (2014) and generalizes Tzougas et al.
(2014)'s findings. More precisely, it introduces a k-Inflated Negative
Binomial mixture distribution/regression. To show a practical
application of our findings, we consider the problem of designing an
optimal rate--making system; the premiums of such an optimal
rate--making system are then evaluated using the results of this
article.
This article is structured as follows. The k-Inflated Negative
Binomial mixture model, some of its properties, and an EM algorithm to
estimate its parameters are developed in Section 2. The Pareto mixture
regression model is given in Section 3. The application of the
k-Inflated Negative Binomial mixture model, along with a Pareto mixture
model, to the design of an optimal rate--making system is given in
Section 4. Section 5 employs our model, as well as other well-known
models, to evaluate rate, base, and pure premiums under a rate--making
system for an Iranian third party insurance dataset. Based upon three
comparison methods, Section 6 shows that our model provides more
accurate (in some sense) results. Section 7 employs the well-fitted
models to calculate rate and pure premiums under two different
scenarios. Conclusions and suggestions are given in Section 8.
\section{k-Inflated Negative Binomial mixture regression model}
The k-Inflated Negative Binomial mixture, say kINBM, distribution
arises by combining a weighted mixture of Negative Binomial
distributions with a single point mass at $k.$ The probability mass
function of a kINBM distribution is given by \footnotesize{
\begin{eqnarray}
\label{kINBM_distribution} P(Y=y|{\boldsymbol \theta}) &=&
p_1I_{\left(y=k\right)}+\sum_{j=2}^{m}p_j\left(\genfrac{}{}{0pt}{}{y+{\alpha }_j-1}{y}\right){\left(\frac{{\tau }_j}{{\alpha }_j+{\tau }_j}\right)}^{{\alpha }_j}{\left(\frac{{\alpha }_j}{{\alpha }_j+{\tau
}_j}\right)}^yI_{{\Bbb
N}}(y),
\end{eqnarray}}\normalsize
where $k\in{\Bbb N}$ and ${\boldsymbol\theta}$ stands for all $3m$
unknown parameters. Moreover, $\sum_{j=1}^{m}p_j=1$ and
$p_j,\alpha_j,\tau_j\geq0,$ for all $j=1,\cdots,m.$ By a
straightforward calculation, one may show that
\begin{eqnarray*}
E(Y) &=& p_1k+\sum_{j=2}^{m}p_j\alpha_j^2/\tau_j, \\
M_Y(t) &=& p_1e^{kt}+ \sum_{j=2}^{m}p_j\left(\frac{\tau_j}{\tau_j+\alpha_j(1-e^t)} \right)^{\alpha_j},\quad t\leq-\max\left\{\log\left(\frac{\alpha_j}{\alpha_j+\tau_j}\right):~j=2,\cdots,m\right\},\\
F_Y(r) &=& p_1I_{[k,\infty)}(r)+1-p_1-\sum_{j=2}^{m}p_jRIBeta_{\alpha_j/(\alpha_j+\tau_j)}(r+1;\alpha_j),
\end{eqnarray*}
where $RIBeta_x(a;b)=\frac{\mathit{\Gamma}(a+b)}{\mathit{\Gamma}(a)\mathit{\Gamma}(b)}\int_0^x
t^{a-1}(1-t)^{b-1}dt,$ for $a,b\geq0$ and $x\in[0,1],$ stands for the
regularized incomplete beta function.
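For concreteness, the mass function and mean of the kINBM distribution can be sketched numerically as follows; \texttt{scipy}'s Negative Binomial uses the parameterization above, with $n=\alpha_j$ and success probability $\tau_j/(\alpha_j+\tau_j)$.
\begin{verbatim}
# Sketch of the kINBM pmf and mean; p holds the weights p_2..p_m
# (so p1 + sum(p) = 1), alpha and tau the component parameters.
import numpy as np
from scipy.stats import nbinom

def kinbm_pmf(y, k, p1, p, alpha, tau):
    nb = sum(pj * nbinom.pmf(y, aj, tj / (aj + tj))
             for pj, aj, tj in zip(p, alpha, tau))
    return p1 * (y == k) + nb

def kinbm_mean(k, p1, p, alpha, tau):
    return p1 * k + sum(pj * aj ** 2 / tj
                        for pj, aj, tj in zip(p, alpha, tau))
\end{verbatim}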
It is well known that a Negative Binomial distribution can be
obtained by mixing Poisson and gamma distributions (Simon, 1961).
The following corollary generalizes this fact to the kINBM
distribution.
\begin{corollary}\label{kINBM_Corollary}
Suppose the random variable $Y,$ given parameter $\Lambda=\lambda,$
is distributed according to a k-Inflated Poisson
distribution with probability mass function
$P(Y=y|\lambda)=pI_{\{k\}}(y)+q\exp(-\lambda)\lambda^y/y!,$
where $p~\&~q\in[0,1]$ and $p+q=1.$ Moreover, suppose that the
parameter $\lambda$ is distributed according to a finite
mixture gamma distribution
$f_{\Lambda}(\lambda)=\sum^m_{j=1}{{\varphi }_j{\lambda }^{{\alpha
}_j-1}{{\tau }_j}^{{\alpha }_j}e^{-{\tau }_j\lambda
}/\mathit{\Gamma}({\alpha }_j)},$ where, for all $j=1,\cdots,m,$
${\varphi}_j\in[0,1]$ and $\sum^m_{j=1}{{\varphi }_j}=1.$ Then, the
unconditional distribution of $Y$ has a kINB finite mixture
distribution with probability mass function
\begin{equation}\label{EQ_kINBM_Corollary}
P\left(Y=y\right)=pI_{\{k\}}(y)+q\sum^m_{j=1}{{\varphi
}_j\left(\genfrac{}{}{0pt}{}{y+{\alpha
}_j-1}{y}\right){(\frac{{\tau }_j}{1+{\tau }_j})}^{{\alpha
}_j}{(\frac{1}{1+{\tau }_j})}^y}.
\end{equation}
\end{corollary}
For practical application, in Equation \eqref{EQ_kINBM_Corollary},
we set $q:=\sum_{s=1}^{m}\omega_s$ and
${\varphi}_j:=\omega_j/\sum_{s=1}^{m}\omega_s.$
Now, to formulate a kINBM regression model, suppose that for the
$i^{\hbox{th}}$ individual, information on count response
variables $Y_{i1},\cdots, Y_{it}$ along with information on $p$
covariates $X_1,\cdots,X_p$ is available. Also suppose that
$Y_{il},$ given parameter $\Lambda_{il}=\lambda_{il},$ is
distributed according to a k-Inflated Poisson distribution with
probability mass function
$P(Y_{il}=y_{il}|\lambda_{il})=pI_{\{k\}}(y_{il})+q\exp(-\lambda_{il})\lambda_{il}^{y_{il}}/y_{il}!,$
where $p~\&~q\in[0,1]$ and $p+q=1.$ Moreover, suppose that the
parameter $\lambda_{il}$ can be modeled by the following
regression model
\begin{equation*}
\log\left({\lambda}_{il}\right)=\beta_{0il}+\sum_{k=1}^{p}\beta_{kil}x_{ik}+\epsilon_i,
\end{equation*}
where $\beta_{0il},\cdots,\beta_{pil}$ are regression coefficients
and $u_i=\exp(\epsilon_i)$ is distributed according to a
finite mixture gamma distribution with density function
\begin{equation}\label{density_finite_mixture_gamma}
f_{U_i}(u_i)=\sum^m_{j=2}{{\varphi }_j\frac{{u_i}^{{\alpha
}_j-1}{{\alpha
}_j}^{{\alpha}_j}e^{-{\alpha}_ju_i}}{\mathit{\Gamma}({\alpha
}_j)}},
\end{equation}
where $\sum^m_{j=2}{\varphi }_j=1$ and ${\alpha}_j\geq0$. Setting
both parameters of each gamma component equal ensures $E(u_i)=1,$
so that the random effect does not shift the mean of the response.
Using the law of total probability and setting
$d_{il}:=\exp\left(\beta_{0il}+\sum_{k=1}^{p}\beta_{kil}x_{ik}\right),$ one may
show that
\begin{eqnarray*}
P(Y_{il}=y_{il}|\theta) &=&
\int_{0}^{\infty}P(Y_{il}=y_{il}|\theta,u_i)f_{U_i}(u_i)du_i\\
&=&pI_{\{k\}}(y_{il})+\sum^m_{j=2}q{\varphi }_j\dfrac{d_{il}^{y_{il}}\alpha_j^{\alpha_j}}{y_{il}!\Gamma(\alpha_j)}\int_{0}^{\infty}e^{-(d_{il}+\alpha_j)u_i}u_i^{y_{il}+\alpha_j-1} du_i\\
&=&pI_{\{k\}}(y_{il})+\sum^m_{j=2}q{\varphi
}_j\frac{\Gamma(y_{il}+\alpha_j)}{y_{il}!\Gamma(\alpha_j)}\dfrac{d_{il}^{y_{il}}\alpha_j^{\alpha_j}}{(d_{il}+\alpha_j)^{y_{il}+\alpha_j}},
\end{eqnarray*}
where $\theta$ stands for all unknown parameters. Now, by setting
$p=1/(1+\sum^m_{s=2}{e^{\omega_s}})$ and
$q{\varphi}_j=e^{\omega_j}/(1+\sum^m_{s=2}{e^{\omega_s}}),$ for
$j=2,\dots,m,$ and writing
$\left(\genfrac{}{}{0pt}{}{y_{il}+{\alpha}_j-1}{y_{il}}\right):=\frac{\Gamma(y_{il}+\alpha_j)}{y_{il}!\Gamma(\alpha_j)},$
the kINBM regression model can be restated as
\begin{eqnarray}
\label{kINBM_Regression} \displaystyle
P\left(Y_{il}=y_{il}\right)
&=&\frac{1}{1+\displaystyle\sum^m_{j=2}{e^{\omega_j}}}I_{\{k\}}(y_{il})\\
\nonumber
&&+\sum^m_{j=2}{\frac{e^{\omega_j}}{1+\displaystyle\sum^m_{s=2}{e^{\omega_s}}}\left(\genfrac{}{}{0pt}{}{y_{il}+{\alpha}_j-1}{y_{il}}\right){t_{ilj}}^{{\alpha}_j}{\left(1-t_{ilj}\right)}^{y_{il}}},
\end{eqnarray}
where $t_{ilj}:=\alpha_j/(\alpha_j+\exp\{X_iB_{il}\})$ and
$X_iB_{il}:=\beta_{0il}+\sum_{k=1}^{p}\beta_{kil}x_{ik}.$
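A direct numerical sketch of the mass function \eqref{kINBM_Regression} reads as follows; the array shapes are illustrative assumptions.
\begin{verbatim}
# Sketch of Eq. (kINBM_Regression): softmax weights over
# (0, omega_2, ..., omega_m) and NB components with mean exp(x_i' B_j).
import numpy as np
from scipy.stats import nbinom

def kinbm_reg_pmf(y, k, omega, alpha, x_row, B):
    # omega, alpha: (m-1,); B: (m-1, p+1); x_row: (p+1,), first entry 1.
    w = np.exp(np.concatenate([[0.0], omega]))
    w = w / w.sum()                 # (p, q*phi_2, ..., q*phi_m)
    t = alpha / (alpha + np.exp(B @ x_row))
    return w[0] * (y == k) + float(np.sum(w[1:] * nbinom.pmf(y, alpha, t)))
\end{verbatim}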
\subsection*{Parameters estimation}
All unknown parameters of the kINBM regression model
\eqref{kINBM_Regression} can be collected as
${\boldsymbol\theta}:=({\boldsymbol\omega},{\boldsymbol\alpha},
{\boldsymbol B}).$ To obtain the maximum likelihood estimator, say
MLE, of ${\boldsymbol\theta},$ one may employ an EM
algorithm. In the statistical literature, the EM algorithm is a
well-known and practical method for obtaining the maximum likelihood
estimators of the parameters of an arbitrary finite mixture model
(McLachlan \& Krishnan, 1997). Now suppose that the number of
components, $m,$ is given, and let
${\boldsymbol v}_i=(v_{i1},\cdots,v_{im})$ stand for the latent
vector of component indicator variables, where, for $i=1,\cdots,n$
and $j=1,\cdots,m,$ $v_{ij}=1$ whenever observation $i$ comes from
the $j^{\hbox{th}}$ component and $v_{ij}=0$ otherwise. In other
words, each observation is assumed to come from one of the $m$
components, but the component it belongs to is unobservable and
therefore treated as missing data.
Now, using the Multinomial distribution for the unobservable vector
${\boldsymbol v}_i,$ the complete data
loglikelihood function for the kINBM regression model can be
written as follows; see Rigby \& Stasinopoulos (2009) for updated
information.\footnotesize{
\begin{eqnarray}\label{complete_loglikelihood}
l_c\left({\boldsymbol\theta}|y_i,{\boldsymbol
v}_i,X_i\right) &=& \sum^n_{i=1}v_{i1}\mathrm{log}
\left(\frac{1}{1+\sum^m_{l=2}\exp\left(\omega_l\right)}\right)I_{\left(y_i=k\right)}\\
\nonumber&&+\sum^n_{i=1}\sum^m_{j=2}v_{ij}\mathrm{log}
\left(\frac{\exp\left(\omega_j\right)}{1+\sum^m_{l=2}{\exp\left(\omega_l\right)}}\left(\genfrac{}{}{0pt}{}{y_i+\alpha_{j}-1}{y_i}\right){t_j}^{\alpha_j}{\left(1-t_j\right)}^{y_i}\right),
\end{eqnarray}}\normalsize
where
${\boldsymbol\theta}:=({\boldsymbol\omega},{\boldsymbol\alpha},
{\boldsymbol B})$ stands for all unknown parameters,
$t_j:=\alpha_j/(\alpha_j+\exp\{X_iB_j\}),$ and
$X_iB_j:=\beta_{0j}+\sum_{k=1}^{p}\beta_{kj}x_{ik}.$
The EM algorithm employs the following two steps to maximize the
above loglikelihood function.
\begin{description}
\item[E-step:] In this step, using the data along with the current estimates ${\widehat{\boldsymbol\theta}}^{(r)}:=({\widehat{\boldsymbol\omega}}^{(r)},{\widehat{\boldsymbol\alpha}}^{(r)},
{\widehat{\boldsymbol B}}^{(r)})$ obtained from the
$r^{\hbox{th}}$ iteration, the probability ${\hat{v}}_{ij}$
is estimated. At the $(r+1)^{\hbox{th}}$ iteration, this probability can be
stated as\footnotesize{
\begin{eqnarray*}
\hat{v}_{ij}^{(r+1)} &=&
E\left[v_{ij}|y_i,X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right]=1\times
P\left(v_{ij}=1|y_i,X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right)+0\times
P\left(v_{ij}=0|y_i,X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right)\\
&=&\frac{f\left(y_i|v_{ij}=1,X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right)P\left(v_{ij}=1|X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right)}{f\left(y_i|X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right)}\\
&=&\frac{\exp\left({\hat{\omega}}^{\left(r\right)}_j\right)\left(\genfrac{}{}{0pt}{}{y_i+{\widehat{\alpha
}}^{\left(r\right)}_j-1}{y_i}\right){\left({\widehat{\boldsymbol
t}}^{(r)}_j\right)}^{{\widehat{\alpha
}}^{\left(r\right)}_j}{\left(1-{\widehat{\boldsymbol
t}}^{(r)}_j\right)}^{y_i}}{I_{\left(y_i=k\right)}+\sum^m_{s=2}{\exp\left({\hat{\omega}}^{\left(r\right)}_s\right)\left(\genfrac{}{}{0pt}{}{y_i+{\widehat{\alpha
}}^{\left(r\right)}_s-1}{y_i}\right){\left({\widehat{\boldsymbol
t}}^{(r)}_s\right)}^{{\widehat{\alpha
}}^{\left(r\right)}_s}{\left(1-{\widehat{\boldsymbol
t}}^{(r)}_s\right)}^{y_i}}},\quad j=2,\cdots,m,
\end{eqnarray*}}\normalsize
where ${\widehat{\boldsymbol
t}}^{(r)}_j:={\widehat\alpha}^{(r)}_j/({\widehat\alpha}^{(r)}_j+\exp\{X_i{\widehat
B}^{(r)}_j\})$ and $X_i{\widehat
B}^{(r)}_j:={\widehat\beta}^{(r)}_{0j}+\sum_{k=1}^{p}{\widehat\beta}^{(r)}_{kj}x_{ik}.$
The responsibility of the inflation component, ${\hat{v}}_{i1}^{(r+1)},$
is obtained analogously by replacing the numerator with
$I_{\left(y_i=k\right)}.$
\item[M-step:] Given the probabilities $\hat{v}_{ij},$ this step
maximizes, at the $(r+1)^{\hbox{th}}$ iteration, the following expected loglikelihood $Q(\cdot)$ with respect
to ${\boldsymbol\theta}:=({\boldsymbol\omega},{\boldsymbol\alpha},
{\boldsymbol B}).$\footnotesize{
\begin{eqnarray*}
\boldsymbol{Q} &=&
E\left[l_c|y_i,X_i,{\widehat{\boldsymbol\theta}}^{(r)}\right]\\
&=& \sum^n_{i=1}{{\hat{v}}_{i1}}^{\left(r+1\right)}\log\left(\frac{1}{1+\sum^m_{l=2}{\exp\left(\omega_l\right)}}\right)I_{\left(y_i=k\right)}+\sum^n_{i=1}\sum^m_{l=2}{{\hat{v}}_{il}}^{\left(r+1\right)}\log\left(\frac{\exp\left(\omega_l\right)}{1+\sum^m_{s=2}{\exp\left(\omega_s\right)}}\right)\\
&&+ \sum^n_{i=1}\sum^m_{l=2}{{\hat{v}}_{il}}^{\left(r+1\right)}\log\left(\left(\genfrac{}{}{0pt}{}{y_i+{\alpha}_l-1}{y_i}\right){\left(\frac{{\alpha}_l}{{\alpha}_l+\exp(X_iB_l)}\right)}^{{\alpha}_l}{\left(\frac{\exp(X_iB_l)}{{\alpha}_l+\exp(X_iB_l)}\right)}^{y_i}\right)\\
&=:& Q_1+Q_2.
\end{eqnarray*}}\normalsize
The updated parameters
${\widehat{\boldsymbol\theta}}^{(r+1)}:=({\widehat{\boldsymbol\omega}}^{(r+1)},{\widehat{\boldsymbol\alpha}}^{(r+1)},
{\widehat{\boldsymbol B}}^{(r+1)})$ are obtained by solving
the following equations.\footnotesize{
\begin{eqnarray*}
\frac{\partial Q_1}{\partial \omega_j} &=& \sum^n_{i=1}\frac{\partial }{\partial\omega_j}{{\hat{v}}_{i1}}^{\left(r+1\right)}\log\left(\frac{1}{1+\sum^m_{l=2}{\exp\left(\omega_l\right)}}\right)I_{\left(y_i=k\right)}+\sum^n_{i=1}\frac{\partial}{\partial\omega_j}{{\hat{v}}_{ij}}^{\left(r+1\right)}\log\left(\frac{\exp\left(\omega_j\right)}{1+\sum^m_{l=2}{\exp\left(\omega_l\right)}}\right)=0; \\
\frac{\partial Q_2}{\partial B_j} &=& \sum^n_{i=1}\frac{\partial}{\partial B_j}{{\hat{v}}_{ij}}^{\left(r+1\right)}\log\left(\left(\genfrac{}{}{0pt}{}{y_i+{\alpha}_j-1}{y_i}\right){\left(\frac{{\alpha }_j}{{\alpha}_j+\exp(X_iB_j)}\right)}^{{\alpha}_j}{\left(\frac{\exp(X_iB_j)}{{\alpha}_j+\exp(X_iB_j)}\right)}^{y_i}\right)=0; \\
\frac{\partial Q_2}{\partial {\alpha}_j} &=& \sum^n_{i=1}\frac{\partial}{\partial \alpha_j}{{\hat{v}}_{ij}}^{\left(r+1\right)}\log\left(\left(\genfrac{}{}{0pt}{}{y_i+{\alpha}_j-1}{y_i}\right){\left(\frac{{\alpha }_j}{{\alpha}_j+\exp(X_iB_j)}\right)}^{{\alpha}_j}{\left(\frac{\exp(X_iB_j)}{{\alpha}_j+\exp(X_iB_j)}\right)}^{y_i}\right)=0.
\end{eqnarray*}}\normalsize
Since the above three equations cannot be solved explicitly, the
updated parameters are obtained using the following
Iteratively Reweighted Least Squares, say IRLS, method.
\begin{eqnarray*}
{\hat{\omega}}^{\left(r+1\right)}_j &=& {\hat{\omega}}^{\left(r\right)}_j+{\left(E\left(\frac{-{\partial }^2Q_1}{\partial
{\omega_j}^2}\right)\right)}^{-1}.\frac{\partial Q_1}{\partial \omega_j}; \\
{\hat{B}}^{\left(r+1\right)}_j &=& {\hat{B}}^{\left(r\right)}_j+{\left(E\left(\frac{-{\partial }^2Q_2}{\partial
{B_j}^2}\right)\right)}^{-1}.\frac{\partial Q_2}{\partial B_j};\\
{\widehat{\alpha }}^{\left(r+1\right)}_j &=& {\widehat{\alpha
}}^{\left(r\right)}_j+{\left(E\left(\frac{-{\partial }^2Q_2}{\partial {{\alpha
}_j}^2}\right)\right)}^{-1}.\frac{\partial Q_2}{\partial {\alpha
}_j}.
\end{eqnarray*}
In the IRLS method,
$E\left(-{\partial }^2\left(\log \mathrm{likelihood}\right)/\partial
{\mathrm{parameter}}^{2}\right)$ can be viewed as the
Fisher information matrix and $\partial \left(\log
\mathrm{likelihood}\right)/\partial \mathrm{parameter}$ as the
score function.
\end{description}
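To make the E-step concrete, a vectorized sketch of the posterior component probabilities is given below; it follows the expressions above (including the indicator $I_{\left(y_i=k\right)}$ in the common denominator) and treats the fitted component means as given inputs.
\begin{verbatim}
# Sketch of one E-step of the kINBM EM algorithm.
import numpy as np
from scipy.stats import nbinom

def e_step(y, k, omega, alpha, mu):
    # y: (n,); omega, alpha: (m-1,); mu[i, j] = exp(X_i B_j): (n, m-1).
    t = alpha / (alpha + mu)
    comp = np.exp(omega) * nbinom.pmf(y[:, None], alpha, t)  # (n, m-1)
    infl = (y == k).astype(float)          # unit mass at the point k
    denom = infl + comp.sum(axis=1)
    v_hat = np.hstack([(infl / denom)[:, None], comp / denom[:, None]])
    return v_hat                           # rows sum to one
\end{verbatim}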
After updating the parameter estimates to
${\widehat{\boldsymbol\theta}}^{(r+1)}:=({\widehat{\boldsymbol\omega}}^{(r+1)},{\widehat{\boldsymbol\alpha}}^{(r+1)},
{\widehat{\boldsymbol B}}^{(r+1)}),$ the complete data
loglikelihood at the $(r+1)^{\hbox{th}}$ iteration is given
by\footnotesize{
\begin{eqnarray*}
l^{\left(r+1\right)}_c&=&\sum^n_{i=1}{{{\hat{v}}_{i1}}^{\left(r+1\right)}{\mathrm{log} \left(\frac{1}{1+\sum^m_{l=2}{\exp\left({\hat{\omega}}^{\left(r+1\right)}_l\right)}}\right)\
}I_{\left(y_i=k\right)}}\\
&&+\sum^n_{i=1}{\sum^m_{j=2}{{{\hat{v}}_{ij}}^{\left(r+1\right)}{\mathrm{log}
\left(\frac{\exp\left({\hat{\omega}}^{\left(r+1\right)}_j\right)}{1+\sum^m_{l=2}{\exp\left({\hat{\omega}}^{\left(r+1\right)}_l\right)}}\left(\genfrac{}{}{0pt}{}{y_i+{\widehat{\alpha
}}^{\left(r+1\right)}_j-1}{y_i}\right){\left({\widehat{\boldsymbol
t}}^{(r+1)}_j\right)}^{{\widehat{\alpha
}}^{\left(r+1\right)}_j}{\left(1-{\widehat{\boldsymbol
t}}^{(r+1)}_j\right)}^{y_i}\right)\ }}},
\end{eqnarray*}}\normalsize
where ${\widehat{\boldsymbol
t}}^{(r+1)}_j:={\widehat\alpha}^{(r+1)}_j/({\widehat\alpha}^{(r+1)}_j+\exp\{X_i{\widehat
B}^{(r+1)}_j\}),$ and $X_i{\widehat
B}^{(r+1)}_j:={\widehat\beta}^{(r+1)}_{0j}+\sum_{k=1}^{p}{\widehat\beta}^{(r+1)}_{kj}x_{ik}.$
The E-step then re-estimates the $v_{ij}$'s using these updated
parameters. This loop is repeated until the difference
$\left|l^{\left(r+1\right)}_c-l^{\left(r\right)}_c\right|$ falls
below a prescribed tolerance, i.e., until convergence.
It is worthwhile to mention that, since the regression coefficients
${\boldsymbol B}=(\beta_{0},\cdots,\beta_{p})^\prime$ are
estimated by maximum likelihood, the number of mixture
components impacts these estimators.
\section{Pareto mixture regression model}
The Pareto mixture distribution arises by combining $m$ weighted
Pareto distributions. The density function of a Pareto
mixture distribution is given by \footnotesize{
\begin{eqnarray}
\label{Pareto_mixture_distribution} f_{Z}(z|{\boldsymbol
\vartheta}) &=&
\sum_{j=1}^{m}\rho_j\alpha_j\frac{\gamma_j^{\alpha_j}}{(z+\gamma_j)^{\alpha_j+1}},
\end{eqnarray}}\normalsize
where ${\boldsymbol\vartheta}=({\boldsymbol\rho},
{\boldsymbol\alpha}, {\boldsymbol\gamma})$ stands for all $3m$
unknown parameters. Moreover, $\sum_{j=1}^{m}\rho_j=1$ and
$\rho_j,\alpha_j\geq0,$ for all $j=1,\cdots,m.$ More details on
this distribution can be found in Tzougas et al. (2014).
Tzougas et al. (2014) showed that a Pareto mixture distribution
can be obtained by mixing exponential and inverse gamma
distributions.
Now, to formulate a Pareto mixture regression model, suppose that
for the $i^{\hbox{th}}$ individual, information on continuous
response variables $Z_{i1},\cdots, Z_{it}$ along with information
on $p$ covariates $W_1,\cdots,W_p$ is available. Also suppose
that $Z_{il},$ given parameter $\Theta_{il}=\theta_{il},$ is
distributed according to an exponential distribution with density
function
$f_{Z_{il}|\Theta_{il}=\theta_{il}}(z_{il})=\exp\{-z_{il}/\theta_{il}\}/\theta_{il}.$
Moreover, suppose that the parameter $\theta_{il}$
can be modeled by the following regression model
\begin{equation*}
{\mathrm{log} \left({\theta}_{il}\right)=\
}d_{0il}+\sum_{k=1}^{p}d_{kil}w_{ki}+\epsilon_i,
\end{equation*}
where $d_{0il},\cdots,d_{pil}$ are regression coefficients and
$u_i=\exp(\epsilon_i)$ is distributed according to a finite
mixture inverse gamma distribution with density function
\begin{equation}\label{density_finite_mixture_Inverse_gamma}
f_{U_i}(u_i)=\sum^m_{j=1}\rho_j\frac{(\alpha_j-1)^{\alpha_j}u_i^{-\alpha_j-1}}{\Gamma(\alpha_j)}e^{-(\alpha_j-1)/u_i},
\end{equation}
where $\sum^m_{j=1}\rho_j=1$ and $\rho_j,~{\alpha}_j\geq0$. Setting
$\gamma_j=\alpha_j-1,$ for $j=1,\cdots,m,$ in Equation
\eqref{Pareto_mixture_distribution} ensures $E(u_i)=1,$ so that the
random effect does not shift the mean.
Using the law of total probability and setting
$b_{il}:=d_{0il}+\sum_{k=1}^{p}d_{kil}w_{ki},$ one may show that
\begin{eqnarray*}
f_{Z_{il}|\vartheta}(z_{il}) &=&
\int_{0}^{\infty}f_{Z_{il}|\vartheta,u_i}(z_{il})f_{U_i}(u_i)du_i\\
&=& \int_{0}^{\infty} e^{-z_{il}\exp\{-b_{il}\}/u_i}\exp\{-b_{il}\}u_i^{-1}\sum^m_{j=1}\rho_j\frac{(\alpha_j-1)^{\alpha_j}u_i^{-\alpha_j-1}}{\Gamma(\alpha_j)}e^{-(\alpha_j-1)/u_i} du_i\\
&=&\sum_{j=1}^{m}\rho_j\alpha_je^{-b_{il}}\frac{(\alpha_j-1)^{\alpha_j}}{(z_{il}e^{-b_{il}}+\alpha_j-1)^{\alpha_j+1}}.
\end{eqnarray*}
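Numerically, the mixture densities above can be sketched as follows; the second function includes the scale factor $e^{-b_{il}}$ that appears in the last display.
\begin{verbatim}
# Sketch of the Pareto mixture density (gamma_j = alpha_j - 1) and its
# regression version with linear predictor b_il; rho, alpha are arrays.
import numpy as np

def pareto_mix_pdf(z, rho, alpha):
    g = alpha - 1.0
    return float(np.sum(rho * alpha * g ** alpha
                        / (z + g) ** (alpha + 1.0)))

def pareto_mix_reg_pdf(z, rho, alpha, b_il):
    s = np.exp(-b_il)                     # scale adjustment exp(-b_il)
    g = alpha - 1.0
    return float(np.sum(rho * alpha * s * g ** alpha
                        / (z * s + g) ** (alpha + 1.0)))
\end{verbatim}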
As with the kINBM regression/distribution, the maximum likelihood
estimators of the parameters of a Pareto mixture
regression/distribution can be obtained using the EM algorithm.
Fortunately, Rigby \& Stasinopoulos (2001) developed an R package,
named `GAMLSS', for this purpose; see Rigby \& Stasinopoulos
(2001, 2009) for more details.
\section{Application to posteriori rate--making system}
The rate--making system is a non-life actuarial system which rates
policyholders based upon their last $t$ years of records (Payandeh
Najafabadi et al., 2015). A rate--making system based upon
policyholders' characteristics assigns a priori premium to each
policyholder. It then employs the last $t$ years of claim
experience of each insured to update this priori premium and
provide a posteriori premium (Boucher \& Inoussa, 2014). The
Bonus--Malus system is a commercial and practical version of the
rate--making system which takes into account the current year's
policyholder experience to determine the next year's premium.
Rate--making systems (and Bonus--Malus systems) have received
considerable attention from authors. For instance: several
mathematical tools for pricing a rate--making system were provided
by Lange (1969). Dionne \& Vanasse (1989, 1992) employed available
asymmetric information under Poisson and Negative Binomial regression
models to determine the premium of a rate--making system. In 1995,
Lemaire designed an optimal Bonus--Malus system based on the Negative
Binomial distribution. Pinquet (1997) considered Poisson and
Lognormal distributions to design an optimal Bonus--Malus system.
Walhin \& Paris (1999) considered a Hofmann distribution along with a
finite mixture Poisson distribution to evaluate the elements of a
Bonus--Malus system. The relative premium of a rate--making system
under the exponential loss function was evaluated by Denuit \& Dhaene
(2001). In 2001, Frangos \& Vrontos designed an optimal Bonus--Malus
system using both Pareto and Negative Binomial distributions. Using a
bivariate Poisson regression model, Berm\'udez \& Morata (2009)
studied a priori rate--making procedure for an automobile insurance
database which has two different types of claims. In 2011, Berm\'udez
\& Karlis employed a Bayesian multivariate Poisson model to determine
the premium of a rate--making system which has a non-ignorable
correlation between the types of its claims. Boucher \& Inoussa
(2014) introduced a new model to determine the premium of a
rate--making system whenever panel or longitudinal data are
available. The Sichel distribution along with a Negative Binomial
distribution have been considered by Tzougas \& Frangos (2013, 2014a,
2014b). Tzougas et al. (2014) employed a finite mixture distribution
to model the frequency and severity of accidents. Payandeh
Najafabadi et al. (2015) employed Payandeh Najafabadi (2010)'s idea
to determine the credibility premium for a rate--making system
whenever the number of reported claims is distributed according to a
zero-inflated Poisson distribution. Several authors have employed
zero-inflated models in actuarial science; see, for instance, Yip \&
Yau (2005), Boucher et al. (2009), Boucher \& Denuit (2008), and
Boucher et al. (2007), among others.
Under a rate--making system, the pure premium of the $i^{\hbox{th}}$
policyholder at the $(t+1)^{\hbox{th}}$ year is estimated as the
product of the estimated base premium, say ${\hat BP}(t+1),$
and the corresponding estimated rate premium, say ${\hat Rate}(t+1).$
From a decision theory point of view, the Bayes estimator offers an
intellectually acceptable estimator for both the rate premium
$Rate(t+1)$ and the base premium $BP(t+1).$ Such Bayes estimators,
under the quadratic loss function, are given by the posterior
expectation of the risk parameters given the number and severity of
reported claims in the first $t+1$ years; see Denuit et al. (2007)
for more details.
Therefore, to determine the premium of the $i^{\hbox{th}}$ policyholder
under a rate--making system, one has to determine both Bayes
estimators. The following two theorems develop such estimators.
Namely, the first theorem supposes that the numbers of reported
claims $Y_1,\cdots,Y_t,$ given the risk parameter
$\Lambda_i=\lambda_i,$ are distributed according to a
k-Inflated Poisson distribution and that the risk parameter $\Lambda_i$
is distributed as a finite mixture gamma. The second theorem
supposes that the claim size random variables $Z_1,\cdots,Z_t,$ given
the risk parameter $\Theta_i=\theta_i,$ are distributed according
to an exponential distribution and that the risk parameter $\Theta_i$
is distributed as a finite mixture inverse gamma. The theorems then
derive the Bayes estimators of the risk parameters $\lambda_i$ and
$\theta_i.$
\begin{theorem}
\label{Bayes_estimator_rate_premium} Suppose that for the
$i^{\hbox{th}}$ policyholder, the numbers of reported claims in the
last $t$ years are denoted by ${\bf
Y}_{i}=(Y_{i1},\cdots,Y_{it}).$ Also suppose that, for
$l=1,\cdots,t,$ $Y_{il},$ given parameter
$\Lambda_{il}=\lambda_{il},$ is distributed according to a
k-Inflated Poisson distribution with probability mass function
$P(Y_{il}=y_{il}|\lambda_{il})=pI_{\{k\}}(y_{il})+q\exp(-\lambda_{il})\lambda_{il}^{y_{il}}/y_{il}!,$
where $p~\&~q\in[0,1]$ and $p+q=1.$ Moreover, suppose that the risk
parameter $\Lambda_i$ can be restated via the regression model
$\log({\lambda}_{il})=C_{i}B_{il}+{\epsilon}_i,$ where ${\bf
C}_{i}=(1,c_{i,1},\dots,c_{i,p})$ is the vector of $p$
characteristics/covariates of the $i^{\hbox{th}}$ policyholder,
$B_{il}=(\beta_{0il},\cdots,\beta_{pil})^\prime$ is the vector of
regression coefficients, and $u_i=\exp(\epsilon_i)$ is
distributed according to a finite mixture gamma distribution with
density function $f_{U_i}(u_i)=\sum^m_{j=1}{{\varphi
}_j{{u_i}^{{\alpha }_j-1}{{\alpha }_j}^{{\alpha }_j}e^{-{\alpha
}_ju_i}}/{\mathit{\Gamma}({\alpha }_j)}},$ where $u_i>0$, ${\alpha
}_j>0$ and $\sum^m_{j=1}{{\varphi }_j}=1$. Then, the Bayes estimator
of the rate premium $\widehat{Rate}_i(t+1)$ of the
$i^{\hbox{th}}$ policyholder at the $(t+1)^{\hbox{th}}$ year is given
by \footnotesize{
\begin{equation} \label{Eq_Bayes_Rate_Premium} \widehat{ Rate}_i(t+1)=e^{C_{i}B_{it+1}}\frac{\int^{\infty
}_0{u_i\prod^t_{l=1}{h_{il}(u_i)}}\sum^m_{j=1}k_{j}(u_i)du_i}{\int^{\infty
}_0{\prod^t_{l=1}{h_{il}(u_i)}}\sum^m_{j=1}k_{j}(u_i)du_i},
\end{equation}}\normalsize
where $k_{j}(u_i):={{\varphi }_j{{u_i}^{{\alpha }_j-1}{{\alpha
}_j}^{{\alpha }_j}e^{-{\alpha }_ju_i}}/{\mathit{\Gamma}({\alpha
}_j)}},$ ${\mathit{\Gamma}(\cdot)}$ stands for the Gamma function,
and
$h_{il}(u_i):=pI_{\left(y_{il}=k\right)}+qe^{-\exp(C_{i}B_{il})u_i}{\left(\exp(C_{i}B_{il})u_i\right)}^{y_{il}}/y_{il}!.$
\end{theorem}
{\it Proof.} The Bayes estimator of the rate premium
$Rate_i(t+1),$ under the quadratic loss function, is the mean of the
posterior distribution of $\Lambda_{it+1}$ given $({\bf Y}_{i}, {\bf
C}_{i}).$ Writing $\Lambda_{il}=e^{C_{i}B_{il}}u_i,$ the posterior
density of $u_i$ can be restated as
\begin{eqnarray*}
f_{U_i|({\bf Y}_{i}, {\bf C}_{i})}(u_i) &=&
\dfrac{\prod_{l=1}^{t}P(Y_{il}=y_{il}|\Lambda_{il}=e^{C_{i}B_{il}}u_i)f_{U_i}(u_i)}{\int_{0}^{\infty}\prod_{l=1}^{t}P(Y_{il}=y_{il}|\Lambda_{il}=e^{C_{i}B_{il}}u_i)f_{U_i}(u_i)du_i}\\
&=&\dfrac{\prod^t_{l=1}{h_{il}(u_i)}\sum^m_{j=1}k_{j}(u_i)}{\int_{0}^{\infty}\prod^t_{l=1}{h_{il}(u_i)}\sum^m_{j=1}k_{j}(u_i)du_i}.
\end{eqnarray*}
Now the desired result arrives from
$$\widehat{Rate}_i(t+1)=\int_{0}^{\infty}e^{C_{i}B_{it+1}}u_if_{U_i|({\bf
Y}_{i}, {\bf C}_{i})}(u_i)du_i.~~\square$$
In the situation where $q=1,$ the rate premium ${\widehat{Rate}}_i(t+1)$
can be restated as
\begin{eqnarray*}
\widehat{ Rate}_i(t+1) &=&
e^{C_{i}B_{it+1}}\dfrac{\sum^m_{j=1}{\varphi}_j\alpha_j^{\alpha_j}\Gamma(\alpha_j+y_{i.}+1)/\left(\Gamma(\alpha_j)(\alpha_j+\sum_{l=1}^te^{C_{i}B_{il}})^{\alpha_j+y_{i.}+1}\right)}{\sum^m_{j=1}{\varphi}_j\alpha_j^{\alpha_j}\Gamma(\alpha_j+y_{i.})/\left(\Gamma(\alpha_j)(\alpha_j+\sum_{l=1}^te^{C_{i}B_{il}})^{\alpha_j+y_{i.}}\right)},
\end{eqnarray*}
where $y_{i\cdot}=\sum_{l=1}^ty_{il}.$ This situation was studied by
Dionne \& Vanasse (1992) for a one-component mixture and by Tzougas
et al. (2014) for an $m$-component mixture. In the case that $t=0,$
one may show that $\widehat{Rate}_i(1)=e^{C_{i}B_{i1}}.$
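The closed form above is straightforward to evaluate; the sketch below uses log-gamma arithmetic for numerical stability, with argument names chosen for illustration.
\begin{verbatim}
# Sketch of the q = 1 rate premium: phi, alpha are the mixture
# parameters, y_sum = sum_l y_il, cum_mu = sum_l exp(C_i B_il),
# and mu_next = exp(C_i B_{i,t+1}).
import numpy as np
from scipy.special import gammaln

def rate_premium(phi, alpha, y_sum, cum_mu, mu_next):
    c = alpha + cum_mu
    base = np.log(phi) + alpha * np.log(alpha) - gammaln(alpha)
    num = np.exp(base + gammaln(alpha + y_sum + 1)
                 - (alpha + y_sum + 1) * np.log(c)).sum()
    den = np.exp(base + gammaln(alpha + y_sum)
                 - (alpha + y_sum) * np.log(c)).sum()
    return mu_next * num / den
\end{verbatim}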
\begin{remark}\label{Bayes_estimator_rate_premium-under-distribution}
For the situation where no covariate information is taken into
account (a pure distribution model) and the risk parameter
$\Lambda_i$ is distributed according to a finite mixture gamma
distribution as in Corollary \ref{kINBM_Corollary}, the result of
Theorem \ref{Bayes_estimator_rate_premium} can be reformulated as
\footnotesize{
\begin{equation*}
\widehat{ Rate}_i(t+1)=\frac{\int^{\infty
}_0{\lambda_{it+1}\prod^t_{l=1}\left(pI_{\left(y_{il}=k\right)}+q\frac{e^{-\lambda_{it+1}}{\left(\lambda_{it+1}\right)}^{y_{il}}}{y_{il}!}\right)}\sum^m_{j=1}{{\varphi
}_j\frac{{\lambda_{it+1}}^{{\alpha }_j-1}{{\tau }_j}^{{\alpha
}_j}e^{-{\tau }_j\lambda_{it+1}}}{\mathit{\Gamma}({\alpha
}_j)}}d\lambda_{it+1}}{\int^{\infty
}_0{\prod^t_{l=1}\left(pI_{\left(y_{il}=k\right)}+q\frac{e^{-\lambda_{it+1}}{\left(\lambda_{it+1}\right)}^{y_{il}}}{y_{il}!}\right)}\sum^m_{j=1}{{\varphi
}_j\frac{{\lambda_{it+1}}^{{\alpha }_j-1}{{\tau }_j}^{{\alpha
}_j}e^{-{\tau }_j\lambda_{it+1}}}{\mathit{\Gamma}({\alpha
}_j)}}d\lambda_{it+1}}.
\end{equation*}}\normalsize
\end{remark}
The following theorem develops a Bayes estimator for the base
premium $BP_i(t+1)$ for an $i^{\hbox{th}}$ policyholder at
$(t+1)^{\hbox{th}}$ year.
\begin{theorem}\label{Bayes_estimator_base_premium}
Suppose that for the $i^{\hbox{th}}$ policyholder, the severities/sizes of
claims in the last $t$ years are denoted by ${\bf
Z}_{i}=({\bf Z}_{i1},\cdots,{\bf Z}_{it}).$ Also suppose that, for
$l=1,\cdots,t,$ ${\bf Z}_{il}=(Z_{il1},\cdots,Z_{ilk_{il}}),$
where $k_{il}$ stands for the number of claims reported by the
$i^{\hbox{th}}$ policyholder in the $l^{\hbox{th}}$ year, and, for
$s=1,\cdots,k_{il},$ assume that $Z_{ils},$ given parameter
$\Theta_{il}=\theta_{il},$ is distributed according to an
exponential distribution with density function
$f_{Z_{ils}|\Theta_{il}=\theta_{il}}(z_{ils})=\exp\{-z_{ils}/\theta_{il}\}/\theta_{il}.$
Moreover, suppose that the risk parameter $\Theta_i$ can be restated
as $\log({\theta }_{il})={\bf W}_{i}{\bf D}_{il}+\epsilon_i,$
where ${\bf W}_{i}=(1,w_{i,1},\dots,w_{i,p})$ is the vector of $p$
characteristics/covariates of the $i^{\hbox{th}}$ policyholder,
${\bf D}_{il}=(d_{0il},\cdots,d_{pil})^\prime$ is the vector of
regression coefficients, and $u_i=\exp(\epsilon_i)$ is
distributed according to a finite mixture inverse gamma distribution
with density function
$f_{U_i}(u_i)=\sum_{j=1}^{m}\phi_j(\eta_j-1)^{\eta_j}u_i^{-\eta_j-1}\exp(-(\eta_j-1)/u_i)/\Gamma(\eta_j),$
where $u_i>0$, ${\eta}_j>0$ and $\sum^m_{j=1}{\phi_j=1}$. Then, the
Bayes estimator of the base premium $BP_i(t+1)$ of the
$i^{\hbox{th}}$ policyholder at the $(t+1)^{\hbox{th}}$ year is given
by
\begin{equation} \label{Eq_Bayes_Base_Premium}
\widehat{BP}_i^{t+1}=e^{{
W}_{i}{D}_{it+1}}\dfrac{\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}\frac{\Gamma(\eta_j+K_i-1)}{(\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}\exp(-W_{i}D_{il})z_{ils}-1)^{\eta_j+K_i-1}}}{\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}\frac{\Gamma(\eta_j+K_i)}{(\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}\exp(-W_{i}D_{il})z_{ils}-1)^{\eta_j+K_i}}},
\end{equation}
where $K_i=\sum_{l=1}^{t}k_{il}.$
\end{theorem}
{\it Proof.} The posterior distribution of $\Theta_{it+1}|({\bf
Z}_{i}, {\bf W}_{i})$ can be restated as
\begin{eqnarray*}
f_{\Theta_{it+1}|({\bf Z}_{i}, {\bf W}_{i})}(\theta_{it+1}) &=&
\dfrac{\prod_{l=1}^{t}\prod_{s=1}^{k_{il}}f_{Z_{ils}|\Theta_{il}}(z_{ils})f_{\Theta_{il}}(e^{W_{i}D_{il}}u_i)}{\int_{0}^{\infty}\prod_{l=1}^{t}\prod_{s=1}^{k_{il}}f_{Z_{ils}|\Theta_{il}}(z_{ils})f_{\Theta_{il}}(e^{W_{i}D_{il}}u_i)du_i}\\
&=&\dfrac{\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}u_i^{-\eta_j-K_i-1}\exp\{-\frac{\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}\exp(-W_{i}D_{il})z_{ils}-1}{u_i}\}}{\int_{0}^{\infty}\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}u_i^{-\eta_j-K_i-1}\exp\{-\frac{(\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}\exp(-W_{i}D_{il})z_{ils}-1)}{u_i}\}
du_i}.
\end{eqnarray*}
The desired Bayes estimator follows from
$$\widehat{BP}_i^{t+1}=\int_{0}^{\infty}e^{W_{i}D_{it+1}}u_if_{\Theta_{it+1}|({\bf Z}_{i}, {\bf W}_{i})}(\theta_{it+1})du_i.~~\square$$
The above result was also obtained by Tzougas et al. (2014).
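A minimal R sketch of this closed form (our own illustration; \texttt{z} pools all claim sizes $z_{ils},$ \texttt{wd} holds the matching linear predictors $W_iD_{il}$ repeated for each claim of year $l,$ and \texttt{wd\_next} is $W_iD_{it+1}$) is:
{\footnotesize
\begin{verbatim}
base_premium <- function(phi, eta, z, wd, wd_next) {
  K <- length(z)                    # K_i: total number of reported claims
  s <- eta + sum(exp(-wd) * z) - 1  # component-wise posterior rate term
  num <- sum(phi * (eta - 1)^eta / gamma(eta) *
             gamma(eta + K - 1) / s^(eta + K - 1))
  den <- sum(phi * (eta - 1)^eta / gamma(eta) *
             gamma(eta + K) / s^(eta + K))
  exp(wd_next) * num / den
}
\end{verbatim}}\normalsize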
\begin{remark}\label{Bayes_estimator_base_premium-under-distribution}
Consider the situation where no covariate information is taken
into account (a distribution model) and the risk parameter
$\Theta_{i}$ is distributed according to a finite mixture
Inverse Gamma distribution with density function given by
\eqref{density_finite_mixture_Inverse_gamma}. The result of Theorem
\ref{Bayes_estimator_base_premium} can then be reformulated as
\begin{equation*}
\widehat{BP}_i^{t+1}=\dfrac{\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}\frac{\Gamma(\eta_j+K_i-1)}{(\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}z_{ils})^{\eta_j+K_i-1}}}{\sum^m_{j=1}\phi_j\frac{(\eta_j-1)^{\eta_j}}{\Gamma(\eta_j)}\frac{\Gamma(\eta_j+K_i)}{(\eta_j+\sum_{l=1}^{t}\sum_{s=1}^{k_{il}}z_{ils})^{\eta_j+K_i}}}.
\end{equation*}
\end{remark}
To show the practical application of our findings, the next section
provides a real-data example.
\section{Numerical Application}
We now consider data from the Iranian third party liability
portfolio for the year 2011. After a preliminary investigation, we
retained reliable information on 8874 policyholders. We used 4
independent variables as covariates, presented in Table 1. For
each policyholder we have the initial information at the beginning
of the period, and we use these covariates to model the
frequency/severity of claims in order to evaluate the pure premium of
each policyholder under a rate--making system.
\begin{center}\scriptsize{\tiny
Table 1: Available covariates information for each policyholder.
\begin{tabular}{c l}
\hline
Variable & Description \\
\hline
Gender & Equal to 0 for woman \& 1 for man\\
Age & Equal to 1 for $18\leq age< 30;$ 2 for $30\leq age< 40;$ 3 for $40\leq age< 50;$ \&
4 for $50\leq age$ \\
Car's price & Equal to 1 for $price<2\times10^4;$ 2 for $2\times10^4\leq price<5\times10^4;$ 3 for $5\times10^4\leq
price<10^5;$ \& 4 for $10^5\leq price$\\
Living area & Equal to 1 for
$population~size<10^5;$ 2 for $10^5\leq
population~size<5\times10^5;$ 3
for $5\times10^5\leq population~size<10^6;$\\
& \& 4 for $10^6\leq
population~size$\\
\hline
\end{tabular}}\normalsize
\end{center}
For simplicity of presentation, hereafter we write $kINBM_m$
for a k-Inflated Negative Binomial model with $m$ mixture
components and Pareto$M_m$ for a Pareto model with $m$ mixture
components.
To find an appropriate distribution for the frequency of claims, in
the first step we considered the $kINBM_m$ model along with all
distributions that have previously been used to model claim
frequency in a rate--making system. Namely, we considered the
$kINBM_m$, Delaporte, Sichel, and Poisson Inverse Gaussian (PIG)
distributions for frequency and the Pareto$M_m$ distribution for
severity, and estimated their parameters. For the $kINBM_m$ the
maximum likelihood estimates were computed with our own R code,
while for the other distributions they were computed using the
GAMLSS package in R. Table 2 reports the maximum likelihood
estimates of the significant parameters of these distributions; the
significance of each parameter was assessed with the Wald test.
Using a backward elimination selection method, we then identified the
covariates that affect the response variable in each regression
model. The significance of each covariate was assessed with the Wald
test. Table 3 shows the result of the backward selection for the
frequency/severity of accidents.
\begin{landscape}
\begin{center}\scriptsize{\tiny
Table 2. Parameter estimates for various models for the
frequency/severity of claims.\\
\begin{tabular}[c]{lllllllll}
\hline\\
\multicolumn{7}{c}{{\bf Distribution:}} \\
\hline\\
$NBM_1$ & $0INBM_1$ & $1INBM_1$ & $2INBM_1$ & $3INBM_1$ & $NBM_2$ & $0INBM_2$ \\
\hline\\
${\boldsymbol \omega}=(*,1)$ & ${\boldsymbol \omega}=(0.001,0.999)$ & ${\boldsymbol \omega}=(0.136,0.861)$ & ${\boldsymbol \omega}=(0,1)$ & ${\boldsymbol \omega}=(0.001,0.999)$ & ${\boldsymbol \omega}=(*,0.005,0.995)$ & ${\boldsymbol \omega}=(0.001,0.004,0.995)$\\
${\boldsymbol \tau}=23.390$ & ${\boldsymbol \tau}=22.256$ & ${\boldsymbol \tau}=1.755$ & ${\boldsymbol \tau}=19.408$ & ${\boldsymbol \tau}=32.333$ & ${\boldsymbol \tau}=(14.152,141.857)$ & ${\boldsymbol \tau}=(26.027,124.000)$ \\
${\boldsymbol \alpha}=5.717$ & ${\boldsymbol \alpha}=5.376$ & ${\boldsymbol \alpha}=0.217$ & ${\boldsymbol \alpha}=4.734$ & ${\boldsymbol \alpha}=7.730$ & ${\boldsymbol \alpha}=(39.02,32.53)$ & ${\boldsymbol \alpha}=(71.74,30.31)$ \\
\hline\\
\multicolumn{7}{c}{{\bf Distribution:}} \\
\hline\\
$1INBM_2$ & $2INBM_2$ & $3INBM_2$ & $NBM_3$ & $0INBM_3$ & $1INBM_3$ & Delaporte\\
\hline\\
${\boldsymbol \omega}=(0.116,0.831,0.053)$ & ${\boldsymbol \omega}=(0,1, 0)$ & ${\boldsymbol \omega}=(0.001,0.997,0.002)$ &${\boldsymbol \omega}=(*,0.014, 0.982,0.004)$ & ${\boldsymbol \omega}=(0.003,0.007,0.986,0.004)$ & ${\boldsymbol \omega}=(0.125,0.644,0.033,0.198)$& $\lambda=0.243$\\
${\boldsymbol \tau}=(8.174,2.690)$& ${\boldsymbol \tau}=(19.408,30.250)$ & ${\boldsymbol \tau}=(25.316,42.478)$ &${\boldsymbol \tau}=(124.000,124.000,30.250)$ & ${\boldsymbol \tau}=(141.857,141.857,27.571)$ & ${\boldsymbol \tau}=(4.587,2.623,4.181)$& $\sigma=77.67$\\
${\boldsymbol \alpha}=(0.807,2.305)$& ${\boldsymbol\alpha}=(4.735,6.327)$ & ${\boldsymbol \alpha}=(6.054,8.88)$ & ${\boldsymbol \alpha}=(30.47,29.48,85.17)$ & ${\boldsymbol \alpha}=(33.70,32.58,77.67)$ & ${\boldsymbol \alpha}=(0.357,2.628,0.729)$& $\nu=0.913$\\
\hline\\
\multicolumn{7}{c}{{\bf Distribution:}} \\
\hline\\
Sichel & PIG & Pareto$M_1$ & Pareto$M_2$ & Pareto$M_3$ &\\
\hline\\
$\mu=0.242$ & $\mu=0.242$& ${\boldsymbol\rho}=1$ & ${\boldsymbol\rho}=(0.519, 0.481)$& ${\boldsymbol\rho}=(0.332,0.321, 0.347)$\\
$\sigma=NS$ & $\sigma=0.225$&${\boldsymbol\alpha}=1.871$ &${\boldsymbol\alpha}=(1.871,1.871)$ &${\boldsymbol\alpha}=(1.871,1.873,1.873)$\\
$\nu=-4.961$& ---& ${\boldsymbol\gamma}=16.44$& ${\boldsymbol\gamma}=(16.43,16.44)$ &${\boldsymbol\gamma}=(16.44,16.43,16.43)$\\
\hline\\
\end{tabular}\\
where the first element in ${\boldsymbol \omega}$ stands for the
weight of the inflated part; we use $*$ whenever the distribution
is a non-inflated distribution, and $NS$ stands for not significant
at the 5\% level. }\normalsize
\end{center}\newpage
\begin{center}\scriptsize{\tiny
Table 3. Regression coefficients for various models for the
frequency/severity of claims.\\
\begin{tabular}[c]{llllllll}
\hline\\
\multicolumn{6}{c}{{\bf Regression model:}} \\
\hline\\
& $NBM_1$ & $0INBM_1$ & $1INBM_1$ & $2INBM_1$ & $3INBM_1$ & $NBM_2$ & $0INBM_2$ \\
\hline\\
& ${\boldsymbol \omega}=(*,1)$ & ${\boldsymbol \omega}=(0.041,0.959)$ & ${\boldsymbol \omega}=(0.111,0.889)$ & ${\boldsymbol \omega}=(0.001,0.999)$ & ${\boldsymbol \omega}=(0.002,0.998)$ & ${\boldsymbol \omega}=(*,0.893,0.107)$ & ${\boldsymbol \omega}=(0.047,0.025,0.928)$\\
& ${\boldsymbol \alpha}=7.560$ & ${\boldsymbol \alpha}=22.56$ & ${\boldsymbol \alpha}=0.440$ & ${\boldsymbol \alpha}=7.950$ & ${\boldsymbol \alpha}=9.980$ & ${\boldsymbol \alpha}=(0.074,22.01)$ & ${\boldsymbol \alpha}=(0.061,18.71)$ \\
Intercept & ${\boldsymbol \beta_0}=-0.738$ & ${\boldsymbol \beta_0}=-0.698$ & ${\boldsymbol \beta_0}=-0.887$ & ${\boldsymbol \beta_0}=-0.744$ & ${\boldsymbol \beta_0}=-0.745$ & ${\boldsymbol \beta_0}=(NS,-0.766)$ & ${\boldsymbol \beta_0}=(NS,-0.719)$ \\
Gender & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=(NS,NS)$ & ${\boldsymbol \beta_1}=(NS,NS)$ \\
Age& ${\boldsymbol \beta_2}=-0.118$ & ${\boldsymbol \beta_2}=-0.118$ & ${\boldsymbol \beta_2}=-0.213$ & ${\boldsymbol \beta_2}=-0.121$ & ${\boldsymbol \beta_2}=-0.114$ & ${\boldsymbol \beta_2}=(NS,-0.116)$ & ${\boldsymbol \beta_2}=(NS,-0.117)$ \\
Car's price & ${\boldsymbol \beta_3}=-0.189$ & ${\boldsymbol \beta_3}=-0.189$ & ${\boldsymbol \beta_3}=-0.263$ & ${\boldsymbol \beta_3}=-0.190$ & ${\boldsymbol \beta_3}=-0.196$ & ${\boldsymbol \beta_3}=(0.709,0.175)$ & ${\boldsymbol \beta_3}=(0.587,0.180)$\\
Living area& ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=(NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS)$ \\
\hline\\
\multicolumn{6}{c}{{\bf Regression model:}} \\
\hline\\
& $1INBM_2$ & $2INBM_2$ & $3INBM_2$ & $NBM_3$ & $0INBM_3$ & $1INBM_3$ & Delaporte\\
\hline\\
& ${\boldsymbol \omega}=(0.120,0.833,0.047)$ & ${\boldsymbol \omega}=(0.002,0.024,0.974)$ & ${\boldsymbol \omega}=(0.002,0.022,0.976)$ &${\boldsymbol \omega}=(*,0.93,0.02,0.05)$ & ${\boldsymbol \omega}=(0.03,0.07,0.46,0.44)$ & ${\boldsymbol \omega}=(0.13,0.07,0.40,0.40)$ & $\sigma=59.56$\\
& ${\boldsymbol \alpha}=(0.685,9.471)$ & ${\boldsymbol\alpha}=(0.087,19.00)$ & ${\boldsymbol \alpha}=(0.090,18.94)$ & ${\boldsymbol \alpha}=(31.22,30.95,0.030)$ & ${\boldsymbol \alpha}=(13.29,11.32,0.157)$ & ${\boldsymbol \alpha}=(9.562,0.643,0.396)$ & $\nu=0.910$\\
Intercept & ${\boldsymbol \beta_0}=(1.009,0.663)$ & ${\boldsymbol \beta_0}=(NS,-0.775)$ & ${\boldsymbol \beta_0}=(NS,-0.775)$ & ${\boldsymbol \beta_0}=(0.638,1.032,NS)$ & ${\boldsymbol \beta_0}=(0.531,0.384,0.165)$ & ${\boldsymbol \beta_0}=(0.535,2.514,2.754)$ & ${\boldsymbol \beta_0}=-0.749$ \\
Gender& ${\boldsymbol \beta_1}=(-0.185,NS)$ & ${\boldsymbol \beta_1}=(NS,NS)$ & ${\boldsymbol \beta_1}=(NS,NS)$ & ${\boldsymbol \beta_1}=(NS,NS,NS)$ & ${\boldsymbol \beta_1}=(NS,NS,NS)$ & ${\boldsymbol \beta_1}=(0.770,0.632,NS)$ & ${\boldsymbol \beta_1}=NS$ \\
Age& ${\boldsymbol \beta_2}=(NS,-0.442)$ & ${\boldsymbol \beta_2}=(NS,-0.115)$ & ${\boldsymbol \beta_2}=(NS,-0.114)$ & ${\boldsymbol \beta_2}=(0.305,0.337,NS)$ & ${\boldsymbol \beta_2}=(NS,0.151,0.165)$ & ${\boldsymbol \beta_2}=(0.343,0.548,0.813)$ & ${\boldsymbol \beta_2}=-0.113$ \\
Car's price& ${\boldsymbol \beta_3}=(0.607,0.076)$ & ${\boldsymbol \beta_3}=(0.618,0.177)$ & ${\boldsymbol \beta_3}=(0.675,0.179)$& ${\boldsymbol \beta_3}=(NS,0.676,0.332)$ & ${\boldsymbol \beta_3}=(0.230,0.151,0.310)$ & ${\boldsymbol \beta_3}=(NS,0.298,1.545)$ & ${\boldsymbol \beta_3}=-0.189$\\
Living area& ${\boldsymbol \beta_4}=(NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS,NS)$ & ${\boldsymbol \beta_4}=(0.238,2.382,0.623)$ & ${\boldsymbol \beta_4}=NS$ \\
\hline\\
\multicolumn{6}{c}{{\bf Regression model:}} \\
\hline\\
& Sichel & PIG & Pareto$M_1$ & Pareto$M_2$ & Pareto$M_3$ &\\
\hline\\
& $\sigma=NS$ & $\sigma=0.174$ & ${\boldsymbol\rho}=1$ & ${\boldsymbol\rho}=(0.542, 0.458)$ & ${\boldsymbol\rho}=(0.341,0.312, 0.347)$\\
& $\nu=-5.688$ & --- &${\boldsymbol\alpha}=1.927$ &${\boldsymbol\alpha}=(1.927,1.927)$ &${\boldsymbol\alpha}=(1.93,1.93,1.93)$\\
Intercept & ${\boldsymbol \beta_0}=-0.737$ & ${\boldsymbol \beta_0}=-0.736$ & ${\boldsymbol \beta_0}=16.15$ & ${\boldsymbol \beta_0}=(6.15,6.15)$ & ${\boldsymbol \beta_0}=(16.15,16.15,16.15)$ \\
Gender & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=NS$ & ${\boldsymbol \beta_1}=(NS,NS)$ & ${\boldsymbol \beta_1}=(NS,NS,NS)$ \\
Age & ${\boldsymbol \beta_2}=-0.119$ & ${\boldsymbol \beta_2}=-0.119$ & ${\boldsymbol \beta_2}=NS$ & ${\boldsymbol \beta_2}=(NS,NS)$ & ${\boldsymbol \beta_2}=(NS,NS,NS)$ \\
Car's price & ${\boldsymbol \beta_3}=-0.190$ & ${\boldsymbol \beta_3}=-0.190$ & ${\boldsymbol \beta_3}=0.157$ & ${\boldsymbol \beta_3}=(-0.16,-0.16)$ & ${\boldsymbol \beta_3}=(-0.16,-0.16,-0.16)$ \\
Living area & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=NS$ & ${\boldsymbol \beta_4}=(NS,NS)$ & ${\boldsymbol \beta_4}=(NS,NS,NS)$ \\
\hline\\
\end{tabular}\\
where the first element in ${\boldsymbol \omega}$ stands for the
weight of the inflated part; we use $*$ whenever the distribution
is a non-inflated distribution, and $NS$ stands for not significant
at the 5\% level.}\normalsize
\end{center}\end{landscape}
\subsection{Model comparison}
To obtain an appropriate model for a given rate--making system,
this section begins by considering the $kINBM_m$ model along with
all distributions that have previously been used to model the
frequency of claims in a rate--making system. In order to
compare the results of the regression/distribution models, we conducted
three evaluation approaches. Namely: ({\bf 1}) In the first
approach, to study the performance of the count distributions, we employ
each fitted distribution, 200 times, to simulate 8874 data points;
then, using the mean square error (MSE) criterion, we compare the
simulated data with the observed data (see Table 4 for the result of
this comparison); ({\bf 2}) The second approach provides a pairwise
comparison between the fitted count regression/distribution models
based upon {\it either} the Vuong test (for two non-nested
models) {\it or} the likelihood ratio test (for two nested
models), see Table 5 for this comparison study; and finally, ({\bf
3}) The third approach employs the Akaike Information Criterion
(AIC) and the Schwarz Bayesian Information Criterion (SBIC) to
compare the regression/distribution models for both frequency and
severity of claims; the result of this comparison is reported in
Table 6.
\subsection*{Generating Data approach:}
To study the performance of the fitted count distributions given in
Table 2, we employ the GAMLSS package in R to generate samples from the
Delaporte, the Sichel, and the Poisson Inverse Gaussian distributions.
Lim et al. (2014) introduced an idea for generating samples from a
given Zero-inflated Poisson mixture distribution, and we employ
their idea to generate samples from a given $kINBM_m$
distribution. Based upon their idea, to generate a sample $y_i$ from
a $kINBM_m$ distribution with probability mass function
\begin{eqnarray*} P(Y=y|{\boldsymbol \theta}) &=&
p_1I_{\left(y=k\right)}+\sum_{j=2}^{m}p_j\left(\genfrac{}{}{0pt}{}{y+{\alpha }_j-1}{y}\right){\left(\frac{{\tau }_j}{{\alpha }_j+{\tau }_j}\right)}^{{\alpha }_j}{\left(\frac{{\alpha }_j}{{\alpha }_j+{\tau
}_j}\right)}^y,
\end{eqnarray*}
where all parameters ${\boldsymbol\theta}=({\boldsymbol\alpha},
{\boldsymbol\tau}, {\boldsymbol p})$ and $k$ are given, we start
with a dummy variable, say $s_i,$ generated from a Uniform$(0,1)$
distribution. If $0\leq s_i\leq p_1,$ we set $y_i=k;$ if
$p_1< s_i\leq p_1+p_2,$ then $y_i$ is a draw from a Negative
Binomial distribution $(\alpha_1,\alpha_1/(\alpha_1+\tau_1));$ if
$p_1+p_2< s_i\leq p_1+p_2+p_3,$ then $y_i$ is a draw from a
Negative Binomial distribution $(\alpha_2,
\alpha_2/(\alpha_2+\tau_2));$ if $p_1+p_2+p_3< s_i\leq
p_1+p_2+p_3+p_4,$ then $y_i$ is a draw from a Negative Binomial
distribution $(\alpha_3, \alpha_3/(\alpha_3+\tau_3));$ and so on.
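A hedged R sketch of this composition sampler (our own illustration, not the authors' code) is given below; following the description above, the weight $p_{j+1}$ is paired with the parameters $(\alpha_j,\tau_j),$ and we take the Negative Binomial success probability to be $\tau_j/(\alpha_j+\tau_j)$ so that the draws match the displayed probability mass function.
{\footnotesize
\begin{verbatim}
rkinbm <- function(n, k, p, alpha, tau) {
  ## p = (p_1,...,p_m): p_1 weights the inflated point k; the weight
  ## p_{j+1} pairs with the NB component (alpha[j], tau[j]).
  comp <- sample(seq_along(p), n, replace = TRUE, prob = p)  # uniform s_i rule
  y <- integer(n)
  nb <- comp > 1                   # indices drawn from an NB component
  y[!nb] <- k                      # inflated draws equal k
  j <- comp[nb] - 1
  y[nb] <- rnbinom(sum(nb), size = alpha[j],
                   prob = tau[j] / (alpha[j] + tau[j]))
  y
}
## e.g. one simulated portfolio of 8874 counts, with weights close to
## the 1INBM_1 fit of Table 2 (renormalized to sum to one):
## rkinbm(8874, k = 1, p = c(0.136, 0.864), alpha = 0.217, tau = 1.755)
\end{verbatim}}\normalsize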
We employed the GAMLSS package and the above idea to simulate 8874
data points (200 times). Table 4 reports the mean (and mean square
error, MSE) of the frequencies over these 200 simulated samples.
\begin{center}\scriptsize{\tiny
Table 4. Mean and MSE of frequency for generated data under the count distributions given in Table 2.\\
\begin{tabular}[c]{ccccccc}
\hline\\
\multicolumn{5}{c}{{\bf Mean (MSE) of frequency for generated data under Distribution:}} \\
\hline\\
Observed(Freq.) & Delaporte & Sichel & PIG & $NBM_1$ &$0INBM_1$& $1INBM_1$ \\
\hline\\
0(6956) & 7018.240(5134.78) & 7027.435(6384.40)& 7001.370(4548.36) & 69980.515(1579.14)& 7001.825(2964.21)& 6958.575(1498.88)\\
1(1751) & 1614.615(19105.90) & 1584.62(28655.82)& 1620.145(22066.28)& 1626.805(13343.36)& 1625.580(16799.97)& 1748.300(1598.76)\\
2(122) & 203.210(6774.82) & 227.235(11188.30)& 224.160(11090.70) & 222.960(11472.20) & 221.325(11189.41)& 120.835(136.98)\\
3(31) & 25.540(55.10) & 29.680(26.90) & 25.585(49.58) & 23.645(81.46) & 22.950(73.84)& 32.555(33.04) \\
4(9) & 6.470(12.50) & 4.230(24.28) & 2.485(39.78) & 1.910(48.10) & 2.145(44.23) & 9.510(9.50) \\
5(3) & 2.940(1.82) & 0.595(6.12) & 0.230(7.78) & 0.175(6.94) & 0.175(7.90) & 2.910(2.54) \\
6(2) & 1.430(1.36) & 0.160(3.74) & 0.000(4.00) & 0.000(4.00) & 0.00(4.00) & 0.895(1.54) \\
$>6$(0) & 1.560(2.88) & 0.000(0.00) & 0.000(0.00) & 0.000(0.00) & 0.00(0.00) & 0.420(0.53) \\
\hline\\
\multicolumn{5}{c}{{\bf Mean (MSE) of frequency for generated data under Distribution:}} \\
\hline\\
Observed(Freq.) & $2INBM_1$ & $3INBM_1$ & $NBM_2$ & $0INBM_2$ & $1INBM_2$ & $2INBM_2$ \\
\hline\\
0(6956) & 6988.835(2476.51) &7003.555(3652.21) & 7019.265(6530.50) & 7019.135(3004.08)& 6958.730(1529.84)&7002.304(3033.16)\\
1(1751) & 1624.415(15079.51)&1626.055(16892.21) & 1619.010(22792.00)& 1616.830(6657.27)& 1748.690(1448.56)&1621.425(16415.15)\\
2(122) & 232.030(11339.35) &214.305(8663.16) & 198.745(6347.56) & 200.715(10290.01)& 122.380(124.82) &220.400(10608.91)\\
3(31) & 26.405(72.11) &28.205(20.45) & 23.780(46.54) & 23.940(35.47)& 30.530(24.16) &26.855(53.16) \\
4(9) & 2.015(41.56) &1.750(51.70) & 7.010(11.76) & 7.150(10.84) & 9.030(8.68) &2.800(38.85) \\
5(3) & 2.910(2.54) &0.155(8.51) &3.635(2.79) & 3.350(3.60) & 3.115(2.56) &0.155(9.01) \\
6(2) & 0.115(3.85) &0.210(3.38) & 1.730(1.18) & 1.640(2.04) & 1.075(1.26) &0.095(3.175) \\
$>6$(0) & 0.000(0.00) &0.000(0.00) & 1.11(3.44) & 1.090(2.13) & 0.450(0.60) &0.000(0.000)\\
\hline\\
\multicolumn{5}{c}{{\bf Mean (MSE) of frequency for generated data under Distribution:}} \\
\hline\\
Observed(Freq.) & $3INBM_2$ &$NBM_3$ &$0INBM_3$ & $1INBM_3$\\
\hline\\
0(6956) &7019.200(5955.36) &7015.995(1585.00) &7019.785(7553.77) &6959.035(1127.89)\\
1(1751) &1605.105(22133.02)&1619.510(11018.88) &1617.120(21482.31) &1747.870(1038.81)\\
2(122) &215.755(7985.44) &200.635(8149.00) &199.745(5686.31) &121.985(213.64)\\
3(31) &32.100(56.55) &24.635(65.20) &24.355(124.61) &30.630(52.47)\\
4(9) &1.650(54.61) &6.9100(10.76) &6.955(12.64) &9.605(8.34)\\
5(3) &0.205(9.04) &3.635(2.78) &3.400(3.20) &3.245(1.94)\\
6(2) &0.000(4.00) &1.600(1.14) &1.595(0.675) &1.215(1.37)\\
$>6$(0) &0.000(0.00) &0.650(1.28) &1.045(2.01) &0.415(0.68)\\
\hline\\
\end{tabular}
}\normalsize
\end{center}
The results of the simulation study, given in Table 4, show that the
MSE for all 1-Inflated Negative Binomial mixture
distributions is considerably smaller than the MSE of the other fitted
distributions. Therefore, based upon this simulation study, one
may conclude that the 1-Inflated Negative Binomial mixture
distributions are appropriate distributions for the claim frequency of
Iranian policyholders.
\subsection*{The Vuong and the likelihood ratio tests' approach:}
To make a decision about the statistical hypotheses
\begin{eqnarray*}
H_0 &:& ~\hbox{Observed data came from a population with distribution function}~F\\
H_1 &:& ~\hbox{Observed data came from a population with
distribution function}~G,
\end{eqnarray*}
if both distributions belong to the same family of distributions
with different parameters (nested models), one may employ the
likelihood ratio test. Otherwise, when the models belong to two
different families of distributions (non-nested models), the Vuong
test has to be used; see Denuit et al. (2007, \S 2) for more details.
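For two nested fits, the likelihood ratio statistic is easily computed from the fitted objects; a minimal R sketch (assuming both fits expose a \texttt{logLik} method, which our own kINBM code would have to provide) is:
{\footnotesize
\begin{verbatim}
lr_test <- function(fit0, fit1) {   # fit0 (H_0) nested in fit1 (H_1)
  stat <- as.numeric(2 * (logLik(fit1) - logLik(fit0)))
  df   <- attr(logLik(fit1), "df") - attr(logLik(fit0), "df")
  c(statistic = stat, p.value = pchisq(stat, df = df, lower.tail = FALSE))
}
\end{verbatim}}\normalsize
The Vuong statistic for non-nested pairs additionally requires the pointwise log-likelihood contributions of each model and is not sketched here.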
Table 5 represents a pairwise comparison between the fitted count
regression/distribution models given in Tables 2 and 3.
\begin{center}\scriptsize{\tiny
Table 5. Result of the Vuong test (for two non-nested models)
{\it or} the likelihood ratio test (for two nested
models).\\
\begin{tabular}[c]{cccc}
\hline\\
\multicolumn{4}{c}{{\bf Panel A:} Result of the Vuong test} \\
\hline\\
Model 1 & Model 2 & Decision on fitted regression& Decision on fitted distribution \\
\hline\\
Delaporte & $1INBM_1$ & $1INBM_1$ (Statistic=-37.63 \& P-value=0.00)& $1INBM_1$ (Statistic=-51.58 \& P-value=0.00)\\
PIG & $1INBM_1$ & $1INBM_1$ (Statistic=-33.15 \& P-value=0.00)& $1INBM_1$ (Statistic=-38.94 \& P-value=0.00)\\
$0INBM_1$ & $1INBM_1$ & $1INBM_1$ (Statistic=-24.75 \& P-value=0.00)& $1INBM_1$ (Statistic=-35.57 \& P-value=0.00)\\
$2INBM_1$ & $1INBM_1$ & $1INBM_1$ (Statistic=-26.80 \& P-value=0.00)& $1INBM_1$ (Statistic=-37.09 \& P-value=0.00)\\
$3INBM_1$ & $1INBM_1$ & $1INBM_1$ (Statistic=-22.41 \& P-value=0.00)& $1INBM_1$ (Statistic=-31.22 \& P-value=0.00)\\
$0INBM_2$ & $1INBM_2$ & $1INBM_2$ (Statistic=-43.51 \& P-value=0.00)& $1INBM_2$ (Statistic=-52.27 \& P-value=0.00)\\
$2INBM_2$ & $1INBM_1$ & $1INBM_2$ (Statistic=-40.89 \& P-value=0.00)& $1INBM_2$ (Statistic=-37.22 \& P-value=0.00)\\
$3INBM_2$ & $1INBM_2$ & $1INBM_2$ (Statistic=-40.36 \& P-value=0.00)& $1INBM_2$ (Statistic=-33.16 \& P-value=0.00)\\
$0INBM_3$ & $1INBM_3$ & $1INBM_3$ (Statistic=-34.28 \& P-value=0.00)& $1INBM_3$ (Statistic=-20.48 \& P-value=0.00)\\
\hline\\
\multicolumn{4}{c}{{\bf Panel B:} Result of the likelihood ratio test} \\
\hline\\
Model 1 & Model 2 & Decision on fitted regression& Decision on fitted distribution \\
\hline\\
$NBM_1$ & $1INBM_1$ & $1INBM_1$ (Statistic=62.21 \& P-value=0.00)& $1INBM_1$ (Statistic=105.00 \& P-value=0.00)\\
$NBM_2$ & $1INBM_2$ & $1INBM_2$ (Statistic=80.46 \& P-value=0.00)& $1INBM_2$ (Statistic=49.66 \& P-value=0.00)\\
$NBM_3$ & $1INBM_3$ & $1INBM_3$ (Statistic=111.65 \& P-value=0.00)& $1INBM_3$ (Statistic=49.75 \& P-value=0.00)\\
$1INBM_1$ & $1INBM_2$ & $1INBM_2$ (Statistic=45.35 \& P-value=0.00)& $1INBM_1$ (Statistic=0.18 \& P-value=0.90)\\
$1INBM_1$ & $1INBM_3$ & $1INBM_2$ (Statistic=88.65 \& P-value=0.00)& $1INBM_1$ (Statistic=0.21 \& P-value=0.87)\\
$1INBM_2$ & $1INBM_3$ & $1INBM_3$ (Statistic=42.31 \& P-value=0.00)& $1INBM_2$ (Statistic=0.3 \& P-value=0.97)\\
\hline\\
\end{tabular}\\
}\normalsize
\end{center}
Based upon the results of Table 5, one may conclude that the
1-Inflated Negative Binomial mixture distributions/regressions, at
the 5\% significance level, defeat the other distributions/regressions.
\subsection*{The Akaike Information Criterion (AIC) and the
Schwarz Bayesian Information Criterion approaches:} The Akaike
Information Criterion (AIC) and the Schwarz Bayesian Information
Criterion (SBIC) are two measures for selecting an appropriate model
among a set of candidate models. Both criteria are defined as
$-2$ times the maximum log-likelihood, penalized by {\it either} the
number of estimated parameters, for AIC, {\it or} the number of
estimated parameters times the logarithm of the number of observations,
for SBIC. Given a set of candidate models, the preferred model is
the one with the minimum AIC (SBIC) value; see Denuit et al.
(2007, \S 1) for more details.
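In R both criteria reduce to one line; the sketch below takes the maximized log-likelihood \texttt{ll}, the number of estimated parameters \texttt{df}, and the sample size \texttt{n} ($8874$ here).
{\footnotesize
\begin{verbatim}
ic <- function(ll, df, n)           # AIC and SBIC as defined above
  c(AIC = -2 * ll + 2 * df, SBIC = -2 * ll + df * log(n))
\end{verbatim}}\normalsize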
Table 6 provides the AIC and the SBIC for the fitted
regression/distribution models for both frequency and severity of
claims.
\begin{center}\scriptsize{\tiny
Table 6. Result of the Akaike Information Criterion
(AIC) and the Schwarz Bayesian Information Criterion (SBIC).\\
\begin{tabular}[c]{ccccccc}
\hline\\
& & {\bf Regression model} && &{\bf Distribution model}&\\
\hline\\
Model & df & AIC & SBIC& df & AIC & SBIC \\
\hline\\
$NBM_1$ &6 &10656.42 &10698.88 &2 &10784.70 &10798.88\\
Delaporte &7 &10615.42 &10665.07 &3 &10734.99 &10756.26\\
Sichel &7 &10648.96 &10699.16 &3 &10772.67 &10793.94\\
PIG &6 &10653.90 &10696.47 &2 &10781.11 &10795.29\\
$0INBM_1$ &7 &10664.59 &10714.25 &3 &10786.74 &10808.01\\
$1INBM_1$ &7 &10596.11 &10645.77 &3 &10681.69 &10702.96\\
$2INBM_1$ &7 &10658.36 &10708.02 &3 &10787.05 &10808.33\\
$3INBM_1$ &7 &10653.08 &10702.73 &3 &10783.25 &10804.53\\
$NBM_2$ &13 &10635.22 &10727.86 &5 &10735.14 &10770.59\\
$0INBM_2$ &14 &1064.07 &10746.80 &6 &10737.30 &10779.84\\
$1INBM_2$ &14 &10558.76 &10658.58 &6 &10687.80 &10730.02\\
$2INBM_2$ &14 &10638.66 &10748.39 &6 &10793.05 &10835.60\\
$3INBM_2$ &14 &10632.58 &10732.31 &6 &10789.88 &10832.42\\
$NBM_3$ &20 &10632.11 &10775.11 &8 &10741.26 &10797.99\\
$0INBM_3$ &21 &10654.22 &10803.49 &9 &10743.23 &10807.05\\
$1INBM_3$ &21 &10536.45 &10685.28 &9 &10693.19 &10757.33\\
Pareto$M_1$ &6 &63948.20 &63990.75 &2 &64102.30 &64116.48\\
Pareto$M_2$ &13 & 63968.20 &64054.38 &5 &64108.30 &64143.75\\
Pareto$M_3$ &20 &63974.20 &64108.93 &8 &64114.30 &64171.00\\
\hline\\
\end{tabular}\\
}\normalsize
\end{center}
The AIC and SBIC for the fitted models, given in Table 6, show that the
1-Inflated Negative Binomial mixture distributions/regressions are
better than the other distribution/regression models.
\section{Rate--making Examples}
To show the practical application of our findings, we calculate the
rate and pure premiums for the set of well-fitted
distribution/regression models presented in the above
sections. Since we are interested in the differences between the rate
premiums of various classes, we set the rate premium for
a new policyholder equal to 1 unit at $t=0.$ Moreover, we
considered three different categories, described in Table 7.
\begin{center}\scriptsize{\tiny
Table 7: Categories considered to evaluate the rate and pure
premiums under the well-fitted models.
\begin{tabular}{c l}
\hline
Category & Description \\
\hline
$A_1$ & For a situation where no covariate information has been used for premium calculation\\
$A_2$ & Whenever the chosen policyholder is a young man at the age of 25 who owns a car\\ & with
price greater than $2\times 10^4$ and lives in a city with population size larger than $10^6.$\\
$A_3$ & Whenever the chosen policyholder is a mature woman at the age of 55 who owns\\ & a car with
price less than $2\times 10^4$ and lives in a city with population size less than $10^5.$\\
\hline
\end{tabular}}\normalsize
\end{center}
Now, to calculate the rate premium for the three categories $A_1,$ $A_2,$
and $A_3,$ given in Table 7, using the well-fitted models, we consider
two different approaches. The first approach just considers the number
of cumulated claims in the last years, while the second approach
considers the exact number of reported claims for each year in the
history of the policyholder.\footnote{It is worthwhile to mention
that the second approach can be used just for inflated models.}
Tables 8 and 9 represent the calculated rate premiums for the three
categories, given in Table 7, using the well-fitted models for both
approaches.
\begin{center}\scriptsize{\tiny
Table 8: The rate premium for the three categories $A_1,$ $A_2,$ and
$A_3$ using the well-fitted models, whenever the number of cumulated
claims has been considered.\\
\begin{tabular}{cc c cccccccc}
\hline
\multicolumn{11}{c}{~~~~~~~~~{\bf Model:}}\\ \cline{3-11}\\
& Number of cumulated & &$NBM_1$ & & & $NBM_2$ & & & $NBM_3$ & \\
\cline{3-11}\\
Year &claims up to this year ($K$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & --- & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\\
& $K=0$ & 0.96 & 0.96 & 0.98 & 0.95 & 0.91 & 0.98 & 0.95 & 0.87 & 0.90 \\
& $K=1$ & 1.13 & 1.08 & 1.11 & 1.02 & 1.03 & 1.10 & 1.02 & 1.00 & 1.09 \\
$t=1$ & $K=2$ & 1.29 & 1.21 & 1.24 & 1.54 & 1.18 & 1.35 & 1.42 & 1.13 & 1.96 \\
& $K=3$ & 1.46 & 1.33 & 1.37 & 5.05 & 1.99 & 2.91 & 4.30 & 1.67 & 10.05 \\
& $K=4$ & 1.63 & 1.46 & 1.50 & 10.40 & 5.78 & 9.12 & 9.76 & 4.88 & 24.99 \\ \hline\\
& $K=0$ & 0.92 & 0.92 & 0.96 & 0.93 & 0.88 & 0.96 & 0.94 & 0.83 & 0.88 \\
& $K=1$ & 1.08 & 1.04 & 1.09 & 0.97 & 1.00 & 1.08 & 0.98 & 0.97 & 1.04 \\
$t=2$ & $K=2$ & 1.24 & 1.16 & 1.22 & 1.04 & 1.06 & 1.19 & 1.04 & 1.05 & 1.24 \\
& $K=3$ & 1.40 & 1.28 & 1.35 & 1.53 & 1.18 & 1.66 & 1.40 & 1.15 & 2.39 \\
& $K=4$ & 1.57 & 1.40 & 1.47 & 4.69 & 1.60 & 4.00 & 3.92 & 1.41 & 8.56 \\
\hline
\end{tabular}}\normalsize
\end{center}
\begin{center}\scriptsize{\tiny
Table 9: The rate premium for the three categories $A_1,$ $A_2,$ and
$A_3$ using the well-fitted models, whenever the exact number of reported
claims for each year of the policyholder's experience
has been considered.\\
\begin{tabular}{cc c cccccccc}
\hline
\multicolumn{11}{c}{{\bf Model:}}\\ \cline{3-11}\\
Year & Number of reported & &$1INBM_1$ & & & $1INBM_2$ & & & $1INBM_3$ & \\
\cline{3-11}\\
&claims at year $l$ ($k_l$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & --- &1 &1 &1 &1 &1 &1 &1 &1 &1 \\\hline\\
& $k_1=0$ &0.64 &0.63 &0.88 &0.83 &0.81 &0.96 &0.79 &0.63 &0.96 \\
& $k_1=1$ &1.81 &1.55 &1.54 &1.26 &1.30 &1.13 &1.39 &1.29 &1.17 \\
$t=1$ & $k_1=2$ &6.52 &3.91 &4.86 &2.50 &2.43 &2.55 &3.55 &2.54 &2.21 \\
& $k_1=3$ &9.44 &4.94 &6.86 &3.30 &3.29 &3.64 &4.93 &3.50 &2.85 \\
& $k_1=4$ &12.37 &6.38 &8.85 &4.13 &4.12 &4.54 &6.31 &4.46 &3.49 \\\hline\\
& $k_1=0, k_2=0$ &0.48 &0.46 &0.78 & 0.72 &0.68 &0.93 &0.65 &0.48 &0.93 \\
& $k_1=0, k_2=1$ &1.15 &1.03 &1.33 &1.06 &1.05 &1.09 &1.10 &0.84 &1.03 \\
& $k_1=0, k_2=2$ &4.78 &2.56 &4.33 &2.19 &2.01 &2.46 &2.98 &1.79 &2.16 \\
& $k_1=1, k_2=0$ &1.15 &1.03 &1.33 &1.06 &1.05 &1.09 &1.10 &0.84 &1.13 \\
$t=2$ & $k_1=1, k_2=1$ &2.87 &2.06 &2.31 &1.57 &1.61 &1.30 &1.90 &1.53 &1.16 \\
& $k_1=1, k_2=2$ &6.79 &3.58 &5.63 &2.76 &2.60 &2.82 &3.93 &2.47 &2.38 \\
& $k_1=2, k_2=0$ &4.78 &2.56 &4.33 &2.19 &2.01 &2.46 &2.98 &1.79 &2.15 \\
& $k_1=2, k_2=1$ &6.79 &3.58 &5.63 &2.76 &2.60 &2.82 &3.93 &2.47 &2.38 \\
& $k_1=2, k_2=2$ &9.10 &4.67 &7.88 &3.67 &3.34 &3.99 &5.31 &3.09 &3.37 \\
\hline
\end{tabular}}\normalsize
\end{center}
To illustrate how to use the results of Tables 8 and 9, suppose
that {\it either} the Negative Binomial with 2 mixture components,
$NBM_2,$ {\it or} the 1-Inflated Negative Binomial with 2 mixture
components, $1INBM_2,$ can be considered as an appropriate model.
Now consider the following three different scenarios.
\begin{description}
\item[Scenario 1:] For a given policyholder, no covariate information is available (category $A_1$ in Table
7). Based upon the results of Tables 8 and 9, respectively, his/her second year rate premium under the $NBM_2$ model is 0.95 units
while under the $1INBM_2$ model it is 0.83 units, whenever such a policyholder does not report any claim in the first
year. But in the situation that such a policyholder reports 2 claims in the
first year, he/she has to pay 1.54 units under the $NBM_2$ model and 2.50
units under the $1INBM_2$ model.
\item[Scenario 2:] The given policyholder belongs to category $A_2$ of Table
7. Based upon the results of Tables 8 and 9, respectively, his second year rate premium under the $NBM_2$ model is 0.91 units
while under the $1INBM_2$ model it is 0.81
units, whenever such a policyholder does not report any claim in the
first year. But in the situation that such a policyholder reports 2 claims in
the first year, he has to pay 1.18 units under the $NBM_2$ model and
2.43 units under the $1INBM_2$ model.
\item[Scenario 3:] The given policyholder belongs to category $A_3$ of Table 7. Based
upon the results of Tables 8 and 9, respectively, her second year
rate premium under the $NBM_2$ model is 0.98 units while under the
$1INBM_2$ model it is 0.96 units, whenever
such a policyholder does not report any claim in the first year. But
in the situation that such a policyholder reports 2 claims in the
first year, she has to pay 1.35 units under the $NBM_2$ model and
2.55 units under the $1INBM_2$ model.
\end{description}
The above simple example, as well as other possible examples,
shows that: ({\bf 1}) the inflated models and covariate
information improve the fairness of the calculated rate premium; and ({\bf
2}) in the situation that the number of reported claims is uniformly
distributed over the past experience of a policyholder (for instance
$k_1=1$ and $k_2=1$ instead of $k_1=0$ and $k_2=2$), his/her rate
premium under the inflated models is more fair and acceptable.
Now, to estimate the pure premium, we consider the one-component mixture
Pareto distribution/regression model as an appropriate model for claim
severity, along with the other well-fitted counting models. Moreover,
we study the situation that the total claim size is {\it either} 1000
units (Case A) {\it or} 5000 units (Case B). Tables 10 and
11 show the pure premium under these assumptions.
\begin{landscape}
\begin{center}\scriptsize{\tiny
Table 10: The pure premium for the three categories $A_1,$ $A_2,$ and
$A_3$ using the well-fitted models, whenever the total claim size is either
1000 or 5000 units and the number of cumulated claims up to each
year has been considered.\\
\begin{tabular}{cc c cccccccc}
\hline\\
\multicolumn{11}{c}{{\bf Case A: Total of reported claims reaches 1000 units}}\\
\hline\\
\multicolumn{11}{c}{{\bf Model:}}\\ \cline{3-11}\\
& Number of cumulated & &$NBM_1$ \& Pareto$M_1$ & & & $NBM_2$ \& Pareto$M_1$ & & & $NBM_3$ \& Pareto$M_1$ & \\
\cline{3-11}\\
Year &claims up to this year ($K$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & --- & 613.739 &723.848 &461.546 & 607.554 &730.992& 463.354 & 623.147 & 726.489 &449.722 \\\hline\\
& $K=0$ & 589.189 &694.894 &452.315 & 574.376 &667.716 &453.895 & 594.470 &634.635& 405.854 \\
& $K=1$ & 629.269& 723.241 &460.357 & 561.898 &650.758 &441.088 & 576.836 &631.235& 423.590 \\
$t=1$ & $K=2$ & 622.519& 707.323 &448.918 & 735.309 &652.794 &470.045 & 697.730 &620.015& 661.912 \\
& $K=3$ & 621.617& 689.808 &440.058 & 2128.105&974.677& 900.347 & 1860.741& 811.248& 3013.693 \\
& $K=4$ & 620.905& 680.503 &432.993 & 3921.243&2548.529& 2535.831 & 3775.850& 2137.017& 6735.508 \\\hline\\
& $K=0$ & 564.640& 665.940 &443.084 & 568.061 &641.339 &446.372 & 587.939 &605.724 &396.865 \\
& $K=1$ & 601.425& 696.455 &452.063 & 533.528 &626.073 &429.213 & 553.127 &607.359 &399.204 \\
$t=2$ & $K=2$ & 598.391& 678.095 &441.677 & 497.293 &580.094 &413.935 & 510.093 &570.899 &416.165 \\
& $K=3$ & 596.071& 663.875 &433.633 & 643.233 &573.089 &510.020 & 603.755 &555.112 &714.869 \\
& $K=4$ & 598.049& 652.537 &424.333 & 1769.746&702.016 &1106.136 & 1514.878& 613.404& 2298.398 \\
\hline
\multicolumn{11}{c}{{\bf Case B: Total of reported claims reaches 5000 units}}\\
\hline\\
\multicolumn{11}{c}{{\bf Model:}}\\ \cline{3-11}\\
& Number of cumulated & &$NBM_1$ \& Pareto$M_1$ & & & $NBM_2$ \& Pareto$M_1$ & & & $NBM_3$ \& Pareto$M_1$ & \\
\cline{3-11}\\
Year &claims up to this year ($K$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & --- & 613.739& 723.848& 461.546 & 607.554& 730.992& 463.354 & 623.147 &726.489 &449.722\\\hline\\
& $K=0$ & 589.189& 694.894& 452.315 & 574.376& 667.716& 453.895 & 594.470& 634.635& 405.854\\
& $K=1$ & 799.370& 944.429& 550.863 & 713.788& 686.966& 456.491 & 732.764 &666.356 &438.382 \\
$t=1$ & $K=2$ & 790.796& 923.642& 537.174 & 934.074& 689.115& 486.460 & 886.338 &654.512 &685.028\\
& $K=3$ & 789.650& 900.770& 526.572 & 2703.365& 1028.907& 931.789 & 2363.729& 856.385& 3118.939\\
& $K=4$ & 788.745& 888.620& 518.119 & 4981.215& 2690.327& 2624.388& 4796.520& 2255.920& 6970.728\\\hline\\
& $K=0$ & 564.640& 665.940& 443.084 & 568.061 &641.339 & 446.372 & 587.939 &605.724 &396.865 \\
& $K=1$ & 763.999& 909.450& 540.937 & 677.749 &643.612 &436.741 & 702.646& 624.374 &406.205\\
$t=2$ & $K=2$ & 760.145& 885.475& 528.510 & 631.719 &596.345 &421.194 & 647.979 &586.893 &423.463\\
& $K=3$ & 757.198& 866.907& 518.885 & 817.108 &589.144 &518.964 & 766.960 &570.663 &727.407\\
& $K=4$ & 759.711& 852.102& 507.757 & 2248.135 &721.683 &1125.535 & 1924.372 &630.588 &2338.707\\
\hline
\end{tabular}}\normalsize
\end{center}
\end{landscape}
\begin{landscape}
\begin{center}\scriptsize{\tiny
Table 11: The pure premium for the three categories $A_1,$ $A_2,$ and
$A_3$ using the well-fitted models, whenever the total claim size is either
1000 or 5000 units and the exact number of reported claims for each
year of the policyholder's experience
has been considered.\\
\begin{tabular}{cc c cccccccc}
\hline\\
\multicolumn{11}{c}{{\bf Case A: Total of reported claims reaches 1000 units}}\\
\hline\\
\multicolumn{11}{c}{{\bf Model:}}\\ \cline{3-11}\\
Year & Number of reported & &$1INBM_1$ \& Pareto$M_1$ & & & $1INBM_2$ \& Pareto$M_1$ & & & $1INBM_3$ \& Pareto$M_1$ & \\
\cline{3-11}\\
&claims at year $l$ ($k_l$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & -- & 623.391 & 757.191& 571.461& 611.317 & 733.685 & 534.525& 609.443& 790.408& 566.132\\\hline\\
& $k_1=0$ & 398.970 & 477.030 & 502.886 & 507.393 & 594.285 & 513.144 & 481.452 & 497.957 & 543.487\\
& $k_1=1$ & 1023.795 & 1085.798 & 790.796 & 698.893 & 882.399 & 542.755 & 768.625 & 943.307 & 595.197\\
$t=1$ & $k_1=2$ & 3195.859 & 2390.932 & 2178.476 & 1201.672 & 1439.796 & 1069.149 & 1701.115 & 1621.325 & 981.386\\
& $k_1=3$ & 4019.222 & 2562.144 & 2203.502 &1405.025 &1706.367 &1169.205 &2099.022 &1815.284 &915.449 \\
& $k_1=4$ & 4712.021 &2973.705 & 2554.659 &1573.213 &1920.324 &1310.525 &2403.626 &2078.797 &1007.430 \\\hline\\
& $k_1=0, k_2=0$ & 219.378 & 259.780 & 332.448 & 322.694 & 372.101 & 366.774 & 290.423 & 282.966 & 388.461\\
& $k_1=0, k_2=1$ & 650.477 & 721.531 & 682.960 & 587.958 & 712.707 & 523.542 & 608.265 & 614.247 & 523.977\\
& $k_1=0, k_2=2$ & 2342.976 & 1565.418 & 1940.906 & 1052.665 & 1190.942 & 1031.414 & 1427.979 & 1142.587 & 954.742\\
& $k_1=1, k_2=0$ & 650.477 & 721.531 & 682.960 & 587.958 & 712.707 & 523.542 & 608.265 & 614.247 & 523.977\\
$t=2$ & $k_1=1, k_2=1$ & 1406.766 & 1259.673 & 1035.449 & 754.650 & 953.939 & 545.056 & 910.456 & 976.625 & 559.523\\
& $k_1=1, k_2=2$ & 2936.409 & 1942.306 & 2239.077 & 1170.474 & 1366.822 & 1049.038 & 1661.517 & 1398.870 & 937.710\\
& $k_1=2, k_2=0$ & 2342.976 & 1565.418 & 1940.906 & 1052.665 & 1190.942 & 1031.414 & 1427.979 & 1142.587 & 954.742\\
& $k_1=2, k_2=1$ & 2936.409 & 1942.306 & 2239.077 & 1170.474 & 1366.822 & 1049.038 & 1661.517 & 1398.870 & 937.710\\
& $k_1=2, k_2=2$ & 3520.914 & 2276.944 & 2816.357 & 1392.471 & 1577.924 & 1333.877 & 2008.510 & 1572.678 & 1193.225\\
\hline
\multicolumn{11}{c}{{\bf Case B: Total of reported claims reaches 5000 units}}\\
\hline\\
\multicolumn{11}{c}{{\bf Model:}}\\ \cline{3-11}\\
Year & Number of reported & &$1INBM_1$ \& Pareto$M_1$ & & & $1INBM_2$ \& Pareto$M_1$ & & & $1INBM_3$ \& Pareto$M_1$ & \\
\cline{3-11}\\
&claims at year $l$ ($k_l$) & $A_1$ & $A_2$ & $A_3$ & $A_1$ & $A_2$ & $A_3$ & $A_1$ &$A_2$ & $A_3$ \\
\hline
$t=0$ & -- & 623.391 &757.191 &571.461 &611.317 & 733.685 & 534.525 &609.443 &790.408 &566.132\\\hline\\
& $k_1=0$ & 398.970 & 477.030 &502.886 &507.393 &594.285 &513.144 &481.452 &497.957 &543.487\\
$t=1$ & $k_1=1$ & 1300.542 & 1417.866 &946.265 &887.815 &1152.261 &649.459 &976.396 & 1231.797 &712.211\\
& $k_1=2$ & 4059.749 &3122.146 &2606.761 &1526.503 &1880.126 &1279.341 &2160.953 &2117.171 &1174.325\\
& $k_1=3$ & 5105.682 &3345.717 &2636.704 &1784.825 &2228.221 &1399.067 &2666.421 &2370.447 &1095.424\\
& $k_1=4$ & 5985.752 &3883.148 &3056.902 &1998.477 &2507.613 &1568.174 &3053.363 &2714.552 &1205.490 \\\hline\\
& $k_1=0, k_2=0$ & 219.378 &259.780 &332.448 &322.694 & 372.101 &366.774 &290.423 &282.966 &388.461\\
& $k_1=0, k_2=1$ & 826.311 &942.195 &817.229 &746.892 &930.673 &626.469 &772.688 &802.100 &626.989\\
& $k_1=0, k_2=2$ & 2976.319 &2044.167 &2322.484 &1337.216 &1555.166 &1234.188 &1813.983 &1492.022 &1142.443\\
& $k_1=1, k_2=0$ & 826.311 & 942.195 &817.229 &746.892 &930.673 & 626.469 &772.688 &802.100 &626.989\\
$t=2$ & $k_1=1, k_2=1$ & 1787.037 & 1644.916 &1239.016 &958.644 &1245.680 &652.213 &1156.567 & 1275.304 &669.525\\
& $k_1=1, k_2=2$ & 3730.166 &2536.317 &2679.275 &1486.871 &1784.835 &1255.277&2110.651&1826.684 &1122.062\\
& $k_1=2, k_2=0$ & 2976.319 &2044.167 & 2322.484 &1337.216 & 1555.166 &1234.188 &1813.983 &1492.022 &1142.443\\
& $k_1=2, k_2=1$ & 3730.166 &2536.317 & 2679.275 &1486.871 &1784.835 & 1255.277&2110.651&1826.684 &1122.062\\
& $k_1=2, k_2=2$ & 4472.672 &2973.297 &3370.048 &1768.877 &2060.497 &1596.115 &2551.441 &2053.647 &1427.811\\
\hline
\end{tabular}}\normalsize
\end{center}
\end{landscape}
As above, to illustrate how to use the results of
Tables 10 and 11, suppose that {\it either} the Negative Binomial with
one mixture component, $NBM_1,$ {\it or} the 1-Inflated Negative
Binomial with one mixture component, $1INBM_1,$ can be considered
as an appropriate model for claim frequency (the quoted values below
come from the $NBM_1$ and $1INBM_1$ columns of those tables). Now
consider the following three different scenarios.
\begin{description}
\item[Scenario 1:] For a given policyholder in category $A_1$ of Table
7, based upon the results of Tables 10 and 11, respectively, his/her second year pure premium under the $NBM_1$ model is 622.519 units
while under the $1INBM_1$ model it is 3195.859 units, whenever such a policyholder reported 2
claims with total size 1000 units in the first
year. But in the situation that the total size of the two reported claims reaches 5000 units, he/she has to pay 790.796 units under the $NBM_1$ model and
4059.749 units under the $1INBM_1$ model.
\item[Scenario 2:] The given policyholder belongs to category $A_2$ of Table
7. Based upon the results of Tables 10 and 11, respectively, his second year pure premium under the $NBM_1$ model is 707.323 units
while under the $1INBM_1$ model it is 2390.932
units, whenever such a policyholder reported 2 claims with total size 1000 units in the first
year. But in the situation that the total size of the two reported claims reaches 5000 units, he has to pay 923.642 units under the $NBM_1$ model and
3122.146 units under the $1INBM_1$ model.
\item[Scenario 3:] The given policyholder belongs to category
$A_3$ of Table 7. Based upon the results of Tables 10 and 11,
respectively, her second year pure premium under the $NBM_1$ model
is 448.918 units while under the
$1INBM_1$ model it is 2178.476 units, whenever such a policyholder
reported 2 claims with total size 1000 units in the first year.
But in the situation that the total size of the two reported claims reaches
5000 units, she has to pay 537.174 units under the $NBM_1$ model
and 2606.761 units under the $1INBM_1$ model.
\end{description}
The above simple example shows that: ({\bf 1}) the inflated models
provide a fairer pure premium for policyholders who made some
claims in their past experience, while for both Cases A and B the
pure premium under the non-inflated models does not fairly penalize
such policyholders; and ({\bf 2}) in the situation that the number of
reported claims is uniformly distributed over the past experience of a
policyholder (for instance $k_1=1$ and $k_2=1$ instead of $k_1=0$
and $k_2=2$), his/her pure premium under the inflated models is more
appealing and acceptable.
\section{Conclusion and suggestion}
This article introduces a k-Inflated Negative Binomial mixture
(kINBM) distribution/regression model and provides an EM algorithm
to estimate its parameters. An application of the kINBM
distribution/regression model to the number of reported claims under a
rate--making system has been given. Moreover, in order to compute
the pure premium under the system, the severity of reported claims has
been modeled with a Pareto mixture distribution/regression model. As
an application, the frequency of reported claims of the Iranian third party
liability portfolio in 2011 has been modeled by the kINBM and all
models that have previously been used in the literature. The numerical
illustration shows that: ({\bf 1}) the kINBM models provide more
fair rate/pure premiums for policyholders under a rate--making
system; and ({\bf 2}) in the situation that the number of reported
claims is uniformly distributed over the past experience of a policyholder
(for instance $k_1=1$ and $k_2=1$ instead of $k_1=0$ and $k_2=2$),
the rate/pure premiums under the kINBM models are more appealing
and acceptable.
We conjecture that the results of this article may be improved by
considering a Double Inflated Negative Binomial distribution with probability
mass function
$$P(Y=y|{\boldsymbol\theta})=p_1I_{k_1}(y)+p_2I_{k_2}(y)+\sum_{j=3}^{m}p_j\left(\genfrac{}{}{0pt}{}{y+\alpha_j-1}{y}\right){\left(\frac{\tau_j}{\alpha_j+\tau_j}\right)}^{\alpha_j}{\left(\frac{\alpha_j}{\alpha_j+\tau_j}\right)}^yI_{{\Bbb N}}(y),$$
where $k_1,k_2\in{\Bbb N},$ $\sum_{j=1}^{m}p_j=1,$ and
$p_j,\alpha_j,\tau_j\geq0,$ for all $j=1,\cdots,m.$
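A hedged R sketch of this conjectured probability mass function (our own illustration; \texttt{alpha} and \texttt{tau} hold the parameters of the components $j=3,\cdots,m$) is:
{\footnotesize
\begin{verbatim}
dkinbm2 <- function(y, k1, k2, p, alpha, tau) {
  nb <- sapply(y, function(yy)      # the non-inflated mixture part
    sum(p[-(1:2)] * dnbinom(yy, size = alpha, prob = tau / (alpha + tau))))
  p[1] * (y == k1) + p[2] * (y == k2) + nb
}
\end{verbatim}}\normalsize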
\section{Electronic Submission}
\label{submission}
Submission to ICML 2023 will be entirely electronic, via a web site
(not email). Information about the submission process and \LaTeX\ templates
are available on the conference web site at:
\begin{center}
\textbf{\texttt{http://icml.cc/}}
\end{center}
The guidelines below will be enforced for initial submissions and
camera-ready copies. Here is a brief summary:
\begin{itemize}
\item Submissions must be in PDF\@.
\item \textbf{New to this year}: If your paper has appendices, submit the appendix together with the main body and the references \textbf{as a single file}. Reviewers will not look for appendices as a separate PDF file. So if you submit such an extra file, reviewers will very likely miss it.
\item Page limit: The main body of the paper has to be fitted to 8 pages, excluding references and appendices; the space for the latter two is not limited. For the final version of the paper, authors can add one extra page to the main body.
\item \textbf{Do not include author information or acknowledgements} in your
initial submission.
\item Your paper should be in \textbf{10 point Times font}.
\item Make sure your PDF file only uses Type-1 fonts.
\item Place figure captions \emph{under} the figure (and omit titles from inside
the graphic file itself). Place table captions \emph{over} the table.
\item References must include page numbers whenever possible and be as complete
as possible. Place multiple citations in chronological order.
\item Do not alter the style template; in particular, do not compress the paper
format by reducing the vertical spaces.
\item Keep your abstract brief and self-contained, one paragraph and roughly
4--6 sentences. Gross violations will require correction at the
camera-ready phase. The title should have content words capitalized.
\end{itemize}
\subsection{Submitting Papers}
\textbf{Paper Deadline:} The deadline for paper submission that is
advertised on the conference website is strict. If your full,
anonymized, submission does not reach us on time, it will not be
considered for publication.
\textbf{Anonymous Submission:} ICML uses double-blind review: no identifying
author information may appear on the title page or in the paper
itself. \cref{author info} gives further details.
\textbf{Simultaneous Submission:} ICML will not accept any paper which,
at the time of submission, is under review for another conference or
has already been published. This policy also applies to papers that
overlap substantially in technical content with conference papers
under review or previously published. ICML submissions must not be
submitted to other conferences and journals during ICML's review
period.
Informal publications, such as technical
reports or papers in workshop proceedings which do not appear in
print, do not fall under these restrictions.
\medskip
Authors must provide their manuscripts in \textbf{PDF} format.
Furthermore, please make sure that files contain only embedded Type-1 fonts
(e.g.,~using the program \texttt{pdffonts} in linux or using
File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3)
might come from graphics files imported into the document.
Authors using \textbf{Word} must convert their document to PDF\@. Most
of the latest versions of Word have the facility to do this
automatically. Submissions will not be accepted in Word format or any
format other than PDF\@. Really. We're not joking. Don't send Word.
Those who use \textbf{\LaTeX} should avoid including Type-3 fonts.
Those using \texttt{latex} and \texttt{dvips} may need the following
two commands:
{\footnotesize
\begin{verbatim}
dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi
ps2pdf paper.ps
\end{verbatim}}
It is a zero following the ``-G'', which tells dvips to use
the config.pdf file. Newer \TeX\ distributions don't always need this
option.
Using \texttt{pdflatex} rather than \texttt{latex}, often gives better
results. This program avoids the Type-3 font problem, and supports more
advanced features in the \texttt{microtype} package.
\textbf{Graphics files} should be a reasonable size, and included from
an appropriate format. Use vector formats (.eps/.pdf) for plots,
lossless bitmap formats (.png) for raster graphics with sharp lines, and
jpeg for photo-like images.
The style file uses the \texttt{hyperref} package to make clickable
links in documents. If this causes problems for you, add
\texttt{nohyperref} as one of the options to the \texttt{icml2023}
usepackage statement.
\subsection{Submitting Final Camera-Ready Copy}
The final versions of papers accepted for publication should follow the
same format and naming convention as initial submissions, except that
author information (names and affiliations) should be given. See
\cref{final author} for formatting instructions.
The footnote, ``Preliminary work. Under review by the International
Conference on Machine Learning (ICML). Do not distribute.'' must be
modified to ``\textit{Proceedings of the
$\mathit{40}^{th}$ International Conference on Machine Learning},
Honolulu, Hawaii, USA, PMLR 202, 2023.
Copyright 2023 by the author(s).''
For those using the \textbf{\LaTeX} style file, this change (and others) is
handled automatically by simply changing
$\mathtt{\backslash usepackage\{icml2023\}}$ to
$$\mathtt{\backslash usepackage[accepted]\{icml2023\}}$$
Authors using \textbf{Word} must edit the
footnote on the first page of the document themselves.
Camera-ready copies should have the title of the paper as running head
on each page except the first one. The running title consists of a
single line centered above a horizontal rule which is $1$~point thick.
The running head should be centered, bold and in $9$~point type. The
rule should be $10$~points above the main text. For those using the
\textbf{\LaTeX} style file, the original title is automatically set as running
head using the \texttt{fancyhdr} package which is included in the ICML
2023 style file package. In case that the original title exceeds the
size restrictions, a shorter form can be supplied by using
\verb|\icmltitlerunning{...}|
just before $\mathtt{\backslash begin\{document\}}$.
Authors using \textbf{Word} must edit the header of the document themselves.
\section{Format of the Paper}
All submissions must follow the specified format.
\subsection{Dimensions}
The text of the paper should be formatted in two columns, with an
overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches
between the columns. The left margin should be 0.75~inches and the top
margin 1.0~inch (2.54~cm). The right and bottom margins will depend on
whether you print on US letter or A4 paper, but all final versions
must be produced for US letter size.
Do not write anything on the margins.
The paper body should be set in 10~point type with a vertical spacing
of 11~points. Please use Times typeface throughout the text.
\subsection{Title}
The paper title should be set in 14~point bold type and centered
between two horizontal rules that are 1~point thick, with 1.0~inch
between the top rule and the top edge of the page. Capitalize the
first letter of content words and put the rest of the title in lower
case.
\subsection{Author Information for Submission}
\label{author info}
ICML uses double-blind review, so author information must not appear. If
you are using \LaTeX\/ and the \texttt{icml2023.sty} file, use
\verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information
will not be printed unless \texttt{accepted} is passed as an argument to the
style file.
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If you are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section. The only exception are manuscripts that are
not yet published (e.g., under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT\@. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look at the Supplementary Material when writing their
review (they are not required to look at more than the first $8$ pages of the submitted document).
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches below the
bottom rule surrounding the title. The authors' names should appear in 10~point
bold type, in a row, separated by white space, and centered. Author names should
not be broken across lines. Unbolded superscripted numbers, starting 1, should
be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote
block of text should be used to list all the affiliations. (Academic
affiliations should list Department, University, City, State/Region, Country.
Similarly for industrial affiliations.)
Each distinct affiliations should be listed once. If an author has multiple
affiliations, multiple superscripts should be placed after the name, separated
by thin spaces. If the authors would like to highlight equal contribution by
multiple first authors, those authors should have an asterisk placed after their
name in superscript, and the term ``\textsuperscript{*}Equal contribution"
should be placed in the footnote block ahead of the list of affiliations. A
list of corresponding authors and their emails (in the format Full Name
\textless{}email@domain.com\textgreater{}) can follow the list of affiliations.
Ideally only one or two names should be listed.
A sample file with author names is included in the ICML2023 style file
package. Turn on the \texttt{[accepted]} option to the stylefile to
see the names rendered. All of the guidelines above are implemented
by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below the final
address. The heading `Abstract' should be centered, bold, and in 11~point type.
The abstract body should use 10~point type, with a vertical spacing of
11~points, and should be indented 0.25~inches more than normal on left-hand and
right-hand margins. Insert 0.4~inches of blank space after the body. Keep your
abstract brief and self-contained, limiting it to one paragraph and roughly 4--6
sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{Footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and International
Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was
produced, the number of accepted papers for ICML 2008 was unknown and instead
estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to illustrate
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
\cref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
\cref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
\cref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Theorems and such}
The preferred way is to number definitions, propositions, lemmas, etc. consecutively, within sections, as shown below.
\begin{definition}
\label{def:inj}
A function $f:X \to Y$ is injective if for any distinct $x,y\in X$, $f(x)\ne f(y)$.
\end{definition}
Using \cref{def:inj} we immediately get the following result:
\begin{proposition}
If $f$ is an injective mapping from a set $X$ to another set $Y$,
then the cardinality of $Y$ is at least as large as that of $X$.
\end{proposition}
\begin{proof}
Left as an exercise to the reader.
\end{proof}
\cref{lem:usefullemma} stated next will prove to be useful.
\begin{lemma}
\label{lem:usefullemma}
For any injective functions $f:X \to Y$ and $g:Y\to Z$, the composition $g \circ f$ is injective.
\end{lemma}
\begin{theorem}
\label{thm:bigtheorem}
If $f:X\to Y$ is bijective, the cardinality of $X$ and $Y$ are the same.
\end{theorem}
An easy corollary of \cref{thm:bigtheorem} is the following:
\begin{corollary}
If $f:X\to Y$ is bijective,
the cardinality of $X$ is at least as large as that of $Y$.
\end{corollary}
\begin{assumption}
The set $X$ is finite.
\label{ass:xfinite}
\end{assumption}
\begin{remark}
According to some, it is only the finite case (cf. \cref{ass:xfinite}) that is interesting.
\end{remark}
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2023.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to \cref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent, e.g. use the actual current name of authors.
If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
\section*{Accessibility}
Authors are kindly asked to make their submissions as accessible as possible for everyone including people with disabilities and sensory or neurological differences.
Tips of how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\section{Introduction}
Random feature decomposition is an important technique for the linearization of nonlinear kernel functions with theoretical guarantees such as unbiasedness and concentration around the true kernel value. Linearization allows a significant reduction in computations from quadratic to linear complexity in the size of the operator induced by the kernel. The technique emerged under the name of \textit{random kitchen sinks} (RKS) introduced in \cite{rfs2, rfs, rfs3} and was used in many applications such as kernel SVM \cite{svmrfs,rf-core-1,rf-core-4,rf-core-5}, dimensionality reduction \cite{rf-core-2,rf-core-3}, neural networks \cite{cho, nnrfs, xie, han, rf-core-6}, function-to-function regression \cite{oliva}, kernel regression \cite{laparra, kernel-ridge-rfs}, nonparametric adaptive control \cite{boffi}, differentially-private ML algorithms \cite{sarwate}, operator-valued kernels \cite{minh} and semigroup kernels \cite{yang}. An in-depth theoretical analysis of random features was performed by \citet{liton, nystromrfs, sutherland, szabo}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{derf_drawing.pdf}
\caption{\textbf{(top)} Venn diagram of the new types of random features (green) we propose. \textbf{(bottom)} Logarithm of the relative variance of different random feature maps on pairs of vectors sampled from CIFAR10 and MNIST/CIFAR10. Our new random feature map SDERF yields a consistent variance reduction relative to the previous best method GERF, by up to $\approx e^{10}$ and $\approx e^{5}$ times respectively. Figure \ref{fig:vars} extends this plot; see Section \ref{sec:expvars} for details.}
\label{fig:derf_map}
\end{figure}
An exciting recent application of random features is in the area of scalable Transformer networks \cite{performer,hrfs,tr-kernel,pmlr-v119-katharopoulos20a}, where the self-attention matrix is approximated as a low-rank matrix when the sequence is long. However, the RKS family of methods relies on the Fourier transform, resulting in $\sin$ and $\cos$ types of random features, which were shown to be unsuitable for application in Transformers due to negative values in the low-rank matrix. \citet{performer} proposed a solution in the form of positive-valued random features relying on the exponential function (\textit{positive random features, PosRFs}), yielding a method they called \textit{Fast Attention Via Orthogonal positive Random features (FAVOR+)} for self-attention approximation. This solution was improved by \citet{crt} by means of a careful choice of the linear combination parameters under the exponent, and the so-called \textit{homogeneity heuristic}, which allows a choice of one set of parameters for all approximated values. The resulting random features were called \textit{generalized exponential random features (GERFs)}, and the corresponding self-attention approximation method was termed \textit{FAVOR++}.
\textbf{Contributions:} In this paper, we make a leap forward in the design of positive-valued random features by proposing \textit{dense exponential random features (DERFs)} which contain both PosRFs and GERFs as special cases. Instead of scalar parameters as in GERFs, DERFs rely on matrix parameters and dense quadratic forms inside the exponent. We show how to select parameters of the new random features efficiently without harming the overall subquadratic complexity.
More technically, our contributions are as follows:
1. We show that the homogeneity heuristic of \citet{crt} may in fact be viewed not as a heuristic, but as a closed-form optimum of the \textit{shifted log-variance objective}.
2. We introduce DERFs and three special instantiations: \textit{asymmetric} DERFs (\textit{ADERFs}), \textit{symmetric} DERFs (\textit{SDERFs}), and \textit{simplified} ADERFs (\textit{SADERFs}). All these instantiations contain GERFs as a special case (Figure \ref{fig:derf_map}, top). For each instantiation we show how to find a closed-form optimum of the shifted log-variance objective efficiently.
3. We show that our new variants result in lower variance than GERFs and other previous methods in practice (e.g. up to $e^{10}$ times variance improvement as in Figure \ref{fig:derf_map}, bottom). Further, we show that DERFs outperform other random features in kernel regression and Transformer setups (speech modelling and natural language processing). We refer to the DERF-based self-attention approximation method as \textit{FAVOR\#}.
\section{Prerequisites}
\subsection{Scaled Softmax Kernel and Random Features}
By the \textit{scaled softmax kernel} $K^{(\alpha)}: \mathbb{R}^d \times \mathbb{R}^d \to (0, +\infty)$, where $\alpha \in \mathbb{R}$, we denote a mapping defined as $K^{(\alpha)} (\*x, \*y) = \exp(\alpha \| \*x \|^2 + \*x^\top \*y + \alpha \| \*y \|^2)$ for all $\*x, \*y \in \mathbb{R}^d$ where $\| \cdot \|$ is an $L_2$-norm. Two important special cases of the scaled softmax kernel are 1) the \textit{Gaussian kernel} $K^{(-1/2)} (\*x, \*y) = \exp(- \| \*x - \*y \|^2 / 2)$ and 2) the \textit{softmax kernel} $K^{(0)} (\*x, \*y) = \exp(\*x^\top \*y)$. For two sets of vectors $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$ and $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$, by $\mathcal{K}(\mathcal{X}, \mathcal{Y}) \in \mathbb{R}^{L \times L}$ we denote a matrix where $\mathcal{K}^{(\alpha)}(\mathcal{X}, \mathcal{Y})_{i,j} = K^{(\alpha)}(\*x^{(i)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$.
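As a quick numerical illustration of these identities (a minimal NumPy sketch of ours, not part of the formal development), one can check that $K^{(-1/2)}$ coincides with the Gaussian kernel and $K^{(0)}$ with the softmax kernel:
\begin{verbatim}
import numpy as np

def scaled_softmax_kernel(x, y, alpha):
    # K^(alpha)(x, y) = exp(alpha||x||^2 + x^T y + alpha||y||^2)
    return np.exp(alpha * x @ x + x @ y + alpha * y @ y)

x, y = np.random.randn(4), np.random.randn(4)
# alpha = -1/2 recovers exp(-||x - y||^2 / 2):
assert np.isclose(scaled_softmax_kernel(x, y, -0.5),
                  np.exp(-np.sum((x - y) ** 2) / 2))
# alpha = 0 recovers exp(x^T y):
assert np.isclose(scaled_softmax_kernel(x, y, 0.0),
                  np.exp(x @ y))
\end{verbatim}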
In this paper, we will be interested in the problem of computing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ where $\mathcal{X}$, $\mathcal{Y}$ and a matrix $\*C \in \mathbb{R}^{L \times n}$ are provided as an input. A naive solution requires $O(L^2 (d + n))$ computations for constructing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ ($O (L^2 d)$) and computing the matrix multiplication $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ ($O(L^2 n)$). Instead, we will use an efficient Monte-Carlo approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ using the following notion of \textit{random features (RFs) for the scaled softmax kernel}:
\begin{definition} \label{def:rfs}
By random features for the scaled softmax kernel $K^{(\alpha)}$, $\alpha \in \mathbb{R}$, we denote a triple $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ where $\nu$ is a probability distribution over random objects $\bs{\omega} \in \Omega$ and $f^{(i)} : \Omega \times \mathbb{R}^d \to \mathbb{R}$, $i \in \{ 1, 2 \}$, are such mappings that, for all $\*x, \*y \in \mathbb{R}^d$,
\begin{equation}
K^{(\alpha)} (\*x, \*y) = \mathbb{E}_{\nu} \left[ f^{(1)} (\bs{\omega}, \*x) f^{(2)} (\bs{\omega}, \*y) \right] . \label{eq:rfdec}
\end{equation}
\end{definition}
The decomposition of type (\ref{eq:rfdec}) can be used for an efficient unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$. Let $\bs{\omega}^{(1)}, \dots, \bs{\omega}^{(M)} \in \Omega$ be i.i.d. samples from $\nu$. Define matrices $\*P, \*S \in \mathbb{R}^{L \times M}$ where for all $1 \leq i, j \leq L$,
\begin{align}
\*P_{i,:} &= M^{-1/2}(f^{(1)} (\bs{\omega}^{(m)}, \*x^{(i)}))_{m = 1}^M, \label{eq:ps1} \\
\*S_{j,:} &= M^{-1/2}(f^{(2)} (\bs{\omega}^{(m)}, \*y^{(j)}))_{m = 1}^M ,\label{eq:ps2}
\end{align}
where $\*P_{i,:}, \*S_{j,:} \in \mathbb{R}^M$ are \textit{column vectors} corresponding to the rows of $\*P, \*S$. Then according to (\ref{eq:rfdec}), $\widehat{\mathcal{K}} = \*P \*S^\top$ is an unbiased \textit{Monte Carlo (MC)} approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ on $M$ samples. The variance $\mathrm{Var} \, \widehat{\mathcal{K}}_{i,j} = M^{-1} \mathrm{Var}_\nu f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)})$ of this approximation is inversely proportional to $M$, hence $M$ is a tradeoff parameter between computations and precision. Now, $\widehat{\mathcal{K}} \*C$ is an unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ but $\widehat{\mathcal{K}} = \*P \*S^\top$ is a rank-$M$ matrix, hence computing $\widehat{\mathcal{K}} \*C$ has $O(L M n)$ complexity. Assuming that sampling each $\bs{\omega}^{(m)}$ and computing $f^{(\cdot)} (\cdot, \cdot)$ are $O(d)$ operations, which is usually the case, precomputing $\*P$ and $\*S$ takes $O(L M d)$ computations, resulting in a total $O(L M (d + n))$ computational complexity. By choosing $M \ll L$, we obtain a significant reduction in computations compared to the exact variant: $O (L M (d + n)) \ll O (L^2 (d + n))$.
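To make the bookkeeping concrete, the following NumPy sketch (ours; the feature maps are passed in as callables and all names are illustrative) assembles $\*P$ and $\*S$ as in (\ref{eq:ps1}-\ref{eq:ps2}) and evaluates the low-rank product without ever forming the $L \times L$ matrix:
\begin{verbatim}
import numpy as np

def rf_matvec(f1, f2, omegas, X, Y, C):
    # f1, f2: callables (omega, x) -> scalar, as in (eq:rfdec).
    # omegas: M i.i.d. samples from nu; X, Y: (L, d); C: (L, n).
    M = len(omegas)
    P = np.array([[f1(w, x) for w in omegas]
                  for x in X]) / np.sqrt(M)   # (L, M)
    S = np.array([[f2(w, y) for w in omegas]
                  for y in Y]) / np.sqrt(M)   # (L, M)
    # Parenthesization keeps the cost at O(L M (d + n)):
    return P @ (S.T @ C)
\end{verbatim}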
Operations of type $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$, especially for the Gaussian kernel $\alpha = -1/2$, emerge in kernel SVM \cite{rfs}, kernel regression \cite{nadaraya,watson} and in physics in the form of the Gauss transform \cite{gausstr}. Another important application has recently emerged in the area of efficient Transformers and is discussed in the next section \cite{performer}.
\subsection{Random Features for Efficient Transformers} \label{sec:efftr}
RFs found a prominent application in the area of efficient long-sequence Transformers \cite{performer}. Transformers rely on a self-attention block for propagating information between elements of the sequence. If the sequence length is $L$ and input matrices are denoted as $\*Q, \*K, \*V \in \mathbb{R}^{L \times d}$ (\textit{queries}, \textit{keys} and \textit{values}), then self-attention outputs the following matrix:
\begin{equation}
\*Y = \mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*V \in \mathbb{R}^{L \times d}, \label{eq:att}
\end{equation}
where $\*1_L \in \mathbb{R}^L$ is a vector of all ones, $\mathrm{diag} (\cdot)$ returns a diagonal $(L \times L)$-sized matrix with the argument on the diagonal, $\mathcal{X} = \{ \*x^{(i)} = d^{-1 / 4} \*Q_{i,:} \in \mathbb{R}^d \}$, and $\mathcal{Y} = \{ \*y^{(j)} = d^{-1 / 4} \*K_{j,:} \in \mathbb{R}^d \}$. Hence, substitution of $\widehat{\mathcal{K}}$ instead of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ in (\ref{eq:att}) reduces computational complexity from $O (L^2 d)$ to $O (L M d)$ ($n = d + 1$). $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ is the result of a \textit{softmax} operation performed on rows of $d^{-1/2} \*Q \*K^\top$.
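A hedged sketch of how (\ref{eq:att}) is approximated in practice (ours; it assumes the feature matrices $\*P$ and $\*S$ for queries and keys have already been built as in (\ref{eq:ps1}-\ref{eq:ps2})):
\begin{verbatim}
import numpy as np

def rf_attention(P, S, V):
    # Approximates Y = diag(Khat 1_L)^{-1} Khat V with
    # Khat = P S^T, never materializing the L x L matrix.
    SV = S.T @ V            # (M, d), O(L M d)
    s_ones = S.sum(axis=0)  # (M,), equals S^T 1_L
    numer = P @ SV          # (L, d)
    denom = P @ s_ones      # (L,), row sums of Khat
    return numer / denom[:, None]
\end{verbatim}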
\subsection{Existing Random Features for the Softmax Kernel} \label{sec:exist}
Representation (\ref{eq:rfdec}) is not unique and different RFs can be proposed for a single $K^{(\alpha)}$. Note that if $\langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for $K^{(0)}$, then $\langle \nu, \widehat{f}^{(1)}, \widehat{f}^{(2)} \rangle$ are RFs for $K^{(\alpha)}$ for $\alpha \in \mathbb{R}$ where $\widehat{f}^{(k)} (\bs{\omega}, \*x) = \exp(\alpha \| \*x \|^2) f^{(k)} (\bs{\omega}, \*x)$. Hence, hereafter we focus on the softmax kernel $K^{(0)}$ without loss of generality.
\citet{perf0} proposed to use \textit{trigonometric random features (TrigRFs)} from \cite{rfs} in the efficient Transformer application:
{\small
\begin{gather}
\Omega_\mathrm{trig} = \mathbb{R}^{d + 1}, \,\, \nu_\mathrm{trig} = \mathrm{Unif} ([0, 2 \pi]) \times \mathcal{N} (0, 1)^d, \label{eq:trigrf1} \\
f^{(1)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*x) = \sqrt{2} \exp (\| \*x \|^2 / 2) \cos(\widetilde{\bs{\omega}}^\top \*x + \theta), \\
f^{(2)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*y) = \sqrt{2} \exp(\| \*y \|^2 / 2) \cos(\widetilde{\bs{\omega}}^\top \*y \! + \! \theta), \label{eq:trigrf3}
\end{gather}}
where $\bs{\omega} = (\theta, \widetilde{\bs{\omega}})$, $\mathrm{Unif} (\cdot)$ denotes a uniform distribution on the argument set, $\mathcal{N} (\*0_d, \*I_d)$ is a multivariate Gaussian distribution with mean $\*0_d$ (vector of $d$ zeros) and covariance matrix $\*I_d$ (identity matrix of size $d \times d$).
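For concreteness, a minimal MC estimator of $K^{(0)} (\*x, \*y)$ with TrigRFs (our sketch; the sample size and seed are arbitrary):
\begin{verbatim}
import numpy as np

def trig_rf_estimate(x, y, M=100_000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, len(x)))    # omega_tilde
    theta = rng.uniform(0.0, 2 * np.pi, M)  # theta
    f1 = np.sqrt(2) * np.exp(x @ x / 2) * np.cos(W @ x + theta)
    f2 = np.sqrt(2) * np.exp(y @ y / 2) * np.cos(W @ y + theta)
    return float(np.mean(f1 * f2))  # -> exp(x @ y) as M grows
\end{verbatim}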
The next iteration of efficient attention approximators \cite{performer} observed a problem with TrigRFs (\ref{eq:trigrf1}-\ref{eq:trigrf3}). The \textit{attention matrix} $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ from (\ref{eq:att}) is \textit{right stochastic} meaning that its entries are nonnegative and each row sums to $1$ due to the normalizing term $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1}$. However, since $f^{(\cdot)}_\mathrm{trig}$ can be arbitrary real numbers, $\*P, \*S$ (\ref{eq:ps1}-\ref{eq:ps2}) and, therefore, $\widehat{\mathcal{K}}$ can take negative values. Hence, $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ is not right stochastic in general and entries of $\widehat{\mathcal{K}} \*1_L$ can take very small and/or negative values resulting in unstable behaviour when inverting $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1}$. \citet{performer} therefore proposed a new type of \textit{positive random features (PosRFs)} which have the form:
\begin{gather}
\Omega_\mathrm{pos} = \mathbb{R}^d, \quad \nu_\mathrm{pos} = \mathcal{N} (0, 1)^d, \label{eq:posrf1} \\
f^{(1)}_\mathrm{pos} (\bs{\omega}, \*x) = f^{(2)}_\mathrm{pos} (\bs{\omega}, \*x) = \exp (\bs{\omega}^\top \*x - \| \*x \|^2 / 2). \label{eq:posrf3}
\end{gather}
It is clear that such $f^{(\cdot)}_\mathrm{pos}$ take only strictly positive values, resulting in a right stochastic $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ and a stable Transformer training procedure.
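PosRFs are simple to vectorize; a sketch of ours computing a whole feature block at once:
\begin{verbatim}
import numpy as np

def pos_rf_features(X, W):
    # X: (L, d) inputs; W: (M, d), rows drawn from N(0, 1)^d.
    # Returns the (L, M) block of f_pos values, all positive.
    sq = np.sum(X * X, axis=1, keepdims=True)  # ||x||^2, (L, 1)
    return np.exp(X @ W.T - sq / 2)
\end{verbatim}
Averaging \texttt{pos\_rf\_features(x[None], W) * pos\_rf\_features(y[None], W)} over the rows of \texttt{W} gives an unbiased estimate of $\exp(\*x^\top \*y)$.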
\citet{crt} extended PosRFs, proposing \textit{generalized exponential random features (GERFs)}\footnote{\citet{crt} define these RFs for $K^{(-1/2)}$ but we adapt them for $K^{(0)}$ using the trick mentioned above.} for $K^{(0)}$:
\begin{gather}
\Omega_\mathrm{GE} = \mathbb{R}^d, \quad \nu_\mathrm{GE} = \mathcal{N} (0, 1)^d, \quad f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) = \label{eq:gerf1} \\
= \! f^{(2)}_\mathrm{GE} (\bs{\omega}, \! \*x) \! = \! D \exp (A \| \bs{\omega} \|^2 \!\! + \! B \bs{\omega}^\top \*x \! + \! C \| \*x \|^2), \label{eq:gerf3}
\end{gather}
where $A, B, C, D$ are real numbers\footnote{\citet{crt} consider a more generalized form when $A, B, C, D$ are complex with an additional parameter $s = \pm 1$, however only the subfamily (\ref{eq:gerf1}-\ref{eq:gerf3}) with $s = 1$ is proposed for use in the Transformer application.} satisfying:
\begin{equation*}
1 - 8 A > 0, \, B = \sqrt{1 - 4 A}, \, C = -\frac12, \, D = (1 - 4 A)^{\frac{d}{4}} .
\end{equation*}
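Given an admissible $A$, the remaining GERF parameters follow mechanically from these constraints; a small helper of ours:
\begin{verbatim}
import numpy as np

def gerf_params(A, d):
    # Valid whenever 1 - 8A > 0 (constraints above).
    assert 1 - 8 * A > 0
    B = np.sqrt(1 - 4 * A)
    C = -0.5
    D = (1 - 4 * A) ** (d / 4)
    return B, C, D

def gerf_feature(omega, x, A):
    B, C, D = gerf_params(A, len(x))
    return D * np.exp(A * omega @ omega + B * omega @ x
                      + C * x @ x)
\end{verbatim}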
\citet{crt} express $B, C, D$ through $A$ and find a closed-form equation for the variance of (\ref{eq:rfdec}):
\begin{gather}
\mathrm{Var}_{\nu_\mathrm{GE}} f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{GE} (\bs{\omega}, \*y) \! = \! e^{\mathcal{L}_\mathrm{GE} (A, \*x, \*y)} \! - \! K^{(0)} (\*x, \*y)^2, \nonumber \\
\mathcal{L}_\mathrm{GE} (A, \*x, \*y) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) + \frac{2 (1 - 4 A)}{1 - 8 A} \label{eq:gevar1} \\
\times \| \*x + \*y \|^2 - \| \*x \|^2 - \| \*y \|^2 .\label{eq:gevar2}
\end{gather}
The minimum variance corresponds to the minimum $\mathcal{L}_\mathrm{GE} (A, \*x, \*y)$ since $K^{(0)} (\*x, \*y)^2$ does not depend on $A$. Since $\mathcal{L}_\mathrm{GE} (A, \*x, \*y)$ is defined for a single pair of $\*x, \*y$ and not for sets $\mathcal{X}, \mathcal{Y}$, \citet{crt} propose a \textit{homogeneity heuristic} where they replace $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ in (\ref{eq:gevar1}-\ref{eq:gevar2}) with averages over $\mathcal{X}, \mathcal{Y}$: $L^{-2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2$, $L^{-1} \sum_i \| \*x^{(i)} \|^2$ and $L^{-1} \sum_j \| \*y^{(j)} \|^2$ respectively. This heuristic is based on the assumption that $\{ \*x^{(i)} \}$ and $\{ \*y^{(j)} \}$ are homogeneous and their statistics are tightly concentrated around the mean. After this substitution, the minimum of (\ref{eq:gevar1}-\ref{eq:gevar2}) with respect to $A$ can be found in closed form.
\section{Dense-Exponential Random Features (DERFs)} \label{sec:derf}
We prove that the homogeneity heuristic corresponds to a certain minimization problem. Then, we present DERFs which generalize GERFs and provide a tighter solution of that problem.
\subsection{The Objective Minimized by GERFs} \label{sec:gerfobj}
Our first contribution is showing that the homogeneity heuristic adopted in GERFs is actually an analytic solution of a certain optimization problem. Define
{\small \begin{align}
\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T}) = L^{-2} \sum_{1 \leq i,j \leq L} \log (\mathrm{Var}_\nu [f^{(1)} (\bs{\omega}, \*x^{(i)}) \nonumber \\
\times f^{(2)} (\bs{\omega}, \*y^{(j)})] + K^{(0)} (\*x^{(i)}, \*y^{(j)})^2 ), \label{eq:ldef}
\end{align}}
where $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for the kernel $K^{(0)}$ and $\bs{\theta}$ are their parameters.
Objective (\ref{eq:ldef}) is a mean log-variance shifted by $K^{(0)}(\*x^{(i)}, \*y^{(j)})^2$. The best possible value of (\ref{eq:ldef}) is $2 L^{-2} \sum_{i,j} \log K^{(0)} (\*x^{(i)}, \*y^{(j)})$, which corresponds to all variances $\mathrm{Var} f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)})$ being zero, meaning that RFs provide exact kernel estimation. Hence, minimization of (\ref{eq:ldef}) leads to more precise estimators on $\mathcal{X}, \mathcal{Y}$. We call the loss function $\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T})$ the \textit{shifted log-variance} objective.
If $\mathcal{T}_\mathrm{GE} = \langle \nu_\mathrm{GE}, f^{(1)}_\mathrm{GE}, f^{(2)}_\mathrm{GE} \rangle$ are taken in (\ref{eq:ldef}), then $\bs{\theta}_\mathrm{GE} = \{ A, B, C, D \}$ and $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = L^{-2} \sum_{i,j} \mathcal{L}_\mathrm{GE} (A; \*x^{(i)}, \*y^{(j)})$. Using (\ref{eq:gevar1}-\ref{eq:gevar2}), we get:
\begin{gather*}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) +
\frac{2 - 8 A}{1 - 8 A} \\
\times \frac{1}{L^2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2 \! - \! \frac{1}{L} \sum_i \| \*x^{(i)} \|^2 \! - \! \frac{1}{L} \sum_j \| \*y^{(j)} \|^2 .
\end{gather*}
That is, $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE})$ coincides with (\ref{eq:gevar1}-\ref{eq:gevar2}) when $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ are replaced by their average statistics computed on $\mathcal{X}, \mathcal{Y}$. Hence, the homogeneity heuristic is nothing but minimization of (\ref{eq:ldef}). While in general it's unclear how to find a closed-form optimum of $\mathrm{Var} \widehat{\mathcal{K}}$ or $\mathrm{Var} (\widehat{\mathcal{K}} \*C)$, the global minimum of (\ref{eq:ldef}) can be found in closed form and computed in $O(1)$ time. Further, \citet{crt} show that optimization of (\ref{eq:ldef}) leads to very good results in large-scale applications of efficient Transformers. Below, we present a number of extensions of GERFs, all of which aim to minimize (\ref{eq:ldef}) in closed form.
\subsection{Towards DERFs}
Dense-exponential random features (DERFs) are an extension of GERFs where the scalars $A, B, C$ are replaced with dense matrices.
DERFs may be viewed as a generalization that contains the previously introduced classes as special cases.
We define DERFs as follows: $\Omega_\mathrm{DE} = \mathbb{R}^d$, $\nu_\mathrm{DE} = \mathcal{N} (0, 1)^d$ and for $k \in \{ 1, 2 \}$:
\begin{equation*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x) \! = \! D \exp(\bs{\omega}^\top \*A \bs{\omega} + \bs{\omega}^\top \*B^{(k)} \*x + \*x^\top \*C^{(k)} \*x),
\end{equation*}
where $\*B^{(k)}, \*C^{(k)} \in \mathbb{R}^{d \times d}$, $D \in \mathbb{R}$, $\*A \in \mathbb{S}_d$ (a set of $d \times d$ real symmetric matrices). Clearly, GERFs with parameters $A, B, C, D$ can be expressed via DERFs with parameters $\*A = A \*I_d$, $\*B^{(1)} = \*B^{(2)} = B \*I_d$, $\*C^{(1)} = \*C^{(2)} = C \*I_d$, and $D$ unchanged. Our first theoretical result gives the conditions under which $\mathcal{T}_\mathrm{DE} = \langle \nu_\mathrm{DE}, f^{(1)}_\mathrm{DE}, f^{(2)}_\mathrm{DE} \rangle$ are valid RFs:
\begin{theorem} \label{th:derf}
Let the following conditions hold:
\begin{gather*}
8 \*A \prec \*I_d, \quad (\*B^{(1)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(2)} = \*I_d, \quad \*C^{(k)} = \\
- \frac{1}{2} (\*B^{(k)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(k)}, \,\, D = \det (\*I_d - 4 \*A)^{1/4}
\end{gather*}
where $k \in \{ 1, 2 \}$. Then $\mathcal{T}_\mathrm{DE}$ are RFs for $K^{(0)}$ and, for all $\*x, \*y \in \mathbb{R}^d$:
\begin{align}
&\mathrm{Var}_{\nu_\mathrm{DE}} f^{(1)}_\mathrm{DE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{DE} (\bs{\omega}, \*y) = D^4 \det (\*I_d - 8 \*A)^{-1/2} \nonumber \\
&\times \exp\biggl(2 \*x^\top \left( \*C^{(1)} + (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(1)} \right) \*x \nonumber \\
&+ 2 \*y^\top \left( \*C^{(2)} + (\*B^{(2)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \right) \*y \nonumber \\
&+ \! 4 \*x^\top (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \*y \! \biggr) \! - \! K^{(0)} (\*x, \*y)^2 . \label{eq:devar}
\end{align}
\end{theorem}
Our ultimate goal is to find optimal parameters $\*A, \*B^{(k)}, \*C^{(k)}$ and $D$ minimizing the variance of the low-rank approximation of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ where sets $\mathcal{X}, \mathcal{Y}$ are provided. Our first observation is that we can assume that $\*A \in \mathbb{D}_d$ (a set of $d \times d$ real diagonal matrices). Indeed, any symmetric $\*A$ can be expressed as $\*Q \widetilde{\*A} \*Q^\top$ where $\*Q \in \mathbb{O}_d$ (a set of orthogonal matrices $\{ \*Z \in \mathbb{R}^{d \times d} \, | \, \*Z^\top \*Z = \*I_d \}$) and $\widetilde{\*A} \in \mathbb{D}_d$. Let $\bs{\omega} \sim \mathcal{N} (\*0_d, \*I_d)$. Then, for any $\*x \in \mathbb{R}^d$, $k \in \{ 1, 2 \}$,
\begin{gather*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \! \*x) \! = \! D \exp(\bs{\omega}^\top \*Q \! \widetilde{\*A} \*Q^\top \bs{\omega} \! + \! \bs{\omega}^\top \*B^{(k)} \*x \! + \! \*x^\top \*C^{(k)} \*x) \\
= \exp(\widetilde{\bs{\omega}}^\top \widetilde{\*A} \widetilde{\bs{\omega}} + \widetilde{\bs{\omega}}^\top \! \widetilde{\*B}^{(k)} \*x + \*x^\top \! \*C^{(k)} \*x) = \widetilde{f}^{(k)}_\mathrm{DE} (\widetilde{\bs{\omega}}, \*x) ,
\end{gather*}
where $\widetilde{\*B}^{(k)} = \*Q^\top \*B^{(k)}$, $\widetilde{\bs{\omega}} = \*Q^\top \bs{\omega} \sim \mathcal{N} (\*0_d, \*I_d)$ since the distribution $\bs{\omega} \sim \mathcal{N} (\*0_d, \*I_d)$ is \textit{isotropic}, i.e. rotation-invariant, and $\widetilde{f}^{(k)}_\mathrm{DE}$ are DERFs with parameters $\widetilde{\*A}$, $\widetilde{\*B}^{(k)}$, $\*C^{(k)}$, $D$. We conclude that with any $\*A$, $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ can be expressed as DERFs $\widetilde{f}^{(k)}_\mathrm{DE}$ with $\widetilde{\*A} \in \mathbb{D}_d$. Hence, hereafter we only consider $\*A \in \mathbb{D}_d$ without loss of generality.
Since $\*B^{(k)}, \*C^{(k)}$ are dense matrices in general, evaluation of $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ takes $O(d^2)$ time which is bigger than the $O(d)$ complexity for TrigRFs, PosRFs and GERFs. However, $\*P$ and $\*S$ matrices (\ref{eq:ps1}-\ref{eq:ps2}) can be still computed in a time subquadratic in $L$. For that, precompute $(\*B^{(k)})^\top \bs{\omega}^{(m)}$, $\*C^{(1)} \*x^{(i)}$, $\*C^{(2)} \*y^{(j)}$ for all $k \in \{ 1, 2\}$, $1 \leq m \leq M$, $1 \leq i, j \leq L$ in $O((M + L) d^2)$ time. Then, computing $f^{(1)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*x^{(i)})$, $f^{(2)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$, $1 \leq m \leq M$ takes $O(L M d)$ operations. The complexity of constructing (\ref{eq:ps1}-\ref{eq:ps2}) then is $O (L (M d + d^2) + M d^2)$ which is still subquadratic in $L$.
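The precomputation argument translates directly into vectorized code; a sketch of ours for one feature block (the diagonal $\*A$ is stored as a vector):
\begin{verbatim}
import numpy as np

def derf_feature_block(X, W, A_diag, B, C, D):
    # X: (L, d); W: (M, d) Gaussian samples; A_diag: (d,);
    # B, C: (d, d); D: scalar. Returns sqrt(M) * P, shape (L, M).
    quad_w = (W * W) @ A_diag                 # w^T A w, O(M d)
    BX = X @ B.T                              # B x^(i), O(L d^2)
    quad_x = np.einsum('ld,ld->l', X @ C, X)  # x^T C x, O(L d^2)
    cross = BX @ W.T                          # w^T B x, O(L M d)
    return D * np.exp(quad_w[None, :] + cross + quad_x[:, None])
\end{verbatim}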
Our goal is to minimize $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{DE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{DE})$ for $\bs{\theta}_\mathrm{DE} = \{ \*A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. However, we find that even for a single pair of $\*x, \*y$ it's unclear how to minimize the variance (\ref{eq:devar}) in closed form. Hence, below we consider
special cases where an analytic solution is feasible.
\subsection{Asymmetric Dense-Exponential Random Features} \label{sec:aderf}
Define RFs $\mathcal{T}_\mathrm{ADE} = \langle \nu_\mathrm{ADE}, f^{(1)}_\mathrm{ADE}, f^{(2)}_\mathrm{ADE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*A = A \*I_d$ where $A \in \mathbb{R}$. We refer to these RFs as \textit{asymmetric dense-exponential RFs (ADERFs)} since $f^{(1)}_\mathrm{ADE} \neq f^{(2)}_\mathrm{ADE}$ in general. The only additional restriction of ADERFs compared to DERFs is that all diagonal entries of $\*A \in \mathbb{D}_d$ are the same. The parameters of $\mathcal{T}_\mathrm{ADE}$ are $\bs{\theta}_\mathrm{ADE} = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. By $\Theta_\mathrm{ADE}$ denote a set of all possible $\bs{\theta}_\mathrm{ADE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying conditions from Theorem \ref{th:derf} with $\*A = A \*I_d$. The following result gives an analytic formula for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$. In the theorem, we use notions of SVD and eigendecomposition of a symmetric matrix \cite{nla} (all proofs are in the Appendix).
\begin{theorem}
\label{th:aderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$. Let $\*M^{(1)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} (\*x^{(i)})^\top$, $\*M^{(2)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} (\*y^{(j)})^\top$.
Suppose that $\*M^{(1)}, \*M^{(2)} \in \mathbb{S}_d$ are nonsingular. Define $\mu^{(3)} = d^{-1} L^{-2} \left( \sum_{i = 1}^L \*x^{(i)} \right)^\top \left( \sum_{j = 1}^L \*y^{(j)} \right) \in \mathbb{R}$. For $k \in \{ 1, 2 \}$, let $\*M^{(k)} = \*Q^{(k)} \bs{\Lambda}^{(k)} (\*Q^{(k)})^\top$ be the eigendecomposition of the symmetric $\*M^{(k)}$ where $\*Q^{(k)} \in \mathbb{O}_d$. $\bs{\Lambda}^{(k)} \in \mathbb{D}_d$ has strictly positive diagonal values since $\*M^{(k)} \succeq 0$ by definition and $\*M^{(k)}$ is nonsingular. Let $\*U \bs{\Sigma} \*V^\top$ be the SVD of $(\bs{\Lambda}^{(1)})^\frac12 (\*Q^{(1)})^\top \*Q^{(2)} (\bs{\Lambda}^{(2)})^\frac12$ where $\*U, \*V \in \mathbb{O}_d$, $\bs{\Sigma} \in \mathbb{D}_d$ has nonnegative diagonal entries.
One of the solutions $\bs{\theta}_\mathrm{ADE}^* = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}$, $\*C^{(2)}, D \}$ of $\min_{\bs{\theta}_\mathrm{ADE} \in \Theta_\mathrm{ADE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$ is as follows. Set $\phi = 2 d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + 2 \mu^{(3)}$ and, for $k \in \{ 1, 2 \}$,
\begin{small}
\begin{gather*}
A = \frac{1}{16} \left( 1 - 2 \phi - \sqrt{\left(2 \phi + 1\right)^2 + 8 \phi} \right), \\
\*B^{(1)} = \sqrt{1 - 4 A} \bs{\Sigma}^{1/2} \*U^\top (\bs{\Lambda}^{(1)})^{-1/2} (\*Q^{(1)})^\top, \\
\*B^{(2)} = \sqrt{1 - 4 A} \bs{\Sigma}^{-1/2} \*U^\top (\bs{\Lambda}^{(1)})^{1/2} (\*Q^{(1)})^\top, \\
\*C^{(k)} = -\frac{1}{2 (1 - 4 A)} (\*B^{(k)})^\top \*B^{(k)}, \quad D = (1 - 4 A)^{d / 4} .
\end{gather*}
\end{small}
Further, we have:
\begin{small}
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}^*; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE}) \! = \! d \biggl( \log (1 \! - \! 4 A) - \frac{1}{2} \log(1 \! - \! 8 A) \nonumber \\
+ 2 (1 - 8 A)^{-1} \left( d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + \mu^{(3)} \right) + 2 \mu^{(3)} \biggr). \label{eq:adevar}
\end{gather}
\end{small}
\end{theorem}
Theorem \ref{th:aderf} implies an algorithm for finding $\bs{\theta}^*_\mathrm{ADE}$ efficiently. Namely, compute $\*M^{(k)}$, $k \in \{ 1, 2 \}$ ($O(L d^2)$ time) and $\mu^{(3)}$ ($O (L d)$ time). Then, perform matrix decompositions to obtain $\*Q^{(k)}, \bs{\Lambda}^{(k)}$, $k \in \{ 1, 2 \}$, and $\*U, \bs{\Sigma}, \*V$ in $O(d^3)$ time. After that, $A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D$ can be all evaluated in $O(d^3)$ time using formulae from Theorem \ref{th:aderf}. The total time complexity of the approximation scheme is therefore $O(L (Md + d^2) + M d^2 + d^3)$ which is subquadratic in $L$ as required.
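In code, the recipe of Theorem \ref{th:aderf} is a few lines of linear algebra. A sketch of ours follows; it assumes $\*M^{(1)}, \*M^{(2)}$ are nonsingular (as in the theorem) and that the singular values $\bs{\Sigma}_{l,l}$ are strictly positive:
\begin{verbatim}
import numpy as np

def aderf_params(X, Y):
    # Closed-form optimum from Theorem th:aderf; X, Y: (L, d).
    L, d = X.shape
    M1, M2 = X.T @ X / L, Y.T @ Y / L
    mu3 = (X.sum(0) @ Y.sum(0)) / (d * L ** 2)
    lam1, Q1 = np.linalg.eigh(M1)
    lam2, Q2 = np.linalg.eigh(M2)
    mid = np.diag(np.sqrt(lam1)) @ Q1.T @ Q2 @ np.diag(np.sqrt(lam2))
    U, sig, _ = np.linalg.svd(mid)
    phi = 2 * sig.mean() + 2 * mu3
    A = (1 - 2 * phi - np.sqrt((2 * phi + 1) ** 2 + 8 * phi)) / 16
    r = np.sqrt(1 - 4 * A)
    B1 = r * np.diag(sig ** 0.5) @ U.T @ np.diag(lam1 ** -0.5) @ Q1.T
    B2 = r * np.diag(sig ** -0.5) @ U.T @ np.diag(lam1 ** 0.5) @ Q1.T
    C1 = -B1.T @ B1 / (2 * (1 - 4 * A))
    C2 = -B2.T @ B2 / (2 * (1 - 4 * A))
    D = (1 - 4 * A) ** (d / 4)
    return A, B1, B2, C1, C2, D
\end{verbatim}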
\subsection{Symmetric Dense-Exponential Random Features}
Define $\mathcal{T}_\mathrm{SDE} = \langle \nu_\mathrm{SDE}, f^{(1)}_\mathrm{SDE}, f^{(2)}_\mathrm{SDE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*B^{(1)} = \*B^{(2)} = \*B$. From the conditions in Theorem \ref{th:derf} it follows immediately that also $\*C^{(1)} = \*C^{(2)} = \*C$. Hence, $f^{(1)}_\mathrm{SDE} = f^{(2)}_\mathrm{SDE}$ and we refer to these RFs as \textit{symmetric dense-exponential RFs (SDERFs)}. The parameters of $\mathcal{T}_\mathrm{SDE}$ are $\bs{\theta}_\mathrm{SDE} = \{ \*A, \*B, \*C, D \}$. By $\Theta_\mathrm{SDE}$ denote a set of all possible $\bs{\theta}_\mathrm{SDE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying conditions from Theorem \ref{th:derf} with $\*B^{(k)} = \*B$, $\*C^{(k)} = \*C$, $k \in \{ 1, 2 \}$. The following theorem gives an analytic solution for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$.
\begin{theorem}
\label{th:sderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$ and let $\*M^{(1)}$, $\*M^{(2)}$ be defined as in Theorem \ref{th:aderf} and define $\bs{\mu}^{(4)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} \in \mathbb{R}^d$, $\bs{\mu}^{(5)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} \in \mathbb{R}^d$.
Further, let $\*Q^{(3)} \bs{\Lambda}^{(3)} (\*Q^{(3)})^\top$ be the eigendecomposition of the symmetric positive semidefinite matrix $\*M^{(1)} + \bs{\mu}^{(4)} (\bs{\mu}^{(5)})^\top + \bs{\mu}^{(5)} (\bs{\mu}^{(4)})^\top + \*M^{(2)}$ where $\*Q^{(3)} \in \mathbb{O}_d$ and $\bs{\Lambda}^{(3)} \in \mathbb{D}_d$ with nonnegative diagonal entries. We additionally assume that the entries on the diagonal of $\bs{\Lambda}^{(3)}$ are sorted in non-ascending order.
One of the solutions $\bs{\theta}_\mathrm{SDE}^* = \{ \*A, \*B, \*C, D \}$ of $\min_{\bs{\theta}_\mathrm{SDE} \in \Theta_\mathrm{SDE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$ is as follows. $\*A \in \mathbb{D}_d$, for all $1 \leq l \leq d$:
\begin{small}
\begin{equation*}
\*A_{l,l} = \frac{1}{16} \left( 1 - 2 \bs{\Lambda}^{(3)}_{l,l} - \sqrt{\left(2 \bs{\Lambda}^{(3)}_{l,l} + 1\right)^2 + 8 \bs{\Lambda}^{(3)}_{l,l}} \right),
\end{equation*}
\end{small}
$\*B = (\*I_d - 4 \*A)^{1/2} (\*Q^{(3)})^\top$, $\*C = -\frac12 \*I_d$, $D = \det (\*I_d - 4 \*A)^{1/4}$. Further, we have:
\begin{small}
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE}) = \sum_{l = 1}^d \biggl( \log (1 - 4 \*A_{l,l}) \nonumber \\
- \frac12 \log (1 - 8 \*A_{l,l}) + \left(1 + (1 - 8 \*A_{l,l})^{-1}\right) \bs{\Lambda}^{(3)}_{l,l} \biggr) \nonumber \\
- L^{-1} \sum_{i = 1}^L \| \*x^{(i)} \|^2 - L^{-1} \sum_{j = 1}^L \| \*y^{(j)} \|^2 \label{eq:sdevar}
\end{gather}
\end{small}
\end{theorem}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{log_rel_vars.png}
\caption{Log of the relative variance of new and existing RF mechanisms, mean value over multiple samples. $0.1 \leq \sigma \leq 1$.}
\label{fig:vars}
\end{figure*}
Again, Theorem \ref{th:sderf} implies an algorithm for finding $\bs{\theta}^*_\mathrm{SDE}$ in a time subquadratic in $L$. That is, we can compute $\*M^{(1)}$, $\*M^{(2)}$, $\mu^{(3)}$, $\bs{\mu}^{(4)}$, $\bs{\mu}^{(5)}$ in $O(L d^2)$ total time. Then, perform an eigendecomposition to obtain $\*Q^{(3)}, \bs{\Lambda}^{(3)}$ in $O(d^3)$ time. After that, $\*A, \*B, \*C, D$ can be computed in $O(d^3)$ time using formulae from Theorem \ref{th:sderf}. The total time complexity of the approximation scheme is the same as for ADERFs: $O(L (Md + d^2) + M d^2 + d^3)$ or $O(L (Md + d^2) + M d^2)$ if we assume that $L \geq d$.
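A corresponding sketch of ours for the SDERF recipe from Theorem \ref{th:sderf} (diagonal $\*A$ returned as a vector):
\begin{verbatim}
import numpy as np

def sderf_params(X, Y):
    # Closed-form optimum from Theorem th:sderf; X, Y: (L, d).
    L, d = X.shape
    M1, M2 = X.T @ X / L, Y.T @ Y / L
    mu4, mu5 = X.mean(0), Y.mean(0)
    Msum = M1 + np.outer(mu4, mu5) + np.outer(mu5, mu4) + M2
    lam3, Q3 = np.linalg.eigh(Msum)       # ascending order
    lam3, Q3 = lam3[::-1], Q3[:, ::-1]    # make non-ascending
    A_diag = (1 - 2 * lam3
              - np.sqrt((2 * lam3 + 1) ** 2 + 8 * lam3)) / 16
    B = np.diag(np.sqrt(1 - 4 * A_diag)) @ Q3.T
    C = -0.5 * np.eye(d)
    D = np.prod(1 - 4 * A_diag) ** 0.25   # det(I - 4A)^(1/4)
    return A_diag, B, C, D
\end{verbatim}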
\subsection{Simplified ADERFs}
While having a compact and closed-form expression, both ADERFs and SDERFs rely on eigendecomposition and SVD: operations whose implementations have not yet matured in popular deep learning libraries with GPU and TPU support. For this reason, we propose \textit{simplified ADERFs (SADERFs)} $\mathcal{T}_\mathrm{SADE} = \langle \nu_\mathrm{SADE}, f^{(1)}_\mathrm{SADE}, f^{(2)}_\mathrm{SADE} \rangle$ which extend GERFs but require only basic unary operations. SADERFs are defined via GERFs as follows: $\Omega_\mathrm{SADE} = \mathbb{R}^d$, $\nu_\mathrm{SADE} = \mathcal{N} (0, 1)^d$, $f^{(1)}_\mathrm{SADE} (\bs{\omega}, \*x) = f^{(1)}_\mathrm{GE} (\bs{\omega}, \bs{\Psi} \*x)$, $f^{(2)}_\mathrm{SADE} (\bs{\omega}, \*y) = f^{(2)}_\mathrm{GE} (\bs{\omega}, \bs{\Psi}^{-1} \*y)$
where $\bs{\Psi} \in \mathbb{D}_d$ is a diagonal matrix with nonzero diagonal entries. First of all, $\mathcal{T}_\mathrm{SADE}$ are valid random features for the softmax kernel $K^{(0)}$ since:
\begin{gather*}
\mathbb{E}_{\nu_\mathrm{SADE}} [ f^{(1)}_\mathrm{SADE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{SADE} (\bs{\omega}, \*y) ] = \mathbb{E}_{\nu_\mathrm{GE}} [ f^{(1)}_\mathrm{GE} (\bs{\omega}, \bs{\Psi} \*x) \\
\times f^{(2)}_\mathrm{GE} (\bs{\omega}, \bs{\Psi}^{-1} \*y) ] = K^{(0)} (\bs{\Psi} \*x, \bs{\Psi}^{-1} \*y) = K^{(0)} (\*x, \*y),
\end{gather*}
where we use $K^{(0)} (\*x, \*y) = \exp (\*x^\top \*y)$, which holds by definition.
We find $\bs{\Psi}$ by optimizing the objective (\ref{eq:ldef}) for $\mathcal{T}_\mathrm{SADE}$, the form of which is easily deduced from $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE})$:
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SADE}) - d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) \! = \! \frac{2 - 8 A}{1 - 8 A} \nonumber \\
\times \frac{1}{L^2} \sum_{i,j} \| \bs{\Psi} \*x^{(i)} + \bs{\Psi}^{-1} \*y^{(j)} \|^2 - \frac{1}{L} \sum_i \| \bs{\Psi} \*x^{(i)} \|^2 \nonumber \\
- \frac{1}{L} \sum_j \| \bs{\Psi}^{-1} \*y^{(j)} \|^2 = \frac{1}{L^2(1 - 8 A)} \sum_{i,j} \| \bs{\Psi} \*x^{(i)} + \bs{\Psi}^{-1} \*y^{(j)} \|^2 \nonumber \\
+ \frac{2}{L^2} \sum_{i,j} (\*x^{(i)})^\top \*y^{(j)}, \label{eq:lsrf}
\end{gather}
where we move a term not depending on $\bs{\Psi}$ to the left-hand side. Since $1 - 8 A > 0$, we conclude that minimizing (\ref{eq:lsrf}) is equivalent to minimizing
\begin{gather}
\sum_{i,j} \| \bs{\Psi} \*x^{(i)} + \! \bs{\Psi}^{-1} \*y^{(j)} \|^2 \! = \! \sum_{l} \sum_{i,j} (\bs{\Psi}_{l,l} \*x^{(i)}_l + \bs{\Psi}^{-1}_{l,l} \*y^{(j)}_l)^2 \nonumber \\
= \sum_{l} \sum_{i,j} (\bs{\Psi}_{l,l}^2 (\*x^{(i)}_l)^2 + 2 \*x^{(i)}_l \*y^{(j)}_l + \bs{\Psi}^{-2}_{l,l} (\*y^{(j)}_l)^2). \label{eq:psiopt}
\end{gather}
Optimizing (\ref{eq:psiopt}) reduces to independent optimization problems with respect to $\bs{\Psi}_{l,l}$, $1 \leq l \leq d$. Each problem is convex and the solution is found trivially by setting the derivative to zero:
\begin{equation}
\forall 1 \leq l \leq d : \, \bs{\Psi}_{l,l}^* = ( \sum_j (\*y^{(j)}_l)^2 / \sum_i (\*x^{(i)}_l)^2 )^{1/4}. \label{eq:psist}
\end{equation}
(\ref{eq:psist}) can be computed in $O(d L)$ time, after which the parameters of $f^{(1)}_\mathrm{GE}, f^{(2)}_\mathrm{GE}$ can be found efficiently as described in Section \ref{sec:exist}.
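The whole SADERF fitting step is thus a one-liner per coordinate; a sketch of ours (the small \texttt{eps} guard against all-zero coordinates is an implementation detail, not part of the derivation):
\begin{verbatim}
import numpy as np

def saderf_psi(X, Y, eps=1e-12):
    # Diagonal of Psi per (eq:psist); X, Y: (L, d).
    sx = np.sum(X ** 2, axis=0) + eps
    sy = np.sum(Y ** 2, axis=0) + eps
    return (sy / sx) ** 0.25
\end{verbatim}
The rescaled inputs $\bs{\Psi} \*x^{(i)}$ and $\bs{\Psi}^{-1} \*y^{(j)}$ are then fed to the GERF feature maps as described above.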
It is easy to see that $\mathcal{T}_\mathrm{SADE}$ are a special case of ADERFs (Section \ref{sec:aderf}) which explains their name. Furthermore, the case $\bs{\Psi} = \*I_d$ reduces $\mathcal{T}_\mathrm{SADE}$ to $\mathcal{T}_\mathrm{GE}$, hence the latter is a special case of the former. Figure \ref{fig:derf_map} (top) illustrates all the new types of random features in a Venn diagram.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{uci.pdf}
\caption{Kernel classification, test accuracy (\%). The last plot shows average curves over 8 benchmarks. We observe that our proposed method SDERF shows the best accuracy across most of the (benchmark, $M$) pairs, as well as the best average performance.}
\label{fig:uci}
\end{figure*}
\section{Experiments}
In this section, we evaluate DERFs experimentally in various machine learning applications. More details about each experiment can be found in Appendix \ref{app:exp}.
\subsection{Variance Comparison} \label{sec:expvars}
We follow the variance comparison setup from \cite{crt}: we sample pairs of vectors $\*x, \*y$ and compute relative variances of the approximation $\mathrm{Var} \widehat{K}^{(0)} (\*x, \*y) / K^{(0)} (\*x, \*y)$ where $\widehat{K}^{(0)}$ denotes the RF approximation and $\mathrm{Var} \widehat{K}^{(0)} (\*x, \*y)$ is evaluated via (\ref{eq:devar}). We set $d = 64$ as in \cite{crt} and take 6 different regimes for sampling $\*x, \*y$: \texttt{normal} where $\*x, \*y$ are drawn from $\mathcal{N} (\*0_d, \sigma^2 \*I_d)$, \texttt{sphere} where $\*x, \*y$ are drawn uniformly on a sphere $\sigma \mathcal{S}^{d-1}$, \texttt{heterogen} where $\*x, \*y$ are drawn from different distributions $\mathcal{N} (\*0_d, \sigma^2 \*I_d)$ and $\mathcal{N} (\sigma \*1_d, \sigma^2 \*I_d)$. \texttt{mnist} and \texttt{cifar10} are where $\*x, \*y$ are random images from MNIST \cite{mnist} or CIFAR10 \cite{cifar10}, resized to $8 \times 8$, scaled by $\sigma > 0$ and flattened. Finally, \texttt{mnist/cifar10} is a regime where $\*x$ is drawn as in \texttt{mnist} and $\*y$ is drawn as in \texttt{cifar10}.
We do not report SADERFs since they're a special case of ADERFs (Figure \ref{fig:vars}). SDERFs outperform or are on par with other methods in all setups -- about $e^{5}$ times better than GERFs in \texttt{heterogen}, \texttt{mnist} and \texttt{mnist/cifar10} and about $e^{10}$ times better in \texttt{cifar10}. Further, ADERFs outperform GERFs by around $e^{3}$ times in \texttt{mnist/cifar10} where $\*x$ and $\*y$ are drawn ``asymmetrically''.
\subsection{Kernel Classification}
In this experiment, we compare accuracy of different RF methods in kernel classification on 8 benchmarks from UCI \cite{uci}, following the setup of \citet{crt}. Kernel regression \cite{nadaraya,watson} is applied for predicting class probabilities. Training objects are denoted as $\*u^{(1)}, \dots, \*u^{(L)} \in \mathbb{R}^d$ and their one-hot labels as $\*r^{(1)}, \dots, \*r^{(L)} \in \mathbb{R}^n$. During testing, the goal is to predict the class of a new object $\*u^*$ as $\mathrm{argmax}_{1 \leq l \leq n} \*r^*_l$ where $\*r^* = \sum_{i = 1}^L K^{(-0.5)} (\sigma \*u^*, \sigma \*u^{(i)}) \*r^{(i)}$ and $\sigma > 0$ is tuned on the validation set. With $O(n L M)$ preprocessing, RFs are used to find an unbiased approximation of $\*r^*$ in $O(n M)$ instead of $O(n L)$ exact computation.
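Schematically (a sketch of ours, with feature matrices for $K^{(-0.5)}$ assumed precomputed and all names illustrative), the RF classifier reduces to two matrix products:
\begin{verbatim}
import numpy as np

def rf_kernel_classifier(P_train, R, P_test):
    # P_train: (L, M) features of sigma * u^(i);
    # R: (L, n) one-hot labels; P_test: (T, M) test features.
    W = P_train.T @ R    # (M, n), O(n L M) preprocessing
    scores = P_test @ W  # unbiased estimates of r*, O(n M) each
    return np.argmax(scores, axis=1)
\end{verbatim}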
For each benchmark, we range the values of $M$ from $2^4$ to $2^7$ (Figure \ref{fig:uci}). We observe that SDERF, which is proposed in this paper, shows the best accuracy across most of the (benchmark, $M$) pairs and also shows the best average performance.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{favorsharp-speech.pdf}
\caption{Comparison of FAVOR\# using SDERF with the FAVOR++ Performer for regular Conformer-Transducer training with $m$ random features (TRANS-$m$) as well as the Noisy Student Training variant with $m$ random features (NST-$m$) on the \textit{LibriSpeech} corpus. We report the commonly used normalized word error rate (NWER) metric.}
\label{fig:speech}
\end{figure*}
\subsection{DERFs for Long-sequence Transformers}
In this section, we evaluate DERFs for self-attention approximation in a number of Performer-Transformer training setups \cite{performer}. We refer to the DERF-based self-attention approximation method as \textit{FAVOR\#}.
\begin{table*}[t]
\small
\centering
\caption{GLUE Dev results on base sized models.
Number of training examples is reported below each task.
MCC score is reported for CoLA, F1 score is reported for MRPC, Spearman correlation is reported for STS-B, and accuracy scores are reported for the other tasks. The \textbf{best} result is shown in bold, the \underline{second best} is underlined.}
\label{tab:glue_dev}
\begin{tabular}{@{}lcccccccc@{}}
\toprule
System & MNLI(m) & QQP & QNLI & SST-2 & CoLA & STS-B & MRPC & RTE \\
& 392k & 363k & 108k & 67k & 8.5k & 5.7k & 3.5k & 2.5k \\
\midrule
ELU \cite{pmlr-v119-katharopoulos20a} & \underline{82.58} & 90.05 & \underline{89.81} & \underline{92.43} & 58.63 & \underline{87.91} & 87.50 & 67.15 \\
RELU \cite{performer} & 82.49 & \textbf{90.71} & 89.68 & 92.32 & 57.57 & \textbf{88.15} & 87.25 & \underline{68.95}\\
FAVOR+ \cite{performer} & 77.69 & 86.69 & 89.41 & 91.80 & 54.87 & 83.78 & 80.73 & 66.19\\
FAVOR++ \cite{crt} & {82.29} & {90.43} & {89.73} & {92.20} & \underline{58.85} & 85.90 & \textbf{{88.73}} & 67.63\\
\midrule
FAVOR\# & \textbf{82.69} & \underline{90.68} & \textbf{90.01} & \textbf{92.53} & \textbf{59.33} & 85.48 & \underline{87.99} & \textbf{69.68} \\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Speech Modelling}
In our first set of experiments, we focus on speech models. We train Performer-encoders and test them on the \textit{LibriSpeech} corpus \cite{librispeech}, commonly used for benchmarking speech models.
We considered two Transformer architectures/training setups: \textbf{(a)} the Conformer-Transducer \cite{conformer-transducer} trained in a regular way ($\mathrm{TRANS}$), as well as \textbf{(b)} the Noisy Student Training ($\mathrm{NST}$) variant introduced in \cite{nst}.
We compare ``performized'' variants of these architectures, applying FAVOR\# with SDERF (since it worked best in the previous setups) as well as FAVOR++ \cite{crt}.
In the first setting, we see that FAVOR\# consistently outperforms FAVOR++ for smaller $m$ (where reduced variance of the softmax-kernel estimation is more critical) and both achieve similar scores for larger $m$. In the NST-experiment, we focused on the smaller $m$ variant, where FAVOR\# again beats FAVOR++. All results are presented in Fig. \ref{fig:speech}.
\subsubsection{Natural language processing}
The General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue} consists of 8 different natural language understanding tasks with sequence lengths ranging from 32 to 128. We use this benchmark to test the performance of different low-rank attention methods on NLP tasks. We use the same training parameters as in \cite{devlin2018bert} (see Appendix \ref{sec:appedinx_text} for details). We warm-start all low-rank Transformers with a pre-trained BERT-base model checkpoint \cite{devlin2018bert}, thus contrasting how well the low-rank methods approximate the softmax kernel.
We compared FAVOR++ \citep{crt}, FAVOR+ \citep{performer}, ELU \citep{pmlr-v119-katharopoulos20a} and ReLU \citep{performer} variants of the Performers \cite{performer} against the FAVOR\# variant and report the results in Table \ref{tab:glue_dev}. We couldn't use SDERF in this setup because eigendecomposition led to errors on TPUs due to a different implementation compared to the speech modelling experiment. For this reason, we used SADERF which doesn't require any matrix decompositions. On most tasks we find that FAVOR\# is the best performing variant showcasing its effectiveness in modelling the softmax kernel for transformers.
\section{Conclusion}
We proposed an extension of generalized exponential random features (GERFs) for the Gaussian and softmax kernels: dense-exponential random features (DERFs). DERFs employ matrix parameters and are more flexible than GERFs.
We evaluated DERFs in kernel regression and two Transformer training setups, demonstrating significant benefits.
\section*{Acknowledgements}
V. L. acknowledges support from the Cambridge Trust and DeepMind. V. L. was part-time employed by Google while a PhD student. A.W. acknowledges support from a Turing AI Fellowship under EPSRC grant EP/V025279/1, The Alan Turing Institute, and the Leverhulme Trust via CFI.
\section{Introduction}
\section{Prerequisites}
\subsection{Scaled Softmax Kernel and Random Features}
By the \textit{scaled softmax kernel} $K^{(\alpha)}: \mathbb{R}^d \times \mathbb{R}^d \to (0, +\infty)$, where $\alpha \in \mathbb{R}$, we denote a mapping defined as $K^{(\alpha)} (\*x, \*y) = \exp(\alpha \| \*x \|^2 + \*x^\top \*y + \alpha \| \*y \|^2)$ for all $\*x, \*y \in \mathbb{R}^d$ where $\| \cdot \|$ is an $L_2$-norm. Two important special cases of the scaled softmax kernel are 1) a \textit{Gaussian kernel} $K^{(-1/2)} (\*x, \*y) = \exp(- \| \*x - \*y \|^2 / 2)$ and 2) a \textit{softmax kernel} $K^{(0)} (\*x, \*y) = \exp(\*x^\top \*y)$. For two sets of vectors $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$ and $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$, by $\mathcal{K}(\mathcal{X}, \mathcal{Y}) \in \mathbb{R}^{L \times L}$ we denote a matrix where $\mathcal{K}^{(\alpha)}(\mathcal{X}, \mathcal{Y})_{i,j} = K^{(\alpha)}(\*x^{(i)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$.
In this paper, we will be interested in the problem of computing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ where $\mathcal{X}$, $\mathcal{Y}$ and a matrix $\*C \in \mathbb{R}^{L \times n}$ are provided as an input. A naive solution requires $O(L^2 (d + n))$ computations for constructing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ ($O (L^2 d)$) and computing the matrix multiplication $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ ($O(L^2 n)$). Instead, we will use an efficient Monte-Carlo approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ using the following notion of \textit{random features (RFs) for the scaled softmax kernel}:
\begin{definition} \label{def:rfs}
By random features for the scaled softmax kernel $K^{(\alpha)}$, $\alpha \in \mathbb{R}$, we denote a triple $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ where $\nu$ is a probability distribution over random objects $\bs{\omega} \in \Omega$ and $f^{(i)} : \Omega \times \mathbb{R}^d \to \mathbb{R}$, $i \in \{ 1, 2 \}$, are such mappings that, for all $\*x, \*y \in \mathbb{R}^d$,
\begin{equation}
K^{(\alpha)} (\*x, \*y) = \mathbb{E}_{\nu} \left[ f^{(1)} (\bs{\omega}, \*x) f^{(2)} (\bs{\omega}, \*y) \right] . \label{eq:rfdec}
\end{equation}
\end{definition}
The decomposition of type (\ref{eq:rfdec}) can be used for an efficient unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$. Let $\bs{\omega}^{(1)}, \dots, \bs{\omega}^{(M)} \in \Omega$ be i.i.d. samples from $\nu$. Define matrices $\*P, \*S \in \mathbb{R}^{L \times M}$ where for all $1 \leq i, j \leq L$,
\begin{align}
\*P_{i,:} &= M^{-1/2}(f^{(1)} (\bs{\omega}^{(m)}, \*x^{(i)}))_{m = 1}^M, \label{eq:ps1} \\
\*S_{j,:} &= M^{-1/2}(f^{(2)} (\bs{\omega}^{(m)}, \*y^{(j)}))_{m = 1}^M \label{eq:ps2}
\end{align}
where $\*P_{i,:}, \*S_{j,:} \in \mathbb{R}^M$ are \textit{column vectors} corresponding to the rows of $\*P, \*S$. Then according to (\ref{eq:rfdec}), $\widehat{\mathcal{K}} = \*P \*S^\top$ is an unbiased \textit{Monte Carlo (MC)} approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ on $M$ samples. The variance $\mathrm{Var} \, \widehat{\mathcal{K}}_{i,j} = M^{-1} \mathrm{Var}_\nu \left[ f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)}) \right]$ of this approximation is inversely proportional to $M$, hence $M$ is a tradeoff parameter between computations and precision. Now, $\widehat{\mathcal{K}} \*C$ is an unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ but $\widehat{\mathcal{K}} = \*P \*S^\top$ is a rank-$M$ matrix, hence computing $\widehat{\mathcal{K}} \*C$ has $O(L M n)$ complexity. Assuming that sampling each $\bs{\omega}^{(m)}$ and computing $f^{(\cdot)} (\cdot, \cdot)$ are $O(d)$ operations, which is usually the case, precomputing $\*P$ and $\*S$ takes $O(L M d)$ computations, resulting in the total $O(L M (d + n))$ computational complexity. By choosing $M \ll L$, we obtain a significant reduction in computations compared to the exact variant: $O (L M (d + n)) \ll O (L^2 (d + n))$.
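For concreteness, here is a short NumPy sketch (ours; the function and variable names are illustrative, not from any released codebase) that instantiates (\ref{eq:ps1}-\ref{eq:ps2}) and approximates $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*C$ without ever forming the $L \times L$ kernel matrix, using for the feature maps the positive random features discussed later in Section \ref{sec:exist}:
\begin{verbatim}
import numpy as np

def rf_matrix_product(X, Y, C, f1, f2, omegas):
    # P, S as in the text: row i holds the M rescaled feature values.
    M = omegas.shape[0]
    P = np.stack([f1(w, X) for w in omegas], axis=1) / np.sqrt(M)  # (L, M)
    S = np.stack([f2(w, Y) for w in omegas], axis=1) / np.sqrt(M)  # (L, M)
    # Multiply S^T C first: O(L M n) instead of the exact O(L^2 n).
    return P @ (S.T @ C)

rng = np.random.default_rng(0)
L, d, n, M = 512, 16, 4, 1024
X = rng.normal(size=(L, d)) / np.sqrt(d)
Y = rng.normal(size=(L, d)) / np.sqrt(d)
C = rng.normal(size=(L, n))

# Positive random features for K^{(0)}: f(w, x) = exp(w^T x - ||x||^2 / 2).
f_pos = lambda w, Z: np.exp(Z @ w - 0.5 * np.sum(Z * Z, axis=1))
omegas = rng.normal(size=(M, d))

exact = np.exp(X @ Y.T) @ C        # O(L^2 (d + n)) baseline
approx = rf_matrix_product(X, Y, C, f_pos, f_pos, omegas)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
\end{verbatim}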
Operations of type $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$, especially for the Gaussian kernel $\alpha = -1/2$, emerge in kernel SVM \citep{rfs}, kernel regression \cite{nadaraya,watson} and in physics in the form of a Gauss transform \citep{gausstr}. Another important application has recently emerged in the area of efficient Transformers and is discussed in the next section \citep{performer}.
\subsection{Random Features for Efficient Transformers} \label{sec:efftr}
RFs found a very prominent application in the area of efficient long-sequence Transformers \citep{performer}. Transformers rely on a self-attention block for propagating information between elements of the sequence. If the sequence length is $L$ and input matrices are denoted as $\*Q, \*K, \*V \in \mathbb{R}^{L \times d}$ (\textit{queries}, \textit{keys} and \textit{values}), then self-attention outputs the following matrix:
\begin{equation}
\*Y = \mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*V \in \mathbb{R}^{L \times d} \label{eq:att}
\end{equation}
where $\*1_L \in \mathbb{R}^L$ is a vector of all ones, $\mathrm{diag} (\cdot)$ returns a diagonal $(L \times L)$-sized matrix with the argument on the diagonal, $\mathcal{X} = \{ \*x^{(i)} = d^{-1 / 4} \*Q_{i,:} \in \mathbb{R}^d \}$, $\mathcal{Y} = \{ \*y^{(j)} = d^{-1 / 4} \*K_{j,:} \in \mathbb{R}^d \}$. Hence, substitution of $\widehat{\mathcal{K}}$ instead of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ in (\ref{eq:att}) reduces computational complexity from $O (L^2 d)$ to $O (L M d)$ ($n = d + 1$). $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ is the result of a \textit{softmax} operation performed on rows of $d^{-1/2} \*Q \*K^\top$.
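A minimal sketch (ours, with hypothetical helper names) of how (\ref{eq:att}) is approximated in $O(L M d)$: the same rank-$M$ factors serve both the numerator $\widehat{\mathcal{K}} \*V$ and the normalizer $\widehat{\mathcal{K}} \*1_L$, and positivity of the features keeps the estimated normalizer strictly positive:
\begin{verbatim}
import numpy as np

def rf_attention(Q, K, V, omegas):
    # Approximate softmax attention via positive random features; queries
    # and keys are rescaled by d^{-1/4} as in the text.
    L, d = Q.shape
    M = omegas.shape[0]
    feat = lambda Z: np.exp(Z @ omegas.T
                            - 0.5 * np.sum(Z * Z, axis=1, keepdims=True))
    P = feat(Q / d**0.25) / np.sqrt(M)   # (L, M)
    S = feat(K / d**0.25) / np.sqrt(M)   # (L, M)
    num = P @ (S.T @ V)                  # \hat{K} V   in O(L M d)
    den = P @ S.sum(axis=0)              # \hat{K} 1_L in O(L M)
    return num / den[:, None]

rng = np.random.default_rng(1)
L, d, M = 256, 32, 128
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
print(rf_attention(Q, K, V, rng.normal(size=(M, d))).shape)  # (256, 32)
\end{verbatim}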
\subsection{Existing Random Features for the Softmax Kernel} \label{sec:exist}
Representation (\ref{eq:rfdec}) is not unique and different RFs can be proposed for a single $K^{(\alpha)}$. Note that if $\langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for $K^{(0)}$, then $\langle \nu, \widehat{f}^{(1)}, \widehat{f}^{(2)} \rangle$ are RFs for $K^{(\alpha)}$ for $\alpha \in \mathbb{R}$ where $\widehat{f}^{(k)} (\bs{\omega}, \*x) = \exp(\alpha \| \*x \|^2) f^{(k)} (\bs{\omega}, \*x)$. Hence, hereafter we focus on the softmax kernel $K^{(0)}$ without loss of generality.
\citet{perf0} proposed to use \textit{trigonometric random features (TrigRFs)} from \citep{rfs} in the efficient Transformer application:
\begin{gather}
\Omega_\mathrm{trig} = \mathbb{R}^{d + 1}, \,\, \nu_\mathrm{trig} = \mathrm{Unif} ([0, 2 \pi]) \times \mathcal{N} (\*0_d, \*I_d), \label{eq:trigrf1} \\
f^{(1)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*x) = \sqrt{2} \exp (\| \*x \|^2 / 2) \cos(\widetilde{\bs{\omega}}^\top \*x + \theta), \\
f^{(2)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*y) = \sqrt{2} \exp(\| \*y \|^2 / 2) \cos(\widetilde{\bs{\omega}}^\top \*y + \theta) \label{eq:trigrf3}
\end{gather}
where $\bs{\omega} = (\theta, \widetilde{\bs{\omega}})$, $\mathrm{Unif} (\cdot)$ denotes the uniform distribution on the argument set, and $\mathcal{N} (\*0_d, \*I_d)$ is the multivariate Gaussian distribution with mean $\*0_d$ (the vector of $d$ zeros) and covariance matrix $\*I_d$ (the identity matrix of size $d \times d$).
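As a quick sanity check, (\ref{eq:rfdec}) can be verified for TrigRFs by Monte Carlo; the sketch below (ours) does so for a single pair $\*x, \*y$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, M = 8, 500_000
x = rng.normal(size=d) / np.sqrt(d)
y = rng.normal(size=d) / np.sqrt(d)

theta = rng.uniform(0.0, 2 * np.pi, size=M)
w = rng.normal(size=(M, d))
f1 = np.sqrt(2) * np.exp(x @ x / 2) * np.cos(w @ x + theta)
f2 = np.sqrt(2) * np.exp(y @ y / 2) * np.cos(w @ y + theta)
# The empirical mean approaches K^{(0)}(x, y) = exp(x^T y) as M grows.
print(np.mean(f1 * f2), np.exp(x @ y))
\end{verbatim}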
The next iteration of efficient attention approximators \citep{performer} encountered a problem with TrigRFs (\ref{eq:trigrf1}-\ref{eq:trigrf3}). The \textit{attention matrix} $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ from (\ref{eq:att}) is \textit{right stochastic}, meaning that its entries are nonnegative and each row sums to $1$ due to the normalizing term $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1}$. However, since $f^{(\cdot)}_\mathrm{trig}$ can take arbitrary real values, $\*P, \*S$ (\ref{eq:ps1}-\ref{eq:ps2}) and, therefore, $\widehat{\mathcal{K}}$ can take negative values. Hence, $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ is not right stochastic in general, and entries of $\widehat{\mathcal{K}} \*1_L$ can take very small and/or negative values, resulting in unstable behaviour when computing the inverse $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1}$. \citet{performer} therefore propose a new type of \textit{positive random features (PosRFs)} which have the form:
\begin{gather}
\Omega_\mathrm{pos} = \mathbb{R}^d, \quad \nu_\mathrm{pos} = \mathcal{N} (0, 1)^d, \label{eq:posrf1} \\
f^{(1)}_\mathrm{pos} (\bs{\omega}, \*x) = f^{(2)}_\mathrm{pos} (\bs{\omega}, \*x) = \exp (\bs{\omega}^\top \*x - \| \*x \|^2 / 2) \label{eq:posrf3}
\end{gather}
It's clear that such $f^{(\cdot)}_\mathrm{pos}$ take only strictly positive values, resulting in a right stochastic $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ and a stable Transformer training procedure.
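The following toy comparison (ours; the sizes are arbitrary) illustrates the difference: with TrigRFs the estimated normalizer $\widehat{\mathcal{K}} \*1_L$ typically turns negative for some rows once the prefactors $\exp(\| \*x \|^2 / 2)$ become large, while with PosRFs it is positive by construction:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
L, d, M = 128, 16, 64
X, Y = rng.normal(size=(L, d)), rng.normal(size=(L, d))
w = rng.normal(size=(M, d))
theta = rng.uniform(0, 2 * np.pi, size=M)

sq = lambda Z: 0.5 * np.sum(Z * Z, axis=1, keepdims=True)
P_trig = np.sqrt(2 / M) * np.exp(sq(X)) * np.cos(X @ w.T + theta)
S_trig = np.sqrt(2 / M) * np.exp(sq(Y)) * np.cos(Y @ w.T + theta)
P_pos = np.exp(X @ w.T - sq(X)) / np.sqrt(M)
S_pos = np.exp(Y @ w.T - sq(Y)) / np.sqrt(M)

# Estimated normalizers \hat{K} 1_L for both feature types:
print("trig, min over rows:", (P_trig @ S_trig.sum(axis=0)).min())  # often < 0
print("pos,  min over rows:", (P_pos @ S_pos.sum(axis=0)).min())    # always > 0
\end{verbatim}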
\citet{crt} extend PosRFs and propose \textit{generalized exponential random features (GERFs)}\footnote{\citet{crt} define these RFs for $K^{(-1/2)}$ but we adapt them for $K^{(0)}$ using the trick mentioned above.} for $K^{(0)}$:
\begin{gather}
\Omega_\mathrm{GE} = \mathbb{R}^d, \quad \nu_\mathrm{GE} = \mathcal{N} (0, 1)^d, \quad f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) = \label{eq:gerf1} \\
f^{(2)}_\mathrm{GE} (\bs{\omega}, \*x) \! = \! D \exp (A \| \bs{\omega} \|^2 \! + \! B \bs{\omega}^\top \*x \! + \! C \| \*x \|^2 / 2), \label{eq:gerf3}
\end{gather}
where $A, B, C, D$ are real numbers\footnote{\citet{crt} consider a more general form where $A, B, C, D$ are complex with an additional parameter $s = \pm 1$; however, only the subfamily (\ref{eq:gerf1}-\ref{eq:gerf3}) with $s = 1$ is proposed for use in the Transformer application.} satisfying:
\begin{equation*}
1 - 8 A > 0, \, B = \sqrt{1 - 4 A}, \, C = -\frac12, \, D = (1 - 4 A)^{\frac{d}{4}} .
\end{equation*}
\citet{crt} express $B, C, D$ through $A$ and find a closed-form expression for the variance of (\ref{eq:rfdec}):
\begin{gather}
\mathrm{Var}_{\nu_\mathrm{GE}} f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{GE} (\bs{\omega}, \*y) \! = \! e^{\mathcal{L}_\mathrm{GE} (A, \*x, \*y)} \! - \! K^{(0)} (\*x, \*y)^2, \nonumber \\
\mathcal{L}_\mathrm{GE} (A, \*x, \*y) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) + \frac{2 (1 - 4 A)}{1 - 8 A} \label{eq:gevar1} \\
\times \| \*x + \*y \|^2 - \| \*x \|^2 - \| \*y \|^2 \label{eq:gevar2}
\end{gather}
The minimum variance corresponds to the minimum of $\mathcal{L}_\mathrm{GE} (A, \*x, \*y)$ since $K^{(0)} (\*x, \*y)^2$ doesn't depend on $A$. Since $\mathcal{L}_\mathrm{GE} (A, \*x, \*y)$ is defined for a single pair $\*x, \*y$ and not for the sets $\mathcal{X}, \mathcal{Y}$, \citet{crt} propose a \textit{homogeneity heuristic} where they replace $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ in (\ref{eq:gevar1}-\ref{eq:gevar2}) with their averages over $\mathcal{X}, \mathcal{Y}$: $L^{-2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2$, $L^{-1} \sum_i \| \*x^{(i)} \|^2$ and $L^{-1} \sum_j \| \*y^{(j)} \|^2$ respectively. This heuristic is based on the assumption that $\{ \*x^{(i)} \}$ and $\{ \*y^{(j)} \}$ are homogeneous and their statistics are tightly concentrated around the mean. After this substitution, the minimum of (\ref{eq:gevar1}-\ref{eq:gevar2}) with respect to $A$ can be found in closed form.
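Note that all three statistics cost only $O(L d)$ to obtain since $L^{-2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2 = L^{-1} \sum_i \| \*x^{(i)} \|^2 + L^{-1} \sum_j \| \*y^{(j)} \|^2 + 2 \bar{\*x}^\top \bar{\*y}$, where $\bar{\*x}, \bar{\*y}$ are the means of $\mathcal{X}, \mathcal{Y}$. Since the closed-form minimizer is not restated here, the sketch below (ours) simply minimizes the averaged objective over a grid of admissible $A < 1/8$:
\begin{verbatim}
import numpy as np

def gerf_objective(A, sxy, sx, sy, d):
    # Averaged variance exponent under the homogeneity heuristic.
    return d * np.log((1 - 4*A) / np.sqrt(1 - 8*A)) \
        + 2 * (1 - 4*A) / (1 - 8*A) * sxy - sx - sy

rng = np.random.default_rng(4)
L, d = 256, 16
X = rng.normal(size=(L, d)) / np.sqrt(d)
Y = rng.normal(size=(L, d)) / np.sqrt(d)

sx = np.mean(np.sum(X*X, axis=1))                    # avg ||x||^2
sy = np.mean(np.sum(Y*Y, axis=1))                    # avg ||y||^2
sxy = sx + sy + 2 * X.mean(axis=0) @ Y.mean(axis=0)  # avg ||x + y||^2

grid = np.linspace(-20.0, 0.124, 200_001)            # admissible: A < 1/8
A = grid[np.argmin(gerf_objective(grid, sxy, sx, sy, d))]
B, C, D = np.sqrt(1 - 4*A), -0.5, (1 - 4*A)**(d / 4)
print(f"optimal A on the grid: {A:.4f}")
\end{verbatim}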
\section{The Objective Minimized by GERFs}
Our first contribution is showing that the homogeneity heuristic adopted in GERFs is actually an analytic solution of a certain optimization problem. Define
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T}) = L^{-2} \sum_{1 \leq i,j \leq L} \log (\mathrm{Var}_\nu [f^{(1)} (\bs{\omega}, \*x^{(i)}) \nonumber \\
\times f^{(2)} (\bs{\omega}, \*y^{(j)})] + K^{(0)} (\*x^{(i)}, \*y^{(j)})^2 ) \label{eq:ldef}
\end{gather}
where $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for the kernel $K^{(0)}$ and $\bs{\theta}$ are their parameters.
(\ref{eq:ldef}) is the mean of log-variances shifted by $K^{(0)}(\*x^{(i)}, \*y^{(j)})^2$. The best possible value of (\ref{eq:ldef}) is $L^{-2} \sum_{i,j} \log ( K^{(0)} (\*x^{(i)}, \*y^{(j)})^2 )$, which corresponds to all variances $\mathrm{Var} \, f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)})$ being zero, meaning that the RFs provide exact kernel estimation. Hence, minimization of (\ref{eq:ldef}) leads to more precise estimators on $\mathcal{X}, \mathcal{Y}$. We call the loss function $\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T})$ the \textit{shifted log-variance} objective.
If $\mathcal{T}_\mathrm{GE} = \langle \nu_\mathrm{GE}, f^{(1)}_\mathrm{GE}, f^{(2)}_\mathrm{GE} \rangle$ are taken in (\ref{eq:ldef}), then $\bs{\theta}_\mathrm{GE} = \{ A, B, C, D \}$ and $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = L^{-2} \sum_{i,j} \mathcal{L}_\mathrm{GE} (A; \*x^{(i)}, \*y^{(j)})$. Using (\ref{eq:gevar1}-\ref{eq:gevar2}), we get:
\begin{gather*}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) +
\frac{2 (1 - 4 A)}{1 - 8 A} \\
\times \frac{1}{L^2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2 \! - \! \frac{1}{L} \sum_i \| \*x^{(i)} \|^2 \! - \! \frac{1}{L} \sum_j \| \*y^{(j)} \|^2 .
\end{gather*}
That is, $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE})$ coincides with (\ref{eq:gevar1}-\ref{eq:gevar2}) when $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ are replaced by their average statistics computed on $\mathcal{X}, \mathcal{Y}$. Hence, the homogeneity heuristic is nothing but minimization of (\ref{eq:ldef}). While in general it's unclear how to find a closed-form optimum of $\mathrm{Var} \, \widehat{\mathcal{K}}$ or $\mathrm{Var} (\widehat{\mathcal{K}} \*C)$, the global minimum of (\ref{eq:ldef}) is feasible and can be computed in $O(1)$ time given the averaged statistics. Further, \citet{crt} show that optimization of (\ref{eq:ldef}) leads to very good results in large-scale applications of efficient Transformers.
The advantage of the shifted log-variance objective is that it can be optimized efficiently. Another natural choice of loss function is the regular mean squared error (MSE), given for $X(\mathbf{x}^{(i)},\mathbf{y}^{(j)})=f^{(1)}(\bs{\omega},\mathbf{x}^{(i)}) f^{(2)}(\bs{\omega},\mathbf{y}^{(j)})$ by:
\begin{equation}
\mathcal{L}_{\mathrm{MSE}}(\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T}) = \sum_{1 \leq i,j\leq L} \mathrm{Var}_{\nu}[X(\mathbf{x}^{(i)},\mathbf{y}^{(j)})] .
\end{equation}
In Section \ref{sec:arf}, we will also show results obtained using this objective.
\section{Towards DERFs - Asymmetric RFs (ARFs)}
\label{sec:arf}
Note that for GERFs we have $f_{\mathrm{GE}}^{(1)} = f_{\mathrm{GE}}^{(2)}$. Our first step beyond GERFs is to give up this symmetry for the softmax kernel ($\alpha=0$). We rely on a simple observation (see for instance \cite{hrfs}) that $K^{(0)}(\mathbf{x},\mathbf{y})=K^{(0)}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})$, where $\mathbf{x}^{\prime}=\mathbf{A}\mathbf{x}$ and $\mathbf{y}^{\prime}=\mathbf{A}^{-\top}\mathbf{y}$ for any given invertible $\mathbf{A} \in \mathbb{R}^{d \times d}$. Thus, effectively, we have introduced an additional set of parameters to the RF mechanism (encoded by the matrix $\mathbf{A}$) that can be tuned. One way of choosing $\mathbf{A}$ is to minimize the objective given by Equation \ref{eq:ldef}. The following is true:
\begin{lemma}[optimizing asymmetric RFs]
An invertible matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$ minimizing the objective given in Equation \ref{eq:ldef} for $\alpha=0$ satisfies the following matrix equation:
\begin{equation}
\label{arf-eq}
\mathbf{A} \cdot \mathbf{M}^{(1)} \cdot \mathbf{A} = (\mathbf{A}^{-2})^{\top} \cdot \mathbf{M}^{(2)},
\end{equation}
where $\mathbf{M}^{(1)}$ and $\mathbf{M}^{(2)}$ are data covariance matrices:
\begin{equation}
\mathbf{M}^{(1)} = \frac{1}{L} \sum_{i = 1}^L \mathbf{x}^{(i)} (\mathbf{x}^{(i)})^\top, \quad \mathbf{M}^{(2)} = \frac{1}{L} \sum_{j = 1}^L \mathbf{y}^{(j)} (\mathbf{y}^{(j)})^\top.
\end{equation}
\end{lemma}
A prominent special instantiation of the above mechanism is for $\mathbf{A} \in\mathbb{R}^{d \times d}$ constrained to be diagonal. We have:
\begin{lemma} [asymmetric RFs with diagonal $\mathbf{A}$]
\label{lemma:arf-diag}
A diagonal matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$ with diagonal $(a_{1},...,a_{d})$ and minimizing the objective given in Equation \ref{eq:ldef} for $\alpha=0$ satisfies:
\begin{equation}
a_{j} = \pm \left(\frac{\sum_{i=1}^{L}(\mathbf{y}^{(i)}_{j})^{2}}{\sum_{i=1}^{L}(\mathbf{x}^{(i)}_{j})^{2}}\right)^{\frac{1}{4}}
\end{equation}
\end{lemma}
It is easy to notice that the solution from Lemma \ref{lemma:arf-diag} can be obtained from Eq. \ref{arf-eq} by zeroing out non-diagonal entries of $\mathbf{M}^{(1)}$ and $\mathbf{M}^{(2)}$.
We refer to this class of RF methods as \textit{asymmetric random features} (ARFs).
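A small sketch (ours) of the diagonal solution from Lemma \ref{lemma:arf-diag}: the reparameterization leaves the kernel inputs unchanged while balancing the squared norms of the transformed points:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
L, d = 512, 8
X = rng.normal(size=(L, d)) * np.linspace(0.5, 2.0, d)  # anisotropic inputs
Y = rng.normal(size=(L, d))

# Lemma: a_j = (sum_i (y_j^{(i)})^2 / sum_i (x_j^{(i)})^2)^{1/4}, up to sign.
a = (np.sum(Y*Y, axis=0) / np.sum(X*X, axis=0)) ** 0.25
Xp, Yp = X * a, Y / a      # x' = A x and y' = A^{-T} y for diagonal A
print(np.allclose(X @ Y.T, Xp @ Yp.T))                   # kernel preserved
print(np.sum(Xp**2) + np.sum(Yp**2) <= np.sum(X**2) + np.sum(Y**2))
\end{verbatim}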
\subsection{MSE-guided optimization of ARFs}
Now let us consider the MSE objective and see how we can optimize $\mathbf{A}$ accordingly. For $\rho=\frac{1}{1-8A}$, we have:
\begin{align}
\begin{split}
\mathcal{L}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) =
\left(\frac{\rho+1}{2\sqrt{\rho}}\right)^{d}
\sum_{1 \leq i,j \leq L} \exp(2(\mathbf{x}^{(i)})^{\top}\mathbf{y}^{(j)}) \cdot \\ [\exp(\rho\|\mathbf{A}\mathbf{x}^{(i)}+\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|_{2}^{2})-1]
\end{split}
\end{align}
To optimize $\mathcal{L}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y})$ with respect to $\mathbf{A}$, it thus suffices to minimize the following objective for $\tau^{i,j}=\exp(2(\mathbf{x}^{(i)})^{\top}\mathbf{y}^{(j)})$:
\begin{equation}
\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) = \sum_{1 \leq i,j \leq L} \tau^{i,j}\exp(\rho\|\mathbf{A}\mathbf{x}^{(i)}+\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|_{2}^{2})
\end{equation}
Our first relaxation is to instead minimize the upper bound on $\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y})$ obtained by applying the Cauchy-Schwarz inequality. We get:
\begin{align}
\begin{split}
\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) \leq
\sum_{1 \leq i \leq L} \exp((1+2\rho)\|\mathbf{A}\mathbf{x}^{(i)}\|^{2}_{2}) \cdot \\ \sum_{1 \leq j \leq L} \exp((1+2\rho)\|\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|^{2}_{2}) = \\
\sum_{1 \leq i,j \leq L} \sum_{k,l \in \{0,1,...\}}
\frac{(1+2\rho)^{k+l}\|\mathbf{A}\mathbf{x}^{(i)}\|_{2}^{2k}
\|\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|^{2l}_{2}}{k! l!}
\end{split}
\end{align}
We will assume that $A<0$, which implies that $\rho>0$ and consequently that all terms of the above sum are non-negative. By further relaxing the original optimization problem, we may aim to minimize the highest-order elements of the truncated (with respect to $k,l$) version of the above sum. Thus, effectively, we can define the following \textit{$n^{th}$-order relaxation} of the original optimization problem:
\begin{equation}
\min_{\mathbf{A} \in \mathbb{R}^{d \times d}} \sum_{1 \leq i,j \leq L} \|\mathbf{A}\mathbf{x}^{(i)}\|_{2}^{2n}
\|\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|^{2n}_{2}
\end{equation}
For $n=1$, this problem has a closed-form solution that can be computed efficiently.
\begin{lemma}[\cite{arps}]
\label{lemma:arps}
The solution $\mathbf{A}^{*}$ of the 1st-order relaxation is of the following form:
\begin{align}
\begin{split}
\mathbf{A}^{*}= \mathbf{D}^{\frac{1}{2}}\mathbf{U}^{\top}\mathbf{Q}_{\mathcal{X}}^{-\top}, \\
\mathbf{M}^{(1)} = \mathbf{Q}_{\mathcal{X}}^{\top}\mathbf{Q}_{\mathcal{X}}, \textrm{ }
\mathbf{M}^{(2)} = \mathbf{Q}_{\mathcal{Y}}^{\top}\mathbf{Q}_{\mathcal{Y}}, \\
\mathbf{Q}_{\mathcal{X}}\mathbf{Q}_{\mathcal{Y}}^{\top} = \mathbf{U}\mathbf{D}\mathbf{V}^{\top},
\end{split}
\end{align}
where the matrices $\mathbf{Q}_{\mathcal{X}},\mathbf{Q}_{\mathcal{Y}} \in \mathbb{R}^{d \times d}$ define the decompositions of the data covariance matrices $\mathbf{M}^{(1)},\mathbf{M}^{(2)}$, and the diagonal matrix $\mathbf{D} \in \mathbb{R}^{d \times d}$ together with the orthogonal matrices $\mathbf{U},\mathbf{V} \in \mathbb{R}^{d \times d}$ defines the SVD decomposition of $\mathbf{Q}_{\mathcal{X}}\mathbf{Q}_{\mathcal{Y}}^{\top}$.
\end{lemma}
Interestingly, if we further constrain $\mathbf{A}$ to be diagonal, then the solution from Lemma \ref{lemma:arps} has exactly the same form as the one from Lemma \ref{lemma:arf-diag}. This is not that surprising since the corresponding objectives, even though not the same, are related: one works in log-space while the other keeps the linear terms of the Taylor series expansion. The question of finding the exact solution of the $n^{th}$-order relaxation for $n>1$ remains open.
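A sketch (ours) of Lemma \ref{lemma:arps}, using Cholesky factors for $\mathbf{Q}_{\mathcal{X}}, \mathbf{Q}_{\mathcal{Y}}$ (any factorization of the form $\mathbf{M} = \mathbf{Q}^\top \mathbf{Q}$ works); the final check confirms that the computed $\mathbf{A}^{*}$ does not lose to the identity matrix on the relaxed objective:
\begin{verbatim}
import numpy as np

def first_order_A(X, Y):
    # M1 = Qx^T Qx, M2 = Qy^T Qy via Cholesky (Qx, Qy upper-triangular).
    M1, M2 = X.T @ X / len(X), Y.T @ Y / len(Y)
    Qx = np.linalg.cholesky(M1).T
    Qy = np.linalg.cholesky(M2).T
    U, Dvals, _ = np.linalg.svd(Qx @ Qy.T)       # Qx Qy^T = U D V^T
    return np.diag(Dvals**0.5) @ U.T @ np.linalg.inv(Qx.T)

rng = np.random.default_rng(6)
L, d = 1024, 6
X = rng.normal(size=(L, d)) @ (np.eye(d) + 0.5 * rng.normal(size=(d, d)))
Y = rng.normal(size=(L, d))
A = first_order_A(X, Y)

# The 1st-order relaxed objective factorizes into a product of two sums:
relaxed = lambda B: np.sum((X @ B.T)**2) * np.sum((Y @ np.linalg.inv(B))**2)
print(relaxed(A) <= relaxed(np.eye(d)))          # optimum beats A = I_d
\end{verbatim}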
\section{Dense-Exponential Random Features} \label{sec:derf}
In this section, we propose \textit{dense-exponential random features (DERFs)}: an extension of GERFs where the scalars $A, B, C$ are replaced with dense matrices.
DERFs are effectively our most general variant and contain the previously introduced classes as special instantiations.
We define DERFs as follows: $\Omega_\mathrm{DE} = \mathbb{R}^d$, $\nu_\mathrm{DE} = \mathcal{N} (0, 1)^d$ and for $k \in \{ 1, 2 \}$:
\begin{equation*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x) \! = \! D \exp(\bs{\omega}^\top \*A \bs{\omega} + \bs{\omega}^\top \*B^{(k)} \*x + \*x^\top \*C^{(k)} \*x),
\end{equation*}
where $\*B^{(k)}, \*C^{(k)} \in \mathbb{R}^{d \times d}$, $D \in \mathbb{R}$, $\*A \in \mathbb{S}_d$ (the set of $d \times d$ real symmetric matrices). Clearly, GERFs with parameters $A, B, C, D$ can be expressed as DERFs with parameters $\*A = A \*I_d$, $\*B^{(1)} = \*B^{(2)} = B \*I_d$, $\*C^{(1)} = \*C^{(2)} = C \*I_d$ and $D$ unchanged. Our first theoretical result gives conditions under which $\mathcal{T}_\mathrm{DE} = \langle \nu_\mathrm{DE}, f^{(1)}_\mathrm{DE}, f^{(2)}_\mathrm{DE} \rangle$ are valid RFs:
\begin{theorem} \label{th:derf}
Let the following conditions hold:
\begin{gather*}
8 \*A \prec \*I_d, \quad (\*B^{(1)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(2)} = \*I_d, \quad \*C^{(k)} = \\
- \frac{1}{2} (\*B^{(k)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(k)}, \,\, D = \det (\*I_d - 4 \*A)^{1/4}
\end{gather*}
where $k \in \{ 1, 2 \}$. Then $\mathcal{T}_\mathrm{DE}$ are RFs for $K^{(0)}$ and, for all $\*x, \*y \in \mathbb{R}^d$:
\begin{align}
&\mathrm{Var}_{\nu_\mathrm{DE}} f^{(1)}_\mathrm{DE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{DE} (\bs{\omega}, \*y) = D^4 \det (\*I_d - 8 \*A)^{-1/2} \nonumber \\
&\times \exp\biggl(2 \*x^\top \left( \*C^{(1)} + (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(1)} \right) \*x \nonumber \\
&+ 2 \*y^\top \left( \*C^{(2)} + (\*B^{(2)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \right) \*y \nonumber \\
&+ \! 4 \*x^\top (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \*y \! \biggr) \! - \! K^{(0)} (\*x, \*y)^2 . \label{eq:devar}
\end{align}
\end{theorem}
Our ultimate goal is to find optimal parameters $\*A, \*B^{(k)}, \*C^{(k)}$ and $D$ minimizing the variance of the low-rank approximation of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ where the sets $\mathcal{X}, \mathcal{Y}$ are provided. Our first observation is that we can assume that $\*A \in \mathbb{D}_d$ (the set of $d \times d$ real diagonal matrices). Indeed, any symmetric $\*A$ can be expressed as $\*Q \widetilde{\*A} \*Q^\top$ where $\*Q \in \mathbb{O}_d$ (the set of orthogonal matrices $\{ \*Z \in \mathbb{R}^{d \times d} \, | \, \*Z^\top \*Z = \*I_d \}$) and $\widetilde{\*A} \in \mathbb{D}_d$. Let $\bs{\omega} \sim \mathcal{N} (\*0_d, \*I_d)$. Then, for any $\*x \in \mathbb{R}^d$, $k \in \{ 1, 2 \}$,
\begin{gather*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \! \*x) \! = \! D \exp(\bs{\omega}^\top \*Q \! \widetilde{\*A} \*Q^\top \bs{\omega} \! + \! \bs{\omega}^\top \*B^{(k)} \*x \! + \! \*x^\top \*C^{(k)} \*x) \\
= D \exp(\widetilde{\bs{\omega}}^\top \widetilde{\*A} \widetilde{\bs{\omega}} + \widetilde{\bs{\omega}}^\top \! \widetilde{\*B}^{(k)} \*x + \*x^\top \! \*C^{(k)} \*x) = \widetilde{f}^{(k)}_\mathrm{DE} (\widetilde{\bs{\omega}}, \*x)
\end{gather*}
where $\widetilde{\*B}^{(k)} = \*Q^\top \*B^{(k)}$, $\widetilde{\bs{\omega}} = \*Q^\top \bs{\omega}$, and $\widetilde{\bs{\omega}} \sim \mathcal{N} (\*0_d, \*I_d)$ since $\mathcal{N} (\*0_d, \*I_d)$ is \textit{isotropic}, i.e. rotation-invariant; $\widetilde{f}^{(k)}_\mathrm{DE}$ are DERFs with parameters $\widetilde{\*A}$, $\widetilde{\*B}^{(k)}$, $\*C^{(k)}$, $D$. We conclude that for any $\*A$, $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ can be expressed as DERFs $\widetilde{f}^{(k)}_\mathrm{DE}$ with $\widetilde{\*A} \in \mathbb{D}_d$. Hence, hereafter we consider $\*A \in \mathbb{D}_d$ only, without loss of generality.
Since $\*B^{(k)}, \*C^{(k)}$ are dense matrices in general, evaluation of $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ takes $O(d^2)$ time, which is bigger than the $O(d)$ complexity for TrigRFs, PosRFs and GERFs. However, the $\*P$ and $\*S$ matrices (\ref{eq:ps1}-\ref{eq:ps2}) can still be computed in $O(L M d)$. For that, precompute $(\*B^{(k)})^\top \bs{\omega}^{(m)}$, $\*C^{(1)} \*x^{(i)}$, $\*C^{(2)} \*y^{(j)}$ for all $k \in \{ 1, 2\}$, $1 \leq m \leq M$, $1 \leq i, j \leq L$ ($O((M + L) d^2)$). Then, computing $f^{(1)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*x^{(i)})$, $f^{(2)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$, $1 \leq m \leq M$ takes $O(L M d)$ operations. The complexity of constructing (\ref{eq:ps1}-\ref{eq:ps2}) is then $O (L (M d + d^2) + M d^2)$, or $O(L M d)$ assuming that $d \leq M \ll L$, which is the case in practice.
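The precomputation argument translates directly into vectorized code. The sketch below (ours) also verifies unbiasedness for one admissible parameter choice, constructing $\*B^{(2)}$ from $\*B^{(1)}$ so that the conditions of Theorem \ref{th:derf} hold:
\begin{verbatim}
import numpy as np

def derf_features(Z, A_diag, B, C, D, omegas):
    # O((L + M) d^2) precomputation:
    BZ = Z @ B.T                                 # rows hold B z_i
    quad_z = np.sum((Z @ C) * Z, axis=1)         # z_i^T C z_i
    quad_w = np.sum(A_diag * omegas**2, axis=1)  # w_m^T A w_m, diagonal A
    # ...then every feature value is one length-d dot product: O(L M d).
    return D * np.exp(quad_w[None, :] + BZ @ omegas.T + quad_z[:, None])

rng = np.random.default_rng(7)
d, M = 4, 200_000
a = np.full(d, -0.1)                             # diagonal A with 8A < I_d
B1 = np.eye(d) + 0.3 * rng.normal(size=(d, d))
B2 = np.diag(1 - 4*a) @ np.linalg.inv(B1).T      # B1^T (I-4A)^{-1} B2 = I_d
C1 = -0.5 * B1.T @ np.diag(1 / (1 - 4*a)) @ B1
C2 = -0.5 * B2.T @ np.diag(1 / (1 - 4*a)) @ B2
D = np.prod(1 - 4*a) ** 0.25
x, y = rng.normal(size=(1, d)) / 2, rng.normal(size=(1, d)) / 2
omegas = rng.normal(size=(M, d))
f1 = derf_features(x, a, B1, C1, D, omegas)
f2 = derf_features(y, a, B2, C2, D, omegas)
print(np.mean(f1 * f2), np.exp(x @ y.T).item())  # should roughly agree
\end{verbatim}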
Our goal is to minimize $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{DE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{DE})$ for $\bs{\theta}_\mathrm{DE} = \{ \*A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. However, we find that even for a single pair $\*x, \*y$ it's unclear how to minimize the variance (\ref{eq:devar}) in closed form. Hence, below we consider two quite general special cases where an analytic solution is feasible.
\subsection{Asymmetric Dense-Exponential Random Features}
Define RFs $\mathcal{T}_\mathrm{ADE} = \langle \nu_\mathrm{ADE}, f^{(1)}_\mathrm{ADE}, f^{(2)}_\mathrm{ADE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*A = A \*I_d$ where $A \in \mathbb{R}$. We refer to these RFs as \textit{asymmetric dense-exponential RFs (ADERFs)} since $f^{(1)}_\mathrm{ADE} \neq f^{(2)}_\mathrm{ADE}$ in general. The only additional restriction of ADERFs compared to DERFs is that all diagonal entries of $\*A \in \mathbb{D}_d$ are the same. The parameters of $\mathcal{T}_\mathrm{ADE}$ are $\bs{\theta}_\mathrm{ADE} = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. By $\Theta_\mathrm{ADE}$ we denote the set of all possible $\bs{\theta}_\mathrm{ADE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying the conditions from Theorem \ref{th:derf} with $\*A = A \*I_d$. The following result gives an analytic formula for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$. In the theorem, we use the notions of the SVD decomposition and the eigendecomposition of a symmetric matrix \citep{nla}.
\begin{theorem}
\label{th:aderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$. Let
\begin{equation*}
\*M^{(1)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} (\*x^{(i)})^\top, \quad \*M^{(2)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} (\*y^{(j)})^\top.
\end{equation*}
Suppose that $\*M^{(1)}, \*M^{(2)} \in \mathbb{S}_d$ are nonsingular. Define $\mu^{(3)} = d^{-1} L^{-2} \left( \sum_{i = 1}^L \*x^{(i)} \right)^\top \left( \sum_{j = 1}^L \*y^{(j)} \right) \in \mathbb{R}$. For $k \in \{ 1, 2 \}$, let $\*M^{(k)} = \*Q^{(k)} \bs{\Lambda}^{(k)} (\*Q^{(k)})^\top$ be the eigendecomposition of the symmetric $\*M^{(k)}$ where $\*Q^{(k)} \in \mathbb{O}_d$. $\bs{\Lambda}^{(k)} \in \mathbb{D}_d$ has strictly positive diagonal values since $\*M^{(k)} \succeq 0$ by definition and $\*M^{(k)}$ is nonsingular. Let $\*U \bs{\Sigma} \*V^\top$ be the SVD decomposition of $(\bs{\Lambda}^{(1)})^\frac12 (\*Q^{(1)})^\top \*Q^{(2)} (\bs{\Lambda}^{(2)})^\frac12$ where $\*U, \*V \in \mathbb{O}_d$ and $\bs{\Sigma} \in \mathbb{D}_d$ has nonnegative diagonal entries.
One of the solutions $\bs{\theta}_\mathrm{ADE}^* = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}$, $\*C^{(2)}, D \}$ of $\min_{\bs{\theta}_\mathrm{ADE} \in \Theta_\mathrm{ADE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$ is as follows. Set $\phi = 2 d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + 2 \mu^{(3)}$ and, for $k \in \{ 1, 2 \}$,
\begin{gather*}
A = \frac{1}{16} \left( 1 - 2 \phi - \sqrt{\left(2 \phi + 1\right)^2 + 8 \phi} \right), \\
\*B^{(1)} = \sqrt{1 - 4 A} \bs{\Sigma}^{1/2} \*U^\top (\bs{\Lambda}^{(1)})^{-1/2} (\*Q^{(1)})^\top, \\
\*B^{(2)} = \sqrt{1 - 4 A} \bs{\Sigma}^{-1/2} \*U^\top (\bs{\Lambda}^{(1)})^{1/2} (\*Q^{(1)})^\top, \\
\*C^{(k)} = -\frac{1}{2 (1 - 4 A)} (\*B^{(k)})^\top \*B^{(k)}, \quad D = (1 - 4 A)^{d / 4} .
\end{gather*}
Further, we have:
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}^*; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE}) \! = \! d \biggl( \log (1 \! - \! 4 A) - \frac{1}{2} \log(1 \! - \! 8 A) \nonumber \\
+ 2 (1 - 8 A)^{-1} \left( d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + \mu^{(3)} \right) + 2 \mu^{(3)} \biggr). \label{eq:adevar}
\end{gather}
\end{theorem}
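A sketch (ours) assembling $\bs{\theta}_\mathrm{ADE}^*$ exactly as in Theorem \ref{th:aderf} and checking the validity condition $(\*B^{(1)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(2)} = \*I_d$ from Theorem \ref{th:derf}:
\begin{verbatim}
import numpy as np

def aderf_params(X, Y):
    L, d = X.shape
    M1, M2 = X.T @ X / L, Y.T @ Y / L
    mu3 = X.mean(axis=0) @ Y.mean(axis=0) / d
    lam1, Q1 = np.linalg.eigh(M1)
    lam2, Q2 = np.linalg.eigh(M2)
    U, Sig, _ = np.linalg.svd(
        np.diag(lam1**0.5) @ Q1.T @ Q2 @ np.diag(lam2**0.5))
    phi = 2 * Sig.mean() + 2 * mu3
    A = (1 - 2*phi - np.sqrt((2*phi + 1)**2 + 8*phi)) / 16
    s = np.sqrt(1 - 4*A)
    B1 = s * np.diag(Sig**0.5)  @ U.T @ np.diag(lam1**-0.5) @ Q1.T
    B2 = s * np.diag(Sig**-0.5) @ U.T @ np.diag(lam1**0.5)  @ Q1.T
    C1 = -B1.T @ B1 / (2 * (1 - 4*A))
    C2 = -B2.T @ B2 / (2 * (1 - 4*A))
    return A, B1, B2, C1, C2, (1 - 4*A)**(d / 4)

rng = np.random.default_rng(8)
X, Y = rng.normal(size=(256, 8)), rng.normal(size=(256, 8))
A, B1, B2, C1, C2, D = aderf_params(X, Y)
print(np.allclose(B1.T @ B2 / (1 - 4*A), np.eye(8)))  # validity condition
\end{verbatim}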
\subsection{Symmetric Dense-Exponential Random Features}
Define $\mathcal{T}_\mathrm{SDE} = \langle \nu_\mathrm{SDE}, f^{(1)}_\mathrm{SDE}, f^{(2)}_\mathrm{SDE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*B^{(1)} = \*B^{(2)} = \*B$. From the conditions in Theorem \ref{th:derf} it follows immediately that also $\*C^{(1)} = \*C^{(2)} = \*C$. Hence, $f^{(1)}_\mathrm{SDE} = f^{(2)}_\mathrm{SDE}$ and we refer to these RFs as \textit{symmetric dense-exponential RFs (SDERFs)}. The parameters of $\mathcal{T}_\mathrm{SDE}$ are $\bs{\theta}_\mathrm{SDE} = \{ \*A, \*B, \*C, D \}$. By $\Theta_\mathrm{SDE}$ we denote the set of all possible $\bs{\theta}_\mathrm{SDE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying the conditions from Theorem \ref{th:derf} with $\*B^{(k)} = \*B$, $\*C^{(k)} = \*C$, $k \in \{ 1, 2 \}$. The following theorem gives an analytic solution for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$.
\begin{theorem}
\label{th:sderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$ and let $\*M^{(1)}$, $\*M^{(2)}$ be defined as in Theorem \ref{th:aderf} and define
\begin{equation*}
\bs{\mu}^{(4)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} \in \mathbb{R}^d, \quad \bs{\mu}^{(5)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} \in \mathbb{R}^d .
\end{equation*}
Further, let $\*Q^{(3)} \bs{\Lambda}^{(3)} (\*Q^{(3)})^\top$ be the eigendecomposition of the symmetric positive semidefinite matrix $\*M^{(1)} + \bs{\mu}^{(4)} (\bs{\mu}^{(5)})^\top + \bs{\mu}^{(5)} (\bs{\mu}^{(4)})^\top + \*M^{(2)}$ where $\*Q^{(3)} \in \mathbb{O}_d$ and $\bs{\Lambda}^{(3)} \in \mathbb{D}_d$ has nonnegative diagonal entries. Further, we assume that the entries on the diagonal of $\bs{\Lambda}^{(3)}$ are sorted in non-ascending order.
One of the solutions $\bs{\theta}_\mathrm{SDE}^* = \{ A, \*B, \*C, D \}$ of $\min_{\bs{\theta}_\mathrm{SDE} \in \Theta_\mathrm{SDE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$ is as follows. $\*A \in \mathbb{D}_d$, for all $1 \leq l \leq d$:
\begin{equation*}
\*A_{l,l} = \frac{1}{16} \left( 1 - 2 \bs{\Lambda}^{(3)}_{l,l} - \sqrt{\left(2 \bs{\Lambda}^{(3)}_{l,l} + 1\right)^2 + 8 \bs{\Lambda}^{(3)}_{l,l}} \right),
\end{equation*}
$\*B = (\*I_d - 4 \*A)^{1/2} (\*Q^{(3)})^\top$, $\*C = -\frac12 \*I_d$, $D = \det (\*I_d - 4 \*A)^{1/4}$. Further, we have:
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}^*; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE}) = \sum_{l = 1}^d \biggl( \log (1 - 4 \*A_{l,l}) \nonumber \\
- \frac12 \log (1 - 8 \*A_{l,l}) + \left(1 + (1 - 8 \*A_{l,l})^{-1}\right) \bs{\Lambda}^{(3)}_{l,l} \biggr) \nonumber \\
- L^{-1} \sum_{i = 1}^L \| \*x^{(i)} \|^2 - L^{-1} \sum_{j = 1}^L \| \*y^{(j)} \|^2 \label{eq:sdevar}
\end{gather}
\end{theorem}
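Analogously, a sketch (ours) of the SDERF construction of Theorem \ref{th:sderf}; here only a single eigendecomposition is required, and the final line checks the symmetric validity condition $\*B^\top (\*I_d - 4 \*A)^{-1} \*B = \*I_d$:
\begin{verbatim}
import numpy as np

def sderf_params(X, Y):
    L, d = X.shape
    M1, M2 = X.T @ X / L, Y.T @ Y / L
    mu4, mu5 = X.mean(axis=0), Y.mean(axis=0)
    S = M1 + np.outer(mu4, mu5) + np.outer(mu5, mu4) + M2
    lam, Q3 = np.linalg.eigh(S)
    lam, Q3 = lam[::-1], Q3[:, ::-1]    # non-ascending eigenvalues
    A = (1 - 2*lam - np.sqrt((2*lam + 1)**2 + 8*lam)) / 16  # diag of A
    B = np.diag(np.sqrt(1 - 4*A)) @ Q3.T
    C = -0.5 * np.eye(d)
    D = np.prod(1 - 4*A) ** 0.25
    return A, B, C, D

rng = np.random.default_rng(9)
X, Y = rng.normal(size=(256, 8)), rng.normal(size=(256, 8))
A, B, C, D = sderf_params(X, Y)
print(np.allclose(B.T @ np.diag(1 / (1 - 4*A)) @ B, np.eye(8)))  # True
\end{verbatim}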
\section{Correlated Random Features}
\subsection{Optimal Quasi Monte Carlo Estimators}
Eventually, all RFs from Sections \ref{sec:exist} and \ref{sec:derf} are used in MC schemes with independent $\bs{\omega}^{(1)}, \dots, \bs{\omega}^{(M)} \sim \mathcal{N}(\*0_d, \*I_d)$. In this section, we consider \textit{correlated} $\bs{\omega}^{(m)}$'s instead. That is, suppose the concatenation of all $\bs{\omega}^{(1)}, \dots, \bs{\omega}^{(M)}$ comes from a multivariate Gaussian distribution $\mathcal{N} (\*0_{Md}, \bs{\Theta})$ where $\bs{\Theta} \in \mathbb{R}^{(Md) \times (Md)}$,
\begin{equation}
\bs{\Theta} = \begin{bmatrix}
\*I_d & \bs{\Psi} & \dots & \bs{\Psi} \\
\bs{\Psi}^\top & \*I_d & \dots & \bs{\Psi} \\
\dots & \dots & \dots & \dots \\
\bs{\Psi}^\top & \dots & \bs{\Psi}^\top & \*I_d
\end{bmatrix} \label{eq:thetadef}
\end{equation}
where $\bs{\Psi} \in \mathbb{R}^{d \times d}$ is a correlation matrix between pairs of $\bs{\omega}^{(m)}$'s. If $\bs{\Psi} = \*0_{d \times d}$ (the matrix of $d \times d$ zeros), we recover the standard Monte Carlo scheme with independent $\bs{\omega}^{(m)}$'s. Otherwise, the $\bs{\omega}^{(m)}$'s are correlated and the corresponding scheme is referred to as \textit{quasi Monte Carlo (QMC)}. Note that, since the $\bs{\omega}^{(m)}$'s still marginally come from $\mathcal{N} (\*0_d, \*I_d)$, the QMC estimate is unbiased. Indeed, let $\*x, \*y \in \mathbb{R}^d$ and denote $Z (\bs{\omega}^{(m)}) = f^{(1)} (\bs{\omega}^{(m)}, \*x) f^{(2)} (\bs{\omega}^{(m)}, \*y)$. Then
\begin{equation*}
\mathbb{E} \left( M^{-1} \sum_{m = 1}^M Z (\bs{\omega}^{(m)}) \right) = \mathbb{E} Z (\bs{\omega}^{(1)}) = K^{(0)} (\*x, \*y) .
\end{equation*}
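For diagonal $\bs{\Psi}$, the $Md$-dimensional Gaussian $\mathcal{N} (\*0_{Md}, \bs{\Theta})$ factorizes across coordinates: coordinate $l$ of the $M$ samples is an equicorrelated Gaussian vector with covariance $(1 - \bs{\Psi}_{l,l}) \*I_M + \bs{\Psi}_{l,l} \*1_M \*1_M^\top$, and different coordinates are independent. A sampling sketch (ours; the per-coordinate Cholesky factorization is used for clarity, not speed) together with an empirical check of the marginals:
\begin{verbatim}
import numpy as np

def sample_correlated_omegas(M, psi_diag, rng):
    # Coordinate l of the M samples is an equicorrelated M-dim Gaussian
    # with covariance (1 - psi_l) I_M + psi_l 1_M 1_M^T; coordinates are
    # mutually independent because Psi is diagonal.
    d = len(psi_diag)
    W = np.empty((M, d))
    for l, psi in enumerate(psi_diag):
        assert -1.0 / (M - 1) <= psi < 1.0   # valid range (open at 1,
        # where the Gaussian degenerates and Cholesky fails)
        cov = (1 - psi) * np.eye(M) + psi * np.ones((M, M))
        W[:, l] = np.linalg.cholesky(cov) @ rng.normal(size=M)
    return W

rng = np.random.default_rng(10)
M, d = 8, 4
psi = np.full(d, -1.0 / (M - 1) + 1e-6)      # near-maximal negative coupling
pairs = np.stack([sample_correlated_omegas(M, psi, rng)
                  for _ in range(20_000)])
# Marginals stay N(0, I_d) (unbiasedness), while E[w^(1)_l w^(2)_l] = psi_l:
print(np.mean(pairs[:, 0] * pairs[:, 1], axis=0))  # approx psi
print(np.mean(pairs[:, 0], axis=0))                # approx 0
\end{verbatim}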
Consider the variance of this QMC estimator:
\begin{gather*}
\mathrm{Var} \left( M^{-1} \sum_{m = 1}^M Z (\bs{\omega}^{(m)}) \right) = M^{-2} \sum_{m = 1}^M \mathrm{Var} \, Z (\bs{\omega}^{(m)}) \\
+ M^{-2} \sum_{m_1 \neq m_2} \mathrm{Cov} (Z (\bs{\omega}^{(m_1)}), Z (\bs{\omega}^{(m_2)})) \\
= \frac{1}{M} \mathrm{Var} \, Z (\bs{\omega}^{(1)}) + \frac{M - 1}{M} \mathrm{Cov} (Z (\bs{\omega}^{(1)}), Z (\bs{\omega}^{(2)})) .
\end{gather*}
Here, the $\mathrm{Var} \, Z (\bs{\omega}^{(1)})$ term is constant since it depends only on the marginal distribution of $\bs{\omega}^{(1)}$. The term $\mathrm{Cov} (Z (\bs{\omega}^{(1)}), Z (\bs{\omega}^{(2)}))$ is zero for the MC estimator since $\bs{\omega}^{(1)}, \bs{\omega}^{(2)}$ are independent. Our goal is to make it negative in the QMC case. For the marginal distribution of $(\bs{\omega}^{(1)}, \bs{\omega}^{(2)})$, we have:
\begin{gather*}
\begin{bmatrix} \bs{\omega}^{(1)} \\ \bs{\omega}^{(2)} \end{bmatrix} \! \sim \! \mathcal{N} \! \left( \! \*0_{2d}, \begin{bmatrix} \*I_d & \bs{\Psi} \\ \bs{\Psi}^\top & \*I_d \end{bmatrix} \! \right), \, \mathrm{Cov} (Z (\bs{\omega}^{(1)}), \! Z (\bs{\omega}^{(2)})) \\
= \mathbb{E} \left( Z (\bs{\omega}^{(1)}) Z (\bs{\omega}^{(2)}) \right) - \left( \mathbb{E} Z (\bs{\omega}^{(1)}) \right) \left( \mathbb{E} Z (\bs{\omega}^{(2)}) \right) .
\end{gather*}
The second term depends only on the fixed marginal distributions of $\bs{\omega}^{(1)}$ and $\bs{\omega}^{(2)}$. Hence, our goal is to minimize the first term with respect to $\bs{\Psi}$. Denote $Z (\bs{\omega}) = Z_{\*x, \*y} (\bs{\omega})$ since $Z$ was defined for a fixed pair $\*x, \*y$. Our goal is to minimize $\mathbb{E} \left[ Z_{\*x,\*y} (\bs{\omega}^{(1)}) Z_{\*x,\*y} (\bs{\omega}^{(2)}) \right]$ for all $\*x \in \mathcal{X}$, $\*y \in \mathcal{Y}$. We find it hard to obtain an analytic formula for the minimum of the average $L^{-2} \sum_{\*x \in \mathcal{X}} \sum_{\*y \in \mathcal{Y}} \mathbb{E} \left[ Z_{\*x,\*y} (\bs{\omega}^{(1)}) Z_{\*x,\*y} (\bs{\omega}^{(2)}) \right]$. Instead, we discover that minimization of the following average logarithm is tractable: $\overline{\mathcal{M}}(\bs{\Psi}, \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{DE}) =$
\begin{equation}
L^{-2} \sum_{\*x \in \mathcal{X}, \*y \in \mathcal{Y}} \log \left( \mathbb{E} \left( Z_{\*x,\*y} (\bs{\omega}^{(1)}) Z_{\*x,\*y} (\bs{\omega}^{(2)}) \right) \right) . \label{eq:mdef}
\end{equation}
Here, we consider DERFs $f^{(1)} = f^{(1)}_\mathrm{DE}$, $f^{(2)} = f^{(2)}_\mathrm{DE}$ with fixed parameters $\bs{\theta}_\mathrm{DE}$ since DERFs generalize other positive RF variants (PosRFs, GERFs, ADERFs and SDERFs). The following result derives an expression for the optimal \textit{diagonal} matrix $\bs{\Psi} \in \mathbb{D}_d$:
\begin{theorem} \label{th:qmc}
Assume that $M > 1$ and let $\bs{\Psi} \in \mathbb{D}_d$. Then $\bs{\Theta}$ (\ref{eq:thetadef}) is a valid covariance matrix if and only if
\begin{equation}
\forall 1 \leq l \leq d : \quad - (M - 1)^{-1} \leq \bs{\Psi}_{l,l} \leq 1. \label{eq:psicond}
\end{equation}
Further, let $\bs{\theta}_\mathrm{DE} = \{ \*A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$ denote parameters of $\mathcal{T}_\mathrm{DE}$ satisfying conditions from Theorem \ref{th:derf} with $\*A \in \mathbb{D}_d$. The minimum $\bs{\Psi}^*$ of $\overline{\mathcal{M}}(\bs{\Psi}, \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{DE})$ with respect to $\bs{\Psi} \in \mathbb{D}_d$ satisfying (\ref{eq:psicond}) has the following form:
\begin{gather*}
\xi_l = \left(\*B^{(1)}_{l,:}\right)^\top \*M^{(1)} \*B^{(1)}_{l,:} + \left(\*B^{(2)}_{l,:}\right)^\top \*M^{(2)} \*B^{(2)}_{l,:} \\
+ 2 \left( \left(\*B^{(1)}_{l,:}\right)^\top \bs{\mu}^{(4)} \right) \left( \left(\*B^{(2)}_{l,:}\right)^\top \bs{\mu}^{(5)} \right)
\end{gather*}
where $\*M^{(1)}, \*M^{(2)}, \bs{\mu}^{(4)}, \bs{\mu}^{(5)}$ are defined as in Theorems \ref{th:aderf}, \ref{th:sderf}.
\end{theorem}
\begin{corollary}
\end{corollary}
\subsection{Sampling QMC Estimators Efficiently}
\section{Introduction}
\section{Prerequisites}
\subsection{Scaled Softmax Kernel and Random Features}
By the \textit{scaled softmax kernel} $K^{(\alpha)}: \mathbb{R}^d \times \mathbb{R}^d \to (0, +\infty)$, where $\alpha \in \mathbb{R}$, we denote a mapping defined as $K^{(\alpha)} (\*x, \*y) = \exp(\alpha \| \*x \|^2 + \*x^\top \*y + \alpha \| \*y \|^2)$ for all $\*x, \*y \in \mathbb{R}^d$ where $\| \cdot \|$ is an $L_2$-norm. Two important special cases of the scaled softmax kernel are 1) a \textit{Gaussian kernel} $K^{(-1/2)} (\*x, \*y) = \exp(- \| \*x - \*y \|^2 / 2)$ and 2) a \textit{softmax kernel} $K^{(0)} (\*x, \*y) = \exp(\*x^\top \*y)$. For two sets of vectors $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$ and $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$, by $\mathcal{K}(\mathcal{X}, \mathcal{Y}) \in \mathbb{R}^{L \times L}$ we denote a matrix where $\mathcal{K}^{(\alpha)}(\mathcal{X}, \mathcal{Y})_{i,j} = K^{(\alpha)}(\*x^{(i)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$.
In this paper, we will be interested in the problem of computing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ where $\mathcal{X}$, $\mathcal{Y}$ and a matrix $\*C \in \mathbb{R}^{L \times n}$ are provided as an input. A naive solution requires $O(L^2 (d + n))$ computations for constructing $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ ($O (L^2 d)$) and computing the matrix multiplication $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ ($O(L^2 n)$). Instead, we will use an efficient Monte-Carlo approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \times \*C$ using the following notion of \textit{random features (RFs) for the scaled softmax kernel}:
\begin{definition} \label{def:rfs}
By random features for the scaled softmax kernel $K^{(\alpha)}$, $\alpha \in \mathbb{R}$, we denote a triple $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ where $\nu$ is a probability distribution over random objects $\bs{\omega} \in \Omega$ and $f^{(i)} : \Omega \times \mathbb{R}^d \to \mathbb{R}$, $i \in \{ 1, 2 \}$, are such mappings that, for all $\*x, \*y \in \mathbb{R}^d$,
\begin{equation}
K^{(\alpha)} (\*x, \*y) = \mathbb{E}_{\nu} \left[ f^{(1)} (\bs{\omega}, \*x) f^{(2)} (\bs{\omega}, \*y) \right] . \label{eq:rfdec}
\end{equation}
\end{definition}
The decomposition of type (\ref{eq:rfdec}) can be used for an efficient unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$. Let $\bs{\omega}^{(1)}, \dots, \bs{\omega}^{(M)} \in \Omega$ be i.i.d. samples from $\nu$. Define matrices $\*P, \*S \in \mathbb{R}^{L \times M}$ where for all $1 \leq i, j \leq L$,
\begin{align}
\*P_{i,:} &= M^{-1/2}(f^{(1)} (\bs{\omega}^{(m)}, \*x^{(i)}))_{m = 1}^M, \label{eq:ps1} \\
\*S_{j,:} &= M^{-1/2}(f^{(2)} (\bs{\omega}^{(m)}, \*y^{(j)}))_{m = 1}^M \label{eq:ps2}
\end{align}
where $\*P_{i,:}, \*S_{j,:} \in \mathbb{R}^d$ are \textit{column vectors} corresponding to the rows of $\*P, \*S$. Then according to (\ref{eq:rfdec}), $\widehat{\mathcal{K}} = \*P \*S^\top$ is an unbiased \textit{Monte Carlo (MC)} approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y})$ on $M$ samples. The variance $\mathrm{Var} \, \widehat{\mathcal{K}}_{i,j} = M^{-1} \mathrm{Var}_\nu \re{ f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)})}$ of this approximation is inverse-proportional to $M$, hence $M$ is a tradeoff parameter between computations and precision. Now, $\widehat{\mathcal{K}} \*C$ is an unbiased approximation of $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$ but $\widehat{\mathcal{K}} = \*P \*S^\top$ is a rank-$M$ matrix, hence computing $\widehat{\mathcal{K}} \*C$ has $O(L M n)$ complexity. Assuming that sampling each $\bs{\omega}^{(m)}$ and computing $f^{(\cdot)} (\cdot, \cdot)$ are $O(d)$ operations, which is usually the case, precomputing $\*P$ and $\*S$ takes $O(L M d)$ computations, resulting in the total $O(L M (d + n))$ computational complexity. By choosing $M \ll L$, we obtain a significant reduction in computations compared to the exact variant: $O (L M (d + n)) \ll O (L^2 (d + n))$.
Operations of type $\mathcal{K}^{(\alpha)} (\mathcal{X}, \mathcal{Y}) \*C$, especially for the Gaussian kernel $\alpha = -1/2$, emerge in kernel SVM \citep{rfs}, kernel regression \cite{nadaraya,watson} and in physics in the form of a Gauss transform \citep{gausstr}. Another important application has recently emerged in the area of efficient Transformers and is discussed in the next section \citep{performer}.
\subsection{Random Features for Efficient Transformers} \label{sec:efftr}
RFs found a very prominent application in the area of efficient long-sequence Transformers \citep{performer}. Transformers rely on a self-attention block for propagating information between elements of the sequence. If the sequence length is $L$ and input matrices are denoted as $\*Q, \*K, \*V \in \mathbb{R}^{L \times d}$ (\textit{queries}, \textit{keys} and \textit{values}), then self-attention outputs the following matrix:
\begin{equation}
\*Y = \mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*V \in \mathbb{R}^{L \times d} \label{eq:att}
\end{equation}
where $\*1_L \in \mathbb{R}^L$ is a vector of all ones, $\mathrm{diag} (\cdot)$ returns a diagonal $(L \times L)$-sized matrix with the argument on the diagonal, $\mathcal{X} = \{ \*x^{(i)} = d^{-1 / 4} \*Q_{i,:} \in \mathbb{R}^d \}$, $\mathcal{Y} = \{ \*y^{(j)} = d^{-1 / 4} \*K_{j,:} \in \mathbb{R}^d \}$. Hence, substitution of $\widehat{\mathcal{K}}$ instead of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ in (\ref{eq:att}) reduces computational complexity from $O (L^2 d)$ to $O (L M d)$ ($n = d + 1$). $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ is the result of a \textit{softmax} operation performed on rows of $d^{-1/2} \*Q \*K^\top$.
\subsection{Existing Random Features for the Softmax Kernel} \label{sec:exist}
Representation (\ref{eq:rfdec}) is not unique and different RFs can be proposed for a single $K^{(\alpha)}$. Note that if $\langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for $K^{(0)}$, then $\langle \nu, \widehat{f}^{(1)}, \widehat{f}^{(2)} \rangle$ are RFs for $K^{(\alpha)}$ for $\alpha \in \mathbb{R}$ where $\widehat{f}^{(k)} (\bs{\omega}, \*x) = \exp(\alpha \| \*x \|^2) f^{(k)} (\bs{\omega}, \*x)$. Hence, hereafter we focus on the softmax kernel $K^{(0)}$ without loss of generality.
\citet{perf0} proposed to use \textit{trigonometric random features (TrigRFs)} from \citep{rfs} in the efficient Transformer application:
\begin{gather}
\Omega_\mathrm{trig} = \mathbb{R}^{d + 1}, \,\, \nu_\mathrm{trig} = \mathrm{Unif} ([0, 2 \pi]) \times \mathcal{N} (\*0_d, \*I_d)^d, \label{eq:trigrf1} \\
f^{(1)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*x) = \sqrt{2} \exp (\| \*x \|^2 / 2) \cos(\widetilde{\bs{\omega}}^\top \*x + \theta), \\
f^{(2)}_\mathrm{trig} ((\theta, \widetilde{\bs{\omega}}), \*y) = \sqrt{2} \exp(\| \*y \|^2 / 2) \cos(- \widetilde{\bs{\omega}}^\top \*y + \theta) \label{eq:trigrf3}
\end{gather}
where $\bs{\omega} = (\theta, \widetilde{\bs{\omega}})$, $\mathrm{Unif} (\cdot)$ denotes a uniform distribution on the argument set, $\mathcal{N} (\*0_d, \*I_d)$ is a multivariate Gaussian distribution with a mean $\*0_d$ (vector of $d$ zeros) and covariance matrix $\*I_d$ (identity matrix of size $d \times d$).
The next iteration of efficient attention approximators \citep{performer} encountered a problem with TrigRFs (\ref{eq:trigrf1}-\ref{eq:trigrf3}). The \textit{attention matrix} $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1} \mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ from (\ref{eq:att}) is \textit{right stochastic} meaning that its entries are nonnegative and each row sums to $1$ due to the normalizing term $\mathrm{diag} (\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y}) \*1_L)^{-1}$. However, since $f^{(\cdot)}_\mathrm{trig}$ can be arbitrary real numbers, $\*P, \*S$ (\ref{eq:ps1}-\ref{eq:ps2}) and, therefore, $\widehat{\mathcal{K}}$ can take negative values. Hence, $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ is not right stochastic in general and entries of $\widehat{\mathcal{K}} \*1_L$ can take very small and/or negative values resulting in unstable behaviour when inverting $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1}$. \citet{performer} therefore propose a new type of \textit{positive random features (PosRFs)} which have a form:
\begin{gather}
\Omega_\mathrm{pos} = \mathbb{R}^d, \quad \nu_\mathrm{pos} = \mathcal{N} (0, 1)^d, \label{eq:posrf1} \\
f^{(1)}_\mathrm{pos} (\bs{\omega}, \*x) = f^{(2)}_\mathrm{pos} (\bs{\omega}, \*x) = \exp (\bs{\omega}^\top \*x - \| \*x \|^2 / 2) \label{eq:posrf3}
\end{gather}
It's clear that such $f^{(\cdot)}_\mathrm{pos}$ only take strictly positive values resulting in the right stochastic $\mathrm{diag} (\widehat{\mathcal{K}} \*1_L)^{-1} \widehat{\mathcal{K}}$ and a stable Transformer training procedure.
\citet{crt} extend PosRFs and propose \textit{generalized exponential random features (GERFs)}\footnote{\citet{crt} define these RFs for $K^{(-1/2)}$ but we adapt them for $K^{(0)}$ using the trick mentioned above.} for $K^{(0)}$:
\begin{gather}
\Omega_\mathrm{GE} = \mathbb{R}^d, \quad \nu_\mathrm{GE} = \mathcal{N} (0, 1)^d, \quad f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) = \label{eq:gerf1} \\
f^{(2)}_\mathrm{GE} (\bs{\omega}, \*x) \! = \! D \exp (A \| \bs{\omega} \|^2 \! + \! B \bs{\omega}^\top \*x \! + \! C \| \*x \|^2 / 2), \label{eq:gerf3}
\end{gather}
where $A, B, C, D$ are real numbers\footnote{\citet{crt} consider a more generalized form when $A, B, C, D$ are complex with an additional parameter $s = \pm 1$, however only the subfamily (\ref{eq:gerf1}-\ref{eq:gerf3}) with $s = 1$ is proposed to use in the Transformer application.} satisfying:
\begin{equation*}
1 - 8 A > 0, \, B = \sqrt{1 - 4 A}, \, C = -\frac12, \, D = (1 - 4 A)^{\frac{d}{4}} .
\end{equation*}
\citet{crt} express $B, C, D$ through $A$ and find a close form equation for the variance of (\ref{eq:rfdec}):
\begin{gather}
\mathrm{Var}_{\nu_\mathrm{GE}} f^{(1)}_\mathrm{GE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{GE} (\bs{\omega}, \*y) \! = \! e^{\mathcal{L}_\mathrm{GE} (A, \*x, \*y)} \! - \! K^{(0)} (\*x, \*y)^2, \nonumber \\
\mathcal{L}_\mathrm{GE} (A, \*x, \*y) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) + \frac{2 (1 - 4 A)}{1 - 8 A} \label{eq:gevar1} \\
\times \| \*x + \*y \|^2 - \| \*x \|^2 - \| \*y \|^2 \label{eq:gevar2}
\end{gather}
The minimum variance corresponds to the minimum $ \mathcal{L} (A, \*x, \*y)$ since $K^{(0)} (\*x, \*y)^2$ doesn't depend on $A$. Since $\mathcal{L} (A, \*x, \*y)$ is defined for a single pair of $\*x, \*y$ and not for sets $\mathcal{X}, \mathcal{Y}$, \citet{crt} propose a \textit{homogeneity heuristic} when they replace $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ in (\ref{eq:gevar1}-\ref{eq:gevar2}) with its averages over $\mathcal{X}, \mathcal{Y}$: $L^{-2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2$, $L^{-1} \sum_i \| \*x^{(i)} \|^2$ and $L^{-1} \sum_j \| \*y^{(j)} \|^2$ respectively. This heuristic is based on the assumption that $\{ \*x^{(i)} \}$ and $\{ \*y^{(j)} \}$ are homogeneous and their statistics are tightly concentrated around the mean. After this substitution, the minimum of (\ref{eq:gevar1}-\ref{eq:gevar2}) with respect to $A$ can be found in a close form.
\section{The Objective Minimized by GERFs}
Our first contribution is showing that homogeneity heuristic adapted in GERFs is actually an analytic solution of a certain optimization problem. Define
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T}) = L^{-2} \sum_{1 \leq i,j \leq L} \log (\mathrm{Var}_\nu [f^{(1)} (\bs{\omega}, \*x^{(i)}) \nonumber \\
\times f^{(2)} (\bs{\omega}, \*y^{(j)})] + K^{(0)} (\*x^{(i)}, \*y^{(j)})^2 ) \label{eq:ldef}
\end{gather}
where $\mathcal{T} = \langle \nu, f^{(1)}, f^{(2)} \rangle$ are RFs for the kernel $K^{(0)}$ and $\bs{\theta}$ are their parameters.
(\ref{eq:ldef}) is a mean log-variances shifted by $K^{(0)}(\*x^{(i)}, \*y^{(j)})^2$. The best possible value of (\ref{eq:ldef}) is $\log K^{(0)} (\*x^{(i)}, \*y^{(j)})$ which corresponds to all variances $\mathrm{Var} f^{(1)} (\bs{\omega}, \*x^{(i)}) f^{(2)} (\bs{\omega}, \*y^{(j)})$ being zero, meaning that RFs provide exact kernel estimation. Hence, minimization of (\ref{eq:ldef}) leads more precise estimators on $\mathcal{X}, \mathcal{Y}$. We call the loss function $\overline{\mathcal{L}} (\bs{\theta}; \mathcal{X}, \mathcal{Y}, \mathcal{T})$ the \textit{shifted log-variance} objective.
If $\mathcal{T}_\mathrm{GE} = \langle \nu_\mathrm{GE}, f^{(1)}_\mathrm{GE}, f^{(2)}_\mathrm{GE} \rangle$ are taken in (\ref{eq:ldef}), then $\bs{\theta}_\mathrm{GE} = \{ A, B, C, D \}$ and $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = L^{-2} \sum_{i,j} \mathcal{L}_\mathrm{GE} (A; \*x^{(i)}, \*y^{(j)})$. Using (\ref{eq:gevar1}-\ref{eq:gevar2}), we get:
\begin{gather*}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{GE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{GE}) = d \log \left( \frac{1 - 4 A}{\sqrt{1 - 8 A}} \right) +
\frac{2 (1 - 4 A)}{1 - 8 A} \\
\times \frac{1}{L^2} \sum_{i,j} \| \*x^{(i)} + \*y^{(j)} \|^2 \! - \! \frac{1}{L} \sum_i \| \*x^{(i)} \|^2 \! - \! \frac{1}{L} \sum_j \| \*y^{(j)} \|^2 .
\end{gather*}
That is, $\mathcal{L} (A; \mathcal{X}, \mathcal{Y})$ coincides with (\ref{eq:gevar1}-\ref{eq:gevar2}) when $\| \*x + \*y \|^2$, $\| \*x \|^2$, $\| \*y \|^2$ are replaced by their average statistics computed on $\mathcal{X}, \mathcal{Y}$. Hence, the homogeneity heuristic is nothing but minimization of (\ref{eq:ldef}). While in general it's unclear how to find a close-form optimum of $\mathrm{Var} \widehat{\mathcal{K}}$ or $\mathrm{Var} (\widehat{\mathcal{K}} \*C)$, the global minimum of (\ref{eq:ldef}) is feasible and can be computed in $O(1)$ time. Further, \citet{crt} show that optimization of (\ref{eq:ldef}) leads to very good results in large-scale applications of efficient Transformers.
The advantage of the shifted log-variance objective is that it can be efficiently optimized. Another natural choice for the loss function is the regular mean squared error (MSE) loss given for $X(\mathbf{x}^{(i)},\mathbf{y}^{(j)})=f^{(1)}(\bs{\omega},\mathbf{x}^{(i)}) \times f^{(2)}(\bs{\omega},\mathbf{y}^{(j)})$ as:
\begin{equation}
\mathcal{L}_{\mathrm{MSE}}(\theta; \mathcal{X}, \mathcal{Y}, \mathcal{T}) = \sum _{1 \leq i,j\leq L} \mathrm{Var}_{\nu}[X(\mathbf{x}^{(i)},\mathbf{y}^{(j)})]
\end{equation}
In Section \ref{sec:arf}, we will show the results obtained by using also this objective.
\section{Towards DERFs - Asymmetric RFs (ARFs)}
\label{sec:arf}
Note that for GERF we have: $f_{\mathrm{GE}}^{(1)} = f_{\mathrm{GE}}^{(2)}$. Our first step to go beyond GERF is to give up this symmetry for the softmax kernel ($\alpha=0$). We rely on a simple observation (see for instance: \cite{hrfs}) that $K^{(0)}(\mathbf{x},\mathbf{y})=K^{(0)}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})$, where: $\mathbf{x}^{\prime}=\mathbf{A}\mathbf{x}$ and $\mathbf{y}^{\prime}=\mathbf{A}^{-T}\mathbf{y}$ for the given invertible $\mathbf{A} \in \mathbb{R}^{d \times d}$. Thus effectively, we have introduced an additional set of parameters to the RF mechanism (encoded by a matrix $\mathbf{A}$) that can be tuned. One way of choosing $\mathbf{A}$ is to minimize the objective given by Equation \ref{eq:ldef}. The following is true:
\begin{lemma}[optimizing asymmetric RFs]
An invertible matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$ minimizing the objective given in Equation \ref{eq:ldef} for $\alpha=0$ satisfies the following matrix equation:
\begin{equation}
\label{arf-eq}
\mathbf{A} \cdot \mathbf{M}^{(1)} \cdot \mathbf{A} = (\mathbf{A}^{-2})^{\top} \cdot \mathbf{M}^{(2)},
\end{equation}
where $\mathbf{M}^{(1)}$ and $\mathbf{M}^{(2)}$ are data covariance matrices:
\begin{equation}
\mathbf{M}^{(1)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} (\*x^{(i)})^\top, \*M^{(2)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} (\*y^{(j)})^\top.
\end{equation}
\end{lemma}
A prominent special instantiation of the above mechanism is for $\mathbf{A} \in\mathbb{R}^{d \times d}$ constrained to be diagonal. We have:
\begin{lemma} [asymmetric RFs with diagonal $\mathbf{A}$]
\label{lemma:arf-diag}
A diagonal matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$ with diagonal $(a_{1},...,a_{d})$ and minimizing the objective given in Equation \ref{eq:ldef} for $\alpha=0$ satisfies:
\begin{equation}
a_{j} = \mp \left(\frac{\sum_{i=1}^{L}(\mathbf{x}^{i}_{j})^{2}}{\sum_{i=1}^{L}(\mathbf{y}^{i}_{j})^{2}}\right)^{\frac{1}{4}}
\end{equation}
\end{lemma}
It is easy to notice that the solution from Lemma \ref{lemma:arf-diag} can be obtained from Eq. \ref{arf-eq} by zeroing out non-diagonal entries of $\mathbf{M}^{(1)}$ and $\mathbf{M}^{(2)}$.
We refer to the class of the above RF methods shortly as \textit{asymmetric random features} or (ARFs).
\subsection{MSE-guided optimization of ARFs}
Now let us consider the MSE-objective and see how we can optimize $\mathbf{A}$ accordingly. We have for $\rho=\frac{1}{1-8A}$:
\begin{align}
\begin{split}
\mathcal{L}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) =
\left(\frac{\rho+1}{2\sqrt{\rho}}\right)^{d}
\sum_{1 \leq i,j \leq L} \exp(2(\mathbf{x}^{(i)})^{\top}\mathbf{y}^{(j)}) \cdot \\ [\exp(\rho\|\mathbf{A}\mathbf{x}^{(i)}+(\mathbf{A}^{-1})^{\top}\mathbf{y}^{(j)}\|_{2}^{2})-1]
\end{split}
\end{align}
To optimize $\mathcal{L}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y})$ with respect to $\mathbf{A}$, it thus suffices to minimize the following objective for $\tau^{i,j}=\exp(2(\mathbf{x}^{(i)})^{\top}\mathbf{y}^{(j)})$:
\begin{equation}
\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) = \sum_{1 \leq i,j \leq L} \tau^{i,j}\exp(\rho\|\mathbf{A}\mathbf{x}^{(i)}+(\mathbf{A}^{-\top})\mathbf{y}^{(j)}\|_{2}^{2})
\end{equation}
Our first relaxation is to instead minimize the upper bound on $\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y})$ obtained by applying Cauchy-Schwartz inequality. We get:
\begin{align}
\begin{split}
\widehat{\mathcal{L}}_{\mathrm{MSE}}(\mathcal{X},\mathcal{Y}) \leq
\sum_{1 \leq i \leq L} \exp((1+2\rho)\|\mathbf{A}\mathbf{x}^{(i)}\|^{2}_{2}) \cdot \\ \sum_{1 \leq j \leq L} \exp((1+2\rho)\|\mathbf{A}^{-\top}\mathbf{y}^{(j)}\|^{2}_{2}) = \\
\sum_{1 \leq i,j \leq L} \sum_{k,l \in \{0,1,...\}}
\frac{(1+2\rho)^{k+l}\|\mathbf{A}\mathbf{x}^{(i)}\|_{2}^{2k}
\|\mathbf{A}^{-\top}\mathbf{y}^{(y)}\|^{2l}_{2}}{k! l!}
\end{split}
\end{align}
We will assume that $A<0$ which implies that $\rho>0$ and consequently that all the terms of the above sum are non-negative. By further relaxing the original optimization problem, we might aim to minimize the highest-order elements of the truncated (with respect to $k,l$) version of the above sum. Thus effectively we can define the following \textit{$n^{th}$-order relaxation} of the original optimization problem:
\begin{equation}
\min_{\mathbf{A} \in \mathbb{R}^{d \times d}} \sum_{1 \leq i,j \leq L} \|\mathbf{A}\mathbf{x}^{(i)}\|_{2}^{2n}
\|\mathbf{A}^{-\top}\mathbf{y}^{(y)}\|^{2n}_{2}
\end{equation}
For $n=1$ this problem has the closed-form solution that can be efficiently computed.
\begin{lemma}[\cite{arps}]
\label{lemma:arps}
The solution $\mathbf{A}^{*}$ of the 1st-order relaxation is of the following form:
\begin{align}
\begin{split}
\mathbf{A}^{*}= \mathbf{D}^{\frac{1}{2}}\mathbf{U}^{\top}\mathbf{Q}_{\mathcal{X}}^{-\top}, \\
\mathbf{M}^{(1)} = \mathbf{Q}_{\mathcal{X}}^{\top}\mathbf{Q}_{\mathcal{X}}, \textrm{ }
\mathbf{M}^{(2)} = \mathbf{Q}_{\mathcal{Y}}^{\top}\mathbf{Q}_{\mathcal{Y}}, \\
\mathbf{Q}_{\mathcal{X}}\mathbf{Q}_{\mathcal{Y}}^{\top} = \mathbf{U}\mathbf{D}\mathbf{V}^{\top},
\end{split}
\end{align}
where matrices $\mathbf{Q}_{\mathcal{X}},\mathbf{Q}_{\mathcal{Y}} \in \mathbb{R}^{d \times d}$ define the decomposition of the data covariance matrices $\mathbf{M}^{(1)},\mathbf{M}^{(2)}$ and furthermore: diagonal matrix $\mathbf{D} \in \mathbb{R}^{d \times d}$ and orthogonal matrices $\mathbf{U},\mathbf{V} \in \mathbb{R}^{d \times d}$ define the SVD-decomposition of $\mathbf{Q}_{\mathcal{X}}\mathbf{Q}_{\mathcal{Y}}^{\top}$.
\end{lemma}
Interestingly, if we further constrain $\mathbf{A}$ to be diagonal, then the solution from Lemma \ref{lemma:arps} has exactly the same form as the one from Lemma \ref{lemma:arf-diag}. This is not that surprising since the corresponding objectives, even though not the same, are related as: one works in the $\mathrm{log}$-space and the other takes the linear terms of the Taylor series expansion. The question of finding the exact solution for the $n^{th}$-order relaxation for $n>1$ remains open.
\section{Dense-Exponential Random Features} \label{sec:derf}
In this section, we propose \textit{dense-exponential random features (DERFs)}: an extension of GERFs where scalars $A, B, C$ are replaced with dense matrices.
DERFs are effectively our most general variant that contains previously introduced classes as its special instantiations.
We define DERFs as follows: $\Omega_\mathrm{DE} = \mathbb{R}^d$, $\nu_\mathrm{DE} = \mathcal{N} (0, 1)^d$ and for $k \in \{ 1, 2 \}$:
\begin{equation*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x) \! = \! D \exp(\bs{\omega}^\top \*A \bs{\omega} + \bs{\omega}^\top \*B^{(k)} \*x + \*x^\top \*C^{(k)} \*x),
\end{equation*}
where $\*B^{(k)}, \*C^{(k)} \in \mathbb{R}^{d \times d}$, $D \in \mathbb{R}$, $\*A \in \mathbb{S}_d$ (the set of $d \times d$ real symmetric matrices). Clearly, GERFs with parameters $A, B, C, D$ can be expressed via DERFs with parameters $\*A = A \*I_d$, $\*B^{(1)} = \*B^{(2)} = B \*I_d$, $\*C^{(1)} = \*C^{(2)} = C \*I_d$, with $D$ unchanged. Our first theoretical result gives the conditions under which $\mathcal{T}_\mathrm{DE} = \langle \nu_\mathrm{DE}, f^{(1)}_\mathrm{DE}, f^{(2)}_\mathrm{DE} \rangle$ are valid RFs:
\begin{theorem} \label{th:derf}
Let the following conditions hold:
\begin{gather*}
8 \*A \prec \*I_d, \quad (\*B^{(1)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(2)} = \*I_d, \quad \*C^{(k)} = \\
- \frac{1}{2} (\*B^{(k)})^\top (\*I_d - 4 \*A)^{-1} \*B^{(k)}, \,\, D = \det (\*I_d - 4 \*A)^{1/4}
\end{gather*}
where $k \in \{ 1, 2 \}$. Then $\mathcal{T}_\mathrm{DE}$ are RFs for $K^{(0)}$ and, for all $\*x, \*y \in \mathbb{R}^d$:
\begin{align}
&\mathrm{Var}_{\nu_\mathrm{DE}} f^{(1)}_\mathrm{DE} (\bs{\omega}, \*x) f^{(2)}_\mathrm{DE} (\bs{\omega}, \*y) = D^4 \det (\*I_d - 8 \*A)^{-1/2} \nonumber \\
&\times \exp\biggl(2 \*x^\top \left( \*C^{(1)} + (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(1)} \right) \*x \nonumber \\
&+ 2 \*y^\top \left( \*C^{(2)} + (\*B^{(2)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \right) \*y \nonumber \\
&+ \! 4 \*x^\top (\*B^{(1)})^\top (\*I_d - 8 \*A)^{-1} \*B^{(2)} \*y \! \biggr) \! - \! K^{(0)} (\*x, \*y)^2 . \label{eq:devar}
\end{align}
\end{theorem}
Our ultimate goal is to find optimal parameters $\*A, \*B^{(k)}, \*C^{(k)}$ and $D$ minimizing the variance of the low-rank approximation of $\mathcal{K}^{(0)} (\mathcal{X}, \mathcal{Y})$ where the sets $\mathcal{X}, \mathcal{Y}$ are provided. Our first observation is that we can assume that $\*A \in \mathbb{D}_d$ (the set of $d \times d$ real diagonal matrices). Indeed, any symmetric $\*A$ can be expressed as $\*Q \widetilde{\*A} \*Q^\top$ where $\*Q \in \mathbb{O}_d$ (the set of orthogonal matrices $\{ \*Z \in \mathbb{R}^{d \times d} \, | \, \*Z^\top \*Z = \*I_d \}$) and $\widetilde{\*A} \in \mathbb{D}_d$. Let $\bs{\omega} \sim \mathcal{N} (\*0_d, \*I_d)$. Then, for any $\*x \in \mathbb{R}^d$, $k \in \{ 1, 2 \}$,
\begin{gather*}
f^{(k)}_\mathrm{DE} (\bs{\omega}, \! \*x) \! = \! D \exp(\bs{\omega}^\top \*Q \! \widetilde{\*A} \*Q^\top \bs{\omega} \! + \! \bs{\omega}^\top \*B^{(k)} \*x \! + \! \*x^\top \*C^{(k)} \*x) \\
= D \exp(\widetilde{\bs{\omega}}^\top \widetilde{\*A} \widetilde{\bs{\omega}} + \widetilde{\bs{\omega}}^\top \! \widetilde{\*B}^{(k)} \*x + \*x^\top \! \*C^{(k)} \*x) = \widetilde{f}^{(k)}_\mathrm{DE} (\widetilde{\bs{\omega}}, \*x)
\end{gather*}
where $\widetilde{\*B}^{(k)} = \*Q^\top \*B^{(k)}$, $\widetilde{\bs{\omega}} = \*Q^\top \bs{\omega}$, $\widetilde{\bs{\omega}} \sim \mathcal{N} (\*0_d, \*I_d)$ since $\mathcal{N} (\*0_d, \*I_d)$ is \textit{isotropic}, i.e. rotation-invariant, and $\widetilde{f}^{(k)}_\mathrm{DE}$ are DERFs with parameters $\widetilde{\*A}$, $\widetilde{\*B}^{(k)}$, $\*C^{(k)}$, $D$. We conclude that, for any $\*A$, $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ can be expressed as DERFs $\widetilde{f}^{(k)}_\mathrm{DE}$ with $\widetilde{\*A} \in \mathbb{D}_d$. Hence, hereafter we consider $\*A \in \mathbb{D}_d$ only, without loss of generality.
Since $\*B^{(k)}, \*C^{(k)}$ are dense matrices in general, evaluation of $f^{(k)}_\mathrm{DE} (\bs{\omega}, \*x)$ takes $O(d^2)$ time, which is bigger than the $O(d)$ complexity of TrigRFs, PosRFs and GERFs. However, the $\*P$ and $\*S$ matrices (\ref{eq:ps1}-\ref{eq:ps2}) can still be computed in $O(L M d)$. For that, precompute $(\*B^{(k)})^\top \bs{\omega}^{(m)}$, $\*C^{(1)} \*x^{(i)}$, $\*C^{(2)} \*y^{(j)}$ for all $k \in \{ 1, 2\}$, $1 \leq m \leq M$, $1 \leq i, j \leq L$ ($O((M + L) d^2)$). Then, computing $f^{(1)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*x^{(i)})$, $f^{(2)}_\mathrm{DE} (\bs{\omega}^{(m)}, \*y^{(j)})$ for all $1 \leq i, j \leq L$, $1 \leq m \leq M$ takes $O(L M d)$ operations. The complexity of constructing (\ref{eq:ps1}-\ref{eq:ps2}) is then $O (L (M d + d^2) + M d^2)$, or $O(L M d)$ assuming that $d \leq M \ll L$, which is the case in practice.
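As an illustration, this precomputation strategy can be sketched as follows (NumPy; the function and variable names are ours, and $\*A$ is taken diagonal as justified above):
\begin{verbatim}
import numpy as np

def derf_feature_matrices(Omega, X, Y, A_diag, B1, B2, C1, C2, D):
    # Omega: (M, d) Gaussian samples; X, Y: (L, d) inputs;
    # A_diag: (d,) diagonal of A.
    quad_w = np.einsum('md,d,md->m', Omega, A_diag, Omega)  # w^T A w
    WB1 = Omega @ B1      # rows give w^T B^(1);  O(M d^2) precompute
    WB2 = Omega @ B2
    qx = np.einsum('ld,ld->l', X @ C1, X)   # x^T C^(1) x;  O(L d^2)
    qy = np.einsum('ld,ld->l', Y @ C2, Y)
    F1 = D * np.exp(quad_w[:, None] + WB1 @ X.T + qx[None, :])
    F2 = D * np.exp(quad_w[:, None] + WB2 @ Y.T + qy[None, :])
    return F1, F2    # (M, L) evaluations of f^(1), f^(2)
\end{verbatim}
The matrix products above cost $O(LMd)$ once the $O((M+L)d^2)$ quantities are cached, matching the operation count given in the text.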
Our goal is to minimize $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{DE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{DE})$ for $\bs{\theta}_\mathrm{DE} = \{ \*A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. However, we find that even for a single pair $\*x, \*y$ it is unclear how to minimize the variance (\ref{eq:devar}) in closed form. Hence, below we consider two rather general special cases where an analytic solution is feasible.
\subsection{Asymmetric Dense-Exponential Random Features}
Define RFs $\mathcal{T}_\mathrm{ADE} = \langle \nu_\mathrm{ADE}, f^{(1)}_\mathrm{ADE}, f^{(2)}_\mathrm{ADE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*A = A \*I_d$ where $A \in \mathbb{R}$. We refer to these RFs as \textit{asymmetric dense-exponential RFs (ADERFs)} since $f^{(1)}_\mathrm{ADE} \neq f^{(2)}_\mathrm{ADE}$ in general. The only additional restriction of ADERFs compared to DERFs is that all diagonal entries of $\*A \in \mathbb{D}_d$ are the same. The parameters of $\mathcal{T}_\mathrm{ADE}$ are $\bs{\theta}_\mathrm{ADE} = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}, \*C^{(2)}, D \}$. Denote by $\Theta_\mathrm{ADE}$ the set of all possible $\bs{\theta}_\mathrm{ADE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying the conditions from Theorem \ref{th:derf} with $\*A = A \*I_d$. The following result gives an analytic formula for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$. In the theorem, we use the notions of the SVD decomposition and the eigendecomposition of a symmetric matrix \citep{nla}.
\begin{theorem}
\label{th:aderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$. Let
\begin{equation*}
\*M^{(1)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} (\*x^{(i)})^\top, \quad \*M^{(2)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} (\*y^{(j)})^\top.
\end{equation*}
Suppose that $\*M^{(1)}, \*M^{(2)} \in \mathbb{S}_d$ are nonsingular. Define $\mu^{(3)} = d^{-1} L^{-2} \left( \sum_{i = 1}^L \*x^{(i)} \right)^\top \left( \sum_{j = 1}^L \*y^{(j)} \right) \in \mathbb{R}$. For $k \in \{ 1, 2 \}$, let $\*M^{(k)} = \*Q^{(k)} \bs{\Lambda}^{(k)} (\*Q^{(k)})^\top$ be the eigendecomposition of the symmetric $\*M^{(k)}$, where $\*Q^{(k)} \in \mathbb{O}_d$. $\bs{\Lambda}^{(k)} \in \mathbb{D}_d$ has strictly positive diagonal values since $\*M^{(k)} \succeq 0$ by definition and $\*M^{(k)}$ is nonsingular. Let $\*U \bs{\Sigma} \*V^\top$ be the SVD decomposition of $(\bs{\Lambda}^{(1)})^\frac12 (\*Q^{(1)})^\top \*Q^{(2)} (\bs{\Lambda}^{(2)})^\frac12$, where $\*U, \*V \in \mathbb{O}_d$ and $\bs{\Sigma} \in \mathbb{D}_d$ has nonnegative diagonal entries.
One of the solutions $\bs{\theta}_\mathrm{ADE}^* = \{ A, \*B^{(1)}, \*B^{(2)}, \*C^{(1)}$, $\*C^{(2)}, D \}$ of $\min_{\bs{\theta}_\mathrm{ADE} \in \Theta_\mathrm{ADE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE})$ is as follows. Set $\phi = 2 d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + 2 \mu^{(3)}$ and, for $k \in \{ 1, 2 \}$,
\begin{gather*}
A = \frac{1}{16} \left( 1 - 2 \phi - \sqrt{\left(2 \phi + 1\right)^2 + 8 \phi} \right), \\
\*B^{(1)} = \sqrt{1 - 4 A} \bs{\Sigma}^{1/2} \*U^\top (\bs{\Lambda}^{(1)})^{-1/2} (\*Q^{(1)})^\top, \\
\*B^{(2)} = \sqrt{1 - 4 A} \bs{\Sigma}^{-1/2} \*U^\top (\bs{\Lambda}^{(1)})^{1/2} (\*Q^{(1)})^\top, \\
\*C^{(k)} = -\frac{1}{2 (1 - 4 A)} (\*B^{(k)})^\top \*B^{(k)}, \quad D = (1 - 4 A)^{d / 4} .
\end{gather*}
Further, we have:
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{ADE}^*; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{ADE}) \! = \! d \biggl( \log (1 \! - \! 4 A) - \frac{1}{2} \log(1 \! - \! 8 A) \nonumber \\
+ 2 (1 - 8 A)^{-1} \left( d^{-1} \sum_{l = 1}^d \bs{\Sigma}_{l,l} + \mu^{(3)} \right) + 2 \mu^{(3)} \biggr). \label{eq:adevar}
\end{gather}
\end{theorem}
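In code, the construction of Theorem \ref{th:aderf} reduces to two eigendecompositions and one SVD. The following NumPy sketch (names are ours) assumes $\*M^{(1)}, \*M^{(2)}$ nonsingular, so that $\bs{\Sigma}$ has strictly positive diagonal:
\begin{verbatim}
import numpy as np

def aderf_parameters(X, Y):
    L, d = X.shape
    M1 = X.T @ X / L
    M2 = Y.T @ Y / L
    mu3 = (X.sum(0) @ Y.sum(0)) / (d * L**2)
    lam1, Q1 = np.linalg.eigh(M1)    # M1 = Q1 diag(lam1) Q1^T
    lam2, Q2 = np.linalg.eigh(M2)
    core = np.diag(lam1**0.5) @ Q1.T @ Q2 @ np.diag(lam2**0.5)
    U, Sig, Vt = np.linalg.svd(core)
    phi = 2.0 * Sig.mean() + 2.0 * mu3
    A = (1.0 - 2.0*phi - np.sqrt((2.0*phi + 1.0)**2 + 8.0*phi)) / 16.0
    s = np.sqrt(1.0 - 4.0*A)
    B1 = s * np.diag(Sig**0.5)  @ U.T @ np.diag(lam1**-0.5) @ Q1.T
    B2 = s * np.diag(Sig**-0.5) @ U.T @ np.diag(lam1**0.5)  @ Q1.T
    C1 = -0.5 / (1.0 - 4.0*A) * (B1.T @ B1)
    C2 = -0.5 / (1.0 - 4.0*A) * (B2.T @ B2)
    D = (1.0 - 4.0*A) ** (d / 4.0)
    return A, B1, B2, C1, C2, D
\end{verbatim}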
\subsection{Symmetric Dense-Exponential Random Features}
Define $\mathcal{T}_\mathrm{SDE} = \langle \nu_\mathrm{SDE}, f^{(1)}_\mathrm{SDE}, f^{(2)}_\mathrm{SDE} \rangle$ in the same way as $\mathcal{T}_\mathrm{DE}$ with the only difference that $\*B^{(1)} = \*B^{(2)} = \*B$. From the conditions in Theorem \ref{th:derf} it follows immediately that also $\*C^{(1)} = \*C^{(2)} = \*C$. Hence, $f^{(1)}_\mathrm{SDE} = f^{(2)}_\mathrm{SDE}$ and we refer to these RFs as \textit{symmetric dense-exponential RFs (SDERFs)}. The parameters of $\mathcal{T}_\mathrm{SDE}$ are $\bs{\theta}_\mathrm{SDE} = \{ \*A, \*B, \*C, D \}$. Denote by $\Theta_\mathrm{SDE}$ the set of all possible $\bs{\theta}_\mathrm{SDE}$'s resulting in correct RFs for the kernel $K^{(0)}$, i.e. satisfying the conditions from Theorem \ref{th:derf} with $\*B^{(k)} = \*B$, $\*C^{(k)} = \*C$, $k \in \{ 1, 2 \}$. The following theorem gives an analytic solution for a global minimum of $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$.
\begin{theorem}
\label{th:sderf}
Let $\mathcal{X} = \{ \*x^{(i)} \in \mathbb{R}^d \}_{i = 1}^L$, $\mathcal{Y} = \{ \*y^{(j)} \in \mathbb{R}^d \}_{j = 1}^L$, let $\*M^{(1)}$, $\*M^{(2)}$ be defined as in Theorem \ref{th:aderf}, and define
\begin{equation*}
\bs{\mu}^{(4)} = \frac{1}{L} \sum_{i = 1}^L \*x^{(i)} \in \mathbb{R}^d, \quad \bs{\mu}^{(5)} = \frac{1}{L} \sum_{j = 1}^L \*y^{(j)} \in \mathbb{R}^d .
\end{equation*}
Further, let $\*Q^{(3)} \bs{\Lambda}^{(3)} (\*Q^{(3)})^\top$ be the eigendecomposition of the symmetric positive semidefinite matrix $\*M^{(1)} + \bs{\mu}^{(4)} (\bs{\mu}^{(5)})^\top + \bs{\mu}^{(5)} (\bs{\mu}^{(4)})^\top + \*M^{(2)}$, where $\*Q^{(3)} \in \mathbb{O}_d$ and $\bs{\Lambda}^{(3)} \in \mathbb{D}_d$ has nonnegative diagonal entries. Further, we assume that the entries on the diagonal of $\bs{\Lambda}^{(3)}$ are sorted in non-ascending order.
One of the solutions $\bs{\theta}_\mathrm{SDE}^* = \{ \*A, \*B, \*C, D \}$ of $\min_{\bs{\theta}_\mathrm{SDE} \in \Theta_\mathrm{SDE}}$ $\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE})$ is as follows. $\*A \in \mathbb{D}_d$, for all $1 \leq l \leq d$:
\begin{equation*}
\*A_{l,l} = \frac{1}{16} \left( 1 - 2 \bs{\Lambda}^{(3)}_{l,l} - \sqrt{\left(2 \bs{\Lambda}^{(3)}_{l,l} + 1\right)^2 + 8 \bs{\Lambda}^{(3)}_{l,l}} \right),
\end{equation*}
$\*B = (\*I_d - 4 \*A)^{1/2} (\*Q^{(3)})^\top$, $\*C = -\frac12 \*I_d$, $D = \det (\*I_d - 4 \*A)^{1/4}$. Further, we have:
\begin{gather}
\overline{\mathcal{L}} (\bs{\theta}_\mathrm{SDE}^*; \mathcal{X}, \mathcal{Y}, \mathcal{T}_\mathrm{SDE}) = \sum_{l = 1}^d \biggl( \log (1 - 4 \*A_{l,l}) \nonumber \\
- \frac12 \log (1 - 8 \*A_{l,l}) + \left(1 + (1 - 8 \*A_{l,l})^{-1}\right) \bs{\Lambda}^{(3)}_{l,l} \biggr) \nonumber \\
- L^{-1} \sum_{i = 1}^L \| \*x^{(i)} \|^2 - L^{-1} \sum_{j = 1}^L \| \*y^{(j)} \|^2 \label{eq:sdevar}
\end{gather}
\end{theorem}
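Analogously, a NumPy sketch of the SDERF construction from Theorem \ref{th:sderf} (names are ours) reads:
\begin{verbatim}
import numpy as np

def sderf_parameters(X, Y):
    L, d = X.shape
    M1 = X.T @ X / L
    M2 = Y.T @ Y / L
    mu4, mu5 = X.mean(0), Y.mean(0)
    S = M1 + np.outer(mu4, mu5) + np.outer(mu5, mu4) + M2
    lam3, Q3 = np.linalg.eigh(S)
    order = np.argsort(lam3)[::-1]      # non-ascending, as required
    lam3, Q3 = lam3[order], Q3[:, order]
    A_diag = (1 - 2*lam3 - np.sqrt((2*lam3 + 1)**2 + 8*lam3)) / 16
    B = np.diag(np.sqrt(1 - 4*A_diag)) @ Q3.T
    C = -0.5 * np.eye(d)
    D = np.prod(1 - 4*A_diag) ** 0.25   # det(I - 4A)^{1/4}
    return A_diag, B, C, D
\end{verbatim}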
\section{Experiments}
\subsection{Variance Comparison}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{log_rel_vars.png}
\caption{Log-scale relative variances of the compared random feature mechanisms.}
\label{fig:vars}
\end{figure*}
\subsection{Non-parametric Classification}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{uci.png}
\caption{Non-parametric classification results on UCI datasets.}
\label{fig:uci}
\end{figure*}
\subsection{Natural Language Processing}
\begin{table*}[b]
\small
\centering
\caption{GLUE Dev results on base sized models.
Number of training examples is reported below each task.
MCC score is reported for CoLA, F1 score is reported for MRPC, Spearman correlation is reported for STS-B, and accuracy scores are reported for the other tasks. The \textbf{best} result is shown in bold and the \underline{second best} is underlined.}
\label{tab:glue_dev}
\begin{tabular}{@{}lcccccccc@{}}
\toprule
System & MNLI & QQP & QNLI & SST-2 & CoLA & STS-B & MRPC & RTE \\
& 392k & 363k & 108k & 67k & 8.5k & 5.7k & 3.5k & 2.5k \\
\midrule
Uptrain ELU \citep{pmlr-v119-katharopoulos20a} & 82.58 & 90.05 & 89.81 & 92.43 & 58.63 & {87.91} & 87.50 & 67.15 \\
Uptrain RELU \citep{performer} & 82.49 & 90.71 & 89.68 & 92.32 & 57.57 & \textbf{88.15} & 87.25 & 68.95\\
Uptrain FAVOR+ \citep{performer} & 77.69 & 86.69 & 89.41 & 91.80 & 54.87 & 83.78 & 80.73 & 66.19\\
Uptrain FAVOR++ \citep{crt} & {82.29} & {90.43} & {89.73} & {92.20} & {58.85} & 85.90 & \textbf{{88.73}} & 67.63\\
\midrule
Uptrain FAVOR\# & \textbf{82.69} & \textbf{90.68} & \textbf{90.01} & \textbf{92.53} & \textbf{59.33} & 85.48 & 87.99 & \textbf{69.68} \\
\bottomrule
\end{tabular}
\end{table*}
In recent years there has been increasing interest in using non-classical entangled states of continuous variable systems in applications of quantum information processing, communication and computation \cite{bra1}. In this respect, Gaussian states, in particular two-mode Gaussian states, play a key role since they can be easily created and controlled experimentally. Due to the unavoidable interaction with the environment, decoherence and dissipation must be taken into consideration in order to describe quantum information processes realistically. Decoherence and the dynamics of quantum entanglement in continuous variable open systems have been intensively studied in the last years \cite{oli,ser3,pra,dod1,ser4,avd,ben1,mch,man,jan,aphysa,aeur,arus1,paz2,arus2,ascri1}.
In this paper we study, in the framework of the theory of open systems based on completely positive quantum dynamical semigroups, the dynamics of continuous variable quantum entanglement and quantum discord of a subsystem consisting of two uncoupled bosonic modes (harmonic oscillators) interacting with a common thermal environment. We are interested in the correlating effect of the environment; therefore, we assume that the two modes are independent, i.e. they do not interact directly. The initial state of the subsystem is taken of Gaussian form, and the evolution under the quantum dynamical semigroup ensures that the Gaussian form of the state is preserved in time. In Refs. \cite{ascri4,aosid1,ascri2} we studied the evolution of entanglement and quantum discord of two identical harmonic oscillators interacting with a thermal environment for initial symmetric squeezed vacuum and squeezed thermal states of the subsystem. In this work we extend the previous analysis to the case of non-resonant bosonic modes and take non-symmetric squeezed thermal states as initial states. We show that in the case of an entangled initial squeezed thermal state, entanglement suppression (entanglement sudden death) takes place for all temperatures of the environment, including zero temperature. We analyze the time evolution of the Gaussian quantum discord, which is a measure of all quantum correlations in the bipartite state, including entanglement, and show that the discord decays asymptotically in time under the effect of the thermal bath. Before the suppression of the entanglement, the qualitative evolution of the discord is very similar to that of the entanglement. We also describe the time evolution of the classical correlations and of the quantum mutual information, which measures the total correlations of the quantum system.
\section{Equations of motion for two modes interacting with an environment}
We study the dynamics of a subsystem composed of two non-interacting bosonic modes in weak interaction with a thermal environment. In the axiomatic formalism based on completely positive quantum dynamical semigroups, the Markovian irreversible time evolution of an open system is described by the Kossakowski-Lindblad master equation \cite{rev,san}.
We are interested in the set of Gaussian states, therefore we introduce such quantum dynamical semigroups that preserve this set during time evolution of the system. The Hamiltonian of the two uncoupled non-resonant harmonic oscillators of identical mass $m$ and frequencies $\omega_1$ and $\omega_2$ is
\begin{eqnarray} H={1\over 2m}(p_x^2+p_y^2)+\frac{m}{2}(\omega_1^2 x^2+\omega_2^2 y^2),\end{eqnarray} where $x,y$ are the coordinates and $p_x,p_y$ are the momenta of the two quantum oscillators.
The equations of motion for the quantum correlations of the canonical observables $x,y$ and $p_x,p_y$ are the following ($\rm T$ denotes the transposed matrix) \cite{san}:
\begin{eqnarray}{d \sigma(t)\over
dt} = Y \sigma(t) + \sigma(t) Y^{\rm T}+2 D,\label{vareq}\end{eqnarray} where
\begin{equation} Y=\left(\matrix{ -\lambda&1/m&0 &0\cr -m\omega_1^2&-\lambda&0&
0\cr 0&0&-\lambda&1/m \cr 0&0&-m\omega_2^2&-\lambda}\right),~~
D=\left(\matrix{
D_{xx}& D_{xp_x} &D_{xy}& D_{xp_y} \cr D_{xp_x}&D_{p_x p_x}&
D_{yp_x}&D_{p_x p_y} \cr D_{xy}& D_{y p_x}&D_{yy}& D_{y p_y}
\cr D_{xp_y} &D_{p_x p_y}& D_{yp_y} &D_{p_y p_y}} \right),\end{equation}
and the diffusion coefficients $D_{xx}, D_{xp_x},$... and the dissipation constant $\lambda$ are real quantities. We introduced the following $4\times 4$ bimodal covariance matrix:
\begin{eqnarray}\sigma(t)=\left(\matrix{\sigma_{xx}(t)&\sigma_{xp_x}(t) &\sigma_{xy}(t)&
\sigma_{xp_y}(t)\cr \sigma_{xp_x}(t)&\sigma_{p_xp_x}(t)&\sigma_{yp_x}(t)
&\sigma_{p_xp_y}(t)\cr \sigma_{xy}(t)&\sigma_{yp_x}(t)&\sigma_{yy}(t)
&\sigma_{yp_y}(t)\cr \sigma_{xp_y}(t)&\sigma_{p_xp_y}(t)&\sigma_{yp_y}(t)
&\sigma_{p_yp_y}(t)}\right)=\left(\begin{array}{cc}A&C\\
C^{\rm T}&B \end{array}\right),\label{covar}
\end{eqnarray}
where $A$, $B$ and $C$ are $2\times 2$ Hermitian matrices. $A$ and $B$ denote the symmetric covariance matrices for the individual reduced one-mode states, while the matrix $C$ contains the cross-correlations between modes.
The elements of the covariance matrix are defined as $\sigma_{ij}=\langle R_iR_j+R_jR_i\rangle/2$, $i,j=1,\ldots,4$, with ${\bf R}=\{x,p_x,y,p_y\}$, which, up to local displacements, fully characterize any Gaussian state of a bipartite system.
The time-dependent solution of Eq. (\ref{vareq}) is given by \cite{san}
\begin{eqnarray}\sigma(t)= M(t)[\sigma(0)-\sigma(\infty)] M^{\rm T}(t)+\sigma(\infty),\label{covart}\end{eqnarray} where the matrix $M(t)=\exp(Yt)$ has to fulfill the condition $\lim_{t\to\infty} M(t) = 0.$ The values at infinity are obtained from the equation \begin{eqnarray}
Y\sigma(\infty)+\sigma(\infty) Y^{\rm T}=-2 D.\label{covarinf}\end{eqnarray}
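For illustration, Eqs. (\ref{covart}) and (\ref{covarinf}) can be evaluated with standard numerical routines. The following Python sketch (function names are ours) solves the continuous Lyapunov equation for $\sigma(\infty)$ and propagates $\sigma(t)$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def sigma_at_time(Y, D, sigma0, t):
    # Y: 4x4 drift matrix and D: 4x4 diffusion matrix defined above
    sigma_inf = solve_continuous_lyapunov(Y, -2.0*D)  # Y s + s Y^T = -2D
    M = expm(Y * t)                                   # M(t) = exp(Y t)
    return M @ (sigma0 - sigma_inf) @ M.T + sigma_inf
\end{verbatim}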
\section{Dynamics of quantum correlations}
\subsection{Time evolution of entanglement}
In order to quantify the degree of entanglement of the two-mode states it is appropriate to use the logarithmic negativity. For a Gaussian density operator, the logarithmic negativity is completely defined by the symplectic spectrum of the partial transpose of the covariance matrix. It is given by
$E_N={\rm max}\{0,-\log_2 2\tilde\nu_-\},$
where $\tilde\nu_-$ is the smallest of the two symplectic eigenvalues of the partial transpose $\tilde{{\sigma}}$ of the two-mode covariance matrix $\sigma$ \cite{ser4}:
\begin{eqnarray}2\tilde{\nu}_{\mp}^2 = \tilde{\Delta}\mp\sqrt{\tilde{\Delta}^2
-4\det\sigma}
\end{eqnarray}
and $ \tilde\Delta$ is the symplectic invariant (seralian), given by
$ \tilde\Delta=\det A+\det B-2\det C.$
In our model, the logarithmic negativity is calculated as \cite{aijqi,aosid} \begin{eqnarray}E_N(t)={\rm max}\{0,-\frac{1}{2}\log_2[4g(\sigma(t))]\}, \end{eqnarray} where \begin{eqnarray}g(\sigma(t))=\frac{1}{2}(\det A +\det
B)-\det C\nonumber\\
-\left({\left[\frac{1}{2}(\det A+\det B)-\det
C\right]^2-\det\sigma(t)}\right)^{1/2}.\end{eqnarray}
It determines the strength of entanglement for $E_N(t)>0,$ and if $E_N(t)=0,$ then the state is separable.
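Numerically, $E_N(t)$ follows directly from the $2\times 2$ blocks of the covariance matrix (\ref{covar}); a short Python sketch (names are ours) reads:
\begin{verbatim}
import numpy as np

def logarithmic_negativity(sigma):
    # sigma: 4x4 covariance matrix with blocks A, B, C
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    half = 0.5*(np.linalg.det(A) + np.linalg.det(B)) - np.linalg.det(C)
    g = half - np.sqrt(half**2 - np.linalg.det(sigma))
    return max(0.0, -0.5*np.log2(4.0*g))
\end{verbatim}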
We assume that the initial Gaussian state is a two-mode squeezed thermal state, with the covariance matrix of the form \cite{mar}
\begin{eqnarray}\sigma_{st}(0)=\frac{1}{2}\left(\matrix{a&0&c&0\cr
0&a&0&-c\cr
c&0&b&0\cr
0&-c&0&b}\right),\label{ini1} \end{eqnarray}
with the matrix elements given by
\begin{eqnarray}a=n_1 \cosh^2 r + n_2 \sinh^2 r + \frac{1}{2} \cosh 2r,\\
b=n_1 \sinh^2 r + n_2 \cosh^2 r + \frac{1}{2} \cosh 2r,\\
c=\frac{1}{2}(n_1 + n_2 + 1) \sinh 2r,\label{ini2}
\end{eqnarray}
where $n_1,n_2$ are the average number of thermal photons associated with the two modes and $r$ denotes the squeezing parameter.
In the particular case $n_1=0$ and $n_2=0$, (\ref{ini1}) becomes the covariance matrix of the two-mode squeezed vacuum state. A two-mode squeezed thermal state is entangled when the squeezing parameter $r$ satisfies the inequality $r>r_s$ \cite{mar},
where \begin{eqnarray} \cosh^2 r_s=\frac{(n_1+1)(n_2+1)}{ n_1+n_2+1}. \end{eqnarray}
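For completeness, the initial covariance matrix (\ref{ini1}) and the separability threshold $r_s$ can be generated as follows (Python sketch; names are ours):
\begin{verbatim}
import numpy as np

def squeezed_thermal_covariance(n1, n2, r):
    a = n1*np.cosh(r)**2 + n2*np.sinh(r)**2 + 0.5*np.cosh(2*r)
    b = n1*np.sinh(r)**2 + n2*np.cosh(r)**2 + 0.5*np.cosh(2*r)
    c = 0.5*(n1 + n2 + 1.0)*np.sinh(2*r)
    return 0.5*np.array([[a, 0,  c,  0],
                         [0, a,  0, -c],
                         [c, 0,  b,  0],
                         [0, -c, 0,  b]])

def r_threshold(n1, n2):
    # the initial state is entangled iff r > r_s
    return np.arccosh(np.sqrt((n1 + 1.0)*(n2 + 1.0)/(n1 + n2 + 1.0)))
\end{verbatim}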
We suppose that the asymptotic state of the considered open system is a Gibbs state corresponding to two independent bosonic modes in thermal equilibrium at temperature $T.$ Then the quantum diffusion coefficients have the following form (we put from now on $\hbar=1$) \cite{rev}:
\begin{eqnarray}m\omega_1 D_{xx}=\frac{D_{p_xp_x}}{m\omega_1}=\frac{\lambda}{2}\coth\frac{\omega_1}{2kT},\nonumber\\
m\omega_2 D_{yy}=\frac{D_{p_yp_y}}{m\omega_2}=\frac{\lambda}{2}\coth\frac{\omega_2}{2kT},\label{envcoe}\\
D_{xp_x}=D_{yp_y}=D_{xy}=D_{p_xp_y}=D_{xp_y}=D_{yp_x}=0.\nonumber\end{eqnarray}
The evolution of entangled initial squeezed thermal states with the covariance matrix given by Eq. (\ref{ini1}) is illustrated in Fig. 1, where we represent the dependence of the logarithmic negativity $E_N(t)$ on time $t$ and temperature $T$ for the case of an initial non-symmetric Gaussian state ($a\neq b$). For all temperatures $T,$ including zero temperature, at a certain finite moment of time, which depends on $T,$ $E_N(t)$ becomes zero and therefore the state becomes separable. This is the so-called phenomenon of entanglement sudden death. It is in contrast to quantum decoherence, during which the loss of quantum coherence is usually gradual \cite{aphysa,arus}. One can also show that dissipation favors the phenomenon of entanglement sudden death: with increasing dissipation parameter $\lambda,$ the entanglement suppression happens earlier. The same qualitative behaviour of the time evolution of entanglement was obtained previously \cite{aosid1,ascri3} in the particular case $n_1=0$ and $n_2=0$, corresponding to an initial two-mode squeezed vacuum state, and in the case of symmetric initial squeezed thermal states.
Comparing the present results with those obtained in the previous paper \cite{ascri2}, one can assert that the asymmetry ($a\neq b$) of the initial Gaussian state favors the suppression of entanglement. The entanglement of symmetric ($a=b$) initial squeezed thermal states is the most robust under the influence of the environment. The non-resonant character of the two modes has an even stronger influence on the entanglement: by increasing the ratio of the frequencies of the two modes, the entanglement sudden death happens earlier in time. Entanglement survives longest when the modes are resonant ($\omega_1=\omega_2$). This effect due to the non-resonance of the modes is stronger for small values of the frequencies and, for the same frequency ratio, it diminishes as the frequencies increase.
In our model, in which we suppose that the asymptotic state of the considered open system is a Gibbs state corresponding to two independent bosonic modes in thermal equilibrium, a separable initial state remains separable in time, and it is not possible to generate entanglement. This is in contrast with the possibility of entanglement generation starting, for instance, with a separable state in the case of two non-interacting two-level systems immersed in a common bath \cite{ben2}. At the same time, we recall that we have previously studied \cite{aosid,ascri} the evolution of the entanglement of two identical harmonic oscillators interacting with a general environment, characterized by general diffusion and dissipation coefficients, and we obtained that, for separable initial states and for definite values of these coefficients, entanglement generation or a periodic generation and collapse of entanglement takes place. In discussing the entanglement decay, it is interesting to mention that models have been elaborated to realize quantum feedback control of continuous variable entanglement for a system consisting of two interacting bosonic modes plunged into an environment, based on a local technique \cite{man1} or on a nonlocal homodyne measurement \cite{man2}.
\begin{figure}
\resizebox{0.4\columnwidth}{!}
{
\includegraphics{sin1.pdf}
}
\caption{Logarithmic negativity $E_N$ versus time $t$ and temperature $T$ for an entangled initial non-symmetric squeezed thermal state with squeezing parameter $r=3$, $n_1=3, n_2=1$ and $\lambda=0.1, \omega_1=1, \omega_2=2.$ We take $m=\hbar=k=1.$
}
\label{fig:1}
\end{figure}
\subsection{Gaussian quantum discord}
Recent studies have shown that separable states, usually considered as being classically correlated, might also contain quantum correlations. Quantum discord was introduced \cite{zur,oll} as a measure of all quantum correlations in a bipartite state, including -- but not restricted to -- entanglement. Quantum discord has been defined as the difference
between two quantum analogues of classically equivalent expressions of the mutual information, which is a measure of total correlations in a quantum state.
For an arbitrary bipartite state $\rho_{12},$ the total correlations are expressed by quantum mutual information
\begin{eqnarray}
I(\rho_{12})=\sum_{i=1,2} S(\rho_{i})-S(\rho_{12}),
\end{eqnarray} where $\rho_i$ represents the reduced density matrix of subsystem $i$ and $S(\rho)= - {\rm Tr}(\rho \ln \rho)$
is the von Neumann entropy. A measure of bipartite classical correlations $C(\rho_{12})$ in the bipartite quantum state $\rho_{12}$ based on a complete set of local projectors $\{\Pi_{2}^k\}$ on the subsystem 2 can be given by
\begin{eqnarray}
C(\rho_{12})=S(\rho_{1})-{\inf}_{\{\Pi_{2}^k\}}\{S(\rho_{1|2})\},
\end{eqnarray}
where $S(\rho_{1|2}) =\sum_{k}p^k S(\rho_{1}^k)$ is the conditional entropy of subsystem 1 and $\inf\{S(\rho_{1|2})\}$
represents the minimal value of the entropy with respect to a complete set of local measurements $\{\Pi_{2}^k\}.$
Here, $p^k$ is the measurement probability for the $k$th local projector and $\rho_{1}^k$ denotes the reduced state of subsystem $1$ after the local measurements.
Then the quantum discord is defined by
\begin{eqnarray}
D(\rho_{12})=I(\rho_{12})-C(\rho_{12}).
\end{eqnarray}
Originally the quantum discord was defined and evaluated mainly for finite dimensional systems. Recently \cite{par,ade} the notion of discord has been extended to the domain of
continuous variable systems, in particular to the analysis of bipartite systems described by two-mode Gaussian states.
Closed formulas have been derived for bipartite thermal squeezed states \cite{par} and for all two-mode Gaussian states \cite{ade}.
The Gaussian quantum discord of a general two-mode Gaussian state $\rho_{12}$ can be defined as the quantum discord where the conditional entropy is restricted to generalized Gaussian positive operator valued measurements (POVM) on the mode 2, and in terms of symplectic invariants it is given by (the symmetry between the two modes 1 and 2 is broken) \cite{ade}
\begin{eqnarray}
D=f(\sqrt{\beta})-f(\nu_-) - f(\nu_+) + f(\sqrt{\varepsilon}),
\label{disc}
\end{eqnarray}
where \begin{eqnarray}f(x) =\frac{x+1}{2} \log\frac{x+1}{2} -\frac{x-1}{2} \log\frac{x-1}{2},\end{eqnarray}
\begin{eqnarray}\label{infdet}
\varepsilon=
\left\{ \begin{array}{l}
\displaystyle{\frac{2 \gamma^2+(\beta-1)(\delta-\alpha)
+2 |\gamma| \sqrt{\gamma^2+(\beta-1) (\delta-\alpha)}}{(\beta-1)^{2}}},\\
\qquad\quad \hbox{if}~~(\delta-\alpha\beta)^2 \le (\beta+1)\gamma^2 (\alpha +\delta),\\
\displaystyle{\frac{\alpha\beta-\gamma^2+\delta-\sqrt{\gamma^4+(\delta-\alpha\beta)^{2}-
2\gamma^2(\delta+\alpha\beta)}}{2\beta}},\\
\qquad\quad \hbox{otherwise,}
\end{array} \right.
\end{eqnarray}
\begin{eqnarray}\alpha=4\det A,~~~\beta=4\det B,~~~\gamma=4\det C,~~~\delta=16\det\sigma,\end{eqnarray}
and $\nu_\mp$ are the symplectic eigenvalues of the state, given by
\begin{eqnarray}2{\nu}_{\mp}^2 ={\Delta}\mp\sqrt{{\Delta}^2
-4\det\sigma},
\end{eqnarray}
where
$\Delta=\det A+\det B+2\det C.$
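The expressions (\ref{disc})--(\ref{infdet}) are straightforward to evaluate. In the Python sketch below (names are ours) we work with the scaled invariants $\alpha,\beta,\gamma,\delta$ throughout, so that the symplectic eigenvalues passed to $f$ equal unity for the vacuum, consistently with the argument $\sqrt{\beta}$ in Eq. (\ref{disc}); this normalization choice is our assumption:
\begin{verbatim}
import numpy as np

def f(x):
    # the function f defined in the text; the pure-state limit
    # f(1) = 0 is guarded explicitly
    if x <= 1.0:
        return 0.0
    return 0.5*(x+1)*np.log(0.5*(x+1)) - 0.5*(x-1)*np.log(0.5*(x-1))

def gaussian_discord(sigma):
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    alpha = 4.0*np.linalg.det(A); beta  = 4.0*np.linalg.det(B)
    gamma = 4.0*np.linalg.det(C); delta = 16.0*np.linalg.det(sigma)
    Delta = alpha + beta + 2.0*gamma
    nu_m = np.sqrt((Delta - np.sqrt(Delta**2 - 4.0*delta))/2.0)
    nu_p = np.sqrt((Delta + np.sqrt(Delta**2 - 4.0*delta))/2.0)
    if (delta - alpha*beta)**2 <= (beta + 1.0)*gamma**2*(alpha + delta):
        eps = (2.0*gamma**2 + (beta - 1.0)*(delta - alpha)
               + 2.0*abs(gamma)*np.sqrt(gamma**2 + (beta - 1.0)
                                        *(delta - alpha))) / (beta - 1.0)**2
    else:
        eps = (alpha*beta - gamma**2 + delta
               - np.sqrt(gamma**4 + (delta - alpha*beta)**2
                         - 2.0*gamma**2*(delta + alpha*beta))) / (2.0*beta)
    return f(np.sqrt(beta)) - f(nu_m) - f(nu_p) + f(np.sqrt(eps))
\end{verbatim}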
\begin{figure}
\resizebox{0.4\columnwidth}{!}
{
\includegraphics{sin2.pdf}
}
\caption{Gaussian quantum discord $D$ versus time $t$ and temperature $T$ for an entangled initial non-symmetric squeezed thermal state with squeezing parameter $r=3$, $n_1=3, n_2=1$ and $\lambda=0.1, \omega_1=1, \omega_2=2.$ We take $m=\hbar=k=1.$
}
\label{fig:2}
\end{figure}
The evolution of the Gaussian quantum discord $D$ is illustrated in Figure 2, where we represent the dependence of $D$ on time $t$ and temperature $T$ for an entangled initial non-symmetric Gaussian state, taken of the form of a two-mode squeezed thermal state (\ref{ini1}), for values of the parameters which satisfy the first condition in formula (\ref{infdet}) for all times. The Gaussian discord has nonzero values for all finite times, and this fact certifies the existence of non-classical correlations in two-mode Gaussian states, either separable or entangled. The Gaussian discord decreases asymptotically in time, in contrast to the logarithmic negativity, whose evolution leads to a sudden suppression of the entanglement. For entangled initial states the Gaussian discord remains strictly positive in time, and in the limit of infinite time it tends asymptotically to zero, corresponding to the thermal product (separable) state, with no correlations at all.
From Fig. 2 we notice that, in agreement with the general properties of the Gaussian quantum discord \cite{ade}, the states can be either separable or entangled for $D\le 1$ and all the states above the threshold $D=1$ are entangled. We also notice that the decay of quantum discord is stronger when the temperature $T$ is increasing.
It should be remarked that the decay of the quantum discord is very similar to that of the entanglement before the time of the sudden death of entanglement. Near the threshold of zero logarithmic negativity ($E_N = 0$), the nonzero values of the discord quantify the non-classical correlations of separable mixed states, and it has been suggested that this fact could enable certain tasks in quantum computation \cite{yut}.
The discord increases with the squeezing parameter $r$, and it decreases with increasing ratio of the frequencies $\omega_1$ and $\omega_2$ of the two modes and with increasing difference of the parameters $a$ and $b.$
\begin{figure}
\resizebox{0.4\columnwidth}{!}
{
\includegraphics{sin3.pdf}
}
\caption{Degree of classical correlations $C$ versus time $t$ and temperature $T$ for an entangled initial non-symmetric squeezed thermal state with squeezing parameter $r=3$, $n_1=3, n_2=1$ and $\lambda=0.1, \omega_1=1, \omega_2=2.$ We take $m=\hbar=k=1.$
}
\label{fig:3}
\end{figure}
\subsection{Classical correlations and quantum mutual information}
The measure of classical correlations for a general two-mode Gaussian state $\rho_{12}$ can also be calculated and it is given by \cite{ade}
\begin{eqnarray}
C=f(\sqrt{\alpha}) - f(\sqrt{\varepsilon}),
\label{clas}
\end{eqnarray}
while the expression of the quantum mutual information, which measures the total correlations, is given by
\begin{eqnarray}
I=f(\sqrt{\alpha}) + f(\sqrt{\beta}) -f(\nu_-) - f(\nu_+).
\label{mut}
\end{eqnarray}
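With the helpers of the previous sketch, Eqs. (\ref{clas}) and (\ref{mut}) can be evaluated as follows (Python; same normalization assumption as above):
\begin{verbatim}
def quantum_mutual_information(sigma):
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    alpha = 4.0*np.linalg.det(A); beta  = 4.0*np.linalg.det(B)
    gamma = 4.0*np.linalg.det(C); delta = 16.0*np.linalg.det(sigma)
    Delta = alpha + beta + 2.0*gamma
    nu_m = np.sqrt((Delta - np.sqrt(Delta**2 - 4.0*delta))/2.0)
    nu_p = np.sqrt((Delta + np.sqrt(Delta**2 - 4.0*delta))/2.0)
    return f(np.sqrt(alpha)) + f(np.sqrt(beta)) - f(nu_m) - f(nu_p)

def classical_correlations(sigma):
    # C = I - D, equivalently f(sqrt(alpha)) - f(sqrt(eps))
    return quantum_mutual_information(sigma) - gaussian_discord(sigma)
\end{verbatim}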
In Figs. 3 and 4 we illustrate the evolution of the classical correlations $C$ and, respectively, the quantum mutual information $I,$ as functions of time $t$ and temperature $T$ for an entangled initial Gaussian state, taken of the form of a two-mode squeezed thermal state (\ref{ini1}). These two quantities manifest a qualitative behaviour similar to that of the Gaussian discord: they have nonzero values for all finite times, and in the limit of infinite time they tend asymptotically to zero, corresponding to the thermal product (separable) state, with no correlations at all. One can also see that the classical correlations and the quantum mutual information decrease with increasing temperature of the thermal bath.
One can show that the classical correlations and the quantum mutual information increase with increasing squeezing parameter $r$ and increasing difference of the parameters $a$ and $b.$ At the same time, the classical correlations increase with the ratio of the frequencies $\omega_1$ and $\omega_2$ of the two modes, while the quantum mutual information decreases as this ratio increases.
For comparison, all these quantities are also represented on the same plot in Fig. 4. In the considered case the value of the classical correlations is larger than that of the quantum correlations, represented by the Gaussian quantum discord.
\begin{figure}
\resizebox{0.4\columnwidth}{!}
{
\includegraphics{sin4.pdf}
}
\caption{Quantum mutual information $I$ versus time $t$ and temperature $T$ for an entangled initial non-symmetric squeezed thermal state with squeezing parameter $r=3$, $n_1=3, n_2=1$ and $\lambda=0.1, \omega_1=1, \omega_2=2.$ We take $m=\hbar=k=1.$ There are also represented the Gaussian quantum discord and classical correlations.
}
\label{fig:4}
\end{figure}
\section{Summary}
We investigated the Markovian dynamics of quantum correlations for a subsystem composed of two non-interacting bosonic modes embedded in a thermal bath. We have analyzed the influence of the environment on the dynamics of quantum entanglement and quantum discord for Gaussian initial states. We have described the time evolution of the logarithmic negativity in terms of the covariance matrix for non-symmetric squeezed thermal states, for the case when the asymptotic state of the considered open system is a Gibbs state corresponding to two independent quantum harmonic oscillators in thermal equilibrium. The dynamics of the quantum entanglement strongly depends on the initial states and the parameters characterizing the environment (dissipation coefficient and temperature). For an entangled initial squeezed thermal state, entanglement suppression (entanglement sudden death) takes place for all temperatures of the environment, including zero temperature. The time when the entanglement is suppressed decreases with increasing temperature and dissipation.
We also described the time evolution of the Gaussian quantum discord, which is a measure of all quantum correlations in the bipartite state, including entanglement.
The values of the quantum discord decrease asymptotically in time. This is in contrast to the sudden death of entanglement. The time evolution of the quantum discord is very similar to that of the entanglement before the sudden suppression of the entanglement. The quantum discord decreases with increasing temperature. After the sudden death of entanglement, the nonzero values of the discord manifest the existence of quantum correlations for separable mixed states. We also described the time evolution of the classical correlations and of the quantum mutual information, which measures the total correlations of the quantum system.
\ack
The author thanks the referee for his useful suggestions and recommendations. The author acknowledges the financial support received from the Romanian Ministry of Education and Research, through the Projects CNCS-UEFISCDI PN-II-ID-PCE-2011-3-0083 and PN 09 37 01 02/2010.
\section*{References}
\section{Introduction}\label{sec1}
The generalized-$\alpha$ method is a family of time integration schemes that, for a particular choice of method parameters, is second-order accurate, is unconditionally stable, and exhibits an optimal combination of accuracy in the low-frequency range and damping in the high-frequency range. The generalized-$\alpha$ method was first introduced for second-order initial value problems by Chung and Hulbert \cite{chung1994family}, and it was later extended to first-order initial value problems by Jansen, Whiting, and Hulbert \cite{jansen2000generalized}. The generalized-$\alpha$ method is typically combined with a finite element spatial discretization in order to arrive at a fully-discrete method for the numerical solution of partial differential equations. This is a particularly popular approach for structural mechanics applications \cite{hulbert1996explicit,kuhl1999energy,kuhl1999generalized,arnold2007convergence,erlicher2002analysis}, though it is often used for fluid mechanics \cite{bazilevs2007variational,gomez2010isogeometric,modirkhazeni2016algebraic,bayram2020variational,codoni2021stabilized,liu2021note}, fluid-structure interaction \cite{dettmer2006computational,bazilevs2008isogeometric}, and magnetohydrodynamics \cite{gleason2022divergence} applications as well. In the context of fluid mechanics, the generalized-$\alpha$ method is used to time integrate finite element spatial discretizations of mass, momentum, and energy differential conservation laws. However, this results in a fully-discrete method that does not admit discrete balance laws for mass, momentum, and energy with respect to the temporal mesh, even if the underlying spatial discretization is conservative. Fortunately, we show in this note that the resulting fully-discrete method does admit discrete balance laws for mass, momentum, and energy with respect to a shifted temporal mesh if the temporal mesh is uniform and if the parameters of the generalized-$\alpha$ method are chosen so that it is second-order accurate. To arrive at this result, we invoke a new interpretation of the second-order accurate generalized-$\alpha$ method. Namely, it can be interpreted as an implicit midpoint method on a shifted temporal mesh when the temporal mesh is uniform.
An outline of this short communication is as follows. In Section \ref{sec:gen_alpha}, we show how the generalized-$\alpha$ method for first-order initial-value problems can be viewed as an implicit midpoint method on a shifted temporal mesh when it is second-order accurate and the temporal mesh is uniform. In Section \ref{sec:advection_diffusion}, we use this knowledge to show that application of second-order accurate generalized-$\alpha$ time integration to a conservative stabilized or unstabilized Galerkin discretization of a model advection-diffusion problem yields a fully-discrete method harboring a discrete balance law when the temporal mesh is uniform. In Section \ref{sec:conservation_laws}, we show the same is true for general systems of differential conservation laws provided the conservation variables are themselves discretized, and in Section \ref{sec:nonconservative variables}, we show how to modify the generalized-$\alpha$ method to arrive at a conservative fully-discrete method when nonconservation variables are discretized instead. Finally, in Section \ref{sec:conclusion}, we provide concluding remarks.
\section{An alternative form of the generalized-$\alpha$ method}\label{sec:gen_alpha}
Consider the following first-order initial-value problem: Find $\mathrm{U}: [0,\infty) \rightarrow \mathbb{R}^m$ such that
\begin{equation}
\mathrm{R}\left(\dot{\mathrm{U}}(t),\mathrm{U}(t),t\right) = \mathrm{0}
\end{equation}
for all $t \in (0,\infty)$ and
\begin{equation}
\mathrm{U}(0) = \mathrm{U}_0
\end{equation}
where $m \in \mathbb{N}$ is the size of the solution vector $\mathrm{U}$, $\dot{\mathrm{U}}$ is the time derivative of $\mathrm{U}$, $\mathrm{U}_0 \in \mathbb{R}^m$ is the initial condition of $\mathrm{U}$, and $\mathrm{R}: \mathbb{R}^m \times \mathbb{R}^m \times (0,\infty) \rightarrow \mathbb{R}^m$ encodes the ordinary differential equations associated with the initial-value problem. In the generalized-$\alpha$ method, the solution vector $\mathrm{U}$ is approximated on a temporal mesh $\left\{ t_n \right\}_{n \in \mathbb{N}}$ of increasing times with $t_1 = 0$. In particular, given the approximations $\dot{\mathrm{U}}_n$ and $\mathrm{U}_n$ of $\dot{\mathrm{U}}$ and $\mathrm{U}$ at the $n^{\text{th}}$ time $t_n$, the generalized-$\alpha$ method involves solving the following algebraic system of equations for the approximations $\dot{\mathrm{U}}_{n+1}$ and $\mathrm{U}_{n+1}$ of $\dot{\mathrm{U}}$ and $\mathrm{U}$ at the $(n+1)^{\text{st}}$ time $t_{n+1}$\cite{jansen2000generalized}:
\begin{align}
\mathrm{R}\left(\dot{\mathrm{U}}_{n+\alpha_m},\mathrm{U}_{n+\alpha_f},t_{n+\alpha_f}\right) = \mathrm{0} \label{eq:gen_alpha}
\end{align}
and
\begin{align}
\mathrm{U}_{n+1} = \mathrm{U}_{n} + \Delta t_n \left( (1-\gamma) \dot{\mathrm{U}}_{n} + \gamma \dot{\mathrm{U}}_{n+1} \right),
\end{align}
where
\begin{align}
\dot{\mathrm{U}}_{n+\alpha_m} &:= (1-\alpha_m) \dot{\mathrm{U}}_{n} + \alpha_m \dot{\mathrm{U}}_{n+1}, \\
\mathrm{U}_{n+\alpha_f} &:= (1-\alpha_f) \mathrm{U}_{n} + \alpha_f \mathrm{U}_{n+1},
\end{align}
$\Delta t_n := t_{n+1} - t_n$ is the time-step size, $t_{n+\alpha_f} := t_n + \alpha_f \Delta t_n$, and $\gamma$, $\alpha_m$, and $\alpha_f$ are free parameters. The term $\dot{\mathrm{U}}_{n+\alpha_m}$ is often interpreted as an approximation of $\dot{\mathrm{U}}$ at time $t_{n+\alpha_m} := t_n + \alpha_m \Delta t_n$, while the term $\mathrm{U}_{n+\alpha_f}$ is often interpreted as an approximation of $\mathrm{U}$ at time $t_{n+\alpha_f}$. The generalized-$\alpha$ method is second-order accurate if and only if
\begin{equation}
\gamma = \frac{1}{2} + \alpha_m - \alpha_f, \label{eq:gamma}
\end{equation}
and it is unconditionally stable if and only if
\begin{equation}
\alpha_m \geq \alpha_f \geq \frac{1}{2}.
\end{equation}
If Equation \eqref{eq:gamma} holds, then
\begin{align}
\mathrm{U}_{n+1} &= \mathrm{U}_{n} + \Delta t_n \left( \left(\frac{1}{2} - \alpha_m + \alpha_f\right) \dot{\mathrm{U}}_{n} + \left(\frac{1}{2} + \alpha_m - \alpha_f\right) \dot{\mathrm{U}}_{n+1} \right) \nonumber \\
&= \mathrm{U}_{n} + \Delta t_n \left( (1-\alpha_m) \dot{\mathrm{U}}_{n} + \alpha_m \dot{\mathrm{U}}_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \dot{\mathrm{U}}_{n} - \left(\alpha_f - \frac{1}{2}\right) \dot{\mathrm{U}}_{n+1} \right) \nonumber \\
&= \mathrm{U}_{n} + \Delta t_n \left( \dot{\mathrm{U}}_{n+\alpha_m}+ \left(\alpha_f - \frac{1}{2}\right) \dot{\mathrm{U}}_{n} - \left(\alpha_f - \frac{1}{2}\right) \dot{\mathrm{U}}_{n+1} \right).
\end{align}
Thus, if the generalized-$\alpha$ method is second-order accurate, then
\begin{align}
\dot{\mathrm{U}}_{n+\alpha_m} &= \frac{\mathrm{U}^+_{n+\alpha_f} - \mathrm{U}^-_{n+\alpha_f}}{\Delta t_n} \label{eq:central_diff}
\end{align}
where
\begin{align}
\mathrm{U}^+_{n+\alpha_f} &:= \mathrm{U}_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \Delta t_n \dot{\mathrm{U}}_{n+1}
\end{align}
and
\begin{align}
\mathrm{U}^-_{n+\alpha_f} &:= \mathrm{U}_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t_n \dot{\mathrm{U}}_{n}.
\end{align}
Note that $\mathrm{U}^+_{n+\alpha_f}$ may be viewed as an approximation of $\mathrm{U}$ at time $t^+_{n+\alpha_f} := t_{n+1} + \left(\alpha_f-1/2\right) \Delta t_n$ due to the Taylor series
\begin{align}
\mathrm{U}\left(t^+_{n+\alpha_f}\right) &= \mathrm{U}(t_{n+1}) + \left(\alpha_f - \frac{1}{2}\right) \Delta t_n \dot{\mathrm{U}}(t_{n+1}) + O(\Delta t_n^2),
\end{align}
while $\mathrm{U}^-_{n+\alpha_f}$ may be viewed as an approximation of $\mathrm{U}$ at time $t^-_{n+\alpha_f} := t_n + \left(\alpha_f-1/2\right) \Delta t_n$ due to the Taylor series
\begin{align}
\mathrm{U}\left(t^-_{n+\alpha_f} \right) &= \mathrm{U}(t_{n}) + \left(\alpha_f - \frac{1}{2}\right) \Delta t_n \dot{\mathrm{U}}(t_{n}) + O(\Delta t_n^2).
\end{align}
Consequently, while $\dot{\mathrm{U}}_{n+\alpha_m}$ is often interpreted as an approximation of $\dot{\mathrm{U}}$ at time $t_{n+\alpha_m}$, Equation \eqref{eq:central_diff} indicates it may instead be seen as a central difference approximation of $\dot{\mathrm{U}}$ at time $t_{n+\alpha_f}$ when the generalized-$\alpha$ method is second-order accurate.
For a non-uniform temporal mesh, we generally have that $\mathrm{U}^+_{n+\alpha_f} \neq \mathrm{U}^-_{(n+1)+\alpha_f}$ and $t^+_{n+\alpha_f} \neq t^-_{(n+1)+\alpha_f}$. However, for a uniform temporal mesh, we have $\mathrm{U}^+_{n+\alpha_f} = \mathrm{U}^-_{(n+1)+\alpha_f}$ and $t^+_{n+\alpha_f} = t^-_{(n+1)+\alpha_f}$ for all $n \in \mathbb{N}$. Defining in this case
\begin{align}
\mathrm{U}_{n+\alpha_f-1/2} &:= \mathrm{U}_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{\mathrm{U}}_{n}
\end{align}
for $n \in \mathbb{N}$ where $\Delta t$ is the uniform time-step size, we have that
\begin{align}
\dot{\mathrm{U}}_{n+\alpha_m} &= \frac{\mathrm{U}_{n+\alpha_f+1/2} - \mathrm{U}_{n+\alpha_f-1/2}}{\Delta t} \label{eq:gold}
\end{align}
for $n \in \mathbb{N}$ when the generalized-$\alpha$ method is second-order accurate. The terms $\left\{ \mathrm{U}_{n+\alpha_f-1/2} \right\}_{n \in \mathbb{N}}$ may be viewed as approximations of $\mathrm{U}$ on the shifted temporal mesh $\left\{ t_{n+\alpha_f-\frac{1}{2}} \right\}_{n \in \mathbb{N}}$ where
\begin{align}
t_{n+\alpha_f-\frac{1}{2}} := t_n + \left(\alpha_f - \frac{1}{2}\right) \Delta t
\end{align}
for $n \in \mathbb{N}$, and it can be shown that $\left\{ \mathrm{U}_{n+\alpha_f-1/2} \right\}_{n \in \mathbb{N}}$ are in fact second-order approximations of $\left\{ \mathrm{U}\left(t_{n+\alpha_f-1/2}\right) \right\}_{n \in \mathbb{N}}$ when $\left\{ \mathrm{U}_{n} \right\}_{n \in \mathbb{N}}$ are second-order approximations of $\left\{ \mathrm{U}\left(t_{n}\right) \right\}_{n \in \mathbb{N}}$ and $\left\{ \dot{\mathrm{U}}_{n} \right\}_{n \in \mathbb{N}}$ are first-order approximations of $\left\{ \dot{\mathrm{U}}\left(t_{n}\right) \right\}_{n \in \mathbb{N}}$. Thus, when the generalized-$\alpha$ method is second-order accurate and the temporal mesh is uniform, it may be viewed as an implicit midpoint method on a shifted temporal mesh (see Figure \ref{fig:timeMesh}). One might then expect the generalized-$\alpha$ method to inherit the conservation properties of the implicit midpoint method. We demonstrate later that this is indeed the case.
\begin{figure}[t!]
\begin{center}
\input{temporalMesh}
\end{center}
\caption{Visual representation of the shifted temporal mesh on which the generalized-$\alpha$ method may be interpreted as an implicit midpoint method when it is second-order accurate and the original temporal mesh is uniform.}
\label{fig:timeMesh}
\end{figure}
In practice, the generalized-$\alpha$ parameters are typically chosen to be equal to
\begin{align}
\alpha_m &= \frac{1}{2} \left( \frac{3-\rho_{\infty}}{1+\rho_{\infty}} \right), \label{eq:alpha_m} \\
\alpha_f &= \frac{1}{1+\rho_{\infty}}, \label{eq:alpha_f}
\end{align}
where $\rho_{\infty} \in [0,1]$. For this choice of parameters, the generalized-$\alpha$ method exhibits an optimal combination of accuracy in the low-frequency range and numerical damping in the high-frequency range for a linear model problem\cite{jansen2000generalized}. The parameter $\rho_{\infty}$ then corresponds to the spectral radius of the amplification matrix in the limit of an infinite time step, and a choice of $\rho_{\infty} = 0$ annihilates the highest frequency in one step while a choice of $\rho_{\infty} = 1$ preserves the highest frequency. As $\rho_{\infty}$ shifts from $\rho_{\infty} = 1$ to $\rho_{\infty} = 0$, $\alpha_f$ shifts from $\alpha_f = \frac{1}{2}$ to $\alpha_f = 1$, and as $\alpha_f$ shifts closer to $\alpha_f = 1$, the generalized-$\alpha$ method exhibits increased numerical damping. In light of our interpretation of the generalized-$\alpha$ method as an implicit midpoint method on a shifted temporal mesh, we can thus view the generalized-$\alpha$ method with $\alpha_f > \frac{1}{2}$ as ``upwinding in time''.
If Equations \eqref{eq:alpha_m} and \eqref{eq:alpha_f} hold, it can be shown that second-order accuracy dictates $\gamma = \alpha_f$. It then follows that the governing equations of the generalized-$\alpha$ method can be written entirely in terms of $\alpha_f$. In particular, if the generalized-$\alpha$ method is second-order accurate, the temporal mesh is uniform, and Equations \eqref{eq:alpha_m} and \eqref{eq:alpha_f} hold, then the governing equations at the $n^\text{th}$ time step are
\begin{align}
&\mathrm{R}\left(\frac{\mathrm{U}_{n+\alpha_f+1/2} - \mathrm{U}_{n+\alpha_f-1/2}}{\Delta t},\mathrm{U}_{n+\alpha_f},t_{n+\alpha_f}\right) = \mathrm{0}, \\
&\mathrm{U}_{n+1} = \mathrm{U}_{n} + \Delta t \left( (1-\alpha_f) \dot{\mathrm{U}}_{n} + \alpha_f \dot{\mathrm{U}}_{n+1} \right), \\
&\mathrm{U}_{n+\alpha_f} = (1-\alpha_f) \mathrm{U}_{n} + \alpha_f \mathrm{U}_{n+1}, \\
&\mathrm{U}_{n+\alpha_f+1/2} = \mathrm{U}_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{\mathrm{U}}_{n+1}, \\
&\mathrm{U}_{n+\alpha_f-1/2} = \mathrm{U}_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{\mathrm{U}}_{n},
\end{align}
and $\alpha_f \in \left[\frac{1}{2},1\right]$ with $\alpha_f = \frac{1}{2}$ corresponding to no damping (and the implicit midpoint method on the original temporal mesh) and $\alpha_f = 1$ corresponding to maximal damping (and an implicit midpoint method on a temporal mesh shifted to the right by $\frac{\Delta t}{2}$).
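As an illustration, a single step of this scheme for a linear system $\dot{\mathrm{U}} = \mathrm{L} \mathrm{U} + \mathrm{s}(t)$, together with a check of the shifted-midpoint identity \eqref{eq:gold}, can be sketched as follows (Python; function names are ours, and the initial rate vector is assumed consistent, e.g. $\dot{\mathrm{U}}_1 = \mathrm{L} \mathrm{U}_1 + \mathrm{s}(0)$):
\begin{verbatim}
import numpy as np

def generalized_alpha_step(Lmat, s, t_n, dt, u_n, du_n, rho_inf=0.5):
    # one step of the second-order generalized-alpha method for
    # du/dt = Lmat u + s(t)
    am = 0.5*(3.0 - rho_inf)/(1.0 + rho_inf)
    af = 1.0/(1.0 + rho_inf)
    g  = 0.5 + am - af            # second-order accuracy condition
    rhs = (-(1.0 - am)*du_n
           + Lmat @ (u_n + af*(1.0 - g)*dt*du_n) + s(t_n + af*dt))
    du_np1 = np.linalg.solve(am*np.eye(len(u_n)) - af*g*dt*Lmat, rhs)
    u_np1 = u_n + dt*((1.0 - g)*du_n + g*du_np1)
    # shifted-midpoint identity: dU_{n+am} = (U^+ - U^-)/dt
    du_am   = (1.0 - am)*du_n + am*du_np1
    u_plus  = u_np1 + (af - 0.5)*dt*du_np1
    u_minus = u_n   + (af - 0.5)*dt*du_n
    assert np.allclose(du_am, (u_plus - u_minus)/dt)
    return u_np1, du_np1
\end{verbatim}
The identity in the assertion holds exactly, up to roundoff, whenever $\gamma = 1/2 + \alpha_m - \alpha_f$, in agreement with the derivation above.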
\section{Application to the advection-diffusion problem}\label{sec:advection_diffusion}
Now consider the following advection-diffusion problem: Find $u: \Omega \times [0,\infty) \rightarrow \mathbb{R}$ such that
\begin{align}
\frac{\partial u}{\partial t} + \nabla \cdot \left( \mathbf{a} u - \kappa \nabla u \right) &= f && \text{ in } \Omega \times (0,\infty) \label{eq:ad_strong} \\
\kappa \nabla u \cdot \mathbf{n} - \min(\mathbf{a} \cdot \mathbf{n},0) u &= h && \text{ on } \Gamma \times (0,\infty) \\
u(\cdot,0) &= u_0 && \text{ in } \Omega
\end{align}
where $\Omega \subset \mathbb{R}^d$ is a $d$-dimensional spatial domain with $d \in \mathbb{N}$, $\Gamma$ is the boundary of $\Omega$, $\mathbf{n}$ is the outward unit normal vector to $\Omega$, $\mathbf{a}: \Omega \times (0,\infty) \rightarrow \mathbb{R}^d$ is the advection velocity satisfying $\nabla \cdot \mathbf{a} \equiv 0$, $\kappa : \Omega \times (0,\infty) \rightarrow \mathbb{R}$ is the diffusivity satisfying $\kappa > 0$, $f : \Omega \times (0,\infty) \rightarrow \mathbb{R}$ is the applied body force, $h : \Gamma \times (0,\infty) \rightarrow \mathbb{R}$ is the applied flux, and $u_0: \Omega \rightarrow \mathbb{R}$ is the applied initial condition. Note that over the inflow boundary $\Gamma_\text{in} := \left\{ \mathbf{x} \in \Gamma: \mathbf{a}(\mathbf{x}) \cdot \mathbf{n} (\mathbf{x}) < 0 \right\}$, the sum of diffusive and advective fluxes is specified, while along the outflow boundary $\Gamma_\text{out} := \left\{ \mathbf{x} \in \Gamma: \mathbf{a}(\mathbf{x}) \cdot \mathbf{n} (\mathbf{x}) \geq 0 \right\}$, only the diffusive flux is specified. This is necessary to arrive at a well-posed problem \cite{moghadam2011comparison}. By integrating Equation \eqref{eq:ad_strong} over the spatial domain and invoking the divergence theorem, we attain
\begin{align}
\frac{d}{dt} \int_{\Omega} u d\Omega &= \int_{\Omega} f d\Omega + \int_{\Gamma} h d\Gamma - \int_{\Gamma_\text{out}} \left( \mathbf{a} \cdot \mathbf{n} \right) u d\Gamma
\end{align}
for all $t \in (0,\infty)$, and by integrating between times $t_\text{begin}$ and $t_\text{end}$ with $0 \leq t_\text{begin} \leq t_\text{end}$ and invoking the fundamental theorem of calculus, we further attain
\begin{align}
\int_{\Omega} u(\cdot,t_\text{end}) d\Omega &= \int_{\Omega} u(\cdot,t_\text{begin}) d\Omega + \int_{t_\text{begin}}^{t_\text{end}} \left( \int_{\Omega} f d\Omega + \int_{\Gamma} h d\Gamma - \int_{\Gamma_\text{out}} \left( \mathbf{a} \cdot \mathbf{n} \right) u d\Gamma \right) dt. \label{eq:ad_conservation}
\end{align}
The above is an integral balance law that we typically wish to preserve in some sense at the discrete level.
A stabilized or unstabilized Galerkin semi-discretization of the considered advection-diffusion problem takes the form: Find $u^h(t) \in \mathcal{V}^h$ for all $t \in [0,\infty)$ such that
\begin{align}
\int_{\Omega} \frac{\partial u^h}{\partial t} w^h d\Omega - \int_{\Omega} \left( \mathbf{a} u^h - \kappa \nabla u^h \right) \cdot \nabla w^h d\Omega + \int_{\Gamma_\text{out}} \left( \mathbf{a} \cdot \mathbf{n} \right) u^h w^h d\Gamma + S^h(u^h,w^h) &= \int_{\Omega} f w^h d\Omega + \int_{\Gamma} h w^h d\Gamma \label{eq:ad_galerkin}
\end{align}
for all $w^h \in \mathcal{V}^h$ and $t \in (0,\infty)$ and
\begin{align}
\int_{\Omega} u^h(\cdot,0) w^h d\Omega &= \int_{\Omega} u_0 w^h d\Omega \label{eq:ad_galerkin_ic}
\end{align}
for all $w^h \in \mathcal{V}^h$ where $\mathcal{V}^h$ is a finite-dimensional subspace of $H^1(\Omega)$ and $S^h : \mathcal{V}^h \times \mathcal{V}^h \rightarrow \mathbb{R}$ is a stabilization form ($S^h \equiv 0$ when no stabilization is applied). Provided that $1 \in \mathcal{V}^h$ and $S^h(v^h,1) = 0$ for all $v^h \in \mathcal{V}^h$, we can take $w^h \equiv 1$ in Equation \eqref{eq:ad_galerkin}, integrate between times $t_\text{begin}$ and $t_\text{end}$ with $0 \leq t_\text{begin} \leq t_\text{end}$, and invoke the fundamental theorem of calculus to arrive at
\begin{align}
\int_{\Omega} u^h(\cdot,t_\text{end}) d\Omega &= \int_{\Omega} u^h(\cdot,t_\text{begin}) d\Omega + \int_{t_\text{begin}}^{t_\text{end}} \left( \int_{\Omega} f d\Omega + \int_{\Gamma} h d\Gamma - \int_{\Gamma_\text{out}} \left( \mathbf{a} \cdot \mathbf{n} \right) u^h d\Gamma \right) dt. \label{eq:ad_conservation_semi}
\end{align}
Thus a Galerkin semi-discretization inherits the integral balance law given in Equation \eqref{eq:ad_conservation} provided that $1 \in \mathcal{V}^h$ and $S^h(v^h,1) = 0$ for all $v^h \in \mathcal{V}^h$. The property that $1 \in \mathcal{V}^h$ holds for most finite element approximation spaces that are used in practice. The property that $S^h(v^h,1) = 0$ for all $v^h \in \mathcal{V}^h$ also holds for most stabilization methodologies that are used in practice. In particular, it holds for the popular Streamline Upwind Petrov Galerkin (SUPG) method \cite{hughes1987recent}, the Galerkin Least Squares (GLS) method \cite{shakib1991new}, the Variational Multiscale (VMS) method \cite{bazilevs2007variational}, the method of orthogonal subscales \cite{codina2002stabilized}, edge stabilization \cite{burman2007continuous}, and local projection stabilization \cite{braack2006local}. It also holds when viscosity-based discontinuity capturing operators \cite{bazilevs2007yzbeta} are employed.
The generalized-$\alpha$ method can be employed to discretize the first-order initial-value problem given by Equations \eqref{eq:ad_galerkin} and \eqref{eq:ad_galerkin_ic}. This gives rise to the following governing equations at the $n^\text{th}$ time step:
\begin{align}
&\int_{\Omega} \dot{u}^h_{n+\alpha_m} w^h d\Omega - \int_{\Omega} \left( \mathbf{a}_{n+\alpha_f} u^h_{n+\alpha_f} - \kappa_{n+\alpha_f} \nabla u^h_{n+\alpha_f} \right) \cdot \nabla w^h d\Omega + \int_{\Gamma_\text{out}} \left( \mathbf{a}_{n+\alpha_f} \cdot \mathbf{n} \right) u^h_{n+\alpha_f} w^h d\Gamma + S^h(u^h_{n+\alpha_f},w^h) \\
&= \int_{\Omega} f_{n+\alpha_f} w^h d\Omega + \int_{\Gamma} h_{n+\alpha_f} w^h d\Gamma
\end{align}
and
\begin{align}
u^h_{n+1} = u^h_{n} + \Delta t_n \left( (1-\gamma) \dot{u}^h_{n} + \gamma \dot{u}^h_{n+1} \right)
\end{align}
where
\begin{align}
\dot{u}^h_{n+\alpha_m} &:= (1-\alpha_m) \dot{u}^h_{n} + \alpha_m \dot{u}^h_{n+1}, \\
u^h_{n+\alpha_f} &:= (1-\alpha_f) u^h_{n} + \alpha_f u^h_{n+1},
\end{align}
$u^h_n$ and $u^h_{n+1}$ are the approximations of $u^h$ at times $t_n$ and $t_{n+1}$, $\dot{u}^h_n$ and $\dot{u}^h_{n+1}$ are the approximations of $\frac{\partial u^h}{\partial t}$ at times $t_n$ and $t_{n+1}$, $\kappa_{n+\alpha_f} = \kappa(\cdot,t_{n+\alpha_f})$, $\mathbf{a}_{n+\alpha_f} = \mathbf{a}(\cdot,t_{n+\alpha_f})$, $f_{n+\alpha_f} = f(\cdot,t_{n+\alpha_f})$, and $h_{n+\alpha_f} = h(\cdot,t_{n+\alpha_f})$. If the generalized-$\alpha$ method is second-order accurate and the temporal mesh is uniform, Equation \eqref{eq:gold} applies and we can write
\begin{align}
\dot{u}^h_{n+\alpha_m} &= \frac{u^h_{n+\alpha_f+1/2} - u^h_{n+\alpha_f-1/2}}{\Delta t}
\end{align}
where
\begin{align}
u^h_{n+\alpha_f+1/2} &:= u^h_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{u}^h_{n+1}, \\
u^h_{n+\alpha_f-1/2} &:= u^h_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{u}^h_{n}.
\end{align}
In this case, if $1 \in \mathcal{V}^h$ and $S^h(v^h,1) = 0$ for all $v^h \in \mathcal{V}^h$, we can take $w^h \equiv 1$ to immediately arrive at
\begin{align}
\int_{\Omega} u^h_{n+\alpha_f+1/2} d\Omega = \int_{\Omega} u^h_{n+\alpha_f-1/2} d\Omega + \Delta t \left( \int_{\Omega} f_{n+\alpha_f} d\Omega + \int_{\Gamma} h_{n+\alpha_f} d\Gamma - \int_{\Gamma_\text{out}} \left( \mathbf{a}_{n+\alpha_f} \cdot \mathbf{n} \right) u^h_{n+\alpha_f} d\Gamma \right).
\end{align}
We can sum over time steps to arrive at the balance law
\begin{align}
\int_{\Omega} u^h_{n_{\text{end}}+\alpha_f-1/2} d\Omega = \int_{\Omega} u^h_{n_{\text{begin}}+\alpha_f-1/2} d\Omega + \sum_{n=n_{\text{begin}}}^{n_{\text{end}}-1} \Delta t \left( \int_{\Omega} f_{n+\alpha_f} d\Omega + \int_{\Gamma} h_{n+\alpha_f} d\Gamma - \int_{\Gamma_\text{out}} \left( \mathbf{a}_{n+\alpha_f} \cdot \mathbf{n} \right) u^h_{n+\alpha_f} d\Gamma \right) \label{eq:ad_conservation_fully}
\end{align}
for two integers $1 \leq n_{\text{begin}} \leq n_{\text{end}}$. The above is a fully-discrete analogue of Equation \eqref{eq:ad_conservation_semi}. Thus, application of the generalized-$\alpha$ method to a conservative Galerkin semi-discretization of the advection-diffusion equation yields a conservative fully-discrete method if the generalized-$\alpha$ method is second-order accurate and the temporal mesh is uniform. Similar results to those seen here can be attained if Dirichlet boundary conditions are applied, provided a Lagrange multiplier field is introduced \cite{evans2013isogeometric}, and local conservation results can also be attained using the method described by Hughes et al.\cite{hughes2000continuous}.
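As a numerical illustration of this result, the following Python sketch applies the generalized-$\alpha$ method to a periodic linear finite element semi-discretization of the advection-diffusion equation with $f = h = 0$ (so that the outflow and forcing terms are absent) and monitors the shifted total $\int_\Omega u^h_{n+\alpha_f-1/2} d\Omega$. All discretization choices below are illustrative, and any parameter pair with $\gamma = \frac{1}{2} + \alpha_m - \alpha_f$ exhibits the same behavior:
\begin{verbatim}
# Generalized-alpha for a periodic P1 advection-diffusion
# semi-discretization; the shifted total is conserved to round-off.
import numpy as np

N = 64; dx = 1.0/N                 # periodic mesh on [0,1)
a, kappa = 1.0, 1e-3               # advection speed, diffusivity
dt, nsteps = 1e-2, 200
am, af = 0.8, 0.6                  # illustrative parameters
gamma = 0.5 + am - af              # second-order accuracy

M = np.zeros((N, N)); B = np.zeros((N, N))
Me = (dx/6.0)*np.array([[2.0, 1.0], [1.0, 2.0]])
Be = a*np.array([[-0.5, -0.5], [0.5, 0.5]]) \
    - (kappa/dx)*np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(N):                 # assembly with periodic wrap
    idx = [e, (e + 1) % N]
    M[np.ix_(idx, idx)] += Me
    B[np.ix_(idx, idx)] += Be

x = dx*np.arange(N)
u = np.exp(-100*(x - 0.5)**2)      # smooth initial condition
ud = np.linalg.solve(M, B @ u)     # consistent initial rate

LHS = am*M - af*gamma*dt*B         # constant system matrix
total = lambda u, ud: dx*np.sum(u + (af - 0.5)*dt*ud)
q0 = total(u, ud)
for n in range(nsteps):
    rhs = B @ (u + af*dt*(1 - gamma)*ud) - (1 - am)*(M @ ud)
    ud_new = np.linalg.solve(LHS, rhs)
    u = u + dt*((1 - gamma)*ud + gamma*ud_new)
    ud = ud_new
print('drift in shifted total:', abs(total(u, ud) - q0))
\end{verbatim}
Since the row vector of ones annihilates the assembled matrix $B$, the reported drift is limited only by round-off, whereas the unshifted total $\int_\Omega u^h_n d\Omega$ is generally not exactly conserved from step to step.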
\section{Application to systems of conservation laws}\label{sec:conservation_laws}
We now consider a general system of differential conservation laws. Without loss of generality, we consider only periodic boundary conditions. The problem of interest then reads as follows: Find $\textbf{U} : \Omega \times [0,\infty) \rightarrow \mathbb{R}^p$ such that
\begin{align}
\frac{\partial \mathbf{U}}{\partial t} + \sum_{i=1}^{d} \frac{\partial \mathbf{F}^i}{\partial x_i} &= \mathbf{S} && \text{ in } \Omega \times (0,\infty) \label{eq:cons} \\
\mathbf{U}(\cdot,0) &= \mathbf{U}_0 && \text{ in } \Omega
\end{align}
and $\textbf{U}$ is periodic in each spatial direction where $\Omega := (0,1)^d$ is the $d$-dimensional domain with $d \in \mathbb{N}$, $\left\{ \textbf{F}^i \right\}_{i=1}^d$ are the flux vectors in each spatial direction, $\textbf{S}$ is the source vector, and $\mathbf{U}_0 : \Omega \rightarrow \mathbb{R}^p$ is the applied initial condition. The flux and source vectors can depend on space and time, as well as on $\textbf{U}$ and its spatial derivatives in each direction. We require, however, that the flux and source vectors be themselves periodic in each direction. Both the Euler and Navier-Stokes equations can be written in the above form, as can the equations governing magnetohydrodynamics. Integrating \eqref{eq:cons} over the spatial domain, invoking the divergence theorem, integrating between times $t_\text{begin}$ and $t_\text{end}$ with $0 \leq t_\text{begin} \leq t_\text{end}$, and invoking the fundamental theorem of calculus, we attain
\begin{align}
\int_{\Omega} \mathbf{U}(\cdot,t_\text{end}) d\Omega = \int_{\Omega} \mathbf{U}(\cdot,t_\text{begin}) d\Omega + \int_{t_\text{begin}}^{t_\text{end}} \int_{\Omega} \mathbf{S} d\Omega dt \label{eq:cons_conservation}
\end{align}
which is a generalization of Equation \eqref{eq:ad_conservation} to the current setting. A stabilized or unstabilized Galerkin semi-discretization of the above problem using conservation variables takes the form: Find $\mathbf{U}^h(t) \in \bm{\mathcal{V}}^h$ for all $t \in [0,\infty)$ such that
\begin{align}
\int_{\Omega} \frac{\partial \mathbf{U}^h}{\partial t} \cdot \mathbf{W}^h d\Omega - \sum_{i=1}^d \int_{\Omega} \mathbf{F}^i \cdot \frac{\partial \mathbf{W}^h}{\partial x_i} d\Omega + S^h\left(\mathbf{U}^h,\mathbf{W}^h\right) &= \int_{\Omega} \mathbf{S} \cdot \mathbf{W}^h d\Omega \label{eq:cons_galerkin}
\end{align}
for all $\mathbf{W}^h \in \bm{\mathcal{V}}^h$ and $t \in (0,\infty)$ and
\begin{align}
\int_{\Omega} \mathbf{U}^h(\cdot,0) \cdot \mathbf{W}^h d\Omega &= \int_{\Omega} \mathbf{U}_0 \cdot \mathbf{W}^h d\Omega \label{eq:cons_galerkin_ic}
\end{align}
for all $\mathbf{W}^h \in \bm{\mathcal{V}}^h$ where $\bm{\mathcal{V}}^h$ is a finite-dimensional subspace of $(H^1_\text{per}(\Omega))^p$ and $S^h : \bm{\mathcal{V}}^h \times \bm{\mathcal{V}}^h \rightarrow \mathbb{R}$ is a stabilization form (again $S^h \equiv 0$ in the absence of stabilization). If the unit vector $\mathbf{e}_j$ is a member of $\bm{\mathcal{V}}^h$ and $S^h(\mathbf{v}^h,\mathbf{e}_j) = 0$ for all $\mathbf{v}^h \in \bm{\mathcal{V}}^h$ for each $j = 1, \ldots, p$, then we can use the same procedure employed to arrive at Equation \eqref{eq:ad_conservation_semi} to also arrive at
\begin{align}
\int_{\Omega} \mathbf{U}^h(\cdot,t_\text{end}) d\Omega = \int_{\Omega} \mathbf{U}^h(\cdot,t_\text{begin}) d\Omega + \int_{t_\text{begin}}^{t_\text{end}} \int_{\Omega} \mathbf{S} d\Omega dt \label{eq:cons_conservation_semi}
\end{align}
for $0 \leq t_\text{begin} \leq t_\text{end}$. Application of the generalized-$\alpha$ method to the Galerkin semi-discretization results in the following governing equations at the $n^\text{th}$ time step:
\begin{align}
\int_{\Omega} \dot{\mathbf{U}}_{n+\alpha_m}^h \cdot \mathbf{W}^h d\Omega - \sum_{i=1}^d \int_{\Omega} \mathbf{F}_{n+\alpha_f}^i \cdot \frac{\partial \mathbf{W}^h}{\partial x_i} d\Omega + S^h\left(\mathbf{U}_{n+\alpha_f}^h,\mathbf{W}^h\right) &= \int_{\Omega} \mathbf{S}_{n+\alpha_f} \cdot \mathbf{W}^h d\Omega
\end{align}
and
\begin{align}
\mathbf{U}^h_{n+1} = \mathbf{U}^h_{n} + \Delta t_n \left( (1-\gamma) \dot{\mathbf{U}}^h_{n} + \gamma \dot{\mathbf{U}}^h_{n+1} \right)
\end{align}
where
\begin{align}
\dot{\mathbf{U}}^h_{n+\alpha_m} &:= (1-\alpha_m) \dot{\mathbf{U}}^h_{n} + \alpha_m \dot{\mathbf{U}}^h_{n+1}, \\
\mathbf{U}^h_{n+\alpha_f} &:= (1-\alpha_f) \mathbf{U}^h_{n} + \alpha_f \mathbf{U}^h_{n+1},
\end{align}
$\mathbf{U}^h_n$ and $\mathbf{U}^h_{n+1}$ are the approximations of $\mathbf{U}^h$ at times $t_n$ and $t_{n+1}$, $\dot{\mathbf{U}}^h_n$ and $\dot{\mathbf{U}}^h_{n+1}$ are the approximations of $\frac{\partial \mathbf{U}^h}{\partial t}$ at times $t_n$ and $t_{n+1}$, $\left\{ \mathbf{F}^i_{n+\alpha_f} \right\}_{i=1}^d$ are the flux vectors $\left\{ \mathbf{F}^i \right\}_{i=1}^d$ evaluated at time $t_{n+\alpha_f}$ using the value and spatial derivatives of $\mathbf{U}^h_{n+\alpha_f}$, and $\mathbf{S}_{n+\alpha_f}$ is the source vector evaluated at time $t_{n+\alpha_f}$ using the value and spatial derivatives of $\mathbf{U}^h_{n+\alpha_f}$. If the generalized-$\alpha$ method is second-order accurate and the temporal mesh is uniform, then
\begin{align}
\dot{\mathbf{U}}^h_{n+\alpha_m} &= \frac{\mathbf{U}^h_{n+\alpha_f+1/2} - \mathbf{U}^h_{n+\alpha_f-1/2}}{\Delta t}
\end{align}
where
\begin{align}
\mathbf{U}^h_{n+\alpha_f+1/2} &:= \mathbf{U}^h_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{\mathbf{U}}^h_{n+1}, \\
\mathbf{U}^h_{n+\alpha_f-1/2} &:= \mathbf{U}^h_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \dot{\mathbf{U}}^h_{n},
\end{align}
and if the unit vector $\mathbf{e}_j$ is a member of $\bm{\mathcal{V}}^h$ and $S^h(\mathbf{v}^h,\mathbf{e}_j) = 0$ for all $\mathbf{v}^h \in \bm{\mathcal{V}}^h$ for each $j = 1, \ldots, p$, then we can use the same procedure employed to arrive at Equation \eqref{eq:ad_conservation_fully} to also arrive at the following fully-discrete analogue of Equation \eqref{eq:cons_conservation_semi}:
\begin{align}
\int_{\Omega} \mathbf{U}^h_{n_{\text{end}}+\alpha_f-1/2} d\Omega = \int_{\Omega} \mathbf{U}^h_{n_{\text{begin}}+\alpha_f-1/2} d\Omega + \sum_{n=n_{\text{begin}}}^{n_{\text{end}}-1} \Delta t \int_{\Omega} \mathbf{S}_{n+\alpha_f} d\Omega \label{eq:cons_conservation_fully}
\end{align}
for two integers $1 \leq n_{\text{begin}} \leq n_{\text{end}}$. Therefore we also have that application of the generalized-$\alpha$ method to a conservative Galerkin semi-discretization of a system of differential conservation laws yields a conservative fully-discrete method if the generalized-$\alpha$ method is second-order accurate, the temporal mesh is uniform, and the conservation variables themselves are discretized.
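For concreteness, the one-dimensional compressible Euler equations furnish a familiar instance of the above template with $p = 3$:
\begin{align}
\mathbf{U} = \left( \rho, \, \rho u, \, E \right)^T, \qquad \mathbf{F}^1 = \left( \rho u, \, \rho u^2 + P, \, u \left( E + P \right) \right)^T, \qquad \mathbf{S} = \mathbf{0}
\end{align}
where $\rho$ is the density, $u$ is the velocity, $E$ is the total energy per unit volume, and $P$ is the pressure (written with a capital letter to avoid a clash with the system size $p$) supplied by an equation of state. In this setting, Equation \eqref{eq:cons_conservation_fully} expresses discrete conservation of mass, momentum, and energy.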
\section{Discretization with nonconservation variables}\label{sec:nonconservative variables}
It is common practice to discretize systems of differential conservation laws using variables other than the conservation variables. For instance, the use of pressure primitive variables or entropy variables is common in the discretization of the Euler and Navier-Stokes equations\cite{hughes1986new}. A stabilized or unstabilized Galerkin semi-discretization of the system of differential conservation laws analyzed in the previous section using a set of nonconservation variables takes the form: Find $\mathbf{V}^h(t) \in \bm{\mathcal{V}}^h$ for all $t \in [0,\infty)$ such that
\begin{align}
\int_{\Omega} \left(\frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h\right) \frac{\partial \mathbf{V}^h}{\partial t}\right) \cdot \mathbf{W}^h d\Omega - \sum_{i=1}^d \int_{\Omega} \mathbf{F}^i \cdot \frac{\partial \mathbf{W}^h}{\partial x_i} d\Omega + S^h\left(\mathbf{U}(\mathbf{V}^h),\mathbf{W}^h\right) &= \int_{\Omega} \mathbf{S} \cdot \mathbf{W}^h d\Omega \label{eq:alt}
\end{align}
for all $\mathbf{W}^h \in \bm{\mathcal{V}}^h$ and $t \in (0,\infty)$ and
\begin{align}
\int_{\Omega} \mathbf{U}(\mathbf{V}^h(\cdot,0)) \cdot \mathbf{W}^h d\Omega &= \int_{\Omega} \mathbf{U}_0 \cdot \mathbf{W}^h d\Omega \label{eq:alt_ic}
\end{align}
for all $\mathbf{W}^h \in \bm{\mathcal{V}}^h$ where $\mathbf{V}^h$ is the approximate vector of nonconservation variables and $\mathbf{U}: \mathbb{R}^p \rightarrow \mathbb{R}^p$ is the mapping between nonconservation variables and conservation variables. The above Galerkin semi-discretization harbors the same conservation properties as a Galerkin semi-discretization using conservation variables. The same is not true, however, for the fully-discrete method attained after application of the generalized-$\alpha$ method. To see this, note that time-discretization of Equations \eqref{eq:alt} and \eqref{eq:alt_ic} using the generalized-$\alpha$ method results in the following governing equations at the $n^\text{th}$ time step:
\begin{align}
\int_{\Omega} \left(\frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h_{n+\alpha_f}\right) \dot{\mathbf{V}}_{n+\alpha_m}^h\right) \cdot \mathbf{W}^h d\Omega - \sum_{i=1}^d \int_{\Omega} \mathbf{F}_{n+\alpha_f}^i \cdot \frac{\partial \mathbf{W}^h}{\partial x_i} d\Omega + S^h\left(\mathbf{U}\left(\mathbf{V}_{n+\alpha_f}^h\right),\mathbf{W}^h\right) &= \int_{\Omega} \mathbf{S}_{n+\alpha_f} \cdot \mathbf{W}^h d\Omega \label{eq:cons_galerkin_non}
\end{align}
and
\begin{align}
\mathbf{V}^h_{n+1} = \mathbf{V}^h_{n} + \Delta t_n \left( (1-\gamma) \dot{\mathbf{V}}^h_{n} + \gamma \dot{\mathbf{V}}^h_{n+1} \right)
\end{align}
where
\begin{align}
\dot{\mathbf{V}}^h_{n+\alpha_m} &:= (1-\alpha_m) \dot{\mathbf{V}}^h_{n} + \alpha_m \dot{\mathbf{V}}^h_{n+1}, \\
\mathbf{V}^h_{n+\alpha_f} &:= (1-\alpha_f) \mathbf{V}^h_{n} + \alpha_f \mathbf{V}^h_{n+1}.
\end{align}
Unfortunately, even when the generalized-$\alpha$ method is second-order accurate and the temporal mesh is uniform, we cannot express $\frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h_{n+\alpha_f}\right) \dot{\mathbf{V}}_{n+\alpha_m}^h$ in terms of a difference of conservation variable states. As such, we cannot arrive at a discrete balance law analogous to Equation \eqref{eq:cons_conservation_fully}. We can remedy this situation by replacing $\frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h_{n+\alpha_f}\right) \dot{\mathbf{V}}_{n+\alpha_m}^h$ in Equation \eqref{eq:cons_galerkin_non} with
\begin{align}
\frac{\hat{\mathbf{U}}^h_{n+\alpha_f+1/2} - \hat{\mathbf{U}}^h_{n+\alpha_f-1/2}}{\Delta t}
\end{align}
when Equation \eqref{eq:gamma} holds and the temporal mesh is uniform where
\begin{align}
\hat{\mathbf{U}}^h_{n+\alpha_f+1/2} &:= \mathbf{U}\left(\mathbf{V}^h_{n+1}\right) + \left(\alpha_f - \frac{1}{2}\right) \Delta t \frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h_{n+1}\right) \dot{\mathbf{V}}^h_{n+1}, \\
\hat{\mathbf{U}}^h_{n+\alpha_f-1/2} &:= \mathbf{U}\left(\mathbf{V}^h_{n}\right) + \left(\alpha_f - \frac{1}{2}\right) \Delta t \frac{\partial \mathbf{U}}{\partial \mathbf{V}}\left(\mathbf{V}^h_n\right) \dot{\mathbf{V}}^h_{n}.
\end{align}
The resulting fully-discrete method then admits the discrete balance law
\begin{align}
\int_{\Omega} \hat{\mathbf{U}}^h_{n_{\text{end}}+\alpha_f-1/2} d\Omega = \int_{\Omega} \hat{\mathbf{U}}^h_{n_{\text{begin}}+\alpha_f-1/2} d\Omega + \sum_{n=n_{\text{begin}}}^{n_{\text{end}}-1} \Delta t \int_{\Omega} \mathbf{S}_{n+\alpha_f} d\Omega
\end{align}
for two integers $1 \leq n_{\text{begin}} \leq n_{\text{end}}$.
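To illustrate the modification, consider the spatially trivial scalar analogue $\frac{d U}{d t} = s(t)$ solved in a nonconservation variable $V$; the mapping $U(V) = V^3$ below is purely hypothetical, and all numerical parameters (including the root-finding bracket) are illustrative. The sketch solves the modified residual at each step and confirms that the discrete balance law telescopes:
\begin{verbatim}
# Modified generalized-alpha step for a scalar ODE dU/dt = s(t)
# discretized in the nonconservation variable V with U(V) = V**3.
import numpy as np
from scipy.optimize import brentq

U = lambda V: V**3
dU = lambda V: 3*V**2              # dU/dV
s = lambda t: np.cos(t)            # source term

dt, nsteps = 0.05, 100
am, af = 0.9, 0.7
gamma = 0.5 + am - af

V = 2.0**(1.0/3.0)                 # so that U(V(0)) = 2
Vd, t = s(0.0)/dU(V), 0.0          # consistent initialization
Uhat_minus = lambda V, Vd: U(V) + (af - 0.5)*dt*dU(V)*Vd
balance = Uhat_minus(V, Vd)        # running right-hand side

for n in range(nsteps):
    s_af = s(t + af*dt)
    def residual(Vd1):
        V1 = V + dt*((1 - gamma)*Vd + gamma*Vd1)
        Uhat_plus = U(V1) + (af - 0.5)*dt*dU(V1)*Vd1
        return (Uhat_plus - Uhat_minus(V, Vd))/dt - s_af
    Vd1 = brentq(residual, -10.0, 10.0)   # nonlinear step solve
    V = V + dt*((1 - gamma)*Vd + gamma*Vd1)
    Vd, t = Vd1, t + dt
    balance += dt*s_af

# discrepancy is limited by the nonlinear solver tolerance
print(abs(Uhat_minus(V, Vd) - balance))
\end{verbatim}
The telescoping is automatic because $\hat{\mathbf{U}}^h_{(n+1)+\alpha_f-1/2} = \hat{\mathbf{U}}^h_{n+\alpha_f+1/2}$ by construction.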
\section{Conclusion}\label{sec:conclusion}
In this short communication, we showed that application of the second-order accurate generalized-$\alpha$ method to a stabilized or unstabilized Galerkin semi-discretization of a system of differential conservation laws results in a fully-discrete method that inherits the conservation properties of the underlying semi-discretization provided the temporal mesh is uniform. To do so, we first conducted a critical examination of the second-order accurate generalized-$\alpha$ method for first-order initial value problems, and we found that it may be viewed as an implicit midpoint method on a shifted temporal mesh when the temporal mesh is uniform. We then employed this knowledge to show that second-order accurate generalized-$\alpha$ time integration of a conservative Galerkin semi-discretization of the advection-diffusion equation using a uniform temporal mesh yields a fully-discrete method admitting a discrete balance law, and we then illustrated that the same is true for general systems of differential conservation laws provided the conservation variables are themselves discretized. When nonconservation variables are instead discretized, the resulting fully-discrete method is not conservative, but we demonstrated how to modify the generalized-$\alpha$ method to arrive at a discrete balance law in this case. All the theoretical results appearing in this note have been verified by numerical experiments, but these experiments are not discussed here for brevity.
The theoretical results appearing in this note hold under the restrictive assumption of a uniform temporal mesh. While a uniform temporal mesh is most commonly employed in practice, significant efficiency gains are sometimes possible with adaptive time integration. We do not believe that discrete balance laws can be derived under the less restrictive assumption of a nonuniform temporal mesh, but we believe it is possible that the generalized-$\alpha$ method can be modified to ensure conservation in this case, just as was done for the case of nonconservation variables in this note.
Finally, while we only showed that the second-order accurate generalized-$\alpha$ method for first-order initial value problems can be viewed as an implicit midpoint method on a shifted temporal mesh if the unshifted temporal mesh is uniform, the same is true for second-order initial value problems as well. To see this, note that application of the generalized-$\alpha$ method to a second-order initial value problem results in a residual equation of the form
\begin{align}
\mathrm{R}\left(\ddot{\mathrm{U}}_{n+\alpha_m},\dot{\mathrm{U}}_{n+\alpha_f},\mathrm{U}_{n+\alpha_f},t_{n+\alpha_f}\right) = \mathrm{0}
\end{align}
at the $n^\text{th}$ time step, and second-order accuracy still dictates that $\gamma = \frac{1}{2} + \alpha_m - \alpha_f$\cite{chung1994family}. Consequently, the same analysis conducted in this note can also be used to show that
\begin{align}
\ddot{\mathrm{U}}_{n+\alpha_m} &= \frac{\dot{\mathrm{U}}_{n+\alpha_f+1/2} - \dot{\mathrm{U}}_{n+\alpha_f-1/2}}{\Delta t}
\end{align}
on a uniform temporal mesh where $\dot{\mathrm{U}}_{n+\alpha_f+1/2} := \dot{\mathrm{U}}_{n+1} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \ddot{\mathrm{U}}_{n+1}$ and $\dot{\mathrm{U}}_{n+\alpha_f-1/2} := \dot{\mathrm{U}}_{n} + \left(\alpha_f - \frac{1}{2}\right) \Delta t \ddot{\mathrm{U}}_{n}$, and as a result, application of the second-order generalized-$\alpha$ method to a Galerkin elastodynamics semi-discretization results in a fully-discrete method with a discrete balance law for momentum. We leave further analysis of this, as well as extension of the theoretical results shown here to higher-order generalizations of the generalized-$\alpha$ method \cite{behnoudfar2021higher,behnoudfar2021higher_parabolic}, for future work.
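As in the first-order case, this shifted-midpoint identity can be checked symbolically; a minimal sketch, assuming the standard Newmark-type rate update $\dot{\mathrm{U}}_{n+1} = \dot{\mathrm{U}}_n + \Delta t \left( (1-\gamma) \ddot{\mathrm{U}}_n + \gamma \ddot{\mathrm{U}}_{n+1} \right)$:
\begin{verbatim}
import sympy as sp

vn, an, an1, dt, am, af = sp.symbols(
    'v_n a_n a_n1 Delta_t alpha_m alpha_f')
gamma = sp.Rational(1, 2) + am - af
vn1 = vn + dt*((1 - gamma)*an + gamma*an1)        # rate update
a_am = (1 - am)*an + am*an1
v_plus = vn1 + (af - sp.Rational(1, 2))*dt*an1
v_minus = vn + (af - sp.Rational(1, 2))*dt*an
print(sp.simplify((v_plus - v_minus)/dt - a_am))  # -> 0
\end{verbatim}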
\section*{Acknowledgments}
Both authors were partially funded by the Army Research Office under Award Number W911NF20P0002.
\subsection*{Author contributions}
Both authors contributed to the conceptualization, writing, and editing of this note.
\subsection*{Financial disclosure}
None reported.
\subsection*{Conflict of interest}
The authors declare no potential conflict of interests.
\section*{Supporting information}
There is no supporting information for this article.
\nocite{*}
\section{Introduction}
The phase retrieval problem refers to the recovery of the phase of a function $f$ from knowledge of the magnitude of $f$ and some constraints on $f$, usually expressed in terms of properties of some transforms of $f$. A typical example consists in the recovery of $f$ from $|f|$ and the knowledge that the Fourier transform of $f$ is compactly supported. These problems have been studied due to their physical applications such as in x-ray crystallography \cite{Mi1990}, optical imaging \cite{SECCMS2015}, microscopy \cite{DHF1975}, and astronomy \cite{DF1987}. However, until the turn of the century, little was known in the mathematics literature. Early work on this problem centered on describing the set of solutions and finding additional constraints that
can lead to significant reductions of the set of solutions, see {\it e.g.} Klibanov {\it et al.} \cite{Kl1995} and the first author's papers \cite{Ja1999, Ja2014}.
In the last decade, this subject has seen a blooming interest thanks to the discovery of new algorithms based on convex optimization (see {\it e.g.} \cite{CESV2015,CLS2014,CSV2013,WAM2015}). This has in turn triggered interest in the issue of stability, which refers to the continuous dependence of the solution on the given magnitude data. It has been shown that phase retrieval problems set in finite dimensions are stable; however, this is not the case in infinite dimensions \cite{ADGY2019,CCD2016}. More precisely, the stability deteriorates as the dimension increases \cite{AG2017,CCD2016}. For more information on phase retrieval problems, we refer the reader to the survey articles \cite{CLS2014,Fi1982,GKR2020,LBL2002,Mi1990} which include detailed discussions on both the theoretical (\textit{e.g.} abstract formulations, additional constraints, stability) and the numerical aspects (algorithms), and some physical examples.
In order to simply explain how zero-flipping works, we recall a classical Fourier phase
retrieval problem which was solved independently by Akutowicz \cite{Aku1956,Aku1957}, Walther \cite{Wa1963}, and Hofstetter \cite{Ho1964}: given $f$ in the Paley-Wiener class, \textit{i.e.} $f\in L^2(\mathbb{R})$ with compactly supported Fourier transform, the goal is to find all $g$ in the Paley-Wiener class such that
$$
|g(x)|=|f(x)|,\qquad x\in\mathbb{R}.
$$
We summarize the proof of their solution. Recall that, by the Paley-Wiener theorem, $f$ and $g$ extend to entire functions of exponential type, hence of order at most $1$. Writing $|f(x)|^2=|g(x)|^2$ or
$$
f(x)\overline{f(\bar x)}=g(x)\overline{g(\bar x)},\qquad x\in\mathbb{R},
$$
we see that their extensions satisfy
\begin{equation}
\label{eq:intro1}
g(z)\overline{g(\bar z)}=f(z)\overline{f(\bar z)},\qquad z\in\mathbb{C}.
\end{equation}
Since $f$ is of finite order, we can use the Hadamard factorization theorem, which describes entire functions of finite order in terms of their zeros. Here, we may write $f$ as
$$
f(z)=ce^{\alpha z}z^m\prod_{k\in\mathbb{N}}\left(1-\dfrac{z}{z_k}\right)e^{z/z_k},\qquad z\in\mathbb{C}
$$
where $c,\alpha\in \mathbb{C}$, $m\in \mathbb{N}\cup \{0\}$ and $\{z_k\}_{k\in\mathbb{N}}$ is the sequence of nonzero zeros of $f$. Hence if we denote by $\mathcal{Z}(g)$ the zero set of $g$ (counting with multiplicities), by \eqref{eq:intro1} we have
$$
\mathcal{Z}(g)\setminus\{0,0,...,0\}=\{z_k:k\in J\}\cup \{\overline{z_k}:k\in \mathbb{N}\setminus J\} ,\qquad J\subseteq \mathbb{N}.
$$
This process was called \textit{zero-flipping} by Walther. With this, it follows that all such $g$'s have Hadamard factorization given by
$$
g(z)=\tilde ce^{(\alpha+i\gamma) z}z^m\prod_{k\in J}\left(1-\dfrac{z}{z_k}\right)e^{z/z_k}\prod_{k\in \mathbb{N}\setminus J}\left(1-\dfrac{z}{\overline{z_k}}\right)e^{z/\overline{z_k}},\qquad z\in\mathbb{C}
$$
where $|\tilde c|=|c|$ and $\gamma\in\mathbb{R}$. The convergence of the infinite product above to an entire function of order 1 is guaranteed by a result from Titchmarsh \cite{Ti1926}. Moreover, we see that $g\in L^2(\mathbb{R})$ since $|g|=|f|$. For a more technical discussion of zero-flipping in this context, we refer the reader to the book of Hurt \cite[Section 3.17]{Hu1989}.
Now, let $a\in \mathbb{C}\setminus\mathbb{R}$ and $f$ be in the Paley-Wiener class. Define the \textit{flipping operator}, denoted by $\mathfrak{F}_a$ where
$$
\mathfrak{F}_a: f \longmapsto \dfrac{(1-x/\bar a)}{(1-x/a)}\dfrac{e^{x/\bar a}}{e^{x/a}}\cdot f,\qquad x\in\mathbb{R}.
$$
This operator exhibits the zero-flipping of $f$ at $a$. Indeed, dividing $f$ by the factor $(1-x/a)e^{x/a}$ cancels the canonical factor associated to $a$ while multiplying the result by $(1-x/\bar a)e^{x/\bar a}$ completes the flipping process. Whenever $f(a)= 0$, $\mathfrak{F}_af$ is still inside the Paley-Wiener class and is always a solution of the phase retrieval problem. On the other hand, when $f(a)\ne 0$, $\mathfrak{F}_af$ no longer belongs to the Paley-Wiener class. However we will show that $\mathfrak{F}_af$ is \textit{wide-banded}, that is, its Fourier transform satisfies a square-integrability condition with an exponential weight. It turns out that this problem was solved in our previous work in \cite{JKP2020}.
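That $|\mathfrak{F}_af|=|f|$ on $\mathbb{R}$ comes from the unimodularity of the flipping factor on the real line, which is immediate to confirm numerically; in the following sketch the value of $a$ is arbitrary:
\begin{verbatim}
# The flipping factor has modulus one on the real line.
import numpy as np

a = 1.5 + 0.7j                          # arbitrary, Im a > 0
x = np.linspace(-50, 50, 10001)
factor = (1 - x/np.conj(a))/(1 - x/a)*np.exp(x/np.conj(a) - x/a)
print(np.max(np.abs(np.abs(factor) - 1.0)))   # ~1e-15
\end{verbatim}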
The main question we address in this paper is that of stability of zero-flipping.
In some previous work on the stability of phase retrieval problems (see e.g. \cite{ADGY2019, GR2019}), stability was shown by finding (in some cases) a positive constant $C$ such that
\begin{equation}
\label{eq:stable}
\inf_{|c|=1}||f-cg||_{\mathcal B}\leq C\big|\big| |f|-|g| \big|\big|_{\mathcal B'}
\end{equation}
where $\mathcal B, \mathcal B'$ are suitable Banach or Hilbert spaces. Error terms
may possibly need to be added. In our case, stability of the phase retrieval problem in
some subclass $X$ of the Paley-Wiener class would mean that
$$
\inf_{|c|=1}||f-cg||_2\leq C\big|\big| |f|-|g| \big|\big|_2+\mbox{(error term)}
$$
for every $f\in X$ and every solution $g\in X$ of the phase retrieval problem. In particular,
for $g=\mathfrak F_af$ only the error term remains, so stability would require this error term to
be small. Our aim here is to investigate this issue, namely to
get an estimate of
$$
\inf_{|c|=1}||f-c\mathfrak{F}_af||_{2}.
$$
It turns out that when $a$ is in some small region near the origin, then this quantity is actually large (close to $2||f||_2$) so that zero-flipping of such a zero leads to instabilities. On the other hand, we will show that the error term is small when we flip a zero that is large and close to the real axis so that such a flipping does not lead to instabilities.
In a second stage, we compare the effect of two zero-flipping, that is, we investigate
\begin{equation}
\label{eq:stabmeasure}
\inf_{|c|=1}||\mathfrak F_af - c\mathfrak F_bf||_2.
\end{equation}
For instance, if $a,b\in \mathbb{C}\setminus\mathbb{R}$ are such that $f(a)\ne 0$ and $f(b)=0$, then we are comparing a genuine solution of the phase retrieval problem in the Paley-Wiener class with a solution obtained after having made a mistake on the location of the zero. Note that $\mathfrak F_af$, $\mathfrak F_bf$ are solutions of the phase retrieval problem. Thus, if the phase retrieval problem were stable, this quantity would be bounded by
$$
\inf_{|c|=1}||\mathfrak F_af - cf||_2+\inf_{|c|=1}||\mathfrak F_bf - cf||_2
$$
and should thus be an error term. We will indeed obtain an upper bound of \eqref{eq:stabmeasure} of the form $C(f)\,\text{dist}(a,b)$ where $C(f)$ is a positive constant depending on $f$, and $\text{dist}(a,b)$ is some distance function depending on $a$ and $b$.
This paper is organized as follows. Section 2 provides a short summary about the Fourier transform and its properties relevant to the study. Section 3 is devoted to our stability results.
\section{Preliminaries}
For $f\in L^1(\mathbb{R})$, we use the following normalized definition for the Fourier transform $\widehat{f}$ given by
$$
\widehat{f}(w)=\dfrac{1}{\sqrt{2\pi}}\int_\mathbb{R} f(x)e^{-iwx}\,\mathrm{d} x,\qquad w\in\mathbb{R}.
$$
With this definition, we have Parseval's identity given by $||f||_2=||\widehat f||_2$ for $f\in L^2(\mathbb{R})$. Recall also that for all $x,w\in\mathbb{R}$,
\begin{enumerate}
\item if $g(x)=e^{i\alpha x}f(x)$ for some $\alpha\in\mathbb{R}$, then $\widehat g(w) = \tau_\alpha \widehat f(w)= \widehat f(w-\alpha)$;
\item if $g,h\in L^2(\mathbb{R})$, then $\widehat{gh}=\widehat g*\widehat h$ where
$$
(\widehat g*\widehat h)(w)=\dfrac{1}{\sqrt{2\pi}}\int_\mathbb{R} \widehat g(s)\widehat h(w-s)\,\mathrm{d} s,\qquad w\in\mathbb{R}.
$$
\end{enumerate}
Recall that whenever $f\in L^2(\mathbb{R})$ with $\supp\widehat{f}\subseteq[-L,L]$ for some $L>0$, $f$ is said to be bandlimited and is contained in the Paley-Wiener class which we denote by $PW_L$. The space $PW_L$ is a closed linear subspace of $L^2(\mathbb{R})$.
We also recall the Paley-Wiener theorem on the strip, that is, whenever $f\in L^2(\mathbb{R})$ and $\lambda>0$, $\widehat f\in L^2(\mathbb{R}, e^{2\lambda|x|}\mathrm{d} x)$ where
$$
L^2(\mathbb{R}, e^{2\lambda|x|}\mathrm{d} x)=\left\{F\text{ is measurable}: \int_\mathbb{R} |F(x)|^2e^{2\lambda|x|}\mathrm{d} x<+\infty\right\}
$$
if and only if $f$ belongs to the Hardy space on the strip $H^2_\tau(\mathcal S_\lambda)$ (see \cite{JKP2020} and references therein). Here, $H^2_\tau(\mathcal S_\lambda)$ is the collection of all holomorphic functions on the strip $\mathcal S_\lambda=\{z\in\mathbb{C}:|\operatorname{Im}z|<\lambda \}$ such that
$$
||f||^2_{H^2_\tau(\mathcal S_\lambda)}=\sup_{|y|<\lambda}\int_\mathbb{R} |f(x+iy)|^2\,\mathrm{d} x<+\infty.
$$
Note also that if $f\in H^2_\tau(\mathcal S_\lambda)$, then $\widehat{f}$ has to be concentrated near the origin so that $f$ is wide-banded.
For any function $f$, we denote its reflection with respect to the
$y$-axis by $Rf$ given by
$$
Rf(x)=f(-x), \qquad x\in\mathbb{R}.
$$
Finally, recall that the $L^2$-modulus of continuity of $F\in L^2(\mathbb{R})$, denoted by $\omega_2(F;h)$ for some $h>0$, is given by
$$
\omega_2(F;h)=\sup_{|\eta|\leq h}\left(\int_\mathbb{R} |F(x-\eta)-F(x)|^2\,\mathrm{d} x\right)^{1/2}= \sup_{|\eta|\leq h}||\tau_{\eta} F-F||_2.
$$
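When $F$ is only available through samples, $\omega_2(F;h)$ can be approximated by discrete shifts; a minimal sketch, in which the function and grid are illustrative and the periodic wrap of \texttt{np.roll} is harmless for a function that is negligible at the boundary of the grid:
\begin{verbatim}
# Approximating the L^2-modulus of continuity of a sampled F.
import numpy as np

x = np.linspace(-20, 20, 4001); dx = x[1] - x[0]
F = np.exp(-x**2/2)

def omega2(F, h, dx):
    m = int(round(h/dx))           # largest shift, in grid steps
    best = 0.0
    for k in range(m + 1):         # |eta| <= h; negative shifts
        diff = np.roll(F, k) - F   # give the same L^2 norm
        best = max(best, np.sqrt(np.sum(np.abs(diff)**2)*dx))
    return best

print(omega2(F, 0.5, dx))
\end{verbatim}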
Throughout the paper, we use the notation $C(\alpha_1,\ldots,\alpha_n)$ to denote
a positive constant that depends only on $\alpha_1,\ldots,\alpha_n\in\mathbb{C}$. The constant may change from one line to the next.
\section{Results}
\subsection{The operator $\zf_a$}
Let $f$ belong to $PW_L$ and let $a\in\mathbb{C}$ such that $\operatorname{Im}a>0$.
Define the \textit{flipping operator} which we denote by $\mathfrak{F}_a$ where
\begin{equation}
\label{eq:ZFa}
(\mathfrak{F}_af)(x)=\dfrac{1-x/\bar{a}}{1-x/a}\cdot\dfrac{e^{x/\bar{a}}}{e^{x/a}}\,f(x),\qquad x\in\mathbb{R}.
\end{equation}
It is easy to verify that $|\mathfrak{F}_af|=|f|$ on $\mathbb{R}$ and so $||\mathfrak{F}_af||_2=||f||_2$, and thus also $||\widehat{\mathfrak{F}_af}||_2=||\widehat{f}||_2$. Note that it suffices to analyze the stability of $\mathfrak{F}_a$ when $\operatorname{Im}a>0$, since the case $\operatorname{Im}a<0$ is covered by
$\mathfrak F_{\bar a}$: for $x\in\mathbb{R}$, $\mathfrak F_{\bar a}f(x)=\overline{\mathfrak F_a\bar f(x)}$.
Observe that $\mathfrak F_af$ extends to a meromorphic function and that, if $f(a)\ne 0$, then $\mathfrak F_af$ has a pole at $a$, so that $\mathfrak F_af\notin PW_L$. On the other hand, if $f(a)=0$, from the Hadamard factorization of $f$ we see that $\mathfrak F_af$ has the effect of replacing the zero at $z=a$ by a zero at $z=\bar a$, and that $\mathfrak F_af$ is still holomorphic. From the Paley-Wiener theorem, we conclude that $\mathfrak F_af\in PW_L$. However, when $f(a)\ne 0$, $\mathfrak F_af$ is still holomorphic on a strip around the real axis. More precisely:
\begin{lemma}
Let $f\in PW_L$ and let $a\in\mathbb{C}$ such that $\operatorname{Im}a>0$ and $f(a)\ne 0$. Then the operator $\mathfrak F_a: PW_L\longrightarrow H^2_\tau(\mathcal S_\lambda)$ is bounded with
$$
||\mathfrak F_af||_{H_\tau^2(\mathcal S_\lambda)}<\left[1+\dfrac{2\operatorname{Im}a}{\operatorname{Im}a-\lambda}\right]e^{\frac{2(\operatorname{Im}a)^2}{|a|^2}}e^{L\lambda}||f||_2
$$
where $\lambda < \operatorname{Im}a$. In particular, $\widehat{\mathfrak F_a f}\in L^2(\mathbb{R}, e^{2\lambda|x|}\mathrm{d} x)$.
\end{lemma}
\begin{proof}
For $x,y\in\mathbb{R}$ with $|y|<\lambda<\operatorname{Im}a$, observe that if $z=x+iy$,
\begin{align*}
\bigg|\dfrac{(x+iy)-\bar a}{(x+iy)- a}\bigg|&=\bigg|1-\dfrac{2i\operatorname{Im}a}{z-a}\bigg|\\
&\leq 1+\dfrac{2\operatorname{Im}a}{\sqrt{(x-\operatorname{Re}a)^2+(y-\operatorname{Im}a)^2}}\\
&< 1+\dfrac{2\operatorname{Im}a}{|y-\operatorname{Im}a|}\\
&< 1+\dfrac{2\operatorname{Im}a}{\operatorname{Im}a-\lambda}
\end{align*}
and
$$
\bigg|\dfrac{e^{(x+iy)/\bar{a}}}{e^{(x+iy)/a}}\bigg|=e^{-\frac{2\operatorname{Im}a}{|a|^2}y}<e^{\frac{2(\operatorname{Im}a)^2}{|a|^2}}.
$$
Moreover, since $\widehat{\tau_{-iy}f}(\xi)=\widehat f(\xi)e^{\xi y}$ for $y\in\mathbb{R}$ such that $|y|<\lambda<\operatorname{Im}a$ and for $\xi\in\mathbb{R}$, Parseval's identity implies that
$$
\int_\mathbb{R} |f(x+iy)|^2\,\mathrm{d} x=\int_{-L}^{L} |\widehat f(\xi)|^2e^{2\xi y}\,\mathrm{d} \xi\leq e^{2L\lambda}||f||_2^2.
$$
Thus, if $y\in\mathbb{R}$ such that $|y|<\lambda<\operatorname{Im}a$, we have
\begin{align*}
\int_\mathbb{R} |(\mathfrak{F}_af)(x+iy)|^2\mathrm{d} x &= \int_\mathbb{R} \bigg| \dfrac{(x+iy)-\bar a}{(x+iy)- a}\cdot \dfrac{e^{(x+iy)/\bar{a}}}{e^{(x+iy)/a}}\bigg|^2 |f(x+iy)|^2\,\mathrm{d} x\\
&< \left[1+\dfrac{2\operatorname{Im}a}{\operatorname{Im}a-\lambda}\right]^2e^{\frac{4(\operatorname{Im}a)^2}{|a|^2}}\int_\mathbb{R} |f(x+iy)|^2\,\mathrm{d} x\\
&<\left[1+\dfrac{2\operatorname{Im}a}{\operatorname{Im}a-\lambda}\right]^2e^{\frac{4(\operatorname{Im}a)^2}{|a|^2}}e^{2L\lambda}||f||_2^2<+\infty.
\end{align*}
Taking the supremum for all $y$ such that $|y|<\lambda<\operatorname{Im}a$ yields the first result. The second result then follows from the Paley-Wiener theorem on the strip.
\end{proof}
We now compute the explicit form of the Fourier transform of $\mathfrak F_af$ which we will need for our results.
\begin{lemma}
Let $f\in PW_L$ for some $L>0$ and let $a\in\mathbb{C}$ such that $\operatorname{Im}a>0$. For all $x\in\mathbb{R}$,
\begin{equation}
\label{eq:ZFa-hat}
(\widehat{\zf_af})(x)=\dfrac{a}{\bar{a}}\left[\widehat f(x-\beta_a)-(2\operatorname{Im}a)\int_0^{+\infty} e^{ias}\widehat f(x-\beta_a+s)\,\mathrm{d} s\right]
\end{equation}
where $\beta_a=\dfrac{2\operatorname{Im}a}{|a|^2}$.
\end{lemma}
\begin{proof}
Consider the function $\gamma_a$ defined by
\begin{equation}
\label{eq:gamma}
\gamma_a(x)=-\dfrac{\sqrt{2\pi}i}{2}\left[1+\sgn(x)\right]e^{iax},\qquad x\in\mathbb{R}.
\end{equation}
It is easy to check that $\gamma_a\in L^1(\mathbb{R})$ with $||\gamma_a||_1=\dfrac{\sqrt{2\pi}}{\operatorname{Im}a}$. Then, for all $w\in\mathbb{R}$,
\begin{align*}
\widehat\gamma_a (w)&=\dfrac{1}{\sqrt{2\pi}}\int_\mathbb{R} -\dfrac{\sqrt{2\pi}i}{2}\left[1+\sgn(x)\right]e^{i(a-w)x}\,\mathrm{d} x\\
&=-i\int_0^{+\infty} e^{i(a-w)x}\,\mathrm{d} x\\
&=\dfrac{1}{a-w}.
\end{align*}
Now, for all $x\in \mathbb{R}$, write \eqref{eq:ZFa} as
\begin{align*}
(\mathfrak F_af)(x)&=\dfrac{1-x/\bar{a}}{1-x/a}\cdot e^{i\beta_ax}f(x)\\
&=\dfrac{a}{\bar a}\left[1-\dfrac{a-\bar a}{a-x}\right]e^{i\beta_ax}f(x)\\
&=\dfrac{a}{\bar a}\left[e^{i\beta_ax}f(x)-(2i\operatorname{Im}a)e^{i\beta_ax}f(x)\widehat{\gamma}_a(x)\right].
\end{align*}
Then
\begin{equation}
\label{eq:ga-comp}
(\widehat{\zf_af})(x)=\dfrac{a}{\bar{a}}\left[\tau_{\beta_a}\widehat{f}(x)-(2i\operatorname{Im}a)\left(R\gamma_a*\tau_{\beta_a}\widehat f\right)(x)\right],\qquad x\in\mathbb{R}.
\end{equation}
Expanding this equation, we get
\begin{align*}
(\widehat{\zf_af})(x)&=\dfrac{a}{\bar{a}}\left[\widehat f(x-\beta_a)-\dfrac{2i\operatorname{Im}a}{\sqrt{2\pi}}\int_\mathbb{R} \gamma_a(-s)\widehat f(x-\beta_a-s)\,\mathrm{d} s\right]\\
&=\dfrac{a}{\bar{a}}\left[\widehat f(x-\beta_a)-\dfrac{2i\operatorname{Im}a}{\sqrt{2\pi}}\int_\mathbb{R} \gamma_a(s)\widehat f(x-\beta_a+s)\,\mathrm{d} s\right]\\
&=\dfrac{a}{\bar{a}}\left[\widehat f(x-\beta_a)-2\operatorname{Im}a\int_0^{+\infty} e^{ias}\widehat f(x-\beta_a+s)\,\mathrm{d} s\right]
\end{align*}
as claimed.
\end{proof}
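As a quick sanity check on the computation of $\widehat{\gamma}_a$ above, the half-line integral can be evaluated by quadrature after truncation, since $\operatorname{Im}a>0$ makes the integrand decay exponentially; the test values below are arbitrary:
\begin{verbatim}
# Quadrature check of hat{gamma}_a(w) = 1/(a - w).
import numpy as np

a, w = 0.3 + 0.9j, 1.7                 # arbitrary test values
x = np.linspace(0.0, 80.0, 200001)     # truncated [0, +infinity)
dx = x[1] - x[0]
integrand = -1j*np.exp(1j*(a - w)*x)
approx = 0.5*dx*np.sum(integrand[1:] + integrand[:-1])  # trapezoid
print(abs(approx - 1.0/(a - w)))       # small (quadrature error)
\end{verbatim}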
\subsection{Stability between $\zf_af$ and $f$}
In this section, we will give an estimate of
$$
\inf_{|c|=1}\|f-c\zf_af\|_2.
$$
This is a classical measure of stability for the phase retrieval problem. We are here investigating
how far zero-flipping drives us from the original function (up to the trivial solution $f\to cf$).
Recall that $\mathfrak F_af$ is bandlimited only if $f(a)=0$; this will, however, play no role here, that is, we allow the solution $\mathfrak{F}_af$ to be wide-banded. This can also be considered as a simpler case of \eqref{eq:stabmeasure}, where $b$ is real so that $\zf_bf=f$. Our result here is the following:
\begin{theorem}
\label{thm:stab1}
Let $f\in PW_L$ for some $L>0$. Let $a\in\mathbb{C}$ such that $\operatorname{Im}a>0$ and $\beta_a=\displaystyle\frac{2\operatorname{Im}a}{|a|^2}$. Then
\begin{align}
\label{eq:stab1-1}
\bigg|\inf_{|c|=1}||\zf_af-cf||_2^2-2||f||_2^2\bigg|\leq 30 \bigl(L\operatorname{Im}a\bigr)||f||_2^2,&\qquad\text{if }\beta_a> 2L
\intertext{and}
\label{eq:stab1-2}
\inf_{|c|=1}||\zf_af-cf||_2^2\leq 2\,\omega_2(\widehat f;\beta_a)||f||_2+8\sqrt{L\operatorname{Im}a}\,||f||_2^2,&\qquad\text{if }\beta_a\leq 2L.
\end{align}
\end{theorem}
\begin{remark}
The actual bounds are a bit more precise, see \eqref{eq:c1a} and \eqref{eq:c2a} in the proof below. Note that $\beta_a=\dfrac{2\operatorname{Im}a}{|a|^2}=2L$ is equivalent to $(\operatorname{Re}a)^2+\left(\operatorname{Im}a-\tfrac{1}{2L}\right)^2=\tfrac{1}{4L^2}$ which represents a circle, with a hole at the origin, illustrated below (in blue).
\begin{center}
\vspace{3mm}
\begin{tikzpicture}
\fill[fill=red!20!white] (-5,0) -- (5,0) -- (5,4) -- (-5,4) -- cycle;
\fill[fill=gray!20!white] (0,1.5) circle (1.5);
\draw[draw=blue,very thick] (0,1.5) circle (1.5);
\draw (-5,0) -- (5,0);
\draw (0,0) -- (0,4);
\draw[blue,thick] (0,0) circle (2pt);
\filldraw[black,thick] (0,1.5) circle (2pt);
\filldraw[black,thick] (-4,0.3) circle (2pt);
\filldraw[black,thick] (-2.4,2.4) circle (2pt);
\filldraw[black,thick] (0.2,0.15) circle (2pt);
\coordinate[label={0:{$a_1$}}] (theta) at (-4,0.3);
\coordinate[label={0:{$a_2$}}] (theta) at (-2.4,2.4);
\coordinate[label={45:{$a_3$}}] (theta) at (0.2,0.15);
\coordinate[label={0:{$\frac{1}{2L}i$}}] (theta) at (0,1.5);
\coordinate[label={[text=blue]0:{$\beta_a=2L$}}] (theta) at (1.5,1.5);
\coordinate[label={90:{$\operatorname{Re}a$}}] (theta) at (4.6,0);
\coordinate[label={0:{$\operatorname{Im}a$}}] (theta) at (0,3.7);
\end{tikzpicture}
\\
\textnormal{The stability region}
\vspace{3mm}
\end{center}
From Theorem \ref{thm:stab1}, we have stability when $\beta_a\leq 2L$ (in red), whereas we have instability when $\beta_a> 2L$ (in gray). Consider $a_1,a_2,a_3\in \mathbb{C}$ with positive imaginary parts, as plotted in the figure above. Note that the zero-flipping is `more stable' at $a_1$ as it is farther from the origin and has a smaller imaginary part than that of $a_2$, and the zero-flipping at $a_3$ is unstable as it is very close to the origin.
This result says that zero-flipping becomes unstable (for this criteria)
when $a$ approaches the real axis inside this disc. On the other hand, if $a$
approaches the real line while staying away from the origin, we have stability.
Indeed, if $|a|\geq \alpha>0$ and $\operatorname{Im}a\longrightarrow 0$ then $\beta_a\longrightarrow 0$ so that
$\omega_2(\widehat f;\beta_a)\longrightarrow 0$.
\end{remark}
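The dichotomy described in the remark can also be observed numerically. Using \eqref{eq:inf} below and the fact that, by Parseval's identity, $\langle\,\widehat{\zf_af},\widehat{f}\,\rangle = \langle\,\zf_af,f\,\rangle$ can be computed directly on the real line, the following sketch evaluates the stability measure for $f(x)=\sin(Lx)/x \in PW_L$; all numerical parameters are ad hoc:
\begin{verbatim}
# Numerically mapping a -> inf_{|c|=1} ||f - c F_a f||_2^2.
import numpy as np

L = 1.0
x = np.linspace(-4000.0, 4000.0, 2000001)
dx = x[1] - x[0]
f = np.where(x == 0.0, L, np.sin(L*x)/np.where(x == 0.0, 1.0, x))
norm2 = np.sum(np.abs(f)**2)*dx        # ||f||_2^2 (~ pi L)

def defect(a):
    factor = (1 - x/np.conj(a))/(1 - x/a)*np.exp(x/np.conj(a) - x/a)
    inner = np.sum(factor*np.abs(f)**2)*dx   # <F_a f, f>
    return 2.0*(norm2 - abs(inner))

for a in (0.05j, 0.5 + 0.05j, 5.0 + 0.05j):
    # first point lies inside the instability disc; expect a large
    # value there and much smaller values for the other two points
    print(a, defect(a))
\end{verbatim}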
\begin{proof}[Proof of Theorem \ref{thm:stab1}]
Observe first that for $|c|=1$,
\begin{align*}
||\zf_af-cf||^2_2&=||\widehat{\zf_af}-c\widehat f||^2_2\\
&=||\widehat{f}||_2^2-2\bar{c}\operatorname{Re}\langle\,\widehat{\zf_af},\widehat{f}\,\rangle+||\widehat{\zf_af}||_2^2\\
&=2\left[||f||_2^2-\bar{c}\operatorname{Re}\langle\,\widehat{\zf_af},\widehat{f}\,\rangle\right],
\end{align*}
and thus
\begin{equation}
\label{eq:inf}
\inf_{|c|=1}||
\zf_af-cf||_2^2=2\left[||f||^2_2
-\big\vert\langle\,\widehat{\zf_af},\widehat{f}\,\rangle\big\vert\right].
\end{equation}
For our calculations, we notice from \eqref{eq:ZFa-hat} that
\begin{equation}
\label{eq:ip}
\dfrac{\bar a}{a}\langle\,\widehat{\zf_af},\widehat{f}\,\rangle=\sqrt{2\pi}\left(R\widehat{f}*\overline{\widehat{f}\,}\,\right)(\beta_a)-2\operatorname{Im}a\int_{-L}^L \int_0^{+\infty}e^{ias}\widehat f(x+s-\beta_a)\overline{\widehat f (x})\,\mathrm{d} s\,\mathrm{d} x\,
\end{equation}
with $R\widehat{f}(x)=\widehat f(-x)$ for $x\in\mathbb{R}$.
\medskip
\noindent \textit{Case 1} $\beta_a>2L$.
\smallskip
Observe that if $\beta_a>2L$, then
\begin{equation}
\label{eq:zff}
\left(R\widehat{f}*\overline{\widehat{f}\,}\,\right)(\beta_a)=0.
\end{equation}
For the second term, if $\beta_a>2L$, then $x-\beta_a\leq L-\beta_a<-L$ for any $x\in[-L,L]$, thus
\begin{align*}
\Bigg|\int_0^{+\infty}e^{ias}\widehat{f}(x+s-\beta_a)\,ds\Bigg|
&=\Bigg|e^{-ia(x-\beta_a)}\int_{-L}^L e^{iat}\widehat{f}(t)\,\mathrm{d} t\Bigg|\\
&\leq e^{\operatorname{Im}a\,(x-\beta_a+L)}||\widehat{f}||_1\\
&\leq e^{\operatorname{Im}a\,(x-\beta_a+L)}\sqrt{2L}||f||_2
\end{align*}
and so
\begin{align}
\big\vert\langle\,\widehat{\zf_af},\widehat{f}\,\rangle\big\vert&=
\Bigg|2\operatorname{Im}a\int_{-L}^L\int_0^{+\infty}e^{ias}\widehat{f}(x+s-\beta_a)\overline{\widehat f(x)}\,\mathrm{d} s\,\mathrm{d} x\Bigg|\notag\\
&\leq 2\operatorname{Im}a\left[\sqrt{2L}||f||_2\int_{-L}^L|\widehat{f}(x)|e^{\operatorname{Im}a(x-\beta_a+L)}\,\mathrm{d} x\right]\notag\\
&\leq 2\sqrt{2L}\operatorname{Im}a||f||_2^2\left(\int_{-L}^L e^{2\operatorname{Im}a\,(x-\beta_a+L)}\,\mathrm{d} x\right)^{1/2}\notag\\
&=2\sqrt{2L\operatorname{Im}a\cdot\sinh(2L\operatorname{Im}a)}e^{L\operatorname{Im}a }e^{-\beta_a\operatorname{Im}a}||f||_2^2.
\label{eq:c1a}
\end{align}
Note that $\beta_a\operatorname{Im}a=\dfrac{2(\operatorname{Im}a)^2}{|a|^2}\leq 2$ so that $e^{-\beta_a\operatorname{Im}a}$
plays no role and we just bound it by 1. Further, if $\beta_a>2L$ then $L\operatorname{Im}a <1$, and since $\sinh$ is convex, $\sinh(2L\operatorname{Im}a)\leq \sinh(2)\,L\operatorname{Im}a$. As $2\sqrt{2\sinh(2)}e^{1}\leq 15$ we get
$$
2\sqrt{2L\operatorname{Im}a\cdot\sinh(2L\operatorname{Im}a)}e^{L\operatorname{Im}a }\leq 15 L\operatorname{Im}a
$$
and finally $\big\vert\langle\,\widehat{\zf_af},\widehat{f}\,\rangle\big\vert\leq 15 \bigl(L\operatorname{Im}a\bigr)||f||_2^2$.
Together with \eqref{eq:zff}, we see that \eqref{eq:inf} implies \eqref{eq:stab1-1}.
\medskip
\noindent \textit{Case 2} $\beta_a\leq 2L$.
\smallskip
Observe first that
\begin{align}
||f||^2_2-\sqrt{2\pi}\left(R\widehat{f}*\overline{\widehat{f}\,}\,\right)(\beta_a)&\leq
\Big|\sqrt{2\pi}\left(R\widehat{f}*\overline{\widehat{f}\,}\,\right)(\beta_a)-||f||_2^2\Big|\notag\\
&=\Bigg|\int_{-L}^L\overline{\widehat{f}(\xi)}\left(\widehat{f}(\xi-\beta_a)-\widehat f(\xi)\right)\mathrm{d}\xi\Bigg|\notag\\
&\leq ||f||_2||\widehat{f}-\tau_{\beta_a}\widehat{f}||_2\notag\\
&\leq||f||_2\,\omega_2(\widehat{f};\beta_a).
\label{eq:case2-1}
\end{align}
It remains to bound the second term in \eqref{eq:ip}, that is, to show that
\begin{equation*}
\left|2\operatorname{Im}a\int_{-L}^L \int_0^{+\infty}e^{ias}\widehat f(x+s-\beta_a)\,\mathrm{d} s\,\overline{\widehat f (x})\,\mathrm{d} x
\right|\leq C(a)||f||_2^2
\end{equation*}
with $C(a)\longrightarrow 0$ when $\operatorname{Im}a\longrightarrow 0$.
We first want to bound
\begin{equation}
\label{eq:2inner}
\int_0^{+\infty}e^{ias}\widehat{f}(x+s-\beta_a)\,ds=e^{-ia(x-\beta_a)}\int_{x-\beta_a}^L e^{iat}\widehat{f}(t)\,\mathrm{d} t
\end{equation}
when $x\in[-L,L]$.
To do so, assume first that $-L\leq x \leq \beta_a-L$. Then
\begin{align*}
\Bigg|\int_0^{+\infty}e^{ias}\widehat{f}(x+s-\beta_a)\,ds\Bigg|
&\leq e^{\operatorname{Im}a\,(x-\beta_a)}\Bigg|\int_{-L}^L e^{iat}\widehat{f}(t)\,\mathrm{d} t\Bigg|\\
&\leq e^{\operatorname{Im}a\,(x-\beta_a)}\Bigg|\int_{-L}^L e^{-\operatorname{Im}at}|\widehat{f}(t)|\,\mathrm{d} t\Bigg|\\
&\leq e^{\operatorname{Im}a\,(x-\beta_a+L)}\sqrt{2L}||f||_2.
\end{align*}
On the other hand, if $\beta_a-L< x \leq L$, then by Cauchy-Schwarz inequality, we obtain
\begin{align*}
\Bigg|\int_0^{+\infty}e^{ias}\widehat{f}(x+s-\beta_a)\,ds\Bigg|&=\Bigg|e^{-ia(x-\beta_a)}\int_{x-\beta_a}^L e^{iat}\widehat{f}(t)\,\mathrm{d} t\Bigg|\\
&\leq e^{\operatorname{Im}a\,(x-\beta_a)}||f||_2\left[\int_{x-\beta_a}^Le^{-2\operatorname{Im}a\,t}\,\mathrm{d} t\right]^{1/2}\\
&=\dfrac{e^{\operatorname{Im}a\,(x-\beta_a)}||f||_2}{\sqrt{2\operatorname{Im}a}}\left[-e^{-2L\operatorname{Im}a}+e^{-2\operatorname{Im}a\,(x-\beta_a)}\right]^{1/2}\\
&=\dfrac{||f||_2}{\sqrt{2\operatorname{Im}a}}\left[1-e^{2\operatorname{Im}a\,(x-\beta_a-L)}\right]^{1/2}.
\end{align*}
Hence, combining these two bounds, we have
\begin{align}
\Bigg|2\operatorname{Im}a\int_{-L}^L\int_0^{+\infty}e^{ias}\widehat{f}(x+s-&\beta_a)\overline{\widehat f(x)}\,\mathrm{d} s\,\mathrm{d} x\Bigg|\notag\\
&\leq 2\operatorname{Im}a\Bigg[\sqrt{2L}||f||_2\int_{-L}^{\beta_a-L}|\widehat f(x)|e^{\operatorname{Im}a\,(x-\beta_a+L)}\,\mathrm{d} x\notag\\
&\qquad+\dfrac{||f||_2}{\sqrt{2\operatorname{Im}a}}\int_{\beta_a-L}^L|\widehat{f}(x)|\left(1-e^{2\operatorname{Im}a\,(x-\beta_a-L)}\right)^{1/2}\mathrm{d} x\Bigg]\notag\\
&\leq \sqrt{2\operatorname{Im}a}||f||_2^2\Bigg[2\sqrt{L\operatorname{Im}a}\left(\int_{-L}^{\beta_a-L}e^{2\operatorname{Im}a\,(x-\beta_a+L)}\,\mathrm{d} x\right)^{1/2}\notag\\
&\qquad+\left(\int_{\beta_a-L}^L\left(1-e^{2\operatorname{Im}a\,(x-\beta_a-L)}\right)\mathrm{d} x\right)^{1/2}\Bigg]\notag\\
&=\sqrt{2\operatorname{Im}a}||f||_2^2\Bigg[ \sqrt{2L}\left(1-e^{-2\beta_a\operatorname{Im}a}\right)^{1/2}\notag\\
&\qquad+\left(2L-\beta_a-\dfrac{e^{-2\beta_a\operatorname{Im}a}}{2\operatorname{Im}a}(1-e^{2(\beta_a-2L)\operatorname{Im}a})\right)^{1/2}\Bigg].
\label{eq:case2-2}
\end{align}
This gives the desired bound with
\begin{equation}
\small
\label{eq:c2a}
C(a)=\sqrt{2\operatorname{Im}a}\Bigg[ \sqrt{2L}\left(1-e^{-2\beta_a\operatorname{Im}a}\right)^{1/2}+\left(2L-\beta_a-\dfrac{e^{-2\beta_a\operatorname{Im}a}}{2\operatorname{Im}a}(1-e^{2(\beta_a-2L)\operatorname{Im}a})\right)^{1/2}\Bigg]
\end{equation}
and it is easy to see that $0<C(a)\leq4\sqrt{L\operatorname{Im}a}$. Finally, by \eqref{eq:case2-1} and \eqref{eq:case2-2}, we obtain the estimate in \eqref{eq:stab1-2}.
\end{proof}
\subsection{Stability between $\zf_af$ and $\zf_bf$}
In this section, we now compare two nontrivial solutions $\zf_af$ and $c\zf_bf$ of the phase retrieval problem. To do this, we introduce the following stability measure given by
$$
\inf_{|c|=1}||\zf_af-c\zf_bf||_2.
$$
Note that we also allow these solutions to be either bandlimited or wide-banded. By using a similar computation we did to obtain \eqref{eq:inf}, note that
\begin{equation}
\label{eq:inf2}
\inf_{|c|=1}||\zf_af-c
\zf_bf||_2^2=2\left[||f||^2_2-\big\vert\langle\,\widehat{\zf_af},\widehat{\zf_bf}\,\rangle\big\vert\right].
\end{equation}
Before we look at the next stability result, we first prove some technical lemmas which we will need.
\begin{lemma}
\label{lem:techlem1}
Let $f\in L^2(\mathbb{R})$ and let $a,b\in \mathbb{C}$ with $\operatorname{Im}a,\operatorname{Im}b>0$ and $|a-b|\leq \dfrac{|b|}{2}$.
Let $\beta_a=\displaystyle\frac{2\operatorname{Im}a}{|a|^2}$ and
$\beta_b=\displaystyle\frac{2\operatorname{Im}b}{|b|^2}$.
Consider $\gamma_a,\gamma_b\in L^1(\mathbb{R})$ as defined in \eqref{eq:gamma}. Then
$$
||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})||_2
\leq C(a,b)||f||_2+\sqrt{2\pi}\,\omega_2(\widehat{f};\beta_a-\beta_b)
$$
where $C(a,b)\leq \displaystyle 14\frac{|a-b|}{\operatorname{Im}b}$.
\end{lemma}
\begin{remark} The actual value of $C(a,b)$ is a bit more precise and given in \eqref{eq:cab} below.
\end{remark}
\begin{proof}
First, observe that for all $x\geq 0$,
\begin{align*}
\big|\sin(\tfrac{a-b}{2}x)\big|&\leq\big|\sin\left(\operatorname{Re}(\tfrac{a-b}{2}x)\right)\big|\cosh\left(\operatorname{Im}(\tfrac{a-b}{2})x\right)+\big|\sinh\left(\operatorname{Im}(\tfrac{a-b}{2})x\right)\big|\\
&\leq\big|\operatorname{Re}(\tfrac{a-b}{2})\big|x\cdot e^{\frac{\operatorname{Im}(a-b)}{2}x}+\big|\sinh\left(\operatorname{Im}(\tfrac{a-b}{2})x\right)\big|.
\end{align*}
Hence, with this bound, we get
\begin{align*}
||R\gamma_a-R\gamma_b||_1&=\sqrt{2\pi}\int_0^{+\infty}|e^{iax}-e^{ibx}|\, \mathrm{d} x\\
&=\sqrt{2\pi}\int_0^{+\infty}|e^{i\frac{a+b}{2}x}||e^{i\frac{a-b}{2}x}-e^{-i\frac{a-b}{2}x}|\,\mathrm{d} x\\
&=2\sqrt{2\pi}\int_0^{+\infty}e^{-\frac{\operatorname{Im}(a+b)}{2}x}\big|\sin(\tfrac{a-b}{2}x)\big|\,\mathrm{d} x\\
&\leq 2\sqrt{2\pi}\bigg[\int_0^{+\infty}\big|\operatorname{Re}(\tfrac{a-b}{2})\,\big|xe^{-\operatorname{Im}b\,x}\,\mathrm{d} x+\int_0^{+\infty}e^{-\frac{\operatorname{Im}(a+b)}{2}x}\big|\sinh\left(\operatorname{Im}(\tfrac{a-b}{2})x\right)\big|\,\mathrm{d} x\bigg]\\
&=\sqrt{2\pi} \left[\dfrac{|\operatorname{Re}a-\operatorname{Re}b\,|}{(\operatorname{Im}b)^2}+\Big|\dfrac{1}{\operatorname{Im}a}-\dfrac{1}{\operatorname{Im}b}\Big|\right].
\end{align*}
Note also that $|a-b|\leq\dfrac{|b|}{2}$ implies that $\operatorname{Im}a\leq \dfrac{3}{2}\operatorname{Im}b$. Using this, the previous norm estimate, and Young's convolution inequality, we then have
\begin{align*}
||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})&-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})||_2\\
&\leq |\operatorname{Im}a-\operatorname{Im}b\,|\cdot||R\gamma_b*\tau_{\beta_b}\widehat{f}||_2+\operatorname{Im}a\,||(R\gamma_a-R\gamma_b)*\tau_{\beta_b}\widehat{f}||_2\\
&\qquad\qquad+\operatorname{Im}a\,|| R\gamma_a*(\tau_{\beta_a}\widehat{f}-\tau_{\beta_b}\widehat{f})||_2\\
&\leq |\operatorname{Im}a-\operatorname{Im}b\,|\cdot ||R\gamma_b||_1||f||_2+\operatorname{Im}a\,||R\gamma_a-R\gamma_b||_1||f||_2\\
&\qquad\qquad+\operatorname{Im}a\,||R\gamma_a||_1\,\omega_2(\widehat f;\beta_a-\beta_b)\\
&=\sqrt{2\pi}\, \dfrac{|\operatorname{Im}a-\operatorname{Im}b\,|}{\operatorname{Im}b}||f||_2+\operatorname{Im}a\,||R\gamma_a-R\gamma_b||_1||f||_2+\sqrt{2\pi}\,\omega_2(\widehat f;\beta_a-\beta_b)\\
&\leq\left[\sqrt{2\pi}\, \dfrac{|\operatorname{Im}a-\operatorname{Im}b\,|}{\operatorname{Im}b}+\dfrac{3\sqrt{2\pi}}{2}\operatorname{Im}b\left[\dfrac{|\operatorname{Re}a-\operatorname{Re}b\,|}{(\operatorname{Im}b)^2}+\Big|\dfrac{1}{\operatorname{Im}a}-\dfrac{1}{\operatorname{Im}b}\Big|\right]\right]||f||_2\\
&\qquad\qquad+\sqrt{2\pi}\,\omega_2(\widehat f;\beta_a-\beta_b).
\end{align*}
We thus obtain the lemma with
\begin{equation}
\label{eq:cab}
C(a,b)=\sqrt{2\pi}\, \dfrac{|\operatorname{Im}a-\operatorname{Im}b\,|}{\operatorname{Im}b}+\dfrac{3\sqrt{2\pi}}{2}\operatorname{Im}b\left[\dfrac{|\operatorname{Re}a-\operatorname{Re}b\,|}{(\operatorname{Im}b)^2}+\Big|\dfrac{1}{\operatorname{Im}a}-\dfrac{1}{\operatorname{Im}b}\Big|\right].
\end{equation}
Note that if $|a-b|\leq\dfrac{|b|}{2}$ then $\operatorname{Im}a\geq\dfrac{1}{2}\operatorname{Im}b$ thus
$$
\Big|\dfrac{1}{\operatorname{Im}a}-\dfrac{1}{\operatorname{Im}b}\Big|
\leq 2\,\dfrac{|\operatorname{Im}a-\operatorname{Im}b\,|}{(\operatorname{Im}b)^2}
$$
from which the bound $C(a,b)\leq \displaystyle 14\frac{|a-b|}{\operatorname{Im}b}$ immediately follows.
\end{proof}
\begin{lemma}
\label{lem:techlem2}
Let $f\in PW_L$ and let $b\in \mathbb{C}$ with $\operatorname{Im}b>0$ and $\beta_b=\dfrac{2\operatorname{Im}b}{|b|^2}$. Then
\begin{equation*}
\left[\int_\mathbb{R}\left(e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y\right)^2\mathrm{d} x\right]^{1/2}\leq ||f||_2\,C(b)
\end{equation*}
with
\begin{equation*}
C(b)=\dfrac{1}{\sqrt{2\operatorname{Im}b}}\left[2L+1+\dfrac{e^{-4L\operatorname{Im}b}-1}{2\operatorname{Im}b}\right]^{1/2}.
\end{equation*}
\end{lemma}
\begin{proof}
Firstly, if $x\geq L+\beta_b$, then
$$
\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y=0.
$$
Secondly, if $-L+\beta_b\leq x \leq L+\beta_b$, Cauchy-Schwarz inequality implies that
\begin{align*}
e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y&\leq e^{\operatorname{Im}b\,(x-\beta_b)} ||f||_2\left(\int_{x-\beta_b}^{L} e^{-2\operatorname{Im}b\,y}\,\mathrm{d} y\right)^{1/2}\\
&=e^{\operatorname{Im}b\,(x-\beta_b)}||f||_2\left(\dfrac{e^{-2\operatorname{Im}b\,(x-\beta_b)}-e^{-2\operatorname{Im}b\,L}}{2\operatorname{Im}b} \right)^{1/2}\\
&=\dfrac{||f||_2}{\sqrt{2\operatorname{Im}b}}\left(1-e^{2\operatorname{Im}b\,x}e^{-2\operatorname{Im}b\,(L+\beta_b)}\right)^{1/2}
\end{align*}
and so
\begin{align*}
\int_{-L+\beta_b}^{L+\beta_b} \bigg(e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y\bigg)^2\mathrm{d} x&=\dfrac{||f||_2^2}{2\operatorname{Im}b}\int_{-L+\beta_b}^{L+\beta_b}\left(1-e^{2\operatorname{Im}b\,x}e^{-2\operatorname{Im}b\,(L+\beta_b)}\right)\mathrm{d} x\\
&=\dfrac{||f||_2^2}{2\operatorname{Im}b}\left[2L+\dfrac{e^{-4L\operatorname{Im}b}-1}{2\operatorname{Im}b}\right].
\end{align*}
Lastly, if $x\leq -L+\beta_b$,
\begin{align*}
e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y&=e^{\operatorname{Im}b\,(x-\beta_b)}\int_{-L}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y\\
&\leq e^{2\operatorname{Im}b\,(x-\beta_b)}e^{2\operatorname{Im}b\,L}||f||_2^2
\end{align*}
and thus,
\begin{align*}
\int_{-\infty}^{-L+\beta_b} \bigg(e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y\bigg)^2\mathrm{d} x&\leq ||f||_2^2\,e^{2\operatorname{Im}b\,(L-\beta_b)} \int_{-\infty}^{-L+\beta_b} e^{2\operatorname{Im}b\,x}\,\mathrm{d} x\\
&=\dfrac{||f||_2^2}{2\operatorname{Im}b}.
\end{align*}
Combining all these cases, we obtain
\begin{equation*}
\int_\mathbb{R} \left(e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}b\,y}|\widehat{f}(y)|\,\mathrm{d} y\right)^2\mathrm{d} x\leq ||f||_2^2\,C(b)^2
\end{equation*}
where
$$
C(b)=\dfrac{1}{\sqrt{2\operatorname{Im}b}}\left[2L+1+\dfrac{e^{-4L\operatorname{Im}b}-1}{2\operatorname{Im}b}\right]^{1/2}
$$
as announced.
\end{proof}
With these lemmas, we now state and prove our next stability result.
\begin{theorem}
\label{thm:stab2}
Let $f\in PW_L$ for some $L>0$. Let $a,b\in\mathbb{C}$ such that $\operatorname{Im}a,\operatorname{Im}b>0$, and $|a-b|\leq \dfrac{|b|}{2}$. Let $\beta_a=\displaystyle\frac{2\operatorname{Im}a}{|a|^2}$ and
$\beta_b=\displaystyle\frac{2\operatorname{Im}b}{|b|^2}$. Then
$$
\inf_{|c|=1}||\zf_af-c\zf_bf||_2^2\leq C_1(b)\,\omega_2(\widehat f;\beta_b-\beta_a)||f||_2+C_2(a,b)||f||_2^2
$$
where $C_2(a,b)\longrightarrow 0$ as $a\longrightarrow b$.
\end{theorem}
\begin{remark}
The constants $C_1(b)$ and $C_2(a,b)$ are given in \eqref{eq:k1b}-\eqref{eq:k2b}
and depend on the quantities $C(a,b)$ and $C(b)$ given in Lemmas \ref{lem:techlem1} and \ref{lem:techlem2}, respectively.
\end{remark}
\begin{proof}
We use the formula for $\zf_af$ from \eqref{eq:ga-comp}. Write
$$
\dfrac{\bar a b}{a\bar b}\langle\,\widehat{\zf_af},\widehat{\zf_bf}\,\rangle=\int_\mathbb{R}\left(\mathbf{A}_{a,b}(x)+\mathbf{B}_{a,b}(x)+\mathbf{C}_{a,b}(x)+\mathbf{D}_{a,b}(x)\right)\,\mathrm{d} x
$$
where
\begin{align*}
\mathbf{A}_{a,b}(x)&=\tau_{\beta_b}\widehat{f}(x)\overline{\tau_{\beta_a}\widehat{f}(x)}\\
\mathbf{B}_{a.b}(x)&=-\tau_{\beta_b}\widehat{f}(x)\overline{(2i\operatorname{Im}a)(R\gamma_a*\tau_{\beta_a}\widehat{f})(x)}\\
\mathbf{C}_{a,b}(x)&=-\overline{\tau_{\beta_a}\widehat{f}(x)}{(2i\operatorname{Im}b)(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)}\\
\mathbf{D}_{a,b}(x)&=(4\operatorname{Im}a\operatorname{Im}b){(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)}\overline{(R\gamma_a*\tau_{\beta_a}\widehat{f})(x)}
\end{align*}
for all $x\in\mathbb{R}$. Since $\int_\mathbb{R} \mathbf{A}_{b,b}(x)\,\mathrm{d} x=||f||_2^2$ and
$$
||f||_2^2=\langle\,\widehat{\zf_bf},\widehat{\zf_bf}\,\rangle=\int_\mathbb{R}\left(\mathbf{A}_{b,b}(x)+\mathbf{B}_{b,b}(x)+\mathbf{C}_{b,b}(x)+\mathbf{D}_{b,b}(x)\right)\,\mathrm{d} x,
$$
we have
$$
\int_\mathbb{R}\left(\mathbf{B}_{b,b}(x)+\mathbf{C}_{b,b}(x)+\mathbf{D}_{b,b}(x)\right)\,\mathrm{d} x=0.
$$
Hence,
\begin{equation}
\small
\label{eq:thm2-4terms}
\dfrac{\bar a b}{a\bar b}\langle\,\widehat{\zf_af},\widehat{\zf_bf}\,\rangle=\int_\mathbb{R}\Big[\mathbf{A}_{a,b}(x)+\left(\mathbf{B}_{a,b}-\mathbf{B}_{b,b}\right)(x)+\left(\mathbf{C}_{a,b}-\mathbf{C}_{b,b}\right)(x)+\left(\mathbf{D}_{a,b}-\mathbf{D}_{b,b}\right)(x)\Big]\,\mathrm{d} x.
\end{equation}
We will show the result by estimating each term of this integral.
We first look at $\mathbf{A}_{a,b}$. Observe that
\begin{align*}
||f||_2^2-\int_\mathbb{R} \mathbf{A}_{a,b}(x)\,\mathrm{d} x&\leq \bigg|\int_\mathbb{R} \mathbf{A}_{a,b}(x)\,\mathrm{d} x-||f||_2^2\,\bigg|\\
&\leq \int_\mathbb{R} |\widehat{f}(x-\beta_b)||\tau_{\beta_a}\widehat{f}(x)-\tau_{\beta_b}\widehat{f}(x)|\,\mathrm{d} x\\
&\leq ||f||_2||\tau_{\beta_a}\widehat{f}-\tau_{\beta_b}\widehat{f}||_2\\
&\leq ||f||_2\,\omega_2(\widehat f;\beta_a-\beta_b).
\end{align*}
For $\mathbf{B}_{a,b}-\mathbf{B}_{b,b}$, we use the bounds from Lemma \ref{lem:techlem1} to obtain
\begin{align*}
\int_\mathbb{R} |(\mathbf{B}_{a,b}-\mathbf{B}_{b,b})(x)|\,\mathrm{d} x&=2\int_\mathbb{R} |\tau_{\beta_b}\widehat f(x)||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})(x)-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)|\,\mathrm{d} x\\
&\leq 2||f||_2||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})||_2\\
&\leq 2C(a,b)||f||_2^2+2\sqrt{2\pi}\,\omega_2(\widehat{f};\beta_a-\beta_b)||f||_2.
\end{align*}
Next, we use the bounds from Lemma \ref{lem:techlem2} so that
\begin{align*}
\int_\mathbb{R} |(\mathbf{C}_{a,b}&-\mathbf{C}_{b,b})(x)|\,\mathrm{d} x\\
&=2\operatorname{Im}a\int_\mathbb{R} |\tau_{\beta_a}\widehat{f}(x)-\tau_{\beta_b}\widehat{f}(x)||(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)|\,\mathrm{d} x\\
&=2\operatorname{Im}a\int_\mathbb{R} |\tau_{\beta_a}\widehat{f}(x)-\tau_{\beta_b}\widehat{f}(x)|\bigg|\int_0^{+\infty}e^{ibs}\widehat{f}(x-\beta_b+s)\,\mathrm{d} s\bigg|\,\mathrm{d} x\\
&=2\operatorname{Im}a\int_\mathbb{R} |\tau_{\beta_a}\widehat{f}(x)-\tau_{\beta_b}\widehat{f}(x)|\bigg|e^{-ib(x-\beta_b)}\int_{x-\beta_b}^{L}e^{iby}\widehat{f}(y)\,\mathrm{d} y\bigg|\,\mathrm{d} x\\
&\leq 2\operatorname{Im}a\int_\mathbb{R} |\tau_{\beta_a}\widehat{f}(x)-\tau_{\beta_b}\widehat{f}(x)|\left[e^{\operatorname{Im}b\,(x-\beta_b)}\int_{x-\beta_b}^{L}e^{-\operatorname{Im}by}|\widehat{f}(y)|\,\mathrm{d} y\right]\,\mathrm{d} x\\
&\leq 2\operatorname{Im}b\,C(b)\cdot\omega_2(\widehat f;\beta_a-\beta_b)||f||_2.
\end{align*}
For the last term, the bounds from Lemmas \ref{lem:techlem1} and \ref{lem:techlem2} imply that
\begin{align*}
\int_\mathbb{R} |(\mathbf{D}_{a,b}&-\mathbf{D}_{b,b})(x)|\,\mathrm{d} x\\
&=4\operatorname{Im}b\int_\mathbb{R}|(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})(x)-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})(x)|\,\mathrm{d} x\\
&\leq 4\operatorname{Im}b \,C(b)||f||_2||\operatorname{Im}a\,(R\gamma_a*\tau_{\beta_a}\widehat{f})-\operatorname{Im}b\,(R\gamma_b*\tau_{\beta_b}\widehat{f})||_2\\
&\leq 4\operatorname{Im}b\,C(b)C(a,b)||f||^2_2+4\sqrt{2\pi}\operatorname{Im}b\, C(b)\cdot\omega_2(\widehat f;\beta_a-\beta_b)||f||_2.
\end{align*}
Combining these three estimates from above, we get
\begin{align*}
\bigg|\int_\mathbb{R}\Big[\left(\mathbf{B}_{a,b}-\mathbf{B}_{b,b}\right)&(x)+\left(\mathbf{C}_{a,b}-\mathbf{C}_{b,b}\right)(x)+\left(\mathbf{D}_{a,b}-\mathbf{D}_{b,b}\right)(x)\Big]\,\mathrm{d} x\bigg|\\
&\leq\int_\mathbb{R}\Big[|\left(\mathbf{B}_{a,b}-\mathbf{B}_{b,b}\right)(x)|+|\left(\mathbf{C}_{a,b}-\mathbf{C}_{b,b}\right)(x)|+|\left(\mathbf{D}_{a,b}-\mathbf{D}_{b,b}\right)(x)|\Big]\,\mathrm{d} x\\
&\leq\bigg[2\sqrt{2\pi}+(2+4\sqrt{2\pi})\operatorname{Im}b\,C(b)\bigg]\omega_2(\widehat f;\beta_a-\beta_b)||f||_2\\
&\qquad\qquad+\bigg[2C(a,b)+4\operatorname{Im}b\,C(b)C(a,b)\bigg]||f||_2^2.
\end{align*}
Finally, from \eqref{eq:inf2}, we obtain
\begin{align*}
\inf_{|c|=1}||\zf_af-c\zf_bf||_2^2&\leq 2\Big|\dfrac{\bar a b}{a\bar b}\langle\,\widehat{\zf_af},\widehat{\zf_bf}\,\rangle-||f||_2^2\Big|\\
&\leq \bigg[2+2\sqrt{2\pi}+(2+4\sqrt{2\pi})\operatorname{Im}b\,C(b)\bigg]\omega_2(\widehat f;\beta_a-\beta_b)||f||_2\\
&\qquad\qquad+\bigg[2C(a,b)+4\operatorname{Im}b\,C(b)C(a,b)\bigg]||f||_2^2.
\end{align*}
Setting
\begin{equation}
\label{eq:k1b}
C_1(b)=2+2\sqrt{2\pi}+(2+4\sqrt{2\pi})\operatorname{Im}b\,C(b)
\end{equation}
and
\begin{equation}
\label{eq:k2b}
C_2(a,b)=2C(a,b)+4\operatorname{Im}b\,C(b)C(a,b),
\end{equation}
so that $C_2(a,b)\longrightarrow 0$ as $a\longrightarrow b$, we obtain the theorem.
\end{proof}
In the following corollary, we see that if a false complex zero $a$ gets close to a genuine complex zero $b$, then the corresponding wide-banded solution $\zf_af$ gets close to a genuine solution in the Paley-Wiener class.
\begin{corollary}
Let $f\in PW_L$ for some $L>0$. Fix $b\in\mathbb{C}$, a simple zero of $f$ with $\operatorname{Im}b>0$. Suppose $a\in \mathbb{C}$ with $\operatorname{Im}a> 0$, $f(a)\ne 0$ and $|a-b|\leq \dfrac{|b|}{2}$. Then
$$
\inf_{|c|=1}||\zf_af-c\zf_bf||_2^2\longrightarrow 0\text{ as }a\longrightarrow b.
$$
\end{corollary}
\begin{remark}
Since zeros are isolated, $f$ does not vanish on $V\setminus\{b\}$ where $V$ is a neighborhood of $b$. Without loss of generality, $V\subset \left\{a\in\mathbb{C}: |a-b|\leq \dfrac{|b|}{2}\right\}$.
\end{remark}
\section*{Acknowledgements}
The research of the second author is partially supported by the project ANR-18-CE40-0035. The third author is supported by the CHED-PhilFrance scholarship from Campus France and the Commission of Higher Education (CHED), Philippines.
\section*{Data availability}
All data generated or analysed during this study are included in this published article.
\IfFileExists{\jobname.bbl}{}
{\typeout{}
\typeout{******************************************}
\typeout{** Please run "bibtex \jobname" to obtain}
\typeout{** the bibliography and then re-run LaTeX}
\typeout{** twice to fix the references!}
\typeout{******************************************}
\typeout{}
}
\section{Introduction}
Recommendation systems are a widely studied field of research in the deep learning domain. The most popular recommendation systems are popularity-based, content-based and collaborative-filtering-based systems. In social media settings, traditional recommendation systems lack an understanding of relationships and influence. Since it is a common human tendency to buy a product or watch a movie recommended by those whom we trust, it is essential for modern deep learning based recommendation systems to incorporate these factors. This need led to the development of GCNN based recommendation systems. GCNNs exploit the graph structure of the data to understand relationships and connections, while retaining the computational power that neural networks offer. Other modern concepts like attention, fusion and temporal modelling are added to GCNNs to make them more adaptable to heterogeneity, global influence and time.
In this paper we conducted an extensive survey of the different types of social-network-based recommendation systems. We used the Scopus database to find the list of papers and followed the PRISMA architecture to annotate them, arriving at a relevant set of 41 papers. Our survey findings mainly focus on the different types of GNN architectures used and on user sociology. To the best of our knowledge, this is the first survey on this particular topic.
\section{Existing Survey and Related Work}
In this section, we summarize the work done on social recommender systems. Current literature can be categorized into six broad categories- Collaborative Filtering, Temporal Recommendation, Social Recommendation, Socio-Temporal Recommendation, Graph Convolutional Networks and Graph Attention Networks.
Collaborative Filtering algorithms are the most traditional method for predicting ratings on items by a given user. Methods like matrix factorization are used to derive user and item-specific latent factors from a given matrix capturing user-item interactions. Several architectures like NeuMF exist which work on using Neural Networks on top of Collaborative Filtering algorithms to capture complex non-linear correlations in the user-item data. There are two primary limitations of this branch of recommender systems- firstly, these recommender systems make use of a static user-item interaction graph which is unable to accommodate for evolving user preferences over time; and secondly, these methods cannot exploit the user’s social connections in terms of recommending items.
Temporal Recommendation approaches use Markov Chains to factor in temporal relations concerning the user, like the user’s past and current preferences. More recent approaches have made use of convolutional, RNN and attention layers to model complex temporal changes. However, these methods also overlook the potential of exploiting social relations of the user to make relevant recommendations, and they are also susceptible to data sparsity and cold start problems, much like the Collaborative Filtering approaches.
Social Recommendation approaches address the limitations of the aforementioned two approaches and make use of the user’s social ties with other users in the network. Social recommenders follow the principle of homophily which states that the socially connected clusters of users are likely to exhibit the same kind of behaviour in these platforms, and that these clusters of users have a certain influence on the fellow users in their corresponding cluster. Furthermore, Socio-Temporal recommendation approaches consider the temporal dependence and the social influence of the user’s item interactions and their social circles by extracting information from the items used by the user’s social connections. This modeling of socio-temporal features has also given rise to session based recommendation approaches which take into account the transitions between items of session sequences to help generate better item recommendations.
With the context of this wide set of approaches used for recommender systems, we work on reviewing the use of Graph Convolutional Networks (GCNs) for social recommendations. The effectiveness of Graph Neural Networks (GNNs) in addressing the data sparsity and cold start problems, while accounting for the user’s social ties, has significantly increased the popularity of GNNs in developing social recommendation algorithms. Given the abundance of existing literature on recommender systems, our emphasis lies on conducting an exhaustive literature survey on the use of Graph Neural Networks for social recommendations. We study the various architectures, applications and the degree of heterogeneity employed in conjunction with Graph Neural Networks in currently existing approaches for social recommender systems.
\section{Survey Methodology}
Following the guidelines set by the PRISMA, the Scopus index was queried to filter for literature containing phrases “social” and “graph” and one of “recommendation” or “recommender” present in the title or the abstract of the literature. Further filters were set to return work published only after 2009 and limited to English language. This query resulted in 1958 pieces of work.
In order to obtain the final list of most relevant papers, an iterative approach of manual review and filter was performed. Before each contributor began to review, a comprehensive and exhaustive discussion was held between all three contributors to discuss, define and agree upon a narrow definition of the concepts and ideas necessary to be present in a paper for it to be selected ahead. These included but were not limited to the use of concepts of graph neural networks focussed towards the application of Social Recommendations. Alternative definitions such as graph convolutional networks were also accepted.
Using this set of predefined guidelines, all three contributors together labelled one batch of 100 papers into three categories: “Yes”, “No” and “Maybe”, where “Yes” represented full confidence of the contributor in the paper being relevant, “Maybe” represented some level of doubt in its relevance needing a second look, and “No” represented full confidence in its irrelevance to the survey after skimming through the title and abstract.
A second round of more detailed discussions took place over papers with differences in markings to align each contributor's understanding of the papers. Once these discussions concluded constructively, another 100 papers were labelled independently and the inter-annotator score was calculated again, resulting in a high score of 0.845, with 11 papers having a unanimous “Yes”, 5 having a majority of contributors agreeing and 1 with only one contributor agreeing; the rest were unanimously marked “No”. Once we found sufficient agreement as represented through the new score, the remaining papers were divided equally amongst the contributors, and each contributor reviewed each paper and marked it as relevant, irrelevant or maybe. Papers marked as maybe were reviewed once again before being given a definitive label. Through this iterative process, we were left with 41 papers relevant to the survey.
\section{Survey Findings}
Due to the lack of any existing surveys exploring the implementation of Graph Neural Networks (GNNs) for the task of social recommendation, we survey the advances made in the architectures of models implemented for this purpose, as well as the degree of involvement of neighboring nodes in making recommendation decisions, which can be labelled as user sociology for short. The following findings can accordingly be broken down into two categories: one for model architecture and one for user sociology. Under each of these, each major subdomain is explored further.
\subsection{Taxonomy of Architecture}
\subsubsection{Attention-based architectures}
\hfill\\
The node and edge representations in a Graph Neural Network are calculated by message passing among the graph nodes, which aggregates features from the neighbors of a node to represent a local graph structure. Prior to the use of attention, traditional GNN based models like Graph Convolutional Networks and GraphSage assumed equal contributions of every neighbor’s influence on the target node. The use of attention layers for social recommendations was first proposed in GraphRec \cite{1} to learn the latent user and item factors. GNNTSR \cite{18} worked with a very similar architecture to GraphRec, adding modules to account for the user's trust based network and the authenticity of the reviews. LSTM based RNNs are employed to compute the attention weights for the GNNs. The attention mechanism was used to capture the heterogeneous influence of user-item interactions and user-user interactions to provide a personalized weight of each user on every corresponding item, while also accounting for the popularity of that item among the strong and weak ties of the user’s first connections. Figure 1 captures the use of attention in GraphRec.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/graphrec.png}
\caption{Architecture of GraphRec \cite{1}}
\end{figure}
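To make the neighbor weighting concrete, the following minimal NumPy sketch (our illustration under simplified assumptions, not the GraphRec implementation; all variable names are ours) aggregates a target node's neighbors with softmax-normalized attention scores produced by a small scoring network.
\begin{verbatim}
import numpy as np

def attention_aggregate(e_i, neighbors, W, v):
    # score each neighbor via a one-hidden-layer scorer applied to
    # the concatenated (target, neighbor) embeddings
    scores = np.array([v @ np.tanh(W @ np.concatenate([e_i, e_j]))
                       for e_j in neighbors])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # softmax attention weights
    return sum(a * e_j for a, e_j in zip(alpha, neighbors))

rng = np.random.default_rng(0)
d = 8
e_i = rng.normal(size=d)
neighbors = [rng.normal(size=d) for _ in range(5)]
W, v = rng.normal(size=(d, 2 * d)), rng.normal(size=d)
print(attention_aggregate(e_i, neighbors, W, v).shape)  # (8,)
\end{verbatim}
Unlike a plain mean over neighbors, the learned weights let influential ties dominate the aggregated representation.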
\subsubsection{Hierarchical architectures}
\hfill\\
Hierarchical architectures follow a general pattern of generating embeddings through message propagation methods among neighbors in order to account for the differential impact each neighbor plays on the recommendation \cite{4,18}. Hierarchy is brought into play by stacking multiple layers of GNNs, each with an attention mechanism which are then combined for final predictions as seen in Figure 2.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/hierarchy.png}
\caption{Architecture of ASR \cite{18}}
\end{figure}
Y. Jiang, H. Ma, Y. Liu et al., in their design of the Attentional Social Recommendation (ASR) model, explain the focus of attention on the social user-user graph and the interaction user-item graph. From the perspective of a user, attention is applied to both kinds of graphs mentioned above, while from the perspective of an item, only the interaction graph is considered for the attention models, representing a dual-level mechanism of attention.
Message propagation allows for injecting information about neighbors into node representations, based on the premise that nodes with higher correlation amongst one another will also be close in a graphical network representation. The works \cite{4}, \cite{20} and \cite{18} each have their own definition of message generation but follow the same basic premise. For message aggregation, a two-level attention mechanism is used by ASR \cite{18}, while PReLU activation functions are used by SR-HGNN \cite{20}.
By creating a hierarchy of multiple stacked GNN layers, high-order connectivity of users and items can be found. For each of the layer, the resultant output is combined together to create a representative vector on which prediction can be performed.
In comparison with GraphRec \cite{1}, ASR goes beyond directly connected neighbors to leverage data more completely, and in comparison with DiffNet++ \cite{14}, ASR saves training cost since DiffNet++ requires multiple multilayer perceptrons for execution. Through experimentation, it is also shown that hierarchical models perform better than single layer sequential models. Intuitively, stacking a larger number of GNN layers improves the performance of the model, but only up to a certain point, beyond which performance begins to drop; a sketch of this stacking is given below.
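A rough NumPy sketch of this stacking (ours; mean-combining the per-layer outputs is one of several combination choices, and the row-normalized adjacency matrix is an assumption):
\begin{verbatim}
import numpy as np

def gnn_layer(E, A_hat, W):
    return np.maximum(A_hat @ E @ W, 0.0)   # ReLU(A_hat E W)

def stacked_embeddings(E0, A_hat, weights):
    outs, E = [], E0
    for W in weights:                       # one GNN layer per level
        E = gnn_layer(E, A_hat, W)
        outs.append(E)
    return np.mean(outs, axis=0)            # combine all layer outputs

rng = np.random.default_rng(1)
n, d, K = 6, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1.0)
E0 = rng.normal(size=(n, d))
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(K)]
print(stacked_embeddings(E0, A_hat, Ws).shape)  # (6, 4)
\end{verbatim}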
\subsubsection{Items as Nodes}
\hfill\\
The approach of considering items as nodes, similar to users, is explored by HeteroGraphRec \cite{4}, DICER \cite{22} and EGFRec \cite{23}, on the premise that the attraction between items is a valuable source of preference information. By including these item preferences, recommendations can be made more accurate. To this end, items are modelled just as users would be, by considering them to be structured entities carrying signals.
The similarities between items can either be implicit such as being interactions between two items or explicit such as item categories. These are important to generate the links between the items. Even though this idea is considered in algorithms based on Collaborative Filtering, GNNs are able to leverage these interconnections much more powerfully.
Essentially, HeteroGraphRec builds its GNNs from two kinds of entities, users and items, and three types of interconnections: user-user, item-item and user-item \cite{3}. By including such connections, an additional dimension is added over which aggregation can be performed, allowing the neural network to learn more adeptly. Owing to the three different kinds of interconnections defined above, three different graphs are also drawn, one for each connection type.
Building on top of the three graphs, the model architecture can be seen to have three main components: aggregators for users, aggregators for items and the final predictor as seen in Figure 3. To find important items and other users for a particular user or item, attention networks are implemented and their results combined. Similarly, by considering items as nodes, user-item attention can be modelled as well. Once all attention networks select the most relevant information, aggregator functions combine them before passing them to the prediction module.
The experiments conducted go on to show that by considering items as nodes in the GNNs, the accuracy of the generated recommendations is boosted.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/item_nodes.png}
\caption{Architecture of HeteroGraphRec \cite{4}}
\end{figure}
\subsubsection{Temporal-based Architectures}
\hfill\\
The architectures of GNNs make use of a temporal component to account for dynamic factors such as the user’s past and current item preferences and the user’s evolving social connections and influence. Previously, Markov chains were used in temporal recommender systems, but their inability to capture long term user history has led to them being replaced by Recurrent Neural Networks (RNNs) based on LSTM components. FuseRec \cite{21} makes use of an LSTM based RNN to find the user temporal embeddings at a time step as a recurrence relation of the general user embeddings at the previous time step and the item embeddings. The temporal component is also used for finding context specific embeddings based on the user’s current preferences, by finding the similarity between user embeddings and the candidate item embeddings. The context specific user-social embeddings are captured in a similar manner by FuseRec. Figure 4 depicts a high level architecture of the FuseRec model. TGRec \cite{13} makes use of this temporal aspect along with an item-item social graph to capture not only the temporal influence of the user’s item interactions and social connections, but also the temporal influence of previous items on the user.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/fuserec.png}
\caption{High level overview of FuseRec \cite{21}}
\end{figure}
EGFRec \cite{23} makes use of an LSTM to model the user’s short term interest from the current session, taking the last hidden state as the session interest representation, over which a long term interest is computed by pooling these session interests. This architecture accounts for both the short term and the long term interest of the user, as sketched below.
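A minimal sketch of this short-/long-term split (ours; EGFRec's actual architecture is more involved): the last hidden state of each session serves as that session's interest, and the long-term interest is a pool over sessions.
\begin{verbatim}
import numpy as np

def session_interest(hidden_states):
    return hidden_states[-1]          # last LSTM hidden state

def long_term_interest(sessions):
    # mean-pooling is our stand-in for the paper's pooling step
    return np.mean([session_interest(h) for h in sessions], axis=0)

rng = np.random.default_rng(0)
sessions = [rng.normal(size=(t, 8)) for t in (3, 5, 4)]
print(long_term_interest(sessions).shape)   # (8,)
\end{verbatim}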
\subsubsection{Fusion-based Architectures}
\hfill\\
This architecture of GNNs makes use of fusion layers. Instead of only the direct neighbors affecting the user, fusion uses a recursive method to model the real network, where higher degree contacts also have an influence. A representative architecture is DiffNet \cite{27}: an influence diffusion neural network based model which simulates this recursive social influence. The layer-wise influence diffusion structure plays a key role: at every iteration $k$, the $k$th layer embedding drives the influence diffusion. In Figure 5, the embedding layer captures the collaborative latent representation, the fusion layer creates a fused embedding for each user, whose output is taken by the layer-wise influence diffusion layers that recursively perform diffusion, and the rating is predicted by the prediction layer. Other diffusion based models like HIDM \cite{2} model the heterogeneous nature of the graph by aggregating information from both the user and item subspaces.
SAGLG \cite{8} combines collaborative embeddings and group embeddings using fusion techniques, while ATGCN \cite{13} uses information fusion to learn implicit trust.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/diffnet.png}
\caption{High level overview of DiffNet \cite{27}}
\end{figure}
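A simplified reading of the layer-wise diffusion (our sketch; the fusion layer is omitted and the row-normalized social matrix is an assumption): each iteration adds the friends' aggregated influence to the current user embeddings.
\begin{verbatim}
import numpy as np

def diffuse(U, S_norm, K):
    for _ in range(K):                # K diffusion layers
        U = U + S_norm @ U            # add aggregated friend influence
    return U

rng = np.random.default_rng(0)
n, d = 6, 4
S = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(S, 0.0)
S_norm = S / np.maximum(S.sum(1, keepdims=True), 1.0)
U = rng.normal(size=(n, d))
print(diffuse(U, S_norm, 2).shape)    # (6, 4)
\end{verbatim}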
\subsubsection{Mutualistic Model based architecture}
\hfill\\
This architecture is based on the concept that a user’s preference for items or locations can easily be influenced by his/her social links, while users with similar interests are likely to build a relationship. The mutualistic model originates from exploring the implicit mutually interactive relationship between two species, and is realized through reinforcement-style modelling instead of the concatenation operations that traditional models use. MGNN \cite{6} is designed for joint friend and item recommendation using a GNN, leveraging the mutual relationship between the two types of user behavior by modelling their mutual-reinforcement relationship from these theoretical concepts.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/mgnn.png}
\caption{High level overview of the MGNN architecture \cite{6}}
\end{figure}
MutualRec \cite{26} first extracts user’s consumption preference embedding and social preference embedding from a spatial attention layer and a spectral attention layer. It then merges those embeddings into a mutualistic attention layer for predicting ratings and link value simultaneously.
\subsection{Taxonomy of User Sociology}
\subsubsection{Heterogeneous Link Ties}
\hfill\\
A major portion of existing research contributes to developing interconnecting links between nodes of varying weights on the basis of the strength of their relationship. Given the works in this direction as published in GraphRec \cite{1}, HIDM \cite{2}, HeteroGraphRec \cite{3}, SoRecGAT \cite{11}, DiffNet++ \cite{14}, ASR \cite{18}, FuseRec \cite{21}, KconvGraphRec \cite{24}, GRAFRANK \cite{25}, SAN \cite{28}, SeFrame \cite{29} it is interesting to understand how and why this is necessary to achieve better results. SoRecGAT \cite{11} develops a novel and comprehensive architecture model that can represent other works as well by combining nodes from different graphs into one unified space.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/hetero.png}
\caption{Part architecture of the SoRecGAT model \cite{11}}
\end{figure}
As seen in Figure 6, representing the part architecture of SoRecGAT, there are two parallel formations of item network graphs and user network graphs. These two networks are then combined to create a network of learnt node representations with different link strengths, on which further computation is performed. Since the heterogeneous graph is built by combining different types of nodes, deciding the weights of the neighbouring nodes is important. In the most primitive setting, user nodes can be linked with other user nodes via their interaction behaviours, as well as with item nodes on the basis of the ratings assigned. To compute the differential influence of neighbours, as mentioned above, some form of attention mechanism is typically deployed.
Through exhaustive experimentation, it was clearly shown that incorporating the influence of neighboring nodes shapes the decisions of the model and can bump up the accuracy of the recommendations it makes.
\subsubsection{Temporal/Dynamic Recommendations in Social Circles}
\hfill\\
Temporal recommendation in the context of user sociology refers to the consideration of the temporal behaviour of the user’s social connections when recommending a certain item to the user. To depict this interaction better, we can refer to Figure 7. In the diagram, all users at the current session pay attention to the previous sessions of their friends: User A at session “t” would be most interested in session “t-1” of User B, User B would be most interested in the previous session of User C, and User C would be most interested in the first session of User A. With this definition, works like TGRec \cite{13}, FuseRec \cite{21}, EGFRec \cite{23}, DICER \cite{22} and GRAFRANK \cite{25} implement their recommendation systems to account for the history of the user’s strong and weak ties.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.5\linewidth]{Figures/temporal-us.png}
\caption{Temporal influence of user’s friends’ interests on the user’s attention \cite{24}}
\end{figure}
\subsubsection{Global Local Preference}
\hfill\\
This user sociology concept captures how a user in a social context has his/her own preference, which carries a higher weight than that of their friends. The user may also take the opinions of different sets of friends, with different weights, for different items. This is modelled by PA-GAN \cite{7}: item aggregation for users captures the local preference, while friend-preference aggregation for users captures the user’s preference and tie strength, which is called the global preference.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth, height=0.3\linewidth]{Figures/pagan.png}
\caption{Local Global Preference using PA-GAN \cite{7}}
\end{figure}
\section{Datasets and Evaluation Metrics}
The most frequently used datasets for GNNs that we encountered in this literature survey were Epinions and Ciao, with 17 and 13 references in our surveyed papers, respectively. The Ciao dataset was crawled by Tang et al. (2012) from the Ciao website, a product review website hosting user reviews for multiple products. This dataset has a total of 1653 users with over 26,190 interactions on 16,862 items. Epinions is another sparse dataset capturing a trust based user network of reviews on items, with over 22,000 users, 296,000 items and 464k interactions between users and items. Essentially, any forum or platform with reviews on items could be used for evaluating GNNs; thus, the literature we surveyed also made use of datasets like Yelp reviews, Movielens, Delicious, Douban and FilmTrust. GRAFRANK \cite{25} took an interesting approach with its selection of datasets and used two large scale datasets from Snapchat, with over 3.1 million users and 286 million edges, and 17.1 million users and 2.36 billion edges respectively, for its friend ranking based model which made use of 79 distinct user features.
The most used evaluation metric that we encountered was Normalized Discounted Cumulative Gain (NDCG@K). It factors in the hit position of any given item and assigns a higher score to items at top positions. Recall@n was another widely used metric, depicting the proportion of correct results in a list of top “n” items. HitRate@n is a metric which computes whether the ground truth item is present in the top “n” ranking of the items. MRR@n was another popular metric, computing the average of the reciprocal ranks of the target items, set to zero if the rank is greater than “n”. Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were two other widely used evaluation metrics encountered in our literature survey.
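For concreteness, minimal single-user implementations of these metrics are sketched below (ours; the function names and the toy ranked list are illustrative).
\begin{verbatim}
import numpy as np

def hit_rate_at_n(ranked, target, n):
    return float(target in ranked[:n])

def recall_at_n(ranked, relevant, n):
    return len(set(ranked[:n]) & set(relevant)) / len(relevant)

def mrr_at_n(ranked, target, n):
    if target in ranked[:n]:
        return 1.0 / (ranked.index(target) + 1)
    return 0.0                         # zero if rank exceeds n

def ndcg_at_k(ranked, relevant, k):
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2)
               for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg else 0.0

ranked = ["b", "a", "d", "c"]
print(ndcg_at_k(ranked, {"a", "c"}, 3), mrr_at_n(ranked, "a", 2))
\end{verbatim}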
\section{Results and Future Research Scope}
\subsection{Results}
After surveying 41 works in the space of social recommendation through graph neural networks, we observe certain directions in which a majority of research is being performed. Out of the 41 papers surveyed, we cite 32 novel papers in this domain, as the other 9 publications did not present significant novelty or contribution in their approach. These are summarized in Table 1 and Table 2.
\begin{table*}
\caption{Taxonomy of Architectures}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Architecture &Models\\
\midrule
Attention based&GraphRec \cite{1}, HIDM \cite{2}, HeteroGraphRec \cite{3}, GHSCF \cite{4}, PAGAN \cite{7}, \\
\texttt{} &DGARec-R \cite{10}, SoRecGAT \cite{11}, TGRec \cite{12}, DiffNet++ \cite{14}, GAT-NSR \cite{16} , \\
\texttt{}&GNNTSR \cite{17},EGFRec \cite{23}, MutualRec \cite{26}, SAN \cite{28}, ASR \cite{18}, SAGLG \cite{8}, \cite{15} \\
\texttt{}&\\
Fusion based&HIDM \cite{2}, SAGLG \cite{8}, ATGCN \cite{13}, FuseRec \cite{21}, \\
\texttt{}&DiffNet \cite{27}, MEGCN \cite{30}, DiffNetLG \cite{31}, MSGCN \cite{19}\\
\texttt{}&\\
Items as Nodes&HeteroGraphRec \cite{3}, ASR \cite{18}, DICER \cite{22}, KconvGraphRec \cite{24}\\
\texttt{}&\\
Hierarchical&GHSCF \cite{4}, ASR \cite{18}, SR-HGNN \cite{20}\\
\texttt{}&\\
Temporal Based&TGRec \cite{12}, FuseRec \cite{21}, EGFRec \cite{23}, GRAFRANK \cite{25}, \cite{15}\\
\texttt{}&\\
Mutualistic based&MGNN \cite{6}, MutualRec \cite{26}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Taxonomy of User Sociology}
\label{tab:sociology}
\begin{tabular}{ccl}
\toprule
User Sociology &Models\\
\midrule
Heterogenous Degree&GraphRec \cite{1}, HIDM \cite{2}, HeteroGraphRec \cite{3}, GNN-SoR \cite{32} \\
&SoRecGAT \cite{11}, DiffNet++ \cite{14}, ASR \cite{18}, FuseRec \cite{21}, \\
&KconvGraphRec \cite{24}, GRAFRANK \cite{25}, SAN \cite{28}, SeFrame \cite{29}\\
&\\
Temporal&DGARec-R \cite{10}, TGRec \cite{12}, DiffNet++ \cite{14}, FuseRec \cite{21}, \\
&DICER \cite{22}, EGFRec \cite{23}, GRAFRANK \cite{25}\\
\texttt{}&\\
Global Preference&PAGAN \cite{7}, ASR \cite{18}, FuseRec \cite{21}, DICER \cite{22}, SAN \cite{28}, DiffNetLG \cite{31}\\
\bottomrule
\end{tabular}
\end{table*}
It comes as no surprise that attention is the most widely used concept in building architectures for this task, since it is essential in selecting the most influential neighbours through which recommendations can be made. It was also observed that most models build on top of one another by incorporating multiple concepts, selecting the best of what each of the base models has to offer. For instance, HeteroGraphRec \cite{3} utilises attention and also architects its model in a hierarchical fashion. ASR \cite{18} can be classified under items as nodes, hierarchical structure and attention, all three. A few works that did not fall under any categorization but are worth recognizing were based on multi-channel hypergraph convolution networks \cite{9} and knowledge graphs \cite{5}, amongst others.
Just as attention plays a crucial role in defining model architectures, when it comes to user sociology, setting heterogeneous strengths for the connections between nodes is the core concept adopted by existing works, followed by incorporating the time factor into the evolution of the structure of the social graphs used.
\subsection{Future Research Scope}
An interesting direction for future research could be to capture different types of relations between users, such as classmates, work colleagues or friends. Capturing these types of relations would open a previously unexplored dimension of personalized recommendations based on the user’s preference patterns. This direction could also shed light on the influence of supposedly weak ties of the users in item recommendation, because a user could be recommended items based on a work colleague’s preferences (a social tie which does not particularly enforce homophily) solely on the grounds that they work together. Another interesting direction would be to compare bidirectional friend association platforms like Facebook with unidirectional platforms like Twitter. To the best of our knowledge, no prior work has been done in this domain.
\section{Conclusion}
In this paper we presented a comprehensive survey of social recommendation based on Graph Convolutional Neural Networks, focusing on the major architectures we found and the user sociology concepts that were widely incorporated. Attention based architectures seem to have immense growth opportunities. More architectures to further handle heterogeneous link strengths could be a good scope for further research, and other dimensions such as applications and sociology concepts like weak ties can also be explored.
\section{Contributions}
Aditya Salian worked on the existing survey and related work, attention architecture subsection, temporal architecture subsection, temporal user sociology subsection, future work and database sections.
Shlok Shah worked on the survey methodology, hierarchical architecture subsection, items as nodes architecture subsection, heterogeneous strength of links user sociology subsection and the results section.
Sivagami Nambi worked on the introduction, fusion architecture subsection, mutualistic architecture subsection, global-preference user sociology subsection and the conclusion.
\bibliographystyle{ACM-Reference-Format}
\section{Taxonomy of Architectures}\label{sec:arch}
In this section, we present the taxonomy of architectures for GNN-based SocialRS\xspace.
Model architectures consist of three key components as shown in Figure~\ref{fig:overview}:
(C1) encoders; (C2) decoders; (C3) loss functions.
In (C1), the encoders represent users and items into the low-dimensional vectors (\textit i.e.,\xspace embeddings) by employing different GNN encoders.
Here, some works exploit additional information of users and/or items (\textit e.g.,\xspace their attributes and groups; refer to Section~\ref{sec:inputs}) to construct more-accurate user and item embeddings.
In (C2), the decoders predict each user's preference on each item via different operations on the user and item embeddings obtained from (C1).
Finally, in (C3), different loss functions are optimized to learn the embeddings in an end-to-end manner.
In the subsequent subsections, we describe each component of GNN-based SocialRS\xspace in detail.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{./Figures/Architecture2.pdf}
\caption{Overview of architectures for GNN-based SocialRS\xspace methods.}
\label{fig:overview}
\end{figure*}
\input{tab-encoders}
\subsection{Encoders}
We group the encoders of GNN-based SocialRS\xspace into 8 categories: graph convolutional network (GCN), lightweight GCN (LightGCN), graph attention neural networks (GANN), heterogeneous GNN (HetGNN), graph recurrent neural networks (GRNN), hypergraph neural networks (HyperGNN), graph autoencoder (GAE), and hyperbolic GNN.
Table~\ref{tab:encoder} shows the taxonomy of encoders used in existing work in detail.
Generally, in (C1) encoders,
most methods represent each user $p_i$ with two types of low-dimensional vectors (\textit i.e.,\xspace embeddings) by employing a GNN encoder: $p_i$'s interaction embedding $\mathbf{u}_i^I$ based on a U-I graph and $p_i$'s social embedding $\mathbf{u}_i^S$ based on a U-U graph. Then, they aggregate them into one embedding $\mathbf{u}_i$ for the corresponding user $p_i$.
In the meantime, they also obtain each item $q_j$'s embedding $\mathbf{v}_j$ via another GNN encoder using a U-I graph.
It should be noted that some works employ only a single GNN encoder to obtain the two embeddings.
In contrast, others use different GNN encoders for the embeddings of different node types (\textit i.e.,\xspace users or items).
For simplicity, however, we here explain the GNN encoders by generalizing them to any node type in the input graph.
\subsubsection{\textbf{GCN}}
Early works~\cite{wu2019diffnet,jin2020megcn,liu2022mpsr,liu2022hosr,zhu2022sinews,seng2021atgcn,shi2022sengr,liu2021sagclg,guo2020gnnsor,xiao2020mgnn} have focused on representing the user and item embeddings using GCN.
Given a node $n_i$ (\textit i.e.,\xspace a user or an item) in the input graph (\textit i.e.,\xspace U-I or U-U graphs), $n_i$'s embedding $\mathbf e_{i}^{(k)}$ in the $k$-th layer is represented based on the embeddings of $n_i$'s neighbors in the $(k-1)$-th layer as follows:
\begin{equation}
\mathbf e_{i}^{(k)} = \sigma(\sum\nolimits_{n_j \in \mathcal{N}_{n_i}} \mathbf{e}_{j}^{(k-1)} \mathbf{W}^{(k)}),
\end{equation}
where $\sigma$ and $\mathbf{W}^{(k)} \in \mathbb{R}^{d \times d}$ denote a non-linear activation function (\textit e.g.,\xspace ReLU) and a trainable transformation matrix, respectively. Also, $\mathcal{N}_{n_i}$ indicates a set of $n_i$'s neighbors in the input graph.
Here, some works take the self-connection of $n_i$ into consideration by aggregating over the set $\mathcal{N}_{n_i} \cup \{n_i\}$.
Most methods simply consider the $n_i$'s embedding in the last $K$-th layer, $\mathbf e_{i}^{(K)}$, as its final embedding $\mathbf z_{i}$.
Another variant is to aggregate $n_i$'s embeddings from all layers, \textit i.e.,\xspace $\mathbf z_{i}=\sum^K_{k=1} \mathbf e_{i}^{(k)}$.
For instance, DiffNet~\cite{wu2019diffnet} obtains a user $p_i$'s social embedding $\mathbf u_{i}^S$ (resp. interaction embedding $\mathbf u_{i}^I$) by performing GCN with $k$-layers (resp. $1$-layer) based on the U-U graph (resp. U-I graph).
For each item $q_j$, it simply obtains $q_j$'s embedding $\mathbf v_{j}$ based on its attributes without using a GNN encoder.
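In matrix form, one such layer can be sketched as follows (a minimal NumPy illustration of the propagation rule above, assuming ReLU as $\sigma$ and an adjacency matrix with self-connections; all names are ours):
\begin{verbatim}
import numpy as np

def gcn_layer(A, E, W):
    # E^{(k)} = ReLU(A E^{(k-1)} W^{(k)})
    return np.maximum(A @ E @ W, 0.0)

rng = np.random.default_rng(0)
A = ((rng.random((5, 5)) < 0.3) | np.eye(5, dtype=bool)).astype(float)
E = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 8)) * 0.1
print(gcn_layer(A, E, W).shape)   # (5, 8)
\end{verbatim}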
\subsubsection{\textbf{LightGCN}} It is well-known that non-linear activation and feature transformation in GCN encoders make the propagation step very complicated for training and scalability~\cite{he20lightgcn,MaoZXLWH21}.
Motivated by this, some works~\cite{liao2022sociallgn,wu2022dcrec,zhen2022apte,wu2022eagcn,zhang2022cgl,yu2021sept,sha2021dsr,tao2022design,li2022idiffnet} have attempted to replace their GCN encoders with lightweight GCN~\cite{he20lightgcn}, \textit i.e.,\xspace
\begin{equation}
\mathbf e_{i}^{(k)} = \sum\nolimits_{n_j \in \mathcal{N}_{n_i}} \mathbf{e}_{j}^{(k-1)}.
\end{equation}
It should be noted that LightGCN~\cite{he20lightgcn} has no non-linear activation function, no feature transformation, and no self-connection.
For instance, DcRec~\cite{wu2022dcrec} obtains each user $p_i$'s social embedding $\mathbf u_{i}^S$ via GCN, whereas obtaining the $p_i$'s interaction embedding $\mathbf u_{i}^I$ and each item $q_j$'s embedding $\mathbf v_{j}$ via the LightGCN encoder.
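The corresponding LightGCN layer then reduces to a bare neighborhood sum (our sketch):
\begin{verbatim}
import numpy as np

def lightgcn_layer(A, E):
    # no activation, no transformation, no self-connection
    return A @ E
\end{verbatim}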
\subsubsection{\textbf{GANN}}
The attention mechanism in graphs originated from the graph attention network (GAT)~\cite{velivckovic2017graph} and has already been successful in many applications, including recommender systems.
Considering different weights from neighbor nodes in the input graph helps focus on important adjacent nodes while filtering out noises during the propagation process~\cite{velivckovic2017graph}.
Therefore, almost all existing works on SocialRS\xspace have leveraged the attention mechanism in their GNN encoders~\cite{fan2019graphrec,wu2020diffnet++,song2021diffnetlg,xiao2021mutualrec,xu2020srhgnn,jiang2021asr,mandal2021gnntsr,mu2019gatnsr,vijaikumar2019sorecgat,hou2021pagan,walker2021soapvae,hoang2021gtn,zheng2021pdarec,liu2022fesog,yu2022esrf,liu2022sohrml,chen2022fbne,li2022disgcn,miao2022melgn,liufu2021sga,chen2022gdsrec,wu2019danser,fan2022graphrecp,xie2022sran,qiao2022tag,du2022sdcrec,tien2020kconvgraphrec,salamat2021heterographrec,jiang2022socialripplenet,bai2020tgnn,yang2022scgrec,xiao2022gsfr,chen2022igrec,fu2021dicer,chen2022igrec,jiang2021san,li2020hidm,leng2022glow,liao2022gman,wei2022hsgnn,song2022mrapr,zhao2021bfhan,zhu2021shgcn,sun2020dgarecr,chen2022ssrgnn,lin2022gnndsr,song2020dream,narang2021fuserec,wei2022sghan,yan2022ssdrec,bi2021ghscf}.
The common intuitions behind their design of the attention mechanism are:
(1) each user's preferences for different items may differ, and
(2) each user's influences on her social friends may differ.
Based on such intuitions, many methods represent a node $n_i$'s embedding in $k$-th layer by attentively aggregating the embeddings of $n_i$'s neighbors in $(k-1)$-th layer as follows:
\begin{equation}
\mathbf e_{i}^{(k)} = \sigma(\sum\nolimits_{n_j \in \mathcal{N}_{n_i}} (\alpha_{ij} \cdot \mathbf{e}_{j}^{(k-1)}) \mathbf{W}^{(k)}),
\end{equation}
where $\alpha_{ij}$ indicates the attention weight of neighbor node $n_j$ w.r.t $n_i$.
Now, we discuss how to compute the attention weights in existing works.
Most methods, including DANSER~\cite{wu2019danser} and SCGRec~\cite{yang2022scgrec}, typically use the concatenation-based graph attention as follows:
\begin{equation}
\alpha_{ij} = \frac{\exp(\text{MLP}[\mathbf e_{i},\mathbf e_{j}])}{\sum\nolimits_{n_k \in \mathcal{N}_{n_i}} \exp(\text{MLP}[\mathbf e_{i},\mathbf e_{k}])}.
\end{equation}
Also, other methods, including DICER~\cite{fu2021dicer} and MEGCN~\cite{jin2020megcn}, use the similarity-based graph attention, which is another popular technique, \textit i.e.,\xspace
\begin{equation}
\alpha_{ij} = \frac{\exp(\text{sim}(\mathbf e_{i}, \mathbf e_{j}))}{\sum\nolimits_{n_k \in \mathcal{N}_{n_i}} \exp(\text{sim}(\mathbf e_{i}, \mathbf e_{k}))},
\end{equation}
where sim() denotes a similarity function such as cosine similarity and dot product.
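Hedged NumPy sketches of both weighting schemes (the stand-in scorer and the cosine choice for sim() are our assumptions):
\begin{verbatim}
import numpy as np

def softmax(s):
    a = np.exp(s - s.max())
    return a / a.sum()

def concat_attention(e_i, neighbors, mlp):
    return softmax(np.array(
        [mlp(np.concatenate([e_i, e_j])) for e_j in neighbors]))

def sim_attention(e_i, neighbors):
    cos = [e_i @ e_j / (np.linalg.norm(e_i) * np.linalg.norm(e_j))
           for e_j in neighbors]                 # cosine similarity
    return softmax(np.array(cos))

rng = np.random.default_rng(0)
e_i = rng.normal(size=4)
neighbors = [rng.normal(size=4) for _ in range(3)]
w = rng.normal(size=8)
mlp = lambda x: float(w @ x)    # stand-in for a learned MLP scorer
print(concat_attention(e_i, neighbors, mlp))
print(sim_attention(e_i, neighbors))
\end{verbatim}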
\subsubsection{\textbf{HetGNN}} The user-item interactions and user-user social relations can be regarded as the users' heterogeneous relationships, \textit i.e.,\xspace a user's preferences on items and his/her friendship.
In this sense, a few methods~\cite{chen2021serec,wang2022dcan} have attempted to model the inputs as a heterogeneous graph and then design the HetGNN encoders for learning user and item embeddings, \textit i.e.,\xspace
\begin{equation}
\mathbf e_{i}^{(k)} = \sigma(\sum\nolimits_{n_j \in \mathcal{N}_{n_i}} \mathbf{e}_{j}^{(k-1)} \mathbf{W}_{v_{ij}}^{(k)}),
\end{equation}
where $v_{ij}$ indicates the type of relation between $n_i$ and $n_j$.
As a result, the HetGNN encoder employs different transformation matrices according to the relations between two nodes.
For instance, SeRec~\cite{chen2021serec} defines four types of directed edges (\textit i.e.,\xspace user-user edges, user-item edges, item-user edges, and item-item edges), constructing a heterogeneous graph based on the above edges.
Then, it obtains each user $p_i$'s embedding $\mathbf{u}_i$ and each item $q_j$'s embedding $\mathbf{v}_j$ via the HetGNN encoder.
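A minimal sketch of the relation-typed propagation (ours; the edge-type labels follow the SeRec example above):
\begin{verbatim}
import numpy as np

def hetgnn_layer(e_neighbors, rel_types, W_by_rel):
    # one transformation matrix per relation type v_ij
    msg = sum(W_by_rel[r] @ e for e, r in zip(e_neighbors, rel_types))
    return np.maximum(msg, 0.0)

rng = np.random.default_rng(0)
d = 4
W_by_rel = {r: rng.normal(size=(d, d)) for r in
            ("user-user", "user-item", "item-user", "item-item")}
neighbors = [rng.normal(size=d) for _ in range(3)]
rels = ["user-user", "user-item", "user-user"]
print(hetgnn_layer(neighbors, rels, W_by_rel).shape)  # (4,)
\end{verbatim}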
\subsubsection{\textbf{GRNN}} The sequential behaviors of users when they interact with items reflect the evolution of their preferences of items over time.
For this reason, time-aware recommender systems have attracted increasing attention in recent years~\cite{wang2021survey}.
Such temporal interactions are often divided into multiple user sessions and modeled as session-based SocialRS\xspace.
Multiple works~\cite{sun2020dgarecr,chen2022ssrgnn,lin2022gnndsr,liu2022gnnrec,niu2021mgsr,gu2021egfrec,song2019dgrec,wang2022mohcn,xiao2020mgnn} have attempted to model dynamic user interests through session-based or temporal SocialRS\xspace.
These models leverage the GRNN encoders to capture these time-evolving interests.
Suppose each user $p$ interacts with items in a given sequence $\mathcal{S}_p$. Consequently, one can create a sequence of interactions for each item $q$ as $\mathcal{S}_q$, consisting of users that rate item $q$ in a temporal sequence. In general,
the temporal sequence is denoted for node $n_i$ as $\mathcal{S}_{n_i} = \{n^i_1, n^i_2, \cdots, n^i_K\}$.
Note that session-based encoders would divide $\mathcal{S}_{n_i}$ into multiple sessions $\mathcal{S}^t_{n_i}$ and encode each session separately. The GRNN encoder for node $n_i$ can be then generalized as:
\begin{equation}
\mathbf e_i = \textsc{GRNN}(\mathcal{S}_{n_i}, \mathcal{N}_{n_i}),
\end{equation}
where \textsc{GRNN} is a combination of RNN and GNN modules. In particular, one can obtain dynamic user interests and item embeddings through a long short-term memory (LSTM)~\cite{peng2017cross,zayats2018conversation} unit, \textit i.e.,\xspace
\begin{equation}
\begin{aligned}
\mathbf x_{i}^{(k)} &= \sigma(\mathbf{W}_x[\mathbf{h}_i^{(k-1)}, \mathbf n_k^i] + b_x), \\
\mathbf f_{i}^{(k)} &= \sigma(\mathbf{W}_f[\mathbf{h}_i^{(k-1)}, \mathbf n_k^i] + b_f), \\
\mathbf o_{i}^{(k)} &= \sigma(\mathbf{W}_o[\mathbf{h}_i^{(k-1)}, \mathbf n_k^i] + b_o), \\
\mathbf{\tilde{c}}_{i}^{(k)} &= \tanh(\mathbf{W}_c[\mathbf{h}_i^{(k-1)}, \mathbf n_k^i] + b_c), \\
\mathbf{c}_{i}^{(k)} &= \mathbf f_{i}^{(k)} \odot \mathbf{c}_{i}^{(k-1)} + \mathbf x_{i}^{(k)} \odot \mathbf{\tilde{c}}_{i}^{(k)},\\
\mathbf h_{i}^{(k)} &= \mathbf o_{i}^{(k)} \odot \tanh(\mathbf{c}_{i}^{(k)}).
\end{aligned}
\end{equation}
Then, the node embedding $\mathbf e_i$ is obtained using a GNN module such as GANN or GCN (as discussed above). In general, one can obtain
\begin{equation}
\mathbf e_i = \textsc{GNN}(\mathbf{h}_i^{(K)}, \{\mathbf{h}_j^{(K)}:n_j\in\mathcal{N}_{n_i}\cup \{n_i\}\}).
\end{equation}
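A minimal NumPy version of the LSTM step above (ours; parameter shapes and the zero initial states are assumptions):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, n_k, P):
    z = np.concatenate([h_prev, n_k])
    x = sigmoid(P["Wx"] @ z + P["bx"])   # input gate
    f = sigmoid(P["Wf"] @ z + P["bf"])   # forget gate
    o = sigmoid(P["Wo"] @ z + P["bo"])   # output gate
    c_tilde = np.tanh(P["Wc"] @ z + P["bc"])
    c = f * c_prev + x * c_tilde         # new cell state
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
d = 4
P = {"W" + g: rng.normal(size=(d, 2 * d)) for g in "xfoc"}
P.update({"b" + g: np.zeros(d) for g in "xfoc"})
h, c = lstm_step(np.zeros(d), np.zeros(d), rng.normal(size=d), P)
print(h.shape, c.shape)                  # (4,) (4,)
\end{verbatim}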
For instance, DREAM~\cite{song2020dream} obtains each user $p_i$'s embedding within each session using the GRNN encoder as above. It uses a relational GAT module as the GNN layer to aggregate information from the user's social neighbors. Meanwhile, the item embedding $\mathbf v_j$ for each item $q_j$ is obtained using a simple embedding layer.
\subsubsection{\textbf{HyperGNN}} Most GNN encoders, as mentioned above, learn pairwise connectivity between two nodes.
However, more-complicated connections can be captured by
jointly using user-item relations with user-user edges and/or using higher-order social relations.
For instance, triangular structures, including two users and their co-rated items, are a common motif.
To leverage such high-order relations, some works~\cite{yu2021mhcn,sun2022motifres,han2022dhhgcn} have attempted to model the inputs as a hypergraph and then design the HyperGNN encoders for learning user and item embeddings.
Let $\mathcal{G}=(\mathcal{N},\mathcal{E})$ denotes a hypergraph where $\mathcal{N}$ and $\mathcal{E}$ indicate sets of nodes and hyperedges, respectively.
Each hyperedge connects any number of nodes.
The hypergraph $\mathcal{G}$ can be denoted by an incident matrix $\mathbf{H}_\mathcal{G} \in \mathbb{R}^{x \times y}$, where $x$ and $y$ indicate the numbers of nodes and hyperedges, respectively.
In $\mathbf{H}_\mathcal{G}$, $h(n,e)=1$ if $n\in e$, otherwise 0.
Also, $\mathbf{D}_\mathcal{N}$ and $\mathbf{D}_\mathcal{E}$ denote the diagonal matrices of the node and hyperedge degrees, respectively.
In this case, each layer of the HyperGNN encoder is defined as:
\begin{equation}
\mathbf E^{(k)} = \mathbf D^{-\frac{1}{2}}_\mathcal{N}\mathbf H_{\mathcal{G}}\mathbf D^{-1}_\mathcal{E}\mathbf H_{\mathcal{G}}^\top\mathbf D^{-\frac{1}{2}}_\mathcal{N}\mathbf E^{(k-1)}.
\end{equation}
We note that HyperGNN-based SocialRS\xspace methods~\cite{yu2021mhcn,han2022dhhgcn} remove non-linear activation and feature transformation as in the LightGCN encoder.
For instance, MHCN~\cite{yu2021mhcn} designs three types of triangular motifs, constructing three incidence matrices, each representing a hypergraph induced by each motif.
Then, it obtains each user $p_i$'s embedding $\mathbf{u}_i$ via the multi-type HyperGNN encoders while obtaining each item $q_j$'s embedding $\mathbf{v}_j$ via the GCN encoder.
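One HyperGNN layer built from the incidence matrix can be sketched as follows (our NumPy rendering of the propagation rule above):
\begin{verbatim}
import numpy as np

def hypergnn_layer(H, E):
    d_n = np.maximum(H.sum(axis=1), 1.0)    # node degrees
    d_e = np.maximum(H.sum(axis=0), 1.0)    # hyperedge degrees
    Dn = np.diag(d_n ** -0.5)
    De = np.diag(1.0 / d_e)
    return Dn @ H @ De @ H.T @ Dn @ E

rng = np.random.default_rng(0)
H = (rng.random((6, 3)) < 0.5).astype(float)  # 6 nodes, 3 hyperedges
E = rng.normal(size=(6, 4))
print(hypergnn_layer(H, E).shape)             # (6, 4)
\end{verbatim}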
\subsubsection{\textbf{Others}} Furthermore, we briefly describe the two encoders, GAE and hyperbolic GNN, each of which is employed by only one method.
Liu et al.~\cite{liu2022siga} pointed out that GCN is mainly suitable for semi-supervised learning tasks.
On the other hand, they claimed that the goal of GAE coincides with that of the recommendation task, which is to minimize the reconstruction error of input and output~\cite{liu2022siga}.
For this reason, they proposed a SocialRS\xspace method, named SIGA, which employs GAE and is used for the rating prediction task.
Meanwhile, Wang et al.~\cite{wang2021hypersorec} pointed out that since existing methods usually learn the user and item embeddings in the Euclidean space, these methods fail to explore the latent hierarchical property in the data.
For this reason, they proposed a SocialRS\xspace method, named HyperSoRec, which performs in the hyperbolic space because the exponential expansion of hyperbolic space helps preserve more-complex relationships between users and items~\cite{Krioukov2010}.
\input{tab-decoders}
\subsection{Decoders}
In this subsection, we group the decoders of GNN-based SocialRS\xspace into two categories: dot-product and multi-layer perceptron (MLP).
Table~\ref{tab:decoder} summarizes the taxonomy of these decoders.
\input{tab-loss}
\subsubsection{\textbf{Dot-product}} Many methods~\cite{wu2019diffnet,wu2020diffnet++,song2021diffnetlg,jin2020megcn,jiang2021asr,seng2021atgcn,guo2020gnnsor,sun2020dgarecr,liu2021sagclg,li2020hidm,liao2022sociallgn,liao2022gman,liu2022mpsr,song2020dream,zhu2021shgcn,yang2022scgrec,zheng2021pdarec,yu2021mhcn,yu2021sept,wu2022dcrec,han2022dhhgcn,yan2022ssdrec,chen2021serec,sun2022motifres,zhen2022apte,wu2022eagcn,liu2022hosr,liu2022fesog,chen2022igrec,yu2022esrf,zhu2022sinews,du2022sdcrec,liu2022sohrml,wang2021hypersorec,sha2021dsr,tao2022design,xie2022sran,li2022idiffnet,zhang2022cgl,chen2022fbne,song2022mrapr,liu2022gnnrec,wei2022sghan,chen2022ssrgnn,li2022disgcn,miao2022melgn,liufu2021sga,liu2022siga} simply predict a user $p_i$'s preference $\hat{r}_{ij}$ on an item $q_j$ via a dot product of their corresponding embeddings, \textit i.e.,\xspace
\begin{equation}
\hat{r}_{ij} = \mathbf{u}_i \cdot \mathbf{v}_j^\top.
\end{equation}
\subsubsection{\textbf{MLP}} More than half of the existing methods~\cite{fan2019graphrec,fan2022graphrecp,fu2021dicer,jiang2021san,tien2020kconvgraphrec,gu2021egfrec,narang2021fuserec,mandal2021gnntsr,mu2019gatnsr,bai2020tgnn,vijaikumar2019sorecgat,hou2021pagan,bi2021ghscf,salamat2021heterographrec,leng2022glow,zhao2021bfhan,niu2021mgsr,hoang2021gtn,xiao2020mgnn,xu2020srhgnn,wu2019danser,xiao2021mutualrec,xiao2022gsfr,walker2021soapvae,shi2022sengr,wang2022mohcn,wang2022dcan,qiao2022tag,wei2022hsgnn,jiang2022socialripplenet,lin2022gnndsr,chen2022gdsrec,song2019dgrec,song2020dream} predict a user $p_i$'s preference $\hat{r}_{ij}$ on an item $q_j$ by employing MLP as follows:
\begin{equation}
\hat{r}_{ij} = \sigma_{L}(\mathbf{W}^\top_{L}(\sigma_{L-1}(...\sigma_{2}(\mathbf{W}_{2}^\top \begin{bmatrix}
\mathbf{u}_{i} \\
\mathbf{v}_{j}
\end{bmatrix}
+\mathbf{b}_{2})...))+\mathbf{b}_{L}),
\end{equation}
where $\mathbf{W}_{i}$, $\mathbf{b}_{i}$, and $\sigma_{i}$ denote the weight matrix, bias vector, and activation function for $i$-th layer's perceptron, respectively.
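Minimal sketches of the two decoders (ours; the two-layer MLP with a sigmoid output is an assumption, since depth and activations vary across methods):
\begin{verbatim}
import numpy as np

def dot_decoder(u, v):
    return u @ v

def mlp_decoder(u, v, W2, b2, wL, bL):
    h = np.maximum(W2 @ np.concatenate([u, v]) + b2, 0.0)  # hidden
    return 1.0 / (1.0 + np.exp(-(wL @ h + bL)))            # output

rng = np.random.default_rng(0)
d, hdim = 4, 6
u, v = rng.normal(size=d), rng.normal(size=d)
W2, b2 = rng.normal(size=(hdim, 2 * d)), np.zeros(hdim)
wL, bL = rng.normal(size=hdim), 0.0
print(dot_decoder(u, v), mlp_decoder(u, v, W2, b2, wL, bL))
\end{verbatim}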
\subsection{Loss Functions}
In this subsection, we first group the primary loss functions of GNN-based SocialRS\xspace into 4 categories: Bayesian personalized ranking (BPR)~\cite{RendleFGS09}, mean squared error (MSE), cross-entropy (CE), and hinge loss.
In addition, we found that some works additionally employ auxiliary loss functions.
Thus, we further group these loss functions into 8 categories: social link prediction (LP) loss, self-supervised loss (SSL), group-based loss, adversarial (Adv) loss, path-based loss, knowledge distillation (KD) loss, sentiment-aware loss, and policy-network-based (Policy Net) loss.
Table~\ref{tab:loss} summarizes the taxonomy of loss functions used in existing work.
\subsubsection{\textbf{Primary Loss Functions}} Different primary loss functions are employed depending on whether the methods focus on explicit or implicit feedback.
\vspace{1mm}
\textbf{MSE Loss.} For the methods that focus on explicit feedback (\textit e.g.,\xspace star ratings) of users, most of them~\cite{fan2019graphrec,mandal2021gnntsr,mu2019gatnsr,bai2020tgnn,hou2021pagan,bi2021ghscf,fan2022graphrecp,hoang2021gtn,zheng2021pdarec,guo2020gnnsor,wu2019danser,jiang2021san,tien2020kconvgraphrec,salamat2021heterographrec,xu2020srhgnn,liao2022gman,sun2020dgarecr,niu2021mgsr,zhen2022apte,wu2022eagcn,liu2022fesog,shi2022sengr,wang2022mohcn,qiao2022tag,jiang2022socialripplenet,lin2022gnndsr,chen2022gdsrec} learn user and item embeddings via the MSE-based loss function $\mathcal{L}_{MSE}$, which is defined as follows:
\begin{equation}
\mathcal{L}_{MSE}=\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{N}_{p_i}}(\hat{r}_{ij}-{r}_{ij})^2,
\end{equation}
where ${r}_{ij}$ indicates $p_i$'s real rating score on $q_j$. That is, the embeddings of $p_i$ and $q_j$ are learned, aiming at minimizing the differences between $p_i$'s real and predicted scores, \textit i.e.,\xspace ${r}_{ij}$ and $ \hat{r}_{ij}$, for $q_j$.
\vspace{1mm}
\textbf{BPR Loss.} For the methods that focus on implicit feedback (\textit e.g.,\xspace click or browsing history) of users, most of them~\cite{jiang2021asr,liu2021sagclg,xiao2020mgnn,li2020hidm,liao2022sociallgn,liu2022mpsr,zhu2021shgcn,xiao2021mutualrec,seng2021atgcn,wu2019diffnet,wu2020diffnet++,song2021diffnetlg,jin2020megcn,leng2022glow,yang2022scgrec,yu2021sept,wu2022dcrec,yu2021mhcn,han2022dhhgcn,xiao2022gsfr,liu2022hosr,chen2022igrec,yu2022esrf,liu2022sohrml,sha2021dsr,xie2022sran,li2022idiffnet,zhang2022cgl,song2022mrapr,li2022disgcn,wei2022hsgnn,miao2022melgn,liufu2021sga,sun2022motifres} learn user and item embeddings via the BPR-based loss function $\mathcal{L}_{BPR}$, which is defined as follows:
\begin{equation}
\mathcal{L}_{BPR}=-\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{N}_{p_i}}\sum_{q_k\in\mathcal{I}\setminus\mathcal{N}_{p_i}}\text{log}\sigma(\hat{r}_{ij}-\hat{r}_{ik}),
\end{equation}
where $\mathcal{U}$ and $\mathcal{N}_{p_i}$ denote the set of users and the set of items rated by $p_i$, respectively. $\hat{r}_{ij}$ and $\hat{r}_{ik}$ indicate $p_i$'s preference on the rated item $q_j$ and the (randomly-sampled) unrated item $q_k$, respectively.
Also, $\sigma$ indicates the sigmoid function.
That is, the embeddings of $p_i$, $q_j$, and $q_k$ are learned based on the intuition that $p_i$’s preference $\hat{r}_{ij}$ on $q_j$ is likely to be higher than $p_i$’s preference $\hat{r}_{ik}$ on $q_k$.
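A one-triple NumPy sketch of this objective (ours):
\begin{verbatim}
import numpy as np

def bpr_loss(u, v_pos, v_neg):
    x = u @ v_pos - u @ v_neg                  # r_ij - r_ik
    return -np.log(1.0 / (1.0 + np.exp(-x)))   # -log sigma(x)

rng = np.random.default_rng(0)
u, v_pos, v_neg = (rng.normal(size=4) for _ in range(3))
print(bpr_loss(u, v_pos, v_neg))
\end{verbatim}
In practice the loss is summed over sampled (user, rated item, unrated item) triples and minimized jointly with the encoder parameters.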
\vspace{1mm}
\textbf{CE Loss.} Several methods~\cite{fu2021dicer,vijaikumar2019sorecgat,wu2019danser,zhao2021bfhan,gu2021egfrec,narang2021fuserec,chen2021serec,yan2022ssdrec,zhu2022sinews,tao2022design,chen2022fbne,liu2022gnnrec,wang2022dcan,walker2021soapvae,wei2022sghan,chen2022ssrgnn,chen2022gdsrec,li2021spex,liu2022siga,song2019dgrec,song2020dream} for implicit feedback learn user and item embeddings via the CE-based loss function $\mathcal{L}_{CE}$, which is defined as follows:
\begin{equation}
\mathcal{L}_{CE}=-\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{I}}{r}_{ij}\text{log}(\hat{r}_{ij}) + (1-{r}_{ij})\text{log}(1-\hat{r}_{ij}),
\end{equation}
where $\mathcal{I}$ indicates a set of items. It should be noted that ${r}_{ij}=1$ if $q_j \in \mathcal{N}_{p_i}$, otherwise ${r}_{ij}=0$.
That is, the embeddings of $p_i$ and $q_j$ are learned, aiming at maximizing $p_i$’s preferences on his/her rated items while minimizing $p_i$'s preferences on his/her unrated items.
\vspace{1mm}
\textbf{Hinge Loss.} A method~\cite{wang2021hypersorec} for implicit feedback learns user and item embeddings via the hinge loss function $\mathcal{L}_{Hinge}$, which is defined as follows:
\begin{equation}
\mathcal{L}_{Hinge}=\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{N}_{p_i}}\sum_{q_k\in\mathcal{I}\setminus\mathcal{N}_{p_i}}\max(0,\lambda+(\hat{r}_{ij})^2-(\hat{r}_{ik})^2),
\end{equation}
where $\lambda$ indicates the safety margin size. That is, the embeddings of $p_i$, $q_j$, and $q_k$ are learned, aiming at ensuring that $p_i$’s preferences on his/her rated items $q_j$ are higher than those on his/her unrated items $q_k$ at least by a margin of $\lambda$.
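Taking the equation above verbatim, a minimal NumPy sketch (with $\lambda$ exposed as the \texttt{margin} parameter) is:
\begin{verbatim}
import numpy as np

def hinge_loss(pos_scores, neg_scores, margin=0.1):
    # One entry per (user, rated item, sampled unrated item) triple;
    # margin corresponds to the safety margin lambda above.
    return np.sum(np.maximum(0.0,
                  margin + pos_scores**2 - neg_scores**2))

print(hinge_loss(np.array([0.3, 0.9]), np.array([1.5, 0.4])))
\end{verbatim}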
\subsubsection{\textbf{Auxiliary Loss Functions}}
Here, we discuss the auxiliary loss functions used by GNN-based SocialRS\xspace methods.
\vspace{1mm}
\textbf{Social Link Prediction (LP) Loss.}
It should be noted that the primary objective of the existing works is to reconstruct the input U-I rating graph.
Along with this, methods such as MGNN~\cite{xiao2020mgnn}, MutualRec~\cite{xiao2021mutualrec}, and SR-HGNN~\cite{xu2020srhgnn} additionally learn a BPR-based social LP loss that aims at reconstructing the input U-U social graph.
In this way, user embeddings are further informed by the social relations they must reconstruct, allowing them to better capture the social network structure that is essential for more effective social recommendation.
\vspace{1mm}
\textbf{Self-Supervised Loss (SSL)}. SSL originated in image and text domains to address the deficiency of labeled data~\cite{liu2020selfsupervised}.
The basic idea of SSL is to assign labels for unlabeled data and exploit them additionally in the training process.
It is well-known that the data sparsity problem significantly affects the performance of recommender systems.
Therefore, there has recently been a surge of interest in SSL for recommender systems~\cite{yu2022ssl}.
Some GNN-based SocialRS\xspace methods~\cite{yu2021sept,yu2021mhcn,wu2022dcrec,zhang2022cgl,li2022disgcn,wang2022dcan,du2022sdcrec,sun2022motifres} designed SSL objectives derived from the U-U social and/or U-I rating graphs.
In this survey, we categorized them as social SSL and interaction-based SSL depending on the graph type employed to design SSL.
For the social SSL, SEPT~\cite{yu2021sept} augments different user-related views with the U-U social graph and designs two socially-aware encoders that aim at reconstructing the augmented views. It adopts the regime of tri-training~\cite{ZhouL05tri}, which operates on the augmented views above to produce self-supervised signals.
For the interaction-based SSL, SDCRec~\cite{du2022sdcrec} samples, among the items rated by a user, the two items most similar to the user. Then, it additionally utilizes them as self-supervised signals.
On the other hand, Motif-Res~\cite{sun2022motifres} and MHCN~\cite{yu2021mhcn} explore the motif information in graph structure so that such information can be utilized as self-supervised signals.
For instance, MHCN~\cite{yu2021mhcn} constructs multi-type hyperedges, which are instances of a set of triangular relations, and designs SSL by leveraging the hierarchy in the hypergraph structures. It aims at reflecting the user node’s local and global high-order connectivity patterns in different hypergraphs~\cite{yu2021mhcn}.
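While each method derives its self-supervised signals differently, a common building block behind such view-based SSL is a contrastive (InfoNCE-style) objective between two augmented views of the same user. A generic NumPy sketch, which is not the exact loss of any cited method, is:
\begin{verbatim}
import numpy as np

def info_nce(view1, view2, tau=0.2):
    # view1, view2: (num_users, d) user embeddings from two
    # augmented views; row i of each forms a positive pair, and
    # all other rows serve as in-batch negatives.
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = (v1 @ v2.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1,
                                                  keepdims=True))
    return -np.mean(np.diag(log_prob))
\end{verbatim}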
\vspace{1mm}
\textbf{Group-based Loss.} GLOW~\cite{leng2022glow} and GMAN~\cite{liao2022gman} make use of user groups.
Based on the group information, both methods additionally design a group-based loss.
They define a group-item interaction by regarding the set of users who have interacted with an item as a group.
Then, they represent each group's embedding by attentively aggregating the embeddings of the users within the corresponding group, as sketched below.
Finally, the user and item embeddings are learned via a group-based loss so that each group's preferences on the items rated by its member users are likely to be higher than its preferences on unrated items.
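A minimal sketch of the attentive aggregation step, assuming the per-member attention scores are already computed (the actual attention networks of GLOW and GMAN are more involved), is:
\begin{verbatim}
import numpy as np

def group_embedding(member_embs, att_logits):
    # member_embs: (g, d) embeddings of the g users in a group;
    # att_logits:  (g,) unnormalized attention scores per member.
    w = np.exp(att_logits - att_logits.max())
    w = w / w.sum()                  # softmax over group members
    return (w[:, None] * member_embs).sum(axis=0)

emb = group_embedding(np.random.rand(4, 8),
                      np.array([0.2, 1.1, -0.3, 0.5]))
print(emb.shape)  # (8,)
\end{verbatim}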
\vspace{1mm}
\textbf{Others.} We briefly discuss the other loss functions that are employed by only one method.
Yu et al.~\cite{yu2022esrf} designed an adversarial mechanism to account for the fact that social relations in real-world social networks are sparse, noisy, and multi-faceted.
On the other hand, Li et al.~\cite{li2021spex} pointed out that existing SocialRS\xspace methods fail to distinguish social influence from social homophily. To address this limitation, they designed an auxiliary loss function that models and captures the rich information conveyed by the formation of social homophily~\cite{li2021spex}.
Furthermore, Tao et al.~\cite{tao2022design} applied the knowledge distillation (KD) technique to social recommendation to address the overfitting problem of existing methods.
Shi et al.~\cite{shi2022sengr} incorporated both sentiment information derived from reviews and interaction information captured by the GNN encoder. To this end, they designed an auxiliary loss function that captures different sentimental aspects of items from reviews~\cite{shi2022sengr}.
Lastly, Wu et al.~\cite{wu2019danser} designed a policy-based loss function based on a contextual multi-armed bandit~\cite{BubeckC12}, which dynamically weighs different social effects, \textit i.e.,\xspace social homophily, social influence, item-to-item homophily, and item-to-item influence.
\section{Conclusions}\label{sec:conclusions}
Although there has been a surge of papers on developing GNN-based social recommendation methods, no survey had thoroughly reviewed them.
Our work is the first systematic and comprehensive survey of GNN-based SocialRS\xspace, studying $80$ papers collected by following the PRISMA guidelines.
We present a novel taxonomy of inputs and architectures for GNN-based SocialRS\xspace, categorizing the different methods developed over the years in this important topic.
Through this survey, we hope to enable researchers in this field to better position their work within recent trends, to provide a gateway for new researchers to this important and active topic, and to spur the development of novel GNN-based SocialRS\xspace methods.
\section{Experimental Setup}\label{sec:setup}
In this section, we discuss the experimental setup of GNN-based SocialRS\xspace methods.
Specifically, we review 17 benchmark datasets and 8 evaluation metrics that are widely used in GNN-based SocialRS\xspace methods.
\subsection{\textbf{Benchmark Datasets}}
We summarize the datasets widely used by existing GNN-based SocialRS\xspace methods in Table~\ref{tab:datasets}.
These datasets come from 8 different application domains: product, location, movie, image, music, bookmark, microblog, and miscellaneous.
We present the statistics of each dataset, including the numbers of users, items, ratings, and social relations, and a list of papers using the corresponding dataset.
Since several versions exist per dataset, we chose the version that includes the largest amount of rating information.
\subsubsection{\textbf{Product-related Datasets}}
\vspace{1mm}
\hfill \break
\indent\textbf{Epinions.}
This dataset is collected from a now-defunct consumer review site, Epinions.
It contains 355.8K trust relations from 18.0K users and 764.3K ratings from 18.0K users on 261.6K products.
Here, a trust relation between two users indicates that one user trusts a review of a product written by another user.
For each rating, this dataset originally provides the product name, its category, the rating score in the range [1, 5], the timestamp that a user rated on an item, and the helpfulness of this rating.
37 GNN-based SocialRS\xspace methods reviewed in this survey used this dataset~\cite{zheng2021pdarec,guo2020gnnsor,xiao2021mutualrec,fan2019graphrec,fu2021dicer,mandal2021gnntsr,mu2019gatnsr,bai2020tgnn,hou2021pagan,xiao2020mgnn,bi2021ghscf,li2020hidm,walker2021soapvae,fan2022graphrecp,hoang2021gtn,wu2020diffnet++, wu2019danser,tien2020kconvgraphrec,salamat2021heterographrec,xu2020srhgnn, zhao2021bfhan,narang2021fuserec,sun2020dgarecr,xiao2022gsfr,zhen2022apte,liu2022fesog,du2022sdcrec,liu2022sohrml,lin2022gnndsr,chen2022gdsrec,li2021spex,song2020dream,sha2021dsr,tao2022design,wang2021hypersorec,wang2022mohcn,liu2022gnnrec}, making it the most popular dataset in SocialRS\xspace research.
\vspace{1mm}
\textbf{Ciao.}
This dataset is collected from a consumer review site in the UK, Ciao (\url{https://www.ciao.co.uk/}).
It contains 111.7K trust relations from 7.3K users and 283.3K ratings from 7.3K users on 104.9K products. The rating scale is from 1 to 5.
This dataset was used by 34 GNN-based SocialRS\xspace methods reviewed in this survey~\cite{mandal2021gnntsr,bai2020tgnn,hou2021pagan,bi2021ghscf,li2020hidm,liao2022sociallgn,walker2021soapvae,liu2022mpsr,tien2020kconvgraphrec,fan2022graphrecp,salamat2021heterographrec,xu2020srhgnn,zhao2021bfhan,narang2021fuserec,sun2020dgarecr,wang2021hypersorec,fan2019graphrec,fu2021dicer,jiang2021asr,hoang2021gtn,wu2022dcrec,liu2022siga,xiao2022gsfr,wu2022eagcn,liu2022fesog,du2022sdcrec,liu2022sohrml,sha2021dsr,tao2022design,chen2022gdsrec,wang2022mohcn,song2022mrapr,lin2022gnndsr}.
\vspace{1mm}
\textbf{Beidan.}
This dataset is collected from a social e-commerce platform in China, Beidan (\url{https://www.beidian.com/}), which allows users' sharing behaviors.
It includes 2.3K social relations from 2.8K users and 35.1K ratings from 2.8K users on 2.2K products.
For each social relation, Li et al.~\cite{li2022disgcn} recorded when a user's friend clicks a link, shared by the user, that points to the information of a specific item.
In this dataset, the rating information does not provide explicit preference scores; it contains implicit feedback only.
This dataset was used in only one GNN-based SocialRS\xspace method~\cite{li2022disgcn}.
\vspace{1mm}
\textbf{Beibei.}
This dataset is collected from another social e-commerce platform in China, Beibei (\url{https://www.beibei.com/}).
It is similar to Beidan but provides larger numbers of social relations and ratings.
This dataset includes 197.5K social relations from 24.8K users and 1.6M ratings from 24.8K users on 16.8K products.
For ratings, this dataset provides users' implicit feedback.
This dataset was used in~\cite{li2022disgcn} only.
\subsubsection{\textbf{Location-related Datasets}}
\vspace{1mm}
\hfill \break
\indent\textbf{Yelp.}
This dataset is collected from a business review site, Yelp (\url{https://www.yelp.com/}).
It contains 363.6K social relations from 19.5K users and 405.8K ratings from 19.5K users on 21.2K businesses.
On Yelp, users can share their check-ins about local businesses (\textit e.g.,\xspace restaurants and home services) and express their experience through ratings in the range [0, 5].
Also, users can create social relations with other users.
Each check-in contains a user, a timestamp, and a business (\textit i.e.,\xspace an item) that the user visited.
29 GNN-based SocialRS\xspace methods reviewed in this survey used this dataset~\cite{jiang2021asr,vijaikumar2019sorecgat,liu2022mpsr,guo2020gnnsor,wu2019diffnet,wu2020diffnet++,song2021diffnetlg,jin2020megcn,jiang2021san,yu2021sept,yu2021mhcn,han2022dhhgcn,sun2022motifres,zhen2022apte,wu2022eagcn,liu2022hosr,tao2022design,wang2021hypersorec,xie2022sran,li2022idiffnet,zhang2022cgl,wang2022mohcn,song2022mrapr,wei2022sghan,miao2022melgn,song2019dgrec,shi2022sengr,chen2022fbne,qiao2022tag}.
\vspace{1mm}
\textbf{Dianping.}
This dataset is collected from a local restaurant search and review platform in China, Dianping (\url{https://www.dianping.com/}).
It contains 813.3K social relations from 59.4K users and 934.3K ratings from 59.4K users on 10.2K restaurants.
For ratings, each user can give scores in the range [1, 5].
This dataset was used in two GNN-based SocialRS\xspace methods~\cite{wu2020diffnet++,wu2022dcrec}.
\vspace{1mm}
\textbf{Gowalla.}
This dataset is collected from a location-based social networking site, Gowalla (\url{https://www.gowalla.com/}).
It contains 283.7K friendship relations from 33.6K users and 1.2M ratings from 33.6K users on 41.2K locations.
On Gowalla, users can share information about their locations by check-in and make friends based on the shared information.
For ratings, this dataset provides users' implicit feedback.
9 GNN-based SocialRS\xspace methods used this dataset~\cite{seng2021atgcn,chen2021serec,li2022atstggnn,wu2022eagcn,yu2022esrf,wang2022dcan,wei2022sghan,chen2022ssrgnn,liufu2021sga}.
\vspace{1mm}
\textbf{Foursquare.}
This dataset is collected from another location-based social networking site, Foursquare (\url{https://foursquare.com/}).
It is similar to Gowalla but provides larger numbers of social relations and ratings.
It contains 304.0K friendship relations from 39.3K users and 3.6M ratings from 39.3K users on 45.5K locations.
For ratings, this dataset provides users' implicit feedback.
This dataset was used in 4 GNN-based SocialRS\xspace methods~\cite{chen2021serec,li2022atstggnn,wang2022dcan,chen2022ssrgnn}.
\subsubsection{\textbf{Movie-related Datasets}}
\vspace{1mm}
\hfill \break
\indent\textbf{MovieLens.}
This dataset is collected from GroupLens Research (\url{https://grouplens.org/}) for the purpose of recommendation research.
It contains 487.1K social relations from 138.1K users and 1.5M ratings from 138.1K users on 16.9K movies.
It should be noted that this dataset has different versions according to the size of the rating information. For the details, refer to \url{https://grouplens.org/datasets/movielens/}.
Since the original MovieLens datasets do not contain users' social relations, methods using this dataset built social relations by calculating the similarities between users.
This dataset was used in 4 GNN-based SocialRS\xspace methods~\cite{liu2021sagclg,tien2020kconvgraphrec,jiang2022socialripplenet,chen2022fbne}.
\vspace{1mm}
\textbf{Flixster.}
This dataset is collected from a movie review site, Flixster (\url{https://www.flixster.com/}).
It contains 667.3K friendship relations from 58.4K users and 3.6M ratings from 58.4K users on 38.0K movies.
On Flixster, users can add other users to their friend lists and express their preferences for movies.
The rating values are 10 discrete numbers in the range [0.5, 5].
We found that 7 GNN-based SocialRS\xspace methods used this dataset~\cite{xiao2020mgnn,fan2022graphrecp,guo2020gnnsor,xiao2021mutualrec,xiao2022gsfr,liu2022sohrml,liu2022siga}.
\vspace{1mm}
\textbf{FilmTrust.}
This dataset is collected from another (now-defunct) movie review site, FilmTrust.
It is similar to Flixster but provides smaller numbers of social relations and ratings.
It contains 1.8K friendship relations from 1.5K users and 35.4K ratings from 1.5K users on 2.0K movies.
The rating scale is from 1 to 5.
This dataset was used in 6 GNN-based SocialRS\xspace methods~\cite{jiang2021asr,mu2019gatnsr,zheng2021pdarec,sun2022motifres,liu2022fesog,liu2022siga}.
\subsubsection{\textbf{Image-related Dataset}}
\vspace{1mm}
\hfill \break
\indent\textbf{Flickr.}
This dataset is collected from a who-trust-whom online image-based social sharing platform, Flickr (\url{https://www.flickr.com/}).
It contains 187.2K follow relations from 8.3K users and 327.8K ratings from 8.3K users on 82.1K images.
On Flickr, users can follow other users and share their preferences for images with their followers.
For ratings, this dataset provides users' implicit feedback.
Also, we found that 9 GNN-based SocialRS\xspace methods used this dataset~\cite{wu2019diffnet,wu2020diffnet++,jin2020megcn,jiang2021san,wu2022eagcn,tao2022design,xie2022sran,li2022idiffnet,zhang2022cgl}.
\subsubsection{\textbf{Music-related Dataset}}
\vspace{1mm}
\hfill \break
\indent\textbf{Last.fm.}
This dataset is collected from a social music platform, Last.fm (\url{https://www.last.fm/}).
It contains 25.4K social relations from 1.8K users and 92.8K ratings from 1.8K users on 17.6K music artists.
Each rating indicates that one user listened to an artist's music, \textit i.e.,\xspace implicit feedback.
On Last.fm, users can make friend relations based on their preferences for artists.
This dataset was used in 11 GNN-based SocialRS\xspace methods~\cite{liao2022sociallgn, xiao2021mutualrec, seng2021atgcn,tien2020kconvgraphrec,yu2021sept,yu2021mhcn,zhang2021kcrec,chen2022igrec,yu2022esrf,liufu2021sga,miao2022melgn}.
\subsubsection{\textbf{Bookmark-related Dataset}}
\vspace{1mm}
\hfill \break
\indent\textbf{Delicious.}
This dataset is collected from a social bookmarking system, Delicious (\url{https://del.icio.us/}).
It contains 12.5K social relations from 1.6K users and 282.4K ratings from 1.6K users on 3.4K tags.
On Delicious, users can bookmark URLs (\textit i.e.,\xspace implicit feedback) and also assign a variety of semantic tags to bookmarks.
Also, they can have social relations with other users having mutual bookmarks or tags.
This dataset was used in 7 GNN-based SocialRS\xspace methods~\cite{li2020hidm,gu2021egfrec, chen2021serec,wang2022dcan,chen2022ssrgnn,lin2022gnndsr,song2019dgrec}.
\subsubsection{\textbf{Microblog-related Datasets}}
\vspace{1mm}
\hfill \break
\indent\textbf{Weibo.}
This dataset is collected from a social microblog site in China, Weibo (\url{https://weibo.com/}).
It contains 133.7K social relations from 6.8K users and 157.5K ratings from 6.8K users on 19.5K blogs.
On Weibo, users can post microblogs (\textit i.e.,\xspace implicit feedback) and retweet other users' blogs.
Based on such retweeting behavior, Li et al.~\cite{li2021spex} collected social relations between users.
Specifically, if a user has retweeted a microblog from another user, a social relation between the two users is created.
This dataset was used in only one GNN-based SocialRS\xspace method~\cite{li2021spex}.
\vspace{1mm}
\textbf{Twitter.}
This dataset is collected from another social microblog site, Twitter (\url{https://twitter.com/}).
It is similar to Weibo and contains 96.7K social relations from 8.3K users and 466.2K ratings from 8.9K users on 232.8K blogs.
Li et al.~\cite{li2021spex} collected social relations between two users if a user retweets or replies to a tweet from another user.
This dataset was used in~\cite{li2021spex} only.
\subsubsection{\textbf{Miscellaneous}}
\vspace{1mm}
\hfill \break
\indent\textbf{Douban.}
This dataset is collected from a social platform in China, Douban (\url{https://douban.com/}).
It contains 35.7K social relations from 2.8K users and 894.8K ratings from 2.8K users on 39.5K items of different categories (\textit e.g.,\xspace books, movies, and so on).
For ratings, this dataset provides users' implicit feedback.
This dataset was used in 19 GNN-based SocialRS\xspace methods~\cite{bai2020tgnn,walker2021soapvae,seng2021atgcn,salamat2021heterographrec,xu2020srhgnn,gu2021egfrec,niu2021mgsr,han2022dhhgcn,sun2022motifres,liu2022hosr,liu2022gnnrec,liu2022siga,song2019dgrec,chen2022igrec,yu2022esrf,yu2021sept,yu2021mhcn,song2020dream,miao2022melgn}.
However, it should be noted that most methods using this dataset split users' ratings according to the item categories and then use those of some categories only, \textit e.g.,\xspace Douban-Movie and Douban-Book.
\subsection{Evaluation Metrics}
\subsubsection{\textbf{Rating Prediction Task}} The methods that focus on explicit feedback aim to minimize the errors of the rating prediction task.
To evaluate the performance of this task, they use the following metrics: root mean squared error (RMSE) and mean absolute error (MAE).
Specifically, MAE calculates the average absolute difference between the predicted and actual ratings, while RMSE emphasizes larger errors by squaring them before averaging.
Both metrics are computed as follows:
\begin{equation}\label{maermse}
\begin{aligned}
MAE &= \frac{1}{M}\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{N}_{p_i}}|\hat{r}_{ij}-{r}_{ij}|,\\
RMSE &= \sqrt{\frac{1}{M}\sum_{p_i\in\mathcal{U}}\sum_{q_j\in\mathcal{N}_{p_i}}(\hat{r}_{ij}-{r}_{ij})^2},
\end{aligned}
\end{equation}
where $M$ indicates the number of ratings. Also, $\mathcal{U}$ and $\mathcal{N}_{p_i}$ denote a set of users and a set of items rated by $p_i$, respectively.
Lastly, ${r}_{ij}$ and $\hat{r}_{ij}$ indicate a user $p_i$'s actual and predicted ratings on an item $q_j$, respectively.
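Both metrics can be computed directly from the paired arrays of predicted and actual ratings; a NumPy sketch is:
\begin{verbatim}
import numpy as np

def mae(pred, true):
    # Average absolute error over the observed ratings.
    return np.mean(np.abs(pred - true))

def rmse(pred, true):
    # Root of the average squared error over the observed ratings.
    return np.sqrt(np.mean((pred - true) ** 2))

pred = np.array([3.8, 1.2, 4.9, 2.5])
true = np.array([4.0, 1.0, 5.0, 3.0])
print(mae(pred, true), rmse(pred, true))
\end{verbatim}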
\subsubsection{\textbf{Top-$N$ Recommendation Task}} The methods that focus on implicit feedback aim to improve the accuracy of the top-$N$ recommendation task.
To evaluate the performance of this task, they use the following metrics: normalized discounted cumulative gain (NDCG)~\cite{JarvelinK00}, mean reciprocal rank (MRR)~\cite{BreeseHK98}, area under the ROC curve (AUC), F1 score, precision, and recall.
First, NDCG reflects the importance of ranked positions of items in a set $\mathcal{R}_{p_i}$ of $N$ items that each method recommends to a user $p_i$.
Let $y_{k}$ represent a binary variable for the $k$-th item $i_{k}$ in $\mathcal{R}_{p_i}$, \textit i.e.,\xspace $y_{k}\in{\{0,1\}}$.
$y_{k}$ is set as $1$ if $i_{k}\in \mathcal{N}_{p_i}$ and set as $0$ otherwise.
$\mathcal{N}_{p_i}$ denotes a set of items considered relevant to $p_i$ (\textit i.e.,\xspace ground truth).
In this case, $\text{NDCG}_{p_i}@N$ is computed by:
\begin{equation}\label{NDCG}
\begin{aligned}
\text{NDCG}_{p_i}@N & = \dfrac {\text{DCG}_{p_i}@N}{\text{IDCG}_{p_i}@N}, \\
\text{DCG}_{p_i}@N & = \sum_{k=1}^{N}\dfrac {2^{y_{k}}-1}{\log_{2}{(k+1)}},
\end{aligned}
\end{equation}
where $\text{IDCG}_{p_i}@N$ is the ideal DCG at \textit{N}, \textit i.e.,\xspace for every item $i_{k}$ in $\mathcal{R}_{p_i}$, $y_{k}$ is set as $1$.
Second, MRR reflects the average inversed rankings of the first relevant item $i_{k}$ in $\mathcal{R}_{p_i}$. $\text{MRR}_{p_i}@N$ is computed by:
\begin{equation}\label{MRR}
\text{MRR}_{p_i}@N = \dfrac {1}{\text{rank}_{p_i}},
\end{equation}
where $\text{rank}_{p_i}$ refers to the rank position of the first relevant item in $\mathcal{R}_{p_i}$.
Third, AUC evaluates whether each method ranks a rated item higher than an unrated item.
That is, AUC$_{p_i}$ is computed by:
\begin{equation}\label{auc}
\text{AUC}_{p_i} = \dfrac {\sum_{q_j\in\mathcal{N}_{p_i}}\sum_{q_k\in\mathcal{I}\setminus\mathcal{N}_{p_i}}I(\hat{r}_{ij}>\hat{r}_{ik})}{\arrowvert \mathcal{N}_{p_i} \arrowvert \arrowvert \mathcal{I}\setminus\mathcal{N}_{p_i} \arrowvert},
\end{equation}
where $I(\cdot)$ is the indicator function.
Lastly, F1 score considers precision and recall together by taking their harmonic mean:
\begin{equation}
\text{F1}_{p_i}@N = 2 \cdot \dfrac {\text{Precision}_{p_i}@N \cdot \text{Recall}_{p_i}@N}{\text{Precision}_{p_i}@N+\text{Recall}_{p_i}@N},
\end{equation}
\begin{equation}\label{precision}
\begin{aligned}
\text{Precision}_{p_i}@N & = \dfrac {\arrowvert \mathcal{N}_{p_i}\bigcap \mathcal{R}_{p_i} \arrowvert}{\arrowvert \mathcal{R}_{p_i} \arrowvert}, \\
\text{Recall}_{p_i}@N & = \dfrac {\arrowvert \mathcal{N}_{p_i}\bigcap \mathcal{R}_{p_i} \arrowvert}{\arrowvert \mathcal{N}_{p_i} \arrowvert},
\end{aligned}
\end{equation}
where $\text{Precision}_{p_i}@N$ and $\text{Recall}_{p_i}@N$ denote precision and recall at $N$, respectively.
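For concreteness, a NumPy sketch of NDCG@$N$, MRR@$N$, and precision/recall at $N$ for a single user, following the definitions above (in particular, the IDCG convention that all $N$ positions are treated as relevant), is:
\begin{verbatim}
import numpy as np

def dcg(gains):
    return sum((2.0**g - 1.0) / np.log2(k + 2)
               for k, g in enumerate(gains))

def ndcg_at_n(recommended, relevant, n):
    gains = [1.0 if q in relevant else 0.0
             for q in recommended[:n]]
    return dcg(gains) / dcg([1.0] * n)

def mrr_at_n(recommended, relevant, n):
    for rank, q in enumerate(recommended[:n], start=1):
        if q in relevant:
            return 1.0 / rank
    return 0.0

def precision_recall_at_n(recommended, relevant, n):
    hits = len(set(recommended[:n]) & set(relevant))
    return hits / n, hits / len(relevant)

rec, rel = [5, 2, 9, 1], {2, 1, 7}   # toy ranked list / ground truth
print(ndcg_at_n(rec, rel, 4), mrr_at_n(rec, rel, 4),
      *precision_recall_at_n(rec, rel, 4))
\end{verbatim}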
\section{Future Directions}\label{sec:directions}
In this section, we discuss the limitations of GNN-based SocialRS\xspace methods and present several future research directions.
\subsection{Graph Augmentation in GNN-based SocialRS\xspace}
An intrinsic challenge of GNN-based SocialRS\xspace methods lies in the \textit{sparsity of the input data} (\textit i.e.,\xspace user-item interactions and user-user relations).
To mitigate this problem, some GNN-based SocialRS\xspace methods~\cite{yu2021sept,wu2022dcrec,du2022sdcrec,zhang2022cgl,li2022disgcn,wang2022dcan,sun2022motifres,yu2021mhcn} have explored more supervision signals from the input data so that such signals can be utilized as different views of the original graph structure.
Although many graph augmentation techniques~\cite{ding2022augmentation,zhao2022augmentation}, such as node/edge deletion and graph rewiring, have been proposed recently in the machine learning literature, existing GNN-based SocialRS\xspace methods only focus on adding edges between two users or between a user and an item~\cite{yu2021sept,yu2021mhcn,wu2022dcrec,zhang2022cgl,li2022disgcn,wang2022dcan,du2022sdcrec,sun2022motifres}.
Therefore, it is a promising direction to leverage extra self-supervision signals based on various augmentation techniques to learn user and item embeddings more efficiently and effectively.
\subsection{Trustworthy GNN-based SocialRS\xspace}
Existing GNN-based SocialRS\xspace methods have focused on improving their \textit{accuracy} by only relying on users' past feedback.
However, it is worth mentioning that there are other important ``beyond accuracy'' metrics, which we call trustworthiness\footnote{``Trustworthy'' is defined in the Oxford Dictionary as follows: an object or a person that you can rely on to be good, honest, sincere, etc.~\cite{zhang2022trustworthy}.}
according to~\cite{zhang2022trustworthy}.
Motivated by the importance of such metrics, various trustworthy GNN architectures have been proposed to incorporate core aspects of trustworthiness, including robustness, explainability, privacy, and fairness, in the context of GNN encoders~\cite{zhang2022trustworthy}.
One GNN-based SocialRS\xspace method is proposed in this direction to specifically address the privacy issue~\cite{liu2022fesog}.
In particular, Liu et al.~\cite{liu2022fesog} devised a framework that stores user privacy data only in local devices individually and analyzes them together via federated learning.
Thus, developing trustworthy GNN-based SocialRS\xspace is a wide open for research.
For example, consider robustness: bad actors may want to push certain products to certain users in a SocialRS\xspace setting; how robust existing GNN-based SocialRS\xspace methods would be against such attackers is an unanswered question, opening opportunities to create accurate as well as robust models.
\subsection{Heterogeneity}
In real-world graphs, nodes and their interactions are often multi-typed.
Such graphs, which are called heterogeneous graphs, convey rich information such as heterogeneous attributes, meta-path structures, and temporal properties.
Although HetGNN encoders have recently attracted attention in many domains (\textit e.g.,\xspace healthcare and cybersecurity)~\cite{wang2020heterogeneity}, there have been only a few attempts to leverage such heterogeneity in SocialRS\xspace~\cite{chen2021serec,wang2022dcan}.
Therefore, designing a HetGNN-based SocialRS\xspace method remains an open question for the future.
\subsection{Efficiency and Scalability}
Most real-world graphs are too large and also grow rapidly.
However, most GNN-based SocialRS\xspace methods are too complicated, thus facing difficulty scaling to such large-scale graphs.
Some works, including SocialLGN~\cite{liao2022sociallgn}, SEPT~\cite{yu2021sept}, and DcRec~\cite{wu2022dcrec}, have attempted to build more scalable models by removing the non-linear activation function, feature transformation, and self-connection, whereas Tao et al.~\cite{tao2022design} leveraged the knowledge distillation (KD) technique in SocialRS\xspace.
However, designing a highly scalable GNN architecture is an important problem that remains challenging to date.
\section{Taxonomy of Inputs}\label{sec:inputs}
In this section, we present a taxonomy of inputs for GNN-based SocialRS\xspace.
Figures~\ref{fig:input_type} and~\ref{fig:inputs_representation} depict the input types and their representations, respectively.
In the subsequent subsections, we describe each of these in detail.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figures/Input_type.pdf}
\caption{Overview of input types used by GNN-based SocialRS\xspace methods. }
\label{fig:input_type}
\end{figure*}
\subsection{Input Types: Types of Inputs to the Models}
In this subsection, we group the input types used by GNN-based SocialRS\xspace into 5 categories: user-item ratings, user-user social relations, attributes, knowledge graph (KG), and groups.
Table~\ref{tab:inputs} categorizes all papers based on the input data types they use.
\input{tab-inputs}
\subsubsection{\textbf{User-Item Rating}}
Users interact with different items as they rate them, thus forming the rating matrix $\mathbf{R}\in \mathbb{R}^{m \times n}$. Therefore, each user has a list of items that he/she has interacted with along with the corresponding rating.
The timestamp of the user-item interaction may also be available and can be exploited to recommend items to users at specific points in time.
Each rating can thus also be associated with a timestamp for that rating.
Some models exploit the temporal information to make more-effective recommendations in continuous time~\cite{bai2020tgnn,sun2020dgarecr,narang2021fuserec} or during a user session~\cite{yan2022ssdrec,gu2021egfrec,song2019dgrec,song2020dream}.
Furthermore, one may also have multi-typed user-item interactions. For example, a user may interact with an item positively (positive rating) or negatively (negative rating). Some models have distinguished among these different interaction types to predict each type more effectively~\cite{xu2020srhgnn}.
\subsubsection{\textbf{User-User Social}}
The second essential input to SocialRS\xspace is the social adjacency matrix $\mathbf{S} \in \mathbb{R}^{m \times m}$, storing user-user social relations.
People may be connected to each other via different kinds of social relations. For example, two users may be related if they are friends, if they
co-comment on an item, or if one follows the other. DH-HGCN~\cite{han2022dhhgcn} and BFHAN~\cite{zhao2021bfhan} consider such multifaceted, heterogeneous user-user relations in the social network.
\subsubsection{\textbf{Additional Features}}
\hfill \break
\vspace{1mm}
\indent\textbf{Attributes.}
Both user and items may have additional attributes that can be encoded by the models to make better social recommendations. User attributes are often features of user profiles on social media, \textit e.g.,\xspace age, sex, etc., while item attributes are often information about the items such as its price and category. Some models just incorporate user attributes~\cite{xiao2021mutualrec,seng2021atgcn}, some only item attributes~\cite{guo2020gnnsor,chen2021serec}, and others incorporate both~\cite{wu2019diffnet,wu2020diffnet++,song2021diffnetlg,jin2020megcn,jiang2021san}.
\vspace{1mm}
\textbf{Knowledge Graph (KG).}
Items are structured on a product site in the form of a knowledge graph, where items are related to each other if they have some mutual dependency. Models incorporate such dependencies between items as represented by this knowledge graph~\cite{tien2020kconvgraphrec,salamat2021heterographrec}.
\vspace{1mm}
\textbf{Groups.}
Users are often grouped together denoting a group structure among them. For instance, multiple users can form an online social group based on similar interests or hobbies. Models incorporate the group membership in addition to the social relations to model the social network more effectively~\cite{liao2022gman,chen2022igrec,leng2022glow}. User groups can also be formed based on the businesses that they are part of or are clients of, as in ~\cite{chen2022fbne}.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figures/Input_representation.pdf}
\caption{Overview of input representations used by GNN-based SocialRS\xspace methods.}
\label{fig:inputs_representation}
\end{figure*}
\input{tab-inputreps}
\subsection{Input Representations: Representation of Inputs within the Models}
In order to effectively use the available inputs with GNN-based models, SocialRS\xspace methods represent them as different graphs.
In particular, the input representations employed by GNN-based SocialRS\xspace can be grouped into 7 categories: U-U/U-I graphs, U-U-I graph, attributed graph, multiplex graph, U-U/U-I/I-I graphs, hypergraph, and decentralized.
Table~\ref{tab:inputreps} categorizes papers based on the input representation they develop using the input data.
\subsubsection{\textbf{U-U/U-I Graphs}}
The simplest representation of the input for social recommendation is to use separate graphs for a user-user social network and a user-item interaction network. The user-item interaction network is represented as a bipartite graph and the user-user social network is represented as a general undirected/directed graph. Information from the two graphs is encoded separately at the common user node and later aggregated. Most works follow this representation to encode users and items~\cite{fan2019graphrec,wu2019diffnet,fu2021dicer,wu2019danser,fan2022graphrecp,xiao2021mutualrec,xiao2020mgnn,li2020hidm,wu2020diffnet++,song2021diffnetlg,tao2022design,wu2022eagcn}.
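As an illustration, the two input graphs are typically materialized as sparse matrices before being fed to the GNN encoders; a SciPy sketch with hypothetical toy data is:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

m, n = 3, 4                                  # users, items (toy)
ui_pairs = [(0, 1), (0, 3), (1, 0), (2, 2)]  # user-item ratings
uu_pairs = [(0, 1), (1, 2)]                  # user-user relations

rows, cols = zip(*ui_pairs)                  # bipartite graph R
R = sp.csr_matrix((np.ones(len(ui_pairs)), (rows, cols)),
                  shape=(m, n))

su, sv = zip(*uu_pairs)                      # social graph S
S = sp.csr_matrix((np.ones(len(uu_pairs)), (su, sv)),
                  shape=(m, m))
S = S + S.T                                  # make it undirected
\end{verbatim}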
\subsubsection{\textbf{U-U-I Graph}}
Both kinds of user-user relations and user-item interactions can be modeled together by a single graph as well. Here, user-user edges and user-item edges in the graph need to be distinguished by the type of the end node. Many works thus merge the social relation edges and interaction edges together in a single graph to obtain node embeddings for both users and items~\cite{zhu2021shgcn,chen2022ssrgnn,miao2022melgn,shi2022sengr}.
\subsubsection{\textbf{Attributed Graph}}
Both user and item nodes may further contain features describing the corresponding entity. For example, users may have profile features while items may have their description features. These features are first encoded numerically and then represented explicitly as node attributes in the U-U/U-I graph or U-U-I graph to make effective recommendations~\cite{wu2019diffnet,wu2020diffnet++,song2021diffnetlg,xie2022sran,jiang2021san,song2022mrapr,seng2021atgcn}. These attributes are either fused with the learned embeddings or are used as initialization for the GNN layers.
\subsubsection{\textbf{Multiplex Graph}}
Users may be related to each other via multiple relationships while they may also interact with items in multiple ways. Such relationships are often represented using a multiplex network, \textit i.e.,\xspace using multiple layers of the U-U/U-I graph, where each layer represents a particular relation type~\cite{xu2020srhgnn,han2022dhhgcn,zhao2021bfhan}.
\subsubsection{\textbf{U-U/U-I/I-I Graphs}}
When information on item-item relations is available, an item-item knowledge graph is considered in addition to the U-U and U-I graphs. Item embeddings are now obtained separately one from the U-I interaction graph and the other from the item-item knowledge graph and then aggregated later to obtain the final item embedding ~\cite{tien2020kconvgraphrec,salamat2021heterographrec}.
\subsubsection{\textbf{Hypergraph}}
One may want to incorporate higher-order relations among users and items to explicitly establish organizational properties in the input such as (1) constructing a user-only hyperedge if a group of users are connected together in closed motifs, (2) constructing a user-item joint hyperedge if a group of users interacts with the same item, and (3) constructing an item-item hyperedge if one user interacts with a group of items. Models have been developed to include just user-item joint hyperedges~\cite{zhu2021shgcn}, both user-user and user-item joint hyperedges~\cite{yu2021mhcn}, and user-user and item-item hyperedges~\cite{han2022dhhgcn}.
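For instance, a user-item joint hyperedge can be encoded via a node-hyperedge incidence matrix; a minimal NumPy sketch with hypothetical toy data (one hyperedge per item, grouping the item with all users who interacted with it) is:
\begin{verbatim}
import numpy as np

num_users, num_items = 3, 2
interactions = [(0, 0), (1, 0), (2, 1), (0, 1)]  # (user, item)

# Rows: nodes (users first, then items); columns: hyperedges.
H = np.zeros((num_users + num_items, num_items))
for u, i in interactions:
    H[u, i] = 1               # users interacting with item i
    H[num_users + i, i] = 1   # the item joins its own hyperedge

print(H)
\end{verbatim}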
\subsubsection{\textbf{Decentralized}}
Centralized data storage is becoming infeasible in practice due to rising privacy concerns. Thus, instead of storing the complete U-U/U-I graphs together, a decentralized storage of the graphs is often required. Here, the edges for social relations and interactions of each user are stored locally at each user's local server such that only non-sensitive data is shared with the centralized server~\cite{liu2022fesog}.
\section{Introduction}
With the advent of online social network platforms (\textit e.g.,\xspace Facebook, Twitter, Instagram, etc.), there has been a surge of research efforts in developing social recommender systems (SocialRS\xspace), which simultaneously utilize user-user social relations along with user-item interactions to recommend relevant items to users.
Exploiting social relations in recommendation works well because of the effects of \textit{social homophily}~\cite{mcpherson2001birds} and \textit{social influence}~\cite{marsden1993network}:
(1) social homophily indicates that a user tends to connect herself to other users with similar attributes and preferences, and
(2) social influence indicates that users with direct or indirect relations tend to influence each other to make themselves become more similar.
Accordingly, SocialRS\xspace can effectively mitigate the data sparsity problem by exploiting social neighbors to capture the preferences of a sparsely interacting user.
Literature has shown that SocialRS\xspace can be applied successfully in various recommendation domains (\textit e.g.,\xspace product~\cite{wu2020diffnet++, wu2019danser}, music~\cite{yu2021sept,yu2021mhcn,yu2022esrf}, location~\cite{wu2022dcrec,seng2021atgcn,li2022atstggnn}, and image~\cite{wu2019diffnet,wu2022eagcn,tao2022design}), thereby improving user satisfaction.
Furthermore, techniques and insights explored from SocialRS\xspace can also be exploited in real-world applications other than recommendations.
For instance, García-Sánchez et al.~\cite{GarciaSanchezP20} leveraged SocialRS\xspace to design a decision-making system for marketing (\textit e.g.,\xspace advertisement), while Gasparetti et al.~\cite{GasparettiSM21} analyzed SocialRS\xspace in terms of community detection.
\input{fig-pub}
Motivated by such wide applicability, there has been an increasing interest in research on developing accurate SocialRS\xspace models.
In the early days, research focused on matrix factorization (MF) techniques~\cite{MaYLK08a,MaKL09,MaLK09,JamaliE10,MaZLLK11,0002LLL13,TangHGL13}.
However, MF-based methods cannot effectively model the complex (\textit i.e.,\xspace non-linear) relationships inherent in user-user social relations and user-item interactions~\cite{shokeen2020social}.
Motivated by this, most recent works have focused on applying deep-learning techniques to SocialRS\xspace, \textit e.g.,\xspace autoencoder~\cite{DengHXWW17,YingCXW16}, generative adversarial networks (GAN)~\cite{krishnan2019modular}, and graph neural networks (GNN)~\cite{fan2019graphrec,wu2019diffnet}.
In particular, since user-item interactions and user-user social relations can naturally be represented as graph data, GNN-based SocialRS\xspace has increasingly attracted attention in the literature.
As a demonstration, Figure~\ref{fig:pub} shows that the number of papers related to GNN-based SocialRS\xspace has increased consistently since 2019.
Given the growing and timely interest in this area, this paper surveys GNN-based SocialRS\xspace methods.
\subsection{Challenges}
Applying GNN into SocialRS\xspace is not trivial and faces the following challenges.
\vspace{1mm}
\textbf{Input representation}. The input data should be appropriately modeled as a heterogeneous graph structure. Many SocialRS\xspace methods build two separate graphs: one where nodes represent users and items, and edges represent user-item interactions; the other where nodes represent users and edges represent user-user social relations.
Thus, GNN methods for SocialRS\xspace need to extract knowledge from both networks simultaneously for accurate inference. This is in contrast with most regular GNNs, which consider only a single network.
Additionally, we note that the two networks come with valuable input features, such as user/item attributes, item knowledge/relations, and group information. Thus, GNN-based SocialRS\xspace methods fuse these features along with the network information.
In this survey, we discuss the input types used in GNN-based SocialRS\xspace methods and the different ways they are represented as graphs.
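As a concrete illustration, the following minimal Python sketch (our own toy example; all indices and sizes are hypothetical, not taken from any surveyed method) builds the two graphs described above as sparse adjacency matrices:

\begin{verbatim}
import numpy as np
import scipy.sparse as sp

# Toy data; all indices are hypothetical.
ui_pairs = [(0, 1), (0, 3), (1, 2), (2, 3)]  # user-item interactions
uu_pairs = [(0, 1), (1, 2)]                  # user-user social relations
m, n = 3, 4                                  # numbers of users and items

rows, cols = zip(*ui_pairs)
R = sp.csr_matrix((np.ones(len(ui_pairs)), (rows, cols)), shape=(m, n))

rows, cols = zip(*uu_pairs)
S = sp.csr_matrix((np.ones(len(uu_pairs)), (rows, cols)), shape=(m, m))
S = S + S.T  # symmetrize undirected friendships
\end{verbatim}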
\vspace{1mm}
\textbf{Design of GNN encoder}. The performance of GNN-based SocialRS\xspace methods relies heavily on their GNN encoders, which aim to represent users and items as low-dimensional embeddings.
For this reason, existing SocialRS\xspace methods have explored various design choices regarding GNN encoders and have adopted different architectures according to their goals.
For instance, many SocialRS\xspace methods employ the graph attention neural network (GANN)~\cite{velivckovic2017graph} to differentiate each user's preference for items or each user's influence on their social friends.
On the other hand, some methods~\cite{sun2020dgarecr,yan2022ssdrec,gu2021egfrec,narang2021fuserec,niu2021mgsr} use graph recurrent neural networks (GRNN)~\cite{peng2017cross,zayats2018conversation} to model the sequential behaviors of users.
It should be noted that GNN encoders for SocialRS\xspace need to simultaneously consider the characteristics of user-item interactions and user-user social relations.
This is in contrast with GNN encoders for non-SocialRS\xspace that model only user-item interactions.
In this survey, we discuss different types of GNN encoders used by SocialRS\xspace methods.
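To give a rough feel for what such encoders compute, the sketch below shows one simplified propagation layer that updates user embeddings from both the interaction graph and the social graph. It is a generic toy layer, not the encoder of any surveyed method; the sum-based fusion and all names are our assumptions.

\begin{verbatim}
import numpy as np

def social_gnn_layer(E_u, E_i, R_norm, S_norm, W):
    # E_u: (m, d) user embeddings; E_i: (n, d) item embeddings
    # R_norm: (m, n) normalized user-item adjacency
    # S_norm: (m, m) normalized user-user adjacency
    from_items = R_norm @ E_i    # aggregate rated items
    from_friends = S_norm @ E_u  # aggregate social neighbors
    return np.tanh((E_u + from_items + from_friends) @ W)
\end{verbatim}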
\vspace{1mm}
\textbf{Training}. The training of GNN-based SocialRS\xspace should be designed so that the learned embeddings reflect users' tastes and items' characteristics. To this end, SocialRS\xspace methods employ well-known loss functions, such as mean squared error (MSE), Bayesian personalized ranking (BPR)~\cite{RendleFGS09}, and cross-entropy (CE), to reconstruct user behaviors. Furthermore, to mitigate the data sparsity problem, some works additionally employ auxiliary loss functions such as self-supervised loss~\cite{liu2020selfsupervised} and group-based loss~\cite{leng2022glow,liao2022gman}.
It is worth mentioning that loss functions used by GNN-based SocialRS\xspace are designed so that rich structural information such as motifs and user attributes can be exploited. These are not considered by loss functions for non-SocialRS\xspace.
In this survey, we discuss the training remedies of GNN-based SocialRS\xspace methods to learn the user and item embeddings.
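For concreteness, the following sketch shows a plain BPR loss over a batch of (user, positive item, negative item) embedding triples; it is a minimal illustration of the general objective, not the exact loss of any specific SocialRS\xspace model.

\begin{verbatim}
import numpy as np

def bpr_loss(u, i_pos, i_neg, reg=1e-4):
    # u, i_pos, i_neg: (batch, d) embeddings of users,
    # observed items, and sampled negative items
    pos = np.sum(u * i_pos, axis=1)  # scores of positives
    neg = np.sum(u * i_neg, axis=1)  # scores of negatives
    rank = np.mean(np.logaddexp(0.0, -(pos - neg)))  # -log sigmoid
    l2 = reg * (np.sum(u**2) + np.sum(i_pos**2) + np.sum(i_neg**2))
    return rank + l2
\end{verbatim}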
\subsection{Related Surveys}
\input{tab-salesman}
Most of the existing surveys, which fully cover SocialRS\xspace papers, focus either on traditional methods~\cite{PapadimitriouSM12,TangHL13,YangGLS14,XuWZC15,DouYD16,ChenHCWZK18,ShokeenR18} (\textit e.g.,\xspace matrix factorization), feature information~\cite{shokeen2020studyFeatures} (\textit e.g.,\xspace context), or a specific application~\cite{GasparettiSM21} (\textit e.g.,\xspace community detection).
On the other hand, the other related surveys~\cite{Deng22,wangHW0SOC0Y21,wu2020gnnsurvey,gao2021gnnsurvey} focus on graph-based recommender systems, including GNN-based RS methods, but they only partially cover SocialRS\xspace papers.
A comparison between the current survey and the previous surveys is shown in Table~\ref{tab:salesman}.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{./Figures/Timeline2.pdf}
\caption{A timeline of GNN-based SocialRS\xspace methods. We categorize methods according to their GNN encoders: graph convolutional network (GCN), lightweight GCN (LightGCN), graph attention neural networks (GANN), heterogeneous GNN (HetGNN), graph recurrent neural networks (GRNN), hypergraph neural networks (HyperGNN), graph autoencoder (GAE), and hyperbolic GNN. It should be noted that some methods employ two or more GNN encoders in their architectures.}
\vspace{-0.5cm}
\label{fig:timeline}
\end{figure*}
Specifically, several survey papers on SocialRS\xspace have been published before 2019~\cite{PapadimitriouSM12,TangHL13,YangGLS14,XuWZC15,DouYD16,ChenHCWZK18,ShokeenR18}.
However, they only focus on traditional methods such as matrix factorization and collaborative filtering.
These surveys largely ignore methods that use modern-day deep-learning techniques, in particular GNNs.
More recent surveys discuss the taxonomy of social recommendation and begin to compare deep-learning-based techniques~\cite{shokeen2020studyFeatures,shokeen2020social,GasparettiSM21}.
However, Shokeen and Rana~\cite{shokeen2020studyFeatures}
only focus on the taxonomy of feature information regarding social relations,
such as context, trust, and group, used in SocialRS\xspace methods, while Gasparetti et al.~\cite{GasparettiSM21} only discuss SocialRS\xspace methods using community detection (CD) techniques.
Shokeen and Rana~\cite{shokeen2020social} include just one social recommendation method based on GNNs.
With the advent of GNNs in recommender systems, multiple surveys have been conducted on graph-based recommender systems~\cite{wangHW0SOC0Y21,wu2020gnnsurvey,gao2021gnnsurvey,Deng22}.
However, their focus is not on SocialRS\xspace, as they consider different kinds of recommender systems where graph learning is employed. They cover only a small selection of the most representative papers on GNN-based SocialRS\xspace. Thus, one cannot rely on these surveys to gain insights into the ever-growing field of GNNs for SocialRS\xspace.
As shown in Table~\ref{tab:salesman}, no survey paper exists in the literature that focuses specifically on GNN-based SocialRS\xspace methods. In the current work, we aim to fill this gap by providing a comprehensive and systematic survey on GNN-based SocialRS\xspace methods.
\input{fig-venue.tex}
\subsection{Contributions}
The main contributions of this survey are summarized as follows:
\begin{itemize}
\item \textbf{The First Survey in GNN-based SocialRS\xspace}: To the best of our knowledge, we are the first to provide a systematic review dedicated to GNN-based SocialRS\xspace methods. Most of the existing surveys focus either on traditional methods~\cite{PapadimitriouSM12,TangHL13,YangGLS14,XuWZC15,DouYD16,ChenHCWZK18,ShokeenR18} (\textit e.g.,\xspace matrix factorization), feature information~\cite{shokeen2020studyFeatures} (\textit e.g.,\xspace context), or a specific application~\cite{GasparettiSM21} (\textit e.g.,\xspace community detection).
The other related surveys~\cite{Deng22,wangHW0SOC0Y21,wu2020gnnsurvey,gao2021gnnsurvey} focus on graph-based recommender systems, but they partially cover SocialRS\xspace.
\item \textbf{Comprehensive Survey}: We systematically identify the relevant papers on GNN-based SocialRS\xspace by following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework~\cite{moher2009preferred}. Then, we comprehensively review them in terms of their inputs and architectures. Figure~\ref{fig:timeline} provides a brief timeline of GNN-based SocialRS\xspace methods. In addition, Figure~\ref{fig:venue} shows the number of relevant papers published in relevant journals (\textit e.g.,\xspace IEEE TKDE and ACM TOIS) and conferences (\textit e.g.,\xspace WWW, ACM SIGIR, and ACM CIKM).
\item \textbf{Novel Taxonomy of Inputs and Architectures}: We provide a novel taxonomy of inputs and architectures in GNN-based SocialRS\xspace methods, enabling researchers to capture the research trends in this field easily. An input taxonomy includes 5 groups of input type notations and 7 groups of input representation notations. On the other hand, an architecture taxonomy includes 8 groups of GNN encoder notations, 2 groups of decoder notations, and 12 groups (4 for primary losses and 8 for auxiliary losses) of loss function notations.
\item \textbf{Benchmark Datasets}: We review 17 benchmark datasets used to evaluate the performance of GNN-based SocialRS\xspace methods. We group the datasets into 8 domains (\textit i.e.,\xspace product, location, movie, image, music, bookmark, microblog, and miscellaneous). Also, we present some statistics for each dataset and a list of papers using the dataset.
\item \textbf{Future Directions}: We discuss the limitations of existing GNN-based SocialRS\xspace methods and provide several future research directions.
\end{itemize}
The rest of this survey paper is organized as follows.
In Section~\ref{sec:survey}, we introduce our PRISMA-based~\cite{moher2009preferred} survey methodology for thoroughly collecting the papers on GNN-based SocialRS\xspace.
In Section~\ref{sec:problem}, we define the social recommendation problem.
In Sections~\ref{sec:inputs} and~\ref{sec:arch}, we review 80 GNN-based SocialRS\xspace methods in terms of their inputs and architectures, respectively. In Section~\ref{sec:setup}, we summarize 17 benchmark datasets and 8 evaluation metrics widely used in GNN-based SocialRS\xspace methods.
Section~\ref{sec:directions} discusses future research directions.
Finally, we conclude the paper in Section~\ref{sec:conclusions}.
\section{Survey Methodology}\label{sec:survey}
Following the guidelines set by PRISMA~\cite{moher2009preferred}, the Scopus index was queried to filter for relevant literature. In particular, the following query was run on October 14, 2022, resulting in $2,151$ papers.
\begin{center}
{\footnotesize
\texttt{
\textcolor{red}{TITLE-ABS-KEY} (social \textcolor{blue}{AND} (recommendation \textcolor{blue}{OR} recommender) \textcolor{blue}{AND} graph) \textcolor{blue}{AND} (\textcolor{red}{PUBYEAR} > 2009) \textcolor{blue}{AND} (\textcolor{red}{LIMIT-TO} (\textcolor{red}{LANGUAGE}, ``English''))}}
\end{center}
To obtain the final list of relevant papers for the current survey, an iterative strategy of manual reviewing and filtering was carried out, following the PRISMA guidelines. Four expert annotators selected the relevant papers. Before reviewing the papers, a comprehensive and exhaustive discussion was held among the annotators to agree upon the definitions of the main concepts that a paper is to be examined for before including it in the survey. These included the concepts of graph neural networks and social recommendation.
Based on these guidelines, the annotators first jointly labeled a common batch of $200$ papers. Each paper in this batch was assigned one of three categories by each annotator: ``Yes'', ``No'', and ``Maybe''. ``Yes'' represents full confidence of relevance, ``Maybe'' represents some confidence of relevance, and ``No'' represents full confidence of irrelevance of the paper for the current survey. A high inter-annotator agreement of $0.845$ was reported on this set.
The remaining papers were then divided equally among the annotators without any overlap. The annotator assigned each paper a label of ``Yes'', ``No'', or ``Maybe''. Papers marked ``Maybe'' were reviewed again by the other annotators to reach a consensus. Finally, papers marked ``Yes'' were collected together and these served as the focus of our survey. Through this comprehensive process of filtering, we finally found $80$ papers that study GNN-based SocialRS\xspace for our survey paper.
\section{Notations and Problem Definition}\label{sec:problem}
The social recommendation problem is formulated as follows.
Let $\mathcal{U}=\{p_1,p_2,\cdots,p_m\}$ and $\mathcal{I}=\{q_1,q_2,\cdots,q_n\}$ be sets of $m$ users and $n$ items, respectively.
Also, $\mathbf{R}\in\mathbb{R}^{m\times n}$ represents a rating matrix that stores user-item ratings (that we call U-I rating).
$\mathbf{S}\in\mathbb{R}^{m\times m}$ represents a social matrix that stores user-user social relations (that we call U-U social).
In addition, $\mathcal{N}_{p_i}$ indicates a set of items rated by a user $p_i$.
In this paper, we use bold uppercase letters and bold lowercase letters to denote matrices and vectors, respectively.
Also, we use calligraphic letters to denote sets and graphs.
Table~\ref{tab:notations} summarizes a list of notations used in this paper.
The goal of GNN-based SocialRS\xspace methods is to solve the rating prediction and/or top-$N$ recommendation tasks.
Given $\mathbf{R}$ and $\mathbf{S}$, both tasks are formally defined as follows:
\begin{prob}[\textbf{Rating Prediction}]
\label{task:rp}
The goal is to predict the rating values for unrated items (\textit i.e.,\xspace $\mathcal{I}\setminus\mathcal{N}_{p_i}$) in $\mathbf{R}$ as close as possible to the ground truth.
\end{prob}
\begin{prob}[\textbf{Top-$\boldsymbol{N}$ Recommendation}]
\label{prob:topn}
The goal is to recommend the top-$N$ items that are most likely to be preferred by each user $p_i$ among $p_i$'s unrated items (\textit i.e.,\xspace $\mathcal{I}\setminus\mathcal{N}_{p_i}$).
\end{prob}
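Given a row of predicted scores for one user, Problem~\ref{prob:topn} reduces to masking the already-rated items and returning the $N$ highest-scoring remaining ones, as in this minimal sketch (our illustration; it assumes $N$ is smaller than the number of items):

\begin{verbatim}
import numpy as np

def top_n_for_user(scores, rated, N=10):
    # scores: (n,) predicted ratings of one user for all items
    # rated: indices of items already rated (the set N_{p_i})
    s = scores.astype(float).copy()
    s[list(rated)] = -np.inf          # exclude rated items
    top = np.argpartition(-s, N)[:N]  # N best unrated items
    return top[np.argsort(-s[top])]   # sorted by predicted score
\end{verbatim}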
\section{Introduction}
Determining the alignment of a group of biological sequences
is among the most common problems in computational biology.
The dynamic programming method of pairwise sequence alignment
can be readily extended to multiple sequences,
but it requires the computation of an $n$-dimensional matrix to align $n$ sequences.
Consequently, this method has exponential time and space complexity.
\emph{Progressive alignment} \cite{Thompson1994} offers
a substantial complexity reduction
at the cost of possible loss of the optimal solution.
Within this approach, subset alignments
are sequentially pairwise aligned to build the final multiple alignment.
The order of pairwise alignments is determined by
a guide-tree representing the phylogenetic relationships between sequences.
There are two drawbacks of the progressive alignment approach.
First, the accuracy of the guide-tree affects the quality of the final alignment.
This problem is particularly important in the field of phylogeny reconstruction,
because multiple alignment acts as a preprocessing step
in most prominent methods of inferring a phylogenetic tree of sequences.
It has been shown that, within this approach,
the inferred phylogeny is biased towards the initial guide-tree
\cite{Wong2008,Loeytynoja2008a}.
Second, only sequences belonging to currently aligned subsets
contribute to their pairwise alignment.
Even if a guide-tree reflects correct phylogenetic relationships,
these alignments may be inconsistent with remaining sequences
and the inconsistencies are propagated to further steps.
To address this problem, in recent programs
\cite{Notredame2000,Edgar2004a,Katoh2005a,Do2005,Roshan2006}
progressive alignment is usually preceded by \emph{consistency transformation}
(incorporating information from all pairwise alignments into the objective function)
and/or followed by \emph{iterative refinement} of
the multiple alignment of all sequences.
In the present paper we propose MSARC,
a new multiple sequence alignment algorithm that avoids guide-trees altogether.
MSARC constructs a graph with all residues from all sequences as nodes
and edges weighted with alignment affinities of its adjacent nodes.
Columns of best multiple alignments tend to form clusters in this graph,
so in the next step residues are clustered (see Figure~\ref{fig:overview}a).
Finally, MSARC refines the multiple alignment corresponding to the clustering.
Experiments on the BAliBASE dataset \cite{Thompson2005} show that our approach
is competitive with the best progressive methods and
significantly outperforms current non-progressive algorithms
\cite{Subramanian2005,Subramanian2008}.
Moreover, MSARC is the best aligner for sequence sets
with very low levels of conservation.
This feature makes MSARC a promising preprocessing tool
for phylogeny reconstruction pipelines.
\section{Methods}
MSARC aligns sequence sets in several steps.
In a preprocessing step, following Probalign \cite{Roshan2006},
\emph{stochastic alignments} are calculated for all pairs of sequences
and consistency transformation is applied to resulting
posterior probabilities of residue correspondences.
Transformed probabilities, called residue alignment affinities,
represent weights of an \emph{alignment graph}\footnote{Our notion of
alignment graph slightly differs from the one of Kececioglu \cite{Kececioglu1993}:
removing edges between clusters transforms the former into the latter.}.
MSARC clusters this graph with a top-down hierarchical method (Figure~\ref{fig:overview}c).
Division steps are based on the Fiduccia-Mattheyses
graph partitioning algorithm \cite{Fiduccia1982},
adapted to satisfy constraints imposed by the sequence order of residues.
Finally, multiple alignment corresponding to resulting clustering
is refined with the iterative improvement strategy
proposed in Probcons \cite{Do2005},
adapted to remove clustering artefacts.
\begin{figure}[!t]
\begin{minipage}[t]{0.35\columnwidth
\begin{center}
(a)\includegraphics[clip,width=0.9\columnwidth]{fig/goal}
\vspace{1ex}
\begin{tabular}{cccc}
(b)
&
{\includegraphics[clip,width=0.2\columnwidth]{fig/ambiguity}}
&
{\includegraphics[clip,width=0.2\columnwidth]{fig/conflict}}
&
{\includegraphics[bb=0bp 0bp 40bp 104bp,clip,width=0.2\columnwidth]{fig/valid}}
\\
&
Ambiguity
&
Conflict
&
Valid
\end{tabular}
\end{center}
\end{minipage}\hfill{
\begin{minipage}[t]{0.6\columnwidth
\vspace{-17ex}
(c)
\includegraphics[width=0.95\columnwidth]{fig/hierarchical}
\end{minipage}
\caption{Overview of our residue clustering approach.
(a) Alignment graph and its desired clustering.
Clusters form columns of a corresponding multiple sequence alignment.
(b) Clusterings inconsistent (left and middle) and consistent (right)
with the alignment structure.
(c) An example of hierarchical divisive clustering of residues.
The graph is recursively partitioned by finding a balanced minimal cut
while maintaining the ordering of residues until
all parts have at most one residue from each sequence.
Final alignment is constructed by concatenating these parts (alignment columns)
from left to right.}\label{fig:overview}
\end{figure}
\subsection{Pairwise stochastic alignment}
The concept of stochastic (or probability) alignment was proposed in \cite{Miyazawa1995}.
Given a pair of sequences, this framework defines
statistical weights of their possible alignments.
Based on these weights, for each pair of residues from both sequences,
the posterior probability of being aligned may be computed.
A consensus of highly weighted suboptimal alignments was shown
to contain residue pairs with significant probabilities
that agree with structural alignments,
even when the optimal alignment deviates from them significantly.
M\"uckstein et al. \cite{Mueckstein2002} suggest the use of the method as a starting point
for improved multiple sequence alignment procedures.
The statistical weight $\mathcal{W}\left(\mathcal{A}\right)$
of an alignment $\mathcal{A}$
is the product of the individual weights of (mis-)matches and gaps \cite{Yu2001}.
It may be obtained from the standard similarity scoring function $S(\mathcal{A})$
with the following formula:
\begin{equation}
\mathcal{W}\left(\mathcal{A}\right)=e^{\beta{S\left(\mathcal{A}\right)}}
\end{equation}
\nowe{where $\beta$ corresponds to the inverse of Boltzmann's constant
and should be adjusted to the match/mismatch scoring function $s(x,y)$
(in fact, $\beta$ simply rescales the scoring function).}
The probability distribution over all alignments $\mathcal{A}^{*}$
is achieved by normalizing this value. The normalization factor $Z$
is called the \emph{partition function} of the alignment problem \cite{Miyazawa1995},
and is defined as
\begin{equation}
Z=\sum_{\mathcal{A}\in\mathcal{A}^{*}}\mathcal{W}\left(\mathcal{A}\right)=\mathcal{\sum_{\mathcal{A}\in\mathcal{A}^{*}}}e^{\beta{S\left(\mathcal{A}\right)}}
\end{equation}
The probability $P\left(\mathcal{A}\right)$ of an alignment can be calculated by
\begin{equation}
P\left(\mathcal{A}\right)=\frac{\mathcal{W}\left(\mathcal{A}\right)}{Z}
=\frac{e^{\beta{S\left(\mathcal{A}\right)}}}{Z}
\end{equation}
Let $\mathbf{P}\left(a_{i}\sim b_{j}\right)$ denote
the posterior probability that residues $a_{i}$ and $b_{j}$ are aligned.
We can calculate it as the sum of probabilities of all alignments
with $a_{i}$ and $b_{j}$ in a common column (denoted by ${A}^{*}_{a_{i}\sim b_{j}}$):
\begin{multline}
\mathbf{P}\left(a_{i}\sim b_{j}\right)
= \sum_{\mathcal{A}\in\mathcal{A}^{*}_{a_{i}\sim b_{j}}\hspace{-4ex}}
P(\mathcal{A})
= \frac{{\displaystyle
\sum_{\mathcal{A}\in\mathcal{A}^{*}_{a_{i}\sim b_{j}}\hspace{-4ex}}}
e^{\beta S\left(\mathcal{A}\right)}}{Z} =\\
= \frac{\displaystyle \bigg(\sum_{\mathcal{A}_{i-1,j-1}\hspace{-5ex}}
e^{\beta S(\mathcal{A}_{i-1,j-1})}\bigg)
e^{\beta s(a_{i},b_{j})}
\bigg(\sum_{\widehat{\mathcal{A}}_{i+1,j+1}\hspace{-5ex}}
e^{\beta S(\widehat{\mathcal{A}}_{i+1,j+1})}\bigg)}
{Z} =\\
= \frac{Z_{i-1,j-1}\, e^{\beta s\left(a_{i},b_{j}\right)}\,\widehat{Z}_{i+1,j+1}}
{Z}
\end{multline}
Here we use the notation ${\textstyle \mathcal{A}}_{i,j}$ for an
alignment of the sequence prefixes $a_{1}\cdots a_{i}$ and $b_{1}\cdots b_{j}$,
and $\widehat{{\textstyle \mathcal{A}}}_{i,j}$ for an alignment of
the sequence suffixes $a_{i}\cdots a_{m}$ and $b_{j}\cdots b_{n}$.
Analogously, $Z_{i,j}$ is the partition function over the prefix alignments
and $\widehat{Z}_{i,j}$ is the (reverse) partition function over
the suffix alignments.
\nowe{An efficient algorithm for calculating the partition function can
be derived from the Gotoh maximum score algorithm \cite{Gotoh1982}
by replacing the maximum operations with additions.
From a few possible approaches \cite{Miyazawa1995,Yu2001,Mueckstein2002}
we chose a variant proposed by Miyazawa \cite{Miyazawa1995}
and applied in Probalign \cite{Roshan2006},
where insertions and deletions must be separated
by at least one match/mismatch position:}
\begin{eqnarray}
Z_{i,j}^{M} & = & \left(Z_{i-1,j-1}^{M}+Z_{i-1,j-1}^{E}+Z_{i-1,j-1}^{F}\right)e^{\beta s\left(a_{i},b_{j}\right)}\\
Z_{i,j}^{E} & = & Z_{i,j-1}^{M}e^{\beta g_{o}}+Z_{i,j-1}^{E}e^{\beta g_{ext}}\label{eq:partition-e}\\
Z_{i,j}^{F} & = & Z_{i-1,j}^{M}e^{\beta g_{o}}+Z_{i-1,j}^{F}e^{\beta g_{ext}}\\
Z_{i,j} & = & Z_{i,j}^{M}+Z_{i,j}^{E}+Z_{i,j}^{F}
\end{eqnarray}
The reverse partition function can be calculated using the same recursion
in reverse, starting from the ends of the aligned sequences.
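A direct transcription of these recursions into Python is shown below. This is a bare sketch: the boundary conditions for terminal gaps and the numerical rescaling needed for long sequences are omitted, and the scoring function $s$ and the parameters are supplied by the caller.

\begin{verbatim}
import numpy as np

def partition_function(a, b, s, beta=0.2, g_open=-22.0, g_ext=-1.0):
    # Gotoh recursion with max replaced by addition; ZM, ZE, ZF
    # follow the three-state recursion above.
    m, n = len(a), len(b)
    ZM = np.zeros((m + 1, n + 1))
    ZE = np.zeros((m + 1, n + 1))
    ZF = np.zeros((m + 1, n + 1))
    ZM[0, 0] = 1.0  # the empty alignment has weight 1
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            ZM[i, j] = (ZM[i-1, j-1] + ZE[i-1, j-1] + ZF[i-1, j-1]) \
                       * np.exp(beta * s(a[i-1], b[j-1]))
            ZE[i, j] = ZM[i, j-1] * np.exp(beta * g_open) \
                       + ZE[i, j-1] * np.exp(beta * g_ext)
            ZF[i, j] = ZM[i-1, j] * np.exp(beta * g_open) \
                       + ZF[i-1, j] * np.exp(beta * g_ext)
    return ZM[m, n] + ZE[m, n] + ZF[m, n]
\end{verbatim}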
\subsection{Alignment graphs}
Probabilities
$\mathbf{P}\left(a_{i}\sim b_{j}\right)$
may be viewed as a representation of a bipartite graph
with nodes corresponding to residues $a_i$ and $b_j$
and edges weighted with residue alignment affinity.
Given a set $S$ of $k$ sequences to be aligned,
we would like to analogously represent their residue alignment affinity
by a $k$-partite weighted graph.
It may be obtained by joining pairwise alignment graphs
for all pairs of $S$-sequences.
However, separate computation of edge weights for each pair of sequences
does not exploit information included in the remaining alignments.
In order to incorporate correspondence with residues from other sequences,
we perform a \emph{consistency transformation} \cite{Notredame2000,Do2005}.
It re-estimates the residue alignment affinity
according to the following formula:
\nowe{\begin{equation}
\mathbf{P}^{\prime}\left(x_{i}\sim y_{j} \right)
\leftarrow \frac{{\displaystyle \sum_{z\in S}}{\displaystyle \sum_{l=1}^{|z|}}
\mathbf{P}\left(x_{i}\sim z_{l}\right)
\mathbf{P}\left(z_{l}\sim y_{j}\right)}{\left|S\right|}
\end{equation}}
If $P_{xy}$ is a matrix of current residue alignment affinities
for sequences $x$ and $y$, the matrix form equivalent
transformation is
\begin{equation}
P_{xy}^{\prime}\leftarrow\frac{{\displaystyle \sum_{z\in S}}P_{xz}P_{zy}}{\left|S\right|}\label{eq:consistency-matrix}
\end{equation}
\nowe{The consistency transformation may be iterated any number of times,
but excessive iterations blur the structure of residue affinity.
Following Probalign \cite{Roshan2006} and ProbCons \cite{Do2005}
MSARC performs it twice by default.}
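In code, the matrix form of the transformation is a few lines of NumPy. The sketch below assumes that the dictionary holds a matrix for every ordered pair of sequences, with $P_{xx}$ taken to be the identity and $P_{yx}=P_{xy}^{T}$ (this symmetric completion is our assumption).

\begin{verbatim}
import numpy as np

def consistency(P, seqs, rounds=2):
    # P[(x, y)]: affinity matrix between sequences x and y
    for _ in range(rounds):
        Pnew = {}
        for (x, y), Pxy in P.items():
            acc = np.zeros_like(Pxy)
            for z in seqs:
                acc += P[(x, z)] @ P[(z, y)]
            Pnew[(x, y)] = acc / len(seqs)
        P = Pnew
    return P
\end{verbatim}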
\subsection{Residue clustering}
Columns of any multiple alignment form a partition of the set of sequence residues.
The main idea of MSARC is to reconstruct the alignment
by clustering an alignment graph into columns.
The clustering method must satisfy constraints imposed by alignment structure.
First, each cluster may contain at most one residue from a single sequence.
Second, the set of all clusters must be orderable
consistently with sequence orders of their residues.
Violation of the first constraint will be called \emph{ambiguity},
while violation of the second one -- \emph{conflict}
(see Figure~\ref{fig:overview}b).
Towards this objective, MSARC applies top-down hierarchical clustering
(see Figure~\ref{fig:overview}c).
Within this approach, the alignment graph is recursively split into two parts
until no ambiguous cluster is left.
Each partition step results from a single cut through all sequences,
so clusterings are conflict-free at each step of the procedure.
Consequently, the final clustering represents a proper multiple alignment.
Optimal clustering is expected
to maximize residue alignment affinity within clusters
and minimize it between them.
Therefore, the partition selection in recursive steps of the clustering procedure
should minimize the sum of weights of edges cut by the partition.
This is in fact the objective of the well-known problem of \emph{graph partitioning},
i.e. dividing graph nodes into roughly equal parts such that
the sum of weights of edges connecting nodes in different parts is minimized.
The Fiduccia-Mattheyses algorithm \cite{Fiduccia1982} is an efficient
heuristic for the graph partitioning problem.
After selecting an initial, possibly random partition, it calculates
for each node the change in cost caused by moving it between parts,
called \textit{gain}.
Subsequently, single nodes are greedily moved between partitions based on the maximum gain
and gains of remaining nodes are updated.
The process is repeated in \textit{passes},
where each node can be moved only once per pass.
The best partition found in a pass is chosen
as the initial partition for the next pass.
The algorithm terminates when a pass fails to improve the partition.
\nowe{Grouping single moves into passes helps the algorithm to escape local optima,
since intermediate partitions in a pass may have negative gains.}
An additional balance condition is enforced,
disallowing movement from a partition that contains
less than a minimum desired number of nodes.
Fiduccia-Mattheyses algorithm needs to be modified
in order to deal with alignment graphs.
Mainly, residues are not moved independently;
since the graph topology has to be maintained,
moving a residue involves moving all the residues
positioned between it and a current cut point on its sequence.
This modification implies further changes
in the design of data structures for gain processing.
\nowe{Next, the sizes of parts in considered partitions cannot differ
by more than the maximum cluster size in a final clustering,
i.e., the number of aligned sequences.}
This choice yields a minimal search space that still contains partitions
consistent with all possible multiple alignments.
In the initial partition, sequences are cut at their midpoints.
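Since every partition is a single cut through all sequences, a candidate partition can be encoded by one cut point per sequence. The sketch below (our simplification, with a hypothetical data layout) evaluates the objective that the adapted Fiduccia-Mattheyses moves try to reduce; it assumes each unordered pair of sequences is stored once, to avoid double counting.

\begin{verbatim}
def cut_weight(cut, P):
    # cut[x]: number of leading residues of sequence x placed
    # in the left part; P[(x, y)]: affinity matrix of the pair
    total = 0.0
    for (x, y), Pxy in P.items():
        total += Pxy[:cut[x], cut[y]:].sum()  # left(x) -- right(y)
        total += Pxy[cut[x]:, :cut[y]].sum()  # right(x) -- left(y)
    return total
\end{verbatim}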
The Fiduccia-Mattheyses heuristic may be optionally extended
with a \emph{multilevel} scheme \cite{Hendrickson1995}.
In this approach increasingly coarse approximations of the graph
are created by an iterative process called \emph{coarsening}.
At each iteration step selected pairs of nodes are merged into single nodes.
Adjacent edges are merged accordingly and weighted with sums of original weights.
The final coarsest graph is partitioned using Fiduccia-Mattheyses algorithm.
Then the partition is projected back to the original graph
through the series of \emph{uncoarsening} operations,
each of which is followed by a Fiduccia-Mattheyses based refinement.
Because the last refinement is applied
to the original graph, the multilevel scheme in fact
reduces the problem of selecting an initial partition
to the problem of selecting pairs of nodes to be merged.
In alignment graphs only neighboring nodes can be merged, so
MSARC just merges consecutive pairs of neighboring nodes.
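One coarsening step then amounts to halving residue positions and summing the weights of merged edges, as in this sketch (our illustration; the edge and residue bookkeeping structures are hypothetical):

\begin{verbatim}
from collections import defaultdict

def coarsen(edges, seq_of, pos_of):
    # edges: {(u, v): weight}; seq_of[u] and pos_of[u] give the
    # sequence id and position of residue u; consecutive residues
    # (2k, 2k+1) of each sequence merge into one super-node
    def node(u):
        return (seq_of[u], pos_of[u] // 2)
    coarse = defaultdict(float)
    for (u, v), w in edges.items():
        su, sv = node(u), node(v)
        if su != sv:
            coarse[(su, sv)] += w  # merged parallel edges add up
    return dict(coarse)
\end{verbatim}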
\subsection{Refinement}
\begin{figure}[tbh]
\begin{center}
\includegraphics[height=6ex]{fig/partition/single}\\
(a) Fiduccia-Mattheyses partitioning\\
\includegraphics[height=6ex]{fig/partition/multi}\\
(b) Multilevel partitioning\\
\includegraphics[height=6ex]{fig/refine/single}\\
(c) Refined Fiduccia-Mattheyses partitioning\\
\includegraphics[height=6ex]{fig/refine/multi}\\
(d) Refined multilevel partitioning\\
\end{center}
\caption{\nowe{Example visualization of the alignment produced by the graph partitioning
methods alone (ab) and graph partitioning followed by refinement (cd).
Residue colors reflect how well the column is aligned
based on residue match probabilities (darker is better).
Partition cuts are colored to show the order of partitioning
with darker cuts being performed earlier.
}}
\label{fig:part_example}
\end{figure}
\nowe{An example of alignment columns produced by residue clustering can be
seen in Figure \ref{fig:part_example}(ab).
Unfortunately, the right parts of the alignments contain many superfluous spaces
that could easily be removed manually.}
\nowe{Therefore we decided to add a refinement step,
following the method used in ProbCons \cite{Do2005}.}
Sequences are split into two groups and the groups are pairwise re-aligned.
Re-alignment is performed using the Needleman-Wunsch algorithm
with the score for each pair of positions defined as the sum of posterior
probabilities for all non-gap pairs and zero gap-penalty.
Since gap-penalties are not used,
every such refinement iteration creates a new alignment of equal or greater expected accuracy.
\nowe{First, each sequence is re-aligned with the remaining sequences,
since such a division is very efficient in removing superfluous spaces.}
Next, several randomly selected sequence subsets are re-aligned against the rest.
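The score table of one such re-alignment can be computed as below; this sketch (our simplification, with a hypothetical pair\_prob interface) fills the Needleman-Wunsch matrix with zero gap penalty and omits the traceback that produces the merged alignment.

\begin{verbatim}
import numpy as np

def realign_score(groupA, groupB, pair_prob):
    # groupA, groupB: lists of alignment columns (lists of residues)
    # pair_prob(r, s): posterior probability that r and s align
    def match(ca, cb):
        return sum(pair_prob(r, s) for r in ca for s in cb)
    la, lb = len(groupA), len(groupB)
    D = np.zeros((la + 1, lb + 1))  # zero gap penalty: borders stay 0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            D[i, j] = max(D[i-1, j-1] + match(groupA[i-1], groupB[j-1]),
                          D[i-1, j],   # gap column in group B
                          D[i, j-1])   # gap column in group A
    return D[la, lb]
\end{verbatim}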
\nowe{Figures \ref{fig:part_example}(cd) show the results
of refining the alignments from Figures \ref{fig:part_example}(ab).
Refinement removed superfluous spaces from the clustering process
and optimized the alignment.
Note that the final post-refinement alignments turned out to be the same for both
Fiduccia-Mattheyses and multilevel method of graph partitioning.}
\section{Results}
\subsection{Benchmark data and methodology}
MSARC was tested against the BAliBASE 3.0 benchmark database \cite{Thompson2005}.
It contains manually
refined reference alignments based on 3D structural superpositions.
Each alignment contains core-regions that correspond to the most reliably
alignable sections of the alignment. Alignments are divided into five
sets designed to evaluate performance on varying types of problems:
\begin{itemize}
\item[{\textsc{rv1x}}] Equidistant sequences with two different levels
of conservation
\begin{itemize}
\item[{\textsc{rv11}}] very divergent sequences (<20\% identity)
\item[{\textsc{rv12}}] medium to divergent sequences (20-40\% identity)
\end{itemize}
\item[{\textsc{rv20}}] Families aligned with a highly divergent ``orphan''
sequence
\item[{\textsc{rv30}}] Subgroups with <25\% residue identity between groups
\item[{\textsc{rv40}}] Sequences with N/C-terminal extensions
\item[{\textsc{rv50}}] Internal insertions
\end{itemize}
BAliBASE 3.0 also provides a program
comparing given alignments with a reference one.
Alignments are scored according to two metrics: a sum-of-pairs (SP) score,
showing the ratio of residue pairs that are correctly aligned,
and a total column (TC) score, showing the ratio of correctly aligned columns.
Both scores can be applied to full sequences or just the core-regions.
Two variants of MSARC, one with the multilevel Fiduccia-Mattheyses algorithm (MSARC-ML)
and one with the basic Fiduccia-Mattheyses algorithm (MSARC-FM),
were tested on the full-length sequences
and scored based on the correct alignment of core-regions.
\nowe{The results were compared to
CLUSTAL $\mathrm{\Omega}$ \cite{Thompson1994,Sievers2011} ver. 1.1.0,
DIALIGN-T \cite{Subramanian2005} ver. 0.2.2,
DIALIGN-TX \cite{Subramanian2008} ver. 1.0.2,
MAFFT \cite{Katoh2005a}
ver. 6.903, MUSCLE \cite{Edgar2004a} ver. 3.8.31,
MSAProbs \cite{Liu2009} ver. 0.9.7, Probalign \cite{Roshan2006} ver. 1.4,
ProbCons \cite{Do2005} ver. 1.12 and T-Coffee \cite{Notredame2000} ver. 9.02.}
\nowe{All the programs were executed with their default parameters.
In the case of MSARC, default parameters of \emph{stochastic alignment},
\emph{consistency transformation} and \emph{iterative refinement} steps
follow the defaults of corresponding steps of Probalign and ProbCons.
Namely, MSARC was run with
Gonnet 160 similarity matrix \cite{Gonnet1992},
gap penalties of $-22$, $-1$ and $0$ for gap open, extension and
terminal gaps respectively, $\beta=0.2$, a
cut-off value for posterior probabilities of $0.01$
(values smaller than the cutoff are set to $0$ and
operations designed for sparse matrices are
used in order to speed up computations),
two iterations of the consistency transformation and
$100$ iterations of iterative refinement.}
\subsection{Aligner comparison}
\begin{table}[!t]
\caption{Performance on BAliBASE 3.0\label{tab:results}}
\begin{center}
{\begin{tabular}{llcccccccclr}
\hline
& & \multicolumn{8}{c}{SP/TC scores} & & Computation \\
{Aligner} & & {all} & \textsc{rv11} & \textsc{rv12} & \textsc{rv20} & \textsc{rv30} & \textsc{rv40} & \textsc{rv50} & \textsc{bb40037} & & {Time}\\
\hline \\[1ex]
MSARC-ML & & $\displaystyle\frac{87.6}{57.3}$ & $\displaystyle\frac{\mathbf{70.1}}{\mathbf{46.1}}$ & $\displaystyle\frac{94.5}{85.6}$ & $\displaystyle\frac{92.5}{40.7}$ & $\displaystyle\frac{83.4}{45.7}$ & $\displaystyle\frac{\mathbf{93.1}}{\mathbf{63.3}}$ & $\displaystyle\frac{88.7}{51.6}$ & $\displaystyle\frac{\mathbf{97.1}}{\mathbf{70.0}}$ & & ${33:49:37}$\\ \\[1ex]
MSARC-FM & & $\displaystyle\frac{87.5}{57.1}$ & $\displaystyle\frac{70.0}{46.0}$ & $\displaystyle\frac{94.5}{85.6}$ & $\displaystyle\frac{92.5}{40.9}$ & $\displaystyle\frac{82.8}{45.0}$ & $\displaystyle\frac{93.0}{62.9}$ & $\displaystyle\frac{88.6}{51.7}$ & $\displaystyle\frac{\mathbf{97.1}}{\mathbf{70.0}}$ & & ${22:14:19}$\\ \\[1ex]
{CLUSTAL $\mathrm{\Omega}$} & & $\displaystyle\frac{84.0}{55.4}$ & $\displaystyle\frac{59.0}{35.8}$ & $\displaystyle\frac{90.6}{78.9}$ & $\displaystyle\frac{90.2}{45.0}$ & $\displaystyle\frac{86.2}{57.5}$ & $\displaystyle\frac{90.2}{57.9}$ & $\displaystyle\frac{86.2}{53.3}$ & $\displaystyle\frac{61.2}{0.0}$ & & $\mathbf{12:15}$\\ \\[1ex]
DIALIGN-T & & $\displaystyle\frac{77.3}{42.8}$ & $\displaystyle\frac{49.3}{25.3}$ & $\displaystyle\frac{88.8}{72.5}$ & $\displaystyle\frac{86.3}{29.2}$ & $\displaystyle\frac{74.7}{34.9}$ & $\displaystyle\frac{82.0}{45.2}$ & $\displaystyle\frac{80.1}{44.2}$ & $\displaystyle\frac{52.6}{0.0}$ & & ${1:13:21}$\\ \\[1ex]
DIALIGN-TX & & $\displaystyle\frac{78.8}{44.3}$ & $\displaystyle\frac{51.5}{26.5}$ & $\displaystyle\frac{89.2}{75.2}$ & $\displaystyle\frac{87.9}{30.5}$ & $\displaystyle\frac{76.2}{38.5}$ & $\displaystyle\frac{83.6}{44.8}$ & $\displaystyle\frac{82.3}{46.6}$ & $\displaystyle\frac{52.8}{0.0}$ & & ${1:36:05}$\\ \\[1ex]
MAFFT & & $\displaystyle\frac{86.7}{58.4}$ & $\displaystyle\frac{65.3}{42.8}$ & $\displaystyle\frac{93.6}{83.8}$ & $\displaystyle\frac{92.5}{44.6}$ & $\displaystyle\frac{85.9}{58.1}$ & $\displaystyle\frac{91.5}{59.0}$ & $\displaystyle\frac{90.1}{59.4}$ & $\displaystyle\frac{56.4}{0.0}$ & & ${54:04}$\\ \\[1ex]
MUSCLE & & $\displaystyle\frac{81.9}{47.5}$ & $\displaystyle\frac{57.2}{31.8}$ & $\displaystyle\frac{91.5}{80.4}$ & $\displaystyle\frac{88.9}{35.0}$ & $\displaystyle\frac{81.4}{40.9}$ & $\displaystyle\frac{86.5}{45.0}$ & $\displaystyle\frac{83.5}{45.9}$ & $\displaystyle\frac{48.4}{0.0}$ & & ${23:32}$\\ \\[1ex]
MSAProbs & & $\displaystyle\frac{\mathbf{87.8}}{\mathbf{60.7}}$ & $\displaystyle\frac{68.2}{44.1}$ & $\displaystyle\frac{94.6}{\mathbf{86.5}}$ & $\displaystyle\frac{\mathbf{92.8}}{\mathbf{46.4}}$ & $\displaystyle\frac{\mathbf{86.5}}{\mathbf{60.7}}$ & $\displaystyle\frac{92.5}{62.2}$ & $\displaystyle\frac{\mathbf{90.8}}{\mathbf{60.8}}$ & $\displaystyle\frac{59.5}{0.0}$ & & ${6:43:51}$\\ \\[1ex]
{Probalign} & & $\displaystyle\frac{87.6}{58.9}$ & $\displaystyle\frac{69.5}{45.3}$ & $\displaystyle\frac{\mathbf{94.6}}{86.2}$ & $\displaystyle\frac{92.6}{43.9}$ & $\displaystyle\frac{85.3}{56.6}$ & $\displaystyle\frac{92.2}{60.3}$ & $\displaystyle\frac{88.7}{54.9}$ & $\displaystyle\frac{54.2}{0.0}$ & & ${4:31:41}$\\ \\[1ex]
{ProbCons} & & $\displaystyle\frac{86.4}{55.8}$ & $\displaystyle\frac{67.0}{41.7}$ & $\displaystyle\frac{94.1}{85.5}$ & $\displaystyle\frac{91.7}{40.6}$ & $\displaystyle\frac{84.5}{54.4}$ & $\displaystyle\frac{90.3}{53.2}$ & $\displaystyle\frac{89.4}{57.3}$ & $\displaystyle\frac {59.3}{0.0}$ & & ${6:56:32}$\\ \\[1ex]
{T-Coffee} & & $\displaystyle\frac{85.7}{55.1}$ & $\displaystyle\frac{65.5}{40.9}$ & $\displaystyle\frac{93.9}{84.8}$ & $\displaystyle\frac{91.4}{40.1}$ & $\displaystyle\frac{83.7}{49.0}$ & $\displaystyle\frac{89.2}{54.5}$ & $\displaystyle\frac{89.4}{58.5}$ & $\displaystyle\frac{50.9}{0.0}$ & & ${13:53:02}$\\ \\[1ex]
\hline
\end{tabular}}
\end{center}
\nowe{{Columns 2-9 show the mean SP and TC scores for each alignment
algorithm on the whole BAliBASE dataset,
each of its series and case \textsc{bb40037}.
The last column presents total CPU
computation time (hh:mm:ss).
All scores are multiplied by 100.
Best results in each column are shown in bold.}}
\end{table}
Table~\ref{tab:results} shows the SP and TC
scores obtained by the alignment algorithms on
the BAliBASE 3.0 benchmark.
MSARC-ML has slightly better accuracy than MSARC-FM.
Both variants of MSARC substantially outperform DIALIGN-T
(the only non-progressive method in the test) and DIALIGN-TX
(a progressive extension of DIALIGN-T).
Moreover, MSARC achieves accuracy similar to the leading alignment methods:
MSAProbs, Probalign and ProbCons.
\begin{table}[!t]
\caption{Significance of differences in BAliBASE 3.0
SP/TC scores\label{tab:p-values}}
\begin{center}
{\begin{tabular}{lccccccc}
\hline
{SP scores} & \textsc{rv11} & \textsc{rv12} & \textsc{rv20} & \textsc{rv30} & \textsc{rv40} & \textsc{rv50} & {Total}\\
\hline
{Clustal $\mathrm{\Omega}$} & {+3.8e-7} & {+1.1e-5} & {+0.0031} & {-0.047} & {+4.2e-6} & {+0.012} & {+8.7e-15}\\
DIALIGN-T & {+8.6e-8} & {+7.7e-9} & {+1.3e-7} & {+2.7e-6} & {+2.1e-9} & {+0.00098} & {+5.3e-36}\\
DIALIGN-TX & {+1.0e-7} & {+6.2e-8} & {+2.3e-7} & {+8.7e-6} & {+2.8e-9} & {+0.0017} & {+3.1e-34}\\
MAFFT & {+0.0031} & {+0.00085} & {-(0.64)} & {-0.0009} & {+0.0005} & {-(0.072)} & {+0.028}\\
MUSCLE & {+4.5e-6} & {+1.3e-6} & {+0.0002} & {+(0.24)} & {+2.5e-8} & {+0.006} & {+6.8e-22}\\
MSAProbs & {+0.015} & {-(0.56)} & {-0.016} & {-1.9e-5} & {+(0.39)} & {-0.0041} & {-0.0025}\\
{Probalign} & {+(0.16)} & {-(0.77)} & {-0.048} & {-0.0099} & {+(0.66)} & {-(0.85)} & {-(0.067)}\\
{ProbCons} & {+0.0070} & {+0.037} & {+0.032} & {-(0.11)} & {+0.0014} & {-(0.17)} & {+0.0018}\\
{T-Coffee} & {+0.001} & {+0.005} & {+0.021} & {-(0.40)} & {+0.0001} & {-(0.077)} & {+7.1e-6}\\
\hline
{TC scores} & \textsc{rv11} & \textsc{rv12} & \textsc{rv20} & \textsc{rv30} & \textsc{rv40} & \textsc{rv50} & {Total}\\
\hline
{Clustal $\Omega$} & {+2.8e-5} & {+0.0004} & {-0.025} & {-0.0018} & {+(0.11)} & {-(0.84)} & {+(0.096)}\\
DIALIGN-T & {+1.5e-6} & {+2.2e-8} & {+9.6e-5} & {+0.0024} & {+4.9e-8} & {+0.027} & {+3.6e-26}\\
DIALIGN-TX & {+1.3e-6} & {+4.0e-7} & {+0.00040} & {+0.038} & {+1.3e-7} & {+(0.066)} & {+9.5e-23}\\
MAFFT & {+(0.11)} & {+0.005} & {-(0.052)} & {-0.0007} & {+(0.07)} & {-(0.062)} & {-(0.55)}\\
MUSCLE & {+9.9e-5} & {+0.0002} & {+(0.06)} & {+(0.76)} & {+2.2e-6} & {+0.009} & {+5.8e-13}\\
MSAProbs & {+(0.13)} & {-(0.22)} & {-0.0016} & {-8.5e-5} & {+(0.076)} & {-0.0014} & {-5.4e-7}\\
{Probalign} & {+(0.54)} & {-(0.11)} & {-0.00062} & {-0.0006} & {+(0.087)} & {-(0.36)} & {-1.9e-6}\\
{ProbCons} & {+0.043} & {-(0.69)} & {-(0.31)} & {-0.011} & {+0.017} & {-(0.062)} & {+(0.84)}\\
{T-Coffee} & {+0.003} & {+(0.10)} & {+(0.75)} & {-(0.11)} & {+(0.12)} & {-0.0072} & {+(0.61)}\\
\hline
\end{tabular}}
\end{center}
{{
Entries show $p$-values indicating the significance of the mean difference of
SP/TC scores between MSARC-ML and other aligners
as measured using the Wilcoxon matched-pair signed-rank test.
A $+$ means that MSARC had a higher mean score
while a $-$ means MSARC had a lower mean score.
Nonsignificant $p$-values (>0.05) are shown in parentheses.}}
\end{table}
The differences are not significant in most cases
(see Table~\ref{tab:p-values}) and
correspond with the structure of benchmark series --
MSARC shows the best
results for test series \textsc{rv11} and \textsc{rv40}, and the worst
performance on \textsc{rv20} and \textsc{rv30}.
Distances in \textsc{rv20} and \textsc{rv30} families
are particularly well represented by phylogenetic trees
(low similarity between highly conserved subgroups).
On the other hand, series \textsc{rv11}
contains highly divergent sequences for which guide-tree is poorly informative,
even if it represents the correct phylogeny,
and \textsc{rv40} contains sequences
with N/C-terminal extensions which may affect the accuracy
of the estimated phylogeny.
\begin{figure}[!t]
\begin{center}
\begin{tabular}{lc}
\begin{tabular}{l}
BAliBASE
\\\\(a)
\end{tabular}
&
{\includegraphics[bb=-2500bp 0bp 9084bp 368bp,width=.75\columnwidth,height=0.2\columnwidth]{fig/bb40037/balibase}
}\\
\begin{tabular}{l}
MSARC
\\\\(b)
\end{tabular}
&
{\includegraphics[bb=-2200bp 0bp 9384bp 368bp,width=.75\columnwidth,height=0.2\columnwidth]{fig/bb40037/msarc}
}\\
\begin{tabular}{l}
Probalign
\\\\(c)
\end{tabular}
&
{\includegraphics[bb=-2500bp 0bp 9084bp 368bp,width=.75\columnwidth,height=0.2\columnwidth]{fig/bb40037/probalign}
}\\
\begin{tabular}{l}
MSAProbs
\\\\(d)
\end{tabular}
&
{\includegraphics[width= .75\columnwidth,height=0.2\columnwidth]{fig/bb40037/msaprobs}
}
\end{tabular}
\end{center}
\caption{Visualization of reference (a) and reconstructed (bcd) alignments
for test case \textsc{bb40037.}
In all alignments sequences are ordered accordingly.
Each sequence is colored based on the evolutionary distance to its
neighbors in a phylogenetic tree, such that families
of related sequences have similar colors. Trees for (a) and (b)
are computed with the PhyML 3.0 program \cite{Guindon2010},
using the maximum parsimony method. Trees for (c) and (d) are
the guide-trees used by those aligners.\label{fig:test-bb40037-visualisation}}
\end{figure}
We illustrate this observation with an example of test case \textsc{bb40037}.
As is shown in column 9 of Table \ref{tab:results},
MSARC outperforms other methods by a large margin.
The TC scores of zero mean that each of these alignment methods has shifted
at least one sequence from its correct position relative to the other
sequences.
Figure \ref{fig:test-bb40037-visualisation} presents the
structure of the reference alignment, as well as
alignments generated by MSARC, Probalign and MSAprobs.
The large family of red, orange and yellow colored sequences near the bottom
has been misaligned by the progressive methods.
The reason for this is more visible
in Figure \ref{fig:test-bb40037-reordered},
where sequences in alignments are reordered
according to related guide-trees.
\begin{figure}[!t]
\begin{tabular}{lc}
{\includegraphics[width=0.25\columnwidth,height=0.2\columnwidth]{fig/bb40037/reordered/probalign-tree}}
&
{\includegraphics[width=0.6\columnwidth,height=0.2\columnwidth]{fig/bb40037/reordered/probalign}
}\\
(a) & Probalign\hspace{30ex}(b)\\\\
{\includegraphics[width=0.25\columnwidth,height=0.2\columnwidth]{fig/bb40037/reordered/msaprobs-tree}}
&
{\includegraphics[width=0.7\columnwidth,height=0.2\columnwidth]{fig/bb40037/reordered/msaprobs}}\\
(c) & MSAProbs\hspace{30ex}(d)
\end{tabular}
\caption{Guide trees (ac) and alignment visualizations (bd) for test case \textsc{bb40037} and programs Probalign (ab) and MSAProbs (cd).
Tree branches and aligned sequences are colored
based on the evolutionary distances to their neighbors,
as computed from the guide-trees used during alignment.
Sequences in alignments are ordered following their order in trees,
so related sequences have similar color and are positioned together.
\label{fig:test-bb40037-reordered}}
\end{figure}
Probalign aligns separately the first half of the sequences (blue and green)
and the second half of the sequences (from yellow to red).
Next, the prefixes of the
second group are aligned with the suffixes of the first group,
propagating an error within a yellow sub-alignment.
MSAprobs aligns separately the dark blue, light blue and red sequences.
Next the blue sub-alignments are aligned together.
\nowe{The resulting alignment has erroneously inserted gaps near the right ends
of the dark blue sequences.}
\nowe{This error is propagated in the next step,
where the suffix of the blue alignment
is aligned with the prefix of the red alignment.}
Finally the single violet sequence
is added to the alignment, splitting it in two.
For both programs, alignment errors introduced in the earlier steps
are propagated to the final alignment.
In contrast,
the non-progressive strategy used in MSARC
yields a reasonable approximation of the reference alignment
(see Figure \ref{fig:test-bb40037-visualisation}(a,b)).
\section{Discussion}
The progressive principle has dominated multiple alignment algorithms for nearly 20 years.
Throughout this time, many groups have dedicated their efforts
to refining its accuracy to the current state.
Other approaches were neglected due to high computational complexity and/or
unsatisfactory quality.
To the best of our knowledge, MSARC is the only non-progressive aligner
of quality comparable to the best progressive programs.
Moreover, because alignments computed with progressive methods are biased towards their guide-trees,
MSARC is a quality leader for sequence sets
whose evolutionary distances are hard to represent by a phylogenetic tree.
Beyond its algorithmic novelty,
the non-progressive approach to multiple alignment
makes MSARC an interesting tool for phylogeny reconstruction pipelines.
The objective of these procedures is to infer the structure of
a phylogenetic tree from a given sequence set.
Multiple alignment is usually the first pipeline step.
When alignment is guided by a tree,
the reconstructed phylogeny is biased towards this tree.
In order to minimize this effect,
some phylogenetic pipelines alternately
optimize a tree and an alignment
\cite{Redelings2005,Lunter2005,Liu2009}.
The unbiased alignment process of MSARC may simplify this procedure
and improve the reconstruction accuracy,
especially in the most problematic cases.
The main disadvantage of MSARC is its computational complexity,
especially in the case of the multilevel scheme variant
(MSARC-FM is $\sim 3\times$ slower than MSAProbs
and $\sim 5\times$ slower than Probalign,
MSARC-ML is $1.5\times$ slower than MSARC-FM).
However, the running time can be greatly
improved by distributing the computations over multiple cores,
because every step of the algorithm can be parallelized.
Since multi-core machines are becoming more and more common, this should
allow for computation times comparable with those of other alignment algorithms.
MSARC also has the potential for quality improvements.
Alternative methods of computing residue alignment affinities
could be used to improve the accuracy of both MSARC and Probalign-based methods.
Other approaches to alignment graph partitioning may also lead
to improvements in the accuracy of MSARC,
for example a better method of pairing residues for multilevel coarsening
than the currently used naive merging of consecutive neighbors (sketched below).
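To make the last point concrete, the following minimal Python sketch
illustrates the naive coarsening step, in which residues are simply
paired with their immediate successors along a sequence. This is an
illustration of the idea only, not MSARC's actual implementation, and
all names are ours.
\begin{verbatim}
def coarsen_consecutive(residues):
    """One naive coarsening step: merge consecutive residues of a
    sequence into pairs, roughly halving the alignment graph."""
    merged = [tuple(residues[i:i + 2])
              for i in range(0, len(residues) - 1, 2)]
    if len(residues) % 2:          # an odd last residue stays unmerged
        merged.append((residues[-1],))
    return merged

# Example: residues 0..4 of one sequence -> [(0, 1), (2, 3), (4,)]
print(coarsen_consecutive(list(range(5))))
\end{verbatim}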
\subsubsection*{Acknowledgements}
This work was supported by the Polish Ministry of Science and Higher Education
[N N519 652740].
\bibliographystyle{splncs03}
\section{Introduction}
Clusters of galaxies are highly biased peaks of the underlying dark matter
distribution, allowing us to study the large-scale structure of our Universe.
Because their growth is directly influenced by cosmology, we can also use
them to test cosmological models with a cluster survey that has a
well-understood selection function.
Among several detection techniques, X-ray observations of clusters reveal the hot
intra-cluster plasma that fills the cluster potential, which is
mostly shaped by dark matter.
Hence X-ray observations of clusters provide a powerful method to detect and
characterise the cluster population.
Equally powerful is a cluster detection through the Sunyaev-Zel'dovich effect (SZE).
The same plasma is traced in the cluster potential, but with a different dependence
on the gas density.
Since the SZE is almost insensitive to redshift dimming, unlike X-ray observations,
which are mostly dominated by bremsstrahlung emission, an SZE survey selection
is effectively mass-limited rather than flux-limited as in a typical X-ray survey.
Because the strength of the X-ray signal is proportional to the square of the
electron density, an X-ray flux-limited survey has a higher detection
efficiency for clusters that are brighter at the centre given the same
mass~\citep{pesce90,markevitch98}.
This effect is responsible for a large part of the scatter in the X-ray
luminosity-mass relation~\citep{pratt09,chon12}, which leads to a significant
Malmquist bias in a flux-limited sample.
This Malmquist bias effect should also lead to an enhancement of the fraction
of relaxed and cool-core clusters in flux-limited samples, which can be
understood as follows.
Most of the clusters in a flux-limited survey are detected near the flux-limit.
The scatter of the X-ray luminosity at a given mass will bring more clusters
from the more luminous end of the $L_X$ distribution into the sample,
which is enriched in cool-core clusters, and it will tend to miss
clusters at the less luminous end of the distribution, where predominantly
disturbed and non-cool-core clusters are located.
Because of this effect, flux-limited X-ray samples have been shown to exhibit
a so-called cool-core bias (e.g.~\citet{eckert11,rossetti17,santos17}).
Therefore it is very important to explore the results that are obtained when
this strong Malmquist-type bias can be avoided, for example,
in a survey that is volume-limited.
Hence we devised a program to study this effect with a volume-limited sample
drawn from the REFLEX cluster survey.
In this Letter we investigate the morphologies of the clusters in this
sample using a combination of centre shifts and the third moment
of the power ratios and compare to the properties of the clusters in
a flux-limited sample.
We also use X-ray luminosity ratios between two different apertures
to measure the fraction of cool cores and the X-ray luminosity distributions
of the clusters with different morphologies.
In Sect. 2 we describe our study sample and data reduction.
The structural analysis and its results are presented in Sect. 3, and
discussions on the cool-core fraction are found in Sect. 4.
Section 5 investigates X-ray luminosity distributions of clusters with
different morphological types.
The last section summarises our results and provides a perspective
for future work.
\section{Sample selection and data reduction}
We constructed a volume-limited sample (VLS) from the
flux-limited REFLEX cluster
survey~\citep{reflex2-1,reflex2-2} with a redshift limit of $z \le$~0.1
and a luminosity limit of 5$\times$10$^{43}$~\lxunit{}
in the 0.1-2.4~keV energy band (rest frame).
This sample, which we name ReVols (REFLEX-Volume-limited sample), is one of the
largest VLSs that can be constructed from REFLEX with a relatively high cluster
density.
We thinned out this sample statistically by selecting only every third cluster
in the third least luminous bin for an affordable XMM-Newton follow-up.
In addition, we constructed two flux-limited samples (FLS) within this volume
for a comparison between a VLS and an FLS.
FLS1 was constructed with a flux limit of 10$^{-11}$~\fxunit and FLS2
with 1.3$\times 10^{-11}$~\fxunit.
Technically, an FLS should not be constrained in redshift, hence the numerical
values in this paper include ten additional clusters that are above the redshift
limit of ReVols.
More details on the construction of this sample and the catalogue with
the properties of clusters are being prepared (B\"ohringer et al., in prep.).
We obtained relatively deep XMM-Newton observations as well as data from
the XMM-Newton archive for a total of 93 clusters.
The exposure of individual observations was designed to collect at least
4000 photons inside \rfive{} of each cluster for reliable structural
measurements.
All archival data satisfy this minimum criterion.
We processed the XMM-Newton data as described in~\cite{chon12}.
For the clusters whose \rfive{} is larger than the field-of-view
of XMM-Newton, we used ROSAT PSPC observations to model the cluster
emission and to estimate the background in the XMM-Newton data.
We comment on these clusters in the analysis section.
\section{Structural analysis}
To quantitatively determine the degree of substructure, we employed
two common substructure measures: power ratios~\citep{buote95},
and centre shifts~\citep{poole06}.
They are well tested for X-ray observations and simulations
(see e.g.~\citet{rexcess_sub,chon12,mahdavi13,rasia13,chon16})
and have been shown to provide very useful diagnostics.
\subsection{Power ratio calculation}
The power ratio method first introduced by~\cite{buote95} was motivated by
the assumption that the X-ray surface brightness closely traces the projected
two-dimensional mass distribution of a cluster.
A multipole decomposition of such a projected mass distribution provides
moments that are identified as power ratios after normalisation by the zeroth moment.
In practice, the power ratio analysis is applied to the surface brightness
distribution.
The moments $P_m$ are defined as
\begin{equation}
P_0 = \left[ a_0 \ln (R_{\rm ap}) \right]^2
\end{equation}
\begin{equation}
P_m = \frac{1}{2 m^2 R_{\rm ap}^{2m} } \left( a_m^2 + b_m^2 \right)
,\end{equation}
\noindent where $R_{\rm ap}$ is the aperture radius in units of \rfive{}.
The moments $a_m$ and $b_m$ are calculated by
\begin{equation}
a_m(r) = \int_{r \le R_{\rm ap}} d\vec{x} ~S(\vec{x}) ~r^m \cos (m\phi)
\end{equation}
\noindent and
\begin{equation}
b_m(r) = \int_{r \le R_{\rm ap}} d\vec{x} ~S(\vec{x}) ~r^m \sin (m\phi),
\end{equation}
where $S(\vec{x})$ is the X-ray surface brightness image, and the integral
extends over all pixels inside the aperture radius.
Thus, $a_0$ in Eq. (1) is the total radiation intensity inside the
aperture radius.
Since all $P_m$ are proportional to the total intensity of the X-ray image,
all moments are normalised by $P_0$, resulting in the so-called
power ratios, $P_m/P_0$.
For brevity, we refer to $P_m/P_0$ as $P_m$ in the rest of the paper.
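For illustration, the computation of Eqs. (1)--(4) on a pixelised
surface-brightness image can be sketched in a few lines of Python;
the image array, centre and aperture radius are placeholder inputs,
and this sketch is not the analysis pipeline of~\citet{chon12}.
\begin{verbatim}
import numpy as np

def power_ratios(image, cx, cy, r_ap, m_max=4):
    """Power ratios P_m/P_0 of a surface-brightness image inside an
    aperture of radius r_ap centred on (cx, cy); all lengths are in
    pixel units here."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - cx, y - cy)
    phi = np.arctan2(y - cy, x - cx)
    inside = r <= r_ap
    S, r, phi = image[inside], r[inside], phi[inside]
    P0 = (S.sum() * np.log(r_ap)) ** 2                      # Eq. (1)
    ratios = {}
    for m in range(1, m_max + 1):
        a_m = np.sum(S * r**m * np.cos(m * phi))            # Eq. (3)
        b_m = np.sum(S * r**m * np.sin(m * phi))            # Eq. (4)
        P_m = (a_m**2 + b_m**2) / (2 * m**2 * r_ap**(2*m))  # Eq. (2)
        ratios[m] = P_m / P0
    return ratios
\end{verbatim}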
We used \pt{} as one measure of the substructure degree since it is
the lowest moment that measures geometric asymmetries above an ellipticity.
We calculated the uncertainty of the power ratio measurement
and the influence of photon noise with end-to-end Monte Carlo simulations
in which additional Poisson noise was imposed on the count images with
background.
We interpreted the variance of the simulated power ratios
as the measurement uncertainty, and we subtracted the noise bias, i.e.,
the offset of the mean of all simulations from the observed value,
from the observational result.
Further technical discussions are found in~\cite{chon12}.
\subsection{Centre shifts}
The centre shift measures the stability of the X-ray centre calculated at
different radii and is formulated as~\citep{poole06}
\begin{equation}
w~ =~ \left[ \frac{1}{N-1}~ \sum \left( \Delta_i - \langle
\Delta \rangle \right)^2 \right] ^{1/2} ~\times~ \frac{1}{r_{500}},
\end{equation}
\noindent where $\Delta_i$ is the distance between the mean centroid and
the centroid of the $i$th aperture.
The centroid of each aperture is found by determining the centre of mass
of the photon distribution within this aperture.
The resulting \cs{} is then the standard deviation of the different centre
shifts (in units of \rfive{}).
We used the mean centroid value of all apertures as the reference centre.
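As an illustration, Eq. (5) amounts to the following few lines of
Python, assuming the aperture centroids have already been measured
(the array layout is our choice):
\begin{verbatim}
import numpy as np

def centre_shift(centroids, r500):
    """Centre-shift parameter w: standard deviation of the offsets of
    the N aperture centroids from the mean centroid, in units of r500.
    `centroids` is an (N, 2) array of centroid positions."""
    c = np.asarray(centroids, dtype=float)
    delta = np.linalg.norm(c - c.mean(axis=0), axis=1)
    return delta.std(ddof=1) / r500   # ddof=1 gives the 1/(N-1) factor
\end{verbatim}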
The uncertainties in the \cs{} parameter were determined with the same
simulations as those of the power ratios,
that is, by using Poissonised resampled cluster X-ray images.
The standard deviation of the \cs{} parameter in the simulation was used as
an estimate of the measurement uncertainties.
Unlike for the power ratios, we did not use the noise-bias-subtracted
\cs{} parameter, since the bias correction is mostly much smaller than
the errors and does not shift the \cs{} parameter
enough to alter the classification of the cluster morphology.
The end-to-end simulations for the power ratios and centre shifts ensure that,
for example, the systematics introduced by the photon shot noise are properly
taken into account in the parameter uncertainties.
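A hedged sketch of this resampling procedure, applicable to either
statistic, is given below; background handling and other details of the
actual pipeline are omitted, and the function names are ours.
\begin{verbatim}
import numpy as np

def mc_uncertainty(count_image, statistic, n_sim=1000, seed=0):
    """Poissonise the observed count image (including background)
    n_sim times and evaluate a scalar substructure statistic on each
    realisation.  Returns the scatter (the measurement uncertainty)
    and the mean offset from the observed value (the noise bias)."""
    rng = np.random.default_rng(seed)
    sims = np.array([statistic(rng.poisson(count_image))
                     for _ in range(n_sim)])
    return sims.std(ddof=1), sims.mean() - statistic(count_image)
\end{verbatim}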
\subsection{Morphological classification}
The measured \cs{} and \pt{} values for the \vls{} clusters are shown in
Fig.~\ref{fig:wp3}.
Both parameters are generally correlated with some scatter.
We note that for the clusters for which \rfive{} is not fully covered
by the XMM-Newton observations, the structural parameters were calculated
only out to the minimum available fraction of \rfive{} measured from the
centre of the aperture.
This is to ensure that no artificial asymmetry is introduced when
the aperture does not fully cover the cluster.
Hence in these cases, the measured \cs{} and \pt{} are lower limits
of the true values since we normalised both parameters by \rfive{}
instead of their available fraction, and they are represented by upward
arrows.
As in~\cite{chon12,chon16}, we defined three boundaries to classify
the clusters into three distinct categories:
disturbed (\cs{} > 0.006 or \pt{} > 2$\times$10$^{-7}$),
intermediate (\cs{} < 0.006 and 7$\times$10$^{-8}$ < \pt{} < 2$\times$10$^{-7}$), and
relaxed (\cs{} < 0.006 and \pt{} < 7$\times$10$^{-8}$).
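Expressed as code, this classification reads as follows (a sketch;
values lying exactly on a boundary are rare and we assign them
following the inequalities above):
\begin{verbatim}
def classify(w, p3):
    """Morphological class from the centre shift w and the power
    ratio P3 (both normalised as in the text)."""
    if w > 0.006 or p3 > 2e-7:
        return "disturbed"
    if p3 > 7e-8:
        return "intermediate"
    return "relaxed"
\end{verbatim}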
The number of clusters based on this classification is listed in Table 1 under
the heading VLS, and the same information is given for two flux-limited samples
that are constructed from \vls{}, namely FLS1 and FLS2.
For comparison, FLS1 clusters are represented by the filled circles in
Fig.~\ref{fig:wp3}.
There are twice as many disturbed clusters as relaxed clusters in \vls{},
while the two FLSs have more comparable number statistics for
the two populations.
Figure~\ref{fig:wp3} and Table 1 show that the average morphology of
the clusters in the volume-limited sample is very different from
that found in the FLSs.
\begin{figure}
\includegraphics[width=\columnwidth]{wp3.ps}
\caption{
Structural parameters, \cs{} vs. \pt{}, for the distributions of the
disturbed (red), intermediate (green), and relaxed (blue) clusters.
Solid circles are the clusters that also belong to FLS1 above
a flux limit of $10^{-11}$~\fxunit{}.
For the clusters whose \rfive{} is not covered by the XMM-Newton
observations, the measured values are lower limits, as indicated
by upward arrows.
The three morphological classifications are based on the three dashed
lines.
}
\label{fig:wp3}
\end{figure}
\begin{table}
\begin{center}
\centering
\caption{Morphological classification in our VLS and two FLSs.
FLS1 is constructed with a flux limit of 10$^{-11}$~\fxunit{} and FLS2
with 1.3$\times$10$^{-11}$~\fxunit{}.
The number of clusters in the intermediate class is the difference
between the total and the sum of the disturbed and relaxed clusters.
}
\begin{tabular}{l l l l}
\hline
\multicolumn{1}{l}{Morphology} &
\multicolumn{1}{l}{VLS} &
\multicolumn{1}{l}{FLS1} &
\multicolumn{1}{l}{FLS2} \\
\hline
\rule{0pt}{3ex}Disturbed & 56 & 23 & 19 \\
Relaxed & 27 & 21 & 18 \\
\hline
\rule{0pt}{3ex}Total & 93 & 51 & 42 \\
\hline
\end{tabular}
\end{center}
\label{tab:tab1}
\end{table}
\section{Cool-core fraction}
The fraction of cool cores (CC) in an X-ray cluster sample has been used to study
the degree of CC bias and as an indicator of cluster dynamics.
Because a CC cluster does not necessarily indicate a dynamically relaxed
state, we simply compare the fraction of CCs in the flux-
and volume-limited samples.
\begin{figure}
\includegraphics[width=\columnwidth]{cc.ps}
\vspace{-0.7cm}
\caption{
Luminosity ratio of the ReVols clusters (all) in comparison to FLS1
(solid), ordered by \cs{} values, which decrease from left to right.
Colours follow the morphological classification assigned
in Fig.~\ref{fig:wp3}.
For the clusters whose \rfive{} is larger than the XMM-Newton field-of-view
the measured ratio is an upper limit, as indicated by downward arrows.
The dashed line is drawn at the value of 1.31, which divides the clusters
into CC and non-CC.
}
\label{fig:ccf}
\end{figure}
The cores of clusters in \vls{}, defined by 0.1$\times$ \rfive{}, are well
resolved in the XMM-Newton observations.
Hence we calculated the luminosity ratio between the total luminosity
in the aperture out to the cluster's \rfive{} and that measured
in the same aperture without the core region~\citep{rexcess_sub}.
As in the substructure calculations, \rfive{} was not fully
covered by the observations for some of the largest clusters.
In these cases, the luminosity ratio was calculated out to the available
aperture and was normalised to \rfive{}.
Since the core was always observed, the luminosity ratio becomes an upper limit,
and this is indicated by a downward arrow in Fig.~\ref{fig:ccf}.
We adopted the nominal luminosity ratio of 1.31 as used in~\cite{rexcess_sub}
to divide the CC population and the non-CCs.
Figure~\ref{fig:ccf} and Table 2 show that there are more non-CC clusters
in \vls{}, while the FLS clusters (filled circles) have more comparable
number statistics between the two populations.
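For reference, the classification used here reduces to a simple
threshold test (a minimal sketch with our own naming):
\begin{verbatim}
def is_cool_core(lx_total, lx_core_excised, threshold=1.31):
    """Cool-core if L_X(<r500) exceeds the core-excised luminosity
    L_X(0.1 r500 < r < r500) by more than the nominal factor 1.31."""
    return lx_total / lx_core_excised > threshold
\end{verbatim}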
\begin{table}
\begin{center}
\centering
\caption{
Number of clusters in each class.
The samples are identical to those described in Table 1.
}
\begin{tabular}{l l l l}
\hline
\multicolumn{1}{l}{} &
\multicolumn{1}{l}{VLS} &
\multicolumn{1}{l}{FLS1} &
\multicolumn{1}{l}{FLS2} \\
\hline
\rule{0pt}{3ex}Non-CC & 57 & 27 & 22 \\
Cool Core & 36 & 24 & 20 \\
\hline
\rule{0pt}{3ex}Total & 93 & 51 & 42 \\
\hline
\end{tabular}
\end{center}
\label{tab:tab2}
\end{table}
\section{Distinct distributions of X-ray luminosity }
Since we expect that CC clusters are on average more luminous than non-CC
clusters for a given cluster mass, we studied the distributions of the X-ray
luminosity in more detail for our sample.
Figure~\ref{fig:lxdist} shows two normalised cumulative X-ray
luminosity distributions for the disturbed clusters as a red line
and for the relaxed clusters as a black line.
The two populations have distinctly different luminosity distributions,
from which we can deduce that there are relatively more luminous clusters
in the sample of relaxed clusters than in the disturbed cluster
sample.
A Kolmogorov-Smirnov (KS) test shows that it is highly unlikely
that both distributions are the same, with a probability of 0.0003.
This difference in the luminosity distributions can have several causes.
When we assume that the mass distribution is not drastically different
for the two populations (as found, for example, for the REXCESS sample
and in simulations~\citep{rexcess_sub}), we could attribute this effect
mostly to the luminosity difference for a given mass.
In this case, we expect that the two distributions show a similar shape
with a constant displacement.
Applying a chi-square fit yields a displacement factor of 1.57, as shown by
the red dashed line in Fig.~\ref{fig:lxdist}.
To avoid edge effects, we fitted in the luminosity range between
0.7 and 3.7$\times$10$^{44}$~erg~s$^{-1}$.
The KS test also finds a best-fit value of 1.59 that maximises
the probability, that is, the value that minimises the difference between
the two distributions.
These results imply that the difference in the luminosity distribution
can be explained if relaxed clusters are typically approximately 60\%
more luminous than the disturbed clusters.
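The displacement fit can be reproduced with a simple scan over
multiplicative factors; the sketch below uses SciPy's two-sample KS
test, and the two luminosity samples are placeholder inputs.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def best_shift_factor(lx_disturbed, lx_relaxed):
    """Scan a multiplicative factor applied to the disturbed clusters'
    luminosities and return the factor minimising the KS distance to
    the relaxed clusters' luminosity distribution."""
    factors = np.linspace(1.0, 2.5, 301)
    dist = [ks_2samp(f * np.asarray(lx_disturbed), lx_relaxed).statistic
            for f in factors]
    best = int(np.argmin(dist))
    return factors[best], dist[best]
\end{verbatim}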
\begin{figure}
\includegraphics[width=\columnwidth]{lxdist.ps}
\vspace{-0.5cm}
\caption{
Normalised cumulative luminosity functions for the disturbed (red)
and relaxed (black) clusters.
The red dashed line represents the distribution for the disturbed
clusters scaled by 1.57.
}
\label{fig:lxdist}
\end{figure}
An approximate factor of 1.6 can in fact be derived from the
scaling relations that we presented in Table 2 of~\cite{chon12}.
We fitted scaling relations by dividing the clusters according to
their morphological classification, finding that there are differences
of 40\% and 20\% in the amplitude of the $L_X$--$T$ and
in the $M_X$--$T$ relations, respectively, for a fixed slope.
Since the two differences add, this implies that overall we expect a difference of about
60\% in the luminosity between the two populations
for a given mass.
It is likely that this is not the only effect that contributes
to what we observe in \vls{}, and in a forthcoming paper we will
present scaling relations, which will provide a more complete
picture.
\section{Summary and discussions}
We used two measures of substructures, centre shifts and the third moment
of the power ratios, to diagnose the degree of substructures in X-ray
clusters for our ReVols and two flux-limited samples derived from
REFLEX clusters.
As far as we are aware, we present the first numerical evidence, based on
relatively large number statistics, that clusters in a volume-limited
sample differ from those in a flux-limited sample in morphology and in
the cool-core fraction.
We find twice as many disturbed as relaxed clusters, and CCs do not
dominate \vls{}.
Thus we do not find that the VLS has a cool-core bias in comparison
to cluster populations from SZE surveys.
The so-called cool-core bias is therefore found in X-ray
cluster samples that are compiled in a flux-limited way.
In the application of X-ray flux-limited cluster samples for cosmological
studies, the use of the scaling relation between X-ray luminosity
and mass is an important ingredient, and the correction for Malmquist bias
related to the scatter in the relation is a prerequisite.
In our previous study, we showed that clusters with different morphologies
or dynamical states easily influence scaling relations, as demonstrated
in Figs. 11 and 12 of~\cite{chon12}.
In the standard correction for Malmquist bias, the overall
scatter of the $L_X$--M relation is taken into account regardless of morphological
types, and the scatter is typically assumed to have a Gaussian distribution.
Our findings suggest a different approach to the Malmquist-bias correction that
may lead to a higher precision in the results because the distribution
of the scatter for a mixed morphological population does not appear
to be Gaussian.
Therefore a procedure where the relation and scatter are determined
independently for disturbed and relaxed clusters while taking
the morphological fractions also into account may provide a more precise
Malmquist-bias correction.
As an alternative to this approach, the excision of the cool-core
region has been suggested as early as in~\citet{markevitch98}.
For surveys where the core of clusters cannot be resolved, however, such as
the RASS, our suggested refinement of bias correction should provide an
improvement.
Moreover, the future eROSITA survey will not have the imaging
resolution to precisely excise the core for clusters at medium and larger
distances, and the suggested procedure may help
to improve the efficiency of the cosmological test.
\begin{acknowledgements}
Our research is based on the XMM-Newton facility and data archive
operated by ESA.
We thank the XMM-Newton team for their support,
especially Norbert Schartel, Ignacio de la Calle, and Maria Santos-Lleo.
HB and GC acknowledge support from the DFG Transregio Program TR33
and the Munich Excellence Cluster
``Structure and Evolution of the Universe''.
GC acknowledges support by the Deutsches Zentrum f\"ur Luft- und Raumfahrt
under grant no. 50 OR 1601.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sec:intro}
A classical result on Hamilton cycles is Dirac's theorem \cite{dirac} which states that if $G$ is a graph on $n \geq 3$ vertices with minimum degree $\delta(G) \geq n/2$, then $G$ contains a Hamilton cycle. Ghouila-Houri \cite{ghouila} proved an analogue of Dirac's theorem for digraphs which guarantees that any digraph of minimum semidegree at least $n/2$ contains a consistently oriented Hamilton cycle (where the minimum semidegree $\delta^0(G)$ of a digraph $G$ is the minimum of all the in- and outdegrees of the vertices in $G$). In \cite{keevko}, Keevash, K\"uhn and Osthus proved a version of this theorem for oriented graphs. Here the minimum semidegree threshold turns out to be $\delta^0(G) \geq (3n-4)/8$. (In a digraph we allow two edges of opposite orientations between a pair of vertices; in an oriented graph at most one edge is allowed between any pair of vertices.)
Instead of asking for consistently oriented Hamilton cycles in an oriented graph or digraph, it is natural to consider different orientations of a Hamilton cycle. For example, Thomason \cite{thom} showed that every sufficiently large strongly connected tournament contains every orientation of a Hamilton cycle. H\"aggkvist and Thomason \cite{hagthom2} proved an approximate version of Ghouila-Houri's theorem for arbitrary orientations of Hamilton cycles. They showed that a minimum semidegree of $n/2+n^{5/6}$ ensures the existence of an arbitrary orientation of a Hamilton cycle in a digraph. This improved a result of Grant \cite{grant} for antidirected Hamilton cycles. The exact threshold in the antidirected case was obtained by DeBiasio and Molla \cite{debmol}; here the threshold is $\delta^0(G) \geq n/2+1$, i.e., larger than in Ghouila-Houri's theorem. In Figure~\ref{fig:cai}, we give two digraphs $G$ on $2m$ vertices which satisfy $\delta^0(G) = m$ and have no antidirected Hamilton cycle, showing that this bound is best possible. (The first of these examples is already due to Cai \cite{cai}.)
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{extremals.png}
\caption{In digraphs $F_{2m}^1$ and $F_{2m}^2$, $A$ and $B$ are independent sets of size $m-1$ and bold arrows indicate that all possible edges are present in the directions shown.}\label{fig:cai}
\end{figure}
\begin{theorem}[DeBiasio \& Molla, \cite{debmol}]\label{thm:antidirected}
There exists an integer $m_0$ such that the following hold for all $m\geq m_0$. Let $G$ be a digraph on $2m$ vertices. If $\delta^0(G) \geq m$, then $G$ contains an antidirected Hamilton cycle, unless $G$ is isomorphic to $F_{2m}^1$ or $F_{2m}^2$. In particular, if $\delta^0(G) \geq m+1$, then $G$ contains an antidirected Hamilton cycle.
\end{theorem}
In this paper, we settle the problem by completely determining the exact threshold for arbitrary orientations. We show that a minimum semidegree of $n/2$ suffices if the Hamilton cycle is not antidirected. This bound is best possible by the extremal examples for Ghouila-Houri's theorem, i.e., if $n$ is even, the digraph consisting of two disjoint complete digraphs on $n/2$ vertices and, if $n$ is odd, the complete bipartite digraph with vertex classes of size $(n-1)/2$ and $(n+1)/2$.
\begin{theorem}\label{thm:main}
There exists an integer $n_0$ such that the following holds. Let $G$ be a digraph on $n\geq n_0$ vertices with $\delta^0(G) \geq n/2$. If $C$ is any orientation of a cycle on $n$ vertices which is not antidirected, then $G$ contains a copy of $C$.
\end{theorem}
Kelly \cite{kelly} proved an approximate version of Theorem~\ref{thm:main} for oriented graphs. He showed that the semidegree threshold for an arbitrary orientation of a Hamilton cycle in an oriented graph is $3n/8 +o(n)$. It would be interesting to obtain an exact version of this result. Further related problems on digraph Hamilton cycles are discussed in \cite{kosurvey}.
\section{Proof sketch}\label{sec:sketch}
The proof of Theorem~\ref{thm:main} utilizes the notion of robust expansion, which has been very useful in several settings recently. Roughly speaking, a digraph $G$ is a robust outexpander if every vertex set $S$ of reasonable size has an outneighbourhood which is at least a little larger than $S$ itself, even if we delete a small proportion of the edges of $G$. A formal definition of robust outexpansion is given in Section~\ref{sec:tools}. In Lemma~\ref{lem:structure}, we observe that any graph satisfying the conditions of Theorem~\ref{thm:main} must be a robust outexpander or have a large set which does not expand, in which case we say that $G$ is ${\varepsilon}$-extremal. Theorem~\ref{thm:main} was verified by Taylor \cite{msci} for the case when $G$ is a robust outexpander, based on the approach of Kelly \cite{kelly}. This allows us to restrict our attention to the ${\varepsilon}$-extremal case. We introduce three refinements of the notion of ${\varepsilon}$-extremality: $ST$-extremal, $AB$-extremal and $ABST$-extremal. These are illustrated in Figure~\ref{fig:abstextremal}; the arrows indicate that $G$ is almost complete in the directions shown. In each of these cases, we have that $|A|\sim|B|$ and $|S|\sim|T|$. If $G$ is $ST$-extremal, then the sets $A$ and $B$ are almost empty and so $G$ is close to the digraph consisting of two disjoint complete digraphs on $n/2$ vertices. If $G$ is $AB$-extremal, then the sets $S$ and $T$ are almost empty and so in this case $G$ is close to the complete bipartite digraph with vertex classes of size $n/2$ (thus both digraphs in Figure~\ref{fig:cai} are $AB$-extremal). Within each of these cases, we further subdivide the proof depending on how many changes of direction the desired Hamilton cycle has. Note that in the directed setting the set of extremal structures is much less restricted than in the undirected setting (in the undirected case, it is well known that all the near extremal graphs are close to the complete bipartite graph $K_{n/2,n/2}$ or two disjoint cliques on $n/2$ vertices).
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{abstextremal.png}
\caption{An $ABST$-extremal graph. When $G$ is $AB$-extremal, the sets $S$ and $T$ are almost empty and when $G$ is $ST$-extremal the sets $A$ and $B$ are almost empty.}\label{fig:abstextremal}
\end{figure}
The main difficulty in each of the cases is covering the exceptional vertices, i.e., those vertices with low in- or outdegree in the vertex classes where we would expect most of their neighbours to lie. When $G$ is $AB$-extremal, we also consider the vertices in $S\cup T$ to be exceptional and, when $G$ is $ST$-extremal, we consider the vertices in $A\cup B$ to be exceptional. In each case we find a short path $P$ in $G$ which covers all of these exceptional vertices. When the cycle $C$ is close to being consistently oriented, we cover these exceptional vertices by short consistently oriented paths and when $C$ has many changes of direction, we will map sink or source vertices in $C$ to these exceptional vertices (here a sink vertex is a vertex of indegree two and a source vertex is a vertex of outdegree two).
An additional difficulty is that in the $AB$- and $ABST$-extremal cases we must ensure that the path $P$ leaves a balanced number of vertices in $A$ and $B$ uncovered. Once we have found $P$ in $G$, the remaining vertices of $G$ (i.e., those not covered by $P$) induce a balanced almost complete bipartite digraph and one can easily embed the remainder of $C$ using a bipartite version of Dirac's theorem. When $G$ is $ST$-extremal, our aim will be to split the cycle $C$ into two paths $P_S$ and $P_T$ and embed $P_S$ into the digraph $G[S]$ and $P_T$ into $G[T]$. So a further complication in this case is that we need to link together $P_S$ and $P_T$ as well as covering all vertices in $A\cup B$.
This paper is organised as follows. Sections~\ref{sec:notation} and \ref{sec:tools} introduce the notation and tools which will be used throughout this paper. In Section~\ref{sec:structure} we describe the structure of an ${\varepsilon}$-extremal digraph and formally define what it means to be $ST$-, $AB$- or $ABST$-extremal. The remaining sections prove Theorem~\ref{thm:main} in each of these three cases: we consider the $ST$-extremal case in Section~\ref{sec:ST}, the $AB$-extremal case in Section~\ref{sec:AB} and the $ABST$-extremal case in Section~\ref{sec:ABST}.
\section{Notation}\label{sec:notation}
Let $G$ be a digraph on $n$ vertices. We will write $xy \in E(G)$ to indicate that $G$ contains an edge oriented from $x$ to $y$. If $G$ is a digraph and $x\in V(G)$, we will write $N^+_G(x)$ for the \emph{outneighbourhood} of $x$ and $N^-_G(x)$ for the \emph{inneighbourhood} of $x$. We define $d^+_G(x):=|N^+_G(x)|$ and $d^-_G(x):=|N^-_G(x)|$. We will write, for example, $d^\pm_G(x)\geq a$ to mean $d^+_G(x), d^-_G(x) \geq a$. We sometimes omit the subscript $G$ if this is unambiguous. We let $\delta^0(G):=\min\{d^+(x), d^-(x):x\in V(G)\}$. If $A\subseteq V(G)$, we let $d^+_A(x):=|N^+_G(x)\cap A|$ and define $d^-_A(x)$ and $d^\pm_A(x)$ similarly. We say that $x\in V(G)$ is a \emph{sink vertex} if $d^+(x)=0$ and a \emph{source vertex} if $d^-(x)=0$.
Let $A, B \subseteq V(G)$ and $xy\in E(G)$. If $x\in A$ and $y\in B$ we say that $xy$ is an \emph{$AB$-edge}. We write $E(A,B)$ for the set of all $AB$-edges and we write $E(A)$ for $E(A,A)$. We let $e(A, B):=|E(A,B)|$ and $e(A):=|E(A)|$. We write $G[A,B]$ for the digraph with vertex set $A\cup B$ and edge set $E(A,B)\cup E(B,A)$ and we write $G[A]$ for the digraph with vertex set $A$ and edge set $E(A)$. We say that a path $P=x_1x_2\dots x_q$ is an \emph{$AB$-path} if $x_1\in A$ and $x_q\in B$. If $x_1,x_q\in A$, we say that $P$ is an \emph{$A$-path}. If $A \subseteq V(P)$, we say that $P$ \emph{covers} $A$. If $\mathcal{P}$ is a collection of paths, we write $V(\mathcal{P})$ for $\bigcup_{P\in\mathcal{P}}V(P)$.
Let $P=x_1x_2\dots x_q$ be a path. The \emph{length} of $P$ is the number of its edges. Given sets $X_1, \dots, X_q\subseteq V(G)$, we say that $P$ has \emph{form} $X_1X_2\dots X_q$ if $x_i\in X_i$ for $i=1,2, \dots, q$. We will use the following abbreviation
$$(X)^k:=\underbrace{XX\dots X}_{k\text{ times}}.$$
We will say that $P$ is a \emph{forward} path of the form $X_1X_2\dots X_q$ if $P$ has form $X_1X_2\dots X_q$ and $x_ix_{i+1}\in E(P)$ for all $i=1,2,\dots, q-1$. Similarly, $P$ is a \emph{backward} path of the form $X_1X_2\dots X_q$ if $P$ has form $X_1X_2\dots X_q$ and $x_{i+1}x_{i}\in E(P)$ for all $i=1,2,\dots, q-1$.
A digraph $G$ is \emph{oriented} if it is an orientation of a simple graph (i.e., if there are no $x,y\in V(G)$ such that $xy, yx\in E(G)$). Suppose that $C=(u_1u_2\dots u_n)$ is an oriented cycle. We let $\sigma(C)$ denote the number of sink vertices in $C$. We will write $(u_iu_{i+1} \dots u_j)$ or $(u_iCu_j)$ to denote the subpath of $C$ from $u_i$ to $u_j$. In particular, $(u_iu_{i+1})$ may represent the edge $u_iu_{i+1}$ or $u_{i+1}u_i$. Given edges $e=(u_i,u_{i+1})$ and $f=(u_j,u_{j+1})$, we write $(eCf)$ for the path $(u_iCu_{j+1})$. We say that an edge $(u_iu_{i+1})$ is a \emph{forward edge} if $(u_iu_{i+1})=u_iu_{i+1}$ and a \emph{backward edge} if $(u_iu_{i+1})=u_{i+1}u_i$. We say that a cycle is \emph{consistently oriented} if all of its edges are oriented in the same direction (forward or backward). We define a consistently oriented subpath $P$ of $C$ in the same way. We say that $P$ is \emph{forward} if it consists of only forward edges and \emph{backward} if it consists of only backward edges. A collection of subpaths of $C$ is \emph{consistent} if they are all forward paths or if they are all backward paths. We say that a path or cycle is \emph{antidirected} if it contains no consistently oriented subpath of length two.
Given $C$ as above, we define $d_C(u_i,u_j)$ to be the length of the path $(u_iCu_j)$ (so, for example, $d_C(u_1, u_n)=n-1$ and $d_C(u_n, u_1)=1$).
For a subpath $P=(u_iu_{i+1}\dots u_k)$ of $C$, we call $u_i$ the \emph{initial} vertex of $P$ and $u_k$ the \emph{final vertex}. We write $(u_jP):=(u_ju_{j+1}\dots u_k)$ and $(Pu_j):=(u_iu_{i+1}\dots u_j)$. If $P_1$ and $P_2$ are subpaths of $C$, we define $d_C(P_1,P_2):=d_C(v_1,v_2)$, where $v_i$ is the initial vertex of $P_i$. In particular, we will use this definition when one or both of $P_1,P_2$ are edges. Suppose $P_1, P_2, \dots, P_k$ are internally disjoint subpaths of $C$ such that the final vertex of $P_i$ is the initial vertex of $P_{i+1}$ for $i=1, \dots, k-1$. Let $x$ denote the initial vertex of $P_1$ and $y$ denote the final vertex of $P_k$. If $x\neq y$, we write $(P_1P_2\dots P_k)$ for the subpath of $C$ from $x$ to $y$. If $x=y$, we sometimes write $C=(P_1P_2\dots P_k)$.
We will also make use of the following notation: $a \ll b$. This means that we can find an increasing function $f$ for which all of the conditions in the proof are satisfied whenever $a \leq f(b)$. It is equivalent to setting $a := \min \{f_1(b), f_2(b), \dots, f_k(b)\}$, where each $f_i(b)$ corresponds to the maximum value of $a$ allowed in order that the corresponding argument in the proof holds. However, in order to simplify the presentation, we will not determine these functions explicitly.
\section{Tools}\label{sec:tools}
\subsection{Hamilton cycles in dense graphs and digraphs}
We will use the following standard results concerning Hamilton paths and cycles. Theorem~\ref{thm:moon} is a bipartite version of Dirac's theorem. Proposition~\ref{prop:completepath} is a simple consequence of Dirac's theorem and this bipartite version.
\begin{theorem}[Moon \& Moser, \cite{moon}]\label{thm:moon}
Let $G=(A,B)$ be a bipartite graph with $|A|=|B|=n$. If $\delta(G) \geq n/2+1$, then $G$ contains a Hamilton cycle.
\end{theorem}
\begin{prop}\label{prop:completepath}
\begin{enumerate}[\rm(i)]
\item Let $G$ be a digraph on $n$ vertices with $\delta^0(G)\geq 7n/8$. Let $x,y\in V(G)$ be distinct. Then $G$ contains a Hamilton path of any orientation between $x$ and $y$.
\item Let $m \geq 10$ and $G=(A,B)$ be a bipartite digraph with $|A|=m+1$ and $|B|=m$. Suppose that $\delta^0(G)\geq (7m+2)/8$. Let $x,y \in A$. Then $G$ contains a Hamilton path of any orientation between $x$ and $y$.
\end{enumerate}
\end{prop}
\begin{proof}
To prove (i), we define an undirected graph $G'$ on the vertex set $V(G)$ where $uv \in E(G')$ if and only if $uv, vu \in E(G)$. Let $G''$ be the graph obtained from $G'$ by contracting the vertices $x$ and $y$ to a single vertex $x'$ with $N_{G''}(x'):=N_{G'}(x) \cap N_{G'}(y)$. Note that%
\COMMENT{$d_{G'}(v)\geq 7n/8-(n/8-1)=3n/4+1$ for all $v\in V(G)$. So for $v\neq x,y$, we have $d_{G''}(x')\geq 3n/4\geq (n-1)/2$. Also, $d_{G''}(x')\geq$ (no. neighbours of $x$ in $G'$)$-$(no. non-neighbours of $y$ in $G'$)$-1$, where $-1$ is in case $y$ is a neighbour of $x$. So $d_{G''}(x')\geq (3n/4+1)-(n/4-2)-1=n/2+2\geq (n-1)/2$.}
$$\delta(G'') \geq (n-1)/2= |G''|/2.$$
Hence $G''$ has a Hamilton cycle by Dirac's theorem. This corresponds to a Hamilton path of any orientation between $x$ and $y$ in $G$.
For (ii), we proceed in the same way, using Theorem~\ref{thm:moon} instead of Dirac's theorem.%
\COMMENT{We define an undirected bipartite graph $G'$ with vertex classes $A$ and $B$ where for all $u \in A, v\in B$, $uv \in E(G')$ if and only if $uv, vu \in E(G)$. Obtain the graph $G''$ by contracting the vertices $x$ and $y$ to a single vertex $x'$ with $N_{G''}(x'):=N_{G'}(x)\cap N_{G'}(y)$. Note that the resulting graph has vertex classes $A',B'$ of equal size, $m$. For any $v\in A$, we have $d_{G'}(v) \geq 2(7m+2)/8-m =(3m+2)/4$. So for $v\in A$ with $v \neq x,y$, we have $d_{G''}(v) \geq (3m+2)/4$. $d_{G''}(x') \geq 2(3m+2)/4-m = m/2+1$. For any $v\in B$ we have $d_{G'}(v) \geq 2(7m+2)/8-(m+1) =(3m-2)/4$ and so $d_{G''}(v) \geq (3m-2)/4-1 =(3m-6)/4$ (where $-1$ accounts for contraction of $x,y$). Note that $(3m-6)/4\geq m/2+1$ for all $m \geq 10$. So $G''$ satisfies $\delta(G'') \geq m/2+1.$
Then, by Theorem~\ref{thm:moon}, $G''$ has a Hamilton cycle. This Hamilton cycle uses only edges that are present in both directions in $G$. This allows us to find a Hamilton path of any orientation between $x$ and $y$ in $G$.}
\end{proof}
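The construction in the above proof is easy to state algorithmically; a
minimal Python sketch, with the digraph stored as a
vertex-to-outneighbour-set map (our representation), is the following.
\begin{verbatim}
def contract_bidirected(out_nbrs, x, y):
    """Underlying graph G' of the bidirected edges of G, with x and y
    contracted to a single new vertex "x'" joined to the common
    G'-neighbours of x and y."""
    V = set(out_nbrs)
    undirected = {v: {u for u in out_nbrs[v] if v in out_nbrs[u]}
                  for v in V}
    common = (undirected[x] & undirected[y]) - {x, y}
    adj = {v: undirected[v] - {x, y} for v in V - {x, y}}
    for v in common:
        adj[v].add("x'")
    adj["x'"] = set(common)
    return adj
\end{verbatim}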
\subsection{Robust expanders}\label{subsec:robexp}
Let $0< \nu \leq \tau <1$, let $G$ be a digraph on $n$ vertices and let $S \subseteq V(G)$. The \emph{$\nu$-robust outneighbourhood} $RN_{\nu,G}^+(S)$ of $S$ is the set of all those vertices $x \in V(G)$ which have at least $\nu n$ inneighbours in $S$. $G$ is called a \emph{robust $(\nu,\tau)$-outexpander} if $|RN_{\nu,G}^+(S)| \geq |S|+\nu n$ for all $S \subseteq V(G)$ with $\tau n < |S| < (1- \tau)n$.
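For concreteness, the $\nu$-robust outneighbourhood of a given set can
be computed directly from the definition, as in the Python sketch below
(again with a vertex-to-outneighbour-set map); note that verifying the
full robust outexpansion property this way would require checking
exponentially many sets $S$.
\begin{verbatim}
def robust_outneighbourhood(out_nbrs, S, nu):
    """All vertices with at least nu*n inneighbours in S, where the
    digraph is given as a map from each vertex to the set of its
    outneighbours."""
    n = len(out_nbrs)
    indeg_from_S = {v: 0 for v in out_nbrs}
    for u in S:
        for v in out_nbrs[u]:
            indeg_from_S[v] += 1
    return {v for v, d in indeg_from_S.items() if d >= nu * n}
\end{verbatim}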
Recall from Section~\ref{sec:intro} that Kelly \cite{kelly} showed that any sufficiently large oriented graph with minimum semidegree at least $(3/8+\alpha)n$ contains any orientation of a Hamilton cycle. It is not hard to show that any such oriented graph is a robust outexpander (see \cite{kellyexact78}). In fact, in \cite{kelly}, Kelly observed that his arguments carry over to robustly expanding digraphs of linear degree. Taylor \cite{msci} has verified that this is indeed the case, proving the following result.
\begin{theorem}[\cite{msci}]\label{thm:robust}
Suppose $1/n \ll \nu \leq \tau \ll \eta <1$. Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq \eta n$ and suppose $G$ is a robust $(\nu, \tau)$-outexpander. If $C$ is any orientation of a cycle on $n$ vertices, then $G$ contains a copy of $C$.
\end{theorem}
\subsection{Structure}\label{sec:structure}
Let ${\varepsilon} >0$ and $G$ be a digraph on $n$ vertices. We say that $G$ is \emph{${\varepsilon}$-extremal} if there is a partition $A,B,S,T$ of its vertices into sets of sizes $a,b,s,t$ such that $|a-b|,|s-t|\leq 1$ and $e(A\cup S, A\cup T) < {\varepsilon} n^2$.
The following lemma describes the structure of a graph which satisfies the conditions of Theorem~\ref{thm:main}.
\begin{lemma}\label{lem:structure}
Suppose
$0<1/n \ll \nu \ll \tau, {\varepsilon} < 1$
and let $G$ be a digraph on $n$ vertices with
$ \delta^0(G) \geq n/2.$
Then $G$ satisfies one of the following:
\begin{enumerate}[\rm(i)]
\item $G$ is ${\varepsilon}$-extremal;
\item $G$ is a robust $(\nu, \tau)$-outexpander.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $G$ is not a robust $(\nu, \tau)$-outexpander. Then there is a set $X \subseteq V(G)$ with $\tau n \leq |X| \leq (1-\tau)n$ and $|RN_{\nu,G}^+(X)|<|X|+\nu n$. Define $RN^+ := RN_{\nu,G}^+(X)$. We consider the following cases:
\medskip
\noindent \textbf{Case 1: } \emph{$\tau n \leq |X| \leq (1/2- \sqrt \nu)n$.}
We have $$ |X|n/2 \leq e(X, RN^+) + e(X, \overline {RN^+}) \leq |X||RN^+| + \nu n^2 \leq |X|(|RN^+| + \nu n/\tau),$$
so $|RN^+| \geq (1/2-\nu/\tau)n \geq |X| +\nu n,$
which gives a contradiction.
\medskip
\noindent \textbf{Case 2: } \emph{$(1/2+\nu)n \leq |X| \leq (1-\tau)n$.}
For any $v \in V(G)$ we note that $d^-_X(v) \geq \delta^0(G)-(n-|X|) \geq n/2-(1/2-\nu)n = \nu n.$
Hence $|RN^+|=|G| \geq |X| +\nu n$, a contradiction.
\medskip
\noindent \textbf{Case 3: } \emph{$(1/2- \sqrt \nu)n < |X| < (1/2+ \nu)n$.}
Suppose that $|RN^+|<(1/2-3\nu) n$. Since $\delta^0(G) \geq n/2$, each vertex in $X$ has more than $3\nu n$ outneighbours in $\overline{RN^+}$. Thus, there is a vertex $v \not\in RN^+$ with more than $3\nu n |X|/n > \nu n$ inneighbours in $X$, which is a contradiction. Therefore,
\begin{equation}\label{eqn:rn+}
(1/2-3\nu) n \leq |RN^+|< |X|+\nu n < (1/2+ 2\nu)n.
\end{equation}
Write $A_0:=X\setminus RN^+$, $B_0:=RN^+\setminus X$, $S_0:=X\cap RN^+$ and $T_0 := \overline X\cap \overline{RN^+}$. Let $a_0,b_0,s_0,t_0$, respectively, denote their sizes. Note that $|X|=a_0+s_0$, $|RN^+|=b_0+s_0$ and $a_0+b_0+s_0+t_0=n$. It follows from (\ref{eqn:rn+}) and the conditions of Case 3 that
$$(1/2-\sqrt\nu) n \leq a_0+s_0, b_0+t_0, b_0+s_0, a_0+t_0 \leq (1/2+\sqrt \nu) n$$
and so $|a_0-b_0|, |s_0-t_0| \leq 2\sqrt\nu n$. Note that
$$e(A_0\cup S_0, A_0\cup T_0) = e(X,\overline{RN^+})<\nu n^2.$$ By moving at most $\sqrt\nu n$ vertices between the sets $A_0$ and $B_0$ and $\sqrt \nu n$ between the sets $S_0$ and $T_0$, we obtain new sets $A,B,S,T$ of sizes $a,b,s,t$ satisfying $|a-b|, |s-t| \leq 1$ and $e(A \cup S, A\cup T) \leq {\varepsilon} n^2$. So $G$ is ${\varepsilon}$-extremal.
\end{proof}
\subsection{Refining the notion of ${\varepsilon}$-extremality}\label{subsec:refine}
Let $n\in \mathbb{N}$ and ${\varepsilon}, {\varepsilon}_1, {\varepsilon}_2, {\varepsilon}_3, {\varepsilon}_4, \eta_1, \eta_2, \tau$ be positive constants satisfying
$$1/n \ll {\varepsilon} \ll {\varepsilon}_1 \ll {\varepsilon}_2\ll \eta_1 \ll \tau \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll \eta_2 \ll 1.$$
We now introduce three refinements of ${\varepsilon}$-extremality. (The constants ${\varepsilon}_2$ and ${\varepsilon}_4$ do not appear in these definitions but will be used at a later stage in the proof so we include them here for clarity.) Let $G$ be a digraph on $n$ vertices.
Firstly, we say that $G$ is \emph{$ST$-extremal} if there is a partition $A,B,S,T$ of $V(G)$ into sets of sizes $a,b,s,t$ such that:
\begin{enumerate}[(P1)]
\item $a\leq b$, $s\leq t$;\label{P*}
\item $\lfloor n/2 \rfloor -{\varepsilon}_3 n \leq s,t \leq \lceil n/2 \rceil + {\varepsilon}_3 n$;\label{P1}
\item $\delta^0(G[S]), \delta^0(G[T]) \geq \eta_2 n$;\label{P2}
\item $d^\pm_S(x) \geq n/2-{\varepsilon}_3 n$ for all but at most ${\varepsilon}_3 n$ vertices $x\in S$;\label{P3}
\item $d^\pm_T(x) \geq n/2-{\varepsilon}_3 n$ for all but at most ${\varepsilon}_3 n$ vertices $x\in T$;\label{P4}
\item $a+b \leq {\varepsilon}_3 n$;\label{P5}
\item $d^-_T(x), d^+_S(x) > n/2-3\eta_2 n$ and $d^-_S(x), d^+_T(x)\leq 3\eta_2 n$ for all $x \in A$;\label{P6}
\item $d^-_S(x), d^+_T(x)> n/2-3\eta_2 n$ and $d^-_T(x), d^+_S(x) \leq 3\eta_2 n$ for all $x \in B$.\label{P7}
\end{enumerate}
Secondly, we say that $G$ is \emph{$AB$-extremal} if there is a partition $A,B,S,T$ of $V(G)$ into sets of sizes $a,b,s,t$ such that:
\begin{enumerate}[(Q1)]
\item $a\leq b$, $s\leq t$;\label{Q*}
\item $\lfloor n/2 \rfloor -{\varepsilon}_3 n \leq a, b \leq \lceil n/2 \rceil + {\varepsilon}_3 n$;\label{Q1}
\item $\delta^0(G[A,B])\geq n/50$;\label{Q2}
\item $d^\pm_B(x)\geq n/2-{\varepsilon}_3 n$ for all but at most ${\varepsilon}_3 n$ vertices $x\in A$;\label{Q3}
\item $d^\pm_A(x)\geq n/2-{\varepsilon}_3 n$ for all but at most ${\varepsilon}_3 n$ vertices $x\in B$;\label{Q4}
\item $s+t \leq {\varepsilon}_3 n$;\label{Q5}
\item $d^-_A(x), d^+_B(x) \geq n/50$ for all $x \in S$;\label{Q6}
\item $d^-_B(x), d^+_A(x) \geq n/50$ for all $x \in T$;\label{Q7}
\item if $a<b$, $d^\pm_B(x) < n/20$ for all $x\in B$; $d^-_B(x)< n/20$ for all $x \in S$ and $d^+_B(x)< n/20$ for all $x \in T$.\label{Q8}%
\COMMENT{Only used in Prop.~\ref{prop:balance}. Cannot lose any conditions here because I need to be able to swap $S$ and $T$ to get $s\leq t$, if necessary.}
\end{enumerate}
Thirdly, we say that $G$ is \emph{$ABST$-extremal} if there is a partition $A,B,S,T$ of $V(G)$ into sets of sizes $a,b,s,t$ such that:
\begin{enumerate}[(R1)]
\item $a\leq b$, $s\leq t$;\label{R*}
\item $a,b,s,t \geq \tau n$;\label{R1}
\item $|a-b|, |s-t| \leq {\varepsilon}_1n$;\label{R2}
\item $\delta^0(G[A,B])\geq \eta_1 n$;\label{R3}
\item $d^+_{B\cup S}(x), d^-_{A\cup S}(x) \geq \eta_1 n$ for all $x \in S$;\label{R4}
\item $d^+_{A\cup T}(x), d^-_{B\cup T}(x) \geq \eta_1 n$ for all $x \in T$;\label{R5}
\item $d^\pm_B(x) \geq b-{\varepsilon}^{1/3} n$ for all but at most ${\varepsilon}_1n$ vertices $x\in A$;\label{R6}
\item $d^\pm_A(x) \geq a-{\varepsilon}^{1/3} n$ for all but at most ${\varepsilon}_1n$ vertices $x\in B$;\label{R7}
\item $d^+_{B\cup S}(x)\geq b+s-{\varepsilon}^{1/3} n$ and $d^-_{A\cup S}(x) \geq a+s-{\varepsilon}^{1/3} n$ for all but at most ${\varepsilon}_1n$ vertices $x\in S$;\label{R8}
\item $d^+_{A\cup T}(x)\geq a+t-{\varepsilon}^{1/3} n$ and $d^-_{B\cup T}(x) \geq b+t-{\varepsilon}^{1/3} n$ for all but at most ${\varepsilon}_1n$ vertices $x\in T$.\label{R9}
\end{enumerate}
\begin{prop}\label{prop:structure2}
Suppose
$$1/n \ll {\varepsilon} \ll {\varepsilon}_1 \ll \eta_1 \ll \tau \ll {\varepsilon}_3 \ll \eta_2 \ll 1$$
and $G$ is an ${\varepsilon}$-extremal digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Then there is a partition of $V(G)$ into sets $A,B,S,T$ of sizes $a,b,s,t$ satisfying one of the following:
\begin{itemize}
\item (P\ref{P1})--(P\ref{P7});
\item (Q\ref{Q1})--(Q\ref{Q8}) with $a\leq b$;
\item (R\ref{R1})--(R\ref{R9}).
\end{itemize}
\end{prop}
\begin{proof}
Consider a partition $A_0,B_0,S_0,T_0$ of $V(G)$ into sets of sizes $a_0,b_0,s_0,t_0$ such that $|a_0-b_0|,|s_0-t_0|\leq 1$ and $e(A_0\cup S_0, A_0\cup T_0) < {\varepsilon} n^2$.
Define
\begin{align*}
&X_1:=\{x\in A_0 \cup S_0: d^+_{B_0 \cup S_0}(x) < n/2-\sqrt{\varepsilon} n\},\\
&X_2:=\{x\in A_0 \cup T_0: d^-_{B_0 \cup T_0}(x) < n/2-\sqrt{\varepsilon} n\},\\
&X_3:=\{x\in B_0 \cup T_0: d^+_{A_0 \cup T_0}(x) < n/2-\sqrt{\varepsilon} n\},\\
&X_4:=\{x\in B_0 \cup S_0: d^-_{A_0 \cup S_0}(x) < n/2-\sqrt{\varepsilon} n\}
\end{align*}
and let $X:=\bigcup_{i=1}^4 X_i$. We now compute an upper bound for $|X|$.
Each vertex $x\in X_1$ has $d^+_{A_0\cup T_0}(x)> \sqrt{\varepsilon} n$, so $|X_1|\leq {\varepsilon} n^2/ \sqrt{\varepsilon} n=\sqrt{\varepsilon} n$. Also, each vertex $x\in X_2$ has $d^-_{A_0\cup S_0}(x) > \sqrt{\varepsilon} n$, so $|X_2|\leq \sqrt{\varepsilon} n$.
Observe that
\begin{align*}
|A_0\cup T_0|n/2-{\varepsilon} n^2 &\leq e(B_0\cup T_0,A_0\cup T_0)\\
&\leq (n/2-\sqrt{\varepsilon} n)|X_3|+|A_0\cup T_0|(|B_0\cup T_0|-|X_3|)
\end{align*}
which gives
$$|X_3|(|A_0\cup T_0|-n/2+\sqrt{\varepsilon} n) \leq |A_0\cup T_0|(|B_0\cup T_0|-n/2)+{\varepsilon} n^2\leq 2{\varepsilon} n^2.$$
So $|X_3| \leq 2{\varepsilon} n^2/(\sqrt{\varepsilon} n/2)=4\sqrt{\varepsilon} n$. Similarly, we find that $|X_4| \leq 4\sqrt{\varepsilon} n$.%
\COMMENT{$|A_0\cup S_0|n/2-{\varepsilon} n^2 \leq e(A_0\cup S_0, B_0 \cup S_0)
\leq (n/2-\sqrt{\varepsilon} n)|X_4|+|A_0\cup S_0|(|B_0\cup S_0|-|X_4|)$}
Therefore, $|X|\leq 10\sqrt{\varepsilon} n$.
\medskip
\noindent \textbf{Case 1: } \emph{$a_0,b_0<2\tau n$.}
Let $Z:=X\cup A_0 \cup B_0$. Choose disjoint $Z_1, Z_2\subseteq Z$ so that $d^\pm_{S_0}(x) \geq 2\eta_2n$ for all $x\in Z_1$ and $d^\pm_{T_0}(x) \geq 2\eta_2n$ for all $x\in Z_2$ and $|Z_1\cup Z_2|$ is maximal. Let $S:=(S_0 \setminus X) \cup Z_1$ and $T:=(T_0\setminus X) \cup Z_2$. The vertices in $Z\setminus (Z_1\cup Z_2)$ can be partitioned into two sets $A$ and $B$ so that $d^+_S(x), d^-_T(x) \geq n/2-3\eta_2n$ for all $x\in A$ and $d^-_S(x), d^+_T(x) \geq n/2-3\eta_2n$ for all $x\in B$. The partition $A,B, S, T$ satisfies (P\ref{P1})--(P\ref{P7}).
\medskip
\noindent \textbf{Case 2: } \emph{$s_0,t_0<2\tau n$.}
Partition $X$ into four sets $Z_1, Z_2, Z_3, Z_4$ so that $d^\pm_{B_0}(x)\geq n/5$ for all $x\in Z_1$; $d^\pm_{A_0}(x)\geq n/5$ for all $x\in Z_2$; $d^+_{B_0}(x), d^-_{A_0}(x)\geq n/5$ for all $x\in Z_3$ and $d^-_{B_0}(x), d^+_{A_0}(x) \geq n/5$ for all $x\in Z_4$. Then set $A_1:=(A_0\setminus X)\cup Z_1$, $B_1:=(B_0\setminus X)\cup Z_2$.
Assume, without loss of generality, that $|A_1|\leq |B_1|$. To ensure that the vertices in $B$ satisfy (Q\ref{Q8}), choose disjoint sets $B', B''\subseteq B_1$ so that $|B'\cup B''|$ is maximal subject to: $|B'\cup B''|\leq |B_1|-|A_1|$, $d^+_{B_1}(x)\geq n/20$ for all $x\in B'$ and $d^-_{B_1}(x)\geq n/20$ for all $x\in B''$. Set $B:= B_1 \setminus (B' \cup B'')$, $S_1 :=(S_0\setminus X)\cup Z_3\cup B'$ and $T_1:=(T_0\setminus X)\cup Z_4\cup B''$. To ensure that the vertices in $S\cup T$ satisfy (Q\ref{Q8}), choose sets $S'\subseteq S_1, T'\subseteq T_1$ which are maximal subject to: $|S'|+|T'|\leq |B|-|A_1|$, $d^\pm_{B}(x) \geq n/20$ for all $x\in S'$ and $d^\pm_{B}(x) \geq n/20$ for all $x\in T'$. We define $A:=A_1 \cup S'\cup T'$, $S:=S_1\setminus S'$ and $T:=T_1\setminus T'$. Then $a\leq b$ and (Q\ref{Q1})--(Q\ref{Q8}) hold.%
\COMMENT{Note that $|X|\leq 10\sqrt{\varepsilon} n$ so we move at most ${\varepsilon}_3^2n$ vertices to get to $A,B,S,T$ from $A_0,B_0, S_0, T_0$. (Q\ref{Q1}) is clear. For (Q\ref{Q2})--(Q\ref{Q4}), note that we defined $Z_1, Z_2,S', T'$ carefully so that $\delta^0(G[A,B]) \geq n/50$. For (Q\ref{Q5}), note that $s_0+t_0<4\tau n$ so $s+t \leq 4\tau n+{\varepsilon}_3^2 n\leq {\varepsilon}_3 n$. (Q\ref{Q6}) and (Q\ref{Q7}) follow from the definitions of $A_1, B_1, S_1,T_1$.}
\medskip
\noindent \textbf{Case 3: } \emph{$a_0,b_0,s_0, t_0\geq 2\tau n-1$.}
The case conditions imply $a_0,b_0,s_0, t_0<n/2-\tau n$. Then, since $\delta^0(G) \geq n/2$, each vertex must have at least $2\eta_1 n$ inneighbours in at least two of the sets $A_0, B_0, S_0, T_0$. The same holds when we consider outneighbours instead. So we can partition the vertices in $X$ into sets $Z_1, Z_2, Z_3, Z_4$ so that: $d^\pm_{B_0}(x) \geq 2\eta_1 n$ for all $x\in Z_1$; $d^\pm_{A_0}(x) \geq 2\eta_1 n$ for all $x\in Z_2$; $d^+_{B_0\cup S_0}(x), d^-_{A_0\cup S_0}(x) \geq 2\eta_1 n$ for all $x\in Z_3$ and $d^+_{A_0\cup T_0}(x), d^-_{B_0\cup T_0}(x) \geq 2\eta_1 n$ for all $x\in Z_4$.%
\COMMENT{To see that each $x$ can be put in at least one of $Z_1, Z_2, Z_3, Z_4$: Let $x\in V(G)$ and define $X^+:=\{X\in \{A_0,B_0,S_0,T_0\}: d^+_X(x) \geq 2 \eta_1 n\}$. Define $X^-$ similarly. Then $|X^+|,|X^-|\geq 2$. If $\exists X\in X^+\cap X^-$ then if $X=B_0$ can put $x$ in $Z_1$, $X=A_0$ can put $x$ in $Z_2$, $X=S_0$ put $x$ in $Z_3$ and $X=T_0$ put $x$ in $Z_4$. So assume $X^+\cap X^-=\emptyset$, so $X^+=\{A_0, B_0, S_0, T_0\} \setminus X^-$. Check for each. If $X^+=\{A_0, B_0\}$ or $\{A_0, S_0\}$ or $\{A_0, T_0\}$, can put $x$ in $Z_4$. If $X^+=\{B_0,S_0\}$ or $\{B_0,T_0\}$ or $\{S_0 , T_0\}$ put $x$ in $Z_3$.}
Let $A:=(A_0\setminus X)\cup Z_1$, $B:=(B_0\setminus X)\cup Z_2$, $S:=(S_0\setminus X)\cup Z_3$ and $T:=(T_0\setminus X)\cup Z_4$. This partition satisfies (R\ref{R1})--(R\ref{R9}).
\end{proof}
The above result implies that to prove Theorem~\ref{thm:main} for ${\varepsilon}$-extremal graphs it will suffice to consider only graphs which are $ST$-extremal, $AB$-extremal or $ABST$-extremal. Indeed, to see that we may assume that $a\leq b$ and $s\leq t$, suppose that $G$ is ${\varepsilon}$-extremal. Then $G$ has a partition satisfying (P\ref{P1})--(P\ref{P7}), (Q\ref{Q1})--(Q\ref{Q8}) or (R\ref{R1})--(R\ref{R9}) by Proposition~\ref{prop:structure2}. Note that relabelling the sets of the partition $(A,B,S,T)$ by $(B,A,T,S)$ if necessary allows us to assume that $a\leq b$. If $s\leq t$, then we are done. If $s>t$, reverse the orientation of every edge in $G$ to obtain the new graph $G'$. Relabel the sets $(A,B,S,T)$ by $(A,B,T,S)$. Under this new labelling, the graph $G'$ satisfies all of the original properties as well as $a\leq b$ and $s\leq t$. Obtain $C'$ from the cycle $C$ by reversing the orientation of every edge in $C$. The problem of finding a copy of $C$ in $G$ is equivalent to finding a copy of $C'$ in $G'$.
\section{$G$ is $ST$-extremal}\label{sec:ST}
The aim of this section is to prove the following lemma which settles Theorem~\ref{thm:main} in the case when $G$ is $ST$-extremal.
\begin{lemma}\label{lem:ST}
Suppose that $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll \eta_2 \ll 1.$
Let $G$ be a digraph on $n$ vertices such that $\delta^0(G)\geq n/2$ and $G$ is $ST$-extremal. If $C$ is any orientation of a cycle on $n$ vertices, then $G$ contains a copy of $C$.
\end{lemma}
We will split the proof of Lemma~\ref{lem:ST} into two cases based on how close the cycle $C$ is to being consistently oriented. Recall that $\sigma(C)$ denotes the number of sink vertices in $C$. Observe that in any oriented cycle, the number of sink vertices is equal to the number of source vertices.
\subsection{$C$ has many sink vertices, $\sigma(C)\geq {\varepsilon}_4 n$}
The rough strategy in this case is as follows. We would like to embed half of the cycle $C$ into $G[S]$ and half into $G[T]$, making use of the fact that these graphs are nearly complete. At this stage, we also suitably assign the vertices in $A\cup B$ to $G[S]$ or $G[T]$. We will partition $C$ into two disjoint paths, $P_S$ and $P_T$, each containing at least $\sigma(C)/8$ sink vertices, which will be embedded into $G[S]$ and $G[T]$. The main challenge we will face is finding appropriate edges to connect the two halves of the embedding.
\begin{lemma}\label{lem:linking1}
Suppose that $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll \eta_2 \ll 1.$ Let $G$ be a digraph on $n$ vertices with $\delta^0(G)\geq n/2$. Suppose $A,B,S,T$ is a partition of $V(G)$ satisfying (P\ref{P*})--(P\ref{P7}). Let $C$ be an oriented cycle on $n$ vertices with $\sigma(C) \geq {\varepsilon}_4n$. Then there exists a partition $S^*, T^*$ of the vertices of $G$ and internally disjoint paths $R_1, R_2, P_S, P_T$ such that $C=(P_SR_1P_TR_2)$ and the following hold:
\begin{enumerate}[\rm(i)]
\item $S\subseteq S^*$ and $T\subseteq T^*$;
\item $|P_T|=|T^*|$;
\item $P_S$ and $P_T$ each contain at least ${\varepsilon}_4n/8$ sink vertices;
\item $|R_i|\leq 3$ and $G$ contains disjoint copies $R_i^G$ of $R_i$ such that $R_1^G$ is an $ST$-path, $R_2^G$ is a $TS$-path and all interior vertices of $R_i^G$ lie in $S^*$.
\end{enumerate}
\end{lemma}
In the proof of Lemma~\ref{lem:linking1} we will need the following proposition.
\begin{prop}\label{prop:2edges}
Suppose that $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll \eta \ll 1.$ Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (P\ref{P*})--(P\ref{P7}).
\begin{enumerate}[\rm(i)]
\item If $a=b \in \{0,1\}$ then there are two disjoint edges between $S$ and $T$ of any given direction.\label{prop:2edges1}
\item If $A = \emptyset$ then there are two disjoint $TS$-edges.\label{prop:2edges2}
\item If $a= 1$ and $b \geq 2$ then there are two disjoint $TS$-edges.\label{prop:2edges4}
\item There are two disjoint edges in $E(S,T\cup A) \cup E(T, S\cup B)$.\label{prop:2edges6}
\end{enumerate}
\end{prop}
\begin{proof}
Let
$$S':=\{x\in S : N^+_{A}(x), N^-_{B}(x) = \emptyset\} \text{ and } T':=\{x\in T : N^+_{B}(x), N^-_{A}(x) = \emptyset\}.$$
First we prove (\ref{prop:2edges1}). If $a=b \in \{0,1\}$ then it follows from (P\ref{P6}), (P\ref{P7}) that $|S'|,|T'| \geq n/4$. Since $s\leq t$, it is either the case that $s \leq (n-1)/2-b$ or $s=t=n/2-b$. If $s \leq (n-1)/2-b$ choose any $x\neq y \in S'$. Both $x$ and $y$ have at least $\lceil n/2-((n-1)/2 -b-1+b)\rceil=2$ inneighbours and outneighbours in $T$, so we find the desired edges. Otherwise $s=t=n/2-b$ and each vertex in $S'$ must have at least one inneighbour and at least one outneighbour in $T$ and each vertex in $T'$ must have at least one inneighbour and at least one outneighbour in $S$. It is now easy to check that (\ref{prop:2edges1}) holds. Indeed, K\"onig's theorem gives the two required disjoint edges provided they have the same direction. Using this, it is also easy to find two edges in opposite directions.%
\COMMENT{If we want edges in opposite directions, K\"onig's theorem gives two disjoint $ST$-edges $e_1$ and $e_2$. Choose any $t_1 \in T'$ which is disjoint from $e_1, e_2$. Now $t_1$ must have an outneighbour $s_1\in S$ and $s_1$ can lie on at most one of the edges $e_1$ and $e_2$. Then, without loss of generality, $e_1, t_1s_1$ form the required pair of edges.}
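To spell out the degree computation above: a vertex $x\in S'$ has no outneighbours in $A$ and no inneighbours in $B$, so all of its outneighbours outside $T$ lie in $(S\setminus\{x\})\cup B$ and all of its inneighbours outside $T$ lie in $(S\setminus\{x\})\cup A$. Using $a=b$ and $s \leq (n-1)/2-b$, this gives
$$d^{\pm}_T(x)\geq \frac{n}{2}-(s-1)-b\geq \frac{n}{2}-\left(\frac{n-1}{2}-b-1\right)-b=\frac{3}{2},$$
so $d^{\pm}_T(x)\geq 2$ since degrees are integers.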
We now prove (\ref{prop:2edges2}). Suppose that $A = \emptyset$. We have already seen that the result holds when $B=\emptyset$. So assume that $b \geq 1$. Since $s \leq (n-b)/2$, each vertex in $S$ must have at least $b/2+1$ inneighbours in $T \cup B$. Assume for contradiction that there are no two disjoint $TS$-edges. Then all but at most one vertex in $S$ must have at least $b/2$ inneighbours in $B$. So $e(B, S) \geq bn/8$ which implies that there is a vertex $v \in B$ with $d^+_S(v) \geq n/8.$ But this contradicts (P\ref{P7}). So there must be two disjoint $TS$-edges.
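The counting in the previous paragraph can be made explicit as follows (using that $s-1\geq n/4$, which holds with room to spare as $G$ is $ST$-extremal):
$$e(B,S)\geq (s-1)\cdot \frac{b}{2}\geq \frac{bn}{8}, \qquad \text{and so some } v\in B \text{ satisfies } d^+_S(v)\geq \frac{e(B,S)}{b}\geq \frac{n}{8}.$$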
For (\ref{prop:2edges4}), suppose that $a=1$ and $b \geq 2$. Since $s\leq (n-b-1)/2$, each vertex in $S$ must have at least $(b+1)/2$ inneighbours in $T\cup B$. Assume that there are no two disjoint $TS$-edges. Then all but at most one vertex in $S$ have at least $(b-1)/2$ inneighbours in $B$. So $e(B,S) \geq nb/12$ which implies that there is a vertex $v \in B$ with $d^+_S(v) \geq n/12$ which contradicts (P\ref{P7}). Hence (\ref{prop:2edges4}) holds.
For (\ref{prop:2edges6}), we observe that $\min\{s+b, t+a\} \leq (n-1)/2$ or $s+b=t+a=n/2$. If $s+b\leq (n-1)/2$ then each vertex in $S$ has at least two outneighbours in $T\cup A$, giving the desired edges. A similar argument works if $t+a \leq (n-1)/2$. If $s+b=t+a=n/2$ then each vertex in $S$ has at least one outneighbour in $T \cup A$ and each vertex in $T$ has at least one outneighbour in $S\cup B$. It is easy to see that there must be two disjoint edges in $E(S,T\cup A) \cup E(T, S\cup B)$. Indeed, if no two such edges were disjoint, they would all share a single vertex $v$; but choosing $x\in S$ and $y\in T$ with $x,y\neq v$, the edges at $x$ and $y$ would force $v$ to lie in both $T\cup A$ and $S\cup B$, which is impossible as these sets are disjoint.
\end{proof}
\begin{proofof}\textbf{Lemma~\ref{lem:linking1}.}
Observe that $C$ must have a subpath $P_1$ of length $n/3$ containing at least ${\varepsilon}_4 n/3$ sink vertices. Let $v\in P_1$ be a sink vertex such that the subpaths $(P_1v)$ and $(vP_1)$ of $P_1$ each contain at least ${\varepsilon}_4 n/7$ sink vertices. Write $C=(v_1v_2\dots v_n)$ where $v_1:=v$ and write $k':=n-t$.
\medskip
\noindent \textbf{Case 1: } \emph{$a\leq 1$}
If $a=b$, set $S^*:=S\cup A\cup B$, $T^*:=T$, $R_1:=(v_{k'}v_{k'+1})$ and $R_2:=(v_nv_1)=v_nv_1$. By Proposition~\ref{prop:2edges}(\ref{prop:2edges1}), $G$ contains a pair of disjoint edges between $S$ and $T$ of any given orientation. So we can map $v_nv_1$ to a $TS$-edge and $(v_{k'}v_{k'+1})$ to an edge between $S$ and $T$ of the correct orientation such that the two edges are disjoint.
Suppose now that $b \geq a+1$. By Proposition~\ref{prop:2edges}(\ref{prop:2edges2})--(\ref{prop:2edges4}), we can find two disjoint $TS$-edges $e_1$ and $e_2$. If $v_{k'}$ is not a source vertex, set $S^*:=S\cup A\cup B$, $T^*:=T$, $R_1:=(v_{k'-1}v_{k'}v_{k'+1})$ and $R_2:=v_nv_1$. Map $v_nv_1$ to $e_1$. If $v_{k'+1}v_{k'}\in E(C)$, map $R_1$ to a path of the form $SST$ which uses $e_2$. Otherwise, since $v_{k'}$ is not a source vertex, $R_1$ is a forward path. Using (P\ref{P7}), we find a forward path of the form $SBT$ for $R_1^G$.
So let us suppose that $v_{k'}$ is a source vertex. Let $b_1\in B$ and set $S^*:=S\cup A\cup B \setminus\{b_1\}$ and $T^*:=T\cup\{b_1\}$. Let $R_1:=(v_{k'-1}v_{k'})=v_{k'}v_{k'-1}$ and $R_2:=v_nv_1$. We know that $v_nv_1, v_{k'}v_{k'-1}\in E(C)$, so we can map these edges to $e_1$ and $e_2$.
In each of the above, we define $P_S$ and $P_T$ to be the paths, which are internally disjoint from $R_1$ and $R_2$, such that $C=(P_SR_1P_TR_2)$. Note that (i)--(iv) are satisfied.
\medskip
\noindent \textbf{Case 2: } \emph{$a\geq 2$}
Apply Proposition~\ref{prop:2edges}(\ref{prop:2edges6}) to find two disjoint edges $e_1, e_2 \in E(S,T\cup A) \cup E(T, S\cup B)$. Choose any distinct $x,y\in A\cup B$ such that $x$ and $y$ are disjoint from $e_1$ and $e_2$.
First let us suppose that $v_{k'}$ is a sink vertex. If $e_1, e_2 \in E(S,A)\cup E(T, S \cup B)$, set $S^*:=S\cup A\cup B$, $T^*:=T$, $R_1:=(v_{k'-1}v_{k'}v_{k'+1})$ and $R_2:=(v_nv_1v_2)$. If $e_1\in E(T, S \cup B)$, use (P\ref{P2}) and (P\ref{P7}) to find a path of the form $S(S\cup B)T$ which uses $e_1$ for $R_1^G$. If $e_1 \in E(S,A)$, we use (P\ref{P6}) to find a path of the form $SAT$ using $e_1$ for $R_1^G$. In the same way, we find a copy $R_2^G$ of $R_2$. If exactly one of the edges $e_1, e_2$, say $e_2$, lies in $E(S,T)$, set $S^*:=(S\cup A\cup B)\setminus\{x\}$, $T^*:=T\cup \{x\}$, $R_1:=(v_{k'-1}v_{k'}v_{k'+1})$ and $R_2:=(v_1v_2)$. Then $v_2v_1$ can be mapped to $e_2$ and we use $e_1$ to find a copy $R_1^G$ of $R_1$ as before. If both $e_1,e_2 \in E(S,T)$, set $S^*:=(S\cup A\cup B)\setminus\{x,y\}$, $T^*:=T\cup \{x,y\}$, $R_1:=(v_{k'-1}v_{k'})$ and $R_2:=(v_1v_2)$. Then map $v_2v_1$ and $v_{k'-1}v_{k'}$ to the edges $e_1$ and $e_2$.
Suppose now that $(v_{k'-1}v_{k'}v_{k'+1})$ is a consistently oriented path. If $e_2 \not \in E(S,T)$, let $S^*:=S\cup A\cup B$, $T^*:=T$, $R_1:=(v_{k'-1}v_{k'}v_{k'+1})$ and $R_2:=(v_nv_1v_2)$ and, if $e_2\in E(S,T)$, let $S^*:=(S\cup A\cup B)\setminus\{x\}$, $T^*:=T\cup \{x\}$, $R_1:=(v_{k'-1}v_{k'}v_{k'+1})$ and $R_2:=(v_1v_2)$. Then use the edge $e_2$ to find a copy $R_2^G$ of $R_2$ as above. We use (P\ref{P6}) or (P\ref{P7}) to map $R_1$ to a backward path of the form $SAT$ or a forward path of the form $SBT$ as appropriate.
We let $P_S$ and $P_T$ be paths which are internally disjoint from $R_1$ and $R_2$ such that $C=(P_SR_1P_TR_2)$. Then (i)--(iv) are satisfied.
It remains to consider the case when $v_{k'}$ is a source vertex. We now consider the vertex $v_{k'-1}$ instead of $v_{k'}$. Note that $C$ cannot contain two adjacent source vertices, so either $v_{k'-1}$ is a sink vertex or $(v_{k'-2}v_{k'-1}v_{k'})$ is a backward path. We proceed as previously. Note that when we define the path $P_T$ it will have one additional vertex and so we must allocate an additional vertex from $A\cup B$ to $T^*$; we are able to do this since $a+b>3$.%
\COMMENT{Suppose that $v_{k'-1}$ is a sink vertex. If $e_1, e_2 \in E(S,A)\cup E(T, S \cup B)$, set $S^*:=(S\cup A\cup B)\setminus\{x\}$, $T^*:=T\cup \{x\}$, $R_1:=(v_{k'-2}v_{k'-1}v_{k'})$ and $R_2:=(v_nv_1v_2)$. Find $R_1^G$ and $R_2^G$ as before. If exactly one $e_i$, $e_2$ say, lies in $E(S,T)$, we set $S^*:=S\cup A\cup B\setminus\{x,y\}$, $T^*:=T\cup \{x,y\}$, $R_1:=(v_{k'-2}v_{k'-1}v_{k'})$ and $R_2:=(v_1v_2)$. Then $v_2v_1$ can be mapped to $e_2$ and we use $e_1$ to find a copy of $R_1^G$ as before. If both $e_1,e_2 \in E(S,T)$, let $z\in A\cup B$, $z\neq x,y$ be arbitrary. Set $S^*:=S\cup A\cup B\setminus\{x,y,z\}$, $T^*:=T\cup \{x,y,z\}$, $R_1:=(v_{k'-2}v_{k'-1})$ and $R_2:=(v_1v_2)$. Then map $v_2v_1$ and $v_{k'-2}v_{k'-1}$ to the edges $e_1$ and $e_2$.\\
Suppose now that $(v_{k'-2}v_{k'-1}v_{k'})$ is a backward path. If $e_2 \not \in E(S,T)$, let $S^*:=(S\cup A\cup B)\setminus\{x\}$, $T^*:=T\cup \{x\}$, $R_1:=(v_{k'-2}v_{k'-1}v_{k'})$ and $R_2:=(v_nv_1v_2)$ and, if $e_2\in E(S,T)$, let $S^*:=S\cup A\cup B\setminus\{x,y\}$, $T^*:=T\cup \{x,y\}$, $R_1:=(v_{k'-2}v_{k'-1}v_{k'})$ and $R_2:=(v_1v_2)$. Then use the edge $e_2$ to find a copy of $R_2^G$ as above. We use (P\ref{P7}) to map $R_1$ to a backward path of the form $SAT$. Let $P_S$ and $P_T$ be such that $C=(P_SR_1P_TR_2)$.}
\end{proofof}
Apply Lemma~\ref{lem:linking1} to $G$ and $C$ to obtain internally disjoint subpaths $R_1$, $R_2$, $P_S$ and $P_T$ of $C$ as well as a partition $S^*,T^*$ of $V(G)$. Let $R_i^G$ be copies of $R_i$ in $G$ satisfying the properties of the lemma. Write $R'$ for the set of interior vertices of the $R_i^G$. Define $G_S:= G[S^*\setminus R']$ and $G_T:=G[T^*]$. Let $x_T$ and $x_S$ be the images of the final vertices of $R_1$ and $R_2$ and let $y_S$ and $y_T$ be the images of the initial vertices of $R_1$ and $R_2$, respectively. Also, let
$V_S:=S^* \cap (A \cup B)$ and $V_T:=T^* \cap (A \cup B)$.
The following proposition allows us to embed copies of $P_S$ and $P_T$ in $G_S$ and $G_T$. The idea is to greedily find a short path which will contain all of the vertices in $V_S$ and $V_T$ and any vertices of ``low degree''. We then use the fact that the remaining graph is nearly complete to finish the embedding.
\begin{prop}\label{prop:pathembedding}
Let $G_S$, $P_S$, $P_T$, $x_S$, $y_S$, $x_T$ and $y_T$ be as defined above.
\begin{enumerate}[\rm(i)]
\item There is a copy of $P_S$ in $G_S$ such that the initial vertex of $P_S$ is mapped to $x_S$ and the final vertex is mapped to $y_S$.
\item There is a copy of $P_T$ in $G_T$ such that the initial vertex of $P_T$ is mapped to $x_T$ and the final vertex is mapped to $y_T$.
\end{enumerate}
\end{prop}
\begin{proof}
We prove (i); the proof of (ii) is identical. Write $P_S=(u_1u_2\dots u_k)$. An averaging argument shows that there exists a subpath $P$ of $P_S$ of order at most ${\varepsilon}_4 n$ containing at least $\sqrt{{\varepsilon}_3}n$ sink vertices.%
\COMMENT{Consider a partition of $P_S$ into subpaths of order ${\varepsilon}_4 n$. If there does not exist such $P$ then the number of sink vertices in $P_S$ is at most $|P_S|(\sqrt{{\varepsilon}_3}n+1)/({\varepsilon}_4n) < {\varepsilon}_4n/8$, a contradiction.}
Let
$X:= \{x\in S : d^+_S(x)< n/2-{\varepsilon}_3 n \text{ or } d^-_S(x) < n/2-{\varepsilon}_3 n\}$. By (P\ref{P3}), $|X| \leq {\varepsilon}_3n$ and so, using (P\ref{P2}), we see that every vertex $x\in X$ is adjacent to at least $\eta_2n/2$ vertices in $S\setminus X$. So we can assume that $x_S, y_S \in S \setminus X$ since otherwise we can map the second and penultimate vertices of $P_S$ to vertices in $S\setminus X$ and consider these vertices instead.
Let $u_1'$ be the initial vertex of $P$ and $u_k'$ be the final vertex. Define $m_1 := d_{P_S}(u_1,u_1')+1$ and $m_2:=d_{P_S}(u_k',u_k)+1$. Suppose first that $m_1,m_2 > \eta_2^2n$. We greedily find a copy $P^G$ of $P$ in $G_S$ which covers all vertices in $V_S\cup X$ such that $u_1'$ and $u_k'$ are mapped to vertices $s_1,s_2\in S \setminus X$. This is possible since any two vertices in $X$ can be joined by a path of length at most three of any given orientation, by (P\ref{P2}) and (P\ref{P3}), and we can use each vertex in $V_S$ as the image of a sink or source vertex of $P$.
Partition $(V(G_S)\setminus V(P^G)) \cup \{s_1,s_2\}$, arbitrarily, into two sets $L_1$ and $L_2$ of size $m_1$ and $m_2$ respectively so that $s_1,x_S \in L_1$ and $s_2,y_S \in L_2$. Consider the graphs $G_i:=G_S[L_i]$ for $i=1,2$. Then (P\ref{P3}) implies that
$\delta(G_i) \geq m_i-{\varepsilon}_3 n -{\varepsilon}_4n \geq 7m_i/8,$ where the second inequality uses $m_i > \eta_2^2 n \geq 8({\varepsilon}_3+{\varepsilon}_4)n$.
Applying Proposition~\ref{prop:completepath}(i), we find suitably oriented Hamilton paths from $s_1$ to $x_S$ in $G_1$ and $s_2$ to $y_S$ in $G_2$ which, when combined with $P$, form a copy of $P_S$ in $G_S$ (with endvertices $x_S$ and $y_S$).
It remains to consider the case when $m_1< \eta_2^2n$ or $m_2 < \eta_2^2n$. Suppose that the former holds (the latter is similar). Let $P'$ be the subpath of $P_S$ between $u_1$ and $u_k'$. So $P \subseteq P'$. Similarly as before, we first greedily find a copy of $P'$ in $G_S$ which covers all vertices of $X\cup V_S$ and then extend this to an embedding of $P_S$.%
\COMMENT{Greedily embed into $G_S$ a path starting from $x_S$ which is isomorphic to $P'$ and incorporates all vertices in $V_S$, as sink or source vertices, and all vertices in $X$. We ensure that the final vertex of $P'$, $u_1'$, is mapped to a vertex $s_1\in S \setminus X$. Now $|P'| \leq 2\eta_2^2n$. Let $S':=(V(G_S) \setminus V((P')^G))\cup \{s_1\}$ and consider the graph $G_S':=G_S[S']$. Note that $\delta^0(G_S') \geq n/2-{\varepsilon}_3n -2\eta_2^2n \geq 7|G_S'|/8$ so $G_S'$ has a spanning path of any given orientation from $s_1$ to $y_S$, by Proposition~\ref{prop:completepath}(i), and we obtain an embedding of $P_S$ in $G_S$.}
\end{proof}
Proposition~\ref{prop:pathembedding} allows us to find copies of $P_S$ and $P_T$ in $G_S$ and $G_T$ with the desired endvertices. Combining these with $R_1^G$ and $R_2^G$ found in Lemma~\ref{lem:linking1}, we obtain a copy of $C$ in $G$. This proves Lemma~\ref{lem:ST} when $\sigma(C) \geq {\varepsilon}_4n$.
\subsection{$C$ has few sink vertices, $\sigma(C)<{\varepsilon}_4n$}
Our approach will closely follow the argument when $C$ had many sink vertices. The main difference will be how we cover the exceptional vertices, i.e. the vertices in $A\cup B$. We will call a consistently oriented subpath of $C$ which has length $20$ a \emph{long run}. If $C$ contains few sink vertices, it must contain many of these long runs. So, whereas previously we used sink and source vertices, we will now use long runs to cover the vertices in $A\cup B$.
\begin{prop}\label{prop:goodpaths}
Suppose that $1/n \ll {\varepsilon} \ll 1$ and $n/4 \leq k\leq 3n/4$. Let $C$ be an oriented cycle with $\sigma(C) < {\varepsilon} n$. Then we can write $C$ as $(u_1u_2 \dots u_n)$ such that there exist:
\begin{enumerate}[\rm(i)]
\item Long runs $P_1, P_2$ such that $P_1$ is a forward path and $d_C(P_1, P_2)=k$,
\item Long runs $P_1', P_2', P_3', P_4'$ such that $d_C(P_i', P_{i+1}')=\lfloor n/4 \rfloor$ for $i=1,2,3$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $P$ be a subpath of $C$ of length $n/8$. Let $\mathcal{Q}$ be a consistent collection of vertex disjoint long runs in $P$ of maximum size. Then $|\mathcal{Q}| \geq 2{\varepsilon} n$, with room to spare. We can write $C$ as $(u_1u_2 \dots u_n)$ so that the long runs in $\mathcal{Q}$ are forward paths.
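To justify the bound on $|\mathcal{Q}|$, note that $P$ contains at least $\lfloor n/168 \rfloor$ disjoint subpaths of length $20$, each of which is a long run unless it contains a sink or source vertex; $C$ contains fewer than $2{\varepsilon} n$ such vertices, and at least half of the long runs obtained are oriented consistently with each other. Hence
$$|\mathcal{Q}| \geq \frac{1}{2}\left(\left\lfloor \frac{n}{168} \right\rfloor - 2{\varepsilon} n\right) \geq 2{\varepsilon} n.$$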
Suppose that (i) does not hold. For each $Q_i \in \mathcal{Q}$, let $Q_i'$ be the path of length $20$ such that $d_C(Q_i,Q_i')=k$. Since $Q_i'$ is not a long run, $Q_i'$ must contain at least one sink or source vertex. The paths $Q_i'$ are disjoint so, in total, $C$ must contain at least
$|\mathcal{Q}|/2 \geq {\varepsilon} n > \sigma(C)$ sink vertices, a contradiction. Hence (i) holds.
We call a collection of four disjoint long runs $P_1, P_2, P_3, P_4$ \emph{good} if $P_1 \in \mathcal{Q}$ and $d_C(P_i,P_{i+1}) = \lfloor n/4 \rfloor$ for all $i=1, 2, 3$. Suppose $C$ does not contain a good collection of long runs. In particular, this means that each long run in $\mathcal{Q}$ does not lie in a good collection. For each path $Q_i \in \mathcal{Q}$, let $Q_{i,1}, Q_{i,2}, Q_{i,3}$ be subpaths of $C$ of length $20$ such that $d_C(Q_i, Q_{i,j})=j\lfloor n/4 \rfloor$. Since $\{Q_i, Q_{i,1}, Q_{i,2}, Q_{i,3}\}$ does not form a good collection, at least one of the $Q_{i,j}$ must contain a sink or source vertex. The paths $Q_{i,j}$ where $Q_i \in \mathcal{Q}$ and $j=1,2,3$ are disjoint so, in total, $C$ must contain at least
$|\mathcal{Q}|/2 \geq {\varepsilon} n>\sigma(C)$ sink vertices, which is a contradiction. This proves (ii).
\end{proof}
The following proposition finds a collection of edges oriented in an atypical direction for an ${\varepsilon}$-extremal graph. We will use these edges to find consistently oriented $S$- and $T$-paths covering all of the vertices in $A\cup B$. This proposition will be used again in Section~\ref{sec:ABST1}, where it allows us to correct an imbalance in the sizes of $A$ and $B$.
\begin{prop}\label{prop:dedges}
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Let $d\geq 0$ and suppose $A, B, S, T$ is a partition of $V(G)$ into sets of size $a,b,s,t$ with $t\geq s\geq d+2$ and $b = a+d$. Then $G$ contains a collection $M$ of $d+1$ edges in $E(T,S \cup B) \cup E(B, S)$ satisfying the following. The endvertices of $M$ outside $B$ are distinct and each vertex in $B$ is the endvertex of at most one $TB$-edge and at most one $BS$-edge in $M$. Moreover, if $e(T,S)>0$, then $M$ contains a $TS$-edge.
\end{prop}
\begin{proof}
Let $k:=t-s$.
We define a bipartite graph $G'$ with vertex classes $S':=S\cup B$ and $T':=T\cup B$ together with all edges $xy$ such that $x \in S', y\in T'$ and $yx\in E(T,S \cup B) \cup E(B, S)$. We claim that $G'$ has a matching of size $d+2$.
To prove the claim, suppose that $G'$ has a vertex cover $X$ of size $|X|< d+2$. Then $|X\cap S'|<(d-k)/2+1$ or $|X\cap T'|<(d+k)/2+1$.
Suppose that the former holds and consider any vertex $t_1 \in T \setminus X$. Since $\delta^+(G) \geq n/2$ and $a+t=(n-d+k)/2$, $t_1$ has at least $(d-k)/2+1$ outneighbours in $S'$. But these vertices cannot all be covered by $X$. So we must have that $|X\cap T'|<(d+k)/2+1$. Consider any vertex $s_1 \in S \setminus X$. Now $\delta^-(G)\geq n/2$ and $a+s=(n-d-k)/2$, so $s_1$ must have at least $(d+k)/2+1$ inneighbours in $T'$. But not all of these vertices can be covered by $X$. Hence, any vertex cover of $G'$ must have size at least $d+2$ and so K\"onig's theorem implies that $G'$ has a matching of size $d+2$.%
\COMMENT{Argument is fine if $(d-k)/2+1 \leq 0$. Then we must have that $|X\cap T'|<d+2\leq (d+k)/2+1$ and argument follows through.}
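For concreteness, the degree bound for $t_1$ reads as follows (the computation for $s_1$ is symmetric): all outneighbours of $t_1$ avoiding $S'=S\cup B$ lie in $A\cup (T\setminus\{t_1\})$, so
$$d^+_{S'}(t_1)\geq \frac{n}{2}-(a+t-1)=\frac{n}{2}-\frac{n-d+k}{2}+1=\frac{d-k}{2}+1.$$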
If $e(T,S)>0$, either the matching contains a $TS$-edge, or we can choose any $TS$-edge $e$ and at least $d$ of the edges in the matching will be disjoint from $e$. This corresponds to a set of $d+1$ edges in $E(T,S \cup B) \cup E(B, S)$ in $G$ with the required properties.
\end{proof}
We define a \emph{good path system} $\mathcal{P}$ to be a collection of disjoint $S$- and $T$-paths such that each path $P \in \mathcal{P}$ is consistently oriented, has length at most six and covers at least one vertex in $A\cup B$. Each good path system $\mathcal{P}$ gives rise to a modified partition $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ of the vertices of $G$ (we allow $A_{\mathcal{P}}$, $B_{\mathcal{P}}$ to be empty) as follows. Let $\textrm{Int}_S(\mathcal{P})$ be the set of all interior vertices on the $S$-paths in $\mathcal{P}$ and $\textrm{Int}_T(\mathcal{P})$ be the set of all interior vertices on the $T$-paths. We set $A_{\mathcal{P}}:=A \setminus V(\mathcal{P})$, $B_{\mathcal{P}}:=B \setminus V(\mathcal{P})$, $S_{\mathcal{P}}:=(S\cup \textrm{Int}_S(\mathcal{P}))\setminus\textrm{Int}_T(\mathcal{P})$ and $T_{\mathcal{P}}:=(T\cup \textrm{Int}_T(\mathcal{P})) \setminus \textrm{Int}_S(\mathcal{P})$ and say that $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ is the \emph{$\mathcal{P}$-partition} of $V(G)$.
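As an illustration (a toy instance of the definition, not used elsewhere): if $\mathcal{P}$ consists of a single $S$-path of the form $SBS$ with interior vertex $v\in B$, then
$$A_{\mathcal{P}}=A, \qquad B_{\mathcal{P}}=B\setminus\{v\}, \qquad S_{\mathcal{P}}=S\cup\{v\}, \qquad T_{\mathcal{P}}=T,$$
so $v$ is treated as a vertex of $S$ from this point on.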
\begin{lemma}\label{lem:linking2}
Suppose that $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll \eta_2 \ll 1.$ Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (P\ref{P*})--(P\ref{P7}). Let $C$ be an oriented cycle on $n$ vertices with $\sigma(C) < {\varepsilon}_4n$.
Then there exists $t^*$ such that one of the following holds:
\begin{itemize}
\item There exist internally disjoint paths $P_S, P_T, R_1, R_2$ such that:
\begin{enumerate}[\rm(i)]
\item $C=(P_SR_1P_TR_2)$;
\item $|P_T|=t^*$;
\item $R_1$ and $R_2$ are paths of length two and $G$ contains disjoint copies $R_i^G$ of $R_i$ whose interior vertices lie in $V(G) \setminus T$. Moreover, $R_1^G$ is an $ST$-path and $R_2^G$ is a $TS$-path.
\end{enumerate}
\item There exist internally disjoint paths $P_S, P_S', P_T, P_T', R_1, R_2, R_3, R_4$ such that:
\begin{enumerate}[\rm(i)]
\item $C=(P_SR_1P_TR_2P_S'R_3P_T'R_4)$;
\item $|P_T|+|P_T'|=t^*$ and $|P_S|, |P_S'|, |P_T|, |P_T'| \geq n/8$;
\item $R_1,R_2,R_3, R_4$ are paths of length two and $G$ contains disjoint copies $R_i^G$ of $R_i$ whose interior vertices lie in $V(G) \setminus T$. Moreover, $R_1^G$ and $R_3^G$ are $ST$-paths and $R_2^G$ and $R_4^G$ are $TS$-paths.
\end{enumerate}
\end{itemize}
Furthermore, $G$ has a good path system $\mathcal{P}$ such that the paths in $\mathcal{P}$ are disjoint from each $R_i^G$, $\mathcal{P}$ covers $(A\cup B)\setminus \bigcup V(R_i^G)$ and the $\mathcal{P}$-partition $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ of $V(G)$ satisfies $|T_{\mathcal{P}}|=t^*$.
\end{lemma}
\begin{proof}
Let $d:=b-a$ and $k:=t-s$.
We first obtain a good path system $\mathcal{P}_0$ covering $A\cup B$ as follows. Apply Proposition~\ref{prop:dedges} to obtain a collection $M_0$ of $d+1$ edges as described in the proposition. Choose $M\subseteq M_0$ of size $d$ such that $M$ contains a $TS$-edge if $d\geq 1$ and $e(T,S)>0$. We use each edge $e\in M$ together with properties (P\ref{P2}), (P\ref{P4}) and (P\ref{P7}) to cover one vertex in $B$ by a consistently oriented path of length at most six as follows. If $e\in E(T,B)$ and $e$ is disjoint from all other edges in $M$, find a consistently oriented path of the form $TBT$ using $e$. If $e\in E(B,S)$ and $e$ is disjoint from all other edges in $M$, find a consistently oriented path of the form $SBS$ using $e$. If $e\in E(T,S)$, we note that (P\ref{P2}), (P\ref{P4}) and (P\ref{P7}) allow us to find a consistently oriented path of length three between any vertex in $B$ and any vertex in $T$. So we can find a consistently oriented path of the form $SB(T)^3S$ which uses $e$. Finally, if $e\in E(T,B)$ and $e$ shares an endvertex with another edge $e'\in M\cap E(B,S)$, we find a consistently oriented path of the form $SB(T)^3BS$ using $e$ and $e'$. This path uses two edges in $M$ but covers two vertices in $B$. Since we have many choices for each such path, we can choose them to be disjoint, so $M$ allows us to find a good path system $\mathcal{P}_1$ covering $d$ vertices in $B$.
Label the vertices in $A$ by $a_1, a_2, \dots, a_a$ and the remaining vertices in $B$ by $b_1, b_2, \dots, b_a$. We now use (P\ref{P5})--(P\ref{P7}) to find a consistently oriented $S$- or $T$-path $L_i$ covering each pair $a_i, b_i$. If $1\leq i \leq \lceil(4a+k)/8\rceil$, cover the pair $a_i,b_i$ by a path of the form $SBTAS$. If $\lceil(4a+k)/8\rceil<i \leq a$ cover the pair $a_i,b_i$ by a path of the form $TASBT$.%
\COMMENT{bound $\lceil(4a+k)/8\rceil$ used in Case~2.3.}
Let $\mathcal{P}_2:=\bigcup_{i=1}^a L_i$.
We are able to choose all of these paths so that they are disjoint and thus obtain a good path system $\mathcal{P}_0:=\mathcal{P}_1\cup \mathcal{P}_2$ covering $A\cup B$. Let $A_{\mathcal{P}_0}, B_{\mathcal{P}_0}, S_{\mathcal{P}_0}, T_{\mathcal{P}_0}$ be the $\mathcal{P}_0$-partition of $V(G)$ and let $t':=|T_{\mathcal{P}_0}|$, $s':=|S_{\mathcal{P}_0}|$.
By Proposition~{\ref{prop:goodpaths}}(i), we can enumerate the vertices of $C$ so that there are long runs $P_1, P_2$ such that $P_1$ is a forward path and $d_C(P_1, P_2) = t'$. We will find consistently oriented $ST$- and $TS$-paths for $R_1^G$ and $R_2^G$ which depend on the orientation of $P_2$. The paths $R_1$ and $R_2$ will be consistently oriented subpaths of $P_1$ and $P_2$ respectively, whose position will be chosen later.
\medskip
\noindent \textbf{Case 1: } \emph{$b\geq a+2$.}
Suppose first that $P_2$ is a backward path. If $\mathcal{P}_1$ contains a path of the form $SB(T)^3BS$, let $b_0$ and $b_0'$ be the two vertices in $B$ on this path. Otherwise, let $b_0$ and $b_0'$ be arbitrary vertices in $B$ which are covered by $\mathcal{P}_1$. Use (P\ref{P7}) to find a forward path for $R_1^G$ which is of the form $S\{b_0\}T$. We also find a backward path of the form $T\{b_0'\}S$ for $R_2^G$. We choose the paths $R_1^G$ and $R_2^G$ to be disjoint from all paths in $\mathcal{P}_0$ which do not contain $b_0$ or $b_0'$.
Suppose now that $P_2$ is a forward path. If $a\geq 1$, consider the path $L_1\in \mathcal{P}_2$ covering $a_1\in A$ and $b_1\in B$. Find forward paths of the form $S\{b_1\}T$ for $R_1^G$ and $T\{a_1\}S$ for $R_2^G$, using (P\ref{P6}) and (P\ref{P7}), which are disjoint from all paths in $\mathcal{P}_0\setminus\{L_1\}$. Finally, we consider the case when $a=0$. Recall that $e(T,S)>0$ by Proposition~\ref{prop:2edges}(ii) and so $M$ contains a $TS$-edge. Hence there is a path $P'$ in $\mathcal{P}_1$ of the form $SB(T)^3S$, covering a vertex $b_0 \in B$ and an edge $t_1s_1\in E(T,S)$, say. We use (P\ref{P2}) and (P\ref{P7}) to find forward paths of the form $S\{b_0\}T$ for $R_1^G$ and $\{t_1\}\{s_1\}S$ for $R_2^G$ which are disjoint from all paths in $\mathcal{P}_0\setminus\{P'\}$.
Obtain the good path system $\mathcal{P}$ from $\mathcal{P}_0$ by removing all paths meeting $R_1^G$ or $R_2^G$. Let $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ be the $\mathcal{P}$-partition of $V(G)$ and $t^*:=|T_{\mathcal{P}}|$. The only vertices which could have moved to obtain $T_{\mathcal{P}}$ from $T_{\mathcal{P}_0}$ are interior vertices on the paths in $\mathcal{P}_0\setminus \mathcal{P}$, so $|t^*-t'| \leq 2\cdot 5=10$. Thus we can choose $R_1$ and $R_2$ to be subpaths of length two of $P_1$ and $P_2$ so that $|P_T|=t^*$, where $P_S$ and $P_T$ are defined by $C=(P_SR_1P_TR_2)$.
\medskip
\noindent \textbf{Case 2: } \emph{$b\leq a+1$.}
\medskip
\noindent \textbf{Case 2.1: } \emph{$a\leq 1$.}
If $a=b$, by Proposition~\ref{prop:2edges}(\ref{prop:2edges1}) we can find disjoint $e_1, e_2\in E(S,T)$ and disjoint $e_3 \in E(S,T)$, $e_4 \in E(T,S)$. Note that $\mathcal{P}_0=\mathcal{P}_2$, since $a=b$, so we may assume that all paths in $\mathcal{P}_0$ are disjoint from $e_1,e_2,e_3, e_4$. If $P_2$ is a forward path, find a forward path of the form $SST$ for $R_1^G$ using $e_3$ and a forward path of the form $TSS$ for $R_2^G$ using $e_4$. If $P_2$ is a backward path, find a forward path of the form $SST$ for $R_1^G$ using $e_1$ and a backward path of the form $TSS$ for $R_2^G$ using $e_2$. In both cases, we choose $R_1^G$ and $R_2^G$ to be disjoint from all paths in $\mathcal{P}_0$.
If $b=a+1$, note that there exist $e_1\in E(S,T)$ and $e_2\in E(T,S)$. (To see this, use that $\delta^0(G) \geq n/2$ and the fact that (P\ref{P6}) and (P\ref{P7}) imply that $|\{x\in S: N^+_A(x), N^-_B(x)=\emptyset\}|\geq n/4$.)%
\COMMENT{We have $a=0, b=1$ or $a=1, b=2$. Let $S':=\{x\in S: N^+_A(x), N^-_B(x)=\emptyset\}$. By (P\ref{P6}) and (P\ref{P7}), $|S'|\geq n/4$. Note that $a+s \leq (n-1)/2$ so each vertex in $S'$ has at least two inneighbours in $T$. This gives disjoint $TS$-edges $e_1$ and $e_1'$. We also note that $b+s \leq (n+1)/2$ so each vertex in $S'$ must have at least one outneighbour in $T$. Choose a vertex $x$ in $S'$ which does not lie on either $e_1$ or $e_1'$ and let $e_2$ be an $ST$-edge using $x$. At least one of $e_1, e_1'$ must be disjoint from $e_2$.}
We may assume that all paths in $\mathcal{P}_2$ are disjoint from $e_1,e_2$. Let $b_0\in B$ be the vertex covered by the single path in $\mathcal{P}_1$. Find a forward path of the form $S\{b_0\}T$ for $R_1^G$, using (P\ref{P7}). Find a consistently oriented path of the form $TSS$ for $R_2^G$ which uses $e_1$ if $P_2$ is a backward path and $e_2$ if $P_2$ is a forward path. Choose the paths $R_1^G$ and $R_2^G$ to be disjoint from the paths in $\mathcal{P}_0\setminus \mathcal{P}_1=\mathcal{P}_2$.
In both cases, we obtain the good path system $\mathcal{P}$ from $\mathcal{P}_0$ by removing at most one path which meets $R_1^G$ or $R_2^G$. Let $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ be the $\mathcal{P}$-partition of $V(G)$ and let $t^*:=|T_{\mathcal{P}}|$. The only vertices which could have moved to obtain $T_{\mathcal{P}}$ from $T_{\mathcal{P}_0}$ are interior vertices on the path in $\mathcal{P}_0\setminus \mathcal{P}$ if $\mathcal{P}_0\neq \mathcal{P}$, so $|t^*-t'| \leq 5$. So we can choose subpaths $R_i$ of $P_i$ so that $|P_T|=t^*$, where $P_S$ and $P_T$ are defined by $C=(P_SR_1P_TR_2)$.
\medskip
\noindent \textbf{Case 2.2: } \emph{$2\leq a \leq k$.}
If $P_2$ is a forward path, consider $a_1\in A$ and $b_1\in B$ which were covered by the path $L_1\in \mathcal{P}_0$. Use (P\ref{P6}) and (P\ref{P7}) to find forward paths, disjoint from all paths in $\mathcal{P}_0\setminus\{L_1\}$, of the form $S\{b_1\}T$ and $T\{a_1\}S$ for $R_1^G$ and $R_2^G$ respectively.
Suppose now that $P_2$ is a backward path. We claim that $G$ contains $2-d$ disjoint $ST$-edges. Indeed, suppose not. Then $d^+_T(x) \leq 1-d$ for all but at most one vertex in $S$. Note that $b+s = (n-k+d)/2$, so $d^+_{A\cup T}(x)\geq (k-d)/2+1$ for all $x\in S$. So
$$e(S,A) \geq (s-1)((k-d)/2+1-(1-d))=(s-1)(k+d)/2 \geq nk/8\geq na/8.$$
Hence, there is a vertex $x \in A$ with $d^-_S(x) \geq n/8$, contradicting (P\ref{P6}). Let $E=\{e_i:1\leq i \leq 2-d\}$ be a set of $2-d$ disjoint $ST$-edges. We may assume that $\mathcal{P}_2$ is disjoint from $E$.
If $a=b$, use (P\ref{P2}) to find a forward path of the form $SST$ using $e_1$ for $R_1^G$ and a backward path of the form $TSS$ using $e_2$ for $R_2^G$. If $b=a+1$, let $b_0\in B$ be the vertex covered by the single path in $\mathcal{P}_1$. Use (P\ref{P2}) and (P\ref{P7}) to find a forward path of the form $S\{b_0\}T$ for $R_1^G$ and a backward path of the form $TSS$ using $e_1$ for $R_2^G$. We choose the paths $R_1^G$ and $R_2^G$ to be disjoint from all paths in $\mathcal{P}_2$.
In both cases, we obtain the good path system $\mathcal{P}$ from $\mathcal{P}_0$ by removing at most one path which meets $R_1^G$ or $R_2^G$. Let $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ be the $\mathcal{P}$-partition of $V(G)$ and $t^*:=|T_{\mathcal{P}}|$. The only vertices which could have moved to obtain $T_{\mathcal{P}}$ from $T_{\mathcal{P}_0}$ are interior vertices on the path in $\mathcal{P}_0\setminus \mathcal{P}$ if $\mathcal{P}_0\neq \mathcal{P}$, so $|t^*-t'| \leq 5$. Thus we can choose $R_1$ and $R_2$ to be subpaths of length two of $P_1$ and $P_2$ so that $|P_T|=t^*$, where $P_S$ and $P_T$ are defined by $C=(P_SR_1P_TR_2)$.
\medskip
\noindent \textbf{Case 2.3: } \emph{$a \geq \max\{2,k\}$.}
We note that
\begin{align*}
t'-s'&=|(T\cup \textrm{Int}_T(\mathcal{P}_0))\setminus \textrm{Int}_S(\mathcal{P}_0)|-|(S\cup \textrm{Int}_S(\mathcal{P}_0))\setminus\textrm{Int}_T(\mathcal{P}_0)|\\
&= |(T\cup \textrm{Int}_T(\mathcal{P}_2))\setminus \textrm{Int}_S(\mathcal{P}_2)|-|(S\cup \textrm{Int}_S(\mathcal{P}_2))\setminus\textrm{Int}_T(\mathcal{P}_2)|+c\\
&=(t+3a-4\lceil (4a+k)/8\rceil)-(s+4\lceil (4a+k)/8\rceil-a) +c\\
&=4a+k-8\lceil (4a+k)/8\rceil +c
\end{align*}
where $-7\leq c\leq 1$ is a constant representing the contribution of interior vertices on the path in $\mathcal{P}_1$ if $b=a+1$ and $c=0$ if $b=a$.%
\COMMENT{$c=(|\textrm{Int}_T(\mathcal{P}_1)\setminus T|-|\textrm{Int}_S(\mathcal{P}_1) \cap T|)-(|\textrm{Int}_S(\mathcal{P}_1)\setminus S|- |\textrm{Int}_T(\mathcal{P}_1)\cap S|)$. If $\mathcal{P}_1=\emptyset$, $c=0$. If $\mathcal{P}_1$ consists of a path of form $TBT$, $c=1$. Path of form $SBS$, $c=-1$. Path of form $SB(T)^3S$, $c=-3-4=-7$.}
In particular, this implies that $|t'-s'| \leq 15$%
\COMMENT{$-15=4a+k-8((4a+k)/8+1)-7 \leq t'-s'\leq 4a+k-8(4a+k)/8+1=1$}
and
$$(n-15)/2 \leq s',t' \leq (n+15)/2.$$
Apply Proposition~\ref{prop:goodpaths}(ii) to find long runs $P'_1, P'_2, P'_3, P'_4$ such that $d_C(P'_i, P'_{i+1})=\lfloor n/4 \rfloor$ for $i=1,2,3$. Let $x_i$ be the initial vertex of each $P'_i$. If $\{P'_i, P'_{i+2}\}$ is consistent for some $i\in \{1,2\}$, consider $a_1 \in A$, $b_1 \in B$ which were covered by the path $L_1\in \mathcal{P}_0$. If $P'_i, P'_{i+2}$ are both forward paths, let $R_1^G$ and $R_2^G$ be forward paths of the form $S\{b_1\}T$ and $T\{a_1\}S$ respectively. If $P'_i, P'_{i+2}$ are both backward paths, let $R_1^G$ and $R_2^G$ be backward paths of the form $S\{a_1\}T$ and $T\{b_1\}S$ respectively. Choose the paths $R_1^G$ and $R_2^G$ to be disjoint from the paths in $\mathcal{P}:=\mathcal{P}_0\setminus \{L_1\}$. Let $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ be the $\mathcal{P}$-partition of $V(G)$ and let $t^*=|T_{\mathcal{P}}|$. The only vertices which could have been added or removed to obtain $T_{\mathcal{P}}$ from $T_{\mathcal{P}_0}$ are interior vertices on $L_1$, so $(n-15)/2-3 \leq t^* \leq (n+15)/2+3$. Then we can choose $R_1$ and $R_2$ to be subpaths of length two of $P'_i$ and $P'_{i+2}$ so that $|P_T|=t^*$, where $P_S, P_T$ are defined so that $C=(P_SR_1P_TR_2)$.
So let us assume that $\{P'_i, P'_{i+2}\}$ is not consistent for $i=1,2$. We may assume that the paths $P'_1$ and $P'_4$ are both forward paths, by relabelling if necessary, and we illustrate the situation in Figure~\ref{fig:goodcoll}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.33]{goodcollection.png}
\caption{A good collection of long runs.}\label{fig:goodcoll}
\end{figure}
Consider the vertices $a_i \in A$ and $b_i \in B$ covered by the paths $L_i\in\mathcal{P}_0$ for $i=1,2$. Let $\mathcal{P}:=\mathcal{P}_0\setminus \{L_1, L_2\}$ and let $A_{\mathcal{P}}, B_{\mathcal{P}}, S_{\mathcal{P}}, T_{\mathcal{P}}$ be the $\mathcal{P}$-partition of $V(G)$. Let $t^*:=|T_{\mathcal{P}}|$. The only vertices which can have been added or removed to obtain $T_{\mathcal{P}}$ from $T_{\mathcal{P}_0}$ are interior vertices on the paths $L_1$ and $L_2$, so $(n-15)/2-6 \leq t^* \leq (n+15)/2+6$. Find a forward path of the form $S\{b_1\}T$ for $R_1^G$. Then find backward paths of the form $T\{b_2\}S$ and $S\{a_1\}T$ for $R_2^G$ and $R_3^G$ respectively. Finally, find a forward path of the form $T\{a_2\}S$ for $R_4^G$. We can choose the paths $R_i^G$ to be disjoint from all paths in $\mathcal{P}$. Since $P'_1$ and $P'_2$ are of length $20$ we are able to find subpaths $R_1, R_2, R_3, R_4$ of $P'_1, P'_2, P'_3, P'_4$ so that $|P_T|+|P_T'|=t^*$, where $P_S, P_S', P_T, P_T'$ are defined so that $C=(P_SR_1P_TR_2P_S'R_3P_T'R_4)$.
\end{proof}
In order to prove Lemma~\ref{lem:ST} in the case when $\sigma(C)<{\varepsilon}_4n$, we first apply Lemma~\ref{lem:linking2} to $G$. We now proceed similarly as in the case when $C$ has many sink vertices (see Proposition~\ref{prop:pathembedding}) and so we only provide a sketch of the argument. We first observe that any subpath of the cycle of length $100{\varepsilon}_4n$ must contain at least
\begin{equation}\label{eqn:STlongruns}
\lfloor100{\varepsilon}_4n/21\rfloor-2{\varepsilon}_4n>2{\varepsilon}_3n\geq a+b \geq |\mathcal{P}|
\end{equation}
disjoint long runs. Let $s_1$ be the image of the initial vertex of $P_S$. Let $P_S^*$ be the subpath of $P_S$ formed by the first $100{\varepsilon}_4n$ edges of $P_S$. We can cover all $S$-paths in $\mathcal{P}$ and all vertices $x\in S$ which satisfy $d^+_S(x)< n/2-{\varepsilon}_3 n$ or $d^-_S(x)< n/2-{\varepsilon}_3 n$ greedily by a path in $G$ starting from $s_1$ which is isomorphic to $P_S^*$. Note that \eqref{eqn:STlongruns} ensures that $P_S^*$ contains $|\mathcal{P}|$ disjoint long runs. So we can map the $S$-paths in $\mathcal{P}$ to subpaths of these long runs. Let $P_S''$ be the path formed by removing from $P_S$ all edges in $P_S^*$.
If Lemma~\ref{lem:linking2}(i) holds and thus $P_S$ is the only path to be embedded in $G[S]$, we apply Proposition~\ref{prop:completepath}(i) to find a copy of $P_S''$ in $G[S]$, with the desired endvertices. If Lemma~\ref{lem:linking2}(ii) holds, we must find copies of both $P_S$ and $P_S'$ in $G[S]$. So we split the graph into two subgraphs of the appropriate size before applying Proposition~\ref{prop:completepath}(i) to each.%
\COMMENT{Suppose (ii) (the proof for (i) is the same, without the initial subdivision step). Let $r_i$ be the endpoints of $R_i^G$ in $S$ for $i=1,2,3$ and let $r_4$ be the endpoint of the copy of $P_S^*$. Let $m_1:=|P_S''|$ and $m_2:=|P_S'|$. Arbitrarily partition the vertices $(S\setminus (V(P_S^*)) )\cup \{r_1, r_2, r_3, r_4\}$ into sets $M_1$ and $M_2$ so that $|M_1|=m_1$, $|M_2|=m_2$, $r_1,r_4\in M_1$ and $r_2,r_3\in M_2$. Consider the graphs $G_i:=G[M_i]$ for $i=1,2$. We have that $\delta^0(G_i)\geq m_i-{\varepsilon}_3 n-100{\varepsilon}_4n\geq 7m_i/8$, since $m_i \geq n/9$. Thus Proposition~\ref{prop:completepath}(i) implies that $G_1$ has a Hamilton path isomorphic to $P_S''$ from $r_4$ to $r_1$ and $G_2$ has a Hamilton path isomorphic to $P_S'$ from $r_2$ to $r_3$.}
We do the same to find copies of $P_T$ (or $P_T$ and $P_T'$) in $G[T]$. Thus, we obtain a copy of $C$ in $G$.
This completes the proof of Lemma~\ref{lem:ST}.
\section{$G$ is $AB$-extremal}\label{sec:AB}
The aim of this section is to prove the following lemma which shows that Theorem~\ref{thm:main} is satisfied when $G$ is $AB$-extremal. Recall that an $AB$-extremal graph closely resembles a complete bipartite graph. We will proceed as follows. First we will find a short path which covers all of the exceptional vertices (the vertices in $S\cup T$). It is important that this path leaves a balanced number of vertices uncovered in $A$ and $B$. We will then apply Proposition~\ref{prop:completepath} to the remaining, almost complete, balanced bipartite graph to embed the remainder of the cycle.
\begin{lemma}\label{lem:AB}
Suppose that $1/n \ll {\varepsilon}_3 \ll 1.$
Let $G$ be a digraph on $n$ vertices with $\delta^0(G)\geq n/2$ and assume that $G$ is $AB$-extremal. If $C$ is any orientation of a cycle on $n$ vertices which is not antidirected, then $G$ contains a copy of $C$.
\end{lemma}
If $b>a$, the next lemma implies that $E(B\cup T, B)$ contains a matching of size $b-a+2$. We can use $b-a$ of these edges to pass between vertices in $B$ whilst avoiding $A$, allowing us to correct the imbalance in the sizes of $A$ and $B$.
\begin{prop}\label{prop:balance}
Suppose $1/n \ll{\varepsilon}_3 \ll 1$.
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (Q\ref{Q*})--(Q\ref{Q8}) and $b=a+d$ for some $d>0$. Then there is a matching of size $d+2$ in $E(B\cup T,B)$.
\end{prop}
\begin{proof}
Consider a maximal matching $M$ in $E(B \cup T,B)$ and suppose that $|M|\leq d+1$. Since $a+s \leq (n-d)/2$, each vertex in $B$ has at least $d/2$ inneighbours in $B\cup T$. In particular, since $M$ was maximal, each vertex in $B\setminus V(M)$ has at least $d/2$ inneighbours in $V(M)$. Then there is a $v \in V(M)\subseteq B\cup T$ with
$$d^+_B(v) \geq \frac{(b-2|M|)}{2|M|}\frac{d}{2}\geq \frac{n}{20},$$
contradicting (Q\ref{Q8}). Therefore $|M| \geq d+2$.
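Here the displayed bound comes from an averaging argument: each of the at least $b-2|M|$ vertices of $B\setminus V(M)$ has at least $d/2$ inneighbours in $V(M)$, so
$$e\big(V(M), B\setminus V(M)\big)\geq (b-2|M|)\cdot \frac{d}{2},$$
and these edges are shared among the at most $2|M|$ vertices of $V(M)$.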
\end{proof}
We say that $P$ is an \emph{exceptional cover of $G$} if $P\subseteq G$ is a copy of a subpath of $C$ and
\begin{enumerate}[\rm{(EC}$1$)]
\item $P$ covers $S\cup T$;\label{EC1}
\item both endvertices of $P$ are in $A$;\label{EC2}
\item $|A\setminus V(P)|+1=|B\setminus V(P)|$.\label{EC3}
\end{enumerate}
We will use the following notation when describing the form of a path. If $X,Y\in \{A,B\}$ then we write $X*Y$ for any path which alternates between $A$ and $B$ whose initial vertex lies in $X$ and final vertex lies in $Y$. For example, $A*A(ST)^2$ indicates any path of the form $ABAB\dots ASTST$.
Suppose that $P$ is of the form $Z_1Z_2\dots Z_m$, where $Z_i \in \{A,B,S,T\}$. Let $Z_{i_1}, Z_{i_2},\dots$ be the appearances of $A$ and $B$ in this expression, where $i_j<i_{j+1}$ for all $j$. If $Z_{i_j}=A=Z_{i_{j+1}}$, we say that $Z_{i_{j+1}}$ is a \emph{repeated $A$}. We define a \emph{repeated $B$} similarly. Let $\text{rep}(A)$ and $\text{rep}(B)$ be the numbers of repeated $A$s and repeated $B$s, respectively. Suppose that $P$ has both endvertices in $A$ and $P$ uses $\ell+\text{rep}(B)$ vertices from $B$. Then $P$ will use $\ell+\text{rep}(A)+1$ vertices from $A$ (we add one because both endvertices of $P$ lie in $A$). So we have that
\begin{equation}\label{eqn:repeats}
|B\setminus V(P)|-|A\setminus V(P)|= b-a-\text{rep}(B)+\text{rep}(A)+1.
\end{equation}
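As a toy example of this bookkeeping: if $P$ has the form $ABBAA$ then $\text{rep}(A)=\text{rep}(B)=1$, and $P$ uses three vertices of $A$ and two of $B$, so
$$|B\setminus V(P)|-|A\setminus V(P)|=(b-2)-(a-3)=b-a-\text{rep}(B)+\text{rep}(A)+1,$$
in agreement with (\ref{eqn:repeats}).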
Given a set of edges $M\subseteq E(G)$ we define the graph $G_M\subseteq G$ whose vertex set is $V(G)$ and whose edge set is $E(A,B\cup S)\cup E(B,A\cup T) \cup E(T,A) \cup E(S,B)\cup M\subseteq E(G)$. Informally, in addition to the edges of $M$, $G_M$ has edges between two vertex classes when the bipartite graph they induce in $G$ is dense.\label{G_M}
We will again split our argument into two cases depending on the number of sink vertices in~$C$.
\subsection{Finding an exceptional cover when $C$ has few sink vertices, $\sigma(C) <{\varepsilon}_4n$}
It is relatively easy to find an exceptional cover when $C$ has few sink vertices by observing that $C$ must contain many disjoint consistently oriented paths of length three. We can use these consistently oriented paths to cover the vertices in $S\cup T$ by forward paths of the form $ASB$ or $BTA$, for example.
\begin{prop}\label{prop:excover1}
Suppose $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll 1$.
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (Q\ref{Q*})--(Q\ref{Q8}). If $\sigma(C) <{\varepsilon}_4n$, then there is an exceptional cover of $G$ of length at most $21{\varepsilon}_4n$.
\end{prop}
\begin{proof}
Let $d:=b-a$. Let $P$ be any subpath of $C$ of length $20{\varepsilon}_4n$. Let $\mathcal{Q}$ be a maximum consistent collection of disjoint paths of length three in $P$, such that $d_C(Q, Q')\geq 7$ for all distinct $Q, Q'\in \mathcal{Q}$. Then
$$|\mathcal{Q}| \geq (\lfloor 20{\varepsilon}_4n/7 \rfloor - 2{\varepsilon}_4n)/2 > 4{\varepsilon}_3n>d+s+t.$$
Indeed, $P$ contains $\lfloor 20{\varepsilon}_4n/7 \rfloor$ candidate paths of length three at distance at least seven apart, fewer than $2{\varepsilon}_4n$ of these candidates contain a sink or source vertex, and at least half of the remaining consistently oriented paths share the same direction.
If necessary, reverse the order of all vertices in $C$ so that the paths in $\mathcal{Q}$ are forward paths.
Apply Proposition~\ref{prop:balance} to find a matching $M\subseteq E(B\cup T,B)$ of size $d$ and write $M=\{e_1, \dots, e_m, f_{m+1}, \dots, f_d\}$, where $e_i\in E(B)$ and $f_i\in E(T,B)$. Map the initial vertex of $P$ to any vertex in $A$. We will greedily find a copy of $P$ in $G_M$ which covers $M$ and $S\cup T$ as follows.
Note that, by (Q\ref{Q7}), we can cover each edge $f_i\in M$ by a forward path of the form $BTB$. By (Q\ref{Q6}), each of the vertices in $S$ can be covered by a forward path of the form $ASB$. Similarly, (Q\ref{Q7}) allows us to find a forward path of the form $BTA$ covering each vertex in $T$. Moreover, note that (Q\ref{Q1})--(Q\ref{Q4}) allow us to find a path of length three of any orientation between any pair of vertices $x\in A$ and $y\in B$ using only edges from $E(A,B) \cup E(B,A)$. So we can find a copy of $P$ which covers every edge in $M$ and every vertex in $(S\cup T)\setminus V(M)$ by a copy of a path in $\mathcal{Q}$ and which has the form
$$(A*BB)^m(A*BTB)^{d-m}(A*ASB)^s(A*BT)^{t-d+m}A*X,$$
where $X\in \{A,B\}$. We may assume that $X=A$ by extending the path $P$ by one vertex if necessary. Let $P^G$ denote this copy of $P$ in $G$.
Now (EC\ref{EC1}) and (EC\ref{EC2}) hold. It remains to check (EC\ref{EC3}).
Observe that $P^G$ contains no repeated $A$s and exactly $d$ repeated $B$s; these occur in the subpath of $P^G$ of the form $(A*BB)^m(A*BTB)^{d-m}$. By (\ref{eqn:repeats}), we see that $$|B\setminus V(P^G)|-|A\setminus V(P^G)|= 1,$$ so (EC\ref{EC3}) is satisfied. Hence $P^G$ forms an exceptional cover.
\end{proof}
\subsection{Finding an exceptional cover when $C$ has many sink vertices, $\sigma(C) \geq{\varepsilon}_4n$}
When $C$ is far from being consistently oriented, we use sink and source vertices to cover the vertices in $S\cup T$. A natural approach would be to try to cover the vertices in $S\cup T$ by paths of the form $ASA$ and $BTB$ whose central vertex is a sink or by paths of the form $ATA$ and $BSB$ whose central vertex is a source. In essence, this is what we will do, but there are some technical issues we will need to address. The most obvious is that each time we cover a vertex in $S$ or $T$ by a path of one of the above forms, we will introduce a repeated $A$ or a repeated $B$, so we will need to cover the exceptional vertices in a ``balanced'' way.
Let $P$ be a subpath of $C$ and let $m$ be the number of sink vertices in $P$. Suppose that $P_1,P_2,P_3$ is a partition of $P$ into internally disjoint paths such that $P=(P_1P_2P_3)$. We say that $P_1, P_2, P_3$ is a \emph{useful tripartition of $P$} if there exist $\mathcal{Q}_i\subseteq V(P_i)$ such that:
\begin{itemize}
\item $P_1$ and $P_2$ have even length;
\item $|\mathcal{Q}_i|\geq \lfloor m/12 \rfloor$ for $i=1,2,3$;
\item all vertices in $\mathcal{Q}_1 \cup \mathcal{Q}_3$ are sink vertices and are an even distance apart;
\item all vertices in $\mathcal{Q}_2$ are source vertices and are an even distance apart.%
\end{itemize}
Note that a useful tripartition always exists. We say that $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3$ are \emph{sink/source/sink sets} for the tripartition $P_1, P_2, P_3$. We say that a subpath $L\subseteq P_2$ is a \emph{link} if $L$ has even length and, writing $x$ for the initial vertex and $y$ for the final vertex of $L$, the paths $(P_2x)$ and $(yP_2)$ each contain at least $|\mathcal{Q}_2|/3$ elements of $\mathcal{Q}_2$.
\begin{prop}\label{prop:sinksource}
Let $1/n \ll {\varepsilon} \ll \eta \ll \tau \leq 1$. Let $G$ be a digraph on $n$ vertices and let $A,B,S,T$ be a partition of $V(G)$. Let $S_A, S_B$ be disjoint subsets of $S$ and $T_A, T_B$ be disjoint subsets of $T$. Let $a:=|A|$, $b:=|B|$, $s_A:= |S_A|$, $s_B:=|S_B|$, $t_A:=|T_A|$, $t_B:=|T_B|$ and let $a_1\in A$. Suppose that:
\begin{enumerate}[\rm(i)]
\item $a,b\geq \tau n$;
\item $s_A,s_B,t_A,t_B \leq {\varepsilon} n$;
\item $\delta^0(G[A,B]) \geq \eta n$;
\item $d^\pm _B(x) \geq b -{\varepsilon} n$ for all but at most ${\varepsilon} n$ vertices $x \in A$;
\item $d^\pm _A(x) \geq a -{\varepsilon} n$ for all but at most ${\varepsilon} n$ vertices $x \in B$;
\item $d^-_A(x)\geq \eta n$ for all $x\in S_A$, $d^+_B(x)\geq \eta n$ for all $x\in S_B$, $d^+_A(x)\geq \eta n$ for all $x\in T_A$ and $d^-_B(x)\geq \eta n$ for all $x\in T_B$.
\end{enumerate}
Suppose that $P$ is a path of length at most $\eta^2 n$ which contains at least $200{\varepsilon} n$ sink vertices. Let $P_1, P_2, P_3$ be a useful tripartition of $P$ with sink/source/sink sets $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3$. Let $L\subseteq P_2$ be a link. Suppose that $G\setminus (S_A\cup S_B\cup T_A\cup T_B)$ contains a copy $L^G$ of $L$ which is an $AB$-path if $d_C(P,L)$ is even and a $BA$-path otherwise. Let $r_A$ be the number of repeated $A$s in $L^G$ and $r_B$ be the number of repeated $B$s in $L^G$. Let $G'$ be the graph with vertex set $V(G)$ and edges $$E(A,B\cup S_A)\cup E(B,A\cup T_B) \cup E(T_A, A) \cup E(S_B,B) \cup E(L^G).$$ Then $G'$ contains a copy $P^G$ of $P$ such that:
\begin{itemize}
\item $L^G\subseteq P^G$;
\item $P^G$ covers $S_A, S_B, T_A, T_B$;
\item $a_1$ is the initial vertex of $P^G$;
\item The final vertex of $P^G$ lies in $B$ if $P$ has even length and $A$ if $P$ has odd length;
\item $P^G$ has $s_A+t_A+r_A$ repeated $A$s and $s_B+t_B+r_B$ repeated $B$s.
\end{itemize}
\end{prop}
\begin{proof}
We may assume, without loss of generality, that the initial vertex of $P$ lies in $\mathcal{Q}_1$. If not, let $x$ be the first vertex on $P$ lying in $\mathcal{Q}_1$ and greedily embed the initial segment $(Px)$ of $P$ starting at $a_1$ using edges in $E(A,B)\cup E(B,A)$. Let $a_1'$ be the image of $x$. We can then use symmetry to relabel the sets $A,B, S_A, S_B, T_A, T_B$, if necessary, to assume that $a_1'\in A$.
We will use (vi) to find a copy of $P$ which covers the vertices in $S_A\cup T_B$ by sink vertices in $\mathcal{Q}_1\cup \mathcal{Q}_3$ and the vertices in $S_B\cup T_A$ by source vertices in $\mathcal{Q}_2$. We will use that $|\mathcal{Q}_i|\geq 15{\varepsilon} n$ for all $i$ and also that (iii)--(v) together imply that $G'$ contains a path of length three of any orientation between any pair of vertices $x\in A$ and $y\in B$. Consider any $q_1\in \mathcal{Q}_1$ and $q_2\in \mathcal{Q}_2$. The order in which we cover the vertices will depend on whether $d_C(q_1,q_2)$ is even or odd (note that the parity of $d_C(q_1,q_2)$ does not depend on the choice of $q_1$ and $q_2$).
Suppose first that $d_C(q_1,q_2)$ is even. We find a copy of $P$ in $G'$ as follows. Map the initial vertex of $P$ to $a_1$. Then greedily cover all vertices in $T_B$ so that they are the images of sink vertices in $\mathcal{Q}_1$ using a path $P_1^G$ which is isomorphic to $P_1$ and has the form $(A*BT_BB)^{t_B}A*A$.
Let $x_L$ be the initial vertex of $L$ and $y_L$ be the final vertex. Let $x_L^G$ and $y_L^G$ be the images of $x_L$ and $y_L$ in $L^G$. Cover all vertices in $S_B$ so that they are the images of source vertices in $\mathcal{Q}_2$ using a path isomorphic to $(P_2x_L)$ which starts from the final vertex of $P_1^G$ and ends at $x_L^G$. This path has the form $(A*BS_BB)^{s_B}A*X$, where $X:=A$ if $d_C(P,L)$ is even and $X:=B$ if $d_C(P,L)$ is odd. Now use the path $L^G$. Next cover all vertices in $T_A$ so that they are the images of source vertices in $\mathcal{Q}_2$ using a path isomorphic to $(y_LP_2)$ whose initial vertex is $y_L^G$. This path has the form $Y*A(B*AT_AA)^{t_A}B*B$, where $Y:=B$ if $d_C(P,L)$ is even and $Y:=A$ if $d_C(P,L)$ is odd.%
\COMMENT{We use that the parity of the lengths of $(P_2x_L)$ and $(y_LP_2)$ is the same. Since $P_1$ and $P_2$ have even length we note that the parity of the lengths of $P_3$ and $P$ is the same. Finally, the vertices in $\mathcal{Q}_3$ have even distance to the initial vertex of $P_3$.}
Let $P_2^G$ denote the copy of $P_2$ obtained in this way. Finally, starting from the final vertex of $P_2^G$, find a copy of $P_3$ which covers all vertices in $S_A$ by sink vertices in $\mathcal{Q}_3$ and has the form $(B*AS_AA)^{s_A}B*B$ if $P$ (and thus also $P_3$) has even length and $(B*AS_AA)^{s_A}B*A$ if $P$ (and thus also $P_3$) has odd length. If $d_C(q_1,q_2)$ is odd, we find a copy of $P$ which covers $T_B$, $T_A$, $V(L^G)$, $S_B$, $S_A$ (in this order) in the same way.%
\COMMENT{Map the initial vertex of $P$ to $a_1$. Then greedily cover all vertices in $T_B$ so that they are the images of sink vertices in $\mathcal{Q}_1$ using a path $P_1^G$ which is isomorphic to $P_1$ and has the form $(A*BT_BB)^{t_B}A*A$.
Let $x_L$ be the initial vertex of $L$ and $y_L$ be the final vertex. Let $x_L^G$ and $y_L^G$ be the images of $x_L$ and $y_L$ in $L^G$. Cover all vertices in $T_A$ so that they are the images of source vertices in $\mathcal{Q}_2$ using a path isomorphic to $(P_2x_L)$ which starts from the final vertex of $P_1^G$ and ends at $x_L^G$. This path has the form $A(B*AT_AA)^{t_A}B*X$, where $X:=A$ if $d_C(P,L)$ is even and $X:=B$ if $d_C(P,L)$ is odd. Now use the path $L^G$. Next cover all vertices in $S_B$ so that they are the images of source vertices in $\mathcal{Q}_2$ using a path isomorphic to $(y_LP_2)$ whose initial vertex is $y_L^G$. This path has the form $Y*B(A*BS_BB)^{s_B}A*B$, where $Y:=B$ if $d_C(P,L)$ is even and $Y:=A$ if $d_C(P,L)$ is odd. Let $P_2^G$ denote the copy of $P_2$ obtained in this way. Finally, starting from the final vertex of $P_2^G$, find a copy of $P_3$ which covers all vertices in $S_A$ by sink vertices in $\mathcal{Q}_3$ and has the form $(B*AS_AA)^{s_A}B*B$ if $P$ has even length and $(B*AS_AA)^{s_A}B*A$ if $P$ has odd length.}
Observe that $P^G$ has $s_A+t_A+r_A$ repeated $A$s and $s_B+t_B+r_B$ repeated $B$s, as required.
\end{proof}
We are now in a position to find an exceptional cover. The proof splits into a number of cases and we will require the assumption that $C$ is not antidirected. We will need a matching found using Proposition~\ref{prop:balance} and a careful assignment of the remaining vertices in $S\cup T$ to sets $S_A, S_B, T_A$ and $T_B$ to ensure that the path found by Proposition~\ref{prop:sinksource} leaves a balanced number of vertices in $A$ and $B$ uncovered.
\begin{lemma}\label{lem:excover2}
Suppose $1/n \ll {\varepsilon}_3 \ll {\varepsilon}_4 \ll 1$.
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (Q\ref{Q*})--(Q\ref{Q8}). If $C$ is an oriented cycle on $n$ vertices, $C$ is not antidirected and $\sigma(C) \geq{\varepsilon}_4n$, then there is an exceptional cover $P$ of $G$ of length at most $2{\varepsilon}_4 n$.
\end{lemma}
\begin{proof}
Let $d:=b-a$, $k:=t-s$ and $r:=s+t$.
Since $\sigma(C) \geq {\varepsilon}_4n$, we can use an averaging argument to guarantee a subpath $Q'$ of $C$ of length at most ${\varepsilon}_4n$ such that $Q'$ contains at least $2\sqrt{{\varepsilon}_3}n$ sink vertices.%
\COMMENT{To see this, we partition $C$ into subpaths of length ${\varepsilon}_4 n$. The expected number of sink vertices in each subpath is at least
$({\varepsilon}_4n-2){\varepsilon}_4n/n\geq 2\sqrt{{\varepsilon}_3}n$.}
Let $Q$ be an initial subpath of $Q'$ which has odd length and contains $\sqrt{{\varepsilon}_3}n$ sink vertices.
\medskip
\noindent \textbf{Case 1: }\emph{$a<b$ or $s<t$.}
We will find disjoint sets of vertices $S_A,S_B, T_A, T_B$, of sizes $s_A, s_B, t_A, t_B$ respectively, and a matching $M'=E\cup E'$ (where $E$ and $E'$ are disjoint) such that the following hold:
\begin{enumerate}[(E1)]
\item $S_A\cup S_B=S$ and $T_A\cup T_B=T\setminus V(E')$;\label{E1}
\item $E\subseteq E(B)$, $|E|\leq d$;\label{E2a}
\item $E' \subseteq E(B\cup T, B) \cup E(A,A\cup T)$ and $1\leq |E'|\leq 2$;\label{E2b}
\item Writing $p:=|E'\cap E(B)|-|E'\cap E(A)|$, we have $s_A+t_A+d=s_B+t_B+p+|E|$.\label{E3}
\end{enumerate}
We find sets satisfying (E\ref{E1})--(E\ref{E3}) as follows.
Suppose first that $n$ is odd. Note that we can find a matching $M\subseteq E(B\cup T,B)$ of size $d+1$. Indeed, if $a<b$ then $M$ exists by Proposition~\ref{prop:balance} and if $a=b$, and so $s<t$, we use that $a+s<n/2$ and $\delta^0(G) \geq n/2$ to find $M$ of size $d+1=1$.
Fix one edge $e\in M$ and let $E':=\{e\}$. There are $r':=r-|V(E')\cap T|$ vertices in $S\cup T$ which are not covered by $E'$. Set $d':=\min\{r',d-p\}$ and let $E\subseteq(M\setminus E') \cap E(B)$ have size $d-p-d'$.%
\COMMENT{$M\setminus E'$ can contain at most $r'$ $TB$-edges, so must have at least $d-r'$ $B$-edges. Note that $d-p-d'=\max \{d-p-r', 0\}\leq d-r'$ as $p\geq 0$.}
Suppose that $n$ is even. If $a<b$, by Proposition~\ref{prop:balance}, we find a matching $M$ of size $d+2$ in $E(B\cup T, B)$. Fix two edges $e_1, e_2 \in M$ and let $E':=\{e_1, e_2\}$. Choose $r'$, $d'$ and $E$ as above.
If $n$ is even and $a=b$, then $a+s = b+s = (n-k)/2 \leq n/2-1$. So $d^+_{A\cup T}(x) \geq k/2$ for each $x\in A$ and $d^-_{B \cup T}(x)\geq k/2$ for each $x\in B$. Either we can find a matching $M$ of size two in $E(B\cup T, B) \cup E(A, A\cup T)$ or $t=s+2$ and there is a vertex $v \in T$ such that $A \subseteq N^-(v)$ and $B \subseteq N^+(v)$. In the latter case, move $v$ to $S$ to get a new partition satisfying (Q\ref{Q*})--(Q\ref{Q8}) and the conditions of Case~2. So we will assume that the former holds. Let $E':=M$, $E:=\emptyset$, $r':=r-|V(E')\cap T|$ and $d':=-p$.
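The degree bounds in the previous paragraph can be spelled out: for $x\in A$, all outneighbours of $x$ outside $A\cup T$ lie in $B\cup S$, so
$$d^+_{A\cup T}(x)\geq \frac{n}{2}-(b+s)=\frac{n}{2}-\frac{n-k}{2}=\frac{k}{2},$$
and the bound $d^-_{B\cup T}(x)\geq k/2$ for $x\in B$ follows symmetrically.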
In each of the above cases, note that $d'\equiv r' \mod 2$%
\COMMENT{since $r'=r+|E'\cap E(A)|+|E'\cap E(B)|-|E'|$ and $d\equiv r-|E'| \mod 2$.}
and $|d'|\leq r'$. So we can choose disjoint subsets $S_A, S_B, T_A, T_B$ satisfying (E\ref{E1}) such that $s_A+t_A=(r'-d')/2$ and $s_B+t_B= (r'+d')/2$. Then (E\ref{E3}) is also satisfied.%
\COMMENT{$s_B+t_B+p+|E|=(r'+d')/2+p+(d-p-d') = (r'-d')/2+d=s_A+t_A+d$.}
We construct an exceptional cover as follows.
Let $L_1$ denote the oriented path of length two whose second vertex is a sink and let $L_2$ denote the oriented path of length two whose second vertex is a source. For each $e\in E'$, we find a copy $L(e)$ of $L_1$ or $L_2$ covering $e$. If $e\in E(A)$ let $L(e)$ be a copy of $L_1$ of the form $AAB$, if $e\in E(B)$ let $L(e)$ be a copy of $L_1$ of the form $ABB$, if $e\in E(A,T)$ let $L(e)$ be a copy of $L_1$ of the form $ATB$ and if $e\in E(T,B)$ let $L(e)$ be a copy of $L_2$ of the form $ATB$. Note that for each $e\in E'$, the orientation of $L(e)$ is the same regardless of whether it is traversed from its initial vertex to final vertex or vice versa. This means that we can embed it either as an $AB$-path or a $BA$-path.
Let $a_1$ be any vertex in $A$ and let $e_1\in E'$. Let $r_A$ and $r_B$ be the number of repeated $A$s and $B$s, respectively, in $L(e_1)$. So $r_A=1$ if and only if $e_1\in E(A)$, otherwise $r_A=0$. Also, $r_B=1$ if and only if $e_1\in E(B)$, otherwise $r_B=0$.
Consider a useful tripartition $P_1, P_2, P_3$ of $Q$. Let $L\subseteq P_2$ be a link which is isomorphic to $L(e_1)$. Let $x$ denote the final vertex of $Q$. Using Proposition~\ref{prop:sinksource} (with $2{\varepsilon}_3, {\varepsilon}_4, 1/4$ playing the roles of ${\varepsilon}, \eta, \tau$), we find a copy $Q^G$ of $Q$ covering $S_A, S_B, T_A, T_B$ whose initial vertex is $a_1$. Moreover, $L(e_1)\subseteq Q^G \subseteq G_{\{e_1\}} \subseteq G_M$, the final vertex $x^G$ of $Q^G$ lies in $A$, $Q^G$ has $s_A+t_A+r_A$ repeated $A$s and $s_B+t_B+r_B$ repeated $B$s. If $|E'|=2$, let $e_2\in E'\setminus\{e_1\}$. Let $Q'':=(xQ')$. Let $y$ be the second source vertex in $Q''$ if $e_2\in E(T,B)$ and the second sink vertex in $Q''$ otherwise. Let $y^-$ be the vertex preceding $y$ on $C$, let $y^+$ be the vertex following $y$ on $C$ and let $q:=d_C(x,y^-)$. Find a path in $G$ whose initial vertex is $x^G$ which is isomorphic to $(Q''y^-)$ and is of the form $A*A$ if $q$ is even and $A*B$ if $q$ is odd, such that the final vertex of this path is an endvertex of $L(e_2)$. Then use the path $L(e_2)$ itself. Let $Z:=B$ if $q$ is even and $Z:=A$ if $q$ is odd. Finally, extend the path to cover all edges in $E$ using a path of the form $Z*B(A*ABB)^{|E|}A$ which is isomorphic to an initial segment of $(y^+Q'')$. Let $P$ denote the resulting extended subpath of $C$, so $Q\subseteq P \subseteq Q'$. Let $P^G$ be the copy of $P$ in $G_M$.
Note that (EC\ref{EC1}) and (EC\ref{EC2}) hold. Each repeated $A$ in $P^G$ is either a repeated $A$ in $Q^G$ or it occurs when $P^G$ uses $L(e_2)$ in the case when $e_2\in E(A)$. Similarly, each repeated $B$ in $P^G$ is either a repeated $B$ in $Q^G$ or it occurs when $P^G$ uses $L(e_2)$ in the case when $e_2\in E(B)$ or when $P^G$ uses an edge in $E$. Substituting into (\ref{eqn:repeats}) and recalling (E\ref{E3}) gives
\begin{align*}
|B \setminus V(P^G)|-|A\setminus V(P^G)| =& b-a-(s_B+t_B+|E|+|E'\cap E(B)|)+ (s_A+t_A+|E'\cap E(A)|)+1\\
=&d -(s_B+t_B+|E|)-p+(s_A+t_A)+1=1.
\end{align*}
So (EC\ref{EC3}) is satisfied and $P^G$ is an exceptional cover.
\medskip
\noindent \textbf{Case 2: } \emph{$a=b$ and $s=t$.}
If $s=t=0$ then any path consisting of one vertex in $A$ is an exceptional cover. So we will assume that $s,t\geq 1$.
We say that $C$ is \emph{close to antidirected} if it contains an antidirected subpath of length $500{\varepsilon}_3n$.
\medskip
\noindent \textbf{Case 2.1: }\emph{$C$ is close to antidirected.}
If there is an edge $e\in E(T,B)\cup E(B, S) \cup E(S, A) \cup E(A,T)$ then we are able to find an exceptional cover in the graph $G_{\{e\}}$. We illustrate how to do this when $e=t_1b_1\in E(T,B)$; the other cases are similar.%
\COMMENT{If $e=a_1t_1\in E(A,T)$ then the proof is almost identical, only difference being that the link is a copy of $L_1$ (oriented path of length two whose second vertex is a sink).\\
Suppose $e=s_1a_1\in E(S,A)$. Let $t_1\in T$. If initial edge of $P$ is forward, let $P'$ consist of first two edges of $P$ and let $(P')^G$ be a forward path of the form $B\{t_1\}A$. If initial edge is backward, let $P'$ be the first three edges of $P$ and find a copy $(P')^G$ of $P'$ of the form $A\{t_1\}BA$. Let $P'':=P\setminus P''$. Find a copy $L^G$ of $L_2$ which is of the form $ASB$ and uses $e$. Let $L\subseteq P_2$ be a link isomorphic to $L_1$. Set $S_A:=S\setminus\{s_1\}$, $T_B:=T \setminus\{t_1\}$ and $S_B, T_A:=\emptyset$. Apply Proposition~\ref{prop:sinksource}, as above. If $e=b_1s_1\in E(B,S)$, then the proof is nearly identical, uses copy of $L_1$ as the link instead.}
Since $C$ is close to but not antidirected, it follows that $C$ contains a path $P$ of length $500{\varepsilon}_3n$ which is antidirected except for the initial two edges which are oriented consistently. Let $s_1 \in S$. If the initial edge of $P$ is a forward edge, let $P'$ be the subpath of $P$ consisting of the first three edges of $P$ and find a copy $(P')^G$ of $P'$ in $G$ of the form $A\{s_1\}BA$. If the initial edge of $P$ is a backward edge, let $P'$ consist of the first two edges of $P$ and let $(P')^G$ be a backward path of the form $B\{s_1\}A$. Let $P''$ be the subpath of $P$ formed by removing from $P$ all edges in $P'$. Let $x^G\in A$ be the final vertex of $(P')^G$. Set $S_A:=S\setminus\{s_1\}$, $T_B:=T \setminus\{t_1\}$ and $S_B, T_A:=\emptyset$. Let $P_1,P_2, P_3$ be a useful tripartition of $P''$. As in Case~1, let $L_2$ denote the oriented path of length two whose second vertex is a source. Let $L\subseteq P_2$ be a link which is isomorphic to $L_2$ and map $L$ to a path $L^G$ of the form $BTA$ which uses the edge $t_1b_1$. We use Proposition~\ref{prop:sinksource} to find a copy $(P'')^G$ of $P''$ which uses $L^G$, covers $S_A\cup T_B$ and whose initial vertex is mapped to $x^G$. Moreover, the final vertex of $P''$ is mapped to $A\cup B$ and $(P'')^G$ has $s_A=s-1$ repeated $A$s and $t_B=t-1$ repeated $B$s. Let $P^G$ be the path $(P')^G\cup (P'')^G$. Then $P^G$ satisfies (EC\ref{EC1}) and we may assume that (EC\ref{EC2}) holds, by adding a vertex in $A$ as a new initial vertex and/or final vertex if necessary. The repeated $A$s and $B$s in $P^G$ are precisely the repeated $A$s and $B$s in $(P'')^G$. Therefore, \eqref{eqn:repeats} implies that (EC\ref{EC3}) holds and $P^G$ forms an exceptional cover.
Let us suppose then that $E(T,B)\cup E(B, S)\cup E(S, A) \cup E(A,T)$ is empty. If $S=\{s_1\}, T=\{t_1\}$ then, since $\delta^0(G) \geq n/2$, $G$ must contain the edge $s_1t_1$ and edges $a_1s_1,b_1t_1$ for some $a_1\in A, b_1\in B$. Since $C$ is not antidirected but has many sink vertices we may assume that $C$ contains a subpath $P=(uvxyz)$ where $uv, vx, yx\in E(C)$. We use the edges $a_1s_1, s_1t_1, b_1t_1$, as well as an additional $AB$- or $BA$-edge, to find a copy $P^G$ of $P$ in $G$ of the form $ASTBA$. The path $P^G$ forms an exceptional cover.
If $s=t=2$ and $e(S)=e(T)=2$, we find an exceptional cover as follows. Write $S=\{s_1, s_2\}$, $T=\{t_1,t_2\}$. We have that $s_is_j, t_it_j \in E(G)$ for all $i\neq j$. Note that $C$ is not antidirected, so $C$ must contain a path $P$ of length six which is antidirected except for its initial two edges, which are consistently oriented. Suppose first that the initial two edges of $P$ are forward edges. Let $a_1\in A$ be an inneighbour of $s_1$. Note that $s_2$ has an inneighbour in $T$, without loss of generality $t_1$. Let $b_1\in B$ be an inneighbour of $t_2$ and $a_2\in A$ be an outneighbour of $b_1$. We find a copy $P^G$ of $P$ which has the form $ASSTTBA$ and uses the edges $a_1s_1, s_1s_2, t_1s_2, t_1t_2, b_1t_2, b_1a_2$, in this order. If the initial two edges of $P$ are backward, we instead find a path of the form $ATTSSBA$. Note that in both cases, $P^G$ satisfies (EC\ref{EC1}) and (EC\ref{EC2}). $P^G$ has no repeated $A$s or $B$s and \eqref{eqn:repeats} implies that (EC\ref{EC3}) holds. So $P^G$ forms an exceptional cover.
So let us assume that $s,t\geq 2$ and, additionally, $e(S)+e(T)<4$ if $s=2$. There must exist two disjoint edges $e_1=t_1s_1$, $e_2=s_2t_2$ where $s_1,s_2 \in S$ and $t_1, t_2 \in T$ (since $\delta^0(G)\geq n/2$ and $E(T,B)\cup E(B, S)\cup E(S, A) \cup E(A,T)=\emptyset$).%
\COMMENT{If $s=t=2$, without loss of generality, $e(S)<2$. So we can write $S=\{s_1, s_2\}$ such that $s_1s_2\not\in E(G)$. Note that $s_1$ must have at least two outneighbours in $T$. Since $s_2$ must have at least one inneighbour in $T$, we can find the desired edges. If $s,t\geq 3$, we note that each vertex in $S$ has at least one outneighbour in $T$ and each vertex in $T$ has at least one inneighbour in $S$. By K\"onig's theorem, we find two disjoint $ST$-edges $e_1'$ and $e_2'$. Choose any $s_1\in S$ which does not lie on $e_1'$ or $e_2'$. Note that $s_1$ has an inneighbour in $T$ and this vertex can lie on at most one of $e_1',e_2'$. Hence we find the desired edges.}
We use these edges to find an exceptional cover as follows. We let $S_A:= S\setminus\{s_1,s_2\}$, $T_B:=T\setminus\{t_1, t_2\}$, $s_A:=|S_A|$ and $t_B:=|T_B|$. We use $e_1$ and $e_2$ to find an antidirected path $P^G$ which starts with a backward edge and is of the form
$$A\{t_1\}\{s_1\}A(B*AS_AA)^{s_A}B*B\{s_2\}\{t_2\}B(A*BT_BB)^{t_B}A.$$ The length of $P^G$ is less than $500{\varepsilon}_3n$. So, as $C$ is close to antidirected, $C$ must contain a subpath isomorphic to $P^G$. We claim that $P^G$ is an exceptional cover. Clearly, $P^G$ satisfies (EC\ref{EC1}) and (EC\ref{EC2}). For (EC\ref{EC3}), note that $P^G$ contains an equal number of repeated $A$s and repeated $B$s and that $a=b$ in this case. Then (\ref{eqn:repeats}) implies that $|B\setminus V(P^G)|-|A\setminus V(P^G)|=1$, as required.
\medskip
\noindent \textbf{Case 2.2: }\emph{$C$ is far from antidirected.}
Recall that $Q$ is a subpath of $C$ of length at most ${\varepsilon}_4n$ containing at least $\sqrt{{\varepsilon}_3}n$ sink vertices. Let $\mathcal{Q}$ be a maximum collection of sink vertices in $Q$ such that all vertices in $\mathcal{Q}$ are an even distance apart; then $|\mathcal{Q}|\geq \sqrt{{\varepsilon}_3}n/2$. Partition the path $Q$ into $11$ internally disjoint subpaths so that $Q=(P_1P_1'P_2P_2'\dots P_5P_5'P_6)$ and each subpath contains at least $300{\varepsilon}_3n$ elements of $\mathcal{Q}$. Note that each $P_i'$ has length greater than $500{\varepsilon}_3n$ and so is not antidirected; that is, each $P_i'$ must contain a consistently oriented subpath $P_i''$ of length two. At least three of the $P_i''$ must form a consistent set. Thus there must exist $i<j$ such that $d_C(P_i'', P_j'')$ is even and $\{P_i'',P_j''\}$ is consistent. We may assume, without loss of generality, that $P_i'', P_j''$ are forward paths and that the second vertex of $P_i$ is in $\mathcal{Q}$. Let $P$ be the subpath of $Q$ whose initial vertex is the initial vertex of $P_i$ and whose final vertex is the final vertex of $P_j''$.
We will find an exceptional cover isomorphic to $P$ as follows. Choose $s_1\in S$ and $t_1 \in T$ arbitrarily. Set $S_A:=S\setminus \{s_1\}$ and $T_B:=T\setminus \{t_1\}$. Map the initial vertex of $P$ to $A$. We find a copy of $P$ which maps each vertex in $S_A$ to a sink vertex in $P_i$ and each vertex in $T_B$ to a sink vertex in $P_j$. If $d_C(P_i, P_i'')$ is even, $P_i''$ is mapped to a path $L'$ of the form $A\{s_1\}B$ and $P_j''$ is mapped to a path $L''$ of the form $B\{t_1\}A$. If $d_C(P_i, P_i'')$ is odd, $P_i''$ is mapped to a path $L'$ of the form $B\{t_1\}A$ and $P_j''$ is mapped to a path $L''$ of the form $A\{s_1\}B$. Thus, if $d_C(P_i, P_i'')$ is even, we obtain a copy $P^G$ which starts with a path of the form $A(B*AS_AA)^{s_A}B*A$, then uses $L'$ and continues with a path of the form $B*B(A*BT_BB)^{t_B}A*B$. Finally, the path uses $L''$. The case when $d_C(P_i, P_i'')$ is odd is similar.%
\COMMENT{$d_C(P_i, P_i'')$ is odd: we obtain a copy $P^G$ which starts with a path of the form $A(B*AS_AA)^{s_A}B*B$, then uses $L'$ and continues with a path of the form $A*B(A*BT_BB)^{t_B}A*A$. Finally, the path uses $L''$.}
(EC\ref{EC1}) holds and we may assume that (EC\ref{EC2}) holds by adding one vertex to $P$ if necessary. Note that $P^G$ contains an equal number of repeated $A$s and $B$s, so (\ref{eqn:repeats}) implies that (EC\ref{EC3}) holds and $P^G$ is an exceptional cover.
\end{proof}
\subsection{Finding a copy of $C$}
Proposition~\ref{prop:excover1} and Lemma~\ref{lem:excover2} allow us to find a short exceptional cover for any cycle which is not antidirected. We complete the proof of Lemma~\ref{lem:AB} by extending this path to cover the small number of vertices of low degree remaining in $A$ and $B$ and then applying Proposition~\ref{prop:completepath}.
\begin{proofof}\textbf{Lemma~\ref{lem:AB}.}
Let $P$ be an exceptional cover of $G$ of length at most $21{\varepsilon}_4n$, guaranteed by Proposition~\ref{prop:excover1} or Lemma~\ref{lem:excover2}. Let
\begin{align*}
X:= \{v\in A : d^+_B(v)< n/2-{\varepsilon}_3 n \text{ or } d^-_B(v) < n/2-{\varepsilon}_3 n\} &\text{ and}\\
Y:= \{v\in B : d^+_A(v)< n/2-{\varepsilon}_3 n \text{ or } d^-_A(v) < n/2-{\varepsilon}_3 n\}&.
\end{align*}
(Q\ref{Q3}) and (Q\ref{Q4}) together imply that $|X\cup Y|\leq 2{\varepsilon}_3 n$. Together with (Q\ref{Q2}), this allows us to cover the vertices in $X\cup Y$ by any orientation of a path of length at most ${\varepsilon}_4 n$. So we can extend $P$ to cover the remaining vertices in $X\cup Y$ (by a path which alternates between $A$ and $B$). Let $P'$ denote this extended path. Thus $|P'| \leq 22{\varepsilon}_4n$. Let $x$ and $y$ be the endvertices of $P'$. We may assume that $x, y \in A\setminus X$. Let $A':=(A\setminus V(P'))\cup \{x,y\}$ and $B':=B\setminus V(P')$ and consider $G':=G[A',B']$. Note that $|A'|=|B'|+1$ by (EC\ref{EC3}) and
$$\delta^0(G') \geq n/2-{\varepsilon}_3 n-22{\varepsilon}_4n \geq (7|B'|+2)/8.$$
Thus, by Proposition~\ref{prop:completepath}(ii), $G'$ contains a Hamilton path of any orientation between $x$ and $y$. We combine this path with $P'$ to obtain a copy of $C$.
\end{proofof}
\section{$G$ is $ABST$-extremal}\label{sec:ABST}
In this section we prove that Theorem~\ref{thm:main} holds for all $ABST$-extremal graphs. When $G$ is $ABST$-extremal, the sets $A$, $B$, $S$ and $T$ are all of significant size; $G[S]$ and $G[T]$ look like cliques and $G[A,B]$ resembles a complete bipartite graph. The proof will combine ideas from Sections~\ref{sec:ST}~and~\ref{sec:AB}.
\begin{lemma}\label{lem:ABST}
Suppose that $1/n \ll {\varepsilon} \ll {\varepsilon}_1 \ll \eta_1 \ll \tau \ll 1.$
Let $G$ be a digraph on $n$ vertices with $\delta^0(G)\geq n/2$ and assume that $G$ is $ABST$-extremal. If $C$ is any orientation of a cycle on $n$ vertices which is not antidirected, then $G$ contains a copy of $C$.
\end{lemma}
We will again split the proof into two cases, depending on how many changes of direction $C$ contains. In both cases, the first step is to find an exceptional cover (defined in Section~\ref{sec:AB}) which uses only a small number of vertices from $A\cup B$.
\subsection{Finding an exceptional cover when $C$ has few sink vertices, $\sigma(C) <{\varepsilon}_2n$}\label{sec:ABST1}
The following lemma allows us to find an exceptional cover when $C$ is close to being consistently oriented. The two main components of the exceptional cover are a path $P_S\subseteq G[S]$ covering most of the vertices in $S$ and another path $P_T\subseteq G[T]$ covering most of the vertices in $T$. We are able to find $P_S$ and $P_T$ because $G[S]$ and $G[T]$ are almost complete. These are followed by a shorter path which uses long runs (recall that a long run is a consistently oriented path of length $20$) and a small number of vertices from $A\cup B$ to cover any remaining vertices in $S\cup T$. We use edges found by Proposition~\ref{prop:dedges} to control the number of repeated~$A$s and $B$s on this path.
\begin{lemma}\label{lem:ABST1}
Suppose $1/n \ll {\varepsilon} \ll {\varepsilon}_1 \ll {\varepsilon}_2\ll \eta_1 \ll \tau \ll 1$.
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (R\ref{R*})--(R\ref{R9}). Let $C$ be an oriented cycle on $n$ vertices. If $\sigma(C) <{\varepsilon}_2n$, then $G$ has an exceptional cover $P$ such that $|V(P)\cap(A\cup B)|\leq 2\eta_1^2 n$.
\end{lemma}
\begin{proof}
Let $s^*:=s-\lceil{\varepsilon}_2 n\rceil$ and $d:=b-a$. Define $S'\subseteq S$ to consist of all vertices $x\in S$ with $d^+_{B\cup S}(x) \geq b+s-{\varepsilon}^{1/3}n$ and $d^-_{A\cup S}(x)\geq a+s-{\varepsilon}^{1/3}n$. Define $T'\subseteq T$ similarly. Note that $|S\setminus S'|, |T\setminus T'| \leq {\varepsilon}_1n$ by (R\ref{R8}) and (R\ref{R9}).
We may assume that the vertices of $C$ are labelled so that the number of forward edges is at least the number of backward edges. Let $Q\subseteq C$ be a forward path of length two; this exists since $\sigma(C)<{\varepsilon}_2 n$. If $C$ is not consistently oriented, we may assume that $Q$ is immediately followed by a backward edge. Define $e_1, e_2, e_3\in E(C)$ such that $d_C(e_1,Q)=s^*$, $d_C(Q, e_2)=s^*+1$, $d_C(Q,e_3)=2$. Let $P_0:=(e_1Ce_2)$.
If at least one of $e_1, e_2$ is a forward edge, define paths $P_T$ and $P_S$ of order $s^*$ so that $P_0=(e_1P_TQP_Se_2)$. In this case, map $Q$ to a path $Q^G$ in $G$ of the form $T'AS'$. If $e_1$ and $e_2$ are both backward edges, our choice of $Q$ implies that $e_3$ is also a backward edge. Let $P_T$ and $P_S$ be defined so that $P_0=(e_1P_TQe_3P_Se_2)$. So $|P_T|=s^*$ and $|P_S|=s^*-1$. In this case, map $(Qe_3)$ to a path $Q^G$ of the form $T'ABS'$.
Let $p_T:=|P_T|$ and $p_S:=|P_S|$. Our aim is to find a copy $P_0^G$ of $P_0$ which maps $P_S$ to $G[S]$ and $P_T$ to $G[T]$. We will find $P_0^G$ of the form $F$ as given in Table~\ref{table1}.
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|}
\hline
$e_1$& forward & forward & backward & backward\\
$e_2$& forward & backward & forward & backward\\
\hline
\rule{0pt}{11pt} $F$ & $B(T)^{p_T}A(S)^{p_S}B$ & $B(T)^{p_T}A(S)^{p_S}A$ & $A(T)^{p_T}A(S)^{p_S}B$ & $A(T)^{p_T}AB(S)^{p_S}A$\\
\hline
\end{tabular}
\caption{Proof of Lemma~\ref{lem:ABST1}: $P_0^G$ has form $F$.}\label{table1}
\end{table}
Let $M$ be a set of $d+1$ edges in $E(T,B \cup S) \cup E(B, S)$ guaranteed by Proposition~\ref{prop:dedges}. We also define a subset $M'$ of $M$ which we will use to extend $P_0^G$ to an exceptional cover. If $e_1, e_2$ are both forward edges, choose $M' \subseteq M$ of size $d$. Otherwise let $M':=M$. Let $d':=|M'|$. Let $M_1'$ be the set of all edges in $M'$ which are disjoint from all other edges in $M'$ and let $d_1':=|M_1'|$. So $M'\setminus M_1'$ consists of $(d'-d_1')/2=:d_2'$ disjoint consistently oriented paths of the form $TBS$.
We now fix copies $e_1^G$ and $e_2^G$ of $e_1$ and $e_2$. If $e_1$ is a forward edge, let $e_1^G$ be a $BT'$-edge, otherwise let $e_1^G$ be a $T'A$-edge. If $e_2$ is a forward edge, let $e_2^G$ be a $S'B$-edge, otherwise let $e_2^G$ be an $AS'$-edge. Let $t_1$ be the endpoint of $e_1^G$ in $T'$, $s_2$ be the endpoint of $e_2^G$ in $S'$ and let $t_2\in T'$ and $s_1\in S'$ be the endpoints of $Q^G$. Let $v$ be the final vertex of $e_2^G$ and let $X\in \{A,B\}$ be such that $v\in X$.
We now use (R\ref{R4}), (R\ref{R5}), (R\ref{R8}) and (R\ref{R9}) to find a collection $\mathcal{P}$ of at most $3{\varepsilon}_1n+1$ disjoint, consistently oriented paths which cover the edges in $M'$ and the vertices in $S\setminus S'$ and $T\setminus T'$. $\mathcal{P}$ uses each edge $e\in M_1'$ in a forward path $P_e$ of the form $B(S\cup T)^jB$ for some $1\leq j\leq 4$ and $\mathcal{P}$ uses each path in $M'\setminus M_1'$ in a forward path of the form $BT^jBS^{j'}B$ for some $1\leq j,j'\leq 4$. The remaining vertices in $S\setminus S'$, $T\setminus T'$ are covered by forward paths in $\mathcal{P}$ of the form $A(S)^jB$ or $B(T)^jA$, for some $1\leq j \leq 3$.
Let $S''\subseteq S \setminus (V(\mathcal{P})\cup \{s_1, s_2\})$ and $T''\subseteq T \setminus (V(\mathcal{P})\cup \{t_1, t_2\})$ be sets of size at most $2{\varepsilon}_2n$ so that $|S''|+p_S=|S\setminus V(\mathcal{P})|$ and $|T''|+p_T=|T\setminus V(\mathcal{P})|$. Note that $S''\subseteq S'$ and $T''\subseteq T'$. So we can cover the vertices in $S''$ by forward paths of the form $ASB$ and we can cover the vertices in $T''$ by forward paths of the form $BTA$. Let $\mathcal{P}'$ be a collection of disjoint paths thus obtained. Let $P_1$ be the subpath of order $\eta_1^2n$ following $P_0$ on $C$. Note that $P_1$ contains at least $\sqrt{{\varepsilon}_2}n$ disjoint long runs. Each path in $\mathcal{P}\cup \mathcal{P}'$ will be contained in the image of such a long run. (Each forward path in $\mathcal{P}\cup \mathcal{P}'$ might be traversed by $P_1^G$ in a forward or backward direction, for example, a forward path of the form $BT^jBS^{j'}B$ could appear in $P_1^G$ as a forward path of the form $BT^jBS^{j'}B$ or a backward path of the form $BS^{j'}BT^jB$.) So we can find a copy $P_1^G$ of $P_1$ starting from $v$ which uses $\mathcal{P}\cup \mathcal{P}'$ and has the form
$$X*AX_1X_2\dots X_{d_1'}Y_1Y_2\dots Y_{d_2'}Z_1Z_2\dots Z_\ell B*Y$$ for some $\ell\geq 0$ and $Y\in \{A,B\}$, where
\begin{align*}
X_i&\in \{B(S\cup T)^jB*A: 1\leq j\leq 4\},\\
Y_i&\in \{B(S\cup T)^jB(S\cup T)^{j'}B*A: 1\leq j,j'\leq 4\}\hspace{6pt}\text{ and}\\
Z_i&\in\{BA(S\cup T)^jB*A, B(S\cup T)^jA*A:1\leq j\leq 3\}.
\end{align*}
Let $S^*$ be the set of uncovered vertices in $S$ together with the vertices $s_1, s_2$ and let $T^*$ be the set of uncovered vertices in $T$ together with $t_1$ and $t_2$. Write $G_S:=G[S^*]$ and $G_T:=G[T^*]$. Now
$\delta^0(G_T)\geq t-\sqrt{{\varepsilon}_2}n \geq 7|G_T|/8$ and so $G_T$ has a Hamilton path from $t_1$ to $t_2$ which is isomorphic to $P_T$, by Proposition~\ref{prop:completepath}(i). Similarly, we find a path isomorphic to $P_S$ from $s_1$ to $s_2$ in $G_S$. Altogether, this gives us the desired copy $P_0^G$ of $P_0$ in $G$. Let $P^G:=P_0^GP_1^G$.
We now check that $P^G$ forms an exceptional cover. Clearly (EC\ref{EC1}) holds and we may assume that $P^G$ has both endvertices in $A$ (by extending the path if necessary) so that (EC\ref{EC2}) is also satisfied. For (EC\ref{EC3}), observe that $P_1^G$ contains exactly $d_1'+2d_2'=d'$ repeated $B$s; these occur in the subpath of the form $X_1X_2\dots X_{d_1'}Y_1Y_2\dots Y_{d_2'}$ covering the edges in $M'$. If $e_1$ and $e_2$ are both forward edges, then, consulting Table~\ref{table1}, we see that $P_0^G$ has no repeated $A$s and that there are no other repeated $A$s or $B$s in $P^G$. Recall that in this case $d'=d$, so \eqref{eqn:repeats} gives $|B\setminus V(P^G)|-|A\setminus V(P^G)| = d-d'+1=1.$ If at least one of $e_1, e_2$ is a backward edge, using Table~\ref{table1}, we see that there is one repeated $A$ in $P_0^G$ and there are no other repeated $A$s or $B$s in $P^G$. In this case, we have $d'=d+1$, so \eqref{eqn:repeats} gives $|B\setminus V(P^G)|-|A\setminus V(P^G)|=d-d'+1+1=1$. Hence $P^G$ satisfies (EC\ref{EC3}) and forms an exceptional cover. Furthermore, $|V(P^G)\cap (A\cup B)| \leq 2\eta_1^2 n$.
\end{proof}
\subsection{Finding an exceptional cover when $C$ has many sink vertices, $\sigma(C) \geq {\varepsilon}_2n$}
In Lemma~\ref{lem:ABST2}, we find an exceptional cover when $C$ contains many sink vertices. The proof will use the following result which allows us to find short $AB$- and $BA$-paths of even length. We will say that an $AB$- or $BA$-path $P$ in $G$ is \emph{useful} if it has no repeated $A$s or $B$s and uses an odd number of vertices from $S\cup T$.
\begin{prop}\label{prop:ABST2links}
Suppose $1/n \ll {\varepsilon} \ll {\varepsilon}_1\ll \eta_1 \ll \tau \ll 1.$
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (R\ref{R*})--(R\ref{R9}). Let $L_1$ and $L_2$ be oriented paths of length eight. Then $G$ contains disjoint copies $L_1^G$ and $L_2^G$ of $L_1$ and $L_2$ such that each $L_i^G$ is a useful path. Furthermore, we can specify whether $L_i^G$ is an $AB$-path or a $BA$-path.
\end{prop}
\begin{proof}
Define $S'\subseteq S$ to be the set consisting of all vertices $x\in S$ with $d^\pm_{S}(x)\geq \eta_1 n/2$. Define $T'\subseteq T$ similarly. Note that $|S\setminus S'|, |T\setminus T'|\leq {\varepsilon}_1n$ by (R\ref{R8}) and (R\ref{R9}). We claim that $G$ contains disjoint edges $e,f\in E(B\cup T,S')\cup E(A\cup S, T')$. Indeed, if $a+s<n/2$ it is easy to find disjoint $e,f\in E(B\cup T,S')$, since $\delta^0(G)\geq n/2$. Otherwise, we must have $a+s=b+t=n/2$ and so each vertex in $S'$ has at least one inneighbour in $B\cup T$ and each vertex in $T'$ has at least one inneighbour in $A\cup S$. Let $G'$ be the bipartite digraph with vertex classes $A\cup S$ and $B\cup T$ and all edges in $E(B\cup T,S')\cup E(A\cup S, T')$. The claim follows from applying K\"onig's theorem to the underlying undirected graph of $G'$.
We demonstrate how to find a copy $L_1^G$ of $L_1$ in $G$ which is an $AB$-path. The argument when $L_1^G$ is a $BA$-path is very similar.%
\COMMENT{For the $BA$-path case, relabel the vertex classes $(A,B,S,T)$ by $(B,A,T,S)$. Note that $G$ with this new labelling satisfies (R\ref{R1})--(R\ref{R9}) so we can find $L_1^G$ as above.}
$L_1^G$ will have the form $A*B(T)^i(S)^j(T)^kA*B$ or $A*A(T)^i(S)^j(T)^kB*B$, for some $i,j,k\geq 0$ such that $i+j+k$ is odd. Note then that $L_1^G$ will have no repeated $A$s or $B$s.
First suppose that $L_1$ is not antidirected, so $L_1$ has a consistently oriented subpath $L'$ of length two. We will find a copy of $L_1$, using (R\ref{R8})--(R\ref{R9}) to map $L'$ to a forward path of the form $ASB$ or $BTA$ or a backward path of the form $BSA$ or $ATB$. More precisely, if $L'$ is a forward path, let $L_1^G$ be a path of the form $A*ASB*B$ if $d_C(L_1, L')$ is even and a path of the form $A*BTA*B$ if $d_C(L_1, L')$ is odd. If $L'$ is backward, let $L_1^G$ be a path of the form $A*ATB*B$ if $d_C(L_1, L')$ is even and a path of the form $A*BSA*B$ if $d_C(L_1, L')$ is odd.
Suppose now that $L_1$ is antidirected. We will find a copy $L_1^G$ of $L_1$ which contains $e$. If $e\in E(B,S')$, we use (R\ref{R8}) and the definition of $S'$ to find a copy of $L_1$ of the following form. If the initial edge of $L_1$ is a forward edge, we find $L_1^G$ of the form $A(S)^3B*B$. If the initial edge is a backward edge, we find $L_1^G$ of the form $AB(S)^3A*B$. If $e\in E(A,T')$ we will use (R\ref{R9}) and the definition of $T'$ to find a copy of $L_1$ of the following form. If the initial edge of $L_1$ is a forward edge, we find $L_1^G$ of the form $A(T)^3B*B$. If the initial edge is a backward edge, we find $L_1^G$ of the form $AB(T)^3A*B$.
If $L_1$ is antidirected and $e\in E(T,S')$, we will use (R\ref{R3}), (R\ref{R5}), (R\ref{R8}), (R\ref{R9}) and the definition of $S'$ to find a copy of $L_1$ containing $e$. If the initial edge of $L_1$ is a forward edge, find $L_1^G$ of the form $AB(S)^2(T)^{2h-1}A*B$, where $1\leq h\leq 2$. If the initial edge is a backward edge, find $L_1^G$ of the form $A(T)^{2h-1}(S)^2B*B$, where $1\leq h\leq 2$. Finally, we consider the case when $e\in E(S,T')$. If the initial edge of $L_1$ is a forward edge, we find $L_1^G$ of the form $AB(S)^{2h-1}(T)^2A*B$, where $1\leq h\leq 2$. If the initial edge of $L_1$ is a backward edge, we find $L_1^G$ of the form $A(T)^2(S)^{2h-1}B*B$, where $1\leq h\leq 2$.
We find a copy $L_2^G$ of $L_2$ (which is disjoint from $L_1^G$) in the same way, using the edge $f$ if $L_2$ is an antidirected path.
\end{proof}
As in the case when there were few sink vertices, we will map long paths to $G[S]$ and $G[T]$. It will require considerable work to choose these paths so that $G$ contains edges which can be used to link these paths together and so that we are able to cover the remaining vertices in $S\cup T$ using sink and source vertices in a ``balanced'' way. In many ways, the proof is similar to the proof of Lemma~\ref{lem:excover2}. In particular, we will use Proposition~\ref{prop:sinksource} to map sink and source vertices to some vertices in $S\cup T$.
\begin{lemma}\label{lem:ABST2}
Suppose $1/n \ll {\varepsilon} \ll {\varepsilon}_1 \ll {\varepsilon}_2\ll \eta_1 \ll \tau \ll 1.$
Let $G$ be a digraph on $n$ vertices with $\delta^0(G) \geq n/2$. Suppose $A, B, S, T$ is a partition of $V(G)$ satisfying (R\ref{R*})--(R\ref{R9}). Let $C$ be an oriented cycle on $n$ vertices which is not antidirected. If $\sigma(C) \geq {\varepsilon}_2n$, then $G$ has an exceptional cover $P$ such that $|V(P)\cap(A\cup B)|\leq 5{\varepsilon}_2n$.
\end{lemma}
\begin{proof}
Let $d:=b-a$. Define $S'\subseteq S$ to be the set consisting of all vertices $x\in S$ with $d^\pm_{S}(x)\geq \eta_1 n/2$ and define $T'\subseteq T$ similarly. Let $S'':=S\setminus S'$ and $T'':=T\setminus T'$. Note that $|S''|, |T''| \leq {\varepsilon}_1n$ by (R\ref{R8}) and (R\ref{R9}). By (R\ref{R4}), all vertices $x\in S''$ satisfy $d^-_A(x)\geq \eta_1 n/2$ or $d^+_B(x) \geq \eta_1 n/2$ and, by (R\ref{R5}), all $x\in T''$ satisfy $d^+_A(x)\geq \eta_1 n/2$ or $d^-_B(x) \geq \eta_1 n/2$. In our proof below, we will find disjoint sets $S_A, S_B \subseteq S$ and $T_A, T_B \subseteq T$ of suitable size such that
\begin{align}
&d^-_A(x)\geq \eta_1 n/2 \text{ for all } x\in S_A\;\text{ and }\;
d^+_B(x)\geq \eta_1 n/2 \text{ for all } x\in S_B;\label{eq:condition1}\\
&d^-_B(x)\geq \eta_1 n/2 \text{ for all } x\in T_B \;\text{ and }\;
d^+_A(x)\geq \eta_1 n/2 \text{ for all } x\in T_A.\label{eq:condition2}
\end{align}
Note that (R\ref{R8}) implies that all but at most ${\varepsilon}_1n$ vertices from $S$ could be added to $S_A$ or $S_B$ and satisfy the conditions of \eqref{eq:condition1}. Similarly, (R\ref{R9}) implies that all but at most ${\varepsilon}_1n$ vertices in $T$ are potential candidates for adding to $T_A$ or $T_B$ so as to satisfy \eqref{eq:condition2}. We will write $s_A:=|S_A|$, $s_B:=|S_B|$, $t_A:=|T_A|$ and $t_B:=|T_B|$.
Let $s^*:=s-\lceil\sqrt{{\varepsilon}_1} n\rceil$ and let $\ell:=2\lceil {\varepsilon}_2n\rceil -1$. If $C$ contains an antidirected subpath of length $\ell$, let $Q_2$ denote such a path. We may assume that the initial edge of $Q_2$ is a forward edge by reordering the vertices of $C$ if necessary. Otherwise, choose $Q_2$ to be any subpath of $C$ of length $\ell$ such that $Q_2$ contains at least ${\varepsilon}_1^{1/3} n$ sink vertices and the second vertex of $Q_2$ is a sink. Let $Q_1$ be the subpath of $C$ of length $\ell$ such that $d_C(Q_1,Q_2)=2s^*+\ell$. Note that if $Q_1$ is antidirected then $Q_2$ must also be antidirected. Let $e_1,e_2$ be the final two edges of $Q_1$ and let $f_1,f_2$ be the initial two edges of $Q_2$ (where the edges are listed in the order they appear in $Q_1$ and $Q_2$, i.e., $(e_1e_2)\subseteq Q_1$ and $(f_1f_2)\subseteq Q_2$). Note that $f_1$ is a forward edge and $f_2$ is a backward edge.
Let $Q'$ be the subpath of $C$ of length $14$ such that $d_C(Q', Q_2)=s^*$. If $Q'$ is antidirected, let $Q$ be the subpath of $Q'$ of length $13$ whose initial edge is a forward edge. Otherwise let $Q\subseteq Q'$ be a consistently oriented path of length two. We will consider the three cases stated below.
\medskip
\noindent \textbf{Case 1: }\emph{$Q_1$ and $Q_2$ are antidirected. Moreover, $\{e_2,f_1\}$ is consistent if and only if $n$ is even.}
We will assume that the initial edge of $Q$ is a forward edge; the case when $Q$ is a backward path of length two is very similar.%
\COMMENT{Suppose that $Q$ is a backward path. Map $Q$ to a backward path $Q^G$ of the form $T'BS'$. If $n$ is even, let $e:=e_1$ and, if $n$ is odd, let $e:=e_2$. In both cases, let $f:=f_2$. The assumptions of this case imply that $e$ and $f$ are both backward edges. Define $P$, $P_T$, $P_S$, $p_T$, $p_S$, $q_T$ and $q_S$ as in the main text. We have that $p_S+p_T+q_T+q_S=d_C(e,f)-1\equiv n \mod 2$. So we can choose $S_A, S_B, T_A, T_B$ as in main text.\\
Recall that $Q_1$ is antidirected. So we can find a path $(Q_1e)^G$ isomorphic to $(Q_1e)$ which covers the vertices in $T_A$ by source vertices and the vertices in $T_B$ by sink vertices - parity is OK as $e$ is backward. We choose this path to have the form
$$X*A(BAT_AA*A)^{t_A}(BT_BB*A)^{t_B}B*AT',$$ where $X\in \{A,B\}$. Observe that $(Q_1e)^G$ has $t_A$ repeated $A$s and $t_B$ repeated $B$s. Find a path $(fQ_2)^G$ isomorphic to $(fQ_2)$ of the form
$$S'A*A(BAS_AA*A)^{s_A}(BS_BB*A)^{s_B}B*B$$
which covers all vertices in $S_A$ by sink vertices and all vertices in $S_B$ by source vertices. $(fQ_2)^G$ has $s_A$ repeated $A$s and $s_B$ repeated $B$s. The rest of the proof is identical to the case when $Q$ is a forward path.}
We will find a copy $Q^G$ of $Q$ which is a $T'S'$-path. If $Q$ is a forward path of length two, map $Q$ to a forward path $Q^G$ of the form $T'AS'$. If $Q$ is antidirected, we find a copy $Q^G$ of $Q$ as follows. Let $Q''$ be the subpath of $Q$ of length eight such that $d_C(Q, Q'')=3$. Recall that a path in $G$ is useful if it has no repeated $A$s or $B$s and uses an odd number of vertices from $S\cup T$. Using Proposition~\ref{prop:ABST2links}, we find a copy $(Q'')^G$ of $Q''$ in $G$ which is a useful $AB$-path. We find $Q^G$ which starts with a path of the form $T'ABA$, uses $(Q'')^G$ and then ends with a path of the form $BAS'$. Let $q_S$ and $q_T$ be the numbers of interior vertices of $Q^G$ in $S$ and $T$, respectively.
If $n$ is even, let $e:=e_2$ and, if $n$ is odd, let $e:=e_1$. In both cases, let $f:=f_1$. The assumptions of this case imply that $e$ and $f$ are both forward edges. Let $P:=(Q_1CQ_2)$ and let $P_T$ and $P_S$ be subpaths of $C$ which are internally disjoint from $e, f$ and $Q$ and are such that $(eCf)= (eP_TQP_Sf)$. Our plan is to find a copy of $P_T$ in $G[T]$ and a copy of $P_S$ in $G[S]$. Let $p_T:=|P_T|$ and $p_S:=|P_S|$. If $Q$ is a consistently oriented path, we have $q_S=q_T=0$ and $p_S+p_T=d_C(e,f)-1$. If $Q$ is antidirected, then $q_S+q_T$ is odd and $p_S+p_T=d_C(e,f)-12$. So in both cases we observe that
\begin{equation}\label{eqn:parity1}
p_S+p_T+q_S+q_T\equiv d_C(e,f)-1\equiv n\mod 2.
\end{equation}
Choose $S_A, S_B, T_A, T_B$ to satisfy \eqref{eq:condition1} and \eqref{eq:condition2} so that $S''\setminus V(Q^G)\subseteq S_A\cup S_B$, $T''\setminus V(Q^G)\subseteq T_A\cup T_B$, $s=s_A+s_B+p_S+q_S$, $t=t_A+t_B+p_T+q_T$ and $s_A+t_A+d=s_B+t_B$. To see that this can be done, first note that the choice of $s^*$ implies that $s-p_S-q_S \geq \sqrt{{\varepsilon}_1}n/2>|S''|+d$ and $t-p_T-q_T\geq \sqrt{{\varepsilon}_1}n/2 > |T''|+d$. Let $r:=s+t-(p_S+p_T+q_S+q_T)$. So $r$ is the number of vertices in $S\cup T$ which will not be covered by the copies of $P_T$, $P_S$ or $Q$. Then \eqref{eqn:parity1} implies that
$$r\equiv s+t-n \equiv d \mod 2.$$ Thus we can choose the required subsets $S_A, S_B, T_A, T_B$ so that $s_A+t_A=(r-d)/2$ and $s_B+t_B=(r+d)/2$. Note that (R\ref{R2}) and the choice of $s^*$ also imply that $s_A+s_B, t_A+t_B \leq 2\sqrt{{\varepsilon}_1}n$.
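Indeed, this choice satisfies the required balance directly:
$$(s_B+t_B)-(s_A+t_A)=\frac{r+d}{2}-\frac{r-d}{2}=d \qquad\text{and}\qquad (s_A+t_A)+(s_B+t_B)=r,$$
so $s_A+t_A+d=s_B+t_B$ holds and the four sets account for all $r$ vertices of $S\cup T$ left uncovered by the copies of $P_T$, $P_S$ and $Q$.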
Recall that $Q_1$ is antidirected. So we can find a path $(Q_1e)^G$ isomorphic to $(Q_1e)$ which covers the vertices in $T_A$ by source vertices and the vertices in $T_B$ by sink vertices. We choose this path to have the form%
\COMMENT{Since $e$ is forward and the initial vertex of $e$ is mapped to $B$, the parity is OK, that is, we can map source vertices to $T_A$ and sink vertices to $T_B$.}
$$X*A(BAT_AA*A)^{t_A}(BT_BB*A)^{t_B}B*BT',$$ where $X\in \{A,B\}$. Observe that $(Q_1e)^G$ has $t_A$ repeated $A$s and $t_B$ repeated $B$s. Find a path $Q_2^G$ isomorphic to $Q_2$ of the form
$$S'B*A(BAS_AA*A)^{s_A}(BS_BB*A)^{s_B}B*B$$
which covers all vertices in $S_A$ by sink vertices and all vertices in $S_B$ by source vertices. $Q_2^G$ has $s_A$ repeated $A$s and $s_B$ repeated $B$s. So far, we have been working under the assumption that $Q$ starts with a forward edge. If $Q$ is a backward path, the main difference is that we let $e:=e_1$ if $n$ is even and let $e:=e_2$ if $n$ is odd. We let $f:=f_2$ so that $e$ and $f$ are both backward edges and we map $Q$ to a backward path $Q^G$ of the form $T'BS'$. Then \eqref{eqn:parity1} holds and we can proceed similarly as in the case when $Q$ is a forward path.
We find copies of $P_T$ in $G[T']$ and $P_S$ in $G[S']$ as follows. Greedily embed the first $\sqrt{{\varepsilon}_1}n$ vertices of $P_T$ to cover all uncovered vertices $x\in T'$ with $d^+_T(x)\leq t-{\varepsilon}^{1/3}n$ or $d^-_T(x)\leq t-{\varepsilon}^{1/3}n$. Note that, by (R\ref{R9}), there are at most ${\varepsilon}_1n$ such vertices. Write $P_T'\subseteq P_T$ for the subpath still to be embedded and let $t_1$ and $t_2$ be the images of its endvertices in $T$. Let $T^*$ denote the sets of so far uncovered vertices in $T$ together with $t_1$ and $t_2$ and define $G_T:=G[T^*]$. We have that $\delta^0(G_T)\geq t-{\varepsilon}^{1/3}n-3\sqrt{{\varepsilon}_1}n\geq 7|G_T|/8$, using (R\ref{R1}), and so we can apply Proposition~\ref{prop:completepath}(i) to find a copy of $P_T'$ in $G_T$ with the desired endpoints. In the same way, we find a copy of $P_S$ in $G[S']$. Together with $Q^G$, $(Q_1e)^G$ and $Q_2^G$, this gives a copy $P^G$ of $P$ in $G$ such that $|V(P^G)\cap(A\cup B)|\leq 5{\varepsilon}_2n$.
The path $P^G$ satisfies (EC\ref{EC1}) and we may assume that (EC\ref{EC2}) holds, by extending the path by one or two vertices, if necessary, so that both of its endvertices lie in $A$. Let us now verify (EC\ref{EC3}). All repeated $A$s and $B$s in $P^G$ are repeated $A$s and $B$s in the paths $(Q_1e)^G$ and $Q_2^G$. So in total, $P^G$ has $s_A+t_A$ repeated $A$s and $s_B+t_B$ repeated $B$s. Then \eqref{eqn:repeats} gives that $P^G$ satisfies
$$|B\setminus V(P^G)|-|A\setminus V(P^G)|=d-(s_B+t_B)+(s_A+t_A)+1=1.$$
So (EC\ref{EC3}) is satisfied and $P^G$ is an exceptional cover.
\medskip
\noindent \textbf{Case 2: }\emph{There exists $e\in \{e_1,e_2\}$ and $f\in \{f_1,f_2\}$ such that $\{e,f\}$ is consistent and $n-d_C(e,f)$ is even.}
Let $v$ be the final vertex of $f$. Recall the definitions of a useful tripartition and a link from Section~\ref{sec:AB}. Consider a useful tripartition $P_1,P_2,P_3$ of $(vQ_2)$ and let $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3$ be sink/source/sink sets. Let $L\subseteq P_2$ be a link of length eight such that $d_C(v, L)$ is even. If $Q$ is a consistently oriented path, use Proposition~\ref{prop:ABST2links} to find a copy $L^G$ of $L$ which is a useful $BA$-path if $e$ is forward and a useful $AB$-path if $e$ is backward. Map $Q$ to a path $Q^G$ of the form $T'AS'$ if $Q$ is a forward path and $T'BS'$ if $Q$ is a backward path. If $Q$ is antidirected, let $Q''$ be the subpath of $Q$ of length eight such that $d_C(Q, Q'')=3$. Using Proposition~\ref{prop:ABST2links}, we find disjoint copies $(Q'')^G$ of $Q''$ and $L^G$ of $L$ in $G$ such that $(Q'')^G$ is a useful $AB$-path and $L^G$ is as described above. We find $Q^G$ which starts with a path of the form $T'ABA$, uses $(Q'')^G$ and then ends with a path of the form $BAS'$. Let $q_S$ be the number of interior vertices of $Q^G$ and $L^G$ in $S$ and let $q_T$ be the number of interior vertices of $Q^G$ and $L^G$ in $T$. Note that in all cases, $Q^G$ is a $T'S'$-path with no repeated $A$s or $B$s.
Let $P:=(eCQ_2)$ and let $P_0:=(eCf)$. Define subpaths $P_T$ and $P_S$ of $C$ which are internally disjoint from $Q,e,f$ and are such that $P_0= (eP_TQP_Sf)$. Let $p_T:=|P_T|$ and $p_S:=|P_S|$. Our aim will be to find a copy $P_0^G$ of $P_0$ which uses $Q^G$ and maps $P_T$ to $G[T]$ and $P_S$ to $G[S]$. $P_0^G$ will have the form $F$ given in Table~\ref{table2}. We fix edges $e^G$ and $f^G$ for $e$ and $f$. If $e$ is a forward edge, then choose $e^G$ to be a $BT'$-edge and $f^G$ to be an $S'B$-edge. If $e$ is a backward edge, let $e^G$ be a $T'A$-edge and $f^G$ be an $AS'$-edge. We also define a constant $d'$ in Table~\ref{table2} which will be used to ensure that the final assignment is balanced.
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|}
\hline
Initial edge of $Q$& forward & forward & backward & backward\\
$e$& forward & backward & forward & backward\\
\hline
\rule{0pt}{11pt}$F$ & $BT^{p_T}\mathcal{A} S^{p_S}B$& $AT^{p_T}\mathcal{A} S^{p_S}A$ & $BT^{p_T}BS^{p_S}B$ & $AT^{p_T}BS^{p_S}A$\\
\hline
\rule{0pt}{11pt}$d'$ & $d$& $d+2$ & $d-2$ & $d$\\
\hline
\end{tabular}
\caption{Proof of Lemma~\ref{lem:ABST2}, Cases~2 and 3: $P_0^G$ has form $F$, where $\mathcal{A}$ denotes an $A$-path with no repeated $A$s or $B$s.}\label{table2}
\end{table}
So, if $r_A$ and $r_B$ are the numbers of repeated $A$s and $B$s in $P_0^G$ respectively, we will have $r_A-r_B=d'-d$.
Note that%
\COMMENT{If $Q$ is a consistently oriented path, $p_T+p_S=d_C(e,f)-1$ and $q_T+q_S$ is the number of interior vertices in $L^G$ in $S\cup T$ which is odd. So $p_T+p_S+q_T+q_S \equiv d_C(e,f) \mod 2$. If $Q$ is antidirected, $p_T+p_S=d_C(e,f)-12$. Note that in this case $Q^G$ has an odd number of interior vertices in $S\cup T$ and $q_T+q_S$ is the number of interior vertices in $Q^G$ and $L^G$ in $S\cup T$ which is even. So $p_T+p_S+q_T+q_S \equiv d_C(e,f) \mod 2$.}
\begin{equation}\label{eqn:parity2}
p_T+p_S+q_T+q_S \equiv d_C(e,f) \equiv n\mod 2.
\end{equation}
The number of vertices in $S\cup T$ which will not be covered by $P_0^G$ or $L^G$ is equal to $r:=s+t-(p_T+p_S+q_T+q_S)$ and \eqref{eqn:parity2} implies that
$$r\equiv s+t-n\equiv d \equiv d' \mod 2.$$
Also note that the choice of $s^*$ implies that $s-p_S-q_S \geq \sqrt{{\varepsilon}_1}n/2>|S''|+d'$ and $t-p_T-q_T\geq \sqrt{{\varepsilon}_1}n/2 > |T''|+d'$.
Thus we can choose sets $S_A, S_B, T_A, T_B$ satisfying \eqref{eq:condition1} and \eqref{eq:condition2} so that $S''\setminus V(Q^G \cup L^G)\subseteq S_A\cup S_B$, $T''\setminus V(Q^G\cup L^G)\subseteq T_A\cup T_B$, $s=s_A+s_B+p_S+q_S$, $t=t_A+t_B+p_T+q_T$ and $s_A+t_A+d'=s_B+t_B$.%
\COMMENT{Let $s_A+t_A=(r-d')/2$ and $s_B+t_B=(r+d')/2$.}
(R\ref{R2}) and the choice of $s^*$ imply that $s_A+s_B, t_A+t_B \leq 2\sqrt{{\varepsilon}_1}n$.
Recall that $v$ denotes the final vertex of $f$ and let $v^G$ be the image of $v$ in $G$. If $v^G\in A$ (i.e., if $e$ is backward), let $v':=v$ and $(v')^G:=v^G$. If $v^G\in B$, let $v'$ denote the successor of $v$ on $C$. If $vv'\in E(C)$, map $v'$ to an outneighbour of $v^G$ in $A$ and, if $v'v\in E(C)$, map $v'$ to an inneighbour of $v^G$ in $A$. Let $(v')^G$ be the image of $v'$. Then we can apply Proposition~\ref{prop:sinksource}, with $2\sqrt{{\varepsilon}_1}, \eta_1/2, \tau/2, (v')^G$ playing the roles of ${\varepsilon}, \eta, \tau, a_1$, to find a copy $(v'Q_2)^G$ of $(v'Q_2)$ which starts at $(v')^G$, covers $S_A, S_B, T_A, T_B$ and contains $L^G$. Note that we make use of \eqref{eq:condition1} and \eqref{eq:condition2} here. We obtain a copy $(vQ_2)^G$ of $(vQ_2)$ (by combining $v^G(v')^G$ with $(v'Q_2)^G$ if $v'\neq v$) which has $s_A+t_A$ repeated $A$s and $s_B+t_B$ repeated $B$s.
We find copies of $P_T$ in $G[T]$ and $P_S$ in $G[S]$ as in Case~1. Combining these paths with $(vQ_2)^G$, $e^G$, $Q^G$ and $f^G$, we obtain a copy $P^G$ of $P$ in $G$ such that $|V(P^G)\cap(A\cup B)|\leq 3{\varepsilon}_2 n$. The path $P^G$ satisfies (EC\ref{EC1}) and we may assume that (EC\ref{EC2}) holds, by extending the path if necessary to have both endvertices in $A$. All repeated $A$s and $B$s in $P^G$ occur as repeated $A$s and $B$s in the paths $P_0^G$ and $(vQ_2)^G$ so we can use \eqref{eqn:repeats} to see that
$$|B\setminus V(P^G)|-|A\setminus V(P^G)|= d-(s_B+t_B)+(d'-d)+(s_A+t_A)+1=1.$$
Therefore, (EC\ref{EC3}) is satisfied and $P^G$ is an exceptional cover.
\medskip
\noindent \textbf{Case 3: }\emph{The assumptions of Cases~$1$ and $2$ do not hold.}
Recall that $f_1$ is a forward edge and $f_2$ is a backward edge. Since Case~2 does not hold, this implies that $e_2$ is a forward edge if $n$ is even (otherwise $e:=e_2$ and $f:=f_2$ would satisfy the conditions of Case~2) and $e_2$ is a backward edge if $n$ is odd (otherwise $e:=e_2$ and $f:=f_1$ would satisfy the conditions of Case~2). In particular, since Case~1 does not hold, this in turn implies that $Q_1$ is not antidirected.
We claim that $Q_1\setminus \{e_2\}$ is not antidirected. Suppose not. Then it must be the case that $\{e_1,e_2\}$ is consistent. If $e_1$ and $e_2$ are forward edges (and so $n$ is even), then $e:=e_1$ and $f:=f_1$ satisfy the conditions of Case~2. If $e_1$ and $e_2$ are both backward edges (and so $n$ is odd), then $e:=e_1$ and $f:=f_2$ satisfy the conditions of Case~2. Therefore, $Q_1\setminus \{e_2\}$ is not antidirected and must contain a consistently oriented path $Q_1'$ of length two.
Let $e:=e_2$. If $n$ is even, let $f:=f_1$ and, if $n$ is odd, let $f:=f_2$. In both cases, we have that $\{e,f\}$ is consistent. Let $P:=(Q_1'CQ_2)$ and $P_0:=(ePf)$. Let $P_T$ and $P_S$ be subpaths of $C$ defined such that $P_0=(eP_TQP_Sf)$. Set $p_T:=|P_T|$ and $p_S:=|P_S|$. Our aim is to find a copy $P_0^G$ which is of the form given in Table~\ref{table2}. We also define a constant $d'$ as in Table~\ref{table2}. So if $r_A$ and $r_B$ are the numbers of repeated $A$s and $B$s in $P_0^G$ respectively, then again $r_A-r_B=d'-d$.
Let $v$ be the final vertex of $f$. Consider a tripartition $P_1, P_2, P_3$ of $(vQ_2)$ and a link $L\subseteq P_2$ of length eight such that $d_C(v,L)$ is even. Proceed exactly as in Case~2 to find copies $Q^G$ and $L^G$ of $Q$ and $L$.
Use (R\ref{R3}), (R\ref{R8}) and (R\ref{R9}) to fix a copy $(Q_1'Ce)^G$ of $(Q_1'Ce)$ which is disjoint from $Q^G$ and $L^G$ and is of the form given in Table~\ref{table3}.
\begin{table}[h]
\begin{tabular}{|p{3.2cm}|c|c|c|c|}
\hline
\centering{$Q_1'$}& forward & forward & backward & backward\\
\centering{$d_C(Q_1',e)$}& odd & even & odd & even\\
\hline
\rule{0pt}{11pt} \centering{Form of $(Q_1'Ce)^G$ if $e$ is forward}& $BTA*BT'$ & $ASB*BT'$ & $BSA*BT'$ & $ATB*BT'$\\
\hline
\rule{0pt}{11pt} \centering{Form of $(Q_1'Ce)^G$ if $e$ is backward}& $ASB*AT'$ & $BTA*AT'$ & $ATB*AT'$ & $BSA*AT'$\\
\hline
\end{tabular}
\caption{Form of $(Q_1'Ce)^G$ in Case 3.}\label{table3}
\end{table}
Note that the interior of $(Q_1'Ce)^G$ uses exactly one vertex from $S\cup T$ and $(Q_1'Ce)^G$ has no repeated $A$s or $B$s. Write $(Q_1')^G$ for the image of $Q_1'$. We also fix an edge $f^G$ for the image of $f$ which is disjoint from $Q^G$, $L^G$ and $(Q_1'Ce)^G$ and is an $S'B$-edge if $e$ is forward and an $AS'$-edge if $e$ is backward. Let $q_S$ be the number of interior vertices of $Q^G$, $L^G$ and $(Q_1')^G$ in $S$ and let $q_T$ be the number of interior vertices of $Q^G$, $L^G$ and $(Q_1')^G$ in $T$.
Note that $p_S+p_T+q_S+q_T\equiv d_C(e,f)-1 \equiv n \mod 2$.%
\COMMENT{Note that $(Q_1')^G$ has exactly one interior vertex in $S\cup T$. If $Q$ is consistently oriented, $p_T+p_S=d_C(e,f)-1$ and $q_T+q_S$ is even. If $Q$ is antidirected, $p_T+p_S=d_C(e,f)-12$ and $q_T+q_S$ is odd.}
Using the same reasoning as in Case~2, we find sets $S_A, S_B, T_A, T_B$ satisfying \eqref{eq:condition1} and \eqref{eq:condition2} such that $S''\setminus V(Q^G \cup L^G\cup (Q_1')^G)\subseteq S_A\cup S_B$, $T''\setminus V(Q^G\cup L^G\cup (Q_1')^G)\subseteq T_A\cup T_B$, $s=s_A+s_B+p_S+q_S$, $t=t_A+t_B+p_T+q_T$ and $s_A+t_A+d'=s_B+t_B$.
(R\ref{R2}) and the choice of $s^*$ imply that $s_A, t_A, s_B, t_B \leq 2\sqrt{{\varepsilon}_1}n$.
Recall that $v$ denotes the final vertex of $f$. Similarly as in Case~2, we now use Proposition~\ref{prop:sinksource} to find a copy $(vQ_2)^G$ of $(vQ_2)$ which covers $S_A, S_B, T_A, T_B$, contains $L^G$ and has $s_A+t_A$ repeated $A$s and $s_B+t_B$ repeated $B$s.%
\COMMENT{Let $v^G$ be the image of $v$. If $v^G\in A$, let $v':=v$ and $(v')^G:=v^G$. If $v^G\in B$, let $v'$ denote the successor of $v$ on $C$. Map $v'$ to a suitable neighbour $(v')^G$ of $v^G$ in $A$. Then we can apply Proposition~\ref{prop:sinksource}, with $2\sqrt{{\varepsilon}_1}, \eta_1/2, \tau/2, (v')^G$ playing the roles of ${\varepsilon}, \eta, \tau, a_1$, to find a copy $(v'Q_2)^G$ of $(v'Q_2)$ which starts at $(v')^G$, covers $S_A, S_B, T_A, T_B$ and contains $L^G$. We obtain a copy $(vQ_2)^G$ of $(vQ_2)$ which has $s_A+t_A$ repeated $A$s and $s_B+t_B$ repeated $B$s.}
We find copies of $P_T$ in $G[T]$ and $P_S$ in $G[S]$ as in Case~1. Together with $(Q_1'Ce)^G$, $Q^G$, $f^G$ and $(vQ_2)^G$, these paths give a copy $P^G$ of $P$ in $G$ such that $|V(P^G)\cap(A\cup B)|\leq 5{\varepsilon}_2 n$. The path $P^G$ satisfies (EC\ref{EC1}) and we may assume that (EC\ref{EC2}) holds, by extending the path so that both endvertices lie in $A$ if necessary. All repeated $A$s and $B$s in $P^G$ occur as repeated $A$s and $B$s in the paths $P_0^G$ and $(vQ_2)^G$, so we can use \eqref{eqn:repeats} to see that
$$|B\setminus V(P^G)|-|A\setminus V(P^G)|= d-(s_B+t_B)-(d-d')+(s_A+t_A)+1=1.$$
So (EC\ref{EC3}) is satisfied and $P^G$ is an exceptional cover.
\end{proof}
\subsection{Finding a copy of $C$}
As we did in the $AB$-extremal case, we will now use an exceptional cover to find a copy of $C$ in $G$.
\begin{proofof}\textbf{Lemma~\ref{lem:ABST}.}
Apply Lemma~\ref{lem:ABST1} or Lemma~\ref{lem:ABST2} to find an exceptional cover $P$ of $G$ which uses at most $2\eta_1^2n$ vertices from $A\cup B$. Let $P'$ be the path of length $\sqrt{{\varepsilon}_1}n$ following $P$ on $C$. Extend $P$ by a path isomorphic to $P'$, using this path to cover all $x\in A$ such that $d^+_B(x)\leq b-{\varepsilon}^{1/3}n$ or $d^-_B(x)\leq b-{\varepsilon}^{1/3}n$ and all $x\in B$ such that $d^+_A(x)\leq a-{\varepsilon}^{1/3}n$ or $d^-_A(x)\leq a-{\varepsilon}^{1/3}n$, using only edges in $E(A,B)\cup E(B,A)$. Let $P^*$ denote the resulting extended path.
We may assume that both endvertices $a_1, a_2$ of $P^*$ are in $A$ and also that $d^\pm_B(a_i) \geq b-{\varepsilon}^{1/3} n$ (by extending the path if necessary). Let $A^*$ and $B^*$ denote the sets consisting of those vertices in $A$ and $B$, respectively, which have not already been covered by $P^*$, together with $a_1$ and $a_2$, and let $G^*:=G[A^*, B^*]$. We have that $|A^*|=|B^*|+1$ and
$\delta^0(G^*)\geq a-3\eta_1^2n \geq (7|B^*|+2)/8.$
Then $G^*$ has a Hamilton path of any orientation with the desired endpoints by Proposition~\ref{prop:completepath}(ii). Together with $P^*$, this gives a copy of $C$ in $G$.
\end{proofof}
\section*{Acknowledgements}
We are grateful to the referees for a careful reading of this paper.
\section{Introduction}
\label{intro}
Directional statistics is the subdiscipline of statistics used to study directional observations that can be represented as unit vectors in Euclidean space. The sample space of a directional variable represented by a unit vector in $\mathbb{R}^p$ is the $(p-1)$-dimensional unit hypersphere $\mathbb{S}^{p-1}$. The most common directional observations are circular data and spherical data, corresponding to the cases $p=2$ and $p=3$, respectively. Directional observations arise in many scientific fields and applications, including studies of wind directions \cite{jammalamadaka2006effect, nunez2015bayesian}, motion planning for robots \cite{kucner2017enabling, palmieri2017kinodynamic}, and image analysis \cite{hasnat2014unsupervised, roy2016swgmm}. Due to the non-Euclidean periodic property of the domain of directional observations, the analysis of such data requires specialized statistical models. Examples of parametric models include the von Mises-Fisher family \cite{watson1982distributions} and variants of the normal distribution (e.g., the projected normal \cite{mardia1975statistics,wang2013directional, hernandez2017general} and the wrapped normal \cite{collett1981discriminating}). See \cite{mardia2000directional} and \cite{pewsey2021recent} for a more comprehensive review. A number of recent studies focus on more flexible modeling of directional data using mixtures of distributions, including mixtures of the normal variants \cite{nunez2015bayesian, wang2014modeling,rodriguez2020bayesian}, mixtures of von Mises distributions \cite{carta2008statistical} and sums of trigonometric functions \cite{fernandez2004circular}.
Directional data are often observed together with linear variables that take values on the real line. For example, in the study of meteorology, wind directions may be collected along with other linear components like wind speed, temperature and humidity \cite{fernandez2007models}; and in image analysis, some color spaces adopt hue (a circular variable) and other linear measurements (e.g., chroma and lightness) to represent color information. Establishing the joint distribution of directional-linear data requires modeling the correlation of directional and linear components. This is not a straightforward task due to the complex manifold of the sample space. In the case of one circular variable and one linear variable, the sample space is the surface of a cylinder. A popular approach to modeling cylindrical data is to use a copula density that can marginalize to a circular distribution and a linear distribution \cite{johnson1978some, fernandez2007models,carta2008joint,soukissian2014probabilistic,zhang2018investigation}. \textit{Roy et al.} \cite{roy2017jclmm} developed mixtures of copula distributions to obtain a more flexible family of models. To the best of our knowledge, the copula models developed so far are all bivariate models for circular-linear data, and extending them into higher-dimensional space (e.g., by including a spherical variable or additional linear variables) is not a trivial problem. Efforts have been made to model the joint distribution of directional-linear data in higher dimensional space based on the multivariate normal distribution (MVN), which conveniently models the correlations among multiple variables. For example, by transforming one dimension of the MVN into a wrapped normal, the MVN distribution is capable of modeling the joint distribution of one circular variable and multiple linear variables. \textit{Roy et al.} \cite{roy2016swgmm} developed a mixture model based on this idea. \textit{Mastrantonio} \cite{mastrantonio2018joint} proposed using projected normal and skew-normal distributions to model the joint distribution of multiple circular variables and linear variables. All of the models mentioned above are limited to circular data and not applicable to directional variables in higher dimensions (e.g., a spherical variable).
In this study we propose a novel approach to modeling directional-linear data that can accommodate a directional variable with dimension $p > 2$. The basic idea is to use an MVN to derive the joint distribution of multiple linear variables and one directional variable in arbitrary dimension, and then marginalize to a projected normal. Following nomenclature similar to that in \cite{roy2016swgmm}, we call the resulting marginal distribution a semi-projected normal distribution (SPN). With an appropriate covariance structure, the SPN can accommodate skewed and bimodal distributions for the directional component. In many real-world applications, data distributions can be multimodal and too complex for a specified parametric distribution. Nonparametric Bayesian approaches like Dirichlet process mixture models (DPMM) are often applied to address such cases. We define a DPMM based on the SPN to create a more flexible model for directional-linear data and implement a Markov chain Monte Carlo sampling algorithm to fit the model. The projected normal distribution requires a constraint on the covariance matrix to ensure the model is identifiable \cite{hernandez2017general, mastrantonio2018joint}, and our Bayesian approach requires a prior distribution for the covariance matrix that is subject to the same constraint. We develop a conditional inverse-Wishart distribution to accommodate the constraint while still taking advantage of conjugacy to achieve efficient sampling.
The remainder of this paper is organized as follows. Section \ref{SPN} reviews the definition and properties of the projected normal distribution and introduces the SPN for directional-linear data. Section \ref{secdpmm} reviews the basic setting of the DPMM and then develops a DPMM based on the SPN to build a more flexible and robust model for directional-linear data. Section \ref{Experiments} applies our Dirichlet process SPN mixture model (DPSPN) to clustering of synthetic data and image segmentation and compares its performance with other state-of-the-art methods. Section \ref{BPA} develops a hierarchical DPSPN that is applied to density estimation for bloodstain pattern analysis. A summary discussion is provided in Section \ref{Discussion}.
\section{The semi-projected normal distribution}
\label{SPN}
In this section, we first review the definition and properties of the projected normal distribution, which can be used to model a directional variable of arbitrary dimension. We then introduce the semi-projected normal distribution (SPN) as a generalization of the projected normal that models the joint distribution of directional-linear data.
\subsection{The projected normal distribution for directional data}
One approach to obtaining a distribution for directional data is by projecting a distribution defined on $\mathbb{R}^p$ onto the unit hypersphere $\mathbb{S}^{p-1}$. For $p\geqslant 2$, let the random vector $\bm{x} = (x_1,...,x_p)^T$ follow a $p$-variate normal distribution $\mathcal{N}_p(\bm{\mu},\bm{\Sigma})$ and define the directional variable $\bm{u} = (u_1,...,u_p)^T = r^{-1} \bm{x}$, where $r = \lVert \bm{x} \rVert = (\bm{x}^T\bm{x})^{\frac{1}{2}}$ is the radius. The marginal distribution of $\bm{u}$ is called the projected normal with parameters $\bm{\mu}$ and $\bm{\Sigma}$ and denoted by $\mathcal{PN}_p(\bm{\mu},\bm{\Sigma})$. It is defined on the $(p-1)$-dimensional unit hypersphere $\mathbb{S}^{p-1}$. An alternative way to represent $\bm{u}$ is to use $(p-1)$ angular coordinates $\bm{\theta} = (\theta_1,...,\theta_{p-1})$ in the spherical system, where $\theta_1,...,\theta_{p-2}$ range over $[0,\pi]$ and $\theta_{p-1}$ ranges over $[0,2\pi)$. Since $\bm{\theta}$ and $\bm{u}$ represent the same direction, they are often used interchangeably in the literature. The vector $\bm{x}$ can be computed from $r$ and $\bm{\theta}$ via the following transformation:
\begin{align} \label{car2sph}
\begin{split}
x_1 &= r\cos{\theta_1} \\
x_2 &= r\sin{\theta_1}\cos{\theta_2} \\
x_3 &= r\sin{\theta_1}\sin{\theta_2}\cos{\theta_3} \\
&\;\;\vdots \\
x_{p-1} &= r\sin{\theta_1}...\sin{\theta_{p-2}}\cos{\theta_{p-1}} \\
x_{p} &= r\sin{\theta_1}...\sin{\theta_{p-2}}\sin{\theta_{p-1}}
\end{split}
\end{align}
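For concreteness, the transformation (\ref{car2sph}) is straightforward to implement. The following minimal Python sketch (illustrative only; the function and variable names are ours and not part of any package) recovers $\bm{x}$ from $(r,\bm{\theta})$:
\begin{verbatim}
import numpy as np

def sph_to_cart(r, theta):
    # Map radius r and angles theta (length p-1) to x in R^p via (car2sph).
    p = len(theta) + 1
    x = np.empty(p)
    sin_prod = 1.0  # running product sin(theta_1)...sin(theta_{j-1})
    for j in range(p - 1):
        x[j] = r * sin_prod * np.cos(theta[j])
        sin_prod *= np.sin(theta[j])
    x[p - 1] = r * sin_prod  # the last coordinate carries only sines
    return x
\end{verbatim}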
The joint density function of $r$ and $\bm{\theta}$ can be derived from the density of $\mathcal{N}_p(\bm{\mu},\bm{\Sigma})$ and the Jacobian matrix of the transformation (\ref{car2sph}):
\begin{align}
\begin{split}
f(r,\bm{\theta}|\bm{\mu},\bm{\Sigma})
&= f_{\bm{X}}(\bm{x}|\bm{\mu},\bm{\Sigma})|\frac{\partial\bm{x}}{\partial(r,\bm{\theta})}| \\
&= |2\pi\bm{\Sigma}|^{-\frac{1}{2}}r^{p-1}\exp{\Bigl\{-\frac{1}{2}(r\bm{u}-\bm{\mu})^T\bm{\Sigma}^{-1}(r\bm{u}-\bm{\mu})\Bigl\}}\prod_{j=1}^{p-2}(\sin{\theta_j})^{p-1-j} \label{rthetajoint}
\end{split}
\end{align}
In practice, only $\bm{\theta}$ is observed. The radius $r$, and hence the vector $\bm{x}$, is not observable; they can be viewed as an augmented variable set. The marginal density of $\bm{\theta}$ can be obtained by integrating out $r$. The result, derived by \textit{Pukkila} \& \textit{Rao} \cite{pukkila1988pattern}, is given below:
\begin{align}
\begin{split}
f(\bm{\theta}|\bm{\mu},\bm{\Sigma}) &= \int_0^\infty f(r,\bm{\theta}|\bm{\mu},\bm{\Sigma})dr\\
&= |2\pi\bm{\Sigma}|^{-\frac{1}{2}}Q_3^{-\frac{p}{2}}\exp{\Bigl\{-\frac{1}{2}(Q_1-Q_2^2Q_3^{-1})\Bigl\}}\kappa_p(Q_2Q_3^{-\frac{1}{2}}) \label{PNpdf}
\end{split}
\end{align}
where $Q_1 = \bm{\mu}^T\bm{\Sigma}^{-1}\bm{\mu}$, $Q_2 = \bm{\mu}^T\bm{\Sigma}^{-1}\bm{u}$, $Q_3 = \bm{u}^T\bm{\Sigma}^{-1}\bm{u}$ and function $\kappa_p(\cdot)$ is defined as follows:
\begin{align*}
\kappa_p(x) = \int_0^\infty r^{p-1}\exp{\Bigl\{-\frac{1}{2}(r-x)^2\Bigl\}}dr
\end{align*}
The recursive property of $\kappa_p(x)$ is given in \cite{pukkila1988pattern}. Equation (\ref{PNpdf}) gives the density function of $\mathcal{PN}_p(\bm{\mu},\bm{\Sigma})$. As pointed out in \cite{wang2013directional}, the shape of the distribution can be asymmetric or bimodal, and the mean direction of $\bm{\theta}$ depends on both $\bm{\mu}$ and $\bm{\Sigma}$. If $\bm{\mu}$ is orthogonal to any of the eigenvectors of $\bm{\Sigma}$, the distribution is symmetric \cite{hernandez2017general}.
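For readers who wish to evaluate (\ref{PNpdf}) directly, the sketch below computes $\kappa_p$ by numerical quadrature and assembles the log-density at a unit vector $\bm{u}$. It is meant only to illustrate the computation; in practice the recursion for $\kappa_p$ given in \cite{pukkila1988pattern} is more efficient.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def kappa(p, x):
    # kappa_p(x) = int_0^inf r^{p-1} exp(-(r - x)^2 / 2) dr, by quadrature.
    val, _ = quad(lambda r: r**(p - 1) * np.exp(-0.5 * (r - x)**2), 0.0, np.inf)
    return val

def pn_logpdf(u, mu, Sigma):
    # Log-density of PN_p(mu, Sigma) at the unit vector u, following (PNpdf).
    p = len(mu)
    Sinv = np.linalg.inv(Sigma)
    Q1, Q2, Q3 = mu @ Sinv @ mu, mu @ Sinv @ u, u @ Sinv @ u
    logdet = np.linalg.slogdet(2 * np.pi * Sigma)[1]
    return (-0.5 * logdet - 0.5 * p * np.log(Q3)
            - 0.5 * (Q1 - Q2**2 / Q3) + np.log(kappa(p, Q2 / np.sqrt(Q3))))
\end{verbatim}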
Note that the density function remains unchanged if $\bm{\mu}$ and $\bm{\Sigma}$ are replaced by $a\bm{\mu}$ and $a^2\bm{\Sigma}$ for any $a>0$. This raises an identifiability issue that can be solved by putting a constraint on the parameters. A popular choice is to let $\bm{\Sigma} = \bm{I}_p$, which leads to a distribution that is unimodal and symmetric about the direction of $\bm{\mu}$. A more general approach is to fix one of the diagonal entries of $\bm{\Sigma}$ to be one \cite{wang2013directional,hernandez2017general,mastrantonio2018joint}:
\begin{align} \label{constraint}
\bm{\Sigma} = \begin{pmatrix}
1 & \bm{\omega}^T \\
\bm{\omega} & \bm{\Omega}
\end{pmatrix}
\end{align}
where $\bm{\omega}$ is a $(p-1)$ vector and $\bm{\Omega}$ is a $(p-1)$ by $(p-1)$ matrix. The constrained $\bm{\Sigma}$ needs to be positive semi-definite to remain a valid covariance matrix.
\subsection{Incorporating linear variables} \label{ILV}
Suppose we observe $\bm{u}$ (or equivalently $\bm{\theta}$) together with $q$ linear variables denoted by the vector $\bm{y} = (y_1,...,y_q)^T$ that follow a multivariate normal distribution (MVN). Since $\bm{x}$, the augmented representation of $\bm{u}$, is also normally distributed, it is intuitive to introduce dependence between $\bm{u}$ and $\bm{y}$ by modeling the joint distribution of $\bm{z} = (\bm{x},\bm{y})^T$ with a $(p+q)$-variate normal:
\begin{align} \label{partitionMVN}
\bm{z}=\begin{pmatrix}
\bm{x} \\
\bm{y}
\end{pmatrix} \sim \mathcal{N}_{d}[\Tilde{\bm{\mu}} = \begin{pmatrix}
\bm{\mu}_x \\
\bm{\mu}_y
\end{pmatrix},
\Tilde{\bm{\Sigma}} =
\begin{pmatrix}
\bm{\Sigma}_{xx} & \bm{\Sigma}_{xy} \\
\bm{\Sigma}_{yx} & \bm{\Sigma}_{yy}
\end{pmatrix}]
\end{align}
where $\Tilde{\bm{\mu}}$ and $\Tilde{\bm{\Sigma}}$ are the joint mean and covariance matrix and $d=p+q$. Note that $\bm{\Sigma}_{xx}$ must satisfy the identifiability constraint (\ref{constraint}). Marginally, $\bm{x}$ and $\bm{y}$ are still normally distributed. By the conditional distribution property of the MVN, we have $\bm{x}|\bm{y} \sim\mathcal{N}_p(\bm{\mu}_{x|y},\bm{\Sigma}_{x|y})$
where $\bm{\mu}_{x|y} = \bm{\mu}_x+\bm{\Sigma}_{xy}\bm{\Sigma}_{yy}^{-1}(\bm{y}-\bm{\mu}_y)$ and $\bm{\Sigma}_{x|y}=\bm{\Sigma}_{xx}-\bm{\Sigma}_{xy}\bm{\Sigma}_{yy}^{-1}\bm{\Sigma}_{yx}$. Substituting $(\bm{\mu}_{x|y},\bm{\Sigma}_{x|y})$ for $(\bm{\mu},\bm{\Sigma})$ in (\ref{rthetajoint}) and (\ref{PNpdf}) yields the corresponding conditional density functions for $(r,\bm{\theta})|\bm{y}$ and $\bm{\theta}|\bm{y}$. Multiplying these conditional density functions by the marginal density function of $\bm{y}$, we obtain the following joint distributions:
\begin{align}
f(r,\bm{\theta},\bm{y}|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) &= f(r,\bm{\theta}|\bm{\mu}_{x|y},\bm{\Sigma}_{x|y}) \cdot \mathcal{N}_q(\bm{y}|\bm{\mu}_{y},\bm{\Sigma}_{yy}) \label{rthetaypdf} \\
f(\bm{\theta},\bm{y}|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) &= f(\bm{\theta}|\bm{\mu}_{x|y},\bm{\Sigma}_{x|y}) \cdot \mathcal{N}_q(\bm{y}|\bm{\mu}_{y},\bm{\Sigma}_{yy}) \label{spnpdf}
\end{align}
The complete mathematical expressions for (\ref{rthetaypdf}) and (\ref{spnpdf}) are omitted here to avoid redundancy. The joint distribution of $\bm{\theta}$ and $\bm{y}$ given by (\ref{spnpdf}) is obtained by projecting a subset of the dimensions of a normally distributed variable onto a hypersphere, so we refer to it as the semi-projected normal distribution (SPN). It is worth noting that with $p=2$, the SPN is a special case of the joint projected normal and skew-normal distribution (JPNSN) introduced in \cite{mastrantonio2018joint}. Some properties of the JPNSN still hold for the SPN with $p>2$. For example, the joint distribution of $\bm{\theta}$ and any subset of $\bm{y}$ is still an SPN; marginally, $\bm{\theta}\sim\mathcal{PN}_p(\bm{\mu}_x,\bm{\Sigma}_{xx})$ and $\bm{y}\sim\mathcal{N}_q(\bm{\mu}_y,\bm{\Sigma}_{yy})$; and $\bm{\Sigma}_{xy}$ describes the directional-linear dependence, with $\bm{\theta}\perp\bm{y}$ if and only if $\bm{\Sigma}_{xy}=\bm{0}$.
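The conditional decomposition above translates directly into code. The following sketch (reusing pn_logpdf from the previous sketch; argument names are ours) evaluates the SPN log-density (\ref{spnpdf}) by conditioning $\bm{x}$ on $\bm{y}$:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def spn_logpdf(u, y, mu, Sigma, p):
    # Log of (spnpdf): conditional PN of u given y times the MVN marginal of y.
    mu_x, mu_y = mu[:p], mu[p:]
    Sxx, Sxy = Sigma[:p, :p], Sigma[:p, p:]
    Syx, Syy = Sigma[p:, :p], Sigma[p:, p:]
    Syy_inv = np.linalg.inv(Syy)
    mu_cond = mu_x + Sxy @ Syy_inv @ (y - mu_y)  # mu_{x|y}
    S_cond = Sxx - Sxy @ Syy_inv @ Syx           # Sigma_{x|y}
    return pn_logpdf(u, mu_cond, S_cond) + multivariate_normal.logpdf(y, mu_y, Syy)
\end{verbatim}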
The SPN is a very flexible distribution family, but the complexity of its density function makes it challenging to estimate the parameters via maximum likelihood or to sample from their posterior distribution. Previous studies \cite{wang2013directional,hernandez2017general,mastrantonio2018joint} exploit the close relationship between an MVN and the projected normal by augmenting the SPN with a draw of $r$ from its full conditional distribution and restoring a complete observation of $\bm{x}$ via the transformation (\ref{car2sph}). The posterior distribution of the MVN parameters conditional on $\bm{x}$ and $\bm{y}$ can then easily be sampled from. In the case of the SPN, the full conditional of $r$ can be derived from (\ref{rthetajoint}) and (\ref{rthetaypdf}):
\begin{align}
f(r|\bm{\theta},\bm{y},\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) \ \propto \ f(r,\bm{\theta},\bm{y}|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) \ \propto \
r^{p-1}\exp{\Bigl\{-\frac{1}{2}Q_3^*(r-\frac{Q_2^*}{Q_3^*})^2 \Bigl\}} \label{sampleR}
\end{align}
where $Q_2^* = \bm{\mu}^T_{x|y}\bm{\Sigma}_{x|y}^{-1}\bm{u}$ and $Q_3^* = \bm{u}^T\bm{\Sigma}_{x|y}^{-1}\bm{u}$. We modify a slice sampling strategy proposed in \cite{hernandez2017general} to sample from (\ref{sampleR}); details are provided in Appendix \ref{appA}.
\section{Dirichlet process mixture of semi-projected normal distributions}
\label{secdpmm}
Parametric models often fall short in real-world applications for which there is insufficient prior knowledge and data to justify the parametric assumptions, or for which the parametric model is inadequate to capture the complexity of the data. Nonparametric models support a more flexible and robust specification of distributions. In the field of Bayesian nonparametrics, the Dirichlet process mixture model (DPMM) is widely used due to its elegant mathematical structure and broad applicability. In this section, we build a DPMM using the SPN as the mixture density and develop an algorithm for Bayesian inference.
\subsection{The Dirichlet process mixture model}
The basic idea of a DPMM is that an unknown density $f(\bm{z})$ can be approximated by a sum of countably infinite densities:
\begin{align} \label{dedp}
f(\bm{z}) = \int f(\bm{z}|\bm{\phi})dG(\bm{\phi}) = \sum_{k=1}^\infty \pi_kf(\bm{z}|\bm{\phi}_k)
\end{align}
where $f(\bm{z}|\bm{\phi})$ is known as the mixture density with parameter $\bm{\phi}$, and $G$ is a discrete mixing distribution for $\bm{\phi}$ with $\pi_k$'s as probabilities. Consider a number of observations $\bm{z}_1,...,\bm{z}_n$ generated from $f(\bm{z})$. The data generation process can be expressed as follows:
\begin{align} \label{DPMM}
\begin{split}
\bm{z}_i &\sim f(\bm{z}|\bm{\phi}_i) \\
\bm{\phi}_i &\sim G \\
G &\sim DP(\alpha_0, G_0)
\end{split}
\end{align}
where $G$ is generated from a Dirichlet process \cite{ferguson1973bayesian} prior with base measure $G_0$ and concentration parameter $\alpha_0$. The Dirichlet process is a distribution on the family of distributions. With the hierarchical structure, conditional independence is implicitly assumed. For example, $\bm{z}_i$'s are independent of each other given the $\bm{\phi}_i$'s. Formulas (\ref{DPMM}) represent the most basic form of a DPMM. Additional structure can be added to the hierarchical model, e.g., putting priors on the concentration parameter $\alpha_0$ \cite{escobar1995bayesian} and base measure $G_0$ \cite{teh2006hierarchical}.
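To build intuition for the discreteness of $G$, a draw from $DP(\alpha_0, G_0)$ can be approximated by truncated stick-breaking, where $\pi_k = \beta_k\prod_{l<k}(1-\beta_l)$ with $\beta_k \sim Beta(1,\alpha_0)$. The short Python sketch below is illustrative only:
\begin{verbatim}
import numpy as np

def stick_breaking_weights(alpha0, K, rng):
    # Truncated stick-breaking: pi_k = beta_k * prod_{l<k} (1 - beta_l).
    betas = rng.beta(1.0, alpha0, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha0=1.0, K=50, rng=rng)  # sums to nearly 1
# Pairing each weight with an atom phi_k drawn from G_0 yields a draw of G.
\end{verbatim}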
An equivalent and more explicit representation of a DPMM is as the limit of a finite mixture model with the number of clusters $K$ going to infinity \cite{neal2000markov,ishwaran2002exact}:
\begin{align} \label{IMM}
\begin{split}
\bm{z}_i | c_i,\{\bm{\varphi}_k\}_{k=1}^K &\sim f(\bm{z}|\bm{\varphi}_{c_i}) \\
c_i | \bm{\pi} &\sim Discrete(\pi_1,...,\pi_K) \\
\bm{\varphi}_k &\sim G_0 \\
\bm{\pi} &\sim Dirichlet(\alpha_0/K,...,\alpha_0/K)
\end{split}
\end{align}
In this form, $\{c_i\}_{i=1}^n$ label the cluster assignments for the observations; they can in principle take any $K$ distinct values (integers 1 to $K$ are used here), and the parameter for observation $\bm{z}_i$ is $\bm{\phi}_i = \bm{\varphi}_{c_i}$. The probabilities $\bm{\pi}=(\pi_1,...,\pi_K)$ indicate how likely it is that a new observation will be assigned to each of the clusters. The two representations of the DPMM given in (\ref{DPMM}) and (\ref{IMM}) correspond to its two most popular applications: density estimation and clustering.
Bayesian inference for the DPMM mainly involves sampling from the posterior distribution of $\{\bm{\phi}_i\}_{i=1}^n$ and $\{c_i\}_{i=1}^n$ by simulating a Markov chain that reaches equilibrium at that distribution. \textit{Neal} \cite{neal2000markov} provides several Gibbs sampling algorithms for DPMMs. When $G_0$ is a conjugate prior distribution for $f(\bm{z}|\bm{\phi})$, the collapsed Gibbs sampler (\textbf{Algorithm 3} in \cite{neal2000markov}) has a better convergence rate than other sampling algorithms \cite{maceachern1994estimating}. The algorithm directly samples $\{c_i\}_{i=1}^n$ without updating $\{\bm{\phi}_i\}_{i=1}^n$. Each Gibbs sampling iteration consists of assigning each $\bm{z}_i$ to an existing cluster or a new one by evaluating the full conditional distribution of $c_i$ given all other assignments $c_{-i} = \{c_j\}_{j\neq i}$:
\begin{align} \label{cpost}
P(c_i=k|c_{-i}, \bm{z}_i)\ \propto
\begin{cases}
n_{-i,k}\int f(\bm{z}_i|\bm{\phi})dG_{-i,k}(\bm{\phi})\qquad &\text{if }k\text{ represents an existing cluster} \\
\alpha_0\int f(\bm{z}_i|\bm{\phi})dG_0(\bm{\phi}) \qquad &\text{if }k\text{ represents a new cluster}
\end{cases}
\end{align}
Here $n_{-i,k}$ is the number of $c_j$ for $j\neq i$ that are equal to $k$, and $G_{-i,k}$ is the posterior distribution of $\bm{\phi}$ based on $G_0$ and all observations $\bm{z}_j$ for which $j\neq i$ and $c_j = k$. Evaluation of the integrals in (\ref{cpost}) becomes much simpler when $G_0$ is a conjugate prior distribution for $f(\bm{z}|\bm{\phi})$. In that case, $G_{-i,k}$ will be in the same distribution family as $G_0$, and therefore all integrals can be viewed as marginal distributions of $\bm{z}_i$ given different prior parameters.
\subsection{Incorporating the semi-projected normal distribution}
Directly using the SPN as the mixture distribution $f(\bm{z}|\bm{\phi})$ in a DPMM can be challenging due to the complexity of its density function and the lack of a conjugate prior distribution. Instead, we choose to model the complete (augmented) data $\bm{z}=(\bm{x},\bm{y})^T$ with an MVN likelihood as shown in (\ref{partitionMVN}). In this case, since $\bm{\phi} = (\Tilde{\bm{\mu}}, \Tilde{\bm{\Sigma}})$, it is natural to use the normal inverse-Wishart distribution as a conjugate prior. However, as mentioned in Section \ref{SPN}, the covariance matrix $\Tilde{\bm{\Sigma}}$ needs to satisfy the identifiability constraint that at least one of its diagonal entries equals one. The inverse-Wishart distribution does not satisfy that constraint and hence cannot be directly applied. \textit{Hernandez-Stumpfhauser et al.} \cite{hernandez2017general} provided a reparametrization of the covariance matrix of the projected normal distribution that ensures its positive semi-definiteness and allows separate prior distributions on the constituent submatrices.
We propose using a conditional inverse-Wishart distribution to accommodate the constraint. Suppose $\Tilde{\bm{\Sigma}}$ follows an inverse-Wishart distribution $\mathcal{IW}(\bm{S},\nu)$. Partition $\Tilde{\bm{\Sigma}}$ and $\bm{S}$ conformably with each other:
\begin{align} \label{partitionS}
\Tilde{\bm{\Sigma}} = \begin{pmatrix}
\bm{\Sigma}_{11} & \bm{\Sigma}_{12} \\
\bm{\Sigma}_{21} & \bm{\Sigma}_{22}
\end{pmatrix},\quad
\bm{S} = \begin{pmatrix}
\bm{S}_{11} & \bm{S}_{12} \\
\bm{S}_{21} & \bm{S}_{22}
\end{pmatrix}
\end{align}
Here $\bm{\Sigma}_{ij}$ and $\bm{S}_{ij}$ are $d_i\times d_j$ matrices (with $d_1+d_2=d=p+q$ and $d_1\leq p$) and satisfy the following properties:
\begin{align} \label{CIWprop}
\begin{split}
&\text{(a) }\bm{\Sigma}_{11}\sim\mathcal{IW}(\bm{S}_{11}, \nu - d_2) \\
&\text{(b) }\bm{\Sigma}_{11} \text{ is independent of } \bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12} \text{ and } \bm{\Sigma}_{22\cdot1} \text{, where } \bm{\Sigma}_{22\cdot1}=\bm{\Sigma}_{22}-\bm{\Sigma}_{21}\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12} \\
&\text{(c) }\bm{vec}(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12})| \bm{\Sigma}_{22\cdot1} \sim \mathcal{N}_{d_1\times d_2}[\bm{vec}(\bm{S}_{11}^{-1}\bm{S}_{12}), \bm{\Sigma}_{22\cdot1} \otimes \bm{S}_{11}^{-1}] \\
&\text{(d) }\bm{\Sigma}_{22\cdot1} \sim \mathcal{IW}(\bm{S}_{22\cdot1}, \nu) \text{, where } \bm{S}_{22\cdot1}=\bm{S}_{22}-\bm{S}_{21}\bm{S}_{11}^{-1}\bm{S}_{12}
\end{split}
\end{align}
The operator $\bm{vec}(\cdot)$ vectorizes a matrix by stacking its columns on top of one another, and $\otimes$ is the Kronecker product. In our case, $d_1$ is at least one to ensure identifiability but can be larger (e.g., $d_1=p$). The proof of (\ref{CIWprop}) is provided in Appendix \ref{appB}. The properties listed above suggest a reparameterization of $\Tilde{\bm{\Sigma}}$ as $(\bm{\Sigma}_{11},\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12},\bm{\Sigma}_{22\cdot 1})$. We can derive the conditional distribution of $(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12},\bm{\Sigma}_{22\cdot 1}|\bm{\Sigma}_{11})$ using properties (b) - (d) as:
\begin{align} \label{CIWpdf}
\begin{split}
f(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12},\bm{\Sigma}_{22\cdot 1}|\bm{\Sigma}_{11}) &= f(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12},\bm{\Sigma}_{22\cdot 1}) \\
&= f(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12}|\bm{\Sigma}_{22\cdot 1})f(\bm{\Sigma}_{22\cdot 1}) \\
&= \mathcal{N}_{d_1\times d_2}[\bm{vec}(\bm{S}_{11}^{-1}\bm{S}_{12}), \bm{\Sigma}_{22\cdot1} \otimes \bm{S}_{11}^{-1}] \cdot \mathcal{IW}(\bm{S}_{22\cdot1}, \nu)
\end{split}
\end{align}
If $\bm{\Sigma}_{11}$ is fixed as a constant, equation (\ref{CIWpdf}) provides a distribution from which to sample the rest of $\Tilde{\bm{\Sigma}}$ and allows us to evaluate the likelihood of the sample. Moreover, a $\Tilde{\bm{\Sigma}}$ sampled from (\ref{CIWpdf}) is guaranteed to be positive definite whenever $\bm{\Sigma}_{11}$ is positive definite. We call this distribution the conditional inverse-Wishart (CIW). The CIW exactly meets our need to constrain part of the covariance matrix ($\bm{\Sigma}_{11}$). We can either set $\bm{\Sigma}_{11}$ equal to one ($d_1 = 1$) to satisfy the constraint (\ref{constraint}) and obtain a flexible distribution, or let $\bm{\Sigma}_{11} = \bm{\Sigma}_{xx} = \bm{I}_p$ ($d_1 = p$) so that the marginal of $\bm{\theta}$ is unimodal and symmetric about $\bm{\mu}_x$.
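Sampling from the CIW follows properties (c) and (d) directly: draw $\bm{\Sigma}_{22\cdot1}$, then draw $\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12}$ given it, and reassemble $\Tilde{\bm{\Sigma}}$. A minimal Python sketch (names are ours; valid positive definite inputs are assumed) is:
\begin{verbatim}
import numpy as np
from scipy.stats import invwishart

def sample_ciw(Sigma11, S, nu, rng):
    # Draw Sigma ~ CIW(S, nu) with the top-left d1 x d1 block fixed at Sigma11.
    d1 = Sigma11.shape[0]
    S11, S12, S22 = S[:d1, :d1], S[:d1, d1:], S[d1:, d1:]
    S11_inv = np.linalg.inv(S11)
    S22_1 = S22 - S[d1:, :d1] @ S11_inv @ S12
    Sig22_1 = np.atleast_2d(invwishart.rvs(df=nu, scale=S22_1, random_state=rng))
    d2 = S22.shape[0]
    mean = (S11_inv @ S12).flatten(order="F")  # vec() stacks columns
    cov = np.kron(Sig22_1, S11_inv)            # property (c)
    B = rng.multivariate_normal(mean, cov).reshape(d1, d2, order="F")
    Sigma12 = Sigma11 @ B                      # B = Sigma11^{-1} Sigma12
    Sigma22 = Sig22_1 + B.T @ Sigma11 @ B      # invert the Schur complement
    return np.block([[Sigma11, Sigma12], [Sigma12.T, Sigma22]])
\end{verbatim}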
The inverse-Wishart distribution is often used as a conjugate prior for a covariance matrix. Based on the reparameterization of $\Tilde{\bm{\Sigma}}$ above, the inverse-Wishart can be expressed as the product of a CIW distribution conditional on $\bm{\Sigma}_{11}$ and the marginal distribution of $\bm{\Sigma}_{11}$. Therefore, the CIW is also a conjugate prior for $\Tilde{\bm{\Sigma}}$ when $\bm{\Sigma}_{11}$ is fixed. Assume observations $\bm{z}_1,...,\bm{z}_n\overset {iid} \sim \mathcal{N}_d(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}})$ and the following priors on $(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}})$:
\begin{align}
\begin{split}
\Tilde{\bm{\mu}} | \Tilde{\bm{\Sigma}} &\sim \mathcal{N}_d(\bm{\mu}_0,\frac{1}{\lambda_0}\Tilde{\bm{\Sigma}}) \\
\Tilde{\bm{\Sigma}} &\sim \mathcal{CIW}(\bm{S}_0,\nu_0)
\end{split}
\end{align}
Let $\mathcal{NCIW}(\bm{\Psi}_0)$ denote the joint distribution formed by these priors where $\bm{\Psi}_0 = (\bm{\mu}_0,\lambda_0,\bm{S}_0,\nu_0)$ is the set of hyperparameters. Then the posterior distribution follows $\mathcal{NCIW}(\bm{\Psi}_n)$ with $\bm{\Psi}_n = (\bm{\mu}_n,\lambda_n,\bm{S}_n,\nu_n)$ defined as:
\begin{equation} \label{postpara}
\begin{gathered}
\bm{\mu}_n = \frac{\lambda_0\bm{\mu}_0+n\Bar{\bm{z}}}{\lambda_0+n} \\
\lambda_n = \lambda_0+n \\
\nu_n = \nu_0+n \\
\bm{S}_n = \bm{S}_0 + \sum_{i=1}^n\bm{z}_i\bm{z}_i^T - (\lambda_0+n)\bm{\mu}_n\bm{\mu}_n^T+\lambda_0\bm{\mu}_0\bm{\mu}_0^T
\end{gathered}
\end{equation}
where $\Bar{\bm{z}}$ is the sample mean. The conjugacy of the prior allows us to directly sample the cluster assignments $\{c_i\}_{i=1}^{n}$ from (\ref{cpost}). Here $G_0$ is $\mathcal{NCIW}(\bm{\Psi}_0)$ and $G_{-i,k}$ also follows an $\mathcal{NCIW}$ with parameters derived according to (\ref{postpara}) for cluster $k$. If we fix $\bm{\Sigma}_{11} = \bm{I}_{d_1}$, the marginal distribution for the data $\bm{z}_1,...,\bm{z}_n$ can be derived from the conjugacy and Bayes rule:
\begin{align} \label{datamarginal}
\begin{split}
f(\bm{z}_1,...,\bm{z}_n|\bm{\Psi}_0)
&= \frac{f(\bm{z}_1,...,\bm{z}_n|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}})\times f(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}|\bm{\Psi}_0)}{f(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}|\bm{z}_1,...,\bm{z}_n,\bm{\Psi}_0)} \Bigl|_{\Tilde{\bm{\mu}}=\bm{0},\Tilde{\bm{\Sigma}}=\bm{I}_d} \\
&= \frac{\prod_{i=1}^n \mathcal{N}_d(\bm{z}_i|\bm{0},\bm{I}_d)\cdot\mathcal{NCIW}(\bm{0},\bm{I}_d|\bm{\Psi}_0)}{\mathcal{NCIW}(\bm{0},\bm{I}_d|\bm{\Psi}_n)} \\
&=
\Bigl[2^{nd_1}\pi^{nd}(\frac{\lambda_n}{\lambda_0})^d\frac{|\bm{S}_n|^{\nu_n}}{|\bm{S}_0|^{\nu_0}}\frac{{|\bm{S}_n}_{11}|^{d_2-\nu_n}}{|{\bm{S}_0}_{11}|^{d_2-\nu_0}}\exp{\Bigl\{\bm{tr}({\bm{S}_n}_{11}-{\bm{S}_0}_{11})\Bigl\}}\Bigl]^{-\frac{1}{2}}\prod_{j=1}^{d_2}\frac{\Gamma(\frac{\nu_n+1-j}{2})}{\Gamma(\frac{\nu_0+1-j}{2})}
\end{split}
\end{align}
where ${\bm{S}_0}_{11}$ and ${\bm{S}_n}_{11}$ are submatrices of $\bm{S}_0$ and $\bm{S}_n$ partitioned according to (\ref{partitionS}), and $\Gamma(\cdot)$ is the gamma function. The integrals in (\ref{cpost}) are special cases of (\ref{datamarginal}) and hence can be directly calculated.
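For concreteness, the posterior update (\ref{postpara}) and the log of the marginal likelihood (\ref{datamarginal}) can be coded as follows. This is an illustrative Python sketch with $\bm{\Sigma}_{11}$ fixed to $\bm{I}_{d_1}$; our production implementation is in C++ (see Section \ref{Experiments}).
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def nciw_posterior(mu0, lam0, S0, nu0, Z):
    # Posterior NCIW parameters (postpara) for data rows Z of shape (n, d).
    n = Z.shape[0]
    zbar = Z.mean(axis=0)
    mu_n = (lam0 * mu0 + n * zbar) / (lam0 + n)
    S_n = (S0 + Z.T @ Z - (lam0 + n) * np.outer(mu_n, mu_n)
           + lam0 * np.outer(mu0, mu0))
    return mu_n, lam0 + n, S_n, nu0 + n

def log_marginal(mu0, lam0, S0, nu0, Z, d1):
    # Log of (datamarginal): marginal likelihood with Sigma_11 = I_{d1}.
    mu_n, lam_n, S_n, nu_n = nciw_posterior(mu0, lam0, S0, nu0, Z)
    n, d = Z.shape
    d2 = d - d1
    ld = lambda A: np.linalg.slogdet(A)[1]
    bracket = (n * d1 * np.log(2) + n * d * np.log(np.pi)
               + d * np.log(lam_n / lam0)
               + nu_n * ld(S_n) - nu0 * ld(S0)
               + (d2 - nu_n) * ld(S_n[:d1, :d1]) - (d2 - nu0) * ld(S0[:d1, :d1])
               + np.trace(S_n[:d1, :d1] - S0[:d1, :d1]))
    j = np.arange(1, d2 + 1)
    return -0.5 * bracket + np.sum(gammaln((nu_n + 1 - j) / 2)
                                   - gammaln((nu0 + 1 - j) / 2))
\end{verbatim}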
For the rest of the paper, we refer to our method using the acronym DPSPN to indicate the Dirichlet process semi-projected normal mixture model. Algorithm \ref{algorithm} provides the pseudocode to sample from the DPSPN using a Gibbs sampler. The initialization of $\{c_i\}_{i=1}^n$ and $\{r_i\}_{i=1}^n$ can incorporate prior knowledge and preprocessing results from other algorithms. In this study, we initialize the algorithm by randomly grouping the data into different clusters and sampling the radius of each observation from an exponential distribution with parameter 1.
\begin{algorithm}
\caption{Gibbs Sampler for the DPSPN}\label{algorithm}
\begin{algorithmic}
\State Random initialization of $\{c_i\}_{i=1}^n$, and $\{r_i\}_{i=1}^n$
\State $K = \#$ of clusters
\For{$iter = 1$ to $M$}
\State update $\{x_i\}_{i=1}^n$ with $\{r_i\}_{i=1}^n$ using (\ref{car2sph})
\For{$i = 1$ to $n$}
\State remove $\bm{z}_i$ from its current cluster $c_i$
\State update the posterior parameter $\bm{\Psi}$ of cluster $c_i$ using (\ref{postpara})
\State if the cluster is empty, remove it and decrease $K$
\For{$k = 1$ to $K$}
\State calculate $P(c_i=k|c_{-i},\bm{z}_i)\ \propto\ n_{-i,k}f(\bm{z}_i|\bm{\Psi}^k)$ using (\ref{datamarginal}) \Comment{$\bm{\Psi}^k$ is the hyperparameter for cluster $k$}
\EndFor
\State calculate $P(c_i=k^*|c_{-i},\bm{z}_i)\ \propto\ \alpha_0 f(\bm{z}_i|\bm{\Psi}_0)$ using (\ref{datamarginal}) \Comment{$k^*$ is a new cluster}
\State sample a new value for $c_i$ from $P(c_i|c_{-i},\bm{z}_i)$ after normalizing the above probabilities
\State add $\bm{z}_i$ to cluster $c_i$
\State update $\bm{\Psi}^{c_i}$ using (\ref{postpara})
\State if a new cluster is created (i.e., $c_i=k^*$ was selected), increase $K$
\EndFor
\For{$k = 1$ to $K$}
\State sample $(\Tilde{\bm{\mu}}^k,\Tilde{\bm{\Sigma}}^k)$ from $\mathcal{NCIW}(\bm{\Psi}^k)$
\EndFor
\For{$i = 1$ to $n$}
\State sample $r_i$ from $f(r|\bm{\theta}_i,\bm{y}_i,\Tilde{\bm{\mu}}^{c_i},\Tilde{\bm{\Sigma}}^{c_i})$ using (\ref{sampleR})
\EndFor
\State update the concentration parameter $\alpha_0$ (optional, see \cite{escobar1995bayesian})
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Clustering Experiments}
\label{Experiments}
We implemented the DPSPN in C++ by modifying a DPMM package \cite{Märtens2018} and posted the source code on GitHub (\url{https://github.com/zout3/DPSPN}). In this section, our model is tested in an experiment clustering synthetic data and in a real-world application to image segmentation, and its performance is compared with methods introduced in other studies. In all experiments, we use a non-informative proper hyperprior distribution by setting the hyperparameters as follows:
\begin{gather}\label{priorpara}
\bm{\mu}_0 = \bm{0}, \ \lambda_0 = 1, \ \nu_0 = d + 2,\ \bm{S}_0 = \bm{I}_d, \text{ and } \alpha_0 = 1
\end{gather}
\subsection{Synthetic data}
The synthetic data are generated from the finite mixture model defined in (\ref{IMM}). We use the MVN as the mixture density $f(\bm{z}|\bm{\phi})$ and the normal inverse-Wishart distribution as the base measure $G_0$. The hyperparameters of $G_0$ are given in (\ref{priorpara}). Directional-linear data can be obtained either by projecting the first $p$ dimensions onto $\mathbb{S}^{p-1}$, or by taking the first dimension modulo $2\pi$. The first case is exactly the SPN distribution. The second case only yields circular-linear data and is called the semi-wrapped Gaussian (SWG) in \cite{roy2016swgmm}. For the same sample space, the SWG has fewer parameters than the SPN and thus less flexibility; for example, the circular marginal of the SWG is the wrapped normal distribution, which is always unimodal and symmetric. Both approaches are applied here to simulate data with one circular dimension ($p=2$ for the SPN and $p=1$ for the SWG) and one linear dimension ($q=1$). Sample datasets are displayed in Figure \ref{SWGSPN}. The flexibility of the SPN can be observed in the asymmetric shape of the blue cluster and the bimodal shape of the red cluster.
\begin{figure}
\centering
\resizebox{1\textwidth}{!}{%
\includegraphics{img/SWGSPN.png}
}
\caption{Examples of mixture data of SPN (left) and SWG (right). Both plots contain 1000 data points with three clusters denoted in colors of blue, red and green.}
\label{SWGSPN}
\end{figure}
To gradually increase the data complexity, the number of clusters $K$ is varied from 2 to 8. For each $K$, datasets composed of 1000 data points are generated following the procedure described above. Since the cluster parameters are highly variable due to the non-informative prior, we analyze 100 datasets for each choice of $K$. We then fit the model to the simulated datasets independently and obtain an average estimate of model performance for each value of $K$. Regarding the covariance matrix constraint, the DPSPN is applied with both $\bm{\Sigma}_{11} = 1$ and $\bm{\Sigma}_{11} = \bm{I}_2$ to demonstrate the different degrees of flexibility provided.
For each simulated dataset, we run the Gibbs sampler to produce 4 Monte Carlo chains with different initializations. Each chain is iterated 6000 times and the first 5000 samples are discarded as burn-in, yielding 1000 draws from the posterior distribution. For convergence diagnosis, we compute the Gelman–Rubin statistic \cite{gelman1992inference,brooks1998general} based on the likelihood of the complete data given in (\ref{datamarginal}) over the 4000 posterior clusterings (1000 clusterings from each of the four chains), and obtain values of the statistic smaller than 1.2 for all datasets.
To simplify the evaluation, we adopt the SALSO algorithm proposed by \textit{Dahl et al.} \cite{dahl2022search} to produce a consensus clustering as a summary of the posterior distribution of data clusterings. Assume $\bm{C}_1,...,\bm{C}_{N}$ ($N=4000$) are the posterior clusterings obtained from Gibbs sampling for a given dataset. The consensus clustering $\bm{C}^*$ can be estimated as:
\begin{gather} \label{salso}
\bm{C}^* = \text{argmin}_C\sum_{i=1}^N\text{VoI}(\bm{C},\bm{C}_i)
\end{gather}
where $\text{VoI}(\cdot,\cdot)$ denotes the variation of information \cite{meilua2007comparing}
that measures the distance between two clusterings. More formally, the VoI of two clusterings $\bm{C}_1$ (with $K_1$ clusters) and $\bm{C}_2$ (with $K_2$ clusters) of $n$ observations is defined as the sum of their entropies $\text{H}(\bm{C}_1)$ and $\text{H}(\bm{C}_2)$ minus twice their mutual information $\text{MI}(\bm{C}_1,\bm{C}_2)$:
\begin{align} \label{voidef}
\begin{split}
\text{VoI}(\bm{C}_1,\bm{C}_2) &= \text{H}(\bm{C}_1) + \text{H}(\bm{C}_2) - 2\text{MI}(\bm{C}_1,\bm{C}_2) \\
&= \sum_{i=1}^{K_1}\frac{n_i}{n}\log_2(\frac{n}{n_i}) + \sum_{j=1}^{K_2}\frac{n_j}{n}\log_2(\frac{n}{n_j}) - 2\sum_{i=1}^{K_1}\sum_{j=1}^{K_2}\frac{n_{ij}}{n}\log_2(\frac{n_{ij}n}{n_in_j})
\end{split}
\end{align}
Here $n_i$ and $n_j$ are respectively the numbers of observations in cluster $i$ of $\bm{C}_1$ and in cluster $j$ of $\bm{C}_2$, and $n_{ij}$ is the number of observations in both cluster $i$ of $\bm{C}_1$ and cluster $j$ of $\bm{C}_2$.
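Computing the VoI from two label vectors amounts to tabulating the contingency counts $n_{ij}$; a short Python sketch is:
\begin{verbatim}
import numpy as np

def variation_of_information(c1, c2):
    # VoI between two clusterings given as integer label arrays of length n.
    n = len(c1)
    cont = np.array([[np.sum((c1 == a) & (c2 == b)) for b in np.unique(c2)]
                     for a in np.unique(c1)], dtype=float)  # counts n_ij
    p_i, p_j = cont.sum(axis=1) / n, cont.sum(axis=0) / n
    h1, h2 = -np.sum(p_i * np.log2(p_i)), -np.sum(p_j * np.log2(p_j))
    p_ij = cont / n
    mask = p_ij > 0
    mi = np.sum(p_ij[mask] * np.log2(p_ij[mask] / np.outer(p_i, p_j)[mask]))
    return h1 + h2 - 2.0 * mi
\end{verbatim}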
To evaluate the model performance on data clustering, the adjusted Rand index (ARI) \cite{hubert1985comparing} is applied to measure the discrepancy between the consensus clustering $\bm{C}^*$ given by the model and the ground truth (known for simulated data). The Rand index \cite{rand1971objective} is a measure of the similarity between two clusterings. Given a
set of $n$ elements, the Rand index between two clusterings $\bm{C}_1$ and $\bm{C}_2$ is computed as follows:
\begin{gather} \label{RIdef}
RI(\bm{C}_1,\bm{C}_2) = \frac{a+b}{\binom{n}{2}}
\end{gather}
where $a$ is the number of pairs of elements that are placed in the same cluster in $\bm{C}_1$ and in the same cluster in $\bm{C}_2$, and $b$ is the number of pairs placed in different clusters in $\bm{C}_1$ and in different clusters in $\bm{C}_2$. The ARI is a modified version that corrects the similarity measure for chance agreement under the permutation model \cite{xuan2010information}:
\begin{gather}
ARI(\bm{C}_1,\bm{C}_2) = \frac{RI(\bm{C}_1,\bm{C}_2)-\mathbb{E}[RI(\bm{C}_1,\bm{C}_2)]}{1-\mathbb{E}[RI(\bm{C}_1,\bm{C}_2)]}
\end{gather}
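In practice the ARI need not be coded by hand; scikit-learn, for example, provides an implementation (the toy labels below are placeholders):
\begin{verbatim}
from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 1, 1, 2, 2]       # ground-truth cluster labels
labels_consensus = [1, 1, 0, 0, 2, 2]  # same partition, permuted labels
print(adjusted_rand_score(labels_true, labels_consensus))  # prints 1.0
\end{verbatim}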
We compare our model with the SWGMM proposed in \cite{roy2016swgmm}. The SWGMM prespecifies the number of clusters $K$ and uses the SWG as the mixture distribution, and therefore can fit circular-linear data. To apply the SWGMM, the data are first preprocessed by a $K$-means clustering algorithm as an initialization; an EM algorithm is then iterated 100 times to update the model parameters. For our study, we apply the SWGMM using the true number of simulated clusters $K$. Figure \ref{ARIplot} shows the clustering results on the synthetic data. The overall ARI values are not very high because the random data-generating process does not always yield well-separated clusters, making the task more challenging for the models. For data generated from the SPN, the DPSPN is consistently better than the SWGMM, and the more flexible version of the DPSPN ($\bm{\Sigma}_{11} = 1$) is consistently better than the simpler version ($\bm{\Sigma}_{11} = \bm{I}_2$). For data generated from the SWG, the SWGMM performs better when the number of clusters is low. The differences among the three models become less significant as the number of clusters increases, and the DPSPN has a higher average ARI for five or more clusters. The results show that the DPSPN is a more flexible model than the SWGMM, yet it can also fit well to simpler data forms like those generated by the SWG.
\begin{figure}
\centering
\resizebox{1\textwidth}{!}{%
\includegraphics{img/ARIplot.png}
}
\caption{Clustering results from DPSPN and SWGMM on mixture data of SPN (left) and SWG (right) with different numbers of clusters. Each point is an average of ARI over 100 datasets and the error bar denotes one standard deviation above and below the average.}
\label{ARIplot}
\end{figure}
\subsection{Image segmentation}
Image segmentation has become increasingly important due to its applications in many computer vision tasks like object detection and recognition \cite{wang2007object} and in medical imaging \cite{pham2000survey,roy2017jclmm}. Image segmentation can be viewed as a clustering problem that involves partitioning the image into different groups of objects according to the color of each pixel. Besides the RGB (red, green, blue) color space, many other color spaces can be used to represent images. LUV is a color space in which Euclidean distance provides a perceptually uniform spacing of colors \cite{kato2006markov}, and several studies have adopted it for image segmentation \cite{shafarenko1997automatic,kato2006markov,mignotte2008segmentation}. A cylindrical representation of the LUV space transforms the $(U,V)$ plane to polar coordinates, treating the radial distance as chroma C and the angle as hue H. With the lightness L unchanged, this representation is known as the HCL (or LCH) space \cite{ihaka2003colour}. Since lightness and chroma are linear variables and hue is a circular variable, image segmentation in the HCL space is equivalent to clustering directional-linear data. Compared to using linear models (e.g., a Gaussian mixture model) to cluster in the LUV space, one advantage of using the DPSPN in the HCL space is that the shape of a cluster can be more variable than in the completely linear case, because the model has more parameters.
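The LUV-to-HCL conversion used before clustering is a simple polar change of coordinates in the $(U,V)$ plane; a minimal sketch is:
\begin{verbatim}
import numpy as np

def luv_to_hcl(L, U, V):
    # Polar transform of the (U, V) plane: chroma C >= 0, hue H in [0, 2*pi).
    C = np.hypot(U, V)
    H = np.mod(np.arctan2(V, U), 2.0 * np.pi)
    return L, C, H
\end{verbatim}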
For our experiment, we consider the Berkeley image database (BSD300) \cite{martin2001database}, which has been widely used to benchmark image segmentation algorithms. The data set contains 300 color images of size $481\times321$ pixels. Each image was presented to multiple human subjects who performed manual segmentation, and these manual segmentations are used as ground truth to evaluate the performance of a segmentation algorithm. A number of metrics are frequently adopted to assess the quality of an image segmentation. The probabilistic Rand index (PRI) \cite{unnikrishnan2005measure} is similar to the Rand index defined in (\ref{RIdef}) but instead estimates the probability that an arbitrary pair of pixels has consistent labels in two clusterings. The variation of information (VoI) \cite{meilua2007comparing}, as given in (\ref{voidef}), defines a distance between two clusterings based on mutual information and entropy; it measures the amount of randomness in one segmentation that cannot be explained by the other. The global consistency error (GCE) \cite{martin2001database} measures the degree to which one clustering can be considered a refinement of the other. An issue with the GCE is that it does not penalize oversegmentation (each pixel having its own cluster achieves zero error). The boundary displacement error (BDE) \cite{freixenet2002yet} measures the average displacement error of boundary pixels between two segmented images. More precisely, the error of one boundary pixel is defined as the distance between the pixel and the closest pixel in the
other boundary image. For all metrics except the PRI, smaller values indicate better performance. As noted in \cite{yang2008unsupervised}, evaluating clustering performance using the PRI and VoI seems to correspond best with human visual perception. \textit{Roy et al.} \cite{roy2016swgmm} applied several clustering algorithms to the BSD300 and reported benchmark performance based on the metrics discussed above. The algorithms compared include several mixture models with circular-linear distributions and hence are appropriate comparators for the DPSPN.
We convert the images of the BSD300 into the LCH color space and apply the DPSPN to each image. For each image, we run 4 chains of Gibbs sampling, with each sampler iterated 6000 times. The first 5000 iterations are discarded as burn-in and the remaining 1000 are thinned by keeping every 4th iteration. The computed Gelman–Rubin statistics are below 1.2 for each image. Given the resulting 1000 posterior clusterings, the SALSO algorithm \cite{dahl2022search} produces a posterior consensus clustering of the image. Figure \ref{imgseg} gives a few examples of images and their DPSPN segmentations. Oversegmentation can be observed in the background of some images. This is because our approach deliberately does not use any spatial information, so that its segmentation performance can be compared with that of the other circular-linear models reported in \cite{roy2016swgmm}. Another potential cause is the label switching problem \cite{stephens2000dealing} that arises in Bayesian inference; the SALSO algorithm can partially reduce the label switching effect by averaging over multiple posterior clusterings.
\begin{figure}
\centering
\resizebox{1\textwidth}{!}{%
\includegraphics{img/imgseg.png}
}
\caption{Examples of image segmentation by the DPSPN. Original images are shown in the top row. The corresponding segmented images are shown in the bottom row.}
\label{imgseg}
\end{figure}
The four metrics are computed to quantitatively evaluate the results. Note that every image has several different human segmentations as potential ground truth; the metrics are averaged across the multiple comparisons for each image. Table \ref{imgsegrslt} shows the mean value of the metrics over 300 images obtained from the DPSPN along with results of the other models reported in \cite{roy2016swgmm}. The GMM and BMM \cite{roy2007beta} are mixture models of MVN and multivariate beta distributions applied in the LUV space. The IvMGMM and IvMBMM \cite{roy2012mixture} are mixture models of von Mises Gaussian and von Mises beta distributions applied in the LCH space. The DMM \cite{boutemedjet2008hybrid} is a mixture model of generalized Dirichlet distributions applied in the RGB space. The DPSPN outperforms the other models in terms of PRI, VoI and BDE, and has the second lowest GCE score. These results demonstrate the flexibility and excellent clustering provided by the DPSPN.
\begin{table}[ht]
\centering
\begin{threeparttable}
\caption{Evaluation metrics of image segmentation on the BSD300 for different models}
\label{imgsegrslt}
\begin{tabular}{ccccc}
\toprule
Models & PRI & VoI & GCE & BDE \\
\midrule
DPSPN & \textbf{0.7287} & \textbf{2.5881} & 0.3324 & \textbf{14.6192} \\
SWGMM$^*$ & 0.7223 & 2.6998 & 0.3486 & 15.2806 \\
GMM$^*$ & 0.7040 & 2.8786 & 0.3608 & 15.9192 \\
BMM$^*$ & 0.7014 & 2.8725 & 0.3688 & 15.8855 \\
DMM$^*$ & 0.6302 & 2.8232 & \textbf{0.3241} & 17.0081 \\
IvMGMM$^*$ & 0.7058 & 2.9117 & 0.3773 & 15.9616 \\
IvMBMM$^*$ & 0.6494 & 2.9763 & 0.3616 & 20.4416 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item $^*$ Results are obtained from \cite{roy2016swgmm}.
\end{tablenotes}
\end{threeparttable}
\end{table}
\section{Bloodstain pattern analysis}
\label{BPA}
The development of the DPSPN was motivated by the desire to provide improved methods for the analysis of bloodstain pattern evidence found at crime scenes. A bloodstain pattern is a collection of stains observed at a crime scene. The main objective of bloodstain pattern analysis (BPA) is to determine the causal mechanism behind the bloodletting event \cite{damelio2001bloodstain}. By analyzing the shapes, sizes, orientations and locations of bloodstains along with other information, BPA experts develop hypotheses about how the event may have happened. Recent studies \cite{national2009strengthening,hicklin2021accuracy} have noted the subjectivity of this approach and spurred research on alternatives. Some work has been done on developing quantitative methods to assess hypotheses regarding the cause of bloodstain patterns \cite{arthur2018automated,liu2020automatic,zou2022towards}. In these studies, the bloodstains are first approximated by ellipses, and features are then designed based on the parameters of the ellipses for further analysis. \textit{Arthur et al.} \cite{arthur2018automated} and \textit{Liu et al.} \cite{liu2020automatic} frame the question as a classification problem between two specified mechanisms. \textit{Zou et al.} \cite{zou2022towards} proposed the use of the likelihood ratio (LR) to measure the strength of the evidence supporting one hypothesis against another. Given a bloodstain pattern $\bm{p}$, let $H_1$ and $H_2$ denote two competing hypotheses regarding the bloodletting mechanism. The LR of evidence $\bm{p}$ regarding the two hypotheses can be written as:
\begin{gather} \label{LRdef}
LR = \frac{f(\bm{p}|H_1)}{f(\bm{p}|H_2)}
\end{gather}
where $f(\bm{p}|H)$ is the likelihood of pattern $\bm{p}$ assuming $H$ is the true causal mechanism. The LR approach can be generalized to consider multiple hypotheses. In \cite{zou2022towards} the likelihood of a bloodstain pattern is approximated by the likelihood of a small number of features. However, a limitation of all feature-based approaches is the inevitable loss of information: the distribution of the bloodstains (ellipses) in the pattern is reduced to a few summary features, which may discard information useful for distinguishing between hypotheses. In addition, the features are case-dependent and often need to be redesigned for different scenarios.
We consider a different approach to estimating the likelihood of a bloodstain pattern. A bloodstain approximated by an ellipse can be represented by five parameters $\bm{e} =(\theta,y_1,y_2,y_3,y_4)$, where $\theta$ is the angle between the x-axis and the major axis of the ellipse, and the linear component $\bm{y}=(y_1,y_2,y_3,y_4)$ comprises the center coordinates $(y_1,y_2)$ of the ellipse relative to the center of the pattern and the radii $(y_3,y_4)$ of the major and minor axes. Then we can view a bloodstain pattern $\bm{p} = (\bm{e}_1,...,\bm{e}_n)$ as a collection of quintuples. Assuming these quintuples are independent and identically distributed from a five-dimensional density $f_{\bm{p}}(\bm{e})$, the likelihood of $\bm{p}$ is $\prod_{i=1}^n f_{\bm{p}}(\bm{e}_i)$. Obtaining the likelihood of a pattern therefore requires estimating $f_{\bm{p}}(\bm{e})$. Since the orientation $\theta$ of an ellipse is a circular variable and the components of $\bm{y}$ are all linear variables, the DPSPN can be used in this application. Here we apply the density estimation perspective of the DPMM as shown in the model specification (\ref{dedp}).
\begin{figure}[ht]
\centering
\resizebox{0.6\textwidth}{!}{%
\includegraphics{img/bloodpattern.png}
}
\caption{Examples of impact patterns (the first row) and expiration patterns (the second row).}
\label{bloodpattern}
\end{figure}
Two sets of bloodstain pattern images provided by the \textit{Institute of Environmental Science and Research, New Zealand} are used for this experiment. All patterns were generated in the laboratory with swine blood and collected on a vertical cardboard sheet. One set contains 172 impact patterns that were created by releasing a metal cylinder at some height above a blood pool, which simulates stepping into a puddle of blood. The other set contains 112 expiration patterns created by researchers coughing, speaking, shouting and spitting blood onto the target board. All patterns are scanned into image format at a resolution of 300 dpi. Figure \ref{bloodpattern} shows some examples of the bloodstain patterns. We applied the technique of \textit{Zou et al.} \cite{zou2021recognition} to represent each pattern $\bm{p}_j$ with a collection of ellipses $(\bm{e}_{j1},...,\bm{e}_{jn_j})$. Differentiating impact patterns from expiration patterns can be challenging even for BPA experts, as shown by the recent black box study \cite{hicklin2021accuracy}. Based on the available data, we set $H_1$ and $H_2$ regarding a bloodstain pattern as follows:
\begin{gather*}
H_1:\text{The pattern is caused by impact.} \qquad vs \qquad
H_2:\text{The pattern is caused by expiration.}
\end{gather*}
Our strategy is to build a data-driven model that can be trained on a set of patterns with known causal mechanism. The DPSPN can only estimate the density function $f_{\bm{p}_j}(\bm{e})$ of a single bloodstain pattern $\bm{p}_j$ at a time. To address variation among patterns from the same mechanism, we extend the DPMM to a hierarchical Dirichlet process (HDP) \cite{teh2006hierarchical}. The HDP has been successfully applied in many settings involving grouped data, for example, modeling topics within documents composed of words. For BPA, each pattern is analogous to a document and each bloodstain (ellipse) is analogous to a word. The HDP allows for the analysis of multiple sets of data (patterns) by putting a Dirichlet process prior on the base measure.
Consider a number of bloodstain patterns $\bm{p}_1,...,\bm{p}_N$ that share the same bloodletting mechanism $M$ (e.g., impact), where each pattern $\bm{p}_j = (\bm{e}_{j1},...,\bm{e}_{jn_j})$ is represented by a number of ellipses. Assuming the ellipse quintuple follows the SPN distribution, then the HDP model can be expressed by the following formulas:
\begin{align} \label{HDPMM}
\begin{split}
\bm{e}_{ji} &\sim \mathcal{SPN}(\Tilde{\bm{\mu}}_{ji},\Tilde{\bm{\Sigma}}_{ji}) \\
\Tilde{\bm{\mu}}_{ji},\Tilde{\bm{\Sigma}}_{ji} &\sim G_j \\
G_j &\sim DP(\alpha_M,G_M) \\
G_M &\sim DP(\alpha_0,G_0) \\
\alpha_M &\sim Gamma(a, b)
\end{split}
\end{align}
From a generative point of view, the discrete measure $G_j$ governs the distribution $f_{\bm{p}_j}(\bm{e})$ that generates the ellipses in bloodstain pattern $\bm{p}_j$, and thus $G_j$ can be viewed as an abstraction of pattern $\bm{p}_j$. Moreover, the measure $G_M$ and concentration parameter $\alpha_M$ govern the distribution of all the $G_j$'s, so they can be viewed as an abstraction of the bloodletting mechanism $M$. The model in (\ref{HDPMM}) can be implemented by converting Algorithm \ref{algorithm} to the HDP sampling algorithm given in \cite{teh2006hierarchical} (details are provided in Appendix \ref{AppC}). If we train the model (\ref{HDPMM}) with representative bloodstain patterns caused by mechanism $M$, then the likelihood of a new pattern $\bm{p} = (\bm{e}_1,...,\bm{e}_n)$ under the hypothesis $H_M$ that it is caused by $M$ can be estimated as follows:
\begin{align} \label{HDPDE}
\begin{split}
f(\bm{p}|H_M) = f(\bm{p}|\Hat{\alpha}_M,\Hat{G}_M) &= \int f(\bm{p}|G)dDP(G|\Hat{\alpha}_M,\Hat{G}_M) \\
&= \int \Bigl\{\prod_{i=1}^{n} \int \mathcal{SPN}(\bm{e}_i|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}})dG(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) \Bigl\} dDP(G|\Hat{\alpha}_M,\Hat{G}_M)
\end{split}
\end{align}
where $\Hat{\alpha}_M$ and $\Hat{G}_M$ are the posterior mean estimates of $\alpha_M$ and $G_M$ conditional on $\bm{p}_1,...,\bm{p}_N$, and $DP(\cdot|\alpha,G)$ denotes a Dirichlet process measure. The evaluation of the marginal likelihood in (\ref{HDPDE}), including the integral over a Dirichlet process, is not straightforward. However, the fact that $G_M$ is sampled from a Dirichlet process, and is thus a discrete distribution, makes it possible to estimate the marginal likelihood. Details of evaluating (\ref{HDPDE}) are provided in Appendix \ref{AppC}. It is worth noting that \textit{Basu and Chib} \cite{basu2003marginal} proposed a sequential importance sampling method to estimate the marginal likelihood of the data in a DPMM, where the base measure can be a continuous distribution.
\begin{figure}[t]
\centering
\resizebox{\textwidth}{!}{%
\includegraphics{img/LRplot.png}
}
\caption{Scatter plot of the log LR and number of ellipses for impact patterns (blue) and expiration patterns (red).}
\label{LRplot}
\end{figure}
We fit separate HDP models for the two mechanisms for which we have data using 60\% of the bloodstain patterns from each set (103 impact patterns and 69 expiration patterns). A gamma prior is put on the concentration parameter $\alpha_M$ with hyperparameters $a=b=1$. The remaining 69 impact patterns and 43 expiration patterns are used to test the performance of the model. For each pattern, its likelihoods under $H_1$ and $H_2$ are calculated via (\ref{HDPDE}) and are used to compute the likelihood ratio defined in (\ref{LRdef}). Figure \ref{LRplot} shows the results; the LR for each test pattern is plotted against the number of ellipses extracted from that pattern. The LRs are greater than one for all impact patterns and less than one for all expiration patterns. If we use one as the threshold for classifying patterns based on the LR, then all test patterns are correctly classified. The magnitude of the log LR (the strength of the evidence) is strongly correlated with the number of ellipses (and thus with the number of bloodstains) because the likelihood of a pattern defined in (\ref{HDPDE}) involves the product of the likelihoods of all ellipses. The correlation is intuitive in that the LR, as a measure of strength of evidence, is related to the amount of information that the evidence provides. However, the magnitude of the log LR obtained for many of the patterns is much larger than might be expected given the uncertainty associated with decisions made by BPA analysts \cite{hicklin2021accuracy}. This likely stems from the lack of diverse and representative impact and expiration patterns: all bloodstain patterns were created in the laboratory with only a few conditions varied, so the model may be learning attributes that are irrelevant to the mechanism. Future work can focus on model calibration and on collecting more data to produce more easily interpreted LRs.
\section{Discussion}
\label{Discussion}
In this work we proposed a highly flexible Bayesian nonparametric model to characterize the dependence between linear variables and a directional variable of arbitrary dimension. The multivariate normal distribution was adapted to directional-linear data by projecting a subset of its dimensions onto the unit hypersphere. A Dirichlet process mixture model incorporating the semi-projected normal was then designed to account for more complex data distributions. A conjugate prior based on the conditional inverse-Wishart was proposed to resolve the identifiability issue raised by the projected normal. Both the clustering and density estimation perspectives were exploited in our experiments.
Future work can focus on more efficient algorithms for posterior inference such as variational methods \cite{blei2006variational, wang2011online}. Another possible direction to extend our approach is to consider modeling the joint distribution of multiple directional variables and linear variables. This requires a more flexible structure for the covariance matrix of the augmented MVN to make the model identifiable, and will also involve developing an appropriate prior distribution.
\begin{appendices}
\section{Sampling the radius \texorpdfstring{$r$}{r}}
\label{appA}
The full conditional distribution of $r$ given in (\ref{sampleR}) has the following form:
\begin{align}\label{rmarg}
f(r) \ \propto \ r^{p-1}\exp{\Bigl\{-\frac{1}{2}Q_3^*(r-\frac{Q_2^*}{Q_3^*})^2\Bigl\}}
\end{align}
\textit{Hernandez-Stumpfhauser et al.} \cite{hernandez2017general} proposed a method to sample $r$ by introducing a latent variable $v$ that has joint density with $r$ given by:
\begin{align}\label{vrjoint}
f(r,v) \ \propto \ r^{p-1}I_{\Bigl(0,\exp{\Bigl\{-\frac{1}{2}Q_3^*(r-\frac{Q_2^*}{Q_3^*})^2\Bigl\}}\Bigl)}(v)I_{(0,\infty)}(r)
\end{align}
Integrating (\ref{vrjoint}) with respect to $v$ yields the marginal distribution of $r$ in (\ref{rmarg}). We can derive the conditional distribution of $v$ and $r$ from (\ref{vrjoint}) and conduct Gibbs sampling.
The conditional distribution of $v$ given $r$ is a uniform distribution:
\begin{align} \label{v_r}
v|r \sim \mathcal{U}\Bigl(0,\exp{\Bigl\{-\frac{1}{2}Q_3^*(r-\frac{Q_2^*}{Q_3^*})^2\Bigl\}}\Bigl)
\end{align}
And the conditional distribution of $r$ given $v$ is:
\begin{align*}
f(r|v) \ \propto \ r^{p-1}I_{\Bigl(\frac{Q_2^*}{Q_3^*}+\max\Bigl\{-\frac{Q_2^*}{Q_3^*},-\sqrt{\frac{-2\ln{v}}{Q_3^*}}\Bigl\},\frac{Q_2^*}{Q_3^*}+\sqrt{\frac{-2\ln{v}}{Q_3^*}}\Bigl)}(r)
\end{align*}
By using the inverse cumulative distribution function technique we get
\begin{gather} \label{computeR}
r=[(\eta^p_2-\eta^p_1)w+\eta^p_1]^\frac{1}{p}
\end{gather}
where
\begin{gather*}
w \sim \mathcal{U}(0,1), \quad \eta_1 = \frac{Q_2^*}{Q_3^*}+\max\Bigl\{-\frac{Q_2^*}{Q_3^*},-\sqrt{\frac{-2\ln{v}}{Q_3^*}}\Bigl\}, \quad \eta_2 = \frac{Q_2^*}{Q_3^*}+\sqrt{\frac{-2\ln{v}}{Q_3^*}}
\end{gather*}
One issue with this method is that when the difference between $r$ and $\frac{Q_2^*}{Q_3^*}$ is large, underflow of $v$ may occur and lead to overflow of $r$. We suggest directly sampling $\ln{v}$ using the inverse cumulative distribution function technique to avoid this issue. Instead of sampling $v$ from (\ref{v_r}), sample $s \sim \mathcal{U}(0,1)$ and compute $\ln{v}$ as:
\begin{gather*}
\ln{v}=\ln{s}-\frac{1}{2}Q_3^*(r-\frac{Q_2^*}{Q_3^*})^2
\end{gather*}
and compute $r$ via (\ref{computeR}).
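Putting the pieces together, one slice-sampling update of $r$ that works with $\ln{v}$ throughout can be sketched as follows (Q2s and Q3s denote $Q_2^*$ and $Q_3^*$; the names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def sample_radius(r, Q2s, Q3s, p, rng):
    # One slice-sampling update of r, operating on ln(v) to avoid underflow.
    m = Q2s / Q3s
    ln_v = np.log(rng.uniform()) - 0.5 * Q3s * (r - m)**2
    half_width = np.sqrt(-2.0 * ln_v / Q3s)
    eta1 = m + max(-m, -half_width)
    eta2 = m + half_width
    w = rng.uniform()
    return ((eta2**p - eta1**p) * w + eta1**p) ** (1.0 / p)
\end{verbatim}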
\section{Properties of the inverse-Wishart distribution}
\label{appB}
To prove that the partitioned inverse-Wishart distribution has the properties given in (\ref{CIWprop}), let matrix $\bm{\Gamma}$ follow the Wishart distribution $\mathcal{W}(\bm{R},\nu)$ and partition $\bm{\Gamma}$ and $\bm{R}$ as:
\begin{align*}
\bm{\Gamma} = \begin{pmatrix}
\bm{\Gamma}_{11} & \bm{\Gamma}_{12} \\
\bm{\Gamma}_{21} & \bm{\Gamma}_{22}
\end{pmatrix},\quad
\bm{R} = \begin{pmatrix}
\bm{R}_{11} & \bm{R}_{12} \\
\bm{R}_{21} & \bm{R}_{22}
\end{pmatrix}
\end{align*}
Here $\bm{\Gamma}$ and $\bm{R}$ are $d\times d$ matrices, and $\bm{\Gamma}_{ij}$ and $\bm{R}_{ij}$ are $d_i\times d_j$ matrices ($d_1+d_2=d$). Denote $\bm{\Gamma}_{11\cdot2}=\bm{\Gamma}_{11}-\bm{\Gamma}_{12}\bm{\Gamma}_{22}^{-1}\bm{\Gamma}_{21}$ and $\bm{R}_{11\cdot2}=\bm{R}_{11}-\bm{R}_{12}\bm{R}_{22}^{-1}\bm{R}_{21}$, then according to \textbf{Theorem 3.3.9} in \cite{gupta1999matrix}:
\begin{align*}
&\text{(i) }\bm{\Gamma}_{22}\sim\mathcal{W}(\bm{R}_{22}, \nu) \\
&\text{(ii) }\bm{\Gamma}_{11\cdot2} \sim \mathcal{W}(\bm{R}_{11\cdot2}, \nu - d_2) \\
&\text{(iii) }\bm{\Gamma}_{11\cdot2} \text{ and } (\bm{\Gamma}_{12}, \bm{\Gamma}_{22}) \text{ are independent} \\
&\text{(iv) }\bm{vec}(\bm{\Gamma}_{12}) | \bm{\Gamma}_{22} \sim \mathcal{N}_{d_1\times d_2}[\bm{vec}(\bm{R}_{12}\bm{R}_{22}^{-1}\bm{\Gamma}_{22}), \bm{\Gamma}_{22} \otimes \bm{R}_{11\cdot2}]
\end{align*}
Property (iv) is slightly different from the version in the book, because here we directly use $\bm{vec}(\cdot)$ on the matrix to denote the matrix-variate normal distribution while the book uses the vectorization of the transpose of the matrix.
Let $\Tilde{\bm{\Sigma}}=\bm{\Gamma}^{-1}$ and $\bm{S}=\bm{R}^{-1}$, then by definition $\Tilde{\bm{\Sigma}} \sim \mathcal{IW}(\bm{S}, \nu)$. According to (\ref{partitionS}) and the properties of the inverse of a partitioned matrix we have the following relations:
\begin{align*}
\bm{\Sigma}_{11} &= \bm{\Gamma}_{11\cdot2}^{-1} \\ \bm{S}_{11} &= \bm{R}_{11\cdot2}^{-1} \\
\bm{\Sigma}_{22\cdot1} &= \bm{\Gamma}_{22}^{-1}\\
\bm{S}_{22\cdot1} &= \bm{R}_{22}^{-1} \\
\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12} &= -\bm{\Gamma}_{12}\bm{\Gamma}_{22}^{-1} \\
\bm{S}_{11}^{-1}\bm{S}_{12} &= -\bm{R}_{12}\bm{R}_{22}^{-1}
\end{align*}
Considering the first four equations, properties (a) and (d) in (\ref{CIWprop}) are equivalent to properties (ii) and (i) above.
Because $\bm{\Sigma}_{11}$ can be derived from $\bm{\Gamma}_{11\cdot2}$, and $(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12}, \bm{\Sigma}_{22\cdot1})$ can be derived from $(\bm{\Gamma}_{12}, \bm{\Gamma}_{22})$, then according to property (iii), $\bm{\Sigma}_{11}$ is independent of $(\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12}, \bm{\Sigma}_{22\cdot1})$. Hence, property (b) is true.
From the relations above we can rewrite property (iv) in terms of $\bm{\Sigma}_{ij}$ and $\bm{S}_{ij}$ as
\begin{gather*}
\bm{vec}(-\bm{\Sigma}_{11}^{-1}\bm{\Sigma}_{12}\bm{\Sigma}_{22\cdot1}^{-1}) | \bm{\Sigma}_{22\cdot1} \sim \mathcal{N}_{d_1\times d_2}[\bm{vec}(-\bm{S}_{11}^{-1}\bm{S}_{12}\bm{\Sigma}_{22\cdot1}^{-1}), \bm{\Sigma}_{22\cdot1}^{-1} \otimes \bm{S}_{11}^{-1}]
\end{gather*}
Then according to \textbf{Theorem 2.3.10} in \cite{gupta1999matrix} (again the notation is slightly different due to vectorization), property (c) can be derived by right multiplying by $-\bm{\Sigma}_{22\cdot1}$.
\section{Implementation of the hierarchical DPSPN}
\label{AppC}
Our goal in fitting a hierarchical DPSPN is to obtain estimates of the measure $G_M$ and the concentration parameter $\alpha_M$ in (\ref{HDPMM}) that characterize the bloodstain pattern generation mechanism $M$. We can then use them in (\ref{HDPDE}) to evaluate the likelihood of a new pattern assuming it is generated by mechanism $M$. \textit{Teh et al.} \cite{teh2006hierarchical} proposed a direct assignment algorithm that can be applied to sample $G_M$ from the posterior distribution by expressing $G_M$ in the stick-breaking representation:
\begin{gather}
G_M = \sum_{k=1}^K \beta_k\delta_{\bm{\varphi}_k} + \beta_uG_u
\end{gather}
Here $\{\beta_k\}_{k=1}^K$ are the mixture probabilities and $\bm{\varphi}_k=(\Tilde{\bm{\mu}}_k,\Tilde{\bm{\Sigma}}_k)$ are the mixture parameters, where $\delta_{\bm{\varphi}_k}$ denotes a Dirac delta distribution at $\bm{\varphi}_k$. $\beta_u$ is the probability of creating a new cluster and $G_u$ is a measure sampled from $DP(\alpha_0,G_0)$. The algorithm starts by sampling the cluster assignment for each observation as follows:
\begin{align} \label{samplecji}
P(c_{ji}=k|c_{-ji},\bm{z}_{ji})\ \propto
\begin{cases}
(n_{-i,k}^j+\alpha_M\beta_k) f(\bm{z}_{ji}|\bm{\Psi}^k)\qquad &\text{if }k\text{ represents an existing cluster} \\
\alpha_M\beta_uf(\bm{z}_{ji}|\bm{\Psi}_0) \qquad &\text{if }k\text{ represents a new cluster}
\end{cases}
\end{align}
where $n_{-i,k}^j$ is the number of ellipses assigned to cluster $k$ in pattern $j$ excluding ellipse $i$. The likelihood function $f(\bm{z}|\bm{\Psi})$ can be evaluated via (\ref{datamarginal}). Next, an intermediate variable $m_{jk}$ is sampled based on the following distribution:
\begin{align} \label{samplemjk}
P(m_{jk}=m|\bm{c},\bm{\beta})\ \propto\ s(n_k^j,m)(\alpha_M\beta_k)^m, \quad m=1,...,n_k^j
\end{align}
where $n_k^j$ is the number of ellipses assigned to cluster $k$ in pattern $j$ and $s(n,m)$ is the unsigned Stirling number of the first kind. From the perspective of the Chinese restaurant process \cite{aldous1985exchangeability}, $m_{jk}$ denotes the number of tables assigned to cluster $k$ in restaurant $j$, and its conditional distribution given in (\ref{samplemjk}) was proved by \textit{Antoniak} \cite{antoniak1974mixtures}. Finally, we can sample $(\beta_1,...,\beta_K,\beta_u)$ from a Dirichlet distribution
\begin{align} \label{samplebeta}
(\beta_1,...,\beta_K,\beta_u)\sim Dir(m_{\cdot 1},...,m_{\cdot K},\alpha_0)
\end{align}
where $m_{\cdot k} = \sum_{j=1}^Jm_{jk}$ is the number of tables assigned to cluster $k$. Algorithm \ref{algorithm2} provides the pseudocode to sample from the hierarchical DPSPN.
\begin{algorithm}
\caption{Gibbs Sampler for the hierarchical DPSPN}\label{algorithm2}
\begin{algorithmic}
\State Random initialization of $\{c_{ji}\}_{i=1,j=1}^{n_j,J}$ and $\{r_{ji}\}_{i=1,j=1}^{n_j,J}$
\State $K = \#$ of clusters
\For{$iter = 1$ to $M$}
\State update $\{x_{ji}\}_{i=1,j=1}^{n_j,J}$ with $\{r_{ji}\}_{i=1,j=1}^{n_j,J}$ using (\ref{car2sph}) for $j=1,...,J$
\For{$j = 1$ to $J$ and $i = 1$ to $n_j$}
\State remove $z_{ji}$ from its current cluster $c_{ji}$
\State update the posterior parameter $\bm{\Psi}$ of cluster $c_{ji}$ using (\ref{postpara})
\State if the cluster is empty, remove it and decrease $K$
\State sample a new value for $c_{ji}$ from $P(c_{ji}|c_{-ji},\bm{z}_{ji})$ according to (\ref{samplecji})
\State add $\bm{z}_{ji}$ to cluster $c_{ji}$
\State update $\bm{\Psi}^{c_{ji}}$ using (\ref{postpara})
\State if a new cluster is created, increase $K$
\EndFor
\For{$j = 1$ to $J$ and $k = 1$ to $K$}
\State sample $m_{jk}$ according to (\ref{samplemjk})
\EndFor
\State sample $(\beta_1,...,\beta_K,\beta_u)$ according to (\ref{samplebeta})
\For{$k = 1$ to $K$}
\State sample $\bm{\varphi}_k=(\Tilde{\bm{\mu}}^k,\Tilde{\bm{\Sigma}}^k)$ from $\mathcal{NCIW}(\bm{\Psi}^k)$
\EndFor
\For{$j = 1$ to $J$ and $i = 1$ to $n_j$}
\State sample $r_{ji}$ from $f(r|\bm{\theta}_{ji},\bm{y}_{ji},\Tilde{\bm{\mu}}^{c_{ji}},\Tilde{\bm{\Sigma}}^{c_{ji}})$ using (\ref{sampleR})
\EndFor
\State update the concentration parameter $\alpha_M$ (optional, see \cite{teh2006hierarchical})
\EndFor
\end{algorithmic}
\end{algorithm}
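For concreteness, the following Python-style sketch implements one sweep of Algorithm \ref{algorithm2}. It is a minimal illustration rather than the implementation used in our analysis: the helper functions \texttt{lik} and \texttt{lik\_new} stand for the marginal likelihoods $f(\bm{z}|\bm{\Psi}^k)$ and $f(\bm{z}|\bm{\Psi}^0)$ evaluated via (\ref{datamarginal}) and are assumed to be supplied by the user, and the updates of the posterior parameters $\bm{\Psi}$, the radii $r_{ji}$, and the pruning of empty clusters are omitted. The counts $m_{jk}$ are drawn by simulating a Chinese restaurant process, which is distributionally equivalent to (\ref{samplemjk}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_m(n_jk, theta):
    # m ~ P(m) propto s(n,m) * theta^m, sampled by simulating a
    # Chinese restaurant process: customer t (t = 0,...,n-1) opens
    # a new table with probability theta / (theta + t).
    t = np.arange(n_jk)
    return int((rng.random(n_jk) < theta / (theta + t)).sum())

def sweep(z, c, beta, alpha_M, alpha_0, lik, lik_new):
    # z[j][i]: summary of ellipse i in pattern j; c[j][i]: its label.
    # beta: stick weights (beta_1, ..., beta_K, beta_u).
    K = len(beta) - 1
    for j in range(len(z)):
        for i in range(len(z[j])):
            others = np.delete(np.asarray(c[j]), i)
            w = np.empty(K + 1)
            for k in range(K):             # assignment step, eq. (samplecji)
                w[k] = ((others == k).sum()
                        + alpha_M * beta[k]) * lik(z[j][i], k)
            w[K] = alpha_M * beta[K] * lik_new(z[j][i])
            c[j][i] = rng.choice(K + 1, p=w / w.sum())
            if c[j][i] == K:               # new cluster: split beta_u
                nu = rng.beta(1.0, alpha_0)
                beta = np.concatenate(
                    [beta[:-1], [nu * beta[-1], (1 - nu) * beta[-1]]])
                K += 1
    m_dot = np.zeros(K)                    # table counts, eq. (samplemjk)
    for j in range(len(z)):                # (assumes empty clusters pruned)
        for k in range(K):
            n_jk = int((np.asarray(c[j]) == k).sum())
            if n_jk > 0:
                m_dot[k] += sample_m(n_jk, alpha_M * beta[k])
    beta = rng.dirichlet(np.append(m_dot, alpha_0))  # eq. (samplebeta)
    return c, beta
\end{verbatim}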
Formula (\ref{HDPDE}) computes the likelihood of a pattern, and its evaluation involves estimating $G_M$. From the Gibbs sampling algorithm we can sample $\{\beta_k\}_{k=1}^{K}$ and $\{\bm{\varphi}_k\}_{k=1}^{K}$. In the application to the BPA data, as the number of clusters grows over the iterations, $\beta_u$ becomes negligibly small. As a result, and for computational convenience, we approximate $G_M$ by dropping the term $\beta_uG_u$ and renormalizing the remaining weights:
\begin{align}
\Hat{G}_M = \sum_{k=1}^K \Hat{\beta}_k\delta_{\Hat{\bm{\varphi}}_k} \quad \text{where} \quad \Hat{\beta}_k = \frac{\beta_k}{\sum_{k'=1}^K \beta_{k'}}
\end{align}
Let $G$ be sampled from $DP(\Hat{\alpha}_M, \Hat{G}_M)$. Since $\Hat{G}_M$ is a finite discrete distribution, so is $G$:
\begin{align}
G = \sum_{k=1}^K \pi_k\delta_{\Hat{\bm{\varphi}}_k}
\end{align}
The probability weights $\{\pi_k\}_{k=1}^{K}$ can be shown to follow a Dirichlet distribution using the derivation in \cite{teh2006hierarchical}:
\begin{align} \label{dirdist}
(\pi_1,...,\pi_K)\sim Dir(\Hat{\alpha}_M\Hat{\beta}_1,...,\Hat{\alpha}_M\Hat{\beta}_K)
\end{align}
Now we can rewrite (\ref{HDPDE}) in terms of $\{\pi_k\}_{k=1}^{K}$:
\begin{align}
f(\bm{p}|\Hat{\alpha}_M,\Hat{G}_M) &= \int \Bigl\{\prod_{i=1}^{n} \int \mathcal{SPN}(\bm{e}_i|\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}})dG(\Tilde{\bm{\mu}},\Tilde{\bm{\Sigma}}) \Bigr\} dDP(G|\Hat{\alpha}_M,\Hat{G}_M) \\
&= \int \Bigl\{\prod_{i=1}^{n} \sum_{k=1}^K \pi_k\mathcal{SPN}(\bm{e}_i|\Tilde{\bm{\mu}}_k,\Tilde{\bm{\Sigma}}_k) \Bigr\}p(\pi_1,...,\pi_K)d\pi_1...d\pi_K \label{mclkhd}
\end{align}
Analytical evaluation of (\ref{mclkhd}) is possible when $K$ and $n$ are small. Alternatively, the integral can be estimated via Monte Carlo by sampling $\{\pi_k\}_{k=1}^{K}$ from (\ref{dirdist}).
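As an illustration of this Monte Carlo approach, the following sketch estimates the log-likelihood of a pattern; it is again a simplified illustration, with \texttt{spn\_pdf} a user-supplied placeholder evaluating $\mathcal{SPN}(\bm{e}_i|\Tilde{\bm{\mu}}_k,\Tilde{\bm{\Sigma}}_k)$ at the $k$-th atom of $\Hat{G}_M$.
\begin{verbatim}
import numpy as np

def mc_pattern_loglik(e, beta_hat, alpha_hat, spn_pdf,
                      n_draws=2000, seed=0):
    # Estimate eq. (mclkhd): average, over pi ~ Dir(alpha_hat*beta_hat),
    # of prod_i sum_k pi_k * SPN(e_i | phi_k).
    rng = np.random.default_rng(seed)
    K = len(beta_hat)
    dens = np.array([[spn_pdf(e_i, k) for k in range(K)] for e_i in e])
    vals = np.empty(n_draws)
    for s in range(n_draws):
        pi = rng.dirichlet(alpha_hat * np.asarray(beta_hat))
        vals[s] = np.log(dens @ pi).sum()   # log of product over ellipses
    m = vals.max()                          # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(vals - m)))
\end{verbatim}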
\end{appendices}
\section*{Acknowledgments}
This work was funded by the Center for Statistics and Applications in Forensic Evidence (CSAFE) through Cooperative Agreements 70NANB15H176 and 70NANB20H019 between NIST and Iowa State University, which includes activities carried out at Carnegie Mellon University, Duke University, University of California Irvine, University of Virginia, West Virginia University, University of Pennsylvania, Swarthmore College and University of Nebraska, Lincoln. The authors would like to thank Tianyu Pan for his discussion on the model development and Ziyi Song for his advice that led to the use of the SALSO algorithm. The authors are also grateful to the late Michael Taylor for providing the data and for numerous conversations that impacted the work.
\bibliographystyle{unsrt}
\section{Introduction}
By a measurable dynamical system (MDS), we mean $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$,
where $\left(X,\mathcal{B},\mu\right)$ is a probability space and
for each $g\in G$, $T_{g}:X\to X$ is an invertible and measure preserving transformation.
For an MDS $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$, let
${\mathcal B}^{+}$ be the collection of all sets of positive measure, and $N(A,B)=\{g\in G:\mu(A\cap T_{g}B)\neq0\}$.
Classical results in ergodic theory state that an action
$(T_{g})_{g\in G}$ is ergodic iff $N(A,B)\neq\emptyset$ for each
pair of $A,B\in{\mathcal B}^{+}$, weakly mixing iff $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$
is a central$^{*}$-set for each pair of $A,B\in{\mathcal B}^{+}$, mildly
mixing iff $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$ is
an IP$^{*}$-set, and strongly mixing iff $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$
is a cofinite set. See for example \citep{refBD,refKY08}.
In \citep{refKY08} the authors described these notions in terms of families.
Here we are interested in the families of central$^{*}$-sets, IP$^{*}$-sets,
cofinite sets and difference sets, which we shall denote by ${\mathcal C}^{*}$, ${\mathcal IP}^{*}$,
${\mathcal C}_f$ and $\triangle$ respectively. The notions of ${\mathcal C}^{*}$ and ${\mathcal IP}^{*}$ will be defined later. In these terms, ${\mathcal C}^{*}$-mixing
implies weak mixing, ${\mathcal IP}^{*}$-mixing implies mild mixing and
${\mathcal C}_{f}$-mixing implies strong mixing.
\begin{defn} Let $A$ be a subset of a semigroup $S$.
(1) $A$ is called an IP-set if there is a sequence $\langle x_n\rangle_{n=1}^{\infty}$ in $S$ such that all finite
sums of the form $\sum_{n\in F}x_n$ with $F\in\mathcal{P}_f(\mathbb{N})$ are in $A$. A subset $A\subset S$ is said to be an IP$^*$-set if it meets every IP-set in $S$. The collection
of all $\mbox{IP}$-sets is denoted by $\mathcal{IP}$ and the collection of all IP$^*$-sets will be denoted by $\mathcal{IP}^*$.\\
(2) $D$ is called a $\triangle$-set if it contains an infinite difference set, i.e., there is a
sequence $\langle x_n\rangle_{n=1}^{\infty}$ in $S$ such that $D \supset \triangle\left(\langle x_n\rangle_{n=1}^{\infty}\right) =\{ x_n\cdot x_m ^{-1}:m,n\in\mathbb{N}\}$. A subset $D\subset S$ is said to be a $\triangle^*$-set if it meets every $\triangle$-set in $S$. The collection of $\triangle$-sets is denoted by $\bigtriangleup$ and the collection of all $\triangle^*$-sets will be denoted by $\bigtriangleup^*$.
\end{defn}
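To fix ideas, consider the group $(\mathbb{Z},+)$, written additively (so that $x_n\cdot x_m^{-1}$ reads $x_n-x_m$). Taking $x_n=2^n$, the set of all finite sums $\sum_{n\in F}2^n$, $F\in\mathcal{P}_f(\mathbb{N})$, is an IP-set, while any $D\supseteq\triangle\left(\langle 2^n\rangle_{n=1}^{\infty}\right)=\{2^n-2^m:m,n\in\mathbb{N}\}$ is a $\triangle$-set.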
In recent years $(F_{q}[x],+)$, where $F_{q}[x]$ denotes the ring
of all polynomials over the finite field $F_{q}$ with $q$ elements, has
received much attention, both from a combinatorial viewpoint and in regard to its action on measurable dynamical systems.
In \citep{refL} the author proved a version of the celebrated Green-Tao
theorem for $(F_{q}[x],+)$. In \citep{refBTZ} the authors proved
some higher order versions of Khintchine's recurrence theorem for the action
of $(F_{q}[x],+)$. In the present article we shall present some combinatorial
properties of $(F_{q}[x],+)$ and $(F_{q}[x],\cdot)$. Further, we
will apply these combinatorial properties to obtain some interesting
properties of their actions on measure spaces.
In order to discuss combinatorial properties of $(F_{q}[x],+)$ and $(F_{q}[x],\cdot)$,
we shall need algebraic properties of the Stone-\v{C}ech compactification of $F_{q}[x]$.
For this purpose we first discuss the algebra of the Stone-\v{C}ech
compactification of a discrete semigroup $S$. For a discrete semigroup $S$, the Stone-\v{C}ech compactification of $S$ will be denoted by $\beta S$. We take the points
of $\beta S$ to be the ultrafilters on $S$, identifying the principal
ultrafilters with the points of $S$ and thus pretending that $S\subseteq\beta S$.
Given $A\subseteq S$, we denote
\[
\overline{A}=\{p\in\beta S:A\in p\}.
\]
The set $\{\overline{A}:A\subset S\}$ is a basis for the closed sets
of $\beta S$. The operation $`\cdot$' on $S$ can be extended to the
Stone-\v{C}ech compactification $\beta S$ of $S$ so that $(\beta S,\cdot)$
is a compact right topological semigroup (meaning that for any $p\in\beta S$,
the function $\rho_{p}:\beta S\rightarrow\beta S$ defined by $\rho_{p}(q)=q\cdot p$
is continuous) with $S$ contained in its topological center (meaning
that for any $x\in S$, the function $\lambda_{x}:\beta S\rightarrow\beta S$
defined by $\lambda_{x}(q)=x\cdot q$ is continuous). A nonempty subset
$I$ of a semigroup $T$ is called a \textit{left ideal} of $T$ if
$TI\subset I$, a \textit{right ideal} if $IT\subset I$, and a \textit{two
sided ideal} (or simply an \textit{ideal}) if it is both a left and a
right ideal. A \textit{minimal left ideal} is a left ideal that
does not contain any proper left ideal. Similarly, we can define \textit{minimal
right ideal} and \textit{smallest ideal}.
Any compact Hausdorff right topological semigroup $T$ has the smallest
two sided ideal
\[
\begin{array}{ccc}
K(T) & = & \bigcup\{L:L\text{ is a minimal left ideal of }T\}\\
& = & \,\,\,\,\,\bigcup\{R:R\text{ is a minimal right ideal of }T\}.
\end{array}
\]
Given a minimal left ideal $L$ and a minimal right ideal $R$, $L\cap R$
is a group, and in particular contains an idempotent. If $p$ and
$q$ are idempotents in $T$ we write $p\leq q$ if and only if $pq=qp=p$.
An idempotent is minimal with respect to this relation if and only
if it is a member of the smallest ideal $K(T)$ of $T$. Given $p,q\in\beta S$
and $A\subseteq S$, $A\in p\cdot q$ if and only if the set $\{x\in S:x^{-1}A\in q\}\in p$,
where $x^{-1}A=\{y\in S:x\cdot y\in A\}$. See \citep{refHS} for
an elementary introduction to the algebra of $\beta S$ and for any
unfamiliar details.
\begin{defn}Let $C$ be a subset of a semigroup $S$. Then $C$ is called a central set if there exists a minimal idempotent $p\in K(\beta S)$ such that $C\in p$. A subset of $S$ which meets every central set is called a central$^*$ set. We shall denote the class of all central sets by $\mathcal{C}$ and that of all central$^*$ sets by $\mathcal{C}^*$.
\end{defn}
The notion of central set was first introduced by Furstenberg in \citep{refF}
using topological dynamics, and this was proved to be equivalent to the definition
above in \citep{refBH}. The basic fact that we need about central sets
is given by the Central Sets Theorem, which is due to Furstenberg
\citep[Proposition 8.21]{refF} for the case $S=\mathbb{Z}$.
\begin{thm}[Central Sets Theorem]
\label{cst} Let $S$ be a semigroup. Let ${\mathcal T}$ be the set
of sequences $\langle y_{n}\rangle_{n=1}^{\infty}$ in $S$. Let $C$
be a subset of $S$ which is central and let $F\in\mathcal{P}_{f}(\mathcal{T})$.
Then there exist a sequence $\langle a_{n}\rangle_{n=1}^{\infty}$
in $S$ and a sequence $\langle H_{n}\rangle_{n=1}^{\infty}$ in $\mathcal{P}_{f}(\mathbb{N})$
such that for each $n\in\mathbb{N}$, $\max H_{n}<\min H_{n+1}$ and for each
$L\in\mathcal{P}_{f}(\mathbb{N})$ and each $f\in F$, $\sum_{n\in L}(a_{n}+\sum_{t\in H_{n}}f(t))\in C$.
\end{thm}
However, the most general version of the Central Sets Theorem is presented in \citep{refDHS}, where all the sequences are handled simultaneously.\\
To end these preliminary discussions, let us recall Khintchine's Theorem,
which states that for any measure preserving system $\left(X,\mathcal{B},\mu,T\right)$,
and for any $\epsilon>0$ the set $\{n\in\mathbb{Z}:\mu(A\cap T^{-n}A)>\mu(A)^{2}-\epsilon\}$
is an IP$^{*}$-set and in particular syndetic. In \citep{refBHK} the authors proved
that for any ergodic system $\left(X,\mathcal{B},\mu,T\right)$ the
sets $\{n\in\mathbb{Z}:\mu(A\cap T^{-n}A\cap T^{-2n}A)>\mu(A)^{3}-\epsilon\}$
and $\{n\in\mathbb{Z}:\mu(A\cap T^{-n}A\cap T^{-2n}A\cap T^{-3n}A)>\mu(A)^{4}-\epsilon\}$
are syndetic subsets of $\mathbb{Z}$. On the other hand they proved
that for $n\geq4$ the above result does not hold in general.
In \citep{refBTZ} the authors proved a result analogous to Khintchine's
Theorem. In fact they proved that for $q>2$, if $c_{0},c_{1},c_{2}$
are distinct elements of $ F_{q}[x]$ and $\left(X,{\mathcal B},\mu,T_{f\in F_{q}[x]}\right)$
is an ergodic system, then for any $A\in{\mathcal B}^+$, and $\epsilon>0$ the set
\[
\{f\in F_{q}[x]:\mu(T_{c_{0}f}A\cap T_{c_{1}f}A\cap T_{c_{2}f}A)>\mu(A)^{3}-\epsilon\}
\]
is syndetic.
The authors also proved that for $q>3$ and $c_{0},c_{1},c_{2},c_{3}\in F_{q}[x]$
the above conclusion is true provided that $c_{i}+c_{j}=c_{k}+c_{l}$
for some permutation $\{i,j,k,l\}$ of $\{0,1,2,3\}$. Further, analogously
to \citep{refBHK} the authors proved that for any $k\geq3$ there
exists $(c_{0},c_{1},\ldots,c_{k})\in F_{q}[x]^{k+1}$ for which Khintchine's
Theorem does not hold in general.\\
At the end of this article we shall present some observations on Khintchine's
Theorem for the action of $(F_{q}[x],\cdot)$.
\noindent \textbf{Acknowledgement} : We would like to thank Professor Neil Hindman
for his helpful suggestions. We also thank the referee for her/his comments that have resulted in substantial improvements to this paper.
\section{Combinatorial Properties of $F_{q}[x]$}
In the terminology of Furstenberg \citep{refF}, an IP$^{*}$ set
$A$ in $\mathbb{Z}$ is a set which meets $\mbox{FS}(\langle x_{n}\rangle_{n=1}^{\infty})$
for any sequence $\langle x_{n}\rangle_{n=1}^{\infty}$ in $\mathbb{Z}$.
This in turn implies that $A$ is an IP$^{*}$ set iff it belongs to every
idempotent of $\beta\mathbb{Z}$. IP$^{*}$-sets are known to have rich combinatorial structure. For example, IP$^{*}$-sets are always
syndetic. Given any IP$^{*}$-set $A$ which is a subset of the set of integers
$\mathbb{Z}$ and a sequence $\langle x_{n}\rangle_{n=1}^{\infty}$ in $\mathbb{Z}$
there exists a sum subsystem $\langle y_n\rangle_{n=1}^{\infty}$
of $\langle x_{n}\rangle_{n=1}^{\infty}$ such that
\[
FS(\langle y_{n}\rangle_{n=1}^{\infty})\cup FP(\langle y_{n}\rangle_{n=1}^{\infty})\subseteq A,
\]
where for any sequence $\langle x_{n}\rangle_{n=1}^{\infty}$ in $\mathbb{Z}$, FS$(\langle x_{n}\rangle_{n=1}^{\infty})$ is defined to be the set $\{\sum_{n\in F}x_n:F$ is a finite subset of $\mathbb{N}
\}$. FP$(\langle x_{n}\rangle_{n=1}^{\infty})$ can be defined analogously.
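For instance, $FS(\langle 2^{n}\rangle_{n=0}^{\infty})=\mathbb{N}$, since every positive integer is the sum of the distinct powers of $2$ appearing in its binary expansion; thus the single sequence $\langle 2^{n}\rangle_{n=0}^{\infty}$ already witnesses that $\mathbb{N}$ is an IP-set in $(\mathbb{Z},+)$.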
It is well known that in the ring $(\mathbb{Z},+,\cdot)$ the non-trivial
principal ideals are IP$^{*}$ sets. So the natural question
is whether this result is true for arbitrary rings. The answer is
no. In fact in the ring $\mathbb{Z}[x]$ the ideal generated by $x$ has empty
intersection with $\mathbb{N}$, whereas $\mathbb{N}$ is an IP-set in $\mathbb{Z}[x]$. We will
prove that in the ring $(F_{q}[X],+,\cdot)$ every principal ideal is an IP$^{*}$-set.
In fact we will also prove that in $(F_{q}\left[X_{1},X_{2},\ldots,X_{k}\right],+,\cdot)$
every ideal of the form $\langle f_{1}(X_{1}),f_{2}(X_{2}),\ldots,f_{k}(X_{k})\rangle$
is an IP$^{*}$-set.
\begin{thm}
\label{IPstar} In the polynomial ring $(F_{q}\left[X_{1},X_{2},\ldots,X_{k}\right],+,\cdot)$
over the finite field\textup{ $F_{q}$}, the ideal $\langle f_{1}(X_{1}),f_{2}(X_{2}),\ldots,f_{k}(X_{k})\rangle$
generated by $f_{1}(X_{1})$, \textup{$f_{2}(X_{2})$, $\ldots$,
$f_{k}(X_{k})$} (at most one of which is non constant), is an IP$^{*}$-set
in the corresponding additive group.
\end{thm}
\begin{proof}
For simplicity we work with $k=2$. Let $\langle g_{n}(X_{1},X_{2})\rangle_{n=1}^{\infty}$ be
a sequence in $F_{q}[X_{1},X_{2}]$.
Let $g(X_{1},X_{2})$ be a polynomial in $F_{q}[X_{1},X_{2}]$. Then
\[
g(X_{1},X_{2})=\sum_{i\leq n,j\leq m}a_{i,j}X_{1}^{i}X_{2}^{j},\mbox{ where }a_{i,j}\in F_{q}.
\]
Since $X_{1}^{i},f_{1}(X_{1})\in F_{q}[X_{1}]$ and $X_{2}^{j},f_{2}(X_{2})\in F_{q}[X_{2}]$,
by the division algorithm we have
\[
X_{1}^{i}=f_{1}(X_{1})q_{1,i}(X_{1})+r_{1,i}(X_{1})\mbox{, where deg}(r_{1,i}(X_{1}))<\mbox{deg}f_{1}(X_{1})
\]
\[
X_{2}^{j}=f_{2}(X_{2})q_{2,j}(X_{2})+r_{2,j}(X_{2})\mbox{, where deg}(r_{2,j}(X_{2}))<\mbox{deg}f_{2}(X_{2}).
\]
Then $g(X_{1},X_{2})$ can be expressed as
\[
\sum_{i\leq n,j\leq m}a_{i,j}(f_{1}(X_{1})q_{1,i}(X_{1})+r_{1,i}(X_{1}))(f_{2}(X_{2})q_{2,j}(X_{2})+r_{2,j}(X_{2})).
\]
Hence
\begin{align*}
g(X_{1},X_{2})={}&f_{1}(X_{1})h_{1}(X_{1},X_{2})+f_{2}(X_{2})h_{2}(X_{1},X_{2})\\
&+\sum_{i\leq n,\,j\leq m}a_{i,j}\,r_{1,i}(X_{1})\,r_{2,j}(X_{2}),
\end{align*}
where $\mbox{deg}(r_{1,i}(X_{1}))<\mbox{deg}f_{1}(X_{1})$ and $\mbox{deg}(r_{2,j}(X_{2}))<\mbox{deg}f_{2}(X_{2})$.
Therefore we can write
\[
g(X_{1},X_{2})=h(X_{1},X_{2})+r(X_{1},X_{2}),
\]
where
\[
h(X_{1},X_{2})\in\langle f_{1}(X_{1}),f_{2}(X_{2})\rangle
\]
and $r(X_{1},X_{2})$ is a polynomial such that $\mbox{deg}\,r(X_{1},X_{2})<\mbox{deg}f_{1}(X_{1})+\mbox{deg}f_{2}(X_{2})$.
This implies that
\[
g_{n}(X_{1},X_{2})=h_{n}(X_{1},X_{2})+r_{n}(X_{1},X_{2})
\]
where
\[
h_{n}(X_{1},X_{2})\in\langle f_{1}(X_{1}),f_{2}(X_{2})\rangle
\]
and $r_{n}(X_{1},X_{2})$ is a polynomial such that $\mbox{deg}\,r_{n}(X_{1},X_{2})<\mbox{deg}f_{1}(X_{1})+\mbox{deg}f_{2}(X_{2})$.
But the set $\{r_{n}(X_{1},X_{2}):n\in\mathbb{N}\}$ is finite. Since $\{g_{n}(X_{1},X_{2}):n\in\mathbb{N}\}$
is infinite, there exist $q$ polynomials $g_{n_{i}}(X_{1},X_{2})$, $i=1,2,\ldots,q$,
such that the corresponding $r_{n_{i}}(X_{1},X_{2})$, $i=1,2,\ldots,q$,
are all equal. Adding, we get
\[
\sum_{i=1}^{q}g_{n_{i}}(X_{1},X_{2})=\sum_{i=1}^{q}h_{n_{i}}(X_{1},X_{2})+\sum_{i=1}^{q}r_{n_{i}}(X_{1},X_{2}).
\]
This implies that ${\displaystyle \sum_{i=1}^{q}g_{n_{i}}(X_{1},X_{2})}\in\langle f_{1}(X_{1}),f_{2}(X_{2})\rangle$,
since ${\displaystyle \sum_{i=1}^{q}r_{n_{i}}(X_{1},X_{2})=0}$ (the sum of $q$ equal elements vanishes, as the characteristic of $F_{q}$ divides $q$). Therefore,
$\langle f_{1}(X_{1}),f_{2}(X_{2})\rangle$ is an IP$^{*}$-set.
\end{proof}
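To see the mechanism of the proof in the simplest case, take $q=2$, $k=1$ and the ideal $\langle X\rangle$ in $F_{2}[X]$. Given a sequence $\langle g_{n}(X)\rangle_{n=1}^{\infty}$, each $g_{n}$ has remainder modulo $X$ equal to its constant term, an element of $F_{2}$, so some two members $g_{n_{1}},g_{n_{2}}$ share the same constant term $r$. Then
\[
g_{n_{1}}(X)+g_{n_{2}}(X)\equiv r+r=2r=0\ (\mbox{mod }X),
\]
so $g_{n_{1}}+g_{n_{2}}\in\langle X\rangle$, and hence $\langle X\rangle$ meets every set of the form $FS(\langle g_{n}(X)\rangle_{n=1}^{\infty})$.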
In the case of $\mathbb{Z}$ we know that iterated spectra of an IP$^{*}$ set
are also IP$^{*}$ but may not contain any ideal \citep{refBHK96}. But
for $(F_{q}[X],+)$ any IP$^{*}$ set contains an ideal up to finitely many terms.
\begin{thm}
\label{syndetic IP}Any $IP^{*}$-set in $(F_{q}[X],+)$ contains
an ideal of the form $\langle X^{m}\rangle$, for some $m\in\mathbb{N}$,
up to finitely many terms.
\end{thm}
\begin{proof}
We claim that any syndetic IP set $A$ in $(F_{q}[X],+)$ contains
$\langle X^{m}\rangle$ for some $m\in\mathbb{N}$. Being a syndetic
set, $A$ is of the form
\[
A={\displaystyle \bigcup_{i=1}^{k}(f_{i}(X)+\langle X^{m}\rangle)}\mbox{ (up to finitely many terms) }
\]
for some $m,k\in\mathbb{N}$ with $m>\mbox{deg}f_{i}(X)$ for each $i$. Again, since $A$
is an IP-set, one of the $f_{i}(X)$ must be zero. Indeed, $A$ being an
IP set, there exists a sequence $\langle g_{i}(X)\rangle_{i=1}^{\infty}$ such that
$FS(\langle g_{i}(X)\rangle_{i=1}^{\infty})\subseteq A$. This implies that for each
$i\in\mathbb{N}$, there exist $j\in\{1,2,\cdots,k\}$ and some $h_{i}(X)\in F_{q}[X]$
such that $g_{i}(X)=f_{j}(X)+h_{i}(X)X^{m}$. Since $\{g_{i}(X):i\in\mathbb{N}\}$
is infinite, there exist $q$ polynomials $g_{n_{i}}(X)$, $i=1,2,\ldots,q$,
such that the corresponding $f_{j_{i}}(X)$ are equal for $i=1,2,\ldots,q$.
Their sum lies in $FS(\langle g_{i}(X)\rangle_{i=1}^{\infty})\subseteq A$, while the $q$ equal
terms $f_{j_{i}}(X)$ sum to zero, so this sum lies in $\langle X^{m}\rangle$. Hence some
$f_{j}(X)$ is equal to zero.
\end{proof}
We end this section with the following observation. We know that in the
case of $\mathbb{Z}$, the intersection of a thick set and an IP syndetic set may
not be central, but in the case of $F_{q}[X]$, such an intersection is always
central. In fact, in $F_{q}[X]$ any IP syndetic set is an IP$^{*}$ set; since
thick sets are always central, the intersection of a thick set and
an IP syndetic set is central.
\section{Mixing Properties of the action of $(F_{q}[x],+)$}
In this section we shall show that all the mixing properties are equivalent
under the action of $(F_{q}[x],+)$. First let us recall the following
definitions.
\begin{defn}
A measure preserving system $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$
is said to be ergodic if any set $A\in{\mathcal B}$ which satisfies $\mu(A\bigtriangleup T_{g}A)=0$
for every $g\in G$ has either measure $0$ or $1$.
\end{defn}
\begin{defn}
Let $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$ be a measure
preserving dynamical system. Then
\begin{enumerate}
\item $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$ is called strong
mixing if for any $\epsilon>0$ and any $A,B\in{\mathcal B}$ with positive
measure, the set $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$
is a cofinite set.
\item $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$ is called mild
mixing if for any $\epsilon>0$ and any $A,B\in{\mathcal B}$ with positive
measure, the set $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$
is an IP$^{*}$-set.
\item $\left(X,\mathcal{B},\mu,(T_{g})_{g\in G}\right)$ is called weak
mixing if for any $\epsilon>0$ and any $A,B\in{\mathcal B}$ with positive
measure, the set $\{g\in G:|\mu(A\cap T_{g}B)-\mu(A)\mu(B)|<\epsilon\}$
is a central$^{*}$-set.
\end{enumerate}
\end{defn}
\begin{lem}
\label{st mix}Let $\left(X,\mathcal{B},\mu,(T_{f})_{f\in F_{q}[x]}\right)$
be a measure preserving system. Then it is strongly mixing iff for each
$B\in{\mathcal B}$ with $\mu(B)>0$ and each infinite set $F$, there exists
a sequence of polynomials $\langle f_{n}\rangle_{n=1}^{\infty}$ in $F$
such that $\chi_{B}\circ T_{f_{n}}=U_{T_{f_{n}}}\chi_{B}\to f_{B}=\mu(B)$ weakly.
\end{lem}
\begin{proof}
Let $\left(X,\mathcal{B},\mu,(T_{f})_{f\in F_{q}[x]}\right)$ be a
measure preserving system. Let $\left\{ A_{i}\right\} _{i=1}^{\infty}$
be a countable basis of ${\mathcal B}$, i.e., $\left\{ A_{i}\right\} _{i=1}^{\infty}$
is dense in ${\mathcal B}$ with respect to the metric $d(A,B)=\mu(A\bigtriangleup B)$.
Let $B\in{\mathcal B}$, with $\mu(B)>0$ and $F$ be an infinite set.
Let us set for each $i\in\mathbb{N}$ and $\epsilon>0$,
\[
F(i,\epsilon)=\{f\in F_{q}[x]:|\mu(A_{i}\cap T_{f}B)-\mu(A_{i})\mu(B)|<\epsilon\}.
\]
Then for each $i\in\mathbb{N}$ and $\epsilon>0$, $F(i,\epsilon)$ is a cofinite set. Let
us choose $f_{1}(x)\in F\cap F(1,1)$. Next we choose another
$f_{2}(x)\in F\cap F(1,\frac{1}{2})\cap F(2,\frac{1}{2})$
such that $\text{deg}f_{2}>\text{deg}f_{1}$. Inductively we choose
a sequence $\langle f_{n}\rangle_{n=1}^{\infty}$ with $\text{deg}f_{n+1}>\text{deg}f_{n}$
such that
\[
f_{n+1}\in F\cap F(1,\tfrac{1}{n+1})\cap\ldots\cap F(n+1,\tfrac{1}{n+1}).
\]
So we get a sequence $\langle f_{n}\rangle_{n=1}^{\infty}$ in $F$. Passing
to a further subsequence if necessary, we may assume $\chi_{B}\circ T_{f_{n}}=U_{T_{f_{n}}}\chi_{B}\to f_{B}$ (weakly). It
is clear that for each $i$
\[
\int\chi_{A_{i}}(f_{B}-\mu(B))d\mu=0.
\]
This implies that $f_{B}=\mu(B)$.
Conversely, suppose that for each $B\in{\mathcal B}$ with $\mu(B)>0$ and each
infinite set $F$, there exists a sequence of polynomials $\langle f_{n}\rangle_{n=1}^{\infty}$
in $F$ such that $\chi_{B}\circ T_{f_{n}}=U_{T_{f_{n}}}\chi_{B}\to f_{B}=\mu(B)$.
If $\left(X,\mathcal{B},\mu,(T_{f})_{f\in F_{q}[x]}\right)$ is
not strong mixing, then there exist $A,B\in{\mathcal B}$ with positive
measure and $\epsilon>0$ such that $\{f\in F_{q}[x]:|\mu(A\cap T_{f}B)-\mu(A)\mu(B)|\geq\epsilon\}$
is an infinite set. Without loss of generality, the set $F:=\{f\in F_{q}[x]:\mu(A\cap T_{f}B)\geq\mu(A)\mu(B)+\epsilon\}$
is infinite. For this $B\in{\mathcal B}$ and the infinite subset
$F\subset F_{q}[x]$ we consider the set $\text{cl}_{w}\{U_{T_{f}}(\chi_{B}):f\in F\}$.
By the hypothesis, the constant function $\mu(B)$ belongs to $\text{cl}_{w}\{U_{T_{f}}(\chi_{B}):f\in F\}$.
On the other hand, every $f\in\text{cl}_{w}\{U_{T_{f}}(\chi_{B}):f\in F\}$
satisfies $\int\chi_{A}\cdot f\,d\mu\geq\mu(A)\mu(B)+\epsilon$, since this inequality holds for each $U_{T_{f}}(\chi_{B})$ with $f\in F$.
This contradicts $\mu(B)\in\text{cl}_{w}\{U_{T_{f}}(\chi_{B}):f\in F\}$.
\end{proof}
\begin{thm}
Let $\left(X,\mathcal{B},\mu,(T_{f})_{f\in F_{q}[x]}\right)$ be a
measure preserving action. Then $T$ is strongly mixing iff for any
$\epsilon>0$ and $A\in{\mathcal B}$ with $\mu(A)>0$
\[
\{f\in F_{q}[x]:|\mu(A\cap T_{f}A)-\mu(A)^{2}|<\epsilon\}\in\triangle^{*}.
\]
\end{thm}
\begin{proof}
Strong mixing clearly implies the given condition.
For the converse, by Lemma \ref{st mix} it is sufficient to show that
for each $B\in{\mathcal B}$ with $\mu(B)>0$ and an infinite set $F$,
there exists a sequence of polynomials $\langle f_{n}\rangle_{n=1}^{\infty}$
in $F$ such that $\chi_{B}\circ T_{f_{n}}=U_{T_{f_{n}}}\chi_{B}{\longrightarrow}f_{B}=\mu(B)$ in the weak topology.
Without loss of generality we can assume that $F$ contains a sequence
$\langle f_{n}\rangle_{n=1}^{\infty}$ such that $\text{deg}f_{i}<\text{deg}f_{i+1}$.
Since $\triangle$-sets
always have the Ramsey property, there exists some $F_{1}\subset F$ such
that
\[
F_{1}-F_{1}\subset(F-F)\cap\{f\in F_{q}[x]:|\mu(B\cap T_{f}B)-\mu(B)^{2}|<\frac{1}{2}\}.
\]
Choosing $F_{1}\supset F_{2}\supset\ldots\supset F_{k}$, we can inductively
choose $F_{k+1}\subset F_{k}$ such that
\[
F_{k+1}-F_{k+1}\subset(F_{k}-F_{k})\cap\{f\in F_{q}[x]:|\mu(B\cap T_{f}B)-\mu(B)^{2}|<\frac{1}{2^{k+1}}\}.
\]
Therefore for any $f\neq g\in F_{k}$, we have $|\mu(T_{f}B\cap T_{g}B)-\mu(B)^{2}|<\frac{1}{2^{k}}$.
Now let us consider a sequence $\langle f_{n_{i}}\rangle_{i=1}^{\infty}$
such that $f_{n_{i}}\in F_{n_{i}}$. Then clearly $\chi_{B}\circ T_{f_{n_{i}}}=U_{T_{f_{n_{i}}}}\chi_{B}{\longrightarrow}f_{B}$ in the weak topology.
Thus
\[
\langle f_{B},f_{B}\rangle=\lim_{i}\lim_{j}\langle\chi_{B}\circ T_{f_{n_{i}}},\chi_{B}\circ T_{f_{n_{j}}}\rangle\leq\mu(B)^{2}+\lim_{i}\frac{1}{2^{i}}=\mu(B)^{2}=\left(\int f_{B}d\mu\right)^{2}.
\]
This shows that $f_{B}=\mu(B)$, by the equality case of the Cauchy-Schwarz inequality.
\end{proof}
So the above theorem shows that, just as for the $(\mathbb{N},+)$ action, for the
$(F_{q}[x],+)$ action $\triangle^{*}$-mixing and strong mixing
are equivalent. But the authors believe that this is not true for actions of arbitrary groups.
\section{Action of $(F_{q}[x],\cdot)$}
In this section we shall show that $(F_{q}[x],\cdot)$ behaves quite
similarly to $(\mathbb{N},+)$, and, using some established examples for
the action of $(\mathbb{N},+)$, we shall produce corresponding examples for the action of $(F_{q}[x],\cdot)$.
First, we require the following lemmas.
\begin{lem}
Let $\varphi:(F_{q}[x],\cdot)\to(\mathbb{N},+)$ be a map defined by $\varphi(f)=\text{deg}f$.
Then $C$ is an IP-set in $(\mathbb{N},+)$ iff $\varphi^{-1}(C)$ is an IP-set
in $(F_{q}[x],\cdot)$.
\end{lem}
\begin{cor}
Let $\varphi:(F_{q}[x],\cdot)\to(\mathbb{N},+)$ be a map defined by $\varphi(f)=\text{deg}f$.
Then $C$ is an IP$^{*}$-set in $(\mathbb{N},+)$ iff $\varphi^{-1}(C)$ is an
IP$^{*}$-set in $(F_{q}[x],\cdot)$.
\end{cor}
The following lemma follows from \citep[Corollary 4.22]{refBG}. We still include a proof suggested by Prof. Neil Hindman, for the sake of completeness.
\begin{lem}
Let $\varphi:(F_{q}[x],\cdot)\to(\mathbb{N},+)$ be a map defined by $\varphi(f)=\text{deg}f$.
Then $C$ is a central set in $(\mathbb{N},+)$ iff $\varphi^{-1}(C)$ is a central set in
$(F_{q}[x],\cdot)$.
\end{lem}
\begin{proof}
Clearly $\varphi$ is a homomorphism, so it has a continuous extension
$\tilde{\varphi}$ from $(\beta F_{q}[x],\cdot)$ onto $(\beta\mathbb{N},+)$ \citep[Corollary 4.22 and
Exercise 3.4.1]{refHS}.\\
Necessity. Let $C$ be a central set in $(\mathbb{N},+)$ and let $p$ be a minimal
idempotent containing $C$. Let $M=\tilde{\varphi}^{-1}(\{p\})$. Then $M$ is a compact Hausdorff right topological semigroup, so pick an idempotent $q \in K(M)$. We claim that $q$ is
minimal in $(\beta F_{q}[x],\cdot)$. So let $r$ be an idempotent in $(\beta F_{q}[x],\cdot)$ with $r \leq q$. Then $\tilde{\varphi}(r)\leq\tilde{\varphi}(q)=p$. Hence $r\in M$ and thus $r=q$.\\
Sufficiency. Assume that $\varphi^{-1}(C)$ is central in $(F_{q}[x],\cdot)$. Pick an idempotent $p \in K(\beta F_{q}[x])$
such that $\varphi^{-1}(C)\in p$. Then $\tilde{\varphi}(p)\in \tilde{\varphi}(K(\beta F_{q}[x]))$ and $\tilde{\varphi}(K(\beta F_{q}[x]))=K(\beta \mathbb{N})$ by \citep[Exercise 1.7.3]{refHS}. Thus by \citep[Lemma 3.30]{refHS} $\varphi[\varphi^{-1}(C)]\in\tilde{\varphi}(p)$ and $\varphi[\varphi^{-1}(C)]=C$.
\end{proof}
\begin{cor}
Let $\varphi:(F_{q}[x],\cdot)\to(\mathbb{N},+)$ be a map defined by $\varphi(f)=\text{deg}f$.
Then $C$ is a central$^{*}$ set in $(\mathbb{N},+)$ iff $\varphi^{-1}(C)$
is a central$^{*}$ set in $(F_{q}[x],\cdot)$.
\end{cor}
We know that strong mixing $\Rightarrow$ mild mixing $\Rightarrow$
weak mixing; but the converses are not true in
general. In case of the action of $(F_{q}[x],+)$ on a measure space
$(X,{\mathcal B},\mu)$ all the mixings are equivalent. In contrast to
the action of $(F_{q}[x],\cdot)$ we see that there are weak mixing systems
that are not mild mixing and there are mild mixing systems that are not
strong mixing.
Let $(X,{\mathcal B},\mu,T)$ be a weak mixing system which is not mild
mixing. We define an action of $(F_{q}[x],\cdot)$ by the formula
\[
T_{f}(x)=T^{\text{deg}f}(x).
\]
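Note that this indeed defines an action: since $F_{q}$ is a field, $\text{deg}(fg)=\text{deg}f+\text{deg}g$ for nonzero $f,g$, whence
\[
T_{fg}=T^{\text{deg}(fg)}=T^{\text{deg}f+\text{deg}g}=T_{f}\circ T_{g},
\]
and the nonzero constants, having degree $0$, act as the identity.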
Let $\epsilon>0$ and any $A,B\in{\mathcal B}$ with positive measure.
Since $(X,{\mathcal B},\mu,T)$ is weak mixing
\[
\{n\in\mathbb{Z}:|\mu(A\cap T^{-n}B)-\mu(A)\mu(B)|<\epsilon\}
\]
is a central$^{*}$-set. This further implies that the set
\[
\{f:|\mu(A\cap T_{f}B)-\mu(A)\mu(B)|<\epsilon\}=\{f:|\mu(A\cap T^{\text{deg}f}B)-\mu(A)\mu(B)|<\epsilon\}
\]
is a central$^{*}$-set so that $(X,{\mathcal B},\mu,T_{f\in(F_{q}[x],\cdot)})$
is weak mixing. Using a similar technique and the fact that a
set $A$ in $\mathbb{N}$ is an IP$^{*}$-set iff $\varphi^{-1}(A)$ is an
IP$^{*}$-set, we can show that $(X,{\mathcal B},\mu,T_{f\in(F_{q}[x],\cdot)})$
is not mild mixing. Similarly we can show that there exists $(X,{\mathcal B},\mu,T_{f\in(F_{q}[x],\cdot)})$
which is a mild mixing system but not strong mixing.
In \citep{refBH} the authors introduced the notion of dynamical IP$^{*}$-set.
\begin{defn}
A set $A$ in a semigroup $S$ is called a dynamical IP$^{*}$-set
if there exists a measure preserving dynamical system $\left(X,\mathcal{B},\mu,(T_{s})_{s\in S}\right)$,
$A\in{\mathcal B}$ with $\mu\left(A\right)>0$, such that $\left\{ s\in S:\mu\left(A\cap T_{s}^{-1}A\right)>0\right\} \subseteq A$.
\end{defn}
By \citep[Theorem 16.32]{refHS}, there is an IP$^{*}$ set $B$ in
$(\mathbb{N},+)$ such that for each $n\in\mathbb{N}$, neither $n+B$ nor $-n+B$
is an IP$^{*}$ set. Consequently, the following theorem shows that
not every IP$^{*}$-set is a dynamical IP$^{*}$-set.
\begin{thm}
Let B be a dynamical IP$^{*}$ set in $(\mathbb{N},+)$. There is a dynamical
IP$^{*}$-set $C\subset B$ such that for each $n\in C$, $-n+C$
is a dynamical IP$^{*}$ set (and hence not every IP$^{*}$ set is
a dynamical IP$^{*}$ set).
\end{thm}
\begin{proof}
\citep[Theorem 19.35][]{refHS}.
\end{proof}
Now, since there is an IP$^{*}$ set $B$ in $(\mathbb{N},+)$ such that for
each $n\in\mathbb{N}$, neither $n+B$ nor $-n+B$ is an IP$^{*}$ set, we
can show that $\varphi^{-1}B$ is an IP$^{*}$-set in $(F_{q}\left[x\right],\cdot)$,
but neither $\varphi^{-1}B/f$ nor $(\varphi^{-1}B)f$ is an IP$^{*}$-set
in $(F_{q}\left[x\right],\cdot)$ for any $f\in$ $F_{q}\left[x\right]\setminus F_{q}$.
Using a method analogous to that of \citep[Theorem 19.35][]{refHS}, we
can prove the following.
\begin{thm}
Let B be a dynamical IP$^{*}$ set in $(F_{q}[x],\cdot)$. There
is a dynamical IP$^{*}$-set $C\subset B$ such that for each $f\in C$,
$C/f$ is a dynamical IP$^{*}$ set (and hence not every IP$^{*}$
set is a dynamical IP$^{*}$ set in $(F_{q}[x],\cdot)$ ).
\end{thm}
It can be proved that if $B$ is a dynamical IP$^{*}$ set in $(\mathbb{N},+)$,
then $\varphi^{-1}(B)$ is also a dynamical IP$^{*}$ set in $(F_{q}[x],\cdot)$
via the transformation $T_{f}(x)=T^{\text{deg}f}(x)$. But we do not
know whether the converse is true or false.
In the discussion of the action of $(F_{q}[x],\cdot)$, we have noticed
that it behaves quite similarly to the action of $(\mathbb{N},+)$. This motivates
us to raise the following question.
\begin{qs}
Given any ergodic system $\left(X,\mathcal{B},\mu,T_{f\in(F_{q}[x],\cdot)}\right)$,
are the sets,
\[
\{f\in F_{q}[x]:\mu(A\cap T_{f}A\cap T_{f^{2}}A)>\mu(A)^{3}-\epsilon\}
\]
and
\[
\{f\in F_{q}\left[x\right]:\mu(A\cap T_{f}A\cap T_{f^{2}}A\cap T_{f^{3}}A>\mu(A)^{4}-\epsilon\}
\]
syndetic subsets of $(F_{q}[x],\cdot)$?
\end{qs}
In contrast, it can be shown that for $k>3$ the sets $\{f\in F_{q}\left[x\right]:\mu(A\cap T_{f}A\cap T_{f^{2}}A\cap T_{f^{3}}A\cap\ldots\cap T_{f^{k}}A)>\mu(A)^{k+1}-\epsilon\}$
need not be syndetic subsets of $(F_{q}[x],\cdot)$, as witnessed by the transformation
$T_{f}(x)=T^{\text{deg}f}(x)$.
\section{Introduction}
Recall that a \emph{quasi-order} is a binary relation which is reflexive and transitive.
A well-studied quasi-order over the Baire space $\mathbb N^{\mathbb N}$ is the binary relation $\le^*$ which is defined
by letting, for any two elements $\eta:\mathbb N\rightarrow\mathbb N$ and $\xi:\mathbb N\rightarrow\mathbb N$,
$$\eta\le^*\xi\text{ iff }\{n\in\mathbb N\mid \eta(n)>\xi(n)\}\text{ is finite}.$$
This quasi-order is behind the definitions of cardinal invariants $\mathfrak b$ and $\mathfrak d$ (see~\cite[\S2]{MR2768685}),
and serves as a key to the analysis of \emph{oscillation of real numbers} which is known to have prolific applications to topology, graph theory, and forcing axioms (see~\cite{MR980949}).
By a classical theorem of Hechler \cite{MR0360266}, the structure $(\mathbb N^{\mathbb N},\le^*)$ is universal
in the sense that for any $\sigma$-directed poset $\mathbb P$ with no maximal element, there is a $ccc$ forcing extension in which
$(\mathbb N^{\mathbb N},\le^*)$ contains a cofinal order-isomorphic copy of $\mathbb P$.
In this paper, we consider (a refinement of) the higher analogue of the relation $\le^*$ to the realm of the generalized Baire space $\kappa^\kappa$ (sometimes referred to as the higher Baire space),
where $\kappa$ is a regular uncountable cardinal. This is done by simply replacing the ideal of finite sets with the ideal of nonstationary sets, as follows.\footnote{A comparison of the generalization considered here with the one obtained
by replacing the ideal of finite sets with the ideal of bounded sets may be found in \cite[\S8]{MR1355135}.}
\begin{defn}
Given a stationary subset $S\subseteq\kappa$, we define a quasi-order $\sq S$ over $\kappa^\kappa$
by letting, for any two elements $\eta:\kappa\rightarrow\kappa$ and $\xi:\kappa\rightarrow\kappa$,
$$\eta\sq S\xi\text{ iff }\{\alpha\in S\mid \eta(\alpha)>\xi(\alpha)\}\text{ is nonstationary}.$$
\end{defn}
Note that since the nonstationary ideal over $S$ is $\sigma$-closed, the quasi-order $\le^S$ is well-founded,
meaning that we can assign a \emph{rank} value $\|\eta\|$ to each element $\eta$ of $\kappa^\kappa$.
The utility of this approach is demonstrated in the celebrated work of Galvin and Hajnal \cite{MR0376359} concerning the behavior of the power function over the singular cardinals,
and, of course, plays an important role in Shelah's \textit{pcf theory} (see~\cite[\S4]{MR2768693}).
It was also demonstrated to be useful in the study of partition relations of singular cardinals of uncountable cofinality \cite{MR2494318}.
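Explicitly, following the convention of \cite{MR0376359}, the rank may be computed recursively: writing $\eta<^{S}\xi$ when $\{\alpha\in S\mid \eta(\alpha)\ge\xi(\alpha)\}$ is nonstationary, one sets
$$\|\xi\|:=\sup\{\|\eta\|+1\mid \eta\in\kappa^\kappa,\ \eta<^{S}\xi\}.$$
For instance, the constant function with value $n<\omega$ has rank $n$.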
\medskip
In this paper, we first address the question of how $\sq S$ compares with $\sq{S'}$ for various subsets $S$ and $S'$. It is proved:
\begin{thma} Suppose that $\kappa$ is a regular uncountable cardinal and $\axiomfont{GCH}$ holds.
Then there exists a cofinality-preserving $\axiomfont{GCH}$-preserving forcing extension in which for all stationary subsets $S,S'$ of $\kappa$,
there exists a map $f:\kappa^{\le\kappa}\rightarrow2^{\le\kappa}$ such that, for all $\eta,\xi\in\kappa^{\le\kappa}$,
\begin{itemize}
\item[$\bullet$] $\dom(f(\eta))=\dom(\eta)$;
\item[$\bullet$] if $\eta\subseteq\xi$, then $f(\eta)\subseteq f(\xi)$;
\item[$\bullet$] if $\dom(\eta)=\dom(\xi)=\kappa$, then $\eta\sq S\xi$ iff $f(\eta)\sq{S'}f(\xi)$.
\end{itemize}
\end{thma}
Note that as $\rng(f\restriction\kappa^\kappa)\subseteq 2^\kappa$, the above assertion is non-trivial even in the case $S=S'=\kappa$,
and forms a contribution to the study of lossless encoding of substructures of $(\kappa^{\le\kappa},\ldots)$ as substructures of $(2^{\le\kappa},\ldots)$ (see, e.g., \cite[Appendix]{paper20}).
\medskip
To formulate our next result --- an optimal strengthening of Theorem~A ---
let us recall a few basic notions from generalized descriptive set theory.
\emph{The generalized Baire space} is the set $\kappa^\kappa$ endowed with
the \emph{bounded topology}, in which a basic open set takes the form
$[\zeta]:=\{\eta\in \kappa^\kappa \mid \zeta\subset \eta\}$, with $\zeta$ an element of $\kappa^{<\kappa}$.
A subset $F\subseteq\kappa^\kappa$ is \emph{closed} iff its complement is open iff there exists a tree $T\subseteq\kappa^{<\kappa}$ such that
$[T]:=\{ \eta\in\kappa^\kappa\mid\forall\alpha<\kappa(\eta\restriction\alpha\in T)\}$ is equal to $F$.
A subset $A\subseteq\kappa^\kappa$ is \emph{analytic} iff there is a closed subset $F$ of the product space $\kappa^\kappa\times \kappa^\kappa$
such that its projection $\pr(F):=\{\eta\in\kappa^\kappa\mid \exists\xi\in\kappa^\kappa~(\eta,\xi)\in F\}$ is equal to $A$.
\emph{The generalized Cantor space} is the subspace $2^\kappa$ of $\kappa^\kappa$ endowed with the induced topology.
The notions of open, closed and analytic subsets of $2^\kappa$, $2^\kappa\times2^\kappa$ and $\kappa^\kappa\times\kappa^\kappa$
are then defined in the obvious way.
\begin{defn}
The restriction of the quasi-order $\sq S$ to $2^\kappa$ is denoted by $\sqc S$.
\end{defn}
For all $\eta,\xi\in\kappa^\kappa$, denote $\Delta(\eta,\xi):=\min(\{\alpha<\kappa\mid \eta(\alpha)\neq\xi(\alpha)\}\cup\{\kappa\})$.
\begin{defn}
Let $R_1$ and $R_2$ be binary relations over $X_1,X_2\in \{2^\kappa, \kappa^\kappa\}$, respectively.
A function $f:X_1\rightarrow X_2$ is said to be:
\begin{enumerate}[(a)]
\item a \emph{reduction of $R_1$ to $R_2$} iff, for all $\eta,\xi\in X_1$, $$\eta\mathrel{R_1}\xi\text{ iff }f(\eta)\mathrel{R_2}f(\xi).$$
\item \emph{$1$-Lipschitz} iff for all $\eta,\xi\in X_1$, $$\Delta(\eta,\xi)\le\Delta(f(\eta),f(\xi)).$$
\end{enumerate}
The existence of a function $f$ satisfying (a) and (b) is denoted by ${R_1}\hookrightarrow_1{R_2}$.
\end{defn}
In the above language, Theorem~A provides a model in which, for all stationary subsets $S,S'$ of $\kappa$,
${\sq S}\hookrightarrow_1{\sqc S'}$.
As $\sq{S}$ is an analytic quasi-order over $\kappa^\kappa$,
it is natural to ask whether a stronger universality result is possible,
namely, whether it is forceable that \emph{any} analytic quasi-order over $\kappa^\kappa$ admits a $1$-Lipschitz reduction to $\sqc{S'}$ for some (or maybe even for all) stationary $S'\subseteq\kappa$.
The answer turns out to be affirmative, hence the choice of the title of this paper.
\begin{thmb} Suppose that $\kappa$ is a regular uncountable cardinal and $\axiomfont{GCH}$ holds.
Then there exists a cofinality-preserving $\axiomfont{GCH}$-preserving forcing extension
in which, for every analytic quasi-order $Q$ over $\kappa^\kappa$
and every stationary $S\subseteq\kappa$, ${Q}\hookrightarrow_1{\sqc S}$.
\end{thmb}
\begin{uremark} The universality statement under consideration is optimal, as ${Q}\hookrightarrow_1{\sqc S}$ implies that $Q$ is analytic.
\end{uremark}
The proof of the preceding goes through a new diamond-type principle for reflecting second-order formulas, introduced here and denoted by $\dl^*_S(\Pi^1_2)$.
This principle is a strengthening of Jensen's $\diamondsuit_S$ and a weakening of Devlin's $\diamondsuit^\sharp_S$.
For $\kappa$ a successor cardinal, we have $\dl^*_S(\Pi^1_2)\Rightarrow\diamondsuit^*_S$ but not $\diamondsuit^*_S\Rightarrow\dl^*_S(\Pi^1_2)$ (see Remark~\ref{rmk38} below).
Another crucial difference between the two is that, unlike $\diamondsuit^*_S$, the principle $\dl^*_S(\Pi^1_2)$ is compatible with the set $S$ being ineffable.
In Section~2, we establish the consistency of the new principle, in fact, proving that it follows from an abstract condensation principle that was introduced and studied in \cite{FHl,HolyWuWelch}.
It thus follows that it is possible to force $\dl^*_S(\Pi^1_2)$ to hold over all stationary subsets $S$ of a prescribed regular uncountable cardinal $\kappa$.
It also follows that, in canonical models for Set Theory (including any $L[E]$ model with Jensen's $\lambda$-indexing which is sufficiently iterable and has no subcompact cardinals), $\dl^*_S(\Pi^1_2)$ holds for every stationary subset $S$ of every regular uncountable (including ineffable) cardinal $\kappa$.
Then, in Section~3, the core combinatorial component of our result is proved:
\begin{thmc} Suppose $S$ is a stationary subset of a regular uncountable cardinal $\kappa$.
If $\dl^*_S(\Pi^1_2)$ holds, then, for every analytic quasi-order $Q$ over $\kappa^\kappa$, ${Q}\hookrightarrow_1{\sqc S}$.
\end{thmc}
\section{A Diamond reflecting second-order formulas}
In \cite{MR683163}, Devlin introduced a strong form of the Jensen-Kunen principle $\diamondsuit^+_\kappa$,
which he denoted by $\diamondsuit^{\sharp}_{\kappa}$, and proved:
\begin{fact}[Devlin, {\cite[Theorem~5]{MR683163}}]\label{devlinsthm} In $L$, for every regular uncountable cardinal $\kappa$ that is not ineffable, $\diamondsuit^\sharp_\kappa$ holds.
\end{fact}
\begin{remark}
A subset $S$ of a regular uncountable cardinal $\kappa$ is said to be
\emph{ineffable} iff, for every sequence $\langle Z_\alpha\mid\alpha\in S\rangle$, there exists a subset $Z\subseteq\kappa$, for which $\{\alpha\in S\mid Z\cap\alpha=Z_\alpha\cap\alpha\}$ is stationary.
Note that the collection of non-ineffable subsets of $\kappa$ forms a normal ideal that contains $\{\alpha<\kappa\mid \cf(\alpha)<\alpha\}$ as an element.
Also note that if $\kappa$ is ineffable, then $\kappa$ is strongly inaccessible.
Finally, we mention that by a theorem of Jensen and Kunen, for any ineffable set $S$, $\diamondsuit_S$ holds and $\diamondsuit^*_S$ fails.
\end{remark}
As said before, in this paper, we consider a variation of Devlin's principle compatible with $\kappa$ being ineffable.
Devlin's principle as well as its variation provide us with $\Pi^{1}_{2}$-reflection over structures of the form $\langle \kappa,{\in}, (A_{n})_{n\in \omega} \rangle $.
We now describe the relevant logic in detail.
A $\Pi^{1}_{2}$-sentence $\phi$ is a formula of the form $\forall X\exists Y\varphi$ where $\varphi$ is a first-order sentence over a relational language $\mathcal L$ as follows:
\begin{itemize}
\item[$\bullet$] $\mathcal L$ has a predicate symbol $\epsilon$ of arity $2$;
\item[$\bullet$] $\mathcal L$ has a predicate symbol $\mathbb X$ of arity $m({\mathbb X})$;
\item[$\bullet$] $\mathcal L$ has a predicate symbol $\mathbb Y$ of arity $m({\mathbb Y})$;
\item[$\bullet$] $\mathcal L$ has infinitely many predicate symbols $(\mathbb A_n)_{n\in \omega}$, each $\mathbb A_n$ is of arity $m(\mathbb A_n)$.
\end{itemize}
\begin{defn} For sets $N$ and $x$, we say that \emph{$N$ sees $x$} iff
$N$ is transitive, p.r.-closed, and $x\cup\{x\}\subseteq N$.
\end{defn}
Suppose that a set $N$ sees an ordinal $\alpha$,
and that $\phi=\forall X\exists Y\varphi$ is a $\Pi^{1}_{2}$-sentence, where $\varphi$ is a first-order sentence in the above-mentioned language $\mathcal L$.
For every sequence $(A_n)_{n\in\omega}$ such that, for all $n\in\omega$, $A_n\subseteq \alpha^{m(\mathbb A_n)}$,
we write
$$\langle \alpha,{\in}, (A_{n})_{n\in \omega} \rangle \models_N \phi$$
to express that the two hold:
\begin{enumerate}[(1)]
\item $(A_{n})_{n\in \omega} \in N$;
\item $\langle N,{\in}\rangle\models (\forall X\subseteq \alpha^{m(\mathbb X)})(\exists Y\subseteq \alpha^{m(\mathbb Y)})[\langle \alpha,{\in}, X, Y, (A_{n})_{n\in \omega} \rangle\models \varphi]$,
where:
\begin{itemize}
\item[$\bullet$] $\in$ is the interpretation of $\epsilon$;
\item[$\bullet$] $X$ is the interpretation of $\mathbb X$;
\item[$\bullet$] $Y$ is the interpretation of $\mathbb Y$, and
\item[$\bullet$] for all $n\in\omega$, $A_n$ is the interpretation of $\mathbb A_n$.
\end{itemize}
\end{enumerate}
\begin{convention}\label{conv23}
We write $\alpha^+$ for $|\alpha|^+$,
and write $\langle \alpha,{\in}, (A_{n})_{n\in \omega} \rangle \models \phi$ for
$$\langle \alpha,{\in}, (A_{n})_{n\in \omega} \rangle \models_{H_{\alpha^+}} \phi.$$
\end{convention}
\begin{defn}[Devlin, \cite{MR683163}] Let $\kappa$ be a regular and uncountable cardinal.
$\diamondsuit^\sharp_\kappa$ asserts the existence of a sequence $\vec N=\langle N_\alpha\mid\alpha<\kappa\rangle$ satisfying the following:
\begin{enumerate}[(1)]
\item for every infinite $\alpha<\kappa$, $N_\alpha$ is a set of cardinality $|\alpha|$ that sees $\alpha$;
\item for every $X\subseteq\kappa$, there exists a club $C\subseteq\kappa$ such that, for all $\alpha\in C$, $C\cap\alpha,X\cap\alpha\in N_\alpha$;
\item whenever $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle\models\phi$,
with $\phi$ a $\Pi^1_2$-sentence,
there are stationarily many $\alpha<\kappa$ such that
$\langle \alpha,{\in},(A_n\cap(\alpha^{m(\mathbb A_n)}))_{n\in\omega}\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
\end{defn}
Consider the following variation:
\begin{defn}\label{reflectingdiamond} Let $\kappa$ be a regular and uncountable cardinal, and $S\subseteq\kappa$ stationary.
$\dl^*_S(\Pi^1_2)$ asserts the existence of a sequence $\vec N=\langle N_\alpha\mid\alpha\in S\rangle$ satisfying the following:
\begin{enumerate}[(1)]
\item for every $\alpha\in S$, $N_\alpha$ is a set of cardinality $<\kappa$ that sees $\alpha$;
\item for every $X\subseteq\kappa$, there exists a club $C\subseteq\kappa$ such that, for all $\alpha\in C \cap S$, $X\cap\alpha\in N_\alpha$;
\item whenever $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle\models\phi$,
with $\phi$ a $\Pi^1_2$-sentence,
there are stationarily many $\alpha\in S$ such that $|N_\alpha|=|\alpha|$ and
$\langle \alpha,{\in},(A_n\cap(\alpha^{m(\mathbb A_n)}))_{n\in\omega}\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
\end{defn}
\begin{remark} The choice of notation for the above principle is motivated by \cite[Definition~2.10]{Sh:107} and \cite[Definition~45]{TodoVaan}.
\end{remark}
The goal of this section is to derive $\dl^*_S(\Pi^1_2)$ from an abstract principle
which is both forceable and a consequence of $V=L[E]$, for $L[E]$ an iterable extender model with Jensen $\lambda$-indexing without a subcompact cardinal (see~\cite{MR1860606,MR2081183}).
Note that this covers all $L[E]$ models that can be built so far.
\begin{convention}
The class of ordinals is denoted by $\ord$.
The class of ordinals of cofinality $\mu$ is denoted by $\cof(\mu)$, and
the class of ordinals of cofinality greater than $\mu$ is denoted by $\cof({>}\mu)$.
For a set of ordinals $a$, we write
$\acc(a) := \{\alpha \in a \mid \sup(a \cap \alpha) = \alpha > 0\}$.
$\axiomfont{ZF}^{-}$ denotes $\axiomfont{ZF}$ without the power-set axiom.
The transitive closure of a set $X$ is denoted by $\trcl(X)$,
and the Mostowski collapse of a structure $\mathfrak B$ is denoted by $\clps(\mathfrak B)$.
\end{convention}
\begin{defn}\label{nicefiltration}
Suppose $N$ is a transitive set.
For a limit ordinal $\lambda$, we say that $\vec{M}=\langle M_\beta \mid \beta < \lambda \rangle $
is a \emph{nice filtration} of $N$ iff all of the following hold:
\begin{enumerate}[(1)]
\item $\bigcup_{\beta<\lambda}M_\beta=N$;
\item $\vec M$ is $\in$-increasing, that is, $\alpha<\beta<\lambda\implies M_\alpha\in M_\beta$;
\item $\vec M$ is continuous, that is, for every $\beta\in\acc(\lambda)$, $M_\beta=\bigcup_{\alpha<\beta}M_\alpha$;
\item for all $\beta<\lambda$, $M_\beta$ is a transitive set with $M_\beta\cap\ord=\beta$ and $|M_{\beta}|\le|\beta|+\aleph_0$.
\end{enumerate}
\end{defn}
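A canonical example: for any limit ordinal $\lambda>\omega$, the constructible hierarchy $\langle L_\beta\mid\beta<\lambda\rangle$ is a nice filtration of $L_\lambda$, as each $L_\beta$ is transitive with $L_\beta\cap\ord=\beta$ and $|L_\beta|\le|\beta|+\aleph_0$, and the hierarchy is $\in$-increasing and continuous.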
\begin{convention} \label{vecM}
Whenever $\lambda$ is a limit ordinal, and $\vec M=\langle M_\beta\mid\beta<\lambda\rangle$ is a $\subseteq$-increasing, continuous
sequence of sets, we denote its limit $\bigcup_{\beta<\lambda}M_\beta$ by $M_\lambda$.
\end{convention}
\begin{defn}[Holy-Welch-Wu, \cite{HolyWuWelch}] \label{LCCupto}
Let $\eta < \zeta$ be ordinals.
We say that \emph{local club condensation holds in $(\eta,\zeta)$},
and denote this by $\axiomfont{LCC}(\eta,\zeta)$,
iff there exist a limit ordinal $\lambda\ge\zeta$ and a sequence $\vec{M}=\langle M_\beta \mid \beta < \lambda \rangle $ such that
all of the following hold:
\begin{enumerate}[(1)]
\item $\vec M$ is a \emph{nice filtration} of $M_{\lambda}$;
\item $\langle M_\lambda,{\in}\rangle \models\axiomfont{ZF}^{-}$;
\item For every ordinal $\alpha$ in the open interval $(\eta,\zeta)$ and every sequence $\vec{\mathcal{F}} = \langle (F_{n},k_{n}) \mid n \in \omega \rangle$ in $M_\lambda$ such that,
for all $n \in \omega$, $k_{n} \in \omega$ and $F_{n} \subseteq (M_{\alpha})^{k_{n}}$, there is a sequence
$\vec{\mathfrak{B}} = \langle \mathfrak{B}_{\beta} \mid \beta < |\alpha| \rangle $ in $M_\lambda$ having the following properties:
\begin{enumerate}[(a)]
\item for all $\beta<|\alpha|$, $\mathfrak B_{\beta}$ is of the form $$\langle B_{\beta},{\in}, \vec{M} \restriction (B_{\beta} \cap\ord), (F_n\cap(B_\beta)^{k_n})_{n\in\omega} \rangle;$$
\item for all $\beta<|\alpha|$, $\mathfrak B_{\beta} \prec \langle M_{\alpha},{\in}, \vec{M}\restriction \alpha, (F_n)_{n\in\omega} \rangle$;
\item for all $\beta<|\alpha|$, $\beta\subseteq B_\beta$ and $|B_{\beta}| < |\alpha|$;
\item for all $\beta < |\alpha|$, there exists $\bar{\beta}<\lambda$ such that
$$\clps(\langle B_{\beta},{\in}, \langle B_{\delta} \mid \delta \in B_{\beta}\cap\ord \rangle \rangle) = \langle M_{\bar{\beta}},{\in}, \vec M\restriction \bar{\beta} \rangle;$$
\item $\langle B_\beta\mid\beta<|\alpha|\rangle$ is $\subseteq$-increasing, continuous and converging to $M_\alpha$.
\end{enumerate}
\end{enumerate}
For $\vec{\mathfrak{B}}$ as in Clause~(3) above we say that
\emph{$\vec{\mathfrak{B}}$ witnesses $\axiomfont{LCC}$ at $\alpha$ with respect to $\vec M$ and $\vec{\mathcal{F}}$}.
\end{defn}
\begin{remark}\label{NotationRemark} There are first-order sentences $\psi_0(\dot{\eta},\dot{\zeta})$ and $\psi_{1}(\dot{\eta})$
in the language $\mathcal{L}^*:=\{{\in},\vec{M}, \dot{\eta}, \dot{\zeta}\}$ of set theory augmented by a predicate for a nice filtration and two ordinals such that,
for all $\eta < \zeta \le \lambda$ and $\vec M=\langle M_\beta\mid\beta<\lambda\rangle$:
\begin{itemize}
\item[$\bullet$] $(\langle M_{\lambda},{\in}, \vec{M} \rangle \models \psi_0(\eta,\zeta)) \iff (\vec M\text{ witnesses that }\axiomfont{LCC}(\eta,\zeta)\text{ holds})$, and
\item[$\bullet$] $(\langle M_{\lambda},{\in}, \vec{M} \rangle \models \psi_1(\eta)) \iff (\vec M\text{ witnesses that }\axiomfont{LCC}(\eta,\lambda)\text{ holds})$.
\end{itemize}
Therefore, we will later make an abuse of notation and write $\langle N,{\in}, \vec{M} \rangle \models\axiomfont{LCC}(\eta,\zeta)$
to mean that $\vec M$ is a nice filtration of $N$ witnessing that $\axiomfont{LCC}(\eta,\zeta)$ holds.
\end{remark}
\begin{fact}[Friedman-Holy, implicit in \cite{FHl}] \label{InaccForcing} Assume $\axiomfont{GCH}$.
For every inaccessible cardinal $\kappa$,
there is a set-size cofinality-preserving notion of forcing $\mathbb{P}$ such that, in $V^{\mathbb{P}}$, the three hold:
\begin{enumerate}[(1)]
\item $\axiomfont{GCH}$;
\item there is a nice filtration $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$ of $H_{\kappa^+}$ witnessing that $\axiomfont{LCC}(\omega_{1},\kappa^{+})$ holds;
\item there is a $\Delta_{1}$-formula $\Theta$ and a parameter $ a \subseteq \kappa$ such that the relation $<_\Theta$ defined by ($x <_{\Theta} y$ iff $H_{\kappa^{+}} \models \Theta(x,y,a)$) is a global well-ordering of $H_{\kappa^{+}}$.
\end{enumerate}
\end{fact}
\begin{fact}[Holy-Welch-Wu, {\cite[p.~1362 and \S4]{HolyWuWelch}}]\label{forcing}
Assume $\axiomfont{GCH}$. For every regular cardinal $\kappa$, there is a set-size notion of forcing $\mathbb{P}$
which is $({<}\kappa)$-directed-closed and has the $\kappa^+{\text{-}}cc$ such that, in $V^{\mathbb P}$, the three hold:
\begin{enumerate}[(1)]
\item $\axiomfont{GCH}$;
\item there is a nice filtration $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$ of $H_{\kappa^+}$
witnessing that $\axiomfont{LCC}(\kappa,\kappa^{+})$ holds;
\item there is a $\Delta_{1}$-formula $\Theta$ and a parameter $ a \subseteq \kappa$ such that the relation $<_\Theta$ defined by ($x <_{\Theta} y$ iff $H_{\kappa^{+}} \models \Theta(x,y,a)$) is a global well-ordering of $H_{\kappa^{+}}$.
\end{enumerate}
\end{fact}
The following is an improvement of \cite[Theorem~8]{FHl}.
\begin{fact}[Fernandes, \cite{OnLCC}] \label{NoSubcompact}
Let $L[E]$ be an extender model with Jensen $\lambda$-indexing.
Suppose that, for every $\alpha \in \ord$, the premouse $L[E] || \alpha $ is weakly iterable.\footnote{Here, $L[E]||\alpha$ stands for $\langle J_{\alpha}^{E}, \in, E\restriction \omega \alpha, E_{\omega \alpha} \rangle$,
following the notation from \cite{MR1876087}. For the definition of \emph{weakly iterable}, see \cite[p.~311]{MR1876087}.}
Then, for every infinite cardinal $\kappa$, the following are equivalent:
\begin{enumerate}[(a)]
\item $\langle L_\beta[E]\mid\beta<\kappa^{+} \rangle$ witnesses that $\axiomfont{LCC}(\kappa^{+},\kappa^{++})$ holds;
\item $L[E] \models``\kappa\text{ is not a subcompact cardinal}"$.
\end{enumerate}
In addition, for every infinite limit cardinal $\kappa$, $\langle L_\beta[E]\mid\beta<\kappa^{+} \rangle$ witnesses that $\axiomfont{LCC}(\kappa,\kappa^{+})$ holds.
\end{fact}
\begin{lemma}\label{obvious} Suppose that $\lambda$ is a limit ordinal and that $\vec{M} = \langle M_\beta \mid \beta< \lambda \rangle$ is a nice filtration of $H_\lambda$.
Then, for every infinite cardinal $\theta\le\lambda$, $M_\theta\subseteq H_\theta$.
\end{lemma}
\begin{proof} Let $\theta\le\lambda$ be an infinite cardinal.
By Clause~(4) of Definition~\ref{nicefiltration}, for all $\beta < \theta$, the set $M_{\beta}$ is transitive, $M_{\beta} \cap\ord = \beta$, and $|M_{\beta}| = |\beta| < \theta $.
It thus follows that $M_{\theta} = \bigcup_{\beta < \theta} M_{\beta} \subseteq H_{\theta}$.
\end{proof}
Motivated by the property of acceptability that holds in extender models, we define the following property for nice filtrations:
\begin{defn} Given a nice filtration $\vec{M} = \langle M_\beta \mid \beta< \kappa^{+} \rangle$ of $H_{\kappa^{+}}$, we say that $\vec{M}$ is \emph{eventually slow at $\kappa$}
iff there exists an infinite cardinal $\mu < \kappa$ such that, for every cardinal $\theta$ with $\mu<\theta\le\kappa$, $M_{\theta} = H_{\theta}$.
\end{defn}
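For instance (anticipating the corollary at the end of this section), if $L[E]$ is an extender model as in Fact~\ref{NoSubcompact}, then acceptability yields that $L_\theta[E]=H_\theta$ for every infinite cardinal $\theta$, so that, for every regular uncountable cardinal $\kappa$, the nice filtration $\langle L_\beta[E]\mid\beta<\kappa^+\rangle$ of $H_{\kappa^+}$ is eventually slow at $\kappa$. Indeed, any infinite cardinal $\mu<\kappa$ (e.g., $\mu:=\omega$) serves as a witness, as
$$M_\theta=L_\theta[E]=H_\theta\quad\text{for every cardinal }\theta\text{ with }\mu<\theta\le\kappa.$$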
\begin{lemma}\label{MvsH}
Suppose that $\vec{M}=\langle M_{\beta} \mid \beta < \kappa^{+} \rangle $ is a nice filtration of $H_{\kappa^{+}}$ that is eventually slow at $\kappa$.
Then, for a tail of $\alpha < \kappa$,
for every sequence $\vec{\mathcal{F}} = \langle (F_{n},k_{n}) \mid n \in \omega \rangle$ such that, for all $n \in \omega$, $k_{n} \in \omega$ and $F_{n} \subseteq (M_{\alpha^+})^{k_{n}}$,
there is $\vec{\mathfrak{B}}$ that witnesses $\axiomfont{LCC}$ at $\alpha^+$ with respect to $\vec M$ and $\vec{\mathcal F}$.
\end{lemma}
\begin{proof} Fix an infinite cardinal $\mu < \kappa$ such that, for every cardinal $\theta$ with $\mu<\theta\le\kappa$, $M_{\theta} = H_{\theta}$.
Let $\alpha \in (\mu,\kappa)$ be arbitrary.
Now, given a sequence $\vec{\mathcal{F}}$ as in the statement of the lemma,
build by recursion a $\subseteq$-increasing and continuous sequence $\langle\mathfrak{A}_{\gamma} \mid \gamma< \alpha^{+}\rangle$
of elementary submodels of $\langle M_{\alpha^+},{\in},\vec{M}\restriction\alpha^+, (F_n)_{n\in\omega} \rangle$, such that:
\begin{itemize}
\item[$\bullet$] for each $\gamma < \alpha^{+}$, $|A_{\gamma}|<\alpha^{+}$, and
\item[$\bullet$] $\bigcup_{\gamma < \alpha^{+}} A_{\gamma} = H_{\alpha^{+}}$ (note that $H_{\alpha^{+}}=M_{\alpha^{+}}$, as $\mu<\alpha^+\le\kappa$).
\end{itemize}
By a standard argument, $C:=\{ \gamma<\alpha^+\mid A_\gamma=M_\gamma\}$ is a club in $\alpha^+$.
Let $\{ \gamma_\beta\mid\beta<\alpha^+\}$ denote the increasing enumeration of $C$. Denote $\mathfrak B_\beta:=\mathfrak A_{\gamma_\beta}$.
Then $\vec{\mathfrak B}=\langle\mathfrak{B}_\beta \mid \beta<\alpha^+\rangle$
is an $\in$-increasing and continuous sequence of elementary submodels of $\langle M_{\alpha^{+}},{\in},\vec M\restriction\alpha^+,(F_n)_{n\in\omega}\rangle$, such that,
for all $\beta < \alpha^{+}$, $\clps(\mathfrak B_{\beta}) = \langle M_{\gamma_{\beta}},{\in},\ldots\rangle$.
\end{proof}
In the next two lemmas we find sufficient conditions for nice filtrations $\langle M_\beta \mid \beta < \kappa^{+} \rangle $ to be eventually slow at $\kappa$.
\begin{lemma}\label{MH2} Suppose that $\kappa$ is a successor cardinal and that $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$ is
a nice filtration of $H_{\kappa^+}$ witnessing that $\axiomfont{LCC}(\kappa,\kappa^{+})$ holds.
Then $\vec{M}$ is eventually slow at $\kappa$.
\end{lemma}
\begin{proof}
As $\kappa$ is a successor cardinal, $\vec{M}$ is eventually slow at $\kappa$ iff $M_{\kappa}=H_{\kappa}$.
Thus, by Lemma~\ref{obvious}, it suffices to verify that $H_{\kappa}\subseteq M_{\kappa}$.
To this end, let $x \in H_{\kappa}$, and we will find $\beta < \kappa$ such that $ x \in M_{\beta}$.
Set $\theta := |\trcl\{x\}|$ and fix a witnessing bijection $f:\theta \leftrightarrow \trcl\{x\}$.
As $H_{\kappa^{+}}=M_{\kappa^{+}}=\bigcup_{\alpha< \kappa^{+}} M_{\alpha}$, we may fix $\alpha < \kappa^+$ such that $\{f,\theta,\trcl\{x\}\} \subseteq M_{\alpha}$.
Let $\vec{\mathfrak{B}}$ witness $\axiomfont{LCC}(\kappa,\kappa^+)$ at $\alpha$ with respect to $\vec M$ and $\vec{\mathcal{F}}:=\langle (f,2) \rangle$.
Let $\beta<\kappa^+$ be such that $\clps(\mathfrak B_{\theta+1})=\langle M_{\beta},{\in},\ldots\rangle$.
\begin{claim} $\theta<\beta<\kappa$.
\end{claim}
\begin{proof}
By Definition~\ref{LCCupto}(3)(c), $\theta+1\subseteq B_{\theta+1}$, so that $\theta<\beta$.
By Clause~(4) of Definition~\ref{nicefiltration} and by Definition~\ref{LCCupto}(3)(c), $|\beta|=|M_\beta|=|B_{\theta+1}|<|\alpha|\le\kappa$.
\end{proof}
Now, as $$\mathfrak B_{\theta+1}\prec \langle H_{\kappa^{+}},{\in}, \vec{M},F_0 \rangle \models \exists y(\forall\alpha\forall\delta(F_{0}(\alpha,\delta) \leftrightarrow (\alpha,\delta) \in y)),$$
we have $f\in B_{\theta+1}$. Since $\dom(f)=\theta\subseteq B_{\theta+1}$, it follows that $\rng(f)\subseteq B_{\theta+1}$. But $\rng(f)=\trcl(\{x\})$ is a transitive set, so that the Mostowski collapsing map $\pi:B_{\theta+1}\rightarrow M_\beta$ is the identity over $\trcl(\{x\})$,
meaning that $x\in\trcl(\{x\}) \subseteq M_{\beta}$.
\end{proof}
\begin{lemma}\label{slow} Suppose that $\kappa$ is an inaccessible cardinal, $\mu < \kappa$ and $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$
witnesses that $\axiomfont{LCC}(\mu,\kappa^{+})$ holds.
Then $\mu$ witnesses that $\vec{M}$ is eventually slow at $\kappa$.
\end{lemma}
\begin{proof} Suppose not. It follows from Lemma~\ref{obvious} that we may fix an infinite cardinal $\theta$ with $\mu\le\theta<\kappa$
along with $x \in H_{\theta^{+}} \setminus M_{\theta^{+}}$.
Fix a surjection $f:\theta\rightarrow\trcl(\{x\})$.
Let $\alpha < \kappa^{+}$ be the least ordinal such that $x \in M_{\alpha}$, so that $\mu<\theta^+<\alpha<\kappa^+$.
Let $\vec{\mathfrak{B}}$ witness $\axiomfont{LCC}(\mu,\kappa^+)$ at $\alpha$ with respect to $\vec M$ and $\vec{\mathcal{F}}:=\langle (f,2) \rangle$.
Let $\beta<\kappa^+$ be such that $\clps(\mathfrak B_{\theta+1})=\langle M_{\beta},{\in},\ldots\rangle$.
\begin{claim} $\beta<\alpha$.
\end{claim}
\begin{proof}
By Clause~(4) of Definition~\ref{nicefiltration} and by Definition~\ref{LCCupto}(3)(c), $|\beta|=|M_\beta|=|B_{\theta+1}|<|\alpha|$,
and hence $\beta<\alpha$.
\end{proof}
By the same argument used in the proof of Lemma~\ref{MH2}, $x \in M_{\beta}$,
contradicting the minimality of $\alpha$.
\end{proof}
\begin{question}
Notice that if $\kappa$ is an inaccessible cardinal and $\vec{M}=\langle M_{\beta} \mid \beta < \kappa^{+} \rangle $ is such that $\langle H_{\kappa^{+}}, \in, \vec{M} \rangle \models \axiomfont{LCC}(\kappa,\kappa^{+})$,
then, for club many $\beta < \kappa$, $M_{\beta}=H_{\beta}$.
We ask: is it consistent that $\kappa$ is an inaccessible cardinal, $\vec{M}=\langle M_{\beta} \mid \beta < \kappa^{+} \rangle $ is such that $\langle H_{\kappa^{+}},{\in}, \vec{M} \rangle \models \axiomfont{LCC}(\kappa,\kappa^{+})$, yet, for stationarily many $\beta < \kappa$, $M_{\beta^{+}} \subsetneq H_{\beta^{+}}$?
\end{question}
\begin{lemma} \label{HvecM1} Suppose that $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$ is a nice filtration of $H_{\kappa^+}$.
Given a sequence $\vec{\mathcal{F}} = \langle (F_{n},k_{n}) \mid n \in \omega \rangle$ such that, for all $n \in \omega$, $k_{n} \in \omega$ and $F_{n} \subseteq (H_{\kappa^+})^{k_{n}}$,
there are club many $\delta<\kappa^+$ such that
$\langle M_{\delta},{\in}, \vec{M}\restriction \delta, (F_n\cap(M_\delta)^{k_n})_{n\in\omega} \rangle \prec \langle M_{\kappa^+}, {\in}, \vec{M}, (F_n)_{n\in\omega}\rangle $.
\end{lemma}
\begin{proof} Build by recursion an $\in$-increasing continuous sequence $\vec{\mathfrak B}=\langle\mathfrak{B}_{\beta} \mid \beta < \kappa^{+}\rangle$
of elementary submodels of $\langle M_{\kappa^+},{\in},\vec{M}, (F_n)_{n\in\omega} \rangle$, such that:
\begin{itemize}
\item[$\bullet$] for each $\beta < \kappa^{+}$, $|B_{\beta}|<\kappa^{+}$, and
\item[$\bullet$] $\bigcup_{\beta < \kappa^{+}} B_{\beta} = H_{\kappa^{+}}$.
\end{itemize}
By a standard back-and-forth argument, utilizing the continuity of $\vec{\mathfrak B}$ and $\vec M$, $\{ \delta<\kappa^+\mid B_\delta=M_\delta\}$ is a club in $\kappa^+$.
\end{proof}
\begin{defn}\label{formallcc}
Suppose $\vec{M}=\langle M_{\beta} \mid \beta < \lambda \rangle $ is a nice filtration of $M_{\lambda}$ for some limit ordinal $\lambda>0$.
Given $\alpha < \lambda$ and $\vec{\mathcal F}=\langle (F_{n},k_{n} ) \mid n \in \omega \rangle$ in $M_{\lambda}$ such that, for each $n \in \omega$, $k_n\in\omega$ and $F_{n} \subseteq(M_{\alpha})^{k_{n}}$,
for every sequence $\vec{\mathfrak{B}} = \langle \mathfrak{B}_{\beta} \mid \beta < |\alpha| \rangle $ in $M_\lambda$
and every letter $l\in\{a,b,c,d,e\}$, we let $\psi_l(\vec{\mathfrak B},\vec{\mathcal{F}},\alpha,\vec{M}\restriction(\alpha+1))$ be some formula expressing that Clause~(3)(l) of Definition~\ref{LCCupto} holds.
\end{defn}
The following forms the main result of this section.
\begin{thm}\label{diamond_from_lcc}
Suppose that $\kappa$ is a regular uncountable cardinal,
and $\vec{M}=\langle M_\beta\mid\beta<\kappa^+\rangle$ is a nice filtration of $H_{\kappa^+}$
that is eventually slow at $\kappa$, and witnesses that $\axiomfont{LCC}(\kappa,\kappa^+)$ holds.
Suppose further that there is a subset $ a\subseteq\kappa$ and a formula $ \Theta \in \Sigma_{\omega}$ which defines a well-order $<_\Theta$ in $H_{\kappa^{+}}$ via $ x <_\Theta y $ iff $ H_{\kappa^{+}} \models \Theta(x,y,a)$. Then, for every stationary $ S \subseteq \kappa$, $\dl^*_S(\Pi^{1}_{2})$ holds.
\end{thm}
\begin{proof} Let $S'\subseteq\kappa$ be stationary.
We shall prove that $\dl^*_{S'}(\Pi^{1}_{2})$ holds
by adjusting Devlin's proof of Fact~\ref{devlinsthm}.
As a first step, we identify a subset $S$ of $S'$ of interest.
\begin{claim} There exists a stationary non-ineffable subset $S\subseteq S'\setminus\omega$ such that, for every $\alpha\in S'\setminus S$, $|H_{\alpha^+}|<\kappa$.
\end{claim}
\begin{proof} If $S'$ is non-ineffable, then let $S:=S'\setminus\omega$, so that $H_{\alpha^+}=H_\omega$ for all $\alpha\in S'\setminus S$.
From now on, suppose that $S'$ is ineffable.
In particular, $\kappa$ is strongly inaccessible and $|H_{\alpha^+}|<\kappa$ for every $\alpha<\kappa$.
Let $S:=S'\setminus(\omega\cup T)$, where $$T:=\{\alpha\in\kappa\cap\cof({>}\omega)\mid S'\cap\alpha\text{ is stationary in }\alpha\}.$$
To see that $S$ is stationary, let $E$ be an arbitrary club in $\kappa$.
$\blacktriangleright$ If $S'\cap\cof(\omega)$ is stationary, then since $S'\cap\cof(\omega)\subseteq S$, we infer that $S\cap E\neq\emptyset$.
$\blacktriangleright$ If $S'\cap\cof(\omega)$ is non-stationary, then fix a club $C\subseteq E$ disjoint from $S'\cap\cof(\omega)$,
and let $\alpha:=\min(\acc(C)\cap S')$. Then $\cf(\alpha)>\omega$ and $C\cap\alpha$ is a club in $\alpha$ disjoint from $S'$, so that $\alpha\notin T$.
Altogether, $\alpha\in S\cap E$.
To see that $S$ is non-ineffable, we define a sequence $\langle Z_\alpha\mid\alpha\in S\rangle$, as follows.
For every $\alpha\in S$, fix a closed and cofinal subset $Z_\alpha$ of $\alpha$ with $\otp(Z_\alpha)=\cf(\alpha)$ such that, if $\cf(\alpha)>\omega$,
then the club $Z_\alpha$ is disjoint from $S'\cap\alpha$.
Towards a contradiction, suppose that $Z\subseteq\kappa$ is a set for which $\{\alpha\in S\mid Z\cap\alpha=Z_\alpha\}$ is stationary.
Clearly, $Z$ is closed and cofinal in $\kappa$, so that $Z\cap S'$ is stationary, $\otp(Z\cap S')=\kappa$ and hence $D:=\{\alpha<\kappa \mid \otp(Z\cap S'\cap\alpha)=\alpha>\omega\}$ is a club.
Pick $\alpha\in D\cap S$ such that $Z\cap\alpha=Z_\alpha$. As $$\cf(\alpha)=\otp(Z_\alpha)=\otp(Z\cap\alpha)\ge\otp(Z\cap S'\cap\alpha)=\alpha>\omega,$$
it must be the case that $Z_\alpha$ is a club disjoint from $S'\cap\alpha$, while $Z_\alpha=Z\cap\alpha$ and $Z\cap S'\cap\alpha\neq\emptyset$. This is a contradiction.
\end{proof}
Let $S$ be given by the preceding claim.
We shall focus on constructing a sequence $\langle N_{\alpha} \mid \alpha \in S \rangle $ witnessing $\dl^*_{S}(\Pi^{1}_{2})$
such that, in addition, $|N_\alpha|=|\alpha|$ for every $\alpha\in S$.
It will then immediately follow that the sequence $\langle N'_\alpha\mid\alpha\in S'\rangle$ defined by letting $N_\alpha':=N_\alpha$ for $\alpha\in S$,
and $N'_\alpha:=H_{\alpha^+}$ for $\alpha\in S'\setminus S$ will witness the validity of $\dl^*_{S'}(\Pi^{1}_{2})$. As $\vec M$ is eventually slow at $\kappa$,
we may also assume that, for every $\alpha\in S$, $M_{\alpha^{+}}=H_{\alpha^{+}}$ and the conclusion of Lemma~\ref{MvsH} holds true.\footnote{For all the small $\alpha\in S'\setminus S$ such that $M_{\alpha^+}\neq H_{\alpha^+}$, simply let $N'_\alpha:=N_{\min(S)}$.}
If $\kappa$ is a successor cardinal, we may moreover assume that, for every $\alpha\in S$, $M_{\alpha^+}=H_\kappa$.
Here we go.
As $S$ is non-ineffable, fix a sequence $\vec{Z} = \langle Z_{\alpha} \mid \alpha \in S \rangle $ with $Z_\alpha\subseteq\alpha$ for all $\alpha\in S$,
such that, for every $Z\subseteq\kappa$, $\{\alpha\in S\mid Z\cap\alpha=Z_\alpha\}$ is nonstationary.
In the course of the rest of the proof, we shall occasionally take witnesses to $\axiomfont{LCC}$ at some ordinal $\alpha$ with respect to
$\vec M$ and a finite sequence $\vec{\mathcal F}=\langle (F_n,k_n)\mid n\in 4\rangle$; for this, we introduce the following piece of notation for any positive $m<\omega$, $X\subseteq(\kappa^+)^m$ and $\alpha<\kappa^+$:
$$\vec{\mathcal F}_{X,\alpha}:=\langle (X\cap\alpha^m,m),(a\cap\alpha,1),(S\cap\alpha,1),(\vec Z\restriction\alpha,2)\rangle.$$
Next, for each $\alpha \in S$, we define $S_{\alpha}$ to be the set of all $\beta \in \alpha^{+}$ satisfying the following list of conditions:
\begin{enumerate}[(i)]
\item $\langle M_{\beta},{\in}, \vec{M} \restriction \beta \rangle \models \axiomfont{LCC}(\alpha,\beta)$,\footnote{Note that $\beta$ is not needed to define $\axiomfont{LCC}(\alpha,\beta)$ in the structure $\langle M_{\beta},{\in}, \vec{M} \restriction \beta \rangle$.
Indeed, by $\axiomfont{LCC}(\alpha,\beta)$ we mean $\psi_{1}(\alpha)$ as in Remark~\ref{NotationRemark}.}
\item $\langle M_{\beta},{\in}\rangle\models \axiomfont{ZF}^{-} ~ \& ~ \alpha \text{ is the largest cardinal}$,\footnote{In particular, $\langle M_{\beta},{\in}\rangle\models\alpha\text{ is uncountable}$.}
\item $\langle M_{\beta},{\in}\rangle\models \alpha\text{ is regular}\ \&\ S \cap \alpha ~ \text{is stationary}$,
\item $\langle M_{\beta},{\in}\rangle\models \Theta(x,y,a\cap \alpha) ~ \text{defines a global well-order}$,
\item $ \vec{Z} \restriction (\alpha + 1) \notin M_{\beta}$.
\end{enumerate}
Then, we consider the set
$$D := \{ \alpha \in S \mid S_{\alpha} \neq \emptyset\ \& \ S_{\alpha} \text{ has no largest element} \}.$$
Define a function $f:S\rightarrow\kappa$ as follows.
For every $\alpha \in D $, let $f(\alpha) := \sup(S_{\alpha})$; for every $\alpha\in S\setminus D$,
let $f(\alpha)$ be the least $\beta< \kappa$ such that $M_{\beta}$ sees $\alpha$ and $\vec{Z} \restriction ( \alpha+1 ) \in M_{\beta}$.
\begin{claim} \label{welldefined} $f$ is well-defined. Furthermore, for all $\alpha\in S $, $\alpha<f(\alpha) < \alpha^+$.
\end{claim}
\begin{proof} Let $\alpha\in S$ be arbitrary. The analysis splits into two cases:
$\blacktriangleright$ Suppose $\alpha\in D$. As $\alpha \in S $, we have $ \bigcup_{\beta < \alpha^{+}} M_{\beta} = M_{\alpha^{+}}=H_{\alpha^{+}}$, and hence we may find some $\beta<\alpha^{+}$ such that $\vec{Z}\restriction(\alpha+1) \in M_{\beta}$.
Then, condition~(v) in the definition of $ S_{\alpha}$ implies that $\alpha<f(\alpha)\le\beta<\alpha^+$.
$\blacktriangleright$ Suppose $\alpha\notin D$. As $\alpha\in S$, let us fix $\langle \mathfrak{B}_{\beta} \mid \beta < \alpha^+ \rangle $ that witnesses $\axiomfont{LCC}$ at $\alpha^+$ with respect to $\vec M$ and $\vec{\mathcal F}_{\emptyset,\alpha^+}$.
Set $\beta:=\alpha+2$ and fix ${\bar\beta}<\kappa^+$ such that $\clps(\mathfrak B_\beta)=\langle M_{\bar\beta},\ldots\rangle$.
As $\beta\subseteq B_\beta$ and $|B_\beta|<\alpha^+$, by Clause~(4) of Definition~\ref{nicefiltration}, $\beta\le{\bar\beta}<\alpha^+$.
In addition, $\vec{Z} \restriction ( \alpha+1 ) \in M_{{\bar\beta}}$ and there exists an elementary embedding from $\langle M_{{\bar\beta}},{\in}\rangle$ to $\langle H_{\alpha^{+}},{\in}\rangle$,
so that $M_{\bar\beta}$ sees $\alpha$.
Altogether, $\alpha<f(\alpha)\le{\bar\beta}<\alpha^+$.
\end{proof}
Define $\vec{N}= \langle N_{\alpha} \mid \alpha \in S \rangle $ by letting $N_{\alpha} := M_{f(\alpha)}$ for all $\alpha\in S$.
It follows from Definition~\ref{nicefiltration}(4) and the preceding claim that $|N_\alpha|=|\alpha|$ for all $\alpha\in S$.
\begin{claim}\label{claim2163} Let $X\subseteq\kappa$. Then there exists a club $C\subseteq\kappa$ such that, for all $\alpha\in C\cap S$, $X\cap\alpha \in N_\alpha$.
\end{claim}
\begin{proof} By Lemma~\ref{HvecM1}, we now fix $\delta < \kappa^{+}$ such that $\kappa,S,a \in M_{\delta} $ and $\langle M_{\delta},{\in},\vec M\restriction\delta\rangle \prec \langle M_{\kappa^{+}},{\in}, \vec{M}\rangle$.
Note that $|\delta|=\kappa$. Let $\vec{\mathfrak B}=\langle\mathfrak B_\alpha\mid\alpha<\kappa\rangle$ witness $\axiomfont{LCC}$ at $\delta$ with respect to $\vec M$ and $\vec{\mathcal{F}}_{X,\kappa}$.
\begin{subclaim} $C:= \{ \alpha < \kappa \mid B_{\alpha} \cap \kappa = \alpha \}$ is a club in $\kappa$.
\end{subclaim}
\begin{proof} To see that $C$ is closed in $\kappa$, fix an arbitrary $\alpha<\kappa$ with $\sup(C \cap \alpha)= \alpha >0$.
As $\langle B_\beta\mid\beta<\kappa\rangle$ is $\subseteq$-increasing and continuous, we have
$$ \alpha = \bigcup_{\beta \in (C\cap \alpha)} \beta = \bigcup_{\beta \in (C \cap \alpha)} (B_{\beta}\cap \kappa) = \bigcup_{\beta < \alpha} (B_{\beta} \cap \kappa) = B_{\alpha} \cap \kappa.$$
To see that $C$ is unbounded in $\kappa$, fix an arbitrary $\varepsilon<\kappa$, and we shall find $\alpha\in C$ above $\varepsilon$.
Recall that, by Clause~(3)(c) of Definition~\ref{LCCupto}, for each $\beta<\kappa$, $\beta\subseteq B_\beta$ and $|B_\beta|<\kappa$.
It follows that we may recursively construct an increasing sequence of ordinals $ \langle \alpha_{n} \mid n < \omega \rangle $ such that:
\begin{itemize}
\item[$\bullet$] $\alpha_0:=\sup(B_{\varepsilon} \cap \kappa)$, and, for all $n<\omega$:
\item[$\bullet$] $\sup(B_{\alpha_n} \cap \kappa)<\alpha_{n+1}<\kappa$.
\end{itemize}
In particular, $\sup(B_{\alpha_{n}} \cap \kappa ) \in \alpha_{n+1}$ for all $n<\omega$.
Consequently, for $\alpha:=\sup_{n<\omega}\alpha_{n}$, we have that $\alpha<\kappa$, and
$$ B_{\alpha} \cap \kappa = \bigcup_{n <\omega} (B_{\alpha_{n}} \cap \kappa) \subseteq \bigcup_{n<\omega} \alpha_{n+1} = \alpha \subseteq \bigcup_{n <\omega} (B_{\alpha_{n+1}} \cap \kappa) = B_{\alpha} \cap \kappa,$$
so that $\alpha\in C\setminus(\varepsilon+1)$.
\end{proof}
To see that the club $C$ is as sought, let $\alpha \in C\cap S$ be arbitrary,
and we shall verify that $X\cap\alpha\in N_\alpha$.
Let $\beta(\alpha)$ be such that $\clps(\mathfrak B_\alpha)=\langle M_{\beta(\alpha)},{\in},\ldots\rangle$,
and let $j_{\alpha}:M_{\beta(\alpha)} \rightarrow B_{\alpha}$ denote the inverse of the collapsing map.
As $\alpha\in C$, $j_\alpha(\alpha)=\kappa$,
and $j^{-1}_{\alpha}(Y) = Y \cap \alpha$ for all $Y \in B_{\alpha} \cap \mathcal{P}(\kappa)$.
\begin{subclaim}\label{nsclaim} For every $\beta<\kappa^+$ such that $\vec Z\restriction(\alpha+1)\in M_\beta$, $\beta>\beta(\alpha)$.
\end{subclaim}
\begin{proof} Suppose not, so that $\vec{Z} \restriction (\alpha+1) \in M_{\beta(\alpha)} $.
As $\langle M_{\delta}, {\in} \rangle \prec \langle M_{\kappa^{+}}, {\in} \rangle $, we infer that
\[ \langle M_{\delta},{\in}\rangle\models \forall Z \subseteq \kappa ~ \exists E \text{ club in } \kappa ~ \forall \gamma \in E \cap S \ (Z\cap \gamma \neq Z_{\gamma}),\]
and hence
\[ \langle M_{\beta(\alpha)},{\in}\rangle\models \forall Z \subseteq \alpha ~ \exists E \text{ club in } \alpha ~ \forall \gamma \in E \cap S \ (Z\cap \gamma \neq Z_{\gamma}).\]
In particular, using $ Z := Z_{\alpha}$, we find some $E$ such that
\[ \langle M_{\beta(\alpha)},{\in}\rangle\models (E \text{ is a club in } \alpha)\land \forall \gamma \in E \cap S \ (Z_{\alpha} \cap \gamma \neq Z_{\gamma}).\]
Pushing forward with $ E^* := j_{\alpha}(E)$ and $Z^{*}:= j_{\alpha}(Z_{\alpha})$, we infer that
\[ \langle M_{\delta},{\in}\rangle \models (E^* \text{ is a club in } \kappa) \land \forall \gamma \in E^* \cap S \ (Z^{*} \cap \gamma \neq Z_{\gamma}).\]
Then $Z^{*}\cap \alpha = j_{\alpha}(Z_{\alpha}) \cap \alpha = Z_{\alpha}$, and hence $\alpha\notin E^*$ (recall that $\alpha\in S$).
Likewise $E^*\cap\alpha=j_\alpha(E)\cap\alpha=E$, and hence $\alpha\in\acc(E^*)\subseteq E^*$. This is a contradiction.
\end{proof}
Now, since $\vec{\mathfrak B}$ witnesses $\axiomfont{LCC}$ at $\delta$ with respect to $\vec M$ and $\vec{\mathcal{F}}_{X,\kappa}$, for each $Y$ in $\{X,a,S\}$, we have that
$$\langle B_{\alpha},{\in},Y\cap B_\alpha \rangle \prec \langle M_{\kappa^{+}},{\in}, Y \rangle \models \exists y\forall z ((z \in y)\leftrightarrow (z\in\kappa \land Y(z))),$$
therefore each of $X,a,S$ is a definable element of $\mathfrak B_{\alpha}$. So, as, for all $Y \in B_{\alpha} \cap \mathcal{P}(\kappa)$, $j^{-1}_{\alpha}(Y) = Y \cap \alpha$,
we infer that $X\cap\alpha$, $a\cap \alpha$, and $S \cap \alpha$ are all in $M_{\beta(\alpha)}$.
We will show that $\beta(\alpha) < f(\alpha)$, from which it will follow that $X \cap \alpha \in N_{\alpha}$.
\begin{subclaim}\label{sc61632} $\beta(\alpha) < f(\alpha)$.
\end{subclaim}
\begin{proof} Naturally, the analysis splits into two cases:
$\blacktriangleright$ Suppose $\alpha \notin D$. By definition of $f(\alpha)$ and by Subclaim~\ref{nsclaim}, $\beta(\alpha) < f(\alpha)$.
$\blacktriangleright$ Suppose $\alpha \in D$.
As $\mathfrak B_\alpha\prec \langle M_{\delta},{\in}, \vec{M}\restriction\delta,X,a,S,\vec Z\rangle $
and $\rng(j_\alpha)=B_\alpha$, we infer that
$j_{\alpha}:M_{\beta(\alpha)}\rightarrow M_\delta$ forms an elementary embedding
from $\langle M_{\beta(\alpha)},{\in},\ldots\rangle$ to $\langle M_{\delta},{\in}, \vec{M}\restriction\delta,X,a,S,\vec Z\rangle $ with $j_{\alpha}(\alpha)=\kappa$.
As $\kappa,S,a\in M_{\delta}$ and $\langle M_{\delta}, {\in}, \vec{M} \restriction \delta \rangle \prec \langle M_{\kappa^+}, {\in}, \vec{M} \rangle$, we have:
\begin{enumerate}[(I)]
\item $\langle M_{\delta},{\in}, \vec{M} \restriction \delta \rangle \models \axiomfont{LCC}(\kappa,\delta) $,
\item $\langle M_{\delta},{\in}\rangle\models \axiomfont{ZF}^{-} ~ \& ~ \kappa \text{ is the largest cardinal}$,
\item $\langle M_{\delta},{\in}\rangle\models \kappa\text{ is regular}\ \&\ S \cap \kappa \text{ is stationary}$,
\item $\langle M_{\delta},{\in}\rangle\models \Theta(x,y,a\cap \kappa) ~ \text{defines a global well-order}$.
\end{enumerate}
It now follows that $\beta(\alpha)$ satisfies clauses (i),(ii),(iii) and (iv) of the definition of $S_\alpha$.
Together with Subclaim~\ref{nsclaim}, then, $\beta(\alpha) \in S_{\alpha}$. So,
by definitions of $f$ and $D$, $\beta(\alpha)<f(\alpha)$.
\end{proof}
This completes the proof of Claim~\ref{claim2163}.
\end{proof}
We are left with addressing Clause~(3) of Definition~\ref{reflectingdiamond}.
\begin{claim}\label{claim(3)}
The sequence $\langle N_\alpha\mid\alpha\in S\rangle$ reflects $\Pi^{1}_{2} $-sentences.
\end{claim}
\begin{proof}
We need to show that whenever $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle\models\phi$,
with $\phi=\forall X\exists Y\varphi$ a $\Pi^1_2$-sentence,
for every club $E\subseteq\kappa$, there is $\alpha\in E\cap S$, such that
$$\langle \alpha,{\in},(A_n\cap(\alpha^{m(\mathbb A_n)}))_{n\in\omega}\rangle\models_{N_{\alpha}}\phi.$$
But by adding $E$ to the list $(A_n)_{n\in\omega}$ of predicates,
and by slightly extending the first-order formula $\varphi$ to also assert that $E$ is unbounded,
we would get that any ordinal $\alpha$ satisfying the above will also satisfy that $\alpha$ is an accumulation point of the closed set $E$, so that $\alpha\in E$.
It follows that if any $\Pi^1_2$-sentence valid in a structure of the form $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle$
reflects to some ordinal $\alpha'\in S$,
then any $\Pi^1_2$-sentence valid in a structure of the form $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle$
reflects stationarily often in $S$.
Consider a $\Pi^1_2$-formula $\forall X\exists Y\varphi$,
with integers $p,q$ such that $X$ is a $p$-ary second-order variable and $Y$ is a $q$-ary second-order variable.
Suppose $\vec A=(A_n)_{n\in\omega}$ is a sequence of finitary predicates on $\kappa$, and $\langle\kappa,{\in},\vec A\rangle\models \forall X\exists Y \varphi$.
By the reduction established in the proof of Proposition~\ref{Prop2.4} below, we may assume that $\vec A$ consists of a single predicate $A_0$ of arity, say, $m_0$.
Recalling Convention~\ref{conv23} and since $M_{\kappa^+}=H_{\kappa^+}$, this altogether means that
$$\langle\kappa,{\in},A_0\rangle\models_{M_{\kappa^+}} \forall X\exists Y \varphi.$$
Let $\gamma$ be the least ordinal such that $\vec{Z},A_0,S\in M_\gamma$. Note that $\kappa<\gamma<\kappa^+$.
Let $\Delta$ denote the set of all $\delta \le \kappa^{+}$ such that:
\begin{enumerate}[(a)]
\item $\langle M_{\delta},{\in}, \vec{M}\restriction \delta \rangle \models \axiomfont{LCC}(\kappa,\delta)$,\footnote{In particular, $\delta>\kappa$.}
\item $\langle M_{\delta},{\in}\rangle\models \axiomfont{ZF}^{-} \ \&\ \kappa ~ \text{is the largest cardinal}$,
\item $\langle M_{\delta},{\in}\rangle\models \kappa\text{ is regular}\ \&\ S \text{ is stationary in } \kappa$,
\item $\langle M_{\delta},{\in}\rangle\models \Theta(x,y,a) ~ \text{defines a global well-order}$,
\item $\langle\kappa,{\in},A_0\rangle\models_{M_{\delta}} \forall X\exists Y \varphi$,
\item $\langle M_{\delta},{\in}\rangle\models \vec{Z} ~ \text{witnesses that} ~ S ~ \text{is not ineffable}$, and
\item $\delta>\gamma$.
\end{enumerate}
As $\kappa^{+} \in \Delta$,
it follows from Lemma~\ref{HvecM1}
and elementarity that $\otp(\Delta\cap\kappa^+)=\kappa^+$.
Let $\{\delta_n\mid n<\omega\}$ denote the increasing enumeration of the first $\omega$ many elements of $\Delta$.
\begin{subdefn}
Let $T(\vec{M},\kappa,S,a,A_0,\vec{Z},\gamma)$ denote the theory consisting of the following axioms:
\begin{enumerate}[(A)]
\item $\vec{M}$ witnesses $\axiomfont{LCC}(\kappa,\kappa^{+})$,
\item $\axiomfont{ZF}^{-} \ \&\ \kappa ~ \text{is the largest cardinal}$,
\item $\kappa\text{ is regular}\ \&\ S \text{ is stationary in } \kappa$,
\item $\Theta(x,y, a) ~ \text{defines a global well-order}$,
\item $\langle \kappa,{\in}, A_0\rangle\models \forall X\exists Y \varphi$,
\item $\vec{Z}\text{ witnesses that } S \text{ is not ineffable}$,
\item $\gamma$ is the least ordinal such that $ \{\vec{Z},A_0, S \}\subseteq \vec{M}(\gamma) $.
\end{enumerate}
\end{subdefn}
Let $n<\omega$. Since $M_{\delta_n}$ is transitive, standard facts (cf.~\cite[Chapter~3, \S5]{drake}) yield the existence of a formula $\Psi$ in the language $\{\dot{\vec{M}},\in\}$
which is $\Delta_{1}^{\axiomfont{ZF}^{-}}$ such that, for all $\delta\in(\gamma,\delta_n)$,
\begin{equation}\tag{$\star_1$}
\begin{array}{c} \label{Psi} \langle M_{\delta},{\in}, \vec{M}\restriction \delta \rangle \models T(\vec M \restriction\delta, \kappa,S,a,A_0,\vec{Z},\gamma)
\\\iff\\
\Psi(\vec M \restriction\delta, \kappa,S,a,A_0,\vec{Z},\gamma)
\\\iff\\
\langle M_{\delta_n},{\in},\vec{M}\restriction \delta_n\rangle\models \Psi(\vec M \restriction\delta, \kappa,S,a,A_0,\vec{Z},\gamma).
\end{array}
\end{equation}
Since $\{\delta_k\mid k<\omega\}$ enumerates the first $\omega$ many elements of $\Delta$,
$M_{\delta_{n}}$ believes that there are exactly $ n$ ordinals $\delta$ such that Clauses (a)--(g) hold for $M_{\delta}$. In fact,
\begin{equation}\label{deltas}\tag{$\star_2$}
\langle M_{\delta_{n}},{\in}, \vec{M} \restriction\delta_{n} \rangle \models \{\delta \mid \Psi(\vec M \restriction\delta, \kappa,S,a,A_0, \vec{Z},\gamma) \} = \{\delta_{k} \mid k <n \}.
\end{equation}
Next, for every $ n < \omega $, as $\langle M_{\delta_{n+1}},{\in}\rangle\models |\delta_{n}|=\kappa$, we may fix in $M_{\delta_{n+1}}$ a sequence
$\vec{\mathfrak{B}_{n}}= \langle \mathfrak B_{n,\alpha} \mid \alpha < \kappa \rangle $ witnessing $\axiomfont{LCC}$ at $\delta_n$ with respect to $\vec{M}\restriction \delta_{n+1}$ and $\vec{\mathcal F}_{A_0,\kappa}$
such that, moreover,
$$\langle M_{\delta_{n+1}},{\in}, \vec{M}\restriction \delta_{n+1}\rangle \models ``\vec{\mathfrak{B}_{n}} \text{ is the } {<_\Theta}\text{-least such witness}".\footnote{Recalling Definition~\ref{formallcc}, this means that
$\langle M_{\delta_{n+1}},{\in}, \vec{M}\restriction \delta_{n+1}\rangle \models ``\vec{\mathfrak{B}_{n}} \text{ is the } {<_\Theta}\text{-least }\vec{\mathcal{B}}\text{ such that }(\psi_{a} \wedge \psi_{b} \wedge \psi_{c} \wedge \psi_{d} \wedge \psi_{e})(\vec{\mathcal{B}},\vec{\mathcal{F}}_{A_0,\kappa},\delta_n,\vec{M}\restriction(\delta_n+1))"$.}$$
For every $n<\omega$, consider the club $ C_{n}: = \{ \alpha < \kappa \mid B_{n,\alpha} \cap \kappa = \alpha\}$,
and then let $$\alpha':=\min ((\bigcap\nolimits_{n\in \omega} C_{n}) \cap S).$$
For every $n<\omega$, let $\beta_n $ be such that $\clps(\mathfrak B_{n,\alpha'})=\langle M_{\beta_n},{\in},\ldots\rangle$,
and let $j_{n}:M_{\beta_{n}} \rightarrow B_{n,\alpha'} $ denote the inverse of the Mostowski collapse.
\begin{subclaim} Let $n \in \omega$. Then $j_{n}^{-1}(\gamma) = j_{0}^{-1}(\gamma)$.
\end{subclaim}
\begin{proof}
Since
$j^{-1}_{n}(\vec Z) = \vec Z \restriction \alpha'$, $j^{-1}_{n}(A_0) = A_0 \cap (\alpha')^{m_0}$ and $ j^{-1}_{n}(S) = S \cap \alpha'$,
it follows from
$$\langle M_{\delta_{n}}, {\in} , \vec{M} \restriction \delta_{n} \rangle \models \gamma \text{ is the least ordinal with } \{ \vec{Z}, A_0, S \} \subseteq M_{\gamma},$$
that
$$\langle M_{\beta_{n}},{\in}, \vec{M} \restriction \beta_{n} \rangle \models j^{-1}_{n}(\gamma) \text{ is the least ordinal } \xi \text{ with } \{ \vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S \cap \alpha' \} \subseteq M_{\xi}.$$
Now, let $\bar{\gamma}$ be such that
$$\langle M_{\beta_{0}},{\in}, \vec{M} \restriction \beta_0 \rangle \models \bar{\gamma} \text{ is the least ordinal such that } \{ \vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S\cap \alpha' \} \subseteq M_{\bar{\gamma}}. $$
Since $\vec{M}$ is continuous, it follows that $\bar{\gamma}$ is a successor ordinal, that is, $\bar{\gamma}=\sup(\bar\gamma)+1$.
So $\langle M_{\beta_{0}},{\in}, \vec{M} \restriction \beta_0 \rangle $ satisfies the conjunction of the following two statements:
\begin{itemize}
\item[$\bullet$] $\{ \vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S\cap \alpha' \} \subseteq M_{\bar{\gamma}}$, and
\item[$\bullet$] $\{ \vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S\cap \alpha' \} \not\subseteq M_{\sup(\bar\gamma)}$.
\end{itemize}
But both statements are $\Delta_{0}$-formulas in the parameters $\vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S\cap \alpha' ,M_{\bar\gamma}$ and $M_{\sup(\bar{\gamma})}$, which are all elements of $M_{\beta_0}$.
Therefore,
$$ \langle M_{\beta_{n}},{\in}, \vec{M} \restriction \beta_n \rangle \models \bar{\gamma} \text{ is the least ordinal such that } \{ \vec{Z}\restriction \alpha', A_0\cap (\alpha')^{m_0}, S\cap \alpha' \} \subseteq M_{\bar{\gamma}} ,$$
so that $j^{-1}_{n}(\gamma) = \bar{\gamma}= j^{-1}_0(\gamma)$.
\end{proof}
Denote $ \bar{\gamma}:=j^{-1}_0(\gamma)$.
Let $\Psi$ be the same formula used in statement~\eqref{Psi}.
For all $n < \omega$ and $\bar\beta\in(\bar\gamma,\beta_n)$,
setting $\beta:=j_n(\bar{\beta})$, by elementarity of $j_n$:
\begin{equation} \label{PsiBeta}\tag{$\star_3$}
\begin{array}{c} \langle M_{\beta_n},{\in},\vec{M}\restriction \beta_n\rangle\models \Psi(\vec{M} \restriction
\bar{\beta}, \alpha', S\cap\alpha', a\cap \alpha', A_0\cap (\alpha')^{m_0},\vec{Z}\restriction \alpha',\bar{\gamma})
\\ \iff \\
\langle M_{\delta_n}, {\in},\vec{M}\restriction \delta_n \rangle \models \Psi(\vec{M}\restriction \beta,\kappa,S, a, A_0,\vec{Z}, \gamma).
\end{array}
\end{equation}
Hence, for all $n<\omega$, by statements \eqref{deltas} and \eqref{PsiBeta}, it follows that
$$\begin{aligned} \langle M_{\beta_n},{\in}, \vec{M} \restriction\beta_n \rangle
\models~ &
\{\beta \mid \Psi(\vec{M} \restriction \beta, \alpha', S \cap \alpha',a\cap \alpha', A_0\cap (\alpha')^{m_0},\vec{Z}\restriction \alpha',\bar{\gamma}) \} \\
&= \{ j_n^{-1} (\delta_{k}) \mid k < n \},
\end{aligned}$$
and that, for each $k< n $, $j_{n}(\beta_{k})=\delta_{k}$.
\begin{subclaim}$\beta':=\sup_{n \in \omega} \beta_n$ is equal to $\sup(S_{\alpha'})$.
\end{subclaim}
\begin{proof}
For each $n<\omega$, as $\clps(\mathfrak B_{n,\alpha'})=\langle M_{\beta_n},{\in},\ldots\rangle$,
the proof of Subclaim~\ref{sc61632}, establishing that $\beta(\alpha)\in S_\alpha$,
makes clear that $\beta_n \in S_{\alpha'}$.
We first argue that $\beta'\notin S_{\alpha'}$ by showing that $\langle M_{\beta'},{\in}\rangle\not\models\axiomfont{ZF}^-$,
and then we will argue that no $\beta>\beta'$ is in $S_{\alpha'}$.
Note that $\{\beta_n \mid n < \omega \}$ is a definable subset of $\beta'$, since it consists of the first $\omega$ many ordinals to satisfy Clauses (a)--(g),
replacing $\vec M \restriction\delta, \kappa,S,a,A_0,\vec{Z},\gamma$ by $\vec{M} \restriction \beta, \alpha', S \cap \alpha',a\cap \alpha', A_0\cap (\alpha')^{m_0},\vec{Z}\restriction \alpha',\bar{\gamma}$, respectively.
So if $\langle M_{\beta'},{\in}\rangle$ were to model $\axiomfont{ZF}^-$, we would get that $\sup_{n<\omega}\beta_n$ is in $M_{\beta'}$, contradicting the fact that $M_{\beta'}\cap\ord=\beta'$.
Now, towards a contradiction, suppose that there exists $\beta>\beta'$ in $S_{\alpha'}$, and let $\beta$ be the least such ordinal.
In particular, $\langle M_{\beta},{\in}\rangle \models \axiomfont{ZF}^{-}$, and $\langle \beta_n \mid n < \omega \rangle\in M_{\beta}$,
so that $\langle M_{\beta_{n}} \mid n \in \omega \rangle \in M_{\beta}$.
We will reach a contradiction to Clause~(iii) of the definition of $S_{\alpha'}$,
asserting, in particular, that $S \cap \alpha'$ is stationary in $\langle M_{\beta},{\in}\rangle$.
For each $n<\omega$, we have that
$\langle M_{\delta_{n+1}},{\in}, \vec{M}\restriction \delta_{n+1}\rangle \models \Phi(C_{n},\delta_n,\vec{\mathfrak{B}_{n}}, \kappa ),$
where $\Phi(C_{n},\delta_{n},\vec{\mathfrak{B_{n}}},\kappa)$ is the conjunction of the following two formulas:
\begin{itemize}
\item[$\bullet$] $C_{n} = \{ \alpha < \kappa \mid B_{n, \alpha} \cap \kappa = \alpha \}$, and
\item[$\bullet$] $\vec{\mathfrak{B}_{n}}\text{ is the } {<_\Theta}\text{-least witness to } \axiomfont{LCC} \text{ at } \delta_n \text{ with respect to } \vec{M}\restriction\delta_{n+1} \text{ and } \vec{\mathcal F}_{A_0,\kappa}$.
\end{itemize}
Therefore, for $ \overline{C_{n}}:=j_{n+1}^{-1}(C_{n})$ and $\overline{\mathfrak{B}_{n}}:=j_{n+1}^{-1}(\vec{\mathfrak{B}_{n}})$,
we have
$$\langle M_{\beta_{n+1}},{\in}, \vec{M}\restriction \beta_{n+1}\rangle \models \Phi(\overline{C_{n}},\beta_n,\overline{\mathfrak{B}_{n}}, \alpha').$$
In particular, $\overline{C_n}=j_{n+1}^{-1}(C_{n}) = C_{n}\cap \alpha' $.
Recalling that $\alpha'=\min ((\bigcap_{n\in \omega} C_{n}) \cap S)$, we infer that
$\bigcap_{n<\omega} \overline{C_{n}}$ is disjoint from $S \cap \alpha'$.
Thus, to establish that $S\cap\alpha'$ is nonstationary, it suffices to verify the following two statements:
\begin{enumerate}[(1)]
\item $\langle \overline{C_n}\mid n<\omega\rangle$ belongs to $M_\beta$, and
\item for every $n<\omega$, $\langle M_{\beta},{\in}\rangle \models \overline{C_{n}} ~ \text{is a club in} ~ \alpha'$.
\end{enumerate}
As $\langle M_{\beta_{n}} \mid n \in \omega \rangle \in M_{\beta}$, we can define $\langle \overline{\mathfrak{B}}_{n} \mid n \in \omega \rangle$ using that, for all $n \in \omega$,
$$
\begin{aligned} \langle M_{\beta_{n+1}},{\in}, \vec{M}\restriction \beta_{n+1}\rangle \models ``\overline{\mathfrak{B}}_{n} & \text{ is the } {<_\Theta}\text{-least witness to}
\\ & \axiomfont{LCC} \text{ at } \beta_n \text{ w.r.t. } \vec{M}\restriction \beta_{n+1} \text{ and } \vec{\mathcal{F}}_{A_{0},\alpha'}".
\end{aligned}
$$
This takes care of Clause~(1), and shows that $\langle M_{\beta_{n+1}},{\in}\rangle\models \overline{C_n}\text{ is a club in }\alpha'$.
Since $M_{\beta}$ is transitive and the formula expressing that $\overline{C_{n}}$ is a club is $\Delta_{0}$,
we have also taken care of Clause~(2).
\end{proof}
It follows that $\alpha'\in D$ and $f(\alpha')=\sup(S_{\alpha'})=\beta'$.\footnote{Notice that the argument of this claim also showed that $D$ is stationary.}
Finally, as, for every $n<\omega$, we have
$$\langle \alpha',{\in}, A_0 \cap(\alpha')^{m_0}\rangle \models_{M_{\beta_n}} \forall X \exists Y \varphi,$$
we infer that $N_{\alpha'}= M_{f(\alpha')}=M_{\beta'} = \bigcup_{n\in\omega} M_{\beta_{n}}$ is such that
$$\langle \alpha',{\in}, A_0 \cap(\alpha')^{m_0} \rangle \models_{N_{\alpha'}} \forall X \exists Y \varphi.$$
Indeed, otherwise there is $X_{0} \in [\alpha']^p \cap N_{\alpha'}$ such that, for all $Y \in [\alpha']^q\cap N_{\alpha'}$,
${N_{\alpha'}}\models[\langle \alpha',{\in}, A_{0}\cap(\alpha')^{m_0} \rangle \models \neg \varphi(X_0,Y)]$.
Find a large enough $n <\omega$ such that $ X_{0} \in M_{\beta_n}$.
Now, since $``\langle \alpha',{\in}, A_{0} \cap (\alpha')^{m_{0}} \rangle \models \neg \varphi(X_0,Y)"$ is a $\Delta_{1}^{\axiomfont{ZF}^{-}}$ formula in the parameters $ \langle \alpha',{\in}, A_{0} \cap (\alpha')^{m_{0}} \rangle$ and $\varphi$,
and since $M_{\beta_n}$ is a transitive subset of $N_{\alpha'}$,
it follows that, for all $Y \in [\alpha']^q \cap M_{\beta_n}$,
$M_{\beta_n} \models [\langle \alpha',{\in},A_{0}\cap(\alpha')^{m_0} \rangle \models \neg \varphi(X_{0},Y)]$. This is a contradiction.
\end{proof}
This completes the proof of Theorem~\ref{diamond_from_lcc}.
\end{proof}
As a corollary, we obtain a strong combinatorial axiom that holds everywhere (including at ineffable sets) in canonical models of Set Theory (including G\"odel's constructible universe).
\begin{cor} Suppose that:
\begin{itemize}
\item[$\bullet$] $L[E]$ is an extender model with Jensen $\lambda$-indexing;
\item[$\bullet$] $L[E]\models``\text{there are no subcompact cardinals}"$;
\item[$\bullet$] for every $\alpha \in \ord $, the premouse $L[E]||\alpha$ is weakly iterable.
\end{itemize}
Then, in $L[E]$, for every regular uncountable cardinal $\kappa$, for every stationary $S\subseteq\kappa$, $\dl^*_S(\Pi^1_2)$ holds.
\end{cor}
\begin{proof} Work in $L[E]$.
Let $\kappa$ be any regular and uncountable cardinal.
By Fact~\ref{NoSubcompact}, $\vec M=\langle L_{\beta}[E] \mid \beta < \kappa^{+} \rangle$
witnesses that $\axiomfont{LCC}(\kappa,\kappa^{+})$ holds.
Since $L_{\kappa^{+}}[E]$ is an acceptable $J$-structure,\footnote{For the definition of acceptable $J$-structure, see \cite[p.~4]{MR1876087}.}
$\vec{M}$ is a nice filtration of $L_{\kappa^{+}}[E]$ that is eventually slow at $\kappa$.
In addition (cf.~\cite[Lemma~1.11]{MR2768688}),
there is a $\Sigma_{1}$-formula $\Theta$ for which
$$x <_\Theta y \text{ iff } L[E]|\kappa^{+} \models \Theta(x,y)$$ defines a well-ordering of $L_{\kappa^{+}}[E]$.
Finally, acceptability implies that $L_{\kappa^{+}}[E]=H_{\kappa^{+}}$.
Now, appeal to Theorem~\ref{diamond_from_lcc}.
\end{proof}
\section{Universality of inclusion modulo nonstationary}
Throughout this section, $\kappa$ denotes a regular uncountable cardinal satisfying $\kappa^{<\kappa}=\kappa$.
Here, we will be proving Theorems B and C.
Before we can do that,
we shall need to establish a \emph{transversal lemma},
as well as fix some notation and coding that will be useful when working with structures of the form $\langle \kappa,{\in}, (A_n)_{n\in \omega} \rangle $.
\begin{prop}[Transversal lemma]\label{Prop2.4} Suppose that $\langle N_\alpha\mid\alpha\in S\rangle$ is a $\dl^*_S(\Pi^1_2)$-sequence,
for a given stationary $S\subseteq\kappa$.
For every $\Pi^1_2$-sentence $\phi$,
there exists a transversal $\langle \eta_\alpha\mid\alpha\in S\rangle\in\prod_{\alpha\in S}N_\alpha$ satisfying the following.
For every $\eta\in\kappa^\kappa$,
whenever $\langle \kappa,{\in},(A_n)_{n\in\omega}\rangle\models \phi$,
there are stationarily many $\alpha\in S$ such that
\begin{enumerate}[(i)]
\item $\eta_\alpha=\eta\restriction\alpha$, and
\item $\langle \alpha,{\in},(A_n\cap (\alpha^{m(\mathbb A_n)}))_{n\in\omega}\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $c:\kappa \times \kappa \leftrightarrow \kappa$ be some primitive-recursive pairing function.
For each $\alpha\in S$, fix a surjection $f_\alpha:\kappa\rightarrow N_\alpha$ such that $f_\alpha[\alpha]=N_\alpha$ whenever $|N_\alpha|=|\alpha|$.
Then, for all $i<\kappa$, as $f_\alpha(i)\in N_\alpha$, we may define a set $\eta^i_\alpha$ in $N_\alpha$ by letting
$$\eta_\alpha^i:=\begin{cases}
\{(\beta,\gamma)\in\alpha\times\alpha\mid c(i,c(\beta,\gamma))\in f_\alpha(i)\},&\text{if }i<\alpha;\\
\emptyset,&\text{otherwise}.
\end{cases}$$
We claim that for every $\Pi^1_2$-sentence $\phi$,
there exists $i(\phi)<\kappa$ for which $\langle \eta_\alpha^{i(\phi)}\mid \alpha\in S\rangle$ satisfies the conclusion of our proposition.
Before we prove this, let us make a few reductions.
First of all, it is clear that for every $\Pi^1_2$-sentence $\phi=\forall X\exists Y\varphi$,
there exists a large enough $n'<\omega$ such that all predicates mentioned in $\varphi$ are in $\{ \epsilon,\mathbb X,\mathbb Y\}\cup\{\mathbb A_n\mid n<n'\}$.
So the only structures of interest for $\phi$ are in fact $\langle \alpha,{\in},(A_n)_{n<n'}\rangle$, where $\alpha\le\kappa$.
Let $m':=\max\{m(\mathbb A_n)\mid n<n'\}$.
Then, by a trivial manipulation of $\varphi$,
we may assume that the only structures of interest for $\phi$ are in fact $\langle \alpha,{\in},A_0\rangle$, where $\omega\le\alpha\le\kappa$ and $m(\mathbb A_0)=m'+1$.
Having the above reductions in hand, we now fix a $\Pi^1_2$-sentence $\phi=\forall X\exists Y\varphi$ and positive integers $m$ and $k$ such that
the only predicates mentioned in $\varphi$ are in $\{\epsilon,\mathbb X,\mathbb Y,\mathbb A_0\}$, $m(\mathbb A_0)=m$ and $m(\mathbb Y)= k $.
\begin{claim} There exists $i<\kappa$ satisfying the following.
For all $\eta\in\kappa^\kappa$ and $A\subseteq\kappa^m$,
whenever $\langle \kappa,{\in},A\rangle\models\phi$,
there are stationarily many $\alpha\in S$ such that
\begin{enumerate}[(i)]
\item $\eta^i_\alpha=\eta\restriction\alpha$, and
\item $\langle \alpha,{\in},A\cap(\alpha^m)\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
\end{claim}
\begin{proof}
Suppose not. Then, for every $i<\kappa$, we may fix $\eta_i\in\kappa^\kappa$, $A_i\subseteq\kappa^m$ and a club $C_i\subseteq\kappa$ such that $\langle \kappa,{\in},A_i\rangle\models\phi$, but, for all $\alpha\in C_i\cap S$,
one of the two fails:
\begin{enumerate}[(i)]
\item $\eta_\alpha^i=\eta_i\restriction\alpha$, or
\item $\langle \alpha,{\in},A_i\cap(\alpha^m)\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
Let
\begin{itemize}
\item[$\bullet$] $Z:=\{ c(i,c(\beta,\gamma))\mid i<\kappa, (\beta,\gamma)\in \eta_i\}$,
\item[$\bullet$] $A:=\{(i,\delta_1,\ldots,\delta_m)\mid i<\kappa,(\delta_1,\ldots,\delta_m)\in A_i\}$, and
\item[$\bullet$] $C:=\bigtriangleup_{i<\kappa}\{\alpha\in C_i\mid \eta_i[\alpha]\subseteq\alpha\}$.
\end{itemize}
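Unwinding the definition of $Z$, and using that $c$ is a bijection, each $\eta_i$ is recoverable from $Z$:
$$\eta_i=\{(\beta,\gamma)\in\kappa\times\kappa\mid c(i,c(\beta,\gamma))\in Z\}\quad\text{for every }i<\kappa,$$
which is precisely the decoding performed locally by $\eta^i_\alpha$ at the end of this proof.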
Fix a variable $i$ that does not occur in $\varphi$.
Define a first-order sentence $\psi$ mentioning only the predicates in $\{\epsilon,\mathbb X,\mathbb Y,\mathbb A_1\}$ with $m(\mathbb A_1)=1+m$ and $m(\mathbb Y)=1+k$
by replacing all occurrences of the form $\mathbb A_0(x_1,\ldots,x_m)$ and $\mathbb Y(y_1, \ldots, y_k)$ in $\varphi$ by $\mathbb A_1(i,x_1,\ldots,x_m)$ and $\mathbb Y(i,y_1,\ldots,y_k)$, respectively.
Then, let $\varphi':=\forall i(\psi)$, and finally let $\phi':=\forall X\exists Y\varphi'$, so that $\phi'$ is a $\Pi^1_2$-sentence.
A moment's reflection makes it clear that $\langle\kappa,{\in},A\rangle\models\phi'$.
Thus, let $S'$ denote the set of all $\alpha \in S$ such that all of the following hold:
\begin{enumerate}[(1)]
\item $\alpha\in C$;
\item $c[\alpha\times\alpha]=\alpha$;
\item $Z\cap\alpha\in N_\alpha$;
\item $|N_\alpha|=|\alpha|$;
\item $\langle\alpha,{\in},A\cap(\alpha^{m+1})\rangle\models_{N_\alpha}\phi'$.
\end{enumerate}
By hypothesis, $S'$ is stationary.
For all $\alpha\in S'$, by Clauses (3) and (4), we have $Z\cap\alpha\in N_\alpha=f_\alpha[\alpha]$, so, by Fodor's lemma, there exists some $i<\kappa$ and a stationary $S''\subseteq S'\setminus(i+1)$ such that, for all $\alpha\in S''$:
\begin{enumerate}
\item[(3')] $Z\cap\alpha=f_\alpha(i)$.
\end{enumerate}
Let $\alpha\in S''$. By Clause~(5), we in particular have
\begin{enumerate}
\item[(5')] $\langle\alpha,{\in},A_i\cap(\alpha^m)\rangle\models_{N_\alpha}\phi$.
\end{enumerate}
Also, by Clause~(1), we have $\alpha\in C_i$, and
so we must conclude that $\eta_i\restriction\alpha\neq \eta^i_\alpha$.
However, $\eta_i[\alpha]\subseteq\alpha$, and $Z\cap\alpha=f_\alpha(i)$, so that, by Clause~(2),
$$\eta_i\restriction\alpha=\eta_i\cap(\alpha\times\alpha)=\{(\beta,\gamma)\in\alpha\times\alpha\mid c(i,c(\beta,\gamma))\in f_\alpha(i)\}=\eta^i_\alpha.$$ This is a contradiction.
\end{proof}
This completes the proof of Proposition~\ref{Prop2.4}.
\end{proof}
\begin{lemma}\label{prop31}
There is a first-order sentence $\psi_{\baire} $ in the language with binary predicate symbols $\epsilon$ and $\mathbb X$ such that,
for every ordinal $\alpha$ and every $X\subseteq\alpha\times\alpha$,
$$(X\text{ is a function from }\alpha\text{ to }\alpha)\text{ iff }(\langle \alpha,{\in},X\rangle \models \psi_{\baire}).$$
\end{lemma}
\begin{proof} Let $\psi_{\baire}:=\forall\beta\exists\gamma(\mathbb X(\beta,\gamma)\land(\forall\delta(\mathbb X(\beta,\delta)\rightarrow\delta=\gamma)))$.
\end{proof}
\begin{lemma}\label{prop32} Let $\alpha$ be an ordinal. Suppose that $\phi$ is a $\Sigma^1_1$-sentence involving a predicate symbol $\mathbb A$ and two binary predicate symbols $\mathbb X_0,\mathbb X_1$.
Denote $R_\phi:=\{ (X_0,X_1)\mid \langle \alpha,{\in},A,X_0,X_1\rangle\models\phi\}$.
Then there are $\Pi^1_2$-sentences $\psi_{\reflexive}$ and $\psi_{\transitive}$ such that:
\begin{enumerate}[(1)]
\item $(R_\phi\supseteq\{(\eta,\eta)\mid \eta\in\alpha^\alpha\})$ iff $(\langle\alpha,{\in},A\rangle\models\psi_{\reflexive})$;
\item $(R_\phi\text{ is transitive})$ iff $(\langle\alpha,{\in},A\rangle\models\psi_{\transitive})$.
\end{enumerate}
\end{lemma}
\begin{proof}\begin{enumerate}[(1)]
\item Fix a first-order sentence $\psi_{\baire}$ such that ($X_0\in\alpha^\alpha$) iff ($\langle\alpha,{\in},X_0\rangle\models\psi_{\baire}$).
Now, let $\psi_{\reflexive}$ be $\forall X_0\forall X_1((\psi_{\baire}\land(X_1=X_0))\rightarrow\phi)$.
\item Fix a $\Sigma^1_1$-sentence $\phi'$ involving predicate symbols $\mathbb A, \mathbb X_1,\mathbb X_2$
and a $\Sigma^1_1$-sentence $\phi''$ involving predicate symbols $\mathbb A,\mathbb X_0,\mathbb X_2$
such that
\begin{gather*}
\{ (X_1,X_2)\mid \langle \alpha,{\in},A,X_1,X_2\rangle\models\phi'\}=\\R_\phi=\{ (X_0,X_2)\mid \langle \alpha,{\in},A,X_0,X_2\rangle\models\phi''\}.
\end{gather*}
Now, let $\psi_{\transitive}:=\forall X_0\forall X_1\forall X_2((\phi\land\phi')\rightarrow\phi'')$.\qedhere
\end{enumerate}
\end{proof}
\begin{defn}\label{defn34}
Denote by $ \Seq_3(\kappa)$ the set of level sequences in $\kappa^{<\kappa}$ of length $3$:
$$ \Seq_3(\kappa):=\bigcup_{\tau<\kappa}\kappa^\tau\times\kappa^\tau\times\kappa^\tau.$$
Fix an injective enumeration $\{\ell_\delta\mid\delta<\kappa\}$ of $\Seq_3(\kappa)$.
For each $\delta<\kappa$, we denote $\ell_\delta=(\ell_\delta^0,\ell_\delta^1,\ell_\delta^2)$.
We then encode each $T \subseteq \Seq_{3}(\kappa) $ as a subset of $\kappa^5$ via:
$$T_\ell := \{ (\delta,\beta, \ell_\delta^0(\beta),\ell_\delta^{1}(\beta),\ell_\delta^{2}(\beta)) \mid \delta<\kappa, \ell_\delta \in T, \beta\in \dom(\ell_\delta^0) \}.$$
\end{defn}
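As a reading aid, note that this coding is faithful: since the enumeration $\{\ell_\delta\mid\delta<\kappa\}$ is injective, only members of $T$ contribute quintuples to $T_\ell$, so that, for every $\delta<\kappa$ with $\dom(\ell_\delta^0)\neq\emptyset$,
$$\ell_\delta\in T\iff (\delta,\beta, \ell_\delta^0(\beta),\ell_\delta^{1}(\beta),\ell_\delta^{2}(\beta)) \in T_\ell\text{ for all (equivalently, some) }\beta\in\dom(\ell_\delta^0).$$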
We now prove Theorem~C.
\begin{thm}\label{Sigmacompl}
Suppose $\dl^*_S(\Pi^1_2)$ holds for a given stationary $S\subseteq\kappa$.
For every analytic quasi-order $Q$ over $\kappa^\kappa$,
there is a $1$-Lipschitz map $f:\kappa^\kappa\rightarrow2^\kappa$ reducing $Q$ to $\sqc{S}$.
\end{thm}
\begin{proof}
Let $Q$ be an analytic quasi-order over $\kappa^\kappa$.
Fix a tree $T$ on $\kappa^{<\kappa}\times \kappa^{<\kappa}\times \kappa^{<\kappa} $ such that $Q=\pr([T])$, that is,
$$(\eta,\xi) \in Q \iff \exists \zeta\in\kappa^{\kappa}~ \forall \tau < \kappa ~(\eta\restriction\tau,\xi\restriction\tau,\zeta\restriction\tau) \in T.$$
We shall be working with a first-order language having a $5$-ary predicate symbol $\mathbb A$
and binary predicate symbols $\mathbb X_0,\mathbb X_1,\mathbb X_2$ and $\epsilon$.
By Lemma~\ref{prop31}, for each $i<3$, let us fix a sentence $\psi_{\baire}^i$ concerning the binary predicate symbol $\mathbb X_i$
instead of $\mathbb X$, so that
$$(X_i\in\kappa^{\kappa})\text{ iff }(\langle \kappa,{\in},A,X_0,X_1,X_2\rangle \models \psi_{\baire}^i).$$
Define a sentence $\varphi_Q$ to be the conjunction of four sentences:
$\psi_{\baire}^0$, $\psi^1_{\baire}$, $\psi^2_{\baire}$, and
$$\forall \tau \exists \delta \forall \beta [\epsilon(\beta,\tau)\rightarrow\exists \gamma_{0}\exists\gamma_{1}\exists\gamma_{2} (\mathbb X_0(\beta,\gamma_{0}) \land \mathbb X_1(\beta,\gamma_{1})\land\mathbb X_2(\beta,\gamma_{2})\land\mathbb A(\delta,\beta,\gamma_{0},\gamma_{1},\gamma_{2}) )].$$
Set $A:=T_\ell$ as in Definition~\ref{defn34}. Evidently, for all $\eta,\xi,\zeta\in\mathcal P(\kappa\times\kappa)$, we get that
$$\langle\kappa,{\in},A,\eta,\xi,\zeta\rangle\models\varphi_Q$$
iff the two hold:
\begin{enumerate}[(1)]
\item $\eta,\xi,\zeta\in\kappa^\kappa$, and
\item for every $\tau<\kappa$, there exists $\delta<\kappa$ such that $\ell_\delta=(\eta\restriction\tau,\xi\restriction\tau,\zeta\restriction\tau)$ and $\ell_\delta\in T$.
\end{enumerate}
Let $\phi_Q:= \exists X_2(\varphi_Q)$. Then $\phi_Q$ is a $\Sigma^1_1$-sentence involving predicate symbols $\mathbb A,\mathbb X_0,\mathbb X_1$ and $\epsilon$ for which the induced binary relation
$$R_{\phi_Q}:=\{(\eta,\xi)\in (\mathcal P(\kappa\times\kappa))^2\mid \langle \kappa,{\in},A,\eta,\xi\rangle \models \phi_Q\}$$
coincides with the quasi-order $Q$. Indeed, since $\{\ell_\delta\mid\delta<\kappa\}$ enumerates $\Seq_3(\kappa)$, Clause~(2) above asserts precisely that $(\eta\restriction\tau,\xi\restriction\tau,\zeta\restriction\tau)\in T$ for every $\tau<\kappa$, so that $\langle\kappa,{\in},A,\eta,\xi\rangle\models\phi_Q$ iff some $\zeta\in\kappa^\kappa$ witnesses $(\eta,\xi)\in\pr([T])=Q$.
Now, appeal to Lemma~\ref{prop32} with $\phi_Q$ to receive the corresponding $\Pi^1_2$-sentences $\psi_{\reflexive}$ and $\psi_{\transitive}$.
Then, consider the following two $\Pi^1_2$-sentences:
\begin{itemize}
\item[$\bullet$] $\psi^0_Q:=\psi_{\reflexive}\land\psi_{\transitive}\land\phi_Q$, and
\item[$\bullet$] $\psi^1_Q:=\psi_{\reflexive}\land\psi_{\transitive}\land\neg(\phi_Q)$.
\end{itemize}
Let $\vec N=\langle N_\alpha\mid\alpha\in S\rangle$ be a $\dl^*_S(\Pi^1_2)$-sequence.
Appeal to Proposition~\ref{Prop2.4} with the $\Pi^1_2$-sentence $\psi^1_Q$ to obtain a corresponding transversal $\langle \eta_\alpha\mid\alpha\in S\rangle\in\prod_{\alpha\in S}N_\alpha$.
Note that we may assume that, for all $\alpha\in S$, $\eta_\alpha\in{}^\alpha\alpha$, as this does not harm
the key feature of the chosen transversal.\footnote{For any $\alpha$ such that $\eta_\alpha$ is not a function from $\alpha$ to $\alpha$,
simply replace $\eta_\alpha$ by the constant function from $\alpha$ to $\{0\}$.}
For each $\eta\in\kappa^\kappa$, let
$$Z_\eta:=\{\alpha\in S\mid A\cap\alpha^5\text{ and }\eta\restriction\alpha\text{ are in }N_\alpha\}.$$
\begin{claim} Suppose $\eta\in\kappa^\kappa$. Then $S\setminus Z_\eta$ is nonstationary.
\end{claim}
\begin{proof} Fix primitive-recursive bijections $c:\kappa^2\leftrightarrow\kappa$ and $d:\kappa^5\leftrightarrow\kappa$.
Given $\eta\in\kappa^\kappa$, consider the club $D_0$ of all $\alpha<\kappa$ such that:
\begin{itemize}
\item[$\bullet$] $\eta[\alpha]\subseteq\alpha$;
\item[$\bullet$] $c[\alpha\times\alpha]=\alpha$;
\item[$\bullet$] $d[\alpha\times\alpha\times\alpha\times\alpha\times\alpha]=\alpha$.
\end{itemize}
Now, as $c[\eta]$ is a subset of $\kappa$, by the choice of $\vec N$, we may find a club $D_1\subseteq\kappa$ such that, for all $\alpha\in D_1\cap S$,
$c[\eta]\cap\alpha\in N_\alpha$.
Likewise, we may find a club $D_2\subseteq\kappa$ such that, for all $\alpha\in D_2\cap S$,
$d[A]\cap\alpha\in N_\alpha$.
For all $\alpha\in S\cap D_0\cap D_1\cap D_2$, we have
\begin{itemize}
\item[$\bullet$] $c[\eta\restriction\alpha]=c[\eta\cap(\alpha\times\alpha)]=c[\eta]\cap c[\alpha\times\alpha]=c[\eta]\cap\alpha\in N_\alpha$, and
\item[$\bullet$] $d[A\cap\alpha^5]=d[A]\cap d[\alpha^5]=d[A]\cap\alpha\in N_\alpha$.
\end{itemize}
As $N_\alpha$ is p.r.-closed, it then follows that $\eta\restriction\alpha$ and $A\cap\alpha^5$ are in $N_\alpha$.
Thus, we have shown that $S \setminus Z_\eta$ is disjoint from the club $D_0\cap D_1\cap D_2$.
\end{proof}
For all $\eta\in\kappa^\kappa$ and $\alpha\in Z_\eta$, let:
$$\mathcal{P}_{\eta,\alpha}:=\{p\in \alpha^\alpha\cap N_\alpha\mid \langle\alpha,{\in},A\cap\alpha^5,p,\eta\restriction\alpha\rangle \models_{N_{\alpha}} \psi^0_Q \}.$$
Finally, define a function $f:\kappa^{\kappa} \rightarrow 2^\kappa$ by letting, for all $\eta\in\kappa^\kappa$ and $\alpha<\kappa$,
$$f(\eta)(\alpha) := \begin{cases}
1,& \text{if } \alpha\in Z_\eta\text{ and }\eta_\alpha\in \mathcal{P}_{\eta,\alpha};\\
0, & \text{otherwise}.
\end{cases}
$$
\begin{claim} $f$ is $1$-Lipschitz.
\end{claim}
\begin{proof} Let $\eta,\xi$ be two distinct elements of $\kappa^{\kappa}$.
Let $\alpha\le\Delta(\eta,\xi)$ be arbitrary.
As $\eta\restriction\alpha=\xi\restriction\alpha$, we have $\alpha\in Z_\eta$ iff $\alpha\in Z_\xi$.
In addition, as $\eta\restriction\alpha=\xi\restriction\alpha$, $\mathcal P_{\eta,\alpha}=\mathcal P_{\xi,\alpha}$ whenever $\alpha\in Z_\eta$.
Thus, altogether, $f(\eta)(\alpha)=1$ iff $f(\xi)(\alpha)=1$.
\end{proof}
\begin{claim}\label{c353} Suppose $(\eta,\xi)\in Q$. Then $f(\eta)\sqc Sf(\xi)$.
\end{claim}
\begin{proof} As $(\eta,\xi)\in Q$, let us fix $\zeta\in\kappa^\kappa$ such that, for all $\tau<\kappa$,
$(\eta\restriction\tau,\xi\restriction\tau,\zeta\restriction\tau)\in T$.
Define a function $g:\kappa\rightarrow\kappa$ by letting, for all $\tau<\kappa$,
$$g(\tau):=\min\{\delta<\kappa\mid \ell_\delta=(\eta\restriction\tau,\xi\restriction\tau,\zeta\restriction\tau)\}.$$
As $(S\setminus Z_\eta)$, $(S\setminus Z_\xi)$ and $(S\setminus Z_\zeta)$ are nonstationary,
let us fix a club $C\subseteq\kappa$ such that $C\cap S\subseteq Z_\eta\cap Z_\xi\cap Z_\zeta$.
Consider the club $D:=\{\alpha\in C\mid g[\alpha]\subseteq\alpha\}$.
We shall show that, for every $\alpha\in D\cap S$, if $f(\eta)(\alpha)=1$ then $f(\xi)(\alpha)=1$.
Fix an arbitrary $\alpha\in D\cap S$ satisfying $f(\eta)(\alpha)=1$. In effect, the following three conditions are satisfied:
\begin{enumerate}[(1)]
\item $\langle\alpha,{\in},A\cap\alpha^5\rangle \models_{N_{\alpha}} \psi_{\reflexive}$,
\item $\langle\alpha,{\in},A\cap\alpha^5\rangle \models_{N_{\alpha}} \psi_{\transitive}$, and
\item $\langle\alpha,{\in},A\cap\alpha^5,\eta_\alpha,\eta\restriction\alpha\rangle \models_{N_{\alpha}} \phi_Q$.
\end{enumerate}
In addition, since $\alpha$ is a closure point of $g$, by definition of $\varphi_Q$, we have
$$\langle\alpha,{\in},A\cap\alpha^5,\eta\restriction\alpha,\xi\restriction\alpha,\zeta\restriction\alpha\rangle\models\varphi_Q.$$
As $\alpha\in S$ and $\varphi_Q$ is first-order,\footnote{$N_{\alpha}$ is transitive and rud-closed (in fact, p.r.-closed), so that $N_{\alpha} \models \axiomfont{GJ}$ (see \cite[\S Other remarks on GJ]{Mathias}).
Now, by \cite[\S The cure in $\axiomfont{GJ}$, proposition 10.31]{Mathias}, $\mathbf{Sat}$ is $\Delta_{1}^{\axiomfont{GJ}}$.}
$$\langle\alpha,{\in},A\cap\alpha^5,\eta\restriction\alpha,\xi\restriction\alpha,\zeta\restriction\alpha\rangle\models_{N_\alpha}\varphi_Q,$$
so that, by definition of $\phi_Q$,
$$\langle\alpha,{\in},A\cap\alpha^5,\eta\restriction\alpha,\xi\restriction\alpha\rangle\models_{N_\alpha}\phi_Q.$$
By combining the preceding with clauses (2) and (3) above, we infer that the following holds, as well:
\begin{enumerate}
\item[(4)] $\langle\alpha,{\in},A\cap\alpha^5,\eta_\alpha,\xi\restriction\alpha\rangle \models_{N_{\alpha}} \phi_Q$.
\end{enumerate}
Altogether, $f(\xi)(\alpha)=1$, as sought.
\end{proof}
\begin{claim} Suppose $(\eta,\xi)\in \kappa^\kappa\times\kappa^\kappa\setminus Q$. Then $f(\eta)\not\sqc{S}f(\xi)$.
\end{claim}
\begin{proof} As $(S\setminus Z_\eta)$ and $(S\setminus Z_\xi)$ are nonstationary,
let us fix a club $C\subseteq\kappa$ such that $C\cap S\subseteq Z_\eta\cap Z_\xi$.
As $Q$ is a quasi-order and $(\eta,\xi)\notin Q$, we have:
\begin{enumerate}[(1)]
\item $\langle\kappa,{\in},A\rangle \models\psi_{\reflexive}$,
\item $\langle\kappa,{\in},A\rangle \models\psi_{\transitive}$, and
\item $\langle\kappa,{\in},A,\eta,\xi\rangle \models \neg(\phi_Q)$.
\end{enumerate}
so that, altogether,
$$ \langle \kappa,{\in},A,\eta,\xi\rangle \models \psi^1_Q.$$
Then, by the choice of the transversal $\langle \eta_\alpha\mid\alpha\in S\rangle$, there is a stationary subset $S'\subseteq S\cap C$ such that,
for all $\alpha\in S'$:
\begin{enumerate}[(1')]
\item $\langle\alpha,{\in},A\cap\alpha^5\rangle \models_{N_{\alpha}} \psi_{\reflexive}$,
\item $\langle\alpha,{\in},A\cap\alpha^5\rangle \models_{N_{\alpha}} \psi_{\transitive}$,
\item $\langle\alpha,{\in},A\cap\alpha^5,\eta\restriction\alpha,\xi\restriction\alpha\rangle \models_{N_{\alpha}} \neg(\phi_Q)$, and
\item $\eta_\alpha=\eta\restriction\alpha$.
\end{enumerate}
By Clauses (3') and (4'), we have that $\eta_\alpha\notin\mathcal P_{\xi,\alpha}$, so that $f(\xi)(\alpha)=0$.
By Clauses (1'), (2') and (4'), we have that $\eta_\alpha\in\mathcal P_{\eta,\alpha}$, so that $f(\eta)(\alpha)=1$.
Altogether, $\{ \alpha\in S\mid f(\eta)(\alpha)>f(\xi)(\alpha)\}$ covers the stationary set $S'$, so that $f(\eta)\not\sqc{S}f(\xi)$.
\end{proof}
This completes the proof of Theorem~\ref{Sigmacompl}.
\end{proof}
Theorem~B now follows as a corollary.
\begin{cor}\label{corollary36} Suppose that $\kappa$ is a regular uncountable cardinal and $\axiomfont{GCH}$ holds.
Then there is a set-size cofinality-preserving $\axiomfont{GCH}$-preserving notion of forcing $\mathbb P$,
such that, in $V^{\mathbb P}$, for every analytic quasi-order $Q$ over $\kappa^\kappa$
and every stationary $S\subseteq\kappa$, ${Q}\hookrightarrow_1{\sqc S}$.
\end{cor}
\begin{proof} This follows from Theorems \ref{diamond_from_lcc} and \ref{Sigmacompl}, and one of the following:
$\blacktriangleright$ If $\kappa$ is inaccessible, then we use Fact~\ref{InaccForcing} and Lemma~\ref{slow}.
$\blacktriangleright$ If $\kappa$ is a successor cardinal, then we use Fact~\ref{forcing} and Lemma~\ref{MH2}.
\end{proof}
\begin{remark} By combining the proof of the preceding with a result of L\"ucke \cite[Theorem~1.5]{MR2987148}, we arrive at the following conclusion.
Suppose that $\kappa$ is an infinite successor cardinal and $\axiomfont{GCH}$ holds.
For every binary relation $R$ over $\kappa^\kappa$,
there is a set-size $\axiomfont{GCH}$-preserving $({<}\kappa)$-closed, $\kappa^+$-cc notion of forcing $\mathbb P_R$
such that, in $V^{\mathbb P_R}$,
the conclusion of Corollary~\ref{corollary36} holds,
and, in addition, $R$ is analytic.
\end{remark}
\begin{remark}A quasi-order $\unlhd$ over a space $X\in\{2^\kappa,\kappa^\kappa\}$ is said to be \emph{$\Sigma^1_1$-complete} iff it is analytic and,
for every analytic quasi-order $Q$ over $X$, there exists a $\kappa$-Borel function $f:X\rightarrow X$ reducing $Q$ to $\unlhd$.
As Lipschitz $\implies$ continuous $\implies$ $\kappa$-Borel, the conclusion of Corollary~\ref{corollary36} gives that each $\sqc S$ is a $\Sigma^1_1$-complete quasi-order.
Such a consistency result was previously known only for $S$'s of one of two specific forms,
and the witnessing maps were not Lipschitz.
\end{remark}
\section{Concluding remarks}
\begin{remark} By \cite[Corollary 4.5]{HKM2}, in $L$, for every successor cardinal $\kappa$
and every theory (not necessarily complete) $T$ over a countable relational language,
the corresponding equivalence relation $\cong_T$ over $2^\kappa$ is either $\Delta^1_1$ or $\Sigma^1_1$-complete.
This dissatisfying dichotomy suggests that $L$ is a singular universe, unsuitable for studying the correspondence between generalized descriptive set theory and model-theoretic complexities.
However, using Theorem~\ref{Sigmacompl}, it can be verified that the above dichotomy holds as soon as $\kappa$ is a successor of an uncountable cardinal $\lambda=\lambda^{<\lambda}$
in which $\dl^{*}_{S}(\Pi^1_2)$ holds for both $S:=\kappa\cap\cof(\omega)$ and $S:=\kappa\cap\cof(\lambda)$.
This means that the dichotomy is in fact not limited to $L$ and can be forced to hold starting with any ground model.
\end{remark}
\begin{remark}\label{rmk38} Let $=^S$ denote the symmetric version of $\sqc S$.
It is well known that, in the special case $S:=\kappa\cap\cof(\omega)$,
$=^S$ is a $\kappa$-Borel$^*$ equivalence relation \cite[\S6]{MR1242054}.
It thus follows from Theorem~\ref{Sigmacompl} that if $\dl^*_S(\Pi^1_2)$ holds for $S:=\kappa\cap\cof(\omega)$,
then the class of $\Sigma^1_1$ sets coincides with the class of $\kappa$-Borel$^*$ sets.
Now, as the proof of {\cite[Theorem~3.1]{HK18}} establishes that the failure of the preceding is consistent with, e.g., $\kappa=\aleph_2=2^{2^{\aleph_0}}$,
which in turn, by \cite[Lemma~2.1]{MR485361}, implies that $\diamondsuit^*_S$ holds,
we infer that the hypothesis $\dl^*_S(\Pi^1_2)$ of Theorem~\ref{Sigmacompl} cannot be replaced by $\diamondsuit^*_S$.
We thus feel that we have identified the correct combinatorial principle behind a line of results that were previously obtained under the heavy hypothesis of ``$V=L$''.
\end{remark}
\section*{Acknowledgements}
This research was partially supported by the European Research Council (grant agreement ERC-2018-StG 802756). The third author was also partially supported by the Israel Science Foundation (grant agreement 2066/18).
The main results of this paper were presented by the second author at the \emph{4th Arctic Set Theory} workshop, Kilpisj\"arvi, January 2019,
by the third author at the \emph{50 Years of Set Theory in Toronto} conference, Toronto, May 2019,
and by the first author at the \emph{Berkeley conference on inner model theory}, Berkeley, July 2019.
We thank the organizers for the invitations.
\section{Introduction}
\label{sec:intro}
Far-field speech recognition is still a challenging problem because only a limited number of far-field speech corpora are available~\cite{microsoft_noise,VOiCES,acoustic_matching}. Unlike near-field speech, which is recorded close to the speaker, far-field speech may contain strong reverberation effects. These reverberation effects arise from various factors, including the room layout, speaker and listener positions, obstacles, and room materials. The reverberation effects can be mathematically modeled as a transfer function known as the Room Impulse Response (RIR). We can augment far-field speech by convolving clean speech with an RIR and adding environmental noise with different signal-to-noise ratios \cite{ButReverb,study1}.
The RIR can be captured from an acoustic environment using different techniques \cite{Time-Stretched-Pulses, Sine_Sweep, microsoft,rir_capture}. Recording real RIRs requires a lot of human labor and special hardware. As a result, many far-field automatic speech recognition systems use synthetic RIRs for training \cite{google_speech,speech1,speech2,zhenyu_GAS}. Synthetic RIRs can be generated using physically-based acoustic simulators for different scenes \cite{study1,wave1,image_method}. The current acoustic simulators have resulted in considerable improvement in far-field speech recognition \cite{zhenyu_GAS}. However, there is still a gap between the performance of RIRs generated using acoustic simulators and the performance of real RIRs. Most commonly used acoustic simulators are unable to model all the acoustic effects in the environment that real RIRs capture. For example, ray-tracing-based acoustic simulators \cite{zhenyu_GAS} can only simulate high-frequency acoustic effects and are not accurate in terms of low-frequency effects like diffraction or interference.
In the computer vision literature, neural networks are used to translate simple sketches to photo-realistic images \cite{domain1,domain2}. Free-hand sketches are spatially imprecise and geometrically distorted \cite{domain1}. In particular, a neural network (CycleGAN \cite{cycle-GAN}) has been proposed to translate imprecise sketches to realistic images. The goal of translation in a CycleGAN is to learn a mapping between the source domain $X$ and the target domain $Y$ in the absence of paired examples. Motivated by the performance of CycleGAN in improving imprecise images, our goal is to develop a similar approach to improve the fidelity of synthetic RIRs for automatic speech recognition (ASR) applications.
{\bf Main Results:} We present a novel approach to improve the accuracy of synthetic RIRs. We design a TS-RIRGAN architecture to translate synthetic RIRs to real RIRs. TS-RIRGAN takes synthetic RIRs as 1x16384 audio samples, translates them into real RIRs, and uses multiple loss functions. We also perform real-world sub-band room equalization on the translated RIRs to further improve their quality. We also demonstrate the benefits of our post-processed RIRs in far-field ASR systems. Our main contributions include:
\begin{itemize}
\item We present our TS-RIRGAN architecture, which is used to translate an imprecise synthetic RIR to a real RIR.
\item We propose a scheme to further improve the wave effects of synthetic RIRs by performing sub-band room equalization.
\item We show that, on a modified Kaldi LibriSpeech far-field ASR benchmark \cite{zhenyu_low}, far-field speech augmented using our improved RIRs outperforms the far-field speech augmented using unmodified RIRs by up to 19.9\%.
\end{itemize}
The rest of the paper is organized as follows. In Section 2 we describe different techniques to generate synthetic RIRs for ASR and other speech augmentation applications. We present our novel approach to improve synthetic RIRs in Section 3. Section 4 shows the benefit of improving synthetic RIRs in far-field ASR systems. We have published our code for follow-up research~\footnote{\url{https://github.com/anton-jeran/TS-RIR}}.
\section{Related Work}
\label{sec:related}
\subsection{Synthetic RIRs and Acoustic Simulation}
There are several approaches for generating RIRs for different acoustic environments. Among the existing methods, computing RIRs for indoor or outdoor scenes by numerically solving the wave equation gives the most accurate results for a given scene \cite{wave1,liu2020sound}. However, wave-based approaches are computationally expensive, and their complexity can grow as the fourth power of frequency. As a result, they are practical only for lower frequencies (e.g., less than 1000 Hz). GAN-based RIR generators \cite{irgan, image2reverb} can be used to generate RIRs for different environments, though their accuracy can vary.
A simpler and less accurate alternative to the wave-based approach is the use of geometric sound propagation techniques \cite{image_method,zhenyu_GAS}. In geometric acoustic simulators, sound is assumed to propagate as rays. Therefore, some low-frequency characteristics of sound waves cannot be modeled using these simulators. The ray assumption is valid when the wavelength of the sound is significantly smaller than the size of the obstacles in the environment. However, the low-frequency components are not modeled accurately when the wavelength is large. The image method \cite{image_method} and path tracing methods \cite{zhenyu_GAS} are common geometric acoustic simulation methods. The image method models only specular reflections. Path tracing methods can model both specular and diffuse reflections.
\subsection{Techniques for improving synthetic RIR}
Geometric acoustic simulators are unable to model low-frequency wave effects such as diffraction \cite{wave-diffraction} and room resonance \cite{wave-resonance} because of the ray assumption. In real RIRs, on the other hand, we observe boosting or attenuation of the frequency response in different frequency bands due to the wave modes created by room resonance. Some methods compensate for the missing room response in synthetic RIRs using a sub-band room equalization approach \cite{zhenyu_low}.
\section{Our Approach}
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{gen1-eps-converted-to.pdf}
\caption{The architecture of the generator and discriminator of TS-RIRGAN. Our generator takes a synthetic RIR as 1x16384 audio samples and translates it into a real RIR of the same dimension. The discriminator network discriminates between real RIRs and translated synthetic RIRs during training by maximizing the adversarial loss (Equation \ref{ad_loss}).}
\label{architecture}
\end{figure}
We translate synthetic RIRs to real RIRs and perform sub-band room equalization to improve the quality of synthetic RIRs. We compute the spectrogram for post-processed and real RIRs. We evaluate the quality of post-processed synthetic RIRs by comparing their mean value for a set of acoustic parameters (Table \ref{table:statistics}) and their energy distribution (Figure \ref{spectrogram}) with real RIRs.
\subsection{Translation: Synthetic RIR $\Longrightarrow$ Real RIR}
Our TS-RIRGAN (Figure \ref{architecture}) architecture is based upon CycleGAN \cite{cycle-GAN}, and WaveGAN \cite{WaveGAN}. CycleGAN learns to translate two-dimensional images from a source domain $X$ to a target domain $Y$ using unpaired image datasets. Similarly, we design a TS-RIRGAN architecture that learns mapping functions between one-dimensional (1D) synthetic RIRs ($S$) and real RIRs ($R$) in the absence of paired training examples. Inspired by WaveGAN, which applies generative adversarial networks (GANs) to raw-waveform audio, we directly input RIRs as raw audio samples to our network to learn the mapping functions. In most cases, real and synthetic RIRs are less than one second in duration. Therefore, we re-sample the synthetic and real RIR datasets without loss of generality to 16 kHz and pass them as a one-dimensional input of length 16384.
We represent the real RIR training samples as $\{r_i\}_{i=1}^N$ where $r_i \in R$ and the synthetic RIR training samples as $\{s_i\}_{i=1}^N$, where $s_i \in S$. The data distributions of the training samples are $r \sim p_{data} (r)$ and $s \sim p_{data} (s)$. We use two generators to learn the mappings $G_{SR} : S \rightarrow R$ and $G_{RS} : R \rightarrow S$. Our goal is to learn the mapping $G_{SR} : S \rightarrow R$. We use the inverse mapping $G_{RS} : R \rightarrow S$ with cycle-consistency loss \cite{cycle-loss} to preserve the acoustic characteristics in synthetic RIRs during translation. We use discriminator $D_{R}$ to differentiate real RIRs $\{r_i\}_{i=1}^N$ and translated synthetic RIRs $\{G_{SR}(s_i)\}_{i=1}^N$. Similarly, we use $D_{S}$ to discriminate $\{s_i\}_{i=1}^N$ and $\{G_{RS}(r_i)\}_{i=1}^N$. Our objective function consists of adversarial loss \cite{adversarial-loss}, cycle-consistency loss and identity loss \cite{identity-loss} to learn the mapping functions.
\subsubsection{Adversarial Loss}
To ensure the synthetic RIRs are translated to real RIRs, we use the following objective for the mapping function $G_{SR} : S \rightarrow R$ and the discriminator $D_{R}$.
{
\begin{equation}\label{ad_loss}
\begin{aligned}[b]
\mathcal{L}_{adv}(G_{SR},D_{R},S,R) = \mathbb{E}_{r \sim p_{data}{(r)}}[\log{D_{R}(r)}] \\
+ \mathbb{E}_{s \sim p_{data}{(s)}}[\log(1 - {D_{R}(G_{SR}(s))})].
\end{aligned}
\end{equation}
}
\noindent The discriminator $D_{R}$ tries to distinguish between translated RIRs using the mapping function $G_{SR} : S \rightarrow R$ from the real RIRs by maximizing ($max$) the adversarial loss. The generator $G_{SR} : S \rightarrow R$ attempts to generate real RIRs that tend to minimize ($min$) the adversarial loss, i.e., $\min_{G_{SR}} \max_{D_{R}} \mathcal{L}_{adv}(G_{SR},D_{R},S,R)$. Similarly, we train the mapping function $G_{RS} : R \rightarrow S$ and the discriminator $D_{S}$ with the objective $\mathcal{L}_{adv}(G_{RS},D_{S},R,S)$.
\subsubsection{Cycle Consistency Loss}
We use cycle consistency loss to preserve the acoustic characteristics in the RIRs during the translation. The cycle consistency loss ensures that $G_{RS}(G_{SR}(s)) \sim s$ and $G_{SR}(G_{RS}(r)) \sim r$.
{
\begin{equation}\label{cy_loss}
\begin{aligned}[b]
\mathcal{L}_{cyc}(G_{SR},G_{RS}) = \mathbb{E}_{s \sim p_{data}{(s)}}[||G_{RS}(G_{SR}(s)) - s ||_1] \\
+ \mathbb{E}_{r \sim p_{data}{(r)}}[||G_{SR}(G_{RS}(r)) - r ||_1].
\end{aligned}
\end{equation}
}
\subsubsection{Identity Mapping Loss}
Identity mapping loss preserves the amplitude of input RIRs:
{
\begin{equation}\label{id_loss}
\begin{aligned}[b]
\mathcal{L}_{id}(G_{SR},G_{RS}) = \mathbb{E}_{s \sim p_{data}{(s)}}[||G_{RS}(s) - s ||_1] \\
+ \mathbb{E}_{r \sim p_{data}{(r)}}[||G_{SR}(r) - r ||_1].
\end{aligned}
\end{equation}
}
\subsubsection{Full Objective}
The overall objective function can be given as
{
\begin{equation}\label{full_loss}
\begin{aligned}[b]
\mathcal{L}(G_{SR},G_{RS},D_{S},D_{R}) = \mathcal{L}_{adv}(G_{SR},D_{R},S,R) \\
+ \mathcal{L}_{adv}(G_{RS},D_{S},R,S) \\
+ \lambda_{cyc} \mathcal{L}_{cyc}(G_{SR},G_{RS}) \\
+ \lambda_{id} \mathcal{L}_{id}(G_{SR},G_{RS}),
\end{aligned}
\end{equation}
}
\noindent where $\lambda_{cyc}$ and $\lambda_{id}$ control the relative importance of cycle consistency loss and identity mapping loss, respectively. We train our TS-RIRGAN to find the optimal mapping functions $G_{SR}^{*}$ and $G_{RS}^{*}$ by solving
{
\[
\begin{aligned}[b]
G_{SR}^{*}, G_{RS}^{*} = \arg \min_{G_{SR},G_{RS}} \max_{D_{S},D_{R}} \mathcal{L}(G_{SR},G_{RS},D_{S},D_{R}).
\end{aligned}
\]
}
We use $G_{SR}^{*}$ to translate imprecise synthetic RIRs to real RIRs.
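For concreteness, the following sketch assembles the full objective of Eq.~\ref{full_loss} for one batch in PyTorch-style code. The module names (\texttt{G\_sr}, \texttt{G\_rs}, \texttt{D\_r}, \texttt{D\_s}), the assumption that the discriminators end in a sigmoid, and the $\lambda$ values are illustrative and do not represent our actual training configuration; in practice, the generator and discriminator terms are optimized alternately.
\begin{verbatim}
import torch
import torch.nn.functional as F

def bce(pred, target_is_real):
    # Binary cross-entropy against an all-ones or all-zeros target.
    target = (torch.ones_like(pred) if target_is_real
              else torch.zeros_like(pred))
    return F.binary_cross_entropy(pred, target)

def full_objective(G_sr, G_rs, D_r, D_s, s, r,
                   lambda_cyc=10.0, lambda_id=5.0):
    # s, r: synthetic and real RIR batches of shape (N, 1, 16384).
    fake_r, fake_s = G_sr(s), G_rs(r)

    # Adversarial losses (Eq. 1 and its mirror image)
    adv = (bce(D_r(r), True) + bce(D_r(fake_r), False)
           + bce(D_s(s), True) + bce(D_s(fake_s), False))

    # Cycle-consistency loss (Eq. 2): G_rs(G_sr(s)) ~ s, and vice versa
    cyc = F.l1_loss(G_rs(fake_r), s) + F.l1_loss(G_sr(fake_s), r)

    # Identity mapping loss (Eq. 3): each generator approximately
    # fixes samples of its own output domain
    idt = F.l1_loss(G_rs(s), s) + F.l1_loss(G_sr(r), r)

    return adv + lambda_cyc * cyc + lambda_id * idt
\end{verbatim}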
\subsubsection{Implementation}
\textbf{Network Architecture: } We adapt the discriminator architecture from Donahue et al.~\cite{WaveGAN}. We did not use the phase shuffle operation proposed in WaveGAN~\cite{WaveGAN} because this operation did not improve our results. Inspired by Johnson et al.~\cite{Generator}, we designed our generator network consisting of an encoder, a transformer and a decoder. Figure \ref{architecture} highlights our generator and discriminator architectures. Similar to WaveGAN, we use 1D filters of length 25 to perform convolution and transposed convolution operations in our TS-RIRGAN architecture.
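For concreteness, the sketch below shows one way such an encoder--transformer--decoder generator can be laid out with length-25 filters so that a 1x16384 input maps back to a 1x16384 output; the strides, channel counts, number of residual blocks, and the final \texttt{Tanh} are illustrative assumptions, not the exact TS-RIRGAN configuration.
\begin{verbatim}
import torch.nn as nn

class ResBlock1d(nn.Module):
    # One transformer block: two length-25 convolutions with a skip.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 25, padding=12), nn.ReLU(),
            nn.Conv1d(ch, ch, 25, padding=12))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # Encoder (stride-2 convs) -> transformer (residual blocks)
    # -> decoder (stride-2 transposed convs); 1x16384 in and out.
    def __init__(self, ch=64, n_res=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, ch, 25, stride=2, padding=12), nn.ReLU(),
            nn.Conv1d(ch, 2 * ch, 25, stride=2, padding=12), nn.ReLU())
        self.res = nn.Sequential(*[ResBlock1d(2 * ch)
                                   for _ in range(n_res)])
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(2 * ch, ch, 25, stride=2,
                               padding=12, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(ch, 1, 25, stride=2,
                               padding=12, output_padding=1), nn.Tanh())
    def forward(self, x):          # x: (batch, 1, 16384)
        return self.dec(self.res(self.enc(x)))
\end{verbatim}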
\textbf{Dataset:} We use an equal number of real RIRs from BUT ReverbDB \cite{ButReverb} and synthetic RIRs generated using the geometric acoustic simulator \cite{zhenyu_GAS} to train our TS-RIRGAN architecture. BUT ReverbDB consists of 1891 RIRs covering office, hotel room, conference room, lecture room, meeting room, and stairs environments. We remove repeated RIRs and RIRs recorded on stairs. Among the 1209 retained RIRs in BUT ReverbDB, we train our network using 967 RIRs and keep 242 RIRs for testing purposes. The room dimensions, loudspeaker location, and microphone location corresponding to each real RIR are documented in the BUT ReverbDB dataset. We use this information to generate synthetic RIRs using the geometric acoustic simulator. We use random surface absorption/reflection coefficients to generate the synthetic RIRs because we do not have room-material information. Therefore, a one-to-one mapping between synthetic and real RIRs should not be expected.
\subsection{Sub-band Room Equalization (EQ)}
Sub-band room equalization bridges the gap in the frequency gain of real and synthetic RIRs over the entire frequency range. Our formulation is based on the sub-band room equalization approach described in \cite{zhenyu_low}. Sub-band relative gain calculation and equalization matching are the two stages in sub-band room equalization.
\subsubsection{Sub-band relative gain calculation}
We calculate re-sampled relative gains to compensate for the difference in relative gains between synthetic and real RIRs. We compute the frequency response of every RIR in a real-world dataset \cite{ButReverb}. We compute the relative gain from the frequency response by taking the gain at 1000Hz as the reference for each real RIR. Then we extract the relative frequency gain at 7 unique sample points (62.5Hz, 125Hz, 250Hz, 500Hz, 2000Hz, 4000Hz, 8000Hz) for every real RIR. The means and standard deviations of the relative gains differ between sample points. Therefore, we use a Gaussian mixture model to model 7 Gaussian distributions using the relative gains from the sampled points. We re-sample an equal number of relative gains for each sample point from the fitted model. Instead of using the relative gains of the real RIRs, we use the re-sampled relative gains; this avoids duplicating the real RIRs during equalization matching. We choose the reference and the number of sample points as proposed in \cite{zhenyu_low}.
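The re-sampling step can be sketched as follows, under the simplifying assumption that the gains at each sample point are modeled by a single Gaussian whose mean and standard deviation are estimated from the real RIRs; the array shapes are illustrative.
\begin{verbatim}
import numpy as np

def resample_relative_gains(rel_gain, n_samples, rng=None):
    # rel_gain: (n_rirs, 7) relative gains [dB] of the real RIRs at
    # the seven sample frequencies, referenced to the gain at 1000 Hz.
    rng = rng or np.random.default_rng(0)
    mu = rel_gain.mean(axis=0)    # per-sample-point mean
    sd = rel_gain.std(axis=0)     # per-sample-point standard deviation
    # Draw an equal number of gains for every sample point.
    return rng.normal(mu, sd, size=(n_samples, 7))
\end{verbatim}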
\subsubsection{Equalization matching}
We match the relative gains of synthetic RIRs with the re-sampled relative gains calculated from real RIRs. We compute the relative frequency gains for the synthetic RIRs at the chosen sample points (62.5Hz, 125Hz, 250Hz, 500Hz, 2000Hz, 4000Hz, 8000Hz), taking gain at 1000Hz as the reference. We calculate the difference in the relative gains of synthetic RIRs and the re-sampled relative gains. Next, we design a finite impulse response (FIR) filter using the window method \cite{window} to compensate for the difference in the relative gains. We filter the synthetic RIRs using our designed FIR filter to match the sub-band relative gains of synthetic RIRs with the re-sampled relative gains.
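A minimal SciPy sketch of the window-method filter design is given below; the number of taps and the choice to hold the lowest-band gain down to DC are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.signal import firwin2, lfilter

FS = 16000.0                                 # RIRs re-sampled to 16 kHz
BANDS = [62.5, 125.0, 250.0, 500.0, 2000.0, 4000.0, 8000.0]

def equalize(synth_rir, gain_diff_db, numtaps=1023):
    # gain_diff_db: length-7 array of (re-sampled real gain) minus
    # (synthetic gain) [dB] at the seven sample points.
    gain_diff_db = list(gain_diff_db)
    # Desired response: compensation at the sample points, 0 dB at
    # the 1000 Hz reference; the lowest-band gain is held down to DC
    # and the 8000 Hz point coincides with the Nyquist frequency.
    freqs = [0.0] + BANDS[:4] + [1000.0] + BANDS[4:]
    gains_db = ([gain_diff_db[0]] + gain_diff_db[:4] + [0.0]
                + gain_diff_db[4:])
    amp = 10.0 ** (np.asarray(gains_db) / 20.0)  # dB -> linear
    fir = firwin2(numtaps, freqs, amp, fs=FS)    # window method
    return lfilter(fir, 1.0, synth_rir)
\end{verbatim}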
\begin{table}[t]
\setlength{\tabcolsep}{3pt}
\caption{Different combinations of our post-processing methods studied in this paper. The best combination is marked in \textbf{bold}.}
\label{table:combination}
\centering
\begin{tabular}{@{}llr@{}}
\toprule
\textbf{Combination} & \textbf{Description} \\
\midrule
GAS+EQ & Only perform room equalization.\\
\midrule
$G_{SR}^{*}$(GAS+EQ) & First, perform room equalization, \\
& then translate the equalized synthetic RIR \\
& to a real RIR.\\
\midrule
$G_{SR}^{*}$(GAS) & Only translate synthetic RIR to real RIR.\\
\midrule
\textbf{\boldmath $G_{SR}^{*}$(GAS)+EQ} & \textbf{First, translate a synthetic RIR to a real} \\
& \textbf{RIR, then perform room equalization}\\
& \textbf{ to the translated RIR.}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}[t]
\setlength{\tabcolsep}{8pt}
\caption{Mean values of the acoustic parameters. We calculated the mean reverberation time ($T_{60}$), mean direct-to-reverberant ratio (DRR), mean early-decay-time (EDT), and mean early-to-late index (CTE) for real, synthetic and post-processed synthetic RIRs. We also report the absolute mean difference of the acoustic parameters between synthetic and post-processed synthetic RIRs and real RIRs. The acoustic parameter values with the least absolute mean difference are shown in \textbf{bold}. }
\label{table:statistics}
\centering
\begin{tabular}{@{}llllllllr@{}}
\toprule
\textbf{RIRs} & \multicolumn{2}{c}{{\textbf{\boldmath$T_{60}$ (seconds)} }}
& \multicolumn{2}{c}{{\textbf{DRR (dB)} }}&\multicolumn{2}{c}{{\textbf{EDT (seconds)}}} & \multicolumn{2}{c}{{\textbf{CTE (dB)}}}\\
\cmidrule(r{4pt}){2-9}
& \textbf{Mean} & \textbf{Difference}& \textbf{Mean} & \textbf{Difference}& \textbf{Mean} & \textbf{Difference}& \textbf{Mean} & \textbf{Difference} \\
\midrule
Real RIRs& 1.0207 & &-6.3945& &0.8572&&3.4886&\\
GAS& \textbf{0.9553}&\textbf{0.0654}&-4.7277&1.6668&0.8846&0.0274&4.7536&1.265\\
GAS+EQ& 0.9540& 0.0667&-7.4246&1.0301&0.8912& 0.0340& 5.6404& 2.1518\\
$G_{SR}^{*}$(GAS+EQ)& 1.5493&0.5286&-8.3879&1.9934&1.046&0.1888&2.6562& 0.8324\\
$G_{SR}^{*}$(GAS)& 1.6433&0.6226&\textbf{-6.6491}&\textbf{0.2546}&\textbf{0.8483}&\textbf{0.0089}&\textbf{3.4907}&\textbf{0.0021}\\
$G_{SR}^{*}$(GAS)+EQ& 1.6364&0.6157&-6.7234&0.3289&0.8323&0.0249&3.5367&0.0481\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[t]
\centering
\subfloat[GAS.]{\includegraphics[width=0.46\columnwidth]{GAS-eps-converted-to.pdf} \label{GAS}}
\quad
\subfloat[GAS+EQ.]{\includegraphics[width=0.46\columnwidth]{GAS-EQ-eps-converted-to.pdf}\label{GAS+EQ}}
\quad
\subfloat[$G_{SR}^{*}$(GAS+EQ).]{\includegraphics[width=0.46\columnwidth]{GAS-EQ-Cycle-eps-converted-to.pdf}\label{T(GAS+EQ)}}
\quad
\subfloat[$G_{SR}^{*}$(GAS).]{\includegraphics[width=0.46\columnwidth]{GAS-Cycle-eps-converted-to.pdf}\label{T(GAS)}}
\quad
\subfloat[$G_{SR}^{*}$(GAS)+EQ.]{\includegraphics[width=0.46\columnwidth]{GAS-Cycle-EQ-eps-converted-to.pdf}\label{T(GAS)+EQ}}
\quad
\subfloat[Real RIR.]{\includegraphics[width=0.46\columnwidth]{Real-eps-converted-to.pdf}\label{Real}}
\caption{The spectrogram of a synthetic RIR generated using the geometric acoustic simulator \cite{zhenyu_GAS} (Figure \ref{GAS}), post-processed synthetic RIRs (Figure \ref{GAS+EQ}-\ref{T(GAS)+EQ}), and a real RIR (Figure \ref{Real}). Sub-band room equalization (EQ) and synthetic RIR to real RIR translation ($G_{SR}^{*}$()) are the two methods used to post-process the synthetic RIR in different combinations (Table \ref{table:combination}). Among the spectrograms of the post-processed synthetic RIRs, the energy distribution of $G_{SR}^{*}$(GAS)+EQ is closest to that of a real RIR. We can observe that the energy distribution over the low-frequency and high-frequency regions in $G_{SR}^{*}$(GAS)+EQ is similar to that of a real RIR.}
\label{spectrogram}
\end{figure}
\subsection{Optimal Combination}
We tried different combinations (Table \ref{table:combination}) of our post-processing approach to determine the optimal combination. To evaluate how much closer the post-processed RIRs are to real RIRs, we estimated 4 different acoustic parameter values from synthetic RIRs generated using the geometric acoustic simulator \cite{zhenyu_GAS} (GAS), from synthetic RIRs post-processed using different combinations of our approach, and from real RIRs. Reverberation time ($T_{60}$), direct-to-reverberant ratio (DRR), early-decay-time (EDT), and early-to-late index (CTE) are the four acoustic parameters used for our evaluation. $T_{60}$ is the time required for the sound pressure level to decay by 60 decibels (dB). The ratio of the sound pressure level of a direct sound source to the sound pressure level of reflected sound in dB is called DRR \cite{drr_book}. EDT is calculated by multiplying the time taken for the sound source to decay by 10 dB by a factor of 6. The proportion of the total sound energy received in the first 50 ms to the energy received during the rest of the period is called CTE \cite{room_acoustics_vigran_2014}. The synthetic RIRs and real RIRs used to train TS-RIRGAN and to perform sub-band room equalization do not have a one-to-one mapping. Therefore, we calculated the mean values of the different acoustic parameters to evaluate our post-processing approach.
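For reference, the sketch below shows one conventional way to estimate these parameters from a single-channel RIR via Schroeder backward integration; the $T_{30}$-based extrapolation used for $T_{60}$ is our illustrative choice, as the exact estimators are not specified here, and DRR is omitted because it additionally requires isolating the direct path.
\begin{verbatim}
import numpy as np

def decay_curve_db(rir):
    # Schroeder backward-integrated energy decay curve, in dB.
    edc = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(edc / edc[0])

def t60(rir, fs):
    # T30-based estimate: fit the -5 dB .. -35 dB span of the decay
    # curve and extrapolate to 60 dB of decay.
    db = decay_curve_db(rir)
    i5, i35 = np.argmax(db <= -5.0), np.argmax(db <= -35.0)
    slope = (db[i35] - db[i5]) / ((i35 - i5) / fs)  # dB per second
    return -60.0 / slope

def edt(rir, fs):
    # Early decay time: 6 x (time to decay from 0 dB to -10 dB).
    db = decay_curve_db(rir)
    return 6.0 * np.argmax(db <= -10.0) / fs

def cte(rir, fs, t_early=0.05):
    # Early-to-late index (C50) [dB]: energy in the first 50 ms
    # over the energy in the rest of the response.
    n = int(t_early * fs)
    e = rir ** 2
    return 10.0 * np.log10(e[:n].sum() / e[n:].sum())
\end{verbatim}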
Table \ref{table:statistics} presents the mean values of the acoustic parameters for different sets of RIRs and the absolute difference between the mean values of the acoustic parameters of synthetic and post-processed synthetic RIRs and real RIRs. We can see that mean DRR, mean EDT, and mean CTE values of $G_{SR}^{*}$(GAS) and $G_{SR}^{*}$(GAS)+EQ are closer to the real RIRs when compared with the other combinations of our post-processing approach. Therefore our proposed TS-RIRGAN is capable of improving the quality of synthetic RIRs by translating the wave effects present in real RIRs to synthetic RIRs. However, we can see a deviation in the mean $T_{60}$ values for the post-processed synthetic RIRs using TS-RIRGAN.
Figure \ref{spectrogram} shows the spectrogram of a synthetic RIR generated using the geometric acoustic simulator \cite{zhenyu_GAS} (GAS), post-processed synthetic RIRs using a different combination of our post-processing approach and a real RIR. From the spectrograms, we can see that by translating a synthetic RIR to a real RIR, we improve the energy distribution in the low-frequency region (Figure \ref{T(GAS)}) by compensating low-frequency wave effects present in real RIRs. When we perform sub-band room equalization after translation, we observe further refinement in the spectrogram (Figure \ref{T(GAS)+EQ}), especially around 600ms to 800ms. After trying all the combinations, we highlight the optimal combination in Figure \ref{optimal}. We chose the optimal combination based on the set of acoustic parameter values and the energy distribution of the post-processed RIRs.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{optimal-eps-converted-to.pdf}
\caption{Our overall pipeline to improve the quality of synthetic RIRs. We translate synthetic RIRs to real RIRs using our learned mapping function $G_{SR}^{*}()$, then we augment the wave effects in translated synthetic RIRs by performing real-world sub-band room equalization (EQ).}
\label{optimal}
\end{figure}
\section{Implementation and Results}
\subsection{Benchmark}
We evaluate our approach on the Kaldi LibriSpeech far-field ASR recipe \cite{zhenyu_low}. We convolve clean speech $x_c[t]$ from LibriSpeech \cite{LibriSpeech} with different sets of RIRs $r[t]$ and add environmental noise $n[t]$ from BUT ReverbDB \cite{ButReverb} to augment a far-field speech $x_f[t]$ training dataset. The environmental noise is started at a random position $l$ and repeated in a loop to cover the full duration of the clean speech. In Equation \ref{eq:conv}, $\lambda$ is calculated for different signal-to-noise ratios, which range from 1 dB to 2 dB:
{
\begin{equation}\label{eq:conv}
\begin{aligned}[b]
x_f[t] = x_c[t] \circledast r[t] + \lambda * n[t+l].
\end{aligned}
\end{equation}
}
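A minimal NumPy sketch of Equation~\ref{eq:conv} is given below; truncating the convolution to the clean-speech length and defining the signal-to-noise ratio from average powers are illustrative assumptions.
\begin{verbatim}
import numpy as np

def augment(x_c, rir, noise, snr_db, rng=None):
    # Far-field speech: clean speech convolved with an RIR, plus
    # looped environmental noise scaled to the requested SNR.
    rng = rng or np.random.default_rng()
    x_r = np.convolve(x_c, rir)[:len(x_c)]   # reverberant speech
    l = rng.integers(len(noise))             # random noise start
    n = np.take(noise, l + np.arange(len(x_r)), mode='wrap')
    lam = np.sqrt((x_r ** 2).mean()
                  / ((n ** 2).mean() * 10.0 ** (snr_db / 10.0)))
    return x_r + lam * n
\end{verbatim}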
We train time-delay neural networks \cite{Kaldi_network} using our augmented training dataset. After training the network, we decode the identity vectors \cite{ivector} (i-vectors) of a far-field speech test set using phone language models. We calculate the word error rate (WER) for large four-gram (fglarge), large tri-gram (tglarge), medium tri-gram (tgmed), and small tri-gram (tgsmall) phone language models, as well as for online decoding using a tgsmall phone language model. During online decoding, the i-vectors extracted from the far-field speech test set are passed in real-time. We use WER to evaluate the far-field speech augmented using different sets of RIRs.
Training and testing on the benchmark for each far-field speech training dataset take around 4 days. We ran all the experiments in the same environment to perform a fair comparison.
\subsection{Data Preparation}
We use real RIRs and environmental noise from BUT ReverbDB \cite{ButReverb} and clean speech (test-clean) from LibriSpeech \cite{LibriSpeech} to augment a real-world far-field speech test set using Equation \ref{eq:conv}. We evaluate our proposed method using the real-world far-field speech test set. We randomly split 1209 RIRs in BUT ReverbDB \cite{ButReverb} into subsets of \{773,194,242\} to create training, development, and test far-field speech datasets.
We use the meta-info accompanying each real RIR to generate synthetic RIRs using the state-of-the-art geometric acoustic simulator (GAS). We post-process the synthetic RIRs by translating synthetic RIRs to real RIRs and performing real-world sub-band room equalization in different combinations (Table \ref{table:combination}).
We also generated RIRs using the pre-trained IR-GAN \cite{irgan} on the BUT ReverbDB dataset \footnote{\url{https://gamma.umd.edu/pro/speech/ir-gan}}. IR-GAN is a neural-network-based RIR generator that can generate realistic RIRs corresponding to different acoustic environments by parametrically controlling acoustic parameters.
We created different far-field speech training sets by convolving the LibriSpeech training datasets (train-clean-\{100,360\}) with different RIRs and adding environmental noise from the BUT ReverbDB set \cite{ButReverb} using Equation \ref{eq:conv}. We use synthetic RIRs generated using GAS, post-processed synthetic RIRs, RIRs generated using IR-GAN, and real RIRs to augment the different far-field speech training datasets.
\begin{table}[t]
\setlength{\tabcolsep}{1.8pt}
\renewcommand{\arraystretch}{0.85}
\caption{Word error rate (WER) reported by the Kaldi far-field ASR system. We trained the Kaldi model using the different augmented far-field speech training sets and tested it on a real-world far-field speech. The training sets are augmented using synthetic RIRs generated using GAS, post-processed synthetic RIRs (Table \ref{table:combination}), synthetic RIRs generated using IR-GAN and real RIRs. We report WER for fglarge, tglarge, tgmed, and tgsmall phone language models and online decoding using tgsmall phone language model. Our best results are shown in \textbf{bold}.}
\label{table:results}
\centering
\begin{tabular}{@{}llllllr@{}}
\toprule
\multicolumn{2}{l}{\multirow{2}{24mm}{\textbf{Training data}}}
& \multicolumn{5}{c}{\textbf{Test Word Error Rate (WER)} [\%]}\\
\cmidrule(r{4pt}){3-7}
& & fglarge & tglarge & tgmed & tgsmall & online\\
\midrule
&clean (Baseline) & 77.15 & 77.37 & 78.00 & 78.94 & 79.00\\
&real (Oracle) & 12.40 & 13.19 & 15.62 & 16.92 & 16.88\\
\midrule
&GAS\cite{zhenyu_GAS} & 16.53 & 17.26 & 20.24 & 21.91 & 21.83\\
&GAS+EQ\cite{zhenyu_low} & 14.51 & 15.37 & 18.33 & 20.01 & 19.99\\
\midrule
&$G_{SR}^{*}$(GAS+EQ) & 14.27 & 14.98 & 17.79 & 19.37 & 19.36\\
\textbf{Ours}& $G_{SR}^{*}$(GAS) & 14.12 & 14.70 & 17.44 & 19.08 & 19.06\\
&\textbf{\boldmath $G_{SR}^{*}$(GAS)+EQ} & \textbf{13.24} & \textbf{14.04} & \textbf{16.65} & \textbf{18.40} & \textbf{18.39}\\
\midrule
&GAS\cite{zhenyu_GAS} & 16.53 & 17.26 & 20.24 & 21.91 & 21.83\\
&IR-GAN\cite{irgan} & 14.99 & 15.93 & 18.81 & 20.28 & 20.24\\
&GAS+IR-GAN\cite{irgan} & 14.16 & 14.99 & 17.56 & 19.21 & 19.21\\
\textbf{Ours} &\textbf{\boldmath $G_{SR}^{*}$(GAS)+EQ} & \textbf{13.24} & \textbf{14.04} & \textbf{16.65} & \textbf{18.40} & \textbf{18.39}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Results and Analysis}
Table \ref{table:results} shows the word error rate (WER) reported by the Kaldi LibriSpeech far-field ASR benchmark \cite{zhenyu_low}. We can see that the augmented far-field speech training sets perform well compared to our baseline model trained on a clean LibriSpeech dataset. The lowest WER is reported by our oracle model trained on real-world far-field speech. In our work, we aim to minimize the gap in performance between real RIRs and synthetic RIRs.
The WERs for tgsmall reported by GAS+EQ and $G_{SR}^{*}$(GAS) are 18.33\% and 17.44\%, respectively. We observe that our approach outperforms the prior methods by up to 4.8\%. We make an interesting observation with the $G_{SR}^{*}$(GAS+EQ) and $G_{SR}^{*}$(GAS) datasets: when compared to translated synthetic RIRs ($G_{SR}^{*}$(GAS)), translated room-equalized RIRs ($G_{SR}^{*}$(GAS+EQ)) perform worse.
\textbf{Optimal Approach: } We can see that translating imprecise synthetic RIRs to real RIRs and performing real-world sub-band room equalization on the translated RIRs ($G_{SR}^{*}$(GAS)+EQ) gives the lowest WER. When compared to training sets created using unmodified RIRs (GAS) and room equalized RIRs (GAS+EQ), we observe a relative reduction in WER by up to 19.9\% and 9.1\%, respectively.
Physics-based acoustic simulators (GAS) and neural-network-based RIR generators (IR-GAN) generate RIRs using two different approaches. GAS models the RIR corresponding to a particular scene by considering the room dimensions, speaker and listener positions, etc. IR-GAN uses acoustic parameters to generate an RIR for a particular scene. In previous work \cite{irgan}, far-field speech augmented using synthetic RIRs from GAS and IR-GAN was used to train a robust far-field ASR system (GAS+IR-GAN). From Table \ref{table:results}, we can observe that RIRs post-processed using our optimal approach ($G_{SR}^{*}$(GAS)+EQ) outperform the combination of RIRs generated using GAS and IR-GAN (GAS+IR-GAN).
\section{Conclusion}
We present a new architecture to translate synthetic RIRs to real RIRs and perform real-world sub-band room equalization on the translated RIRs to improve the quality of synthetic RIRs. We evaluate the quality of our post-processed synthetic RIRs using a set of acoustic parameter values and the energy distribution of the post-processed RIRs. The set of acoustic parameter values indicates how much the wave effects in post-processed RIRs are closer to real RIRs. We show that the mean direct-to-reverberant ratio, mean early-decay-time, and mean early-to-late index of the post-processed synthetic RIRs are closer to the real RIRs when compared to the unmodified synthetic RIRs. We also evaluate our post-processing approach on the Kaldi LibriSpeech far-field automatic speech recognition benchmark and observe that our post-processed RIRs outperform unmodified synthetic RIRs by up to 19.9\%. In the future, we would like to explore improving the quality of synthetic RIRs based on improved techniques to model acoustic wave effects and translation architectures. We would also like to evaluate their benefits for other applications, including speech separation~\cite{cone-of-silence,aralikatti} and audio-visual speech recognition~\cite{Audio-Visual} tasks.
\section{Acknowledgements}
This work is supported in part by ARO grant W911NF-18-1-0313, NSF grant \#1910940, Capital One and Intel.
\bibliographystyle{IEEEbib}
\section{Introduction}
Since the prediction of the $\pi$-meson by Yukawa \cite{Yukawa35}, the long-standing question has been whether a mesonic nuclear bound state exists,
{\it{i.e.}}, whether a meson forms {\it a quantum state} at an eigen-energy $E_{M}$ below the intrinsic mass $m$ without promptly vanishing in nuclear media.
If it exists, it means that a meson ($\overline{q}q$) forms a quantum state where baryons ($qqq$) exist as nuclear medium.
There are many important subjects to study, {\it e.g.}, how hadron masses are generated from $\sim$\,massless particles: quarks ($m_q \sim$ few MeV/$c^2$) and gluons ($m_g=0$), how the properties of these mesons change in the nuclear medium, how hadrons are confined in the nuclear media, and the equation-of-state in nuclear (or star) matter.
Therefore, many mesons have been examined over the past century, to see whether a mesonic nuclear bound state exists below the mass threshold with a binding energy $B_{M} \equiv m - E_{M}$, but there has been no clear evidence for their existence.
The $\pi N$ $S$-wave interaction is repulsive, so there is no nuclear bound state much deeper than the atomic states \cite{Geissel02}.
What about the second-lightest meson with an $s$-quark, the kaon?
After the long-standing {\it{``kaonic hydrogen puzzle''}} was resolved \cite{Iwasaki97,Beer,Bazzi}, the strong ${\overline K}N$ attractive interaction was established in the isospin $I=0$ channel.
This leads us naturally to the ansatz that the $\Lambda(1405)$ could be a $K^- p$ nuclear bound state, rather than a three-quark $\Lambda$-baryonic-state as it is named, {\it{i.e.}}, the name implies that it is a first excited state of the $\Lambda$ baryon whose excitation is caused by the constituent-quark internal-motion.
A recent lattice QCD calculation also supports the $K^- p$ picture \cite{Hall15}.
Akaishi-Yamazaki predicted the existence of kaonic nuclear bound states assuming the $\Lambda(1405)$ be a $K^- p$ bound state \cite{AY1}.
The simplest predicted kaonic nuclear system, $\overline{K}NN$ symbolically denoted as ``$K^- pp$'', has charge $+1$, $I=\frac{1}{2}$ and $J^P = 0^-$, with a binding energy $B_{\rm {\it Kpp}}$ = 48 MeV (measured from $M(Kpp) \equiv m_K + 2 m_p \approx$ 2370 MeV/$c^2$) and a partial mesonic decay width $\Gamma_{\pi Y \!N} $ = 61 MeV \cite{AY2}.
Triggered by this prediction, many studies were undertaken.
Theoretically, the existence of the kaonic bound states is well supported, but the results are widely scattered: binding energies ($B_{\rm {\it Kpp}} \approx 10 \sim 100$ MeV) and partial mesonic decay widths ($\Gamma_{\pi Y \!N} \approx 40 \sim 100$ MeV), {\it e.g.}, \cite{Gal,Weise,Oset,Dote}, while the total decay width $\Gamma_{Kpp}$ (including non-mesonic decay channels) is not yet calculated.
Experimentally, there have been many searches for ``$K^- pp$'', with reports of possible candidates \cite{finuda,disto,e27} as well as contradictory results \cite{hades,oton}, leaving the matter both controversial and unsettled.
\section{J-PARC E15 Experiment}
To search for the ``$K^- pp$'', the most straightforward experiment is the $K\!^- \!+ \!^2$He reaction {\it{below the $M(Kpp)$ mass threshold}}, which is obviously impossible.
Instead, we have conducted an experimental search by bombarding a $^3$He target with a 1 GeV/$c$ $K^-$ beam to knock out a nucleon with the kaon, and directly introduce a recoiled virtual-${\overline {K}}$-meson into the residual nucleus.
At this momentum ($\sqrt{s} \sim 1.8$ GeV for $\overline{K}N$), the single-nucleon elastic-reaction ${\overline {K}}N \rightarrow {\overline {K}}N$ has a very large cross-section, helped by the presence of $Y^*$-resonances ($m_{Y^*} \sim$ 1.8 GeV/$c^2$) \cite{Tanabashi:2018oca}.
On the other hand, due to the shrinkage of the de Broglie wavelength of the projectile, direct multi-nucleon absorption (multi-NA), which produces a severe background in an {\it at-rest-kaon-absorption} experiment searching for ``$K^- pp$'' \cite{oton,Sato}, will be relatively suppressed.
The momentum of the virtual `${\overline {K}}$' is given as $q_{\,\overline {K}} = q_{K n} \equiv |{\bf q}_{K n}|$ ({\it i.\,e.}\,the momentum difference of an incident kaon and the forward neutron ${\bf q}_{K n} \equiv {\bf p}_{K^-}^{Lab.} - {\bf p}_{n}^{Lab.} $), where the superscript represents that it is in the laboratory-frame, and the single quotation marks represent that it is within the strong interaction range in a nucleus.
When the `${\overline {K}}$' is backscattered, the $q_{\,\overline {K}}$ can be as small as $\sim$ 200 MeV/$c$ (the minimum $q$ among the search experiments performed).
With this condition, a successive reaction between the virtual `${\overline {K}}$' and two `{\it spectator nucleons}' {\it at-rest in the laboratory-frame} can be efficiently realized.
This way, a ``$K^- pp$'' can be formed almost at-rest in the laboratory-frame, which makes the formation probability large.
In this reaction channel, one can reduce the possible combinations of ``$K^- pp$'' decay particles, because the $s$-quark is conserved in the strong interaction and thus a hadron with an $s$-quark should exist in its decay.
Thus, one can efficiently conduct invariant mass spectroscopy (decay channel) by having the detector surrounding the target, and missing mass spectroscopy (formation channel) using a forward neutron counter (NC) and a spectrometer to simultaneously detect a forward going neutron (or proton) coming from ${\overline {K}}N \rightarrow {\overline {K}}N$ reaction.
We designed our apparatus to achieve a mass resolution of $\sigma_M \sim 10$ MeV/$c^2$ both in missing and in invariant mass \cite{Agari}.
The first-stage experiment, J-PARC E15$^{\rm 1st}$, exhibited a huge peak above $M(Kpp)$ by observing the neutron in NC ($\Delta \theta_{N\!C} \sim 1/20$) \cite{hashimoto}.
This spectral peak has a very large cross-section of $\gtrsim$~6 mb/sr in the semi-inclusive quasi-elastic ${\overline{K}}N \rightarrow {\overline{K}}N$ channel at $\theta_n = 0$.
Thus, we confirmed that the forward nucleon knockout reaction, ${\overline{K}}N \rightarrow {\overline{K}}N$, is the dominant process at $p_K = 1$ GeV/$c$.
It also revealed that there was a large event-excess extending from the quasi-elastic ${\overline{K}}$ bump to the lower mass region.
The tail reached to $\sim$ 100 MeV below $M(Kpp)$ ($\sim$ 1 mb/sr).
However, no significant structure was observed in this tail at any location, where ``$K^- pp$'' candidates were reported \cite{finuda,disto,e27}.
On the contrary, we found a kinematical anomaly: a peak-like structure was observed in the $\Lambda p$ invariant mass (${\rm{\it IM}}_{\!\Lambda p} \equiv M, \rm{\it{ hereafter}})$ spectrum of the non-mesonic $\Lambda p n$ final state (observed by the $p p \pi^-$-events without requesting an NC hit) below the $M(Kpp)$ mass threshold at low $q_{\Lambda p} ~(= q_{K n} \equiv q, \rm{\it{ hereafter}})$ \cite{sada}.
This is the simplest final state, which consists of the minimum number of lowest mass baryons without meson emission, so the possible interpretations are limited.
The most promising interpretation is: the ``$K^- pp$'' is formed by knocking out a neutron, decays to $\Lambda p$, and thus a corresponding peak is seen in the $M$ spectrum.
To significantly improve the statistics of the $\Lambda p n$ final state permitting us to examine this interpretation, we set a higher priority on accumulating events having three charged particle hits around the target {\it without requiring forward neutron detection}.
The kinematical refit of $p\pi^- p$ (+\,{\it n missing}) to the $\Lambda p n$ final state using energy-momentum conservation was conducted at the analysis stage to prevent biasing the data.
We succeeded in accumulating 30 times as much data on $p\pi^- p$ events compared to E15$^{1st}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{Fig1.eps}
\caption{
\label{fig:data}
$a$) 2D event distribution plot on the $M$ ($=\rm{\it IM}_{\Lambda p}$) and the momentum transfer $q$ ($q_{\Lambda p}$) for the $\Lambda p n$ final state.
The $M_{F}(q)$ given in Eq.\,\ref{eq:QFA-centroid}, the mass threshold $M(Kpp)$, and the kinematical boundary for $\Lambda p n$ final state, are plotted in the figure. The lower $q$ boundary corresponds to $ \theta_n = 0$ (forward $n$), and the upper boundary corresponds to $\theta_n = \pi$ (backward $n$).
The histograms of projection onto the $M$ axis $b$), and onto $q$ axis $c$) are also given together with the decompositions of the fit result.}
\end{center}
\end{figure}
The formation channel, $K^-+^3{\rm He} \rightarrow ``K^-pp" + n$, can be uniquely defined by the following two parameters; the $\Lambda p$-invariant mass $M$ and the momentum transfer $q$.
The event distributions over $M$ and $q$ are given in Fig.\,\ref{fig:data}.
As shown in the figure, a strong event-concentration observed previously \cite{sada} is confirmed near the mass threshold $M(Kpp)$ at the lower-$q$ side $(M\!c^2, \,qc) \sim (2.37, \,0.25)$ GeV.
To our surprise, however, the structure near $M(Kpp)$ cannot be represented as a single Breit-Wigner (B.W.) function, as was na{\"{\i}}vely assumed in the previous paper \cite{sada}.
Instead, it is more natural to interpret this structure as consisting of at least two internal substructures originating from different reaction mechanisms.
However, the primary reaction $K^-N \rightarrow $`${\overline{K}}$'$n$ ($n$ forward) would be the same, because both substructures are close to $(M, \,q) \approx (m_K + 2 m_p, ~{\rm{\it lower \,limit}})$.
The 2D plot (Fig.\,\ref{fig:data}a) shows that the event distribution patterns change at $M(Kpp)$.
The yield of the lower $M$ region is reduced as a function of $q$, but extends to $q \sim 650$ MeV/$c$.
The distribution centroid of $M$ does not depend on $q$ within the statistical uncertainty, which allows a bound state interpretation.
On the other hand, the distribution centroid of $M$ above $M(Kpp)$ depends on $q$, and the yield vanishes rapidly as a function of $q$.
The centroid shifts to the heavier $M$ side for the larger $q$, suggesting its non-resonant feature, {\it i.\,e.}\,the propagator's kinetic energy is converted to the relative kinetic energy between $\Lambda$ and $p$, near the lower $q$ boundary.
Thus, the most natural interpretation would be non-resonant absorption of quasi-free `${\overline{K}}$' by the `$NN$' spectator (QF$_{{\rm \overline{K}A}}$) due to the final state interaction (FSI).
This process can be understood as a part of the quasi-free ${\overline{K}}$ reaction, in which most ${\overline{K}}$s escape from the nucleus, as we published in \cite{hashimoto}.
Note that there is another change in event distributions at $M(Kpp)$, {\it{i.e.}}, the event density is low close to the $\theta_n = 0$ line below $M(Kpp)$, while it is high above $M(Kpp)$ (this point will be separately discussed in the last section).
This spectral substructure is in relatively good agreement with that of the Sekihara--Oset--Ramos spectroscopic function \cite{Sekihara}, proposed to account for the structure observed in \cite{sada}.
Actually, their spectrum has two structures, namely A) a ``$K^- pp$'' pole below the mass threshold $M(Kpp)$ (meson bound state), and B) a QF$_{{\rm \overline{K}A}}$ process above the $M(Kpp)$.
Thus, the interpretation of the internal substructures near $M(Kpp)$ is consistent with their theoretical picture.
\section{Fitting Procedure}
We first describe what we can expect if a point-like reaction happens between an incoming $K^-$ and $^3$He, going to a $\Lambda p n$ final state.
In that case, the events would be distributed simply according to the $\Lambda p n$ Lorentz-invariant phase space $\rho_{3}(M,\,q)$, as shown in Fig.\,\ref{fig:PhS-Efficiency}a.
We fully simulated these events based on our experimental setup and analyzed the simulated events by the common analyzer applied to the experimental data.
The result is shown in Fig.\,\ref{fig:PhS-Efficiency}b, which is simply $\mathcal{E} (M,\,q) \times \rho_{3}(M,\,q)$, where $\mathcal{E} (M,\,q)$ is the experimental efficiency.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{Fig2.eps}
\caption{
\label{fig:PhS-Efficiency}
Simulated spectra of
a) Lorentz-invariant $\Lambda p n$ phase space $\rho_{3}(M,\,q)$ by taking into account the kaon beam momentum bite, b) $\mathcal{E} (M,\,q) \times \rho_{3}(M,\,q)$, and c) experimental efficiency, $\mathcal{E} (M,\,q)$, evaluated by the bin-by-bin ratio between a) and b).
The unit of the $z$-axis (color code) is counts per generated event for both a) and b). For c), the ratio is given.
}
\end{center}
\end{figure}
One can evaluate $\mathcal{E} (M,\,q)$ by dividing Fig.\,\ref{fig:PhS-Efficiency}b by Fig.\,\ref{fig:PhS-Efficiency}a bin-by-bin, which is given in Fig.\,\ref{fig:PhS-Efficiency}c.
As shown in Fig.\,\ref{fig:PhS-Efficiency}c, we have sufficient and smooth experimental efficiency at the region of interest, $M \approx M(Kpp)$ at lower $q$, based on the careful design of the experimental setup.
On the other hand, the efficiency is extremely low in the dark-blue region close to the kinematical boundary.
If we simply apply the acceptance correction, the statistical errors of those bins become huge and very asymmetric.
This fact makes the acceptance correction of the entire $(M,\,q)$ region unrealistic.
Therefore, we applied a reverse procedure, {\it i.e.}, we prepared smooth functions $f_{\{j\}}(M,\,q)$ (to account for the $j$-th physical process) and multiplied them with $\mathcal{E} (M,\,q) \times \rho_{3}(M,\,q)$ ($=$ Fig.\,\ref{fig:PhS-Efficiency}b) bin-by-bin.
In this manner, one can reliably estimate how the physics process should be observed in our experimental setup, and this permitted us to calculate the mean-event-number expected in each 2D bin.
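The two steps described here can be sketched as follows; flagging empty bins with NaN and the exact array layout are illustrative assumptions.
\begin{verbatim}
import numpy as np

def efficiency_map(reco_2d, gen_2d):
    # Bin-by-bin efficiency E(M,q) (Fig. 2c): analyzed simulated
    # events (Fig. 2b) divided by generated phase space (Fig. 2a);
    # bins with no generated events are flagged with NaN.
    gen = np.asarray(gen_2d, dtype=float)
    return np.divide(reco_2d, gen,
                     out=np.full_like(gen, np.nan), where=gen > 0)

def expected_counts(eff, rho3, f_models, yields):
    # Mean event number expected in each 2D bin:
    #   E(M,q) * rho3(M,q) * sum_j y_j f_j(M,q)
    total = sum(y * f for y, f in zip(yields, f_models))
    return eff * rho3 * total
\end{verbatim}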
The three introduced model functions (at the best fit parameter set) are shown in Fig.\,\ref{fig:f_j}.
A very important and striking structure exists below $M(Kpp)$, which could be assigned as the ``$K^-pp$'' signal.
To make the fitting function as simple as possible, let us examine the event distribution by using the same function as was applied in \cite{sada}, {\it i.e.}, a product of a B.W.\ depending only on $M$ and an $S$-wave harmonic-oscillator form-factor depending only on $q$:
\begin{eqnarray}
\label{B.W.G.}
\!\!f\!_{\{\!{\rm {\it Kpp}}\!\}} \! = \!
\frac{ C_{\rm {\it Kpp}} \left( \Gamma\!_{\rm {\it Kpp}} /2 \right)^2 }
{ \left( M - M_{\rm {\it Kpp}} \right)^2 \!\!+ \! \left( \Gamma\!_{\rm {\it Kpp}} /2 \right)^2 }
\, \exp \!\! \left(\! - \!\left(\!\frac{ q } { Q_{\rm {\it Kpp}} }\! \right)^{\!\!2} \right)\!\!.
\end{eqnarray}
where $M_{\rm {\it Kpp}}$ and $\Gamma_{\rm {\it Kpp}}$ are the B.W. pole position and the width, $Q_{\rm {\it Kpp}}$ is the reaction form-factor parameter, and $C_{\rm {\it Kpp}}$ is the normalization constant, as shown in Fig.\,\ref{fig:f_j}a.
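For illustration, Eq.~\ref{B.W.G.} can be coded directly as below; the default parameter values are the best-fit values quoted later in the text, and $C_{\rm {\it Kpp}}$ is left here as an arbitrary normalization.
\begin{verbatim}
import numpy as np

def f_kpp(M, q, M_kpp=2324.0, G_kpp=115.0, Q_kpp=381.0, C_kpp=1.0):
    # Breit-Wigner in M times an S-wave harmonic-oscillator form
    # factor in q; units MeV/c^2 and MeV/c.
    bw = (0.5 * G_kpp) ** 2 / ((M - M_kpp) ** 2 + (0.5 * G_kpp) ** 2)
    return C_kpp * bw * np.exp(-(q / Q_kpp) ** 2)
\end{verbatim}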
A model-function of the QF$_{{\rm \overline{K}A}}$ channel, $f_{{\{Q\!F_{\rm \, \overline{K}A} \}}} \,( M, q )$, is introduced as follows.
As described, we assume that a `${\overline {K}}$' propagates between the two successive reactions.
It consists of 1) $K^-N \rightarrow `{\overline {K}}$'$N$ and 2) non-resonant $`{\overline {K}}$'$ + `NN$'$ \rightarrow \Lambda + p$ in the FSI.
When the $`{\overline {K}}$' propagates at momentum $q$ as an on-shell particle in the spectator's rest frame ($\equiv$ laboratory-frame), the resulting invariant mass $M$ ($\equiv I\!M_{\Lambda p}(`{\overline {K}}\!+\!NN$'$)$) can be given as:
\begin{eqnarray}
\label{eq:QFA-centroid}
M_{F}(q) = \sqrt{4m_N^2 + m_{K}^2 + 4m_N \sqrt{m_{K}^2 + q^2}},
\end{eqnarray}
where $m_N$ and $m_{K}$ are the intrinsic mass of the nucleon and the kaon, respectively.
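As a quick numerical cross-check of Eq.~\ref{eq:QFA-centroid} (a sketch; the mass values below are rounded):
\begin{verbatim}
import numpy as np

M_N, M_K = 938.9, 493.7   # nucleon and kaon masses [MeV/c^2], rounded

def m_f(q):
    # Invariant mass of 'Kbar + NN' for an on-shell Kbar of momentum
    # q [MeV/c] in the spectator rest frame.
    return np.sqrt(4 * M_N**2 + M_K**2
                   + 4 * M_N * np.sqrt(M_K**2 + q**2))

print(m_f(0.0))   # ~2371.5 = 2*m_N + m_K, i.e. the M(Kpp) threshold
\end{verbatim}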
The curve originating at $M=M(Kpp)$ in Fig.\,\ref{fig:data}a is the $M_{F}(q)$, which is consistent with the $q$-dependence of QF$_{\rm \, \overline{K}A}$ as shown in the figure.
Along the line, there are two strong event-concentrations observed at $\theta_n = 0$ ({\it backward $`{\overline {K}}$'}) and $\theta_n = \pi$ ({\it forward $`{\overline {K}}$'}).
To account for the distribution, we defined $f_{{\{Q\!F_{\rm \, \overline{K}A} \}}} \, ( M, q )$ as follows.
For the $q$-direction, we introduced Gaussian and exponential distributions at around the minimum and maximum, respectively, with a constant in between.
For the $M$-direction, a Gaussian around $M_{F}(q)$ is applied to account for the spectator's Fermi-motion.
There is another component, distributed widely over the kinematically allowed region of $M$ and $q$, which was previously observed \cite{sada}.
In reference \cite{sada}, we simply assumed that the yield of this component was proportional to $\rho_{3}(M,\,q)$.
However, with the present much improved statistics, we found that we cannot fit this component with $\rho_{3}(M,\,q)$.
Compared to $\rho_{3}(M,\,q)$, the yields in the heavier $M$ region and lower $q$ region are much weaker, as shown in the fit curve given in Fig.\,\ref{fig:data}b and c.
Thus, we {\it phenomenologically} introduced a distribution function, $f_{\{BG\}} (M,\,q)$, similar to Eq.\,\ref{B.W.G.}, but we expanded the $q$-dependent harmonic oscillator term to allow angular momentum up to $P$-wave, as shown in Fig.\,\ref{fig:f_j}c.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{Fig3.eps}
\caption{
\label{fig:f_j}
Individual 2D fit functions of the three physical processes, a) ``$Kpp$'', b) QF$_{{\rm \overline{K}A}}$ and c) $BG$ in the form of $\mathcal{E} (M,\,q) \, \rho_{3}(M,\,q) \, f_{j} (M,\,q)$ at the best fit parameter set.
The $z$-axis (color code) is the expected-mean event number to be observed.
The pale-blue is for the region where the expected number is below one.
The $z$-axis' color code of c) is changed to show its $(M,\,q)$-dependence clearly.
}
\end{center}
\end{figure}
The data $D(M,\,q)$ can be fitted using the maximum likelihood method, where the negative log-likelihood $lnL_{\{\rm{\it fit}\}}$ to be minimized is built from a Poisson distribution $P(X=D(M,\,q); \lambda_{D}(M,\,q))$ having mean value $\lambda_{D}(M,\,q)$ at each $(M,\,q)$-bin as:
\begin{eqnarray}
\label{Poisson}
lnL_{\{\rm{\it fit}\}} = -\sum_M \,\sum_q \,\ln P(X=D(M,\,q); \lambda_{D}(M,\,q)).
\end{eqnarray}
The fitting function $\lambda_{D}(M,\,q)$ is defined as:
\begin{eqnarray}
\label{PoissonSum}
\lambda_{D}(M,\,q) = \mathcal{E} (M,\,q) \, \rho_{3}(M,\,q) \left( \sum_j \, y_j \, f_{j} (M,\,q) \right),
\end{eqnarray}
where $y_j$ is the yield of the $j$-th physical process, and the first term $\mathcal{E} (M,\,q) \, \rho_{3}(M,\,q)$ is simply Fig.\,\ref{fig:PhS-Efficiency}b.
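A minimal sketch of this binned Poisson likelihood, reusing \texttt{expected\_counts} from the earlier efficiency sketch, is given below; passing the result to a generic minimizer such as \texttt{scipy.optimize.minimize} would be an illustrative choice rather than our actual fitting machinery.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def nll(data_2d, lam):
    # Negative log-likelihood: -sum over (M,q) bins of
    # ln P(X = D(M,q); lambda_D(M,q)), with lam built by
    # expected_counts(...) from the earlier sketch.
    lam = np.asarray(lam)
    mask = lam > 0      # skip empty/kinematically forbidden bins
    return -poisson.logpmf(np.asarray(data_2d)[mask],
                           lam[mask]).sum()
\end{verbatim}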
To examine whether we should introduce more sophisticated model functions, we also studied the following distributions.
In the $^3{\rm He}(K^-,\Lambda p)n$ reaction followed by $\Lambda \rightarrow p \pi^-$ decay, there are five kinematically independent observables in total.
The remaining three kinematical parameters, independent of $M$ and $q$, define the decay kinematics of $``K^-pp" \rightarrow \Lambda p$ and the $\Lambda \rightarrow p \pi^-$ decay asymmetry.
Thus, these parameters are sensitive to $J^P$ of the reaction channels.
For the ``$K^- pp$'' signal, we analyzed events in the window $M = 2.28 \sim 2.38$ GeV/$c^2$ where the major part of the component is located, and $q = 350 \sim 650$ MeV/$c$ where no severe interference is expected with $f_{{\{QF_{\rm \, \overline{K}A} \}}}$.
The angular distributions are fairly flat for any of the three kinematical parameters.
Therefore, the angular distribution is consistent with $S$-wave.
Thus, there is no specific reason to introduce any sophisticated terms in addition to Eq.\,\ref{B.W.G.}.
In fact, a flat distribution is naturally expected if the pole's quantum-number is $J^P = 0^-$.
We also analyzed the angular distributions for $f_{{\{QF_{\rm \, \overline{K}A} \}}}$ and $f_{\{BG\}}$.
However, again we found no specific reason to introduce further terms.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{Fig4.eps}
\caption{
\label{fig:q-select}
$\Lambda p$ invariant mass spectrum for $\Lambda p n$ final state produced in the momentum transfer window of $350 < q < 650$ MeV/$c$.
The efficiency $\mathcal{E}(M,\,q)$ was corrected based on the simulation before the $q$ integration of the data.
Each fitted physical process, which is efficiency corrected and integrated over the $q$-window after the fit, is also given.
}
\end{center}
\end{figure}
We have not considered interference terms between the three physical processes given in Eq.\,\ref{PoissonSum}, to avoid over-fitting our statistically limited data.
Instead, we applied a peak fitting window to reduce the interference effect on our fit result by the following procedures.
We conducted i) {\it the peak fit}, where $f\!_{\{\!{\rm {\it Kpp}}\!\}}(M,\,q)$ is fitted by fixing all the parameters of $f_{{\{Q\!F_{\rm \, \overline{K}A} \}}} \,( M, q )$ and $f_{\{BG\}} (M,\,q)$, within the $q$-window where no severe interference with QF$_{{\rm \overline{K}A}}$ is expected.
We then iterated this procedure together with procedure ii) {\it a global fit to evaluate} $f_{{\{Q\!F_{\rm \, \overline{K}A} \}}} \,( M, q )$ {\it and} $f_{\{BG\}} (M,\,q)$ (by fixing parameters in $f\!_{\{\!{\rm {\it Kpp}}\!\}}$ except for the peak yield $C_{\rm {\it Kpp}}$), until procedures i) and ii) converged.
To exhibit this ``$K^- pp$'' candidate and to present the $M$ spectrum free from the experimental acceptance, we plotted the spectrum by correcting for the detector efficiency for events in the momentum-transfer window of $350\!<\!q\!<\!650$ MeV/$c$, where $\mathcal{E}(M,\,q)$ is mostly well above zero, as shown in Fig.\,\ref{fig:q-select}.
To make the fit values insensitive to the acceptance-correction procedure, we corrected the acceptance as follows.
The data $D(M,\,q)$ were divided by $\mathcal{E} (M,\,q)$ bin-by-bin and integrated over $q$ at given $M$.
We applied the same procedure to the data errors, taking error propagation into account.
For each projected physical process $\rho_{3} f_{j}$ (plotted as the curves in the figure), we integrated over $q$ after replacing the $\mathcal{E} (M,\,q) \, \rho_{3}(M,\,q)$ function (given in Fig.\,\ref{fig:PhS-Efficiency}b) with $\rho_{3}(M,\,q)$ (Fig.\,\ref{fig:PhS-Efficiency}a) as the factor multiplying $f_{j} (M,\,q)$, {\it c.f.}, Eq.\,\ref{PoissonSum}.
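In code, the correction and $q$-integration amount to the following bin-wise operations (a sketch with hypothetical arrays; independent Poisson bin errors $\sqrt{D}$ are assumed for the error propagation):
\begin{verbatim}
import numpy as np

# D[i, j]: counts; eff[i, j]: efficiency E(M, q); axis 1 is q.
corrected = D / eff                    # bin-by-bin correction
spectrum = corrected.sum(axis=1)       # integrate over q at each M

# Error propagation for independent bins:
err = np.sqrt(((np.sqrt(D) / eff) ** 2).sum(axis=1))

# Projected curves: multiply each f_j by rho3 instead of E * rho3,
# then integrate over q in the same way.
\end{verbatim}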
In this window, the yields of the other processes are largely suppressed in contrast to ``$K^- pp$''.
The QF$_{{\rm \overline{K}A}}$ distribution is also clearly separated from the ``$K^- pp$'' peak region, because the QF$_{{\rm \overline{K}A}}$ centroid is kinematically shifted to the heavier side according to Eq.\,\ref{eq:QFA-centroid}; {\it c.f.} the spectral difference of the QF$_{{\rm \overline{K}A}}$ component, inset as blue curves in Fig.\,\ref{fig:data}b and Fig.\,\ref{fig:q-select}.
As a result, a distinct peak is observed below $M(Kpp)$.
\section{Fit Result}
The $S$-wave parameters obtained are: the mass eigenvalue $M_{\rm {\it Kpp}} = 2324 \pm 3 \, (stat.) \,^{+6}_{-3} \,(sys.)$ MeV/$c^2$ ({\it i.\,e.}, $B_{\rm {\it Kpp}} \equiv M(Kpp) - M_{\rm {\it Kpp}} = 47 \pm 3 \, (stat.) \,^{+3}_{-6} \,(sys.)$ MeV), the width $\Gamma_{\rm {\it Kpp}} = 115 \pm 7 \, (stat.) \,^{+10}_{-20} \,(sys.)$ MeV, and the reaction form-factor parameter $Q_{\rm {\it Kpp}} = 381 \pm 14 \, (stat.)\,^{+57}_{-0} \,(sys.)$ MeV/$c$.
The $q$-integrated ``$K^- pp$'' formation yield below the threshold going to the $\Lambda p$ decay channel is evaluated to be $\sigma_{\rm {\it Kpp}} \cdot Br_{\Lambda p} = 7.2 \pm 0.3 \, (stat.) \,^{+0.6}_{-1.0} \,(sys.)$ $\mu$b (for $M<M(Kpp)$).
For the complete integration over all $q$ and $M$, the cross-section becomes $\sigma_{\rm {\it Kpp}}^{tot} \cdot Br_{\Lambda p} = 11.8 \pm 0.4 \, (stat.)\,^{+0.2}_{-1.7} \,(sys.)$ $\mu$b.
We evaluated the systematic errors caused by the spectrometer magnetic-field strength (calibrated with the invariant masses of $\Lambda$ and $K^0$ decays), by the binning of the spectrum, and by the contamination of the other final states ($\Sigma^0 pn$ and $\Sigma^-pp$) in the $\Lambda p n$ event selection.
To be conservative, their effects on the fit values are added linearly.
A more detailed analysis will be given in a forthcoming full paper.
The obtained $B_{\rm {\it Kpp}} \sim 50$ MeV is much deeper than that reported in our first publication, in which the assumption of a single-pole structure was invalid.
It is also much deeper than chiral-symmetry-based theoretical predictions.
The width $\Gamma_{\rm {\it Kpp}} \sim 110$ MeV is rather large, which implies that the system is very absorptive.
On the other hand, it should be similar to that of $\Lambda(1405) \rightarrow \Sigma\pi$, if ``$K^- pp$'' decays like `$\Lambda(1405)$'$ + $`$p$'$\rightarrow \Sigma \pi p$.
Thus, the observed large width indicates that the non-mesonic $YN$ channels would be the major decay mode of the ``$K^- pp$''.
Interestingly, the observed $Q_{\rm {\it Kpp}} \sim$ 400 MeV/$c$ is very large.
The large $Q_{\rm {\it Kpp}}$ value implies the formation of a very compact ($\,\sim0.5\,$fm) system referring to $\hbar\sim$\,200\,MeV/$c\,\cdot\,$fm.
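As a rough numerical check of this length scale (taking $\hbar \approx 197.3$ MeV/$c\,\cdot\,$fm for illustration),
\begin{eqnarray*}
\frac{\hbar}{Q_{\rm {\it Kpp}}} \approx \frac{197.3~{\rm MeV}/c\cdot{\rm fm}}{381~{\rm MeV}/c} \approx 0.52~{\rm fm}.
\end{eqnarray*}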
The compactness of the system is also supported by the large $B_{\rm {\it Kpp}}$.
However, the present $Q_{\rm {\it Kpp}}$ can be strongly affected by the primary ${\overline{K}}N \rightarrow {\overline{K}}N$ reaction in the formation process, so one needs more study to evaluate the static form-factor parameter of ``$K^- pp$'' to deduce its size (or nuclear density) more quantitatively.
\section{Discussion and Conclusion}
We have demonstrated the existence of a peak structure in ${\rm{\it IM}}_{\!\Lambda p}$ below $M(Kpp)$, which can be kinematically separated very clearly from QF$_{{\rm \overline{K}A}}$ by selecting the momentum transfer window of $350 < q < 650$ MeV/$c$.
As shown in Fig.\,\ref{fig:data}a, the yield of the ``$K^- pp$'' distribution decreases near $\theta_n = 0$ as a function of $q$, and it is approximately proportional to the phase-space volume defined by the Jacobian ({\it c.f.}, Fig.\,\ref{fig:PhS-Efficiency}a (or b)).
This is naturally expected if the $S$-wave harmonic-oscillator form-factor given in Eq.\,\ref{B.W.G.} is valid.
On the other hand, the QF$_{{\rm \overline{K}A}}$ distribution is highly concentrated at $\theta_n = 0$, where the phase space $\rho_{3}(M,\,q)$ is vanishing.
This is consistent with our previous result \cite{hashimoto}, in which no structure was found below $M(Kpp)$ at $\theta_n = 0$, {\it{i.e.}}, the leaking-tail of QF$_{{\rm \overline{K}A}}$ into the bound region hides the structure below $M(Kpp)$ at $\theta_n = 0$.
The present $\Lambda p n$ final state is the simplest channel for $K^-$ interacting with $^3$He.
In this final state, the ``{\it kinematical anomaly}'' is seen only in ${\rm{\it IM}}_{\!\Lambda p}$, with an angular distribution consistent with $S$-wave.
Thus, there is no reasonable explanation as to why a peak structure could be formed below $M(Kpp)$ other than ``$K^- pp$''.
However, one may wonder whether a spurious bump near $M(Kpp)$ might be formed from some intermediate state converting to a $\Lambda p n$ final state in the FSI.
Here we discuss possible candidates for such an intermediate state.
Energetically, the possible intermediate states could be `$\Lambda \!+\!p$', `$\Sigma \!+\!N$' and `$\Lambda(1405) \!+\!N$' below `$K^- \!\!+\!p\!+\!p$', which has an $s$-quark and two baryons (`$\Sigma(1385) \!+\!N$' is excluded because it requires $P$-wave).
In other words, a $Y^{(*)}$ (baryon with an $s$-quark) could be generated by the primary 2NA reaction, and the $Y^{(*)}$ could make a successive conversion reaction with another spectator nucleon, to form a $\Lambda p n$ final state due to the FSI.
Similar to Eq.\,\ref{eq:QFA-centroid}, the ${\rm{\it IM}}_{\!\Lambda p}$ of these channels can be given as:
\begin{eqnarray}
\label{eq:YN-centroid}
{\rm{\it IM}}_{\!\Lambda p}\left(\,{\mbox{`$Y^{\!(*)}\!+\!N$'}}\,\right) \approx \sqrt{ m_N^2 + m_{{Y}^{\!(*)}}^2 + 2 m_N \sqrt{ m_{{Y}^{(*)}}^2 + q^2} }.
\end{eqnarray}
First of all, the observed ``$K^- pp$'' event concentration does not follow the $q$-dependence required by Eq.\,\ref{eq:YN-centroid}.
Moreover, the ${\rm{\it IM}}_{\!\Lambda p}$ values of the `$\Lambda \!+\!p$' ($ \sim$2100 MeV/$c^2$) and `$\Sigma \!+\!N$' ($ \sim$2175 MeV/$c^2$) channels are much too small, even at the kinematical boundary of $q\sim 500$ MeV/$c$.
The ${\rm{\it IM}}_{\!\Lambda p}$ of `$\Lambda(1405) \!+\! N$' is $ \sim 2371$ MeV/$c^2$ at the observed average momentum transfer of $q\sim 450$ MeV/$c$ (assuming a $\Lambda(1405)$ mass of 1405.1 MeV/$c^2$ (PDG \cite{Tanabashi:2018oca})).
Thus the difference from $M_{\rm {\it Kpp}}$ ($\sim 2324$ MeV/$c^2$) is as large as five standard deviations.
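These centroids can be checked directly from Eq.\,\ref{eq:YN-centroid}; a minimal numerical sketch (the masses, in MeV/$c^2$, are illustrative PDG-like values) reads:
\begin{verbatim}
import numpy as np

def im_lambda_p(m_Y, q, m_N=938.3):
    """IM_{Lambda p} centroid of a 'Y + N' channel, Eq. (YN-centroid);
    masses in MeV/c^2, momentum transfer q in MeV/c."""
    return np.sqrt(m_N**2 + m_Y**2 + 2.0*m_N*np.sqrt(m_Y**2 + q**2))

print(im_lambda_p(1115.7, 500))  # 'Lambda + p'       -> ~2102
print(im_lambda_p(1193.0, 500))  # 'Sigma + N'        -> ~2175
print(im_lambda_p(1405.1, 450))  # 'Lambda(1405) + N' -> ~2371
\end{verbatim}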
A direct $\Lambda p$ formation due to 2NA ($K^- + $`$pp$'$\rightarrow \Lambda + p$) could also be possible.
In this reaction, the kaon momentum is 1 GeV/$c$, and the resulting $\Lambda p$ invariant mass $M$ calculated from Eq.\,\ref{eq:QFA-centroid} is 2.8 GeV/$c^2$.
In fact, an event concentration is observed at $(M\!c^2, \,qc) \sim (2.8, \,1.0)$ GeV as shown in Fig.\,\ref{fig:data}, but it is well separated from the region of interest.
Therefore, none of these can be valid candidates.
More complicated channels are even less likely to form a kinematical anomaly at the specific energy near $M(Kpp)$.
The ``$K^- pp$'' signal is significantly above the $M(\Lambda p)$ threshold, so it is unreasonable to explain it as a $\Lambda$-hypernucleus.
One may still wonder if the ``$K^- pp$'' signal could be due to the $\Lambda(1405)$ - proton hypernucleus ($_{\Lambda(1405)}^{~~~~~~~2}\rm{H}$), so that the meson (or constituent anti-quark) {\it degree-of-freedom} is already quenched in the system.
However, this is not consistent with present data.
In the $q$ distribution, the ``$K^- pp$'' signal located at lower $q$ extends up to $\sim$ 650 MeV/$c$.
Thus a tightly bound ``$K^- pp$'' is more natural than a loosely bound $_{\Lambda(1405)}^{~~~~~~~2}\rm{H}$, whose binding is measured from $M(\Lambda(1405)p)$.
It would be even less bound if the pole position of $\Lambda(1405)$ is $\sim$ 1425 rather than 1405 MeV/$c^2$ as the chiral-unitary model suggests \cite{Tanabashi:2018oca}.
In the $M$ distribution, the signal is much wider than that of $\Lambda(1405)$, so the major decay channel should be $YN$ rather than $\pi\Sigma N$, in contrast to $\Lambda(1405)\rightarrow \pi \Sigma $ (100\%).
This drastic change of the decay property is also not consistent with $_{\Lambda(1405)}^{~~~~~~~2}\rm{H}$ interpretation.
Moreover, if $\Lambda(1405)$ is a ``$K^- p$'' bound system, as is now rather widely accepted, the discrimination between the two interpretations is meaningless from the beginning.
It is more natural to interpret that the ${\overline K}$ in ``$K^- pp$'' is energetically stabilized ($B_{\rm {\it Kpp}}$ $\sim 50$ MeV) compared to that in ``$K^- p$'' ($\equiv\Lambda(1405)$: $B_{\rm {\it Kp}} \approx 5 \sim 25$ MeV), because of the presence of two protons (nucleons) nearby.
At the same time, the decay width becomes large ($\Gamma_{\rm {\it Kpp}} \sim$ 110 MeV with respect to $\Gamma_{\rm {\it Kp}} \sim$ 50 MeV), for the same reason.
The existence of the QF$_{{\rm \overline{K}A}}$ channel adjacent to ``$K^- pp$'' also supports this interpretation, because if the sub-threshold virtual `${\overline {K}}$' can form a nuclear bound state by capturing spectator nucleons, then it is natural to expect higher-energy virtual `${\overline {K}}$' production in `vacuum' (above $M(Kpp)$), which could be followed by `$K^-$'$+pp\rightarrow \Lambda p$ in FSI.
Thus, the simplest and most natural interpretation is a kaonic nuclear bound state ``$K^- pp$'': a system composed of a $K^-$-meson and two protons with $J^P=0^-$, {\it i.\,e.}, a highly excited novel form of nucleus with a kaon, in which {\it the mesonic degree-of-freedom} still holds.
In summary, the quasi-free virtual `$\overline{K}$' production $K^-$`$N$'$ \rightarrow $`$\overline{K}$'$N$ is the key reaction in the formation process, and $M(Kpp)$ is a doorway below which the ``$K^- pp$'' is formed.
Na\"{\i}vely speaking, if the energy of the `$\overline{K}$' produced is below its intrinsic mass ($E_K<m_K$), then the ``$K^- pp$'' will be formed.
On the other hand, if it is above the intrinsic mass ($E_K>m_K$), then the QF$_{{\rm \overline{K}A}}$ reaction happens (or the kaon escapes from nuclei).
\section*{Acknowledgements}
The authors are grateful to the staff members of J-PARC/KEK for their extensive efforts especially on the stable operation of the facility.
We are also grateful for fruitful discussions with Professors Akinobu Dote, Toru Harada, Takayasu Sekihara, Khin Swe Myint and Yoshinori Akaishi.
This work is partly supported by MEXT Grants-in-Aid 26800158, 17K05481, 26287057, 24105003, 14102005 and 17070007.
Part of this work is supported by the Ministero degli Affari Esteri e della Cooperazione Internazionale, Direzione Generale per la Promozione del Sistema Paese (MAECI), StrangeMatter project.
\section{Introduction}\label{S:1}
Given a complete filtered probability space $\big(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq0},P\big)$ satisfying the usual conditions, we consider the existence and uniqueness of strong solutions of a class of stochastic differential equations (SDEs) in $\mathbb{R}^d$, viz.
\begin{equation}
\begin{split}
dU_{t} &=\bar b(U_{t-};\xi)dt+ \bar\sigma(U_{t-};\xi)\cdot dB_t +\int_{(0 < |x| < 1)} \bar F(U_{t-},x;\xi)\, \widetilde
N(dtdx)\\
&+\int_{(|x| \geq 1)} \bar G(U_{t-},x;\xi) \,
N(dtdx), \quad t\geq 0 \\
U_0&=\kappa,
\end{split}
\end{equation}
where
\begin{enumerate}[label=(\roman*)]
\item $\{B_t\}$ denotes an $\mathbb{R}^d$ valued standard Brownian motion and $N$ a Poisson random measure driven by a L\'evy measure $\nu$. $\widetilde N$ denotes the corresponding compensated random measure. We also assume that $B$ and $N$ are independent.
\item The parameter $\xi$ is an $\mathcal{F}_0$-measurable random variable and takes values in some specific Hilbert space, viz. the Hermite-Sobolev spaces (see Section \ref{S:2}). The random variable $\kappa$ is $\mathbb{R}^d$ valued and $\mathcal{F}_0$-measurable. Unless stated otherwise, $\xi$ and $\kappa$ will be taken to be independent of the noise $B$ and $N$.
\item The coefficients $\bar\sigma, \bar b, \bar F$ and $\bar G$ are defined in terms of $\sigma, b, F$ and $G$ which are the coefficients of an associated stochastic PDE, see for example \cite[p. 524]{MR3647067}, \cite[p. 170]{MR3687773}, \cite[p. 237]{MR3063763}. Note that the coefficients are allowed to be $\mathcal{F}_0$ measurable.
\end{enumerate}
Such SDEs occur in connection with a class of stochastic PDEs whose solutions take values in the space of tempered distributions $\mathcal{S}^\prime$, see for example \cite{MR3063763, MR3647067, JOTP-erratum, MR3687773}. We can study the ergodicity/stationarity properties of these stochastic PDEs via the corresponding finite-dimensional SDEs. A standard approach to proving existence and uniqueness results for SDEs is to assume that the coefficients are Lipschitz (see \cite{MR1011252, MR2020294, MR2512800, MR2560625, MR1398879, MR2001996, MR1121940} and the references therein). The goal of this article is to describe hypotheses, including appropriately parameterized versions of Lipschitz regularity of the coefficients, and to prove in detail the corresponding existence and uniqueness results.
We now describe the layout of the paper. In Section \ref{S:2}, we describe the space of Schwartz class functions $\mathcal{S}$ and its dual, the space of tempered distributions $\mathcal{S}^\prime$. We also recall definitions of the Hermite-Sobolev spaces $\mathcal{S}_p, p \in \mathbb{R}$.
In Section \ref{S:3}, we state the notation and hypotheses followed in the rest of the article. In Theorem \ref{nrm-bd-rndm-inl}, the existence and uniqueness result is proved for the reduced equation with `global Lipschitz' coefficients; it is then proved for the general case (i.e.\ involving the large jumps) in Theorem \ref{interlacing-global-sde} by an interlacing technique. In Theorem \ref{nrm-sqre-rndm-inl-fnl}, we prove the result for `local Lipschitz' coefficients.
In \cite[Proposition 3.7]{Levy-SPDE}, it is proved that the `local Lipschitz' regularity of the coefficients $\bar\sigma, \bar b, \bar F$ follow from explicit regularity assumptions on $\sigma, b, F$ provided other hypotheses are satisfied. Furthermore, the existence and uniqueness problems for the corresponding SPDEs are studied in \cite{Levy-SPDE}.
\section{Topology on Schwartz space}\label{S:2}
Let $\mathcal{S}$ be the space of rapidly decreasing smooth functions on $\mathbb{R}^d$ with dual $\mathcal{S}^\prime$, the space of tempered distributions (see \cite{MR771478}). Let $\mathbb{Z}^d_+:=\{n=(n_1,\cdots, n_d): \; n_i \text{ non-negative integers}\}$. If $n\in\mathbb{Z}^d_+$, we define $|n|:=n_1+\cdots+n_d$.
For $p \in \mathbb{R}$, consider the increasing norms $\|\cdot\|_p$, defined by the inner
products
\begin{equation}
\langle f,g\rangle_p:=\sum_{n\in\mathbb{Z}^d_+}(2|n|+d)^{2p}\langle f,h_n\rangle\langle g,h_n\rangle,\ \ \ f,g\in\mathcal{S}.
\end{equation}
In the above equation, $\{h_n: n\in\mathbb{Z}^d_+\}$ is an orthonormal basis for $\mathcal{L}^2(\mathbb{R}^d,dx)$ given by the Hermite functions and $\langle\cdot,\cdot\rangle$ is the usual
inner product in $\mathcal{L}^2(\mathbb{R}^d,dx)$. For $d=1$,
$h_n(t) :=(2^n n!\sqrt{\pi})^{-1/2}\exp\{-t^2/2\}H_n(t)$, $t \in \mathbb{R}$, where $H_n$ are the Hermite polynomials (see \cite{MR771478}). For $d > 1$, $h_n(x_1,\cdots,x_d) := h_{n_1}(x_1)\cdots h_{n_d}(x_d)$ for all $(x_1,\cdots,x_d) \in \mathbb{R}^d, n\in\mathbb{Z}^d_+$, where the Hermite functions on the right hand side are one-dimensional. We define the Hermite-Sobolev spaces $\mathcal{S}_p, p \in \mathbb{R}$ as the completion of $\mathcal{S}$ in
$\|\cdot\|_p$. Note that the dual space $\mathcal{S}_p^\prime$ is isometrically isomorphic with $\mathcal{S}_{-p}$ for $p\geq 0$. We also have $\mathcal{S} = \bigcap_{p}(\mathcal{S}_p,\|\cdot\|_p), \mathcal{S}^\prime=\bigcup_{p>0}(\mathcal{S}_{-p},\|\cdot\|_{-p})$ and $\mathcal{S}_0 = \mathcal{L}^2(\mathbb{R}^d)$.
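For instance, in $d=1$ the norm of a function with finitely many non-zero Hermite coefficients follows immediately from the definition; a minimal numerical sketch (the test coefficients below are an illustrative choice) is:
\begin{verbatim}
import numpy as np

# ||f||_p^2 = sum_n (2|n| + d)^{2p} <f, h_n>^2, here with d = 1 and
# f = h_0 + 0.5 * h_3, so <f, h_0> = 1 and <f, h_3> = 0.5.
def sobolev_norm(coeffs, p, d=1):
    n = np.arange(len(coeffs))
    w = (2.0 * n + d) ** (2.0 * p)
    return np.sqrt(np.sum(w * np.asarray(coeffs) ** 2))

coeffs = [1.0, 0.0, 0.0, 0.5]
for p in (-1.0, 0.0, 1.0):
    print(p, sobolev_norm(coeffs, p))  # the norms increase with p
\end{verbatim}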
For $x \in \mathbb{R}^d$, let $\tau_x$ denote the translation operators on $\mathcal{S}$
defined by
$(\tau_x\phi)(y):=\phi(y-x), \, \forall y \in \mathbb{R}^d$. These operators can be
extended to $\tau_x:\mathcal{S}'\to \mathcal{S}'$ by
\[\inpr{\tau_x\phi}{\psi}:=\inpr{\phi}{\tau_{-x}\psi},\, \forall \psi \in
\mathcal{S}.\]
\begin{proposition}\label{tau-x-estmte}
The translation operators $\tau_x, x \in \mathbb{R}^d$ have the following properties:
\begin{enumerate}[label=(\alph*)]
\item (\cite[Theorem 2.1]{MR1999259}) For $x \in \mathbb{R}^d$ and any $p \in \mathbb{R}$, $\tau_x: \mathcal{S}_p\to\mathcal{S}_p$
is a bounded linear map. In particular, there exists a real polynomial $P_k$ of
degree $k = 2(\lfloor|p|\rfloor +1)$ such that
\[\|\tau_x\phi\|_p\leq P_k(|x|)\|\phi\|_p, \, \forall \phi \in \mathcal{S}_p,\]
where $|x|$ denotes the Euclidean norm of $x$.
\item (\cite[Proposition 3.1]{MR2373102}) Fix $\phi \in \mathcal{S}_p$ for some $p \in \mathbb{R}$. The map $x \in\mathbb{R}^d \mapsto \tau_x\phi \in \mathcal{S}_p$ is continuous.
\end{enumerate}
\end{proposition}
\section{Finite dimensional SDEs}\label{S:3}
\subsection{Setup and notations}\label{S:3-1}
We use the following notations throughout the paper.
\begin{itemize}
\item The set of positive integers will be denoted by $\mathbb{N}$. Recall that for $x \in \mathbb{R}^n$, $|x|$ denotes its Euclidean norm. The transpose of any element $x \in \mathbb{R}^{n\times m}$ will be denoted by $x^t$.
\item For any $r > 0$, define $\mathcal{O}(0,r):=\{x \in \mathbb{R}^d: |x|< r\}$. Then $\overline{\mathcal{O}(0,r)} = \{x \in \mathbb{R}^d: |x| \leq r\}$ and $\mathcal{O}(0,r)^c = \{x \in \mathbb{R}^d: |x|\geq r\}$.
\item Let $\big(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq0},P\big)$ be a filtered complete probability space satisfying the usual conditions, viz.\ $\mathcal{F}_0$ contains all $A\in\mathcal{F}$ such that $P(A)=0$, and $\mathcal{F}_t=\bigcap_{s>t}\mathcal{F}_s$ for all $t \geq 0$.
\item Let $p>0$. Let $\sigma = (\sigma_{ij})_{d\times d}, b=(b_1, \cdots, b_d)^t$ be such that $\sigma_{ij}, b_i:\Omega\to\mathcal{S}_{p}$ are $\mathcal{F}_0$ measurable and
\[\beta:= \sup\{\|\sigma_{ij}(\omega)\|_p, \|b_i(\omega)\|_p:\omega \in \Omega, 1 \leq i,j \leq d\} < \infty.\tag*{($\mathbf{\sigma b}$)} \label{sigma-b}\]
\item Define $\bar\sigma:\Omega\times\mathbb{R}^d\times\mathcal{S}_{-p}\to \mathbb{R}^{d\times d}$ and $\bar b:\Omega\times\mathbb{R}^d\times\mathcal{S}_{-p}\to \mathbb{R}^d$ by $\bar\sigma(\omega,z;y) := \inpr{\sigma(\omega)}{\tau_z y}$ and $\bar b(\omega,z;y) :=
\inpr{b(\omega)}{\tau_z y}$, where $(\inpr{\sigma(\omega)}{\tau_z y})_{ij}:= \inpr{\sigma_{ij}(\omega)}{\tau_z y}$ and $(\inpr{b(\omega)}{\tau_z y})_i := \inpr{b_i(\omega)}{\tau_z y}$.
\item Let $F:\Omega\times\mathcal{S}_{-p}\times \mathcal{O}(0,1) \to
\mathbb{R}^d$ and $G:\Omega\times\mathcal{S}_{-p}\times \mathcal{O}(0,1)^c \to
\mathbb{R}^d$ be $\mathcal{F}_0\otimes \mathcal{B}(\mathcal{S}_{-p})\otimes\mathcal{B}(\mathcal{O}(0,1))/\mathcal{B}(\mathbb{R}^d)$ and $\mathcal{F}_0\otimes \mathcal{B}(\mathcal{S}_{-p})\otimes\mathcal{B}(\mathcal{O}(0,1)^c)/\mathcal{B}(\mathbb{R}^d)$ measurable respectively. Here $\mathcal{B}(\mathcal{K})$ denotes the Borel $\sigma$-field of the set $\mathcal{K}$.
\item Define $\bar F: \Omega\times\mathbb{R}^d\times \mathcal{O}(0,1)\times\mathcal{S}_{-p}\to \mathbb{R}^d$, $\bar G: \Omega\times\mathbb{R}^d \times \mathcal{O}(0,1)^c\times\mathcal{S}_{-p}\to \mathbb{R}^d$ by $\bar F(\omega,z,x;y) := F(\omega,\tau_z y,x), \ \bar G(\omega,z,x;y) := G(\omega,\tau_z y, x)$.
\item Let $\{B_t\}$ denote a standard Brownian motion and let $N$ denote a Poisson random measure driven by a L\'evy measure $\nu$. $\widetilde N$ will denote the corresponding compensated random measure. We also assume that $B$ and $N$ are independent.
\end{itemize}
Consider the following SDE in $\mathbb{R}^d$,
\begin{equation}\label{fd-sde-sln}
\begin{split}
dU_{t} &=\bar b(U_{t-};\xi)dt+ \bar\sigma(U_{t-};\xi)\cdot dB_t +\int_{(0 < |x| < 1)} \bar F(U_{t-},x;\xi)\, \widetilde
N(dtdx)\\
&+\int_{(|x| \geq 1)} \bar G(U_{t-},x;\xi) \,
N(dtdx), \quad t\geq 0 \\
U_0&=\kappa,
\end{split}
\end{equation}
where $\xi$ is an $\mathcal{S}_{-p}$ valued $\mathcal{F}_0$-measurable random variable and $\kappa$ is an $\mathbb{R}^d$ valued $\mathcal{F}_0$-measurable random variable. Unless stated otherwise, $\xi$ and $\kappa$ will be taken to be independent of the noise $B$ and $N$. Note that the $i$-th component of $\int_0^t \bar\sigma(U_{s-};\xi)\cdot dB_s$ is $\sum_{j=1}^d\int_0^t \bar\sigma_{ij}(U_{s-};\xi)\, dB^j_s$. We list some hypotheses.
\begin{enumerate}[label=\textbf{(F\arabic*)},ref=\textbf{(F\arabic*)}]
\item\label{F1} For all $\omega\in\Omega$ and $x \in \mathcal{O}(0,1)$ there exists a constant $C_x \geq 0$ s.t.
\begin{equation}\label{asm1}
\lvert F(\omega,y_1,x)-F(\omega,y_2,x)\rvert\leq C_x\|y_1-y_2\|_{-p-\frac{1}{2}}, \forall y_1,y_2\in\mathcal{S}_{-p}.
\end{equation}
We assume $C_x$ to depend only on $x$ and to be independent of $\omega$. Since $\|y\|_{-p-\frac{1}{2}} \leq \|y\|_{-p}, \forall y \in \mathcal{S}_{-p}$, we have
\[\lvert F(\omega,y_1,x)-F(\omega,y_2,x)\rvert\leq C_x\|y_1-y_2\|_{-p}, \forall y_1,y_2\in\mathcal{S}_{-p}.\]
\item\label{F2} The constant $C_x$ mentioned above has the following properties, viz.
\[\sup_{|x|<1}C_x<\infty,\quad \int_{(0<|x|<1)}C_x^2\, \nu(dx)<\infty.\]
\item\label{F3} $\sup_{\omega\in\Omega,|x|<1}|F(\omega,0,x)|<\infty$ and $\sup_{\omega\in\Omega}\int_{(0<|x|<1)} |F(\omega,0,x)|^2\,\nu(dx)<\infty$. \end{enumerate}
\begin{enumerate}[label=\textbf{(G\arabic*)},ref=\textbf{(G\arabic*)}]
\item\label{G1} The mapping $y\mapsto G(\omega,y,x)$ is continuous for all $x \in \mathcal{O}(0,1)^c$ and $\omega \in \Omega$.
\end{enumerate}
\begin{remark}
Examples of coefficients $F$ and $G$ satisfying the above hypotheses can be constructed. See \cite[Example 3.1]{Levy-SPDE}.
\end{remark}
\begin{lemma}
[{\cite[Lemma 3.2]{Levy-SPDE}}]
\label{f-bd}
Assume \ref{F1}, \ref{F2} and \ref{F3}. Then, for any bounded set $\mathcal{K}$ in $\mathcal{S}_{-p}$ the following are true.
\begin{enumerate}[label=(\roman*)]
\item $\sup_{\omega\in\Omega,y\in \mathcal{K},|x|<1}|F(\omega,y,x)|<\infty$.
\item $\sup_{\omega\in\Omega,y\in \mathcal{K}}\int_{(0<|x|<1)}|F(\omega,y,x)|^2\nu(dx)=:\alpha(\mathcal{K})<\infty$.
\item $\sup_{\omega\in\Omega,y\in \mathcal{K}}\int_0^t\int_{(0<|x|<1)}|F(\omega,y,x)|^4\nu(dx)ds<\infty$ for all $0\leq t<\infty$.
\end{enumerate}
\end{lemma}
Using the continuity result in Proposition \ref{tau-x-estmte}, the next result follows.
\begin{lemma}[{\cite[Lemma 3.3]{Levy-SPDE}}]
Suppose \ref{G1} holds. Then the map $z \in \mathbb{R}^d \to \bar G(\omega, z,x; \xi(\omega)) = G(\omega,\tau_z\xi(\omega),x) \in \mathbb{R}^d$ is continuous for all $x \in \mathcal{O}(0,1)^c$ and $\omega \in \Omega$.
\end{lemma}
\subsection{Global Lipschitz coefficients}\label{S:3-2}
In this subsection, we establish the existence and uniqueness of strong solutions of \eqref{fd-sde-sln} under `global Lipschitz' coefficients $\bar\sigma, \bar b, \bar F$. To do this we first study the same problem for the corresponding reduced equation, viz.
\begin{equation}\label{reduced-fd-sde}
\begin{split}
dU_{t} &=\bar b(U_{t-};\xi)dt+ \bar\sigma(U_{t-};\xi)\cdot dB_t +\int_{(0 < |x| < 1)} \bar F(U_{t-},x;\xi)\, \widetilde
N(dtdx), \quad t\geq 0 \\
U_0&=\kappa;
\end{split}
\end{equation}
with $\xi$ and $\kappa$ as in \eqref{fd-sde-sln}. Later, in Theorem \ref{interlacing-global-sde} we prove the result for equation \eqref{fd-sde-sln}.
\begin{theorem}
\label{nrm-bd-rndm-inl}
Let \ref{sigma-b}, \ref{F1}, \ref{F2} and \ref{F3} hold. Suppose the following conditions are satisfied.
\begin{enumerate}[label=(\roman*)]
\item $\kappa, \xi$ are $\mathcal{F}_0$ measurable, as stated in \eqref{fd-sde-sln}.
\item (Global Lipschitz in $z$, locally in $y$) For every bounded set $\mathcal{K}$ in $\mathcal{S}_{-p}$, there exists a constant $C(\mathcal{K})>0$ such that for all $z_1, z_2\in\mathbb{R}^d,\ y\in \mathcal{K}$ and $\omega\in\Omega$
\begin{equation}\label{Lipschitz-condition-rnm-inl}
\begin{split}
&|\bar{b}(\omega,z_1;y) - \bar{b}(\omega,z_2;y)|^2+ |\bar{\sigma}(\omega,z_1;y)-
\bar{\sigma}(\omega,z_2;y)|^2\\
&+\int_{(0 < |x| < 1)}|\bar{F}(\omega,z_1,x;y) - \bar{F}(\omega,z_2,x;y)|^2 \, \nu(dx) \leq C(\mathcal{K})\,
|z_1
- z_2|^2.
\end{split}
\end{equation}
\end{enumerate}
Then \eqref{reduced-fd-sde} has an $(\mathcal{F}_t)$ adapted strong solution $\{X_t\}$ with rcll paths. Pathwise uniqueness of solutions also holds, i.e. if $\{X^1_t\}$ is another such solution, then $P(X_t=X_t^1,t\geq0)=1$.
\end{theorem}
\begin{proof}
We split the proof into the following three steps, depending on the assumptions on the random variables $\kappa$ and $\xi$.
\begin{enumerate}[label=Step \arabic*:]
\item $\kappa, \xi$ are $\mathcal{F}_0$ measurable with $\mathbb{E}|\kappa|^2<\infty$ and $\sup_{\omega\in\Omega} \|\xi(\omega)\|_{-p} < \infty$.
\item $\kappa, \xi$ are $\mathcal{F}_0$ measurable with $\mathbb{E}|\kappa|^2<\infty$.
\item $\kappa, \xi$ are $\mathcal{F}_0$ measurable.
\end{enumerate}
Positive constants appearing in our computations may be written as $\gamma$ and may change their values from line to line.
\underline{Step 1:} The existence is established by Picard iterations and the uniqueness by Gronwall inequality arguments. This follows the standard approach as in \cite[Theorem 5.2.1]{MR2001996}, where SDEs driven by Brownian motion were considered. In the present case, we get the linear growth of the coefficients directly from the structure of the coefficients, see \eqref{picard3} below.
First we prove the uniqueness. Let $\{U_t^1\}$ and $\{U_t^2\}$ be two solutions of \eqref{reduced-fd-sde}. Define, for $\omega\in\Omega$
\begin{align*}
&\Theta(t,\omega):=\bar b(\omega,U^1_{t-}(\omega);\xi(\omega))-\bar b(\omega,U^2_{t-}(\omega);\xi(\omega)),\\
&\Xi(t,\omega):=\bar\sigma(\omega,U^1_{t-}(\omega);\xi(\omega))-\bar\sigma(\omega,U^2_{t-}(\omega);\xi(\omega)),\\
&\Psi(t,x,\omega):=\bar F(\omega,U^1_{t-}(\omega),x;\xi(\omega))-\bar F(\omega,U^2_{t-}(\omega),x;\xi(\omega)).
\end{align*}
Using \eqref{Lipschitz-condition-rnm-inl}, Doob's $\mathcal{L}^2$ maximal
inequality and It\^o isometry, we have for some positive constant $\gamma$,
\begin{align}\label{diff1}
\begin{split}
&\mathbb{E}\left(\sup_{0\leq s\leq t}|U_s^1-U_s^2|^2\right)\\
&\leq 3t\ \mathbb{E}\int_0^t|\Theta(s)|^2ds+12\mathbb{E}\int_0^t|\Xi(s)|^2ds+12\mathbb{E}\int_0^t\int_{(0 < |x| < 1)}|\Psi(s,x)|^2\nu(dx)ds\\
&\leq 3\gamma(t+8) \int_0^t\mathbb{E}\left(\sup_{0\leq u\leq s}|U_{u}^1-U_{u}^2|^2\right)ds.
\end{split}
\end{align}
We then obtain the uniqueness of the solutions by a Gronwall inequality argument.
To show the existence of a strong solution, we use Picard iteration. Set $U_t^{(0)} = \kappa$ and define
\begin{equation}\label{fd-sde-sln-rndm-inl-pkrd}
U_{t}^{(k+1)}:=\kappa+\int_0^t\bar b(U_{s-}^{(k)};\xi)ds+ \int_0^t\bar\sigma(U_{s-}^{(k)};\xi)\cdot dB_s
+\int_0^t\int_{(0 < |x| < 1)} \bar F(U_{s-}^{(k)},x;\xi)\, \widetilde
N(dsdx),
\end{equation}
for all $k\geq0$. Fix $M \in \mathbb{N}$. For $k\geq1$, $t\in[0,M]$ we have
\begin{equation}\label{picard1}
\mathbb{E}\left(\sup_{0\leq s\leq t}|U_s^{(k+1)}-U_s^{(k)}|^2\right)\leq 3\gamma(M+8)\int_0^t\mathbb{E}\left(\sup_{0\leq u\leq s}|U_u^{(k)}-U_u^{(k-1)}|^2\right)ds.
\end{equation}
By \eqref{Lipschitz-condition-rnm-inl}, there exists a constant $C = C(Range(\xi))$ such that for $z\in\mathbb{R}^d$, $y\in Range(\xi)$
\begin{equation}\label{picard2}
\begin{split}
&|\bar{b}(\omega,z;y) - \bar{b}(\omega,0;y)|^2+ |\bar{\sigma}(\omega,z;y)-
\bar{\sigma}(\omega,0;y)|^2\\
&+\int_{(0 < |x| < 1)}|\bar{F}(\omega,z,x;y) - \bar{F}(\omega,0,x;y)|^2 \, \nu(dx) \leq C\,
|z|^2.
\end{split}
\end{equation}
Using \ref{sigma-b}, we have $|\bar b(\omega,0;y)|=|\langle b(\omega),y\rangle|\leq \beta \sqrt{d}\|y\|_{-p}$ and $|\bar\sigma(\omega,0;y)|=|\langle \sigma(\omega),y\rangle|\leq \beta d \|y\|_{-p}$. From \ref{F1}, we have $|\bar F(\omega,0,x;y)|=|F(\omega,y,x)|\leq C_x\|y\|_{-p}+|F(\omega,0,x)|$.
Therefore, using \eqref{picard2}, \ref{F2} and \ref{F3}, there exists a constant $D=D(Range(\xi))>0$ such that
\begin{equation}\label{picard3}
|\bar{b}(\omega,z;y)|^2+ |\bar{\sigma}(\omega,z;y) |^2
+\int_{(0 < |x| < 1)}|\bar{F}(\omega,z,x;y) |^2 \, \nu(dx) \leq D\,
(1+|z|^2).
\end{equation}
As in \eqref{diff1}, using \eqref{fd-sde-sln-rndm-inl-pkrd}, Doob's $\mathcal{L}^2$ maximal
inequality and It\^o isometry and \eqref{picard3} we get
\begin{equation}\label{picard4}
\mathbb{E}\left(\sup_{0\leq s\leq t}|U_s^{(1)}-U_s^{(0)}|^2\right) \leq(3t^2+24t) D \ \mathbb{E} (1+|\kappa|^2).
\end{equation}
Therefore by induction from \eqref{picard1}, there exists a positive constant $\tilde C$ s.t.
\begin{align}\label{picard5}
\mathbb{E}\left(\sup_{0\leq s\leq t}|U_s^{(k+1)}-U_s^{(k)}|^2\right)\leq\frac{(\tilde Ct)^{k+1}}{(k+1)!},\ \forall k\geq0,\ t\in[0,M].
\end{align}
For positive integers $m,\ n$ with $m>n$, we have
\begin{align}\label{converge-123}
\begin{split}
\lim_{m,n\rightarrow\infty}\mathbb{E}\sup_{0\leq t\leq M}|U_t^{(m)}-U_t^{(n)}|^2 &=\lim_{m,n\rightarrow\infty}\mathbb{E}\sup_{0\leq t\leq M}\left|\sum_{k=n}^{m-1}\big(U_t^{(k+1)}-U_t^{(k)}\big)\right|^2\\
&\leq\lim_{n\rightarrow\infty}\left(\sum_{k=n}^{\infty}k^2\,\mathbb{E}\sup_{0\leq t\leq M}\big|U_t^{(k+1)}-U_t^{(k)}\big|^2\right)\left(\sum_{j=n}^{\infty}j^{-2}\right).
\end{split}
\end{align}
The second series on the right hand side above converges. By \eqref{picard5}, the first series is bounded, since $\sum_{k=n}^{\infty}\frac{(\tilde CM)^{k+1}}{(k+1)!}k^2\rightarrow0$ as $n\rightarrow\infty$. Therefore $\{U_t^{(m)}:m\in\mathbb{N}\}$ is Cauchy and hence converges to some $\{X_t\}_{t\in[0,M]}$ in $\mathcal L^2(\lambda\times P)$, where $\lambda$ denotes the Lebesgue measure on $[0,M]$.\\
Applying the Chebyshev-Markov inequality in \eqref{picard5}, we get
\begin{align*}
P\left(\sup_{0\leq s\leq t}|U_s^{(k+1)}-U_s^{(k)}|\geq\frac{1}{2^{k+1}}\right)\leq\frac{(4\tilde Ct)^{k+1}}{(k+1)!}.
\end{align*}
By the Borel--Cantelli lemma,
\begin{align*}
P\left(\sup_{0\leq s\leq t}|U_s^{(k+1)}-U_s^{(k)}|\geq\frac{1}{2^{k+1}}\ \text{for infinitely many}\ k\right)=0.
\end{align*}
Therefore, we conclude that $\{U^{(k)}\}$ is almost surely uniformly convergent on $[0,M]$ to $\{X_t\}$, which is adapted and rcll. Using \eqref{picard3} and the fact that a.s. $\{X_t\}$ has at most countably many jumps, we have
\[\mathbb{E}\int_0^M\int_{(0 < |x| < 1)}|\bar F(X_{s-},x;\xi)|^2\nu(dx)ds \leq\mathbb{E}\int_0^M D(1+|X_{s-}|^2)ds \leq D \left[ M + \|X\|^2_{\mathcal L^2(\lambda\times P)}\right] < \infty.\]
Therefore $\{\int_0^t\int_{(0 < |x| < 1)} \bar F(X_{s-},x;\xi)\, \widetilde
N(dsdx)\}_{t\in[0,M]}$ exists. Similarly, we can show the existence of $\{\int_0^t\bar\sigma(X_{s-};\xi)\cdot dB_s\}_{t\in[0,M]}$ and $\{\int_0^t\bar b(X_{s-};\xi)ds\}_{t\in[0,M]}$.
By It\^o isometry and \eqref{Lipschitz-condition-rnm-inl}, we have the following convergence in $\mathcal L^2(P)$, viz. \[\int_0^t\int_{(0 < |x| < 1)} \bar F(U_{s-}^{(k)},x;\xi)\, \widetilde
N(dsdx) \xrightarrow{k\to \infty} \int_0^t\int_{(0 < |x| < 1)} \bar F(X_{s-},x;\xi)\, \widetilde
N(dsdx),\]
for each $t\in[0,M]$. Similarly, we conclude that $\int_0^t\bar\sigma(U_{s-}^{(k)};\xi)\cdot dB_s\rightarrow\int_0^t\bar\sigma (X_{s-};\xi)\cdot dB_s$ and $\int_0^t\bar b(U_{s-}^{(k)};\xi) ds\rightarrow\int_0^t\bar b(X_{s-};\xi)ds$ in $\mathcal L^2(P)$ as $k\rightarrow\infty$, for each $t\in[0,M]$. Since $\{X_t\}$ is rcll, from \eqref{fd-sde-sln-rndm-inl-pkrd}, we have a.s. $\forall t\in[0,M]$,
\[
X_t=\kappa+\int_0^t\bar b(X_{s-};\xi)ds+ \int_0^t\bar\sigma(X_{s-};\xi)\cdot dB_s
+\int_0^t\int_{(0 < |x| < 1)} \bar F(X_{s-},x;\xi)\, \widetilde
N(dsdx).
\]
Suppose $\{X_t^{(M)}\}$ and $\{X_t^{(M+1)}\}$ denote the solutions up to time $M$ and $M+1$ respectively. Then, by the uniqueness, $\{X_t^{(M+1)}\}_{t\in[0,M]}$ is indistinguishable from $\{X_t^{(M)}\}$ on $[0,M]$. Using this consistency, we obtain the solution of \eqref{reduced-fd-sde} on the time interval $[0,\infty)$. This concludes the proof for Step 1.
\underline{Step 2:} We follow the technique given in \cite[Theorem 3.3]{MR2560625}, where SDEs driven by Brownian motion were considered. For $k \in \mathbb{N}$, define $\chi_k:=\mathbbm1_{\{\|\xi\|_{-p}\leq k\}}$ and let $\xi^{(k)}:=\chi_k\ \xi$. Let $U^{(k)}$ be the solution of \eqref{reduced-fd-sde} with the initial condition $\xi^{(k)}$. Our aim is to show that $\chi_kU^{(k)}=\chi_kU^{(k+1)}$. Let $U^{(k)}_n$ and $U^{(k+1)}_n$ be the approximations of $U^{(k)}$ and $U^{(k+1)}$ obtained in Step 1 above. Now,
\[U^{(k)}_0(t)=\kappa,\ U^{(k+1)}_0(t)=\kappa\ \text{and}\ \chi_kU^{(k)}_0(t)=\chi_kU^{(k+1)}_0(t).\]
Observe that, for $\omega\in\Omega$
\[\chi_k(\omega)\bar b(\omega,U^{(k)}_0(s-)(\omega);\xi^{(k)}(\omega))=\chi_k(\omega)\bar b(\omega,U^{(k+1)}_0(s-)(\omega);\xi^{(k+1)}(\omega)).\]
Similar equalities hold for coefficients $\bar \sigma$ and $\bar F$.
Using \eqref{fd-sde-sln-rndm-inl-pkrd} and these equalities, a.s.\ for all $t\geq 0$, $\chi_kU_{1}^{(k)}(t)=\chi_kU_{1}^{(k+1)}(t)$. By induction, a.s.\ for all $t\geq 0$, $\chi_kU_{n}^{(k)}(t)=\chi_kU_{n}^{(k+1)}(t)$.
Letting $n$ go to infinity and using the generalized Lebesgue DCT (see \cite[Theorem 3.4]{MR2560625}), we have, a.s. $\forall t\in[0,T]$, $\chi_kU^{(k)}(t)=\chi_kU^{(k+1)}(t)$. Note that $P\big(\bigcup_k\{\chi_k=1\}\big)=1$. Now define
\[X_t(\omega):=U^{(k)}(t)(\omega),\ \ \text{if $\|\xi(\omega)\|_{-p}\leq k$.}\]
Observe that, a.s. $\forall t\in[0,T], \chi_k U^{(k)}(t) = \chi_k X_t$. It is easy to check that $\{X_t\}$ satisfies \eqref{reduced-fd-sde}.
To prove the uniqueness, let $\{X_t\}$ and $\{Y_t\}$ be two solutions of \eqref{reduced-fd-sde}. Define
\[\widetilde F(\omega,z,x;y):=\mathbbm1_{\{\tilde y:\|\tilde y\|_{-p}\leq k\}}(y)\bar F(\omega,z,x;\mathbbm1_{\{\tilde y:\|\tilde y\|_{-p}\leq k\}}(y)y),\]
and $X^k_t:=\chi_k X_t$, for $\omega\in\Omega, k \in \mathbb{N}$. Similarly define $\{Y^k_t\}$ for $k \in \mathbb{N}$. Observe that
\begin{align*}
\widetilde F(\omega,z,x;\xi(\omega))&=\mathbbm1_{\{\tilde y:\|\tilde y\|_{-p}\leq k\}}(\xi(\omega))\bar F\big(\omega,z,x;\mathbbm1_{\{\tilde y:\|\tilde y\|_{-p}\leq k\}}(\xi(\omega))\xi(\omega)\big)\\
&=\mathbbm1_{\{\tilde\omega:\|\xi(\tilde\omega)\|_{-p}\leq k\}}(\omega)\bar F\big(\omega,z,x;\mathbbm1_{\{\tilde\omega:\|\xi(\tilde\omega)\|_{-p}\leq k\}}(\omega)\xi(\omega)\big),
\end{align*}
and
\begin{align*}
&\chi_k(\omega)\bar b(\omega,X_{s-}(\omega);\xi(\omega)) =\bar b(\omega,X^k_{s-}(\omega);\xi^k(\omega)),\\
&\chi_k(\omega)\bar\sigma(\omega,X_{s-}(\omega);\xi(\omega))=\bar\sigma(\omega,X^k_{s-}(\omega);\xi^k(\omega)),\\
&\chi_k(\omega)\bar F(\omega,X_{s-}(\omega),x;\xi(\omega)) =\chi_k(\omega)\bar F(\omega,X^k_{s-}(\omega),x;\chi_k(\omega)\xi^k(\omega))=\widetilde F(\omega,X^k_{s-}(\omega),x;\xi^k(\omega)).
\end{align*}
Therefore,
\begin{equation}\label{x^k-unq}
\begin{split}
X_t^k &= \chi_k X_t\\
&= \chi_k\ \kappa + \int_0^t \bar b(X^k_{s-};\xi^k)ds+ \int_0^t\bar\sigma(X^k_{s-};\xi^k)\cdot dB_s
+\int_0^t\int_{(0 < |x| < 1)}\widetilde F(X^k_{s-},x;\xi^k)\, \widetilde
N(dsdx).
\end{split}
\end{equation}
Now, in \eqref{x^k-unq} $\xi^k$ is norm bounded. Moreover, it is easy to check that $\bar b$, $\bar\sigma$ and $\widetilde F$ satisfy \eqref{Lipschitz-condition-rnm-inl}. By the uniqueness in Step 1, we conclude that $\{X^k_t\}$ is the unique solution of \eqref{reduced-fd-sde} with initial condition $\chi_k\ \kappa$ and in particular,
\[\chi_k(\omega)X_t=X^k_t=Y^k_t=\chi_k(\omega)Y_t.\]
Since $k$ is arbitrary, a.s. $\forall t\in[0,T]$, $X_t=Y_t$. This completes the proof for Step 2.
\underline{Step 3:} We follow the argument given in \cite[Theorem 6.2.3]{MR2512800}. Define $\Omega_M:=\{\omega\in\Omega:\ |\kappa|\leq M\}$ for each $M\in\mathbb{N}$. Then $\Omega=\bigcup_{M\in\mathbb{N}}\Omega_M$ and $\Omega_L\subseteq\Omega_M$ whenever $L\leq M$.
Let $\kappa^M(\omega):=\mathbbm1_{\{|\kappa|\leq M\}}(\omega)\kappa(\omega)$. Note that $\kappa^M\in\mathcal L^2$. By Step 2, there exists a unique solution, say $\{X_t^{\kappa^M}\}$, of the reduced equation \eqref{reduced-fd-sde} for the initial condition $\kappa^M$, i.e. a.s. $t \geq 0$
\[X_t^{\kappa^M} = \kappa^M + \int_0^t\bar b(X^{\kappa^M}_{s-};\xi)ds + \int_0^t\bar\sigma(X^{\kappa^M}_{s-};\xi)\cdot dB_s + \int_0^t\int_{(0 < |x| < 1)} \bar F(X^{\kappa^M}_{s-},x;\xi)\, \widetilde
N(dsdx).\]
We first show a.s. $\mathbbm1_{\{|\kappa|\leq L\}}(\omega) X_t^{\kappa^L}(\omega) = \mathbbm1_{\{|\kappa|\leq L\}}(\omega) X_t^{\kappa^M}(\omega), t \geq 0$ for all $M\geq L$. Define
\[\widetilde{F}(\omega,z,x;y):=\mathbbm1_{\{|\kappa|\leq L\}}(\omega)\bar F(\omega,z,x;y).\]
Now, $\{\mathbbm1_{\{|\kappa|\leq L\}} X_t^{\kappa^L}\}$ and $\{\mathbbm1_{\{|\kappa|\leq L\}} X_t^{\kappa^M}\}$ both satisfy the reduced equation
\begin{equation}
\begin{split}
dX_{t} &=\bar b(X_{t-};\mathbbm1_{\{|\kappa|\leq L\}}\xi)dt+ \bar\sigma(X_{t-};\mathbbm1_{\{|\kappa|\leq L\}}\xi)\cdot dB_t +\int_{(0 < |x| < 1)} \widetilde F(X_{t-},x;\mathbbm1_{\{|\kappa|\leq L\}}\xi)\, \widetilde
N(dtdx),\\
X_0&=\kappa^L.
\end{split}
\end{equation}
It is easy to check that $\bar b$, $\bar\sigma$, $\widetilde F$ satisfy \eqref{Lipschitz-condition-rnm-inl}. Then by the uniqueness in Step 2 for all $M\geq L$ a.s. \[\mathbbm1_{\{|\kappa|\leq L\}} X_t^{\kappa^L} = \mathbbm1_{\{|\kappa|\leq L\}} X_t^{\kappa^M},\ t \geq 0.\]
Since $\Omega_M$ increases to $\Omega$, for all $\epsilon>0$, there exists $M\in\mathbb{N}$, such that $P(\Omega_n)>1-\epsilon, \forall n > M$. Hence,
\[P\left(\sup_{t\geq0}|X^{\kappa^m}_t-X^{\kappa^n}_t|>\delta\right)<\epsilon, \ \forall \delta > 0, \forall m,n>M.\]
Therefore the sequence of processes $\{X^{\kappa^n}\}_{n\in\mathbb{N}}$ is uniformly Cauchy in probability and so is uniformly convergent in probability to a process, say $\{X_t\}$. We extract a subsequence for which the convergence holds uniformly and almost surely. This convergence implies that $\{X_t\}$ has rcll paths and solves \eqref{reduced-fd-sde}.
To prove the uniqueness, we consider the solution $\{X_t\}$ constructed above and compare it with any arbitrary solution $\{X'_t\}_{t\geq0}$ of \eqref{reduced-fd-sde}. We claim that for all $M\geq L$, $X'_t(\omega)=X^{\kappa^M}_t(\omega)$ for all $t\geq0$ and almost all $\omega\in\Omega_L$. Suppose that this fails for some $M\geq L$. Define
\[X''^{\kappa^M}_t(\omega):=
\begin{cases}
X'_t(\omega) \ \ \text{for}\ \omega\in\Omega_L,\\
X^{\kappa^M}_t(\omega),\ \text{for}\ \omega\in\Omega_L^c.
\end{cases}\]
Then $X''^{\kappa^M}$ and $X^{\kappa^M}$ are two distinct solutions of \eqref{reduced-fd-sde} with the same initial condition $\kappa^M$, which is a contradiction. This proves our claim. Next by applying a limiting argument we conclude that $P(X_t=X'_t, \forall t\geq0)=1$. This completes the proof of Step 3 as well as the theorem.
\end{proof}
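Although the proof is purely analytic, the Picard scheme \eqref{fd-sde-sln-rndm-inl-pkrd} also suggests a numerical approximation. Below is a minimal time-discretized sketch on an Euler grid, with scalar toy coefficients and without the jump terms; all choices are illustrative assumptions, not the coefficients of Section \ref{S:3-1}. A compensated small-jump term would add, on each grid cell, the simulated jump sum minus its compensator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)   # one fixed Brownian path

b = lambda u: -u                       # toy drift
sigma = lambda u: 0.4 + 0.0 * u        # toy diffusion

def picard(n_iter=6, u0=1.0):
    U = np.full(n + 1, u0)             # U^(0) = kappa
    for _ in range(n_iter):            # U^(k) -> U^(k+1)
        incr = b(U[:-1]) * dt + sigma(U[:-1]) * dB
        U = np.concatenate(([u0], u0 + np.cumsum(incr)))
    return U                           # iterates converge, cf. (picard5)

path = picard()
\end{verbatim}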
We now consider the SDE \eqref{fd-sde-sln}. The next result follows by the interlacing technique (see \cite[Example 1.3.13, pp. 50-51]{MR2512800}).
\begin{theorem}\label{interlacing-global-sde}
Suppose all the assumptions of Theorem \ref{nrm-bd-rndm-inl} hold. In addition, assume that \ref{G1} holds. Then there exists a unique rcll adapted solution to \eqref{fd-sde-sln}.
\end{theorem}
\begin{proof}
We follow the proof of \cite[Theorem 6.2.9]{MR2512800}. We have already proved the existence and uniqueness of the reduced equation in Theorem \ref{nrm-bd-rndm-inl}. Now, we use the interlacing technique to complete the proof.
Let $\{\eta_n\}_{n\in\mathbb{N}}$ denote the arrival times for the jumps of the compound Poisson process $\{P_t\}_{t\geq0}$, where each $P_t=\int_{(|x|\geq1)}xN(t,dx)$. By Theorem \ref{nrm-bd-rndm-inl} there exists a unique solution $\{\widetilde U^{(1)}_t\}$ to the reduced equation \eqref{reduced-fd-sde}. Define
\[U_t := \begin{cases}
\widetilde U_t^{(1)};\ \ \ \text{for $0\leq t<\eta_1$}\\
\widetilde U_{\eta_1-}^{(1)} + \bar G(\widetilde U_{\eta_1-}^{(1)},\triangle P_{\eta_1};\xi);\ \ \ \text{for $t=\eta_1$}\\
U_{\eta_1} + \widetilde U^{(2)}_{t}-\widetilde U^{(2)}_{\eta_1};\ \ \ \text{for $\eta_1< t<\eta_2$}\\
U_{\eta_2-}+\bar G( U_{\eta_2-},\triangle P_{\eta_2};\xi);\ \ \ \text{for $t=\eta_2$}\\
\cdots
\end{cases}
\]
Here $\{\widetilde U^{(2)}_t\}$ denotes the unique solution to \eqref{reduced-fd-sde} with initial condition $U_{\eta_1}$. Then $\{U_t\}$ is an adapted rcll process and solves \eqref{fd-sde-sln}.
We now show that uniqueness follows from the interlacing structure. Let $\{\hat{U}_t\}$ be another solution of \eqref{fd-sde-sln}. Then, by the uniqueness of the reduced equation, a.s.
\[
\hat U_t=\widetilde U_t=U_t;\ \ \text{for $0\leq t<\eta_1$}.\]
Since, a.s. $\hat U_{\eta_1-}=\widetilde U_{\eta_1-} = U_{\eta_1-}$, we have a.s.
\[\hat U_{\eta_1} = \hat U_{\eta_1-} + \bar G(\hat U_{\eta_1-},\triangle P_{\eta_1};\xi) = \widetilde U_{\eta_1-} + \bar G(\widetilde U_{\eta_1-},\triangle P_{\eta_1};\xi) = U_{\eta_1}.
\]
Since $\{\hat U_t\}$ has no large jump in the time interval $(\eta_1, \eta_2)$ we have, a.s. for $t \in (\eta_1, \eta_2)$
\begin{equation}\label{interlace-sde-eta-12}
\begin{split}
\hat U_t &= \hat U_{\eta_1}+\int_{\eta_1}^t\bar b(\hat U_{s-};\xi)ds+\int_{\eta_1}^t \bar\sigma(\hat U_{s-};\xi)\cdot dB_s +\int_{\eta_1}^t\int_{(0 < |x| < 1)} \bar F(\hat U_{s-},x;\xi)\, \widetilde
N(dsdx)\\
&=\hat U_{\eta_1}+\int_0^{t-\eta_1}\bar b(\hat U_{\eta_1+s-};\xi)ds+\int_0^{t-\eta_1} \bar\sigma(\hat U_{\eta_1+s-};\xi)\cdot dB_{\eta_1+s}\\ &\ \ +\int_0^{t-\eta_1}\int_{(0 < |x| < 1)} \bar F(\hat U_{\eta_1+s-},x;\xi)\, \widetilde
N_s^{\eta_1}(dsdx).
\end{split}
\end{equation}
We now describe $\{N_s^{\eta_1}\}$, which appeared in the last term of \eqref{interlace-sde-eta-12}. For any set $H \subset \mathbb{R}^d$, which is bounded away from $0$, i.e. $0 \notin \bar H$ and for any stopping time $\eta$, define \[N^{\eta}_{t}(H):=\left(N_{t+\eta}(H)-N_{\eta}(H)\right) \mathbbm{1}_{(\eta<\infty)}.\]
By strong Markov property \cite[Theorem 2.2.11]{MR2512800}, we have
$\mathbb{E}[e^{i\lambda N^{\eta}_{t}(H)}]=\mathbb{E}[e^{i\lambda N_{t}(H)}]$, $\{N^{\eta}_{t}\}$ is independent of $\mathcal{F}_{\eta}$, has rcll paths and is $(\mathcal{F}_{\eta+t})$ adapted. Furthermore,
$\mathbb{E}[N^{\eta}_{t}(H)]=t\nu(H)=\mathbb{E}[N_t(H)]$.
Note that the last equality of \eqref{interlace-sde-eta-12} is written in the reduced equation form. Since $\{U_t\}$ also solves the same reduced equation, by Theorem \ref{nrm-bd-rndm-inl} a.s. $\hat U_t=U_t$ for $\eta_1<t<\eta_2$. In particular, a.s. $\hat U_{\eta_2-} = U_{\eta_2-}$ and hence, a.s.
\[\hat U_{\eta_2} = \hat U_{\eta_2-} + \bar G(\hat U_{\eta_2-},\triangle P_{\eta_2};\xi) = U_{\eta_2-}+\bar G( U_{\eta_2-},\triangle P_{\eta_2};\xi) = U_{\eta_2}.\]
Continuing this way, we show that a.s. $U_t = \hat U_t, t \geq 0$. This completes the proof.
\end{proof}
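The interlacing construction translates directly into a simulation recipe: solve the reduced equation between consecutive large-jump times $\eta_i$ and apply $\bar G$ at each $\eta_i$. In the schematic sketch below, \texttt{solve\_reduced} and \texttt{G\_bar} are hypothetical placeholders for a reduced-equation solver and the large-jump coefficient.
\begin{verbatim}
# eta: arrival times of the large jumps of P; dP: the jump sizes.
def interlaced_path(u0, eta, dP, T):
    t, u, pieces = 0.0, u0, []
    for ti, xi in zip(eta, dP):
        seg = solve_reduced(u, t, ti)      # no large jump on [t, ti)
        u = seg[-1] + G_bar(seg[-1], xi)   # apply G at the jump time
        pieces.append(seg)
        t = ti
    pieces.append(solve_reduced(u, t, T))  # after the last jump
    return pieces
\end{verbatim}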
\subsection{Local Lipschitz coefficients}\label{S:3-3}
In the previous subsection, we established the existence and uniqueness results under `global Lipschitz' coefficients, which we now extend to `local Lipschitz' coefficients.
Let $\widehat{\mathbb{R}^d}:=\mathbb{R}^d\cup\{\infty\}$ be the one point compactification of $\mathbb{R}^d$.
\begin{theorem}
\label{nrm-sqre-rndm-inl-fnl}
Let \ref{sigma-b}, \ref{F1}, \ref{F2}, \ref{F3} and \ref{G1} hold. Suppose the following conditions are satisfied.
\begin{enumerate}[label=(\roman*)]
\item $\kappa, \xi$ are $\mathcal{F}_0$-measurable.
\item (Locally Lipschitz in $z$, locally in $y$) For every bounded set $\mathcal{K}$ in $\mathcal{S}_{-p}$ and positive integer $n$ there exists a constant $C(\mathcal{K},n)>0$ s.t. for all $z_1, z_2\in \mathcal{O}(0,n),\ y\in \mathcal{K}$ and $\omega\in\Omega$
\begin{equation}\label{Lipschitz-condition-rnm-inl-nrmsqr-fnl}
\begin{split}
&|\bar{b}(\omega,z_1;y) - \bar{b}(\omega,z_2;y)|^2+ |\bar{\sigma}(\omega,z_1;y)-
\bar{\sigma}(\omega,z_2;y)|^2\\
&+\int_{(0 < |x| < 1)}|\bar{F}(\omega,z_1,x;y) - \bar{F}(\omega,z_2,x;y)|^2 \, \nu(dx) \leq C(\mathcal{K},n)\,
|z_1
- z_2|^2.
\end{split}
\end{equation}
\end{enumerate}
Then there exists an $(\mathcal{F}_t)$ stopping time $\eta$ and an $(\mathcal{F}_t)$ adapted $\widehat{\mathbb{R}^d}$ valued process $\{X_t\}$ with rcll paths such that $\{X_t\}$ solves \eqref{fd-sde-sln} up to time $\eta$ and $X_t=\infty$ for $t\geq\eta$. Further, $\eta$ can be identified as follows: $\eta=\lim_m\theta_m$, where $\{\theta_m\}$ are the $(\mathcal{F}_t)$ stopping times defined by $\theta_m:=\inf\{t\geq0:|X_t|\geq m\}$.\\
The solution is also pathwise unique in the following sense: if $(\{X'_t\},\eta')$ is another such solution, then $P(X_t=X'_t, 0\leq t<\eta\wedge\eta')=1$.
\end{theorem}
\begin{proof}
To prove the existence result, we first obtain a version of the `global Lipschitz' condition \eqref{Lipschitz-condition-rnm-inl} for $\bar b(\omega,z;y)$, $\bar\sigma(\omega,z;y)$, $\bar F(\omega,z,x;y)$ from the `local Lipschitz' assumption \eqref{Lipschitz-condition-rnm-inl-nrmsqr-fnl}.\\
Let $n,m \in \mathbb{N}$ and let $R$ be a positive real number. Let $h:\mathbb{R}^n \to \mathbb{R}^m$ satisfy $|h(x)-h(y)|\leq C|x-y|$ for all $x,y$ with $|x|, |y|\leq R$, where $C$ is a positive constant. Define
\[h^R(x) := \begin{cases}
h(x), \ \text{if}\, |x| \leq R\\
\frac{2R-|x|}{R}\cdot h(Rx/|x|),\ \text{if}\ R\leq|x|\leq 2R\\
0, \ \text{if}\ |x| \geq 2R.
\end{cases}
\]
By \cite[Chapter 5, Exercise 3.1]{MR1398879}, $h^R$ is Lipschitz continuous on $\mathbb{R}^n$. For every fixed $y$ and $\omega$, we construct $\bar\sigma^R(\omega,\cdot;y)$ for $\bar\sigma(\omega,\cdot;y)$ in the same way viz.,
\[\bar\sigma^R(\omega,z;y):= \begin{cases}
\bar\sigma(\omega,z;y),\ \text{for}\ |z|\leq R;\\
\frac{2R-|z|}{R}\cdot\bar\sigma\left(\omega,\frac{Rz}{|z|};y\right),\ \text{for}\ R\leq|z|\leq2R,\\
0,\ \text{for}\ |z|\geq2R.
\end{cases}
\]
Similarly define $\bar b^R(\omega,\cdot;y)$ and $\bar F^R(\omega,\cdot,x;y)$ for every fixed $x,y$ and $\omega$. Then using \eqref{Lipschitz-condition-rnm-inl-nrmsqr-fnl} and applying the above exercise, we conclude that $\bar b^R(\omega,z;y)$ and $\bar\sigma^R(\omega,z;y)$ are globally Lipschitz in $z$ as in \eqref{Lipschitz-condition-rnm-inl}. We now show \eqref{Lipschitz-condition-rnm-inl} holds for $\bar F^R(\omega,z,x;y)$.\\
By \eqref{Lipschitz-condition-rnm-inl-nrmsqr-fnl} and Lemma \ref{f-bd}, for any $z \in \mathbb{R}^d$ with $|z| \leq R$ and any bounded set $\mathcal{K}$ in $\mathcal{S}_{-p}$, we have
\begin{equation}\label{observe}
\begin{split}
&\int_{(0<|x|<1)}\left|\bar F\left(\omega,z,x;y\right)\right|^2\nu(dx)\\
&\leq 2 \int_{(0<|x|<1)}\left|\bar F\left(\omega,z,x;y\right)-\bar F\left(\omega,0,x;y\right)\right|^2\nu(dx) + 2\int_{(0<|x|<1)}\left|\bar F\left(\omega,0,x;y\right)\right|^2\nu(dx)\\
&\leq 2 C(\mathcal{K},R)R^2 +2 \alpha(\mathcal{K}), \ \forall y \in \mathcal{K}.
\end{split}
\end{equation}
Fix $z_1, z_2 \in \mathbb{R}^d$, with
$|z_1|\leq R$ and $R\leq|z_2|\leq 2R$. Then
\begin{align*}
&\int_{(0<|x|<1)}|\bar F^R(\omega,z_1,x;y)-\bar F^R(\omega,z_2,x;y)|^2\nu(dx)\\
&=\int_{(0<|x|<1)}\left|\bar F(\omega,z_1,x;y)-\frac{2R-|z_2|}{R}\cdot\bar F\left(\omega,\frac{Rz_2}{|z_2|},x;y\right)\right|^2\nu(dx)\\
&\leq2\int_{(0<|x|<1)}\left|\bar F(\omega,z_1,x;y)-\bar F\left(\omega,\frac{Rz_2}{|z_2|},x;y\right)\right|^2\nu(dx)\\
&\ \ \ \ \ \ +2\frac{||z_2|-R|^2}{R^2}\int_{(0<|x|<1)}\left|\bar F\left(\omega,\frac{Rz_2}{|z_2|},x;y\right)\right|^2\nu(dx)\\
&\leq2C(\mathcal{K},R)\left|z_1-\frac{Rz_2}{|z_2|}\right|^2+2\frac{||z_2|-R|^2}{R^2} \left[2 C(\mathcal{K},R)R^2 +2 \alpha(\mathcal{K}) \right]\\
&=|z_1-z_2|^2\left[6C(\mathcal{K},R)+\frac{4}{R^2}\alpha(\mathcal{K})\right].
\end{align*}
In the above calculation, we have used \eqref{observe} and two inequalities, viz. $\left|z_1-\frac{Rz_2}{|z_2|}\right|^2\leq|z_1-z_2|^2$ and $||z_2|-R|^2\leq|z_1-z_2|^2$. These inequalities are easy to verify. For example, the first one follows from the equivalent statement $|z_1|^2+R^2-2\frac{R}{|z_2|} (z_1)^t z_2 \leq|z_1|^2+|z_2|^2 - 2(z_1)^t z_2$.
Similar arguments show that \eqref{Lipschitz-condition-rnm-inl} holds for $\bar F^R$ for all $z_1, z_2 \in \mathbb{R}^d$. This shows that the `global Lipschitz' regularity \eqref{Lipschitz-condition-rnm-inl} holds for $\bar b^R$, $\bar\sigma^R$ and $\bar F^R$. Since $\bar b^R(\omega,0;y) = \bar b(\omega,0;y), \bar \sigma^R(\omega,0;y) = \bar \sigma(\omega,0;y)$ and $\bar F^R(\omega,0,x;y) = \bar F(\omega,0,x;y), \forall |x| < 1, y \in \mathcal{S}_{-p}$, the growth condition \eqref{picard3} can be established for $\bar b^R$, $\bar\sigma^R$ and $\bar F^R$ as done in Step 1 of Theorem \ref{nrm-bd-rndm-inl}. Then arguing as in Theorem \ref{nrm-bd-rndm-inl} (Steps 1, 2 and 3) and Theorem \ref{interlacing-global-sde}, for $R \in \mathbb{N}$, we have the existence of a unique process $\{X^R_t\}$ satisfying a.s. for every $t\geq0$
\begin{equation}
\begin{split}
X^R_t &=\kappa+\int_0^t\bar b^R(X_{s-}^R;\xi)ds+ \int_0^t\bar\sigma^R(X_{s-}^R;\xi)\cdot dB_s
+\int_0^t\int_{(0 < |x| < 1)} \bar F^R(X_{s-}^R,x;\xi)\, \widetilde
N(dsdx)\\
& +\int_0^t \int_{(|x| \geq 1)} \bar G(X_{s-}^R,x;\xi) \,
N(dsdx).
\end{split}
\end{equation}
Let $\pi_i, i=1,2,\cdots$ denote the arrival times for the jumps of the compound Poisson process $\{P_t\}_{t\geq0}$, where each $P_t=\int_{(|x|\geq1)}xN(t,dx)$. Let $m,n\in\mathbb{N}$ and $m<n$. Consider the stopping times
\[\theta_{m,i}^n:= \inf\{t\geq0:\ |X^m_t|\ \text{or}\ |X^n_t|\geq m\}\wedge \pi_i.\]
Take $i = 1$. Then $\{X^m_t\}$ and $\{X^n_t\}$ both satisfy the same reduced equation
\begin{equation}
\begin{split}
dX_{t} &=\bar b^m(X_{t-};\xi)dt+ \bar\sigma^m(X_{t-};\xi)\cdot dB_t + \int_{(0 < |x| < 1)} \bar F^m(X_{t-},x;\xi)\, \widetilde
N(dtdx), \quad t < \theta_{m,1}^n,\\
X_0&=\kappa;
\end{split}
\end{equation}
First assume $\xi$ is norm bounded and consider the stopped processes $\{X^m_{t\wedge \theta_{m,1}^n}\}$ and $\{X^n_{t\wedge \theta_{m,1}^n}\}$. Then arguing as in the uniqueness proof of Step 1 in Theorem \ref{nrm-bd-rndm-inl}, we conclude a.s. $X^m_t = X^n_t, t < \theta_{m,1}^n$. In particular, a.s. $X^m_{t-} = X^n_{t-}$ for $t = \theta_{m,1}^n$. Further, for almost all $\omega$ such that $\pi_1(\omega) = \theta_{m,1}^n(\omega)$, we have
\[X^m_t(\omega) = X^m_{t-}(\omega) + \bar G(X^m_{t-}(\omega),\triangle P_t;\xi) = X^n_t(\omega),\quad t = \pi_1(\omega).\]
We extend this result for $\mathcal{F}_0$ measurable $\xi$ by arguing as in Step 2 in Theorem \ref{nrm-bd-rndm-inl}.
Take $i=2$. Note that the contributions of the term involving $\bar G$ in $X^m_{t\wedge \theta_{m,2}^n}$ and $X^n_{t\wedge \theta_{m,2}^n}$ from the large jump at $t = \pi_1$ are the same. Arguing as in the case $i = 1$, we conclude a.s.\ $X^m_t = X^n_t, t < \theta_{m,2}^n$.
Repeating the arguments, we have a.s. for all $i, m, n$ with $m < n, X^m_t = X^n_t, t < \theta_{m,i}^n$. Since a.s. $\pi_i \uparrow \infty$ as $i \to \infty$, a.s. for all $m, n$ with $m < n$ we have $X^m_t = X^n_t, t < \theta_m^n$, where
\[\theta_m^n := \inf\{t\geq0:\ |X^m_t|\ \text{or}\ |X^n_t|\geq m\}.\]
In particular,
$\theta_m^n=\inf\{t\geq0:\ |X^m_t|\geq m\}=\inf\{t\geq0:\ |X^n_t|\geq m\}$. As such, $\theta_m^n$ is independent of $n(>m)$. Define $\theta_m:=\inf\{t\geq0:\ |X^m_t|\geq m\}$ and set
\[X_t:=
\begin{cases}
X_t^m\ \ \text{for}\ t\leq\theta_m,\\
\infty,\ \text{for}\ t \geq \eta,
\end{cases}\]
so that $(\{X_t\},\eta)$ is a solution of \eqref{fd-sde-sln} for $t < \eta:= \lim_{m\uparrow\infty}\theta_m$.
To prove the uniqueness, we consider the solution $(\{X_t\}, \eta)$ constructed above and compare it with any arbitrary solution $(\{X'_t\},
\eta')$ of \eqref{fd-sde-sln}. In the proof of existence of solutions, we had compared $\{X^m_t\}$ and $\{X^n_t\}$. We follow the same approach and define
\[\theta^R:=\inf\{t\geq0:|X_t|\ \text{or}\ |X'_t|\geq R\}\wedge\eta\wedge\eta', \forall R \in \mathbb{N}.\]
We then conclude a.s. $X_t = X'_t, t < \theta^R, \forall R\in \mathbb{N}$. Letting $R$ go to infinity concludes the proof.
\end{proof}
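The radial truncation $h \mapsto h^R$ used in the proof is elementary to implement; a point-wise sketch for a locally Lipschitz $h:\mathbb{R}^n\to\mathbb{R}^m$ is:
\begin{verbatim}
import numpy as np

def truncate(h, R):
    """Radial cutoff h^R: equals h on |x| <= R, interpolates on
    R <= |x| <= 2R, and vanishes for |x| >= 2R; globally Lipschitz."""
    def hR(x):
        r = np.linalg.norm(x)
        if r <= R:
            return h(x)
        if r >= 2.0 * R:
            return 0.0 * h(2.0 * R * x / r)  # zero of the right shape
        return (2.0 * R - r) / R * h(R * x / r)
    return hR

# Example: h(x) = |x|**2 * x is locally but not globally Lipschitz;
# truncate(h, R) agrees with h on the ball of radius R.
\end{verbatim}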
\begin{remark}\label{explicit-conditions}
The `local Lipschitz' condition \eqref{Lipschitz-condition-rnm-inl-nrmsqr-fnl} follows from regularity assumptions on $\sigma, b$ and $F$, provided other hypotheses are satisfied (see \cite[Proposition 3.7]{Levy-SPDE}). As mentioned in Section \ref{S:1}, the class of SDEs \eqref{fd-sde-sln} considered above is related to a class of stochastic PDEs taking values in $\mathcal{S}^\prime$. The existence and uniqueness problems for these stochastic PDEs are studied in \cite{Levy-SPDE}.
\end{remark}
\textbf{Acknowledgement:} The first author acknowledges the support of the NBHM (National Board for Higher Mathematics, under the Department of Atomic Energy, Government of India) Post-Doctoral Fellowship. The second author acknowledges partial support from the ISF-UGC research grant. The authors would like to thank Prof.\ B.~Rajeev, Indian Statistical Institute Bangalore Centre, India, for valuable suggestions during this work.
\bibliographystyle{plain}
\section*{Introduction}
A celebrated consequence of Waldhausen's theorem on Haken $3$--manifolds \cite{Waldo}
is that the fundamental group of the complement, endowed with a peripheral system, forms a complete invariant of links in the 3-sphere up to ambient isotopy.
The peripheral system is the data, for each component of a link $L$ in $S^3$, of a pair of elements $\{m_i,l_i\}$ of $\pi_1(S^3\setminus L)$---a
meridian and a preferred longitude---that generates the fundamental group of the corresponding boundary component of $S^3\setminus L$.
Although rather intractable in practice, the peripheral system is
nonetheless a fundamental link invariant,
and it is natural to expect that some weaker equivalence relations than ambient isotopy could be classified by an appropriate adaptation of the peripheral system.
During the fifties, this was the strategy of J.~Milnor in his attempt to classify links up to \emph{link-homotopy} \cite{Milnor}, that is, up to homotopy deformations during which distinct connected components remain disjoint at all times.
In order to address
this link-homotopy classification problem,
Milnor introduced the \emph{reduced peripheral system}.
Roughly speaking, the \emph{reduced fundamental group} $R\pi_1(L^\textnormal{c})$ of a link $L$ is the largest quotient of the fundamental group of the complement
where any generator commutes with any of its conjugates;
if $\{\mu_i,\lambda_i\}_i$
is a peripheral system for $L$, with image $\{m_i,l_i\}_i$ under the projection onto $R\pi_1(L^\textnormal{c})$,
then a \emph{reduced peripheral system} of $L$ is $\{m_i,l_i N_i\}_i$, where $N_i$ is the normal subgroup of $R\pi_1(L^\textnormal{c})$ generated by $m_i$.
The reduced peripheral system, however, only yields a complete link-homotopy invariant for links with at most $3$ components.
The $4$--component case was tackled by J.~Levine \cite{Levine} only 40 years later, using a smaller normal subgroup for defining the reduced longitudes.
As a matter of fact, there exists a pair of $4$--component links, exhibited by J.R.~Hughes, with equivalent reduced peripheral systems but which are link-homotopically distinct \cite{Hughes93}.
It seems still unknown whether Levine's peripheral system classifies links up to link-homotopy.
In fact, this classification was achieved by N.~Habegger and X.S.~Lin by a rather different approach, which relies on representing links as the closure of string links \cite{HL}.
In the present paper, we take advantage of the recently developed theory
of welded links to provide, in an elementary way, a four-dimensional topological characterization of the information captured by Milnor's reduced peripheral system:
\begin{theo*}\label{thm:C}
Let $L$ and $L'$ be two oriented links in the $3$-sphere. The following are equivalent:
\begin{itemize}
\item[i.] $L$ and $L'$ have equivalent reduced peripheral systems;
\item[ii.] $L$ and $L'$ are \textrm{sv}-equivalent, as welded links;
\item[iii.] $\Spun^\bullet(L)$ and $\Spun^\bullet(L')$ are ribbon link-homotopic, as ribbon immersed solid tori.
\end{itemize}
\end{theo*}
Here, \emph{welded links} are generalized link diagrams, where we allow for virtual crossings in addition to the usual crossings, regarded up to an extended set of Reidemeister moves.
This is a sensible generalization: classical links inject into welded links, and the fundamental group and (reduced) peripheral system naturally extend to this larger class of objects.
Part ii then gives a diagrammatic characterization of the reduced peripheral systems of links, by regarding them as welded links via their diagrams, up to \emph{$\textnormal{sv}$--equivalence},
which is the equivalence relation generated by the replacement of a classical crossing involving two strands of a same component by a virtual one;
this stresses in particular the fact that the $\textnormal{sv}$--equivalence is a
refinement of link-homotopy for classical links, see Remark \ref{rem:scc}.
This characterization actually follows from a more general result, which classifies all welded links up to $\textnormal{sv}$--equivalence, see Theorem \ref{thm:weldedclassif}.
Also, it follows that the non link-homotopic $4$--component links exhibited by Hughes in \cite{Hughes93} are $\textnormal{sv}$-equivalent;
this is made explicit in Appendix \ref{onvamangerdeschips}.
Part iii gives a topological characterization, in terms of $4$--dimensional topology.
A classical construction dating back to Artin \cite{cEmilletueur} produces a knotted surface in $4$--space from a link in $3$--space by spinning it around some plane.
By spinning the projection rays from the link to the plane as well, this can be extended to a map $\Spun^\bullet$ producing \emph{ribbon-immersed solid tori},
\emph{i.e.} solid tori in $4$--space intersecting only along ribbon singularities. The \emph{ribbon link-homotopy} for such objects is
a notion of link-homotopy within the realm of ribbon-immersed solid tori, which allows for the removal/insertion of such ribbon singularities
inside the same connected component.
The reduced peripheral system for links hence appears in this way as an intrinsically $4$--dimensional invariant, rather than a $3$--dimensional one.\footnote{One might expect this $4$--dimensional incarnation to be
in terms of knotted surfaces; we explain in Section
\ref{sec:surfaces} why this is not the case.}
As above, this characterization is obtained as a consequence of a more general result, characterizing the reduced peripheral system of welded links in terms of $4$--dimensional topology; see Theorem \ref{thm:4-dimclassif}.
It is thus noteworthy that our purely topological characterization $\textrm{i}\Leftrightarrow \textrm{iii}$ for classical links is actually
obtained as an application of virtual/welded knot theory.
\begin{acknowledgments}
The authors thank A.~Yasuhara for useful comments on an earlier version of this paper.
\end{acknowledgments}
\section{The reduced peripheral system of classical and welded links}
\subsection{Welded links} \label{sec:combinatorics}
In this section, we review the theory of welded links and Gauss diagrams.
\begin{defi}
\begin{itemize}
\item[]
\item An $n$-component \emph{welded diagram} is a planar immersion
of $n$ ordered and oriented circles, whose singular set is a
finite number of transverse double points, each double point being
labelled
either as a \emph{positive or negative (classical) crossing}, or as a \emph{virtual
crossing}:
\[
\begin{array}{ccccc}
\dessin{1cm}{CPositive}&\hspace{1cm}&\dessin{1cm}{CNegative}&\hspace{1cm}&\dessin{1cm}{CVirtuel}\\
\textrm{positive crossing}&&\textrm{negative crossing}&&\textrm{virtual crossing}
\end{array}.
\]
\item We denote by $\wGraph_n$ the set of $n$-component welded
diagrams up to the following \emph{welded moves}:
\[
\begin{array}{c}
\begin{array}{ccc}
\dessin{1.5cm}{vR1_1}
\stackrel{\textrm{vR1}}{\longleftrightarrow} \dessin{1.5cm}{vR1_2} \, & \,
\dessin{1.5cm}{vR2_1} \stackrel{\textrm{vR2}}{\longleftrightarrow} \dessin{1.5cm}{vR2_2}
\, & \,
\dessin{1.5cm}{vR3_1} \stackrel{\textrm{vR3}}{\longleftrightarrow} \dessin{1.5cm}{vR3_2}
\end{array}
\\
\textrm{virtual Reidemeister moves}\\[0.4cm]
\begin{array}{cc}
\dessin{1.5cm}{vR3_3}\stackrel{\textrm{Mixed}}{\longleftrightarrow} \dessin{1.5cm}{vR3_4} \, & \,
\dessin{1.5cm}{OC_1}\ \stackrel{\textrm{OC}}{\longleftrightarrow}
\dessin{1.5cm}{OC_2}
\end{array}
\\
\textrm{Mixed and OC\footnotemark moves}
\end{array}
\]
\footnotetext{Here, OC stands for \emph{over-commute}, as
a strand is passing over a virtual crossing; note that
the corresponding \emph{under-commute} move is forbidden.}
and classical Reidemeister moves R1, R2 and R3, which are the three usual moves of classical knot theory.
Elements of $\wGraph_n$ are called \emph{welded links}.
\end{itemize}
\end{defi}
A welded diagram with no virtual crossing is called \emph{classical}.
It is well-known that this set-theoretical inclusion induces an injection of the set $\mathcal{L}_n$ of $n$-component classical link diagrams, up to classical Reidemeister moves,
into $\wGraph_n$;
as pointed out in Remark \ref{rem:inject}, this follows from the fact that the peripheral system is a complete link invariant.
\medskip
An alternative approach to welded links, which is often more tractable in practice, is through the notion of Gauss diagrams.
\begin{defi}
An $n$-component \emph{Gauss diagram} is an abstract collection of $n$ ordered and oriented circles, together with disjoint signed arrows whose endpoints are pairwise disjoint points of these circles.
For each arrow, the two endpoints are called \emph{head} and \emph{tail},
with the obvious convention that the arrow orientation goes from the tail to the head.
\end{defi}
To a welded diagram corresponds a unique Gauss diagram, given by joining the two preimages of each classical crossing by an arrow,
oriented from the overpassing to the underpassing strand and labelled by the crossing sign.
\begin{defi}
Two Gauss diagrams are \emph{welded equivalent} if they are related by
a sequence of the following \emph{welded moves}:
\[
\begin{array}{cccc}
\dessin{1.5cm}{GR1_3} \stackrel{\textrm{R1}}{\longleftrightarrow} \dessin{1.5cm}{GR1_2}
\, \, \, &
\, \, \, \dessin{1.5cm}{GR2_1} \stackrel{\textrm{R2}}{\longleftrightarrow} \dessin{1.5cm}{GR2_2} \, \, \,
& \, \, \,
\dessin{1.5cm}{GR3_1} \stackrel{\textrm{R3}}{\longleftrightarrow} \dessin{1.5cm}{GR3_2} \, \, \,
& \, \, \,
\dessin{1.5cm}{GOC_1} \stackrel{\textrm{OC}}{\longleftrightarrow} \dessin{1.5cm}{GOC_2},
\end{array}
\]
where move R3 requires the additional sign condition that $\varepsilon_2\varepsilon_3=\tau_2\tau_3$, where
$\tau_i=1$ if the $i^\textrm{th}$ strand (from left to right) is oriented upwards, and $-1$ otherwise.
\end{defi}
As the notation suggests, these four moves are just the Gauss diagram analogues, using the above correspondence,
of the three classical Reidemeister moves and the OC move for welded diagrams
(the Gauss diagram versions of the virtual Reidemeister and Mixed
moves being trivial).
As a matter of fact, it is easily checked that
welded equivalence classes of Gauss diagrams are in one-to-one
correspondence with welded links.
\begin{remarque}\label{rem:slide}
We will make use of the following \emph{Slide} move
\[
\begin{array}{c}
\dessin{1.5cm}{slide_1} \stackrel{\textrm{Slide}}{\longleftrightarrow} \dessin{1.5cm}{slide_2}
\end{array}
\hspace{2cm}
\dessin{1.5cm}{Slide_diag_1} \stackrel{\textrm{Slide}}{\longleftrightarrow} \dessin{1.5cm}{Slide_diag_2}
\]
\noindent which is easily seen to follow from the R2, R3 and OC moves.
Note that this is a Gauss diagram analogue of the Slide move on arrow diagrams \cite{arrow}.
\end{remarque}
\medskip
In a welded diagram, a \emph{self-crossing} is a crossing where both preimages belong to the same component.
\begin{defi}
A \emph{self-virtualization} is a local move $\textrm{SV}$, illustrated
below, which replaces a classical self-crossing by a virtual one.
The \emph{\textrm{sv}-equivalence} is the equivalence relation on welded diagrams generated by self-virtualizations.
We denote by $\wGraph_n^{\textnormal{sv}}$ the quotient of $\wGraph_n$ under this relation.
\[
\begin{array}{ccc}
\dessin{1.2cm}{SV_1}\stackrel{\textrm{SV}}{\longleftrightarrow}\dessin{1.2cm}{SV_2}\stackrel{\textrm{SV}}{\longleftrightarrow}\dessin{1.2cm}{SV_3}
& \hspace{1cm}&
\dessin{1cm}{GSV_1}\stackrel{\textrm{SV}}{\longleftrightarrow}\dessin{1cm}{GSV_2}
\end{array}
\]
\end{defi}
At the Gauss diagram level, a self-crossing is represented by a \emph{self-arrow}, that is an arrow whose endpoints lie on the same component, and a self-virtualization move simply erases a self-arrow.
\begin{remarque}\label{rem:scc}
The link-homotopy relation for classical links, as defined by Milnor, is generated by the self-crossing change, \emph{i.e.} the local move that exchanges the relative position of two strands of a same component. As the left-hand side of the
above picture suggests, a self-crossing change can be realized by two self-virtualizations, and the $\textnormal{sv}$--equivalence is thus a refinement of the link-homotopy relation for classical links. In \cite{ABMW}, self-virtualization was proven to actually extend classical link-homotopy for string links, in the sense that two string links are $\textnormal{sv}$--equivalent if and only if they are link-homotopic. Our main theorem, together with Hughes' counter-examples, shows that such an equivalence does not hold true for links.
\end{remarque}
We end this section with a normal form for Gauss diagrams up to self-virtualization.
\begin{defi}
A Gauss diagram is \emph{sorted} if each circle
splits into two arcs, the $t$ and the $h$--arc,
containing respectively tails only and heads only.
\end{defi}
\begin{remarque}\label{rem:SortedLabels}
Up to OC moves, a sorted Gauss diagram $D$ is uniquely determined by
the data of $n$ words
$\Big\{\prod_{j=1}^{k_i}\mu_{s_{ij}}^{\varepsilon_{ij}}\Big\}_{\! i}$
in the alphabet $\{\mu_1^{\pm1},\ldots,\mu_n^{\pm1}\}$, where the letter $\mu_{s_{ij}}^{\varepsilon_{ij}}$
indicates that the $j\,$th head met on $C_i$ when running along its oriented $h$--arc is connected by an $\varepsilon_{ij}$--signed arrow to one of the tails on $C_{s_{ij}}$.
\end{remarque}
\begin{lemme}\label{lemma:Sorted}
Every Gauss diagram is $\textnormal{sv}$--equivalent to a sorted one.
\end{lemme}
\begin{figure}
\[
\textnormal{TaH} : \dessin{1.2cm}{TaH_1}\ \xrightarrow[]{\textrm{R2}}\ \dessin{1.2cm}{TaH_2}\ \xrightarrow[]{\textrm{Slide}}\ \dessin{1.2cm}{TaH_3}.
\]
\caption{Tail across head move on Gauss diagrams}
\label{fig:TaH}
\end{figure}
\begin{proof}
Start with any given Gauss diagram, and choose an arbitrary order
on the circles to sort them one by one as follows.
Using $\textnormal{SV}$ moves, remove first all self-arrows from the considered
circle, and then
gather all heads located on it using $\textnormal{TaH}$ (tail across head) moves, described in Figure \ref{fig:TaH}.
Since there is no self-arrow left on the considered circle, the two extra
arrows appearing in the latter moves will not have endpoints on the considered circle, and as their heads, resp. tails, are close to the already
existing arrow head, resp. tail, this will not unsort already sorted
components.
Note that these two extra arrows could happen to be self-arrows, but this does not conflict with the sorting procedure.
\end{proof}
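\medskip
For concreteness, note that a Gauss diagram is faithfully encoded by a finite list of arrows, each recording its sign together with the component and cyclic position of its tail and head. The following minimal sketch (in Python; the encoding conventions are ours and purely illustrative) implements the first step of the above sorting procedure, namely the erasure of self-arrows by $\textnormal{SV}$ moves:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Arrow:
    sign: int    # +1 or -1
    tail: tuple  # (component index, position along the circle)
    head: tuple  # (component index, position along the circle)

def erase_self_arrows(arrows, component):
    """SV moves: delete every arrow with both endpoints
    on the given component."""
    return [a for a in arrows
            if not (a.tail[0] == component
                    and a.head[0] == component)]

# A positive self-arrow on component 0 is erased; the arrow
# from component 1 to component 0 is kept.
arrows = [Arrow(+1, (0, 0.1), (0, 0.7)),
          Arrow(-1, (1, 0.3), (0, 0.5))]
assert len(erase_self_arrows(arrows, 0)) == 1
\end{verbatim}
The subsequent $\textnormal{TaH}$ moves act on this encoding by reordering positions and appending pairs of new arrows; we do not spell this out here.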
\subsection{Welded link groups and peripheral systems}
\label{sec:wgroup}
Let $L$ be a welded diagram.
\begin{defi}
\begin{itemize}
\item[]
\item The \emph{arcs} of $L$ are the maximal pieces of $L$ which do not underpass any classical crossing.
An arc is hence either a whole component or a piece of strand which starts and ends at some (possibly the same) crossings;
it might pass through some virtual crossings and overpass some
classical ones. At the level of Gauss diagrams, it corresponds to
the portions of circles lying between two consecutive heads.
\item The \emph{group of $L$}, denoted by $G(L)$, is defined by a Wirtinger-type presentation, where each arc yields a generator,
and each classical crossing yields a relation, as follows:
\[
\dessin{1.8cm}{Wirtinger1}\ \leadsto\ \alpha^{-1}\beta\alpha\gamma^{-1}
\hspace{1cm}
\dessin{1.8cm}{Wirtinger2}\ \leadsto\ \alpha\beta\alpha^{-1}\gamma^{-1}\\[-0.1cm]
\]
\end{itemize}
\end{defi}
Since virtual crossings produce no extra generators or relations, it is clear that virtual Reidemeister moves and Mixed moves preserve the group presentation.
It is also easily checked that the isomorphism class of this group is invariant under classical Reidemeister and OC moves,
and is thus an invariant of welded links \cite{Kauffman,Satoh}.
If $L$ is a diagram of a classical link ${\mathcal L}$, then $G(L)$ is merely the fundamental group of the complement of an open tubular neighborhood of ${\mathcal L}$ in $S^3$;
in this case, an arc corresponds to the topological meridian which positively encircles it.
By analogy, arcs of welded diagrams can be seen as some combinatorial meridians, and in what follows,
we will often blur the distinction between arcs/meridians of $L$ and the corresponding generators of $G(L)$.
We will also regularly, and sometimes implicitly, make use of the simple fact that two meridians of a same component are always conjugate.
\medskip
\begin{defi}
\begin{itemize}
\item[]
\item A \emph{basing} of $L$ is a choice
of one meridian for each component of $L$.
\item For each $i$, the \emph{$i\,$th preferred longitude} of
$L$ with respect to the basing $\{\mu_1,\ldots,\mu_n\}$ is the
element $\lambda_i\in G(L)$ obtained as
follows: when running along the $i\,$th component of $L$, starting
at the arc labelled by $\mu_i$ and following the orientation,
write $\omega^\varepsilon$ when passing under an arc labelled by $\omega$ at
a classical crossing of sign $\varepsilon$, and finally write
$\mu_i^{-k}$, where $k$ is the algebraic number of classical self-crossings
in the $i\,$th component.
\item A \emph{peripheral system} for $L$ is the group $G(L)$ endowed with the choice of a basing and
the data, for each $i$, of the $i\,$th preferred longitude.
\end{itemize}
\end{defi}
When $L$ is a classical link, a basing is the choice of a topological meridian for each component, and the $i\,$th preferred longitude represents a parallel copy of the $i\,$th component having linking number zero with it.
Hence the above definitions naturally generalize the usual notion of
peripheral system of links.
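\medskip
As an elementary illustration, easily checked from the definitions, consider the positive Hopf link $H$, given by its standard diagram with two positive crossings. Each component underpasses exactly one crossing and hence consists of a single arc, so that $G(H)$ is generated by the two corresponding meridians $\mu_1$ and $\mu_2$, and both Wirtinger relations amount to the commutation relation:
\[
G(H) = \big\langle \mu_1, \mu_2\,\,\big| \,\, [\mu_1,\mu_2] \big\rangle\cong \mathbb{Z}^2.
\]
Running along the first component from the arc $\mu_1$, one passes once under the arc $\mu_2$ with sign $+1$ and meets no classical self-crossing, so that $\lambda_1=\mu_2$ and, symmetrically, $\lambda_2=\mu_1$: each preferred longitude is a meridian of the other component, in accordance with the linking number interpretation recalled above.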
\medskip
Two peripheral systems $\big(G, \{(\mu_i,\lambda_i)\}_i\big)$
and $\big(G, \{(\mu'_i,\lambda'_i)\}_i\big)$ are \emph{conjugate}
if, for each $i$, there exists $\omega_i\in G$ such that $\mu'_i=\omega_i^{-1}\mu_i\omega_i$ and
$\lambda'_i=\omega_i^{-1}\lambda_i\omega_i$.
Two peripheral systems $\left(G, \{(\mu_i,\lambda_i)\}_i\right)$ and
$\left(G', \{(\mu'_i,\lambda'_i)\}_i\right)$ are \emph{equivalent} if
there exists an isomorphism $\psi: G'\rightarrow G$ such that $\big(G, \{(\mu_i,\lambda_i)\}_i\big)$
and $\big(G, \{(\psi(\mu'_i),\psi(\lambda'_i))\}_i\big)$ are conjugate.
The following is well-known, see for example \cite[Prop.6]{Kim}.
\begin{lemme}
Up to conjugation, peripheral systems are well-defined for welded
diagrams and yield, up to equivalence, a well-defined invariant
of welded links.
\end{lemme}
\begin{proof}
Suppose that $\{(\mu_i,\lambda_i)\}_i$ is a peripheral system of
the welded diagram $L$, and let $\mu'_i$ be another choice of meridian for the $i\,$th
component, hence yielding another preferred $i\,$th longitude $\lambda'_i$.
Then $\mu'_i=\omega_i^{-1}\mu_i\omega_i$ for some $\omega_i\in G(L)$, and by definition
$\lambda'_i=\omega_i^{-1}(\lambda_i \mu_i^k) \omega_i {\mu'_i}^{-k}$.
But substituting $\omega_i^{-1}\mu_i\omega_i$ for $\mu'_i$ in $\lambda'_i$ then gives
$\lambda'_i=\omega_i^{-1}\lambda_i \mu_i^k \omega_i \omega_i^{-1} \mu_i^{-k} \omega_i = \omega_i^{-1}\lambda_i \omega_i$.
This proves that the peripheral system of $L$ is uniquely determined up to conjugation.
Using this fact, it is then an easy exercise to check that equivalence classes of peripheral systems are well-defined for welded links, \emph{i.e.} that they are invariant under welded and classical Reidemeister moves.
More precisely, by an appropriate choice of basing, one can check that each classical Reidemeister move induces an isomorphism of the groups of the diagrams which preserves each preferred longitude; the argument
is even simpler for welded Reidemeister moves.\\[-0.5cm]
{\parpic[r]{$\begin{array}{c}
L\ \dessin{1.5cm}{R1_1} \stackrel{\textrm{R1}}{\longleftrightarrow} \dessin{1.5cm}{R1_2}L'
\end{array}$}
As an elementary illustration, let us consider two welded diagrams $L$ and $L'$ which
differ by an R1 move as shown on the right.
The generators $\alpha$ and $\beta$ of $G(L)$ shown in the figure satisfy $\alpha=\beta$, so $G(L)$ and $G(L')$ are clearly isomorphic.
Pick $\beta\in G(L)$, resp. $\alpha\in G(L')$, as meridian for the depicted component of $L$, resp. $L'$.
}
\noindent Then the corresponding preferred longitude of $L$ is of the form
$\omega\alpha\alpha^{-k}$ for some $\omega\in G(L)$ and some $k\in \mathbb{Z}$, while the
corresponding preferred longitude of $L'$ reads $\omega\alpha^{-k+1}$, since $L'$ contains one less positive self-crossing.
Hence the above isomorphism from $G(L)$ to $G(L')$ preserves the peripheral system.
\end{proof}
\begin{remarque}\label{rem:inject}
The peripheral system of classical links is a complete invariant \cite{Waldo}.
Since this invariant extends to welded links, in the sense that the above-defined invariant coincides with the usual peripheral system for classical links, this shows that classical links inject into welded links \cite{Kauffman,GPV}.
\end{remarque}
\subsection{Reduced group and reduced peripheral system}
As before, let us consider a welded diagram $L$.
\begin{defi}
For a group $G$ given with a finite generating set $X$, the \emph{reduced group of $G$}, denoted by $\textnormal R G$, is the quotient of $G$
by its normal subgroup generated by all elements $[\zeta,\omega^{-1}\zeta \omega]$, where $\zeta\in X$ and $\omega\in G$.
In particular, we define the \emph{reduced group of $L$} as the reduced group $\textnormal R G(L)$ of $G(L)$ with respect to its
Wirtinger generators.
\end{defi}
Note that $\textnormal R G(L)$ is the largest quotient of $G(L)$
where any meridian commutes with any of its conjugates. Since any two
meridians of a same component are conjugate elements, it can also be
defined as the quotient of $G(L)$ by the normal subgroup generated by
the elements $[\mu_i,\omega^{-1}\mu_i \omega]$ for all $\omega\in G(L)$, where $\{\mu_i\}_i$ is a fixed basing for $L$.
\begin{convention*}
In the rest of this paper, we shall use Greek letters with a tilde for elements in the group of a welded diagram,
and use the same letters, but without the tilde, to denote the corresponding elements in the reduced group.
In particular, we respectively denote by $\mu_i$ and $\lambda_i$ the images in $\textnormal R G(L)$ of any meridian $\widetilde{\mu}_i $ and longitude $\widetilde{\lambda}_i$ in $G(L)$.
\end{convention*}
\begin{defi}
The \emph{reduced peripheral system} for $L$ is the data
$$\big( \textnormal R G(L),\{(\mu_i,\lambda_i.N_i)\}_i\big),$$
associated to a peripheral system $\big(G(L),\{(\widetilde{\mu}_i,\widetilde{\lambda}_i)\}_i\big)$
where, for each $i$, $\lambda_i.N_i$ denotes the coset of $\lambda_i$ with
respect to $N_i$, the normal subgroup generated by the $i\,$th reduced
meridian $\mu_i$.
Two reduced peripheral systems are \emph{conjugate}
if they come from conjugate peripheral systems; and they are
\emph{equivalent} if there is a group isomorphism sending one
to a conjugate of the other.
\end{defi}
As explained in the introduction, Milnor introduced the reduced peripheral system
for classical links, and showed that it is a link-homotopy invariant. We have the following generalization.
\begin{lemme}\label{lem:lhinv}
Up to equivalence, the reduced peripheral system is a well-defined
invariant of welded links up to $\textnormal{sv}$--equivalence.
\end{lemme}
\begin{proof}
Since equivalent peripheral systems obviously yield equivalent
reduced peripheral systems, it suffices to prove the invariance under a single SV move.
Pick a self-crossing $s$ of some welded diagram $L$, and denote by
$L_s$ the diagram obtained by replacing $s$ by a virtual crossing:
\[
\begin{array}{c}
L\ \dessin{1.4cm}{SV_1_W} \hspace{.3cm} \stackrel{\textrm{SV}}{\longleftrightarrow} \hspace{.3cm} \dessin{1.4cm}{SV_2_W}\ L_s
\end{array}.
\]
Consider the three generators $\widetilde{\alpha},\widetilde{\beta},\widetilde{\gamma}$ of $G(L)$ involved in $s$, as
shown above.
Since the meridians $\widetilde{\alpha}$, $\widetilde{\beta}$ and $\widetilde{\gamma}$ all belong to the same
component, there are $\widetilde{\omega},\widetilde{\zeta}\in G(L)$ such that
$\widetilde{\beta}=\widetilde{\omega}^{-1}\widetilde{\alpha}\widetilde{\omega}$ and $\widetilde{\alpha}=\widetilde{\zeta}^{-1}\widetilde{\gamma}\widetilde{\zeta}$.
For $L$, the Wirtinger relation at $s$ is $\widetilde{\gamma}=\widetilde{\alpha}^{-1}\widetilde{\beta}\widetilde{\alpha}$; hence, we have that $\gamma=\alpha^{-1}\omega^{-1}\alpha \omega \alpha\equiv \omega^{-1}\alpha \omega=\beta$ holds in $\textnormal R G(L)$, which shows that $\textnormal R G(L)$ is isomorphic to $\textnormal R G(L_s)$.
It remains to show that this isomorphism preserves the reduced peripheral system.
Pick $\widetilde{\alpha}\in G(L)$ as meridian $\widetilde{\mu}$ for the component of $L$ containing $s$; the corresponding preferred longitude is given by
$\widetilde{\lambda}=\widetilde{\omega} \widetilde{\alpha}\widetilde{\zeta} \widetilde{\alpha}^{-k}$ for some integer $k$. Take the meridian $\widetilde{\mu}_s$ of the corresponding component of $L_s$ to be represented by $\widetilde{\alpha}$ again, so that the
preferred longitude is given by $\widetilde{\lambda}_s=\widetilde{\omega}\widetilde{\zeta} \widetilde{ \alpha}^{-k+1}$.
Then the isomorphism $\textnormal R G(L)\rightarrow \textnormal R G(L_s)$ maps $\mu$ to $\mu_s$, and the equality
\[
\lambda = \omega \alpha\zeta \alpha^{-k} \equiv \omega[\zeta,\alpha]\alpha\zeta \alpha^{-k} =
\omega\zeta \alpha\zeta^{-1}\alpha^{-1}\alpha\zeta \alpha^{-k}=\omega\zeta \alpha^{-k+1} \textrm{ (mod $N$)}
\]
shows that $\lambda.N$ is mapped to $\lambda_s.N_s$, where $N\subset
\textnormal R G(L)$ and $N_s\subset
\textnormal R G(L_s)$ denote the normal subgroups generated by
$\alpha$.
This handles one version of the SV move, but the other one is strictly similar.
\end{proof}
\begin{remarque}
Since we are overall working modulo the normal subgroup generated by the meridian $\alpha$, the above sequence of equalities for the longitude $\lambda$ could be slightly simplified. It seems however instructive to point out that one only needs to consider this normal subgroup in the second equality of this sequence;
this identifies precisely where this equivalence is needed to have the desired invariance property.
\end{remarque}
In particular, the reduced fundamental group is hence invariant under self-virtualization. Combining this with the sorted form given in Lemma \ref{lemma:Sorted},
we obtain the following presentation.
\begin{lemme}\label{lem:Rpres}
The reduced fundamental group $\textnormal R G(L)$ has the following presentation
\[
\textnormal R G(L) = \big\langle \mu_1,\ldots, \mu_n\,\,\big| \,\, [\mu_i,\lambda_i], \,
[\mu_i,\omega^{-1}\mu_i \omega], \textrm{ for all $i$ and for all $\omega\in \textnormal F(\mu)$}
\big\rangle,
\]
where $\textnormal F(\mu)$ denotes the free group on the generators $\mu_1,\ldots,\mu_n$.
\end{lemme}
\begin{proof}
Suppose first that $L$ corresponds to a sorted Gauss diagram.
The Wirtinger-type presentation for its welded link group provides, for every component $C_i$,
a generator $\mu_i$ corresponding to the $t$--arc, and a family of generators $\mu_i^j$ lying on the $h$--arc.
Any of the latter appears in exactly two relations which are of the form $\mu_i^j=\mu^{\pm1}_{i_1}\mu_i^{j-1}\mu^{\mp1}_{i_1}$
and $\mu_i^{j+1}=\mu^{\pm1}_{i_2}\mu_i^j\mu^{\mp1}_{i_2}$, setting $\mu_i^0:=\mu_i=:\mu_i^{r_i}$ where $r_i$ is the number of
heads on $C_i$; generators $\mu_i^j$ can hence be successively eliminated, ending with relations $ [\mu_i,\lambda_i]$ only.
Hence the presentation
\[
G(L) = \big\langle \mu_1,\ldots, \mu_n\,\,\big| \,\, [\mu_i,\lambda_i], \, \textrm{ for all $i$ } \big\rangle.
\]
Relations $[\mu_i,\omega^{-1}\mu_i \omega]$ then naturally arise when taking the reduced quotient.
The general case, where $L$ does not necessarily correspond to a sorted Gauss diagram, then follows readily from Lemmas \ref{lemma:Sorted} and \ref{lem:lhinv}.
\end{proof}
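\medskip
Two elementary examples might be helpful. For a knot $K$, that is for $n=1$, any longitude is a word in the single generator $\mu_1$, so that all the relations above are trivially satisfied and $\textnormal R G(K)\cong \mathbb{Z}$; this reflects the classical fact that every knot is link-homotopically trivial. For the positive Hopf link $H$ of Section \ref{sec:wgroup}, whose preferred longitudes are $\lambda_1=\mu_2$ and $\lambda_2=\mu_1$, the presentation reads
\[
\textnormal R G(H) = \big\langle \mu_1, \mu_2\,\,\big| \,\, [\mu_1,\mu_2], \,
[\mu_i,\omega^{-1}\mu_i \omega], \textrm{ for all $i$ and for all $\omega\in \textnormal F(\mu)$}
\big\rangle\cong \mathbb{Z}^2,
\]
the reducing relations being consequences of $[\mu_1,\mu_2]$ since the group is abelian.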
\begin{remarque}\label{rem:Presentation->Diagram}
Starting with a sorted Gauss diagram $D$, Lemma \ref{lem:Rpres}
provides a presentation for the associated reduced fundamental group
whose generators $\{\mu_i\}_i$ correspond to the $t$--arcs, and whose relations
make these generators commute with their own conjugates, as well as
with their associated longitude. Moreover, as words in the
generators, these longitudes are directly given by the words
associated to $D$ in Remark \ref{rem:SortedLabels}. Conversely, any
such presentation provides a word representative for each longitude
and hence describes, up to OC moves, a unique sorted Gauss diagram.
\end{remarque}
\begin{remarque}
Lemma \ref{lem:Rpres} is to be compared with \cite[Thm.~4]{Milnor2}, where Milnor gives a similar presentation for the nilpotent quotients of the group of a classical link.
Actually, Milnor's argument being purely algebraic, the proof of \cite[Thm.~4]{Milnor2} applies verbatim to the case of welded links,
meaning in particular that the nilpotent quotient $G(L) / \Gamma_k G(L)$ has a presentation
$\big\langle \mu_1,\ldots, \mu_n\,\,\big| \,\, \Gamma_k F(\mu_i),\, [\mu_i,\lambda_i] \textrm{ for all $i$}\big\rangle$.
In fact, Milnor's proof can be adapted with minor adjustments to give an alternative proof of Lemma \ref{lem:Rpres}.
\end{remarque}
\section{Diagrammatic characterization of the reduced peripheral system}\label{sec:2}
This section is devoted to the proof of the following result, which readily implies the equivalence $\textrm{i}\Leftrightarrow \textrm{ii}$ in our main theorem, and which more generally classifies welded links up to $\textnormal{sv}$--equivalence.
\begin{theo}\label{thm:weldedclassif}
Two welded links are $\textnormal{sv}$--equivalent if and only if they have isomorphic reduced peripheral systems.
\end{theo}
For convenience, we will adopt here the Gauss diagram point of view,
fix a number $n\in{\mathds N}^*$ of components, and let $\textnormal F(\mu)$ denote the free group
on the elements $\mu_1,\ldots,\mu_n$.
\begin{proof}
The ``only if'' part of Theorem \ref{thm:weldedclassif} was proved in Lemma \ref{lem:lhinv}.
Conversely, let $L$ and $L'$ be two welded diagrams
with equivalent reduced peripheral systems $\big( \textnormal R
G(L),\{(\mu_i,\lambda_i.N_i)\}_i\big)$ and $\big( \textnormal R
G(L'),\{(\mu'_i,\lambda'_i.N'_i)\}_i\big)$. Using Lemma
\ref{lemma:Sorted}, we may assume that $L$ and $L'$ are both given by
sorted Gauss diagrams $D$ and $D'$. Following Remark
\ref{rem:Presentation->Diagram}, the strategy will be to apply welded
and $\textnormal{SV}$ moves on $D'$ so that the presentations for the reduced fundamental group associated to $D$ and $D'$ are the same, meaning that $D$ and $D'$ are the same up to OC moves. To keep notation light, we
will also denote by $D'$ all the sorted Gauss diagrams successively obtained by
modifying $D'$.
\begin{figure}
\[
\begin{array}{cc}
\dessin{2cm}{Basing2}&
\xrightarrow[]{\textrm{TaH}}\ \dessin{2cm}{Basing3}\ \xrightarrow[]{\textrm{TaH}}\ \dessin{2cm}{Basing4}
\\[1cm]
\phantom{\textrm{\scriptsize R2}}\uparrow \textrm{\scriptsize R2}& \\[.1cm]
\dessin{2cm}{Start}& \\[-.2cm]
\phantom{\textrm{\scriptsize R2}}\downarrow \textrm{\scriptsize R2}&\\[.1cm]
\dessin{2cm}{Longitude2} &
\xrightarrow[]{\textrm{R2}}\ \dessin{2cm}{Longitude3}\ \xrightarrow[]{\textrm{SV}}\ \dessin{2cm}{Longitude4}
\\[-.1cm]
\end{array}
\]
\caption{Making the equivalence an isomorphism}
\label{fig:Equivalence->Isomorphism}
\end{figure}
To begin, we first modify $D'$ so that the
isomorphism $\psi:\textnormal R G(L')\to\textnormal R G(L)$ sends each pair
$(\mu'_i,\lambda'_i)$ to $(\mu_i,\lambda_i)$.
Algebraically, transforming $\psi(\mu'_i,\lambda'_i)$ into $(\mu_i,\lambda_i)$ can be achieved in two steps:
\begin{enumerate}
\item since $\psi(\mu'_i)$ is a conjugate of $\mu_i$,
perform a sequence of ``elementary conjugations'', each replacing both $\mu'_i$ and $\lambda'_i$ by their conjugate under
${\mu_j'}^{\varepsilon}$, for some index $j\neq i$ and sign $\varepsilon$;
\item since $\psi(\lambda'_i)$ is an
element of $\lambda_i.N_i$, multiply $\lambda'_i$ by
an appropriate product of conjugates of ${\mu'_i}^{\pm1}$.
\end{enumerate}
These two steps have to be realized at the diagrammatical level:
\begin{enumerate}
\item for each
elementary conjugation, use R2 to add two parallel arrows going from the $j\,$th $t$--arc to the
starting extremity of the $i\,$th $t$--arc, and then pull the head of the
$(-\varepsilon)$--labelled arrow along the $i\,$th $t$--arc using the $\textnormal{TaH}$ move given in Figure \ref{fig:TaH}.
This yields an equivalent Gauss diagram which is still sorted, and
which realizes the desired elementary conjugation. See the upper half of Figure \ref{fig:Equivalence->Isomorphism} for an example where $\mu'_2$ and $\lambda'_2$ are conjugated by $\mu'_3$, the components being numbered from left to right;
\item for adding a conjugate $g{\mu'_i}^{\pm1}g^{-1}$ of ${\mu'_i}^{\pm1}$ to $\lambda'_i$,
use repeatedly R2 to add the trivial word $gg^{-1}$ to the $i$th
longitude of $D'$, and then a single SV move to produce the desired word $g{\mu'_i}^{\pm1}g^{-1}$. See the lower half of Figure \ref{fig:Equivalence->Isomorphism} for an example where the conjugate of $\mu'_2$ by ${\mu_1'}^{-1}\mu_3'$ is introduced in $\lambda'_2$, the components being numbered from left to right.
\end{enumerate}
We have thus realized the isomorphism $\psi:\textnormal R G(L')\to\textnormal R G(L)$ which sends each $\mu'_i$ to $\mu_i$, and each $\lambda'_i$ to $\lambda_i$.
As a matter of fact, we can now identify $\mu'_i$ with $\mu_i$, and according to the presentation given in Lemma \ref{lem:Rpres}, $\psi(\lambda'_i)$ and $\lambda_i$ differ, as words in the $\mu_j$'s, by a sequence of the following moves:
\begin{enumerate}
\item[i.] $\mu_j^{\pm1}\mu_j^{\mp1}\leftrightarrow 1$;
\item[ii.] $\omega\mu^{\pm1}_j\rightarrow \mu^{\pm1}_j\omega$ where $\omega$ is
of the form $\zeta^{-1}\mu^{\pm1}_j\zeta$ with $\zeta\in \textnormal F(\mu)$;
\item[iii.] $\mu^{\pm1}_j\rightarrow \omega^{-1}\mu^{\pm1}_j\omega$ where
$\omega$ is any representative in $\textnormal F(\mu)$ of the element $\lambda_j$.
\end{enumerate}
But each of these moves can be realized by modifying $D'$. Relations
i indeed correspond to R2 moves.
Relations ii can be handled exactly as in the proof of \cite[Lem. 4.26]{ABMW}.
For relations iii, consider first the particular representative
$\omega_0$ of $\lambda_j$ given by $D'$ following Remark \ref{rem:SortedLabels}.
At the level of $D'$,
the term $\mu^{\pm1}_j$ in relation iii corresponds to an arrow $a$
whose tail sits on the $j\,$th circle; moving this tail along the
whole circle component, against the orientation,
does conjugate $\mu^{\pm1}_j$ by $\omega_0$. Indeed, using
$\textnormal{TaH}$ moves, the tail of $a$ will cross
every head on its way at the cost of conjugating the head of $a$ with
the desired arrows; see Figure \ref{fig:ConjugatingMu} for an example where the $\mu_2$ factor in $\lambda_1'$ is conjugated by $\lambda_2'$, the components being numbered from left to right. For other representatives of $\lambda_j$, we note that
they differ from $\omega_0$ by a sequence of the following moves:
\begin{itemize}
\item[i'.] $\mu_k^{\pm1}\mu_k^{\mp1}\leftrightarrow 1$;
\item[ii'.] $\zeta^{-1}\mu^{\pm1}_k\zeta\rightarrow \mu^{\pm1}_k$ where
$\zeta$ is any element in
$\textnormal F(\mu)$.
\end{itemize}
Before sliding the tail of $a$ over the circle component, $D'$ should hence be modified so
that the slide operation does conjugate by the right word in
$\textnormal F(\mu)$. Again, relations i' can be realized using R2 moves. For relation ii', use the Slide move to remove, in pairs, the
arrows of $\zeta$ and $\zeta^{-1}$ away from the circle.
Then move the tail of $a$ along this loop, and perform the relations i' and ii' backwards.
\begin{figure}
\[
\dessin{2cm}{Conjugate1}\ \xrightarrow[]{\textrm{TaH}}\ \dessin{2cm}{Conjugate2}\ \xrightarrow[]{\textrm{TaH}}\ \dessin{2cm}{Conjugate3}
\]
\caption{Conjugating meridian factors in longitudes}
\label{fig:ConjugatingMu}
\end{figure}
\end{proof}
\section{A topological characterization of the reduced peripheral system}\label{sec:3}
Welded links are closely related to \emph{ribbon knotted tori} and \emph{ribbon solid tori} in
$S^4$, and the characterization of classical links having the same
reduced peripheral systems given by Theorem \ref{thm:weldedclassif} can be recast in terms of
$4$--dimensional topology.
\subsection{The enhanced Spun map}
Given a classical link $L\subset{\mathds R}^3$, a well-known procedure to
construct ribbon knotted tori in $4$--space is to take the \emph{Spun} of $L$:
consider a plane $\mathcal{P}$ which is disjoint from a $3$--ball containing
$L$, and spin $L$ around $\mathcal{P}$ inside ${\mathds R}^4\supset{\mathds R}^3$. The
result is a union of knotted tori, which we denote by $\Spun(L)$. If the
projection $D(L,\mathcal{P})$ of $L$ onto the plane $\mathcal{P}$ is regular, then
spinning as well the orthogonal projection rays from $L$ to $\mathcal{P}$
provides immersed solid tori whose boundary is $\Spun(L)$ and whose singularities are so-called
\emph{ribbon} disks, corresponding to
the crossings of $D(L,\mathcal{P})$.
Of course, this \emph{ribbon filling} depends on the choice of plane
$\mathcal{P}$, and more precisely on the diagram $D(L,\mathcal{P})$, which may by
changed by some sequence of Reidemeister moves.
But for each Reidemeister move, there is an associated singular diagram, that is a singular plane $\mathcal{P}_s$,
and spinning $L$ around $\mathcal{P}_s$ provides some singular ribbon filling which can be infinitesimally desingularized into the spun of one or the other side of the Reidemeister move.
This leads to the following definition, which settles a notion of
(singular) ribbon solid tori.
\begin{defi}
Let $\varphi:M\to S^4$ be an immersed $3$--dimensional manifold.
Let $D$ be a connected component of the singular set of $\varphi(M)$
contained in an open $4$--ball $B\subset S^4$.
We say that $D$ is a \emph{ribbon singularity}:\footnote{For the sake of exactitude, we provide here explicit formulas for the singularities, but informal descriptions follow in Remark \ref{rem:InformalRibbonSingularities}.}
\begin{itemize}
\item \emph{of type $0$} if $\varphi^{-1}(B)$ is the disjoint union $B_1\sqcup B_2$ of
two $3$--balls and there is a local system of coordinates for
$B\cong{\mathds R}^4$ such that $\left\{
\begin{array}{l}
\varphi(B_1)=\Big\{\big(t,r\cos(s),r\sin(s),0\big)\ \big|\
t,s\in{\mathds R},r\in[0,2]\Big\}\\[.2cm]
\varphi(B_2)=\Big\{\big(0,r\cos(s),r\sin(s),t\big)\ \big|\
t,s\in{\mathds R},r\in[0,1]\Big\}
\end{array}
\right.$;
\item \emph{of type $2$} if $\varphi^{-1}(B)$ is the disjoint union $B_1\sqcup B_2$ of
two $3$--balls and there is a local system of coordinates for
$B\cong{\mathds R}^4$ such that $\left\{
\begin{array}{l}
\varphi(B_1)=\Big\{\big(t,r\cos(s),r\sin(s),t^2\big)\ \big|\
t,s\in{\mathds R},r\in[0,2]\Big\}\\[.2cm]
\varphi(B_2)=\Big\{\big(t,r\cos(s),r\sin(s),-t^2\big)\ \big|\
t,s\in{\mathds R},r\in[0,1]\Big\}
\end{array}
\right.$;
\item \emph{of type $3$} if $\varphi^{-1}(B)$ is the disjoint union $B_1\sqcup B_2\sqcup
B_3$ of
three $3$--balls and there is a local system of coordinates for
$B\cong{\mathds R}^4$ such that $\left\{
\begin{array}{l}
\varphi(B_1)=\Big\{\big(t,r\cos(s),r\sin(s),0\big)\ \big|\
t,s\in{\mathds R},r\in[0,2]\Big\}\\[.2cm]
\varphi(B_2)=\Big\{\big(t,r\cos(s),r\sin(s),t\big)\ \big|\
t,s\in{\mathds R},r\in[0,1]\Big\}\\[.2cm]
\varphi(B_3)=\Big\{\big(t,r\cos(s),r\sin(s),-t\big)\ \big|\
t,s\in{\mathds R},r\in\big[0,\frac12\big]\Big\}
\end{array}
\right.$;
\item \emph{of type $\textnormal{SV}$} if $\varphi^{-1}(B)$ is the disjoint
union $B_1\sqcup B_2$ of two $3$--balls, $B_1$ and $B_2$ belongs
to the same connected component of $M$, and there is a local system
of coordinates for $B\cong{\mathds R}^4$ such that $\left\{
\begin{array}{l}
\varphi(B_1)=\Big\{\big(r,t,s,0\big)\ \big|\
t,s\in{\mathds R},r\in{\mathds R}_-\Big\}\\[.2cm]
\varphi(B_2)=\Big\{\big(0,r\cos(s),r\sin(s),t\big)\ \big|\
t,s\in{\mathds R},r\in[0,1]\Big\}
\end{array}
\right.$.
\end{itemize}
\end{defi}
\begin{remarque}\label{rem:InformalRibbonSingularities}
In all four cases, the ribbon singularity $D$ corresponds to the disk $\Big\{\big(0,r\cos(s),r\sin(s),0\big)\ \big|\
s\in{\mathds R},r\in[0,1]\Big\}$.
Type $0$ corresponds to two solid
tubes, one being smaller than the other, intersecting
transversally; these are the usual ribbon singularities. Type $2$
corresponds to two solid tubes, one being smaller than the other,
intersecting tangentially; these occur when spinning a link
around a plane on which the link projects with two tangential
strands. Type $3$
corresponds to three solid tubes of increasing width,
intersecting simultaneously and transversally; these occur when spinning a link
around a plane on which the link projects with a triple
point. Type $\textnormal{SV}$ differs from type $0$ in
that one
preimage of the singular disk
lies on the boundary of $M$
instead of its interior, and in that the two preimages belong to the
same solid torus;
these occur when performing the link-homotopy which pushes at once a usual
ribbon (self) singularity through the boundary of $M$. Note that
a type $1$ seems to be missing here, which would correspond to spinning a link
around a plane on which the link projects with a cusp, but this
does not introduce any new kind of ribbon singularity.
\end{remarque}
\begin{defi}
\emph{Ribbon solid tori} are immersed solid tori in $S^4$ whose
singular locus is made of ribbon singularities of type
$0$. \emph{Generalized ribbon solid tori} are immersed solid tori in $S^4$ whose
singular locus is made of ribbon singularities of type $0$, $2$ and
$3$. \emph{Self-singular ribbon solid tori} are immersed solid tori in
$S^4$ whose singular locus is made of ribbon singularities of type
$0$ and $\textnormal{SV}$.
We say that two (generalized) ribbon solid tori are \emph{equivalent} if there is
a path among generalized ribbon solid tori connecting them, and we
say that they are \emph{ribbon link-homotopic} if there is a path
among generalized and self-singular ribbon solid tori connecting them.
\end{defi}
Adding the spun of the projection rays to the above definition of the $\Spun$ map provides a well-defined map $\Spun^\bullet$ from classical links to generalized ribbon solid tori.
The following result is the topological
characterization of the reduced peripheral system given by the equivalence $\textrm{i}\Leftrightarrow \textrm{iii}$ in our main theorem.
\begin{theo}\label{tralalilala}
Two classical links $L$ and $L'$ have isomorphic reduced peripheral
systems if and only if $\Spun^\bullet(L)$ and $\Spun^\bullet(L')$ are
ribbon link-homotopic.
\end{theo}
The proof is given in the next section.
As for the diagrammatic characterization given in Section \ref{sec:2},
this will follow from a more general result, Theorem \ref{thm:4-dimclassif}, characterizing the reduced peripheral system of welded links in terms of $4$--dimensional topology.
\subsection{The enhanced Tube map}
In this section, we prove Theorem \ref{tralalilala}, using the so-called $\Tube$ map.
Recall from \cite{Satoh} that Satoh's generalization of Yajima's $\Tube$ map is defined from welded links to ribbon knotted $2$-tori, and that for any welded link $L$, $\Tube(L)$ actually comes with a canonical ribbon filling.
In order to fully record this ribbon filling in the $\Tube$ map, and to connect with the $\Spun^\bullet$ map, we are led to the following notion.
\begin{defi}
We define \emph{generalized welded diagrams} as diagrams with cusps and the following
kind of crossings:
\[
\begin{array}{c}
\dessin{1.2cm}{CClassical}\\
\textrm{classical}
\end{array}
\hspace{1cm}
\begin{array}{c}
\dessin{1.2cm}{CTangent}\\
\textrm{tangential}
\end{array}
\hspace{1cm}
\begin{array}{c}
\dessin{1.2cm}{CTriple}\\
\textrm{triple}
\end{array}
\hspace{1cm}
\begin{array}{c}
\dessin{1.2cm}{CVirtuel}\\
\textrm{virtual}
\end{array}.
\]
Then classical Reidemeister moves are replaced by a path of diagrams
going through the corresponding cusp, tangential point or triple
point. Other welded moves are still locally allowed.
We define \emph{self-singular welded diagrams} as diagrams with the following
kind of crossings, where the two strands involved in a semi-virtual crossing belong to a same component:
\[
\begin{array}{c}
\dessin{1.2cm}{CClassical}\\
\textrm{classical}
\end{array}
\hspace{1cm}
\begin{array}{c}
\dessin{1.2cm}{CVirtuel}\\
\textrm{virtual}
\end{array}
\hspace{1cm}
\begin{array}{c}
\dessin{1.2cm}{CSemiVirtuel}\\
\textrm{semi-virtual\footnotemark}
\end{array}.
\]
\footnotetext{Semi-virtual crossings were already introduced in
\cite{GPV} in connection with finite type invariants of virtual
knots.}
\emph{Self-virtualization} is defined for generalized welded diagrams
as the equivalence relation generated by the local moves turning a
semi-virtual crossing into either a classical crossing, or a virtual
one as follows:
\[
\dessin{1.2cm}{CClassical} \longleftrightarrow
\dessin{1.2cm}{CSemiVirtuel}
\longleftrightarrow \dessin{1.2cm}{CVirtuel}.
\]
\end{defi}
Following \cite[Sec. 3.2]{Aud_HdR}, one can then define a map
\[
\Tube^\bullet:\frac{\big\{\textrm{generalized welded
diagrams}\big\}}{\textrm{self-virtualization}}\to\frac{\big\{\textrm{generalized
ribbon solid tori}\big\}}{\textrm{ribbon
link-homotopy}}
\]
which, respectively, associates ribbon singularities of type $0$, $2$,
$3$ and $\textnormal{SV}$ to classical, tangential, triple and semi-virtual crossings
and connects these various singularities by pairwise disjoint $3$-balls, as prescribed by the welded diagram.
It is then a straightforward
adaptation of \cite[Prop. 3.7]{Aud_HdR} to prove that $\Tube^\bullet$
is one-to-one.
As a direct corollary of Theorem \ref{thm:weldedclassif}, we obtain the following alternative characterization of the reduced peripheral system, which holds for all welded links.
\begin{theo}\label{thm:4-dimclassif}
Two welded links $L_1$ and $L_2$ have isomorphic reduced peripheral
systems if and only if $\Tube^\bullet(L_1)$ and $\Tube^\bullet(L_2)$ are
ribbon link-homotopic.
\end{theo}
Theorem \ref{tralalilala} follows from this result.
Indeed, as essentially pointed out by Satoh in \cite{Satoh}, it is
clear from the definition of the $\Spun^\bullet$ map that, starting
with a diagram $D$ of a classical link $L$, the ribbon solid tori
$\Spun^\bullet(L)$ consist of ribbon singularities which are
connected by $3$-balls as combinatorially prescribed by $D$:
$\Spun^\bullet(L)$ and $\Tube^\bullet(L)$ are hence equivalent.
\subsection{Link-homotopy of ribbon surfaces in $4$-space}\label{sec:surfaces}
The original versions of the $\Spun$ and $\Tube$ maps produce ribbon $2$--tori,
which are just the boundaries of some ribbon solid tori, rather than $3$--dimensional objects.
Obviously, any ribbon link-homotopy between two ribbon solid tori
induces a usual link-homotopy between their
boundaries. Building on this remark, it follows that:
\begin{prop}
If two classical links $L_1$ and $L_2$ have isomorphic reduced
peripheral systems, then $\Spun(L_1)$ and $\Spun(L_2)$ are link-homotopic.
\end{prop}
It is hence tempting to hope that the converse holds true: this would
give a topological characterization of the reduced peripheral system in terms of spun surfaces up to link-homotopy.
However, this is not the case.
There is indeed a known global move on welded links, related to the torus eversion in $S^4$, under
which the $\Spun$ map is invariant, and this move
transforms every classical link into its \emph{reversed image}, which is the
mirror image with reversed orientation (see \cite{Blake} or
\cite[Prop. 2.7]{IndianPaper}).
Furthermore,
it can be checked
that the (reduced) peripheral system of a reversed image is obtained
from the initial one by simply inverting the longitudes.
It follows easily
from these two observations
that, for instance, the positive and negative Hopf links have non-equivalent reduced peripheral systems,
whereas their spuns are isotopic, hence link-homotopic.
As a consequence, keeping track of the ribbon filling is mandatory to
preserve (reduced) peripheral systems, and Theorem \ref{tralalilala} is in this sense optimal.
The observation of high principal quantum number Rydberg excitons in cuprous oxide~\cite{Kazimierczuk2014} (Cu$_{2}$O) has brought about renewed interest~\cite{Thewes2015,Gruenwald2016,Walther2018a,Zhang2018,Heckoetter2020a} in the material. A major motivation is the potential to combine long-range van der Waals interactions (which scale with principal quantum number, $n$, as $n^{11}$) between excitons with the quantum optical properties of exciton-polaritons~\cite{Khazali2017,Walther2018a,Poddubny2019}. This concept of Rydberg quantum optics builds on previous work using Rydberg states of laser cooled and thermal atomic vapours, where single-photon sources and photon-photon interactions have been demonstrated~\cite{Firstenberg2016,Adams2019}.
Mott-Wannier excitons were first observed in cuprous oxide by Hayashi \textit{et al.}~\cite{Hayashi1950,Hayashi1952} and Gross \textit{et al.}~\cite{Gross1956}, using absorption spectroscopy~\cite{Hayashi1950,Hayashi1952,Gross1956,Agekyan1977,Kavoulakis1997,Washington1977}. The highest valence and lowest conduction bands arise, respectively, from the 3d and 4s orbitals of the Cu atoms and are the origin of the lowest energy (yellow) exciton series. Because both bands are of even-parity, excitons with an odd-parity P-type envelope are dipole allowed~\cite{Elliott1957}. The seminal study by Kazimierczuk \textit{et al.}~\cite{Kazimierczuk2014} resolved excitons up to principal quantum number $n$ = 25 and more recently Versteegh \textit{et al.}~\cite{Versteegh2021} measured up to $n$ = 30. At these highest observed states, the spatial extent of the excitons is greater than 1~\si{\micro\meter} and the associated dipole moment is large enough to exhibit evidence of a Rydberg blockade~\cite{Kazimierczuk2014,Walther2018,Heckoetter2018,Heckoetter2020} arising from the van der Waals interactions between the excitons.
The first study of even-parity excitons in cuprous oxide up to $n=5$ was made by Fr\"{o}hlich \textit{et al.}~\cite{Froehlich1979,Uihlein1981}. The measured observable in these and other experiments~\cite{Matsumoto1996} was either a change of a few percent in the intensity of the transmitted laser light or spectrally-resolved photoluminescence (PL). The resulting two-photon excitation (TPE) spectra consist primarily of even-parity excitons with S- and D-type envelopes as shown in Fig.~\ref{Fig:ExcitonCartoon}.
From a quantum optics perspective, the study of even-parity excitons offers several advantages over that of the odd-parity states. TPE photoluminescence and SHG provide a signal at approximately twice the frequency of the excitation laser, making it easier to remove stray light. Unlike one-photon absorption, where interactions with optical phonons~\cite{Ueno1969} gives rise to strong background absorption, SHG is a background-free process that disappears above the bandgap of the material. These properties were essential to a recent demonstration of the microwave modulation of an optical carrier in cuprous oxide~\cite{Gallagher2021}. It is therefore an important question whether the even-parity spectrum may be resolved up to the same high principal quantum numbers ($n>25$) observed in one-photon excitation of the $nP$ series~\cite{Kazimierczuk2014,Versteegh2021}.
Recently, Mund \textit{et al.}~\cite{Mund2018,Mund2019,Farenbruch2020,Rommel2020} have studied even-parity excitonic states up to 9S using second-harmonic generation (SHG). In these experiments~\cite{Mund2018,Mund2019,Farenbruch2020,Rommel2020}, broadband femtosecond pulses simultaneously excited a range of Rydberg states, and the resulting second harmonic was spectrally resolved using a monochromator. Although cuprous oxide possesses inversion symmetry, SHG becomes allowed at exciton resonances when quadrupole terms are considered in the excitation or emission operators. The resulting selection rules are considered in detail in \cite{Schoene2016,Mund2018}. SHG may also occur if the crystal symmetry is lifted either by residual strain~\cite{Mund2019}, lattice imperfections or by externally applied fields~\cite{Rommel2020, Farenbruch2020}.
In this paper we present an alternative approach to SHG spectroscopy based on a continuously tuneable diode laser chopped into high-intensity nanosecond pulses, the duration of which can be varied using a fiber-based electro-optic modulator. The linewidth is near transform-limited to $\sim 50$~\si{\nano\electronvolt} for a typical pulse duration of 50~\si{\nano\second}, which is substantially less than the \si{\micro\electronvolt} spectral width of the excitonic features. This narrow linewidth enables direct measurement of the lineshape of high-$n$ excitons by scanning the laser. The high excitation intensity required for SHG is provided by a Raman fiber amplifier that achieves an average output power of up to 5~\si{\watt}. A representation of the excitation scheme is shown in Figure~\ref{Fig:ExcitonCartoon}. We combine the narrowband pulsed excitation with time-resolved single-photon counting detection, which enables measurements of the temporal profile of the emitted light over timescales from nanosecond to milliseconds, with resolution down to 50~\si{\pico\second}. Here, we show that this reveals the presence of weak decays from long-lived bound excitons states. We also perform spatially-resolved measurements of the SHG intensity, with a resolution on the micron scale.
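As a rough consistency check (our own back-of-envelope estimate; the time-bandwidth product $K$ depends on the pulse shape, which we do not specify here), the Fourier-limited linewidth of a pulse of duration $\tau$ is $\Delta E \approx K h/\tau$:
\begin{verbatim}
# Fourier-limited linewidth of a pulse of duration tau.
h_eV_s = 4.135668e-15   # Planck constant in eV*s

def linewidth_neV(tau_s, K):
    """FWHM linewidth in neV; K = time-bandwidth product
    (about 0.44 for a Gaussian pulse, 0.89 for a flat top)."""
    return K * h_eV_s / tau_s * 1e9

print(linewidth_neV(50e-9, 0.44))   # ~36 neV
print(linewidth_neV(50e-9, 0.89))   # ~74 neV
\end{verbatim}
Both values are of the order of the $\sim 50$~\si{\nano\electronvolt} figure quoted above, and far below the \si{\micro\electronvolt}-scale spectral width of the excitonic features.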
\begin{figure}
\centering
\includegraphics[width=68mm]{Figures/ExcitonSpectrumCartoon4.pdf}
\caption{The yellow exciton series of Cu$_{2}$O arises from the highest valence band (VB) and lowest conduction band (CB). Under two-photon excitation (TPE) and along certain crystallographic axes, Cu$_{2}$O permits second-harmonic generation (SHG). SHG is enhanced when the combined energy of the two excitation photons is resonant with an even-parity exciton, predominantly with S (blue parabolas) or D (green parabolas) envelope. SHG is suppressed above the bandgap. The SHG spectrum on the right, which is shown in more detail in Fig.\,\ref{Fig:SpectralFigure}, is acquired by scanning the excitation energy over the series.}
\label{Fig:ExcitonCartoon}
\end{figure}
\section{Experimental overview}
The overall experimental schematic is shown in Fig.\,\ref{Fig:Setup Figure}. All the data presented in this paper were acquired from natural cuprite gemstones from the Tsumeb copper mine in Namibia. The material was oriented using a Laue camera (Multiwire Labs Ltd MWL120) and cut to expose the (111) plane. Rectangular samples of $2 \times 3$~\si{\milli\meter} were prepared by slicing the oriented material and polishing it on both sides to a thickness of $60 \pm 30$~\si{\micro\meter}. Full details of the preparation process are described elsewhere~\cite{Lynch2021}.
Each lamella of Cu$_{2}$O was positioned in a pocket between two CaF$_{2}$ windows and fixed in one corner with a dab of nail varnish. The windows were in turn held in a copper mount, also attached by nail varnish for thermal contact. This mount was fixed to a 3-axis piezo translation stage (Attocube Systems AG nanopositioners: $2 \times $~ANPx101/RES/LT/UHV and $1 \times$~ANPz102/RES/LT/UHV) in a Montana Instruments C2 Cryostation and cooled to a base temperature of 4~\si{\kelvin}. Two in vacuo aspheric lenses (NA = 0.6; Lightpath Technologies 354105) were aligned coaxially either side of the sample. The front lens focused laser excitation onto the sample and collected the back-scattered signal. The rear lens re-collimated the intense excitation beam to be dumped safely. Just before the cryostat, a long-pass dichroic filter with cutoff at 785~\si{\nano\meter} (Semrock DI03-R785-T3-25-D) separated signal light from the infrared excitation beam path. The excitation spot size inside the sample was $\approx0.5~\si{\micro\meter}$. Due to uncompensated birefringence in the CaF$_{2}$ windows the light polarization was slightly elliptical, and its orientation with respect to the in-plane crystallographic axes was not set.
\begin{figure*}
\centering
\includegraphics[width=150mm]{Figures/SetupFigure4.pdf}
\caption{Schematic of the experimental platform. The seed laser is stabilised using a commercial wavemeter and carried primarily by optical fiber (blue lines). After pulse modulation (EOM) and amplification (RFA) the free-space IR beam (dashed red lines) passes through a noise eater/pulse picker (AOM) before transfer to the cryostat, where it passes through in-vacuo aspheres mounted either side of the sample. Backscattered SHG (yellow dashed lines) is separated using a dichroic mirror (grey line) and fiber coupled to the detection system. Also shown are the LED and camera (CMOS) used for imaging the sample. Components in dashed boxes (ppLN crystal, camera flip mirror, and Fabry-P\'{e}rot etalon (FPE)) may be inserted or removed by redirecting appropriate optical fibers. Electrical feedback signals are represented by green arrows.}
\label{Fig:Setup Figure}
\end{figure*}
The seed laser (orange box in Fig.\,\ref{Fig:Setup Figure}) was a custom-built Littrow-configuration external cavity diode laser (ECDL) based on a ``butterfly'' packaged gain module (Innolume GmbH GM-1140-120-PM-130) centred at 1140~\si{\nano\meter}. The waveguide-type gain module had two output ports. The main output was via a pre-aligned, single mode polarization maintaining fiber pigtail. Placing a 1200~\si{\per\milli\meter} grating at the second free-space output port converted the gain module into an ECDL. By optimizing the geometry of the lever arm holding the grating we designed the ECDL cavity to reach the synchronous tuning regime~\cite{McNicholl1985,Nilse1999,Labachelerie1993}, where the effects of changing the grating angle and the cavity length are appropriately matched to maximize the mode-hop-free tuning range. This architecture, in conjunction with the anti-reflection coated free-space facet, enabled mode-hop-free tuning over a \si{\milli\electronvolt} range. An additional IR etalon (not shown in Fig.~\ref{Fig:Setup Figure}) was used to estimate the short-term linewidth via the side-of-fringe technique, yielding an RMS width of 8~\si{\nano\electronvolt} measured over 500 \si{\milli\second}.
The zeroth order light at the free space output was passed through two consecutive $> 30$~\si{\decibel} free-space isolators (Thorlabs Inc. IO-4-1150-VLP) and sent to a precision wavemeter (Bristol Instruments 671A-NIR). The wavelength was measured at 4~\si{\hertz}, and computer-controlled tuning and wavelength stabilization was executed by two piezo devices regulating the laser cavity. Firstly, a slip-stick piezo actuator provided coarse tuning, by moving the grating lever arm in discrete steps corresponding to $\sim 400$~\si{\nano\electronvolt} in excitation energy. The total coarse tuning range exceeded 1130~--~1170~\si{\nano\meter}. Scans were performed by incrementing the coarse stepper and measuring the wavelength continuously as the signal was acquired. Secondly, a piezo stack at the tip of the coarse tuning actuator provided $\pm 40$~\si{\micro\electronvolt} of continuous fine tuning for a given actuator position. This piezo stack was powered by a 0 - 100~\si{\volt} driver with a resolution of 10~\si{\milli\volt}, which translates to spectral increments of $\sim 4$~\si{\nano\electronvolt}. The fine tuning was used primarily in conjunction with a feedback loop to stabilise the laser energy to within $\pm 80$~\si{\nano\electronvolt} of the set point.
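The logic of this software lock is simple enough to summarize in a few lines (a minimal sketch; the wavemeter and piezo interfaces shown are hypothetical placeholders, and the gain value is illustrative rather than the one used in practice):
\begin{verbatim}
import time

GAIN_V_PER_ueV = 0.25  # illustrative integral gain (piezo V per ueV)

def stabilise(read_energy_ueV, set_piezo_V, setpoint_ueV,
              v=50.0, v_min=0.0, v_max=100.0):
    """Hold the excitation energy at setpoint_ueV by stepping
    the fine-tuning piezo stack after each wavemeter reading."""
    while True:
        error = setpoint_ueV - read_energy_ueV()  # wavemeter, ~4 Hz
        v = min(max(v + GAIN_V_PER_ueV * error, v_min), v_max)
        set_piezo_V(v)
        time.sleep(0.25)
\end{verbatim}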
The main fiber-pigtailed output ($\sim 80$~\si{\milli\watt}) was passed through an in-line fiber optical isolator (Innolume GmbH PMOI-1150-SS) and then chopped into nearly transform-limited pulses by an electro-optic intensity modulator (EOM; EOSpace lithium niobate Mach-Zehnder interferometer) driven by a pulse generator (Hewlett Packard 8131A). The pulse generator operated at variable repetition rate up to 100~\si{\mega\hertz} and pulse duration down to 700~\si{\pico\second}. In order to achieve the intensities required for TPE, the pulses were amplified in a Raman fiber amplifier (RFA; MPB Communications Inc RFL-P-5-1143-SF-NS) up to a maximum of 5~\si{\watt} average power. Provided the seed laser delivered $> 0.3$~\si{\milli\watt} of power, the average output power of the RFA was relatively insensitive to the duty cycle imposed by the EOM. A pulse duty cycle of 1\% therefore yielded peak amplified powers of several hundred watts~\cite{Dingjan2006}.
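The peak power available for two-photon excitation then follows from simple arithmetic (assuming, for the estimate, a flat-topped pulse, so that the peak power is the average power divided by the duty cycle):
\begin{verbatim}
P_avg = 5.0    # W, maximum average RFA output
duty = 0.01    # 1% pulse duty cycle
P_peak = P_avg / duty
print(P_peak)  # 500 W, i.e. several hundred watts
\end{verbatim}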
The output of the amplifier passed through a free-space acousto-optic modulator (AOM; Isomet M1080-T80L-1.5) that served two purposes. For most experiments it was used to stabilize the intensity of excitation light on the sample, as part of a servo loop based on the signal from a photodiode monitoring a pickoff near the sample. However, for experiments requiring a low duty cycle, for example to measure the lifetime of long-lived states, the minimum average seed power required by the RFA was a limitation. To overcome this, the AOM was also used as a pulse picker, selecting individual pulses or groups of pulses from the pulse train generated by the EOM.
Overall, the laser system was able to provide an average optical power of up to 5~\si{\watt}, with arbitrary pulse duration (CW to 700~\si{\pico\second}) and repetition rate (single shot to 100~\si{\mega\hertz}). The laser could be tuned continuously and automatically to any point in the Rydberg excitonic series. If required, the output light could also be frequency doubled using a temperature stabilized periodically poled lithium niobate crystal (ppLN; Covesion MSHG1120-1.0-20) to address the excitons in one-photon excitation.
After the sample, the detection ensemble (Fig.\,\ref{Fig:Setup Figure}) was designed to provide complementary scales of spectroscopic resolution, as well as temporal and spatial sensitivity. For coarse spectral measurements, the signal was passed through a monochromator (Horiba iHR550) with a 2400 lines~\si{\per\milli\meter} grating giving a resolution of 250~\si{\micro\electronvolt}. Finer spectral details were resolved by a solid, temperature-tuned Fabry-P\'{e}rot etalon (FPE; Light Machinery OP-7423-1686-1), which had a free spectral range of $248.6 \pm 0.8$~\si{\micro\electronvolt} and a finesse of $44.5 \pm 0.7$. Light dispersed by the monochromator or filtered by the FPE was detected by either a CMOS camera (Basler AG ace acA2040-120um) or a single photon avalanche detector (SPAD; Micro Photon Devices Srl PDM grade D). When spectral resolution of the emission was not required, the SHG light was separated from other inelastic emission channels using a bandpass filter (Semrock FF01-580/14-25), and detected using the SPAD. A photon counting card (PicoQuant TimeHarp 260), synchronised with the EOM pulse generator, recorded the relative arrival time of signal photons. Temporal resolution was ultimately limited by the timing jitter of both the pulse generator driving the EOM and the SPAD, which was $< 50$~\si{\pico\second} in both cases.
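The corresponding etalon resolution follows directly from these parameters:
\begin{equation*}
\Delta E_{\mathrm{FWHM}} = \frac{\mathrm{FSR}}{\mathcal{F}} \approx \frac{248.6~\mu\mathrm{eV}}{44.5} \approx 5.6~\mu\mathrm{eV},
\end{equation*}
roughly a factor of 45 finer than the monochromator.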
The setup also included apparatus for high-resolution imaging of the sample surface. The sample was illuminated in a K\"{o}hler configuration~\cite{Koehler1893} by light from a yellow LED (Luxeon LXZ1-PX01). The rear aspheric served as a condenser lens in this configuration and the front aspheric served as the imaging objective. Additional lenses external to the cryostat completed the microscope. Using the Basler CMOS as a detector, we achieved a resolution of approximately 2~\si{\micro\meter} per pixel over a field of view of $200 \times 200$~\si{\micro\meter}.
\section{Results}
\subsection{Time-resolved measurements and the {two-photon} emission spectrum}
A typical spectrum of the light emitted by the sample under two-photon excitation is shown in Fig.\,\ref{Fig:EmissionFigure}. Here the excitation laser was two-photon resonant with the 5D exciton level at 2.1686~\si{\electronvolt}, indicated by the red arrow. There are three dominant features. As in previous studies~\cite{Compaan1972,Petroff1975,Ito1997,Snoke1990,OHara1999,Jang2006,SOLACHECARRANCO2009,Li2013,FRAZER2015,Kitamura2017,Takahata2018}, photoluminescence is observed at the energy of the 1S ortho-exciton recombination line (A) and its $\Gamma^{-}_{3}$ phonon replica (B). These peaks reflect the dominant decay channel of Rydberg excitons, which is phonon-mediated relaxation to the lowest 1S excitonic level, with subsequent radiative decay to the valence band. The strong feature at the energy of the 5D state (C) is the second harmonic of the laser light. In Fig.~\ref{Fig:EmissionFigure} (c), this light was isolated with the bandpass filter that spanned the entire Rydberg exciton series (orange shaded region), before being sent through the Fabry-P\'{e}rot etalon. Scanning the etalon revealed a spectral profile that is equivalent to that obtained from light frequency doubled in the ppLN crystal, with both measurements limited by the lineshape of the etalon. The lack of other features in Fig.\,\ref{Fig:EmissionFigure} (c) implies that we do not observe luminescence from the direct radiative decay of Rydberg states~\cite{Takahata2018,Kitamura2017,Steinhauer2020} within the sensitivity of our instrument.
\begin{figure}
\centering
\includegraphics[width=85mm]{Figures/EmissionFigurev3.pdf}
\caption{Emission spectrum under TPE resonant with the 5D state. The excitation pulses were of 1~\si{\nano\second} duration with a peak power of 8.8~\si{\watt} and a repetition rate of 10~\si{\mega\hertz}. The exposure time was 8 minutes. Three principal features were observed in (a): \textit{A}: direct quadrupole luminescence from the 1S ortho-exciton; \textit{B}: $\Gamma^{-}_{3}$ phonon assisted emission from the 1S state; and \textit{C}: the second harmonic of the laser. The red arrow indicates the energy of the 5D state. The orange box represents the 78~\si{\milli\electronvolt} spectral window covered by a bandpass filter used to isolate emission directly from the Rydberg series. (b) Magnified view of low-intensity light emission around 1.99~\si{\electronvolt}, showing narrow resonances (\textit{D}) associated with bound exciton states. (c) Spectrum of the emitted light resolved by the FPE after the bandpass filter (blue data points) compared to that of SHG generated by the ppLN crystal (purple shaded region).}
\label{Fig:EmissionFigure}
\end{figure}
Close examination of the spectrum in Fig.~\ref{Fig:EmissionFigure}(b) also reveals additional sharp peaks (D) at 1.99~\si{\electronvolt}. We note that the relative intensities of A, B and D are strongly spatially dependent. To gain further insight, we exploit our pulsed excitation and time-resolved detection system. By using both the EOM and the pulse picker, we probe delays that range over six orders of magnitude from nanoseconds to milliseconds. Figure~\ref{Fig:TRHistograms}\,(a) shows a histogram of the photon counts from all the features in Fig.\,\ref{Fig:EmissionFigure} following a 50~\si{\nano\second} excitation pulse. Intense emission coinciding with the excitation pulse is apparent in the sharp peak at $t = 0$. A steady exponential decay in photon counts follows with a lifetime of $641 \pm 7$~\si{\micro\second}. A sample location was chosen where this feature was prominent enough to measure accurately. We note that this measured lifetime is remarkably long. For comparison, the longest lifetime reported for the lowest-lying 1S para-exciton state is 13~\si{\micro\second}~\cite{Mysyrowicz1979}. Inserting the bandpass filter suppressed this long-lived component, as shown in Fig.\,\ref{Fig:TRHistograms}\,(b), where the temporal profile of the remaining light traces that of the laser excitation pulse as expected for SHG. We therefore tentatively attribute this lifetime to the narrow resonances observed in Fig.~\ref{Fig:EmissionFigure}(b), since components (A) and (B), which are also blocked by the filter, are associated with the 1S ortho-exciton and are short-lived. Previous detailed studies of the PL spectrum in this energy region revealed the existence of narrow resonances associated with bound excitons~\cite{Jang2006}, which are distinct from the phonon replicas associated with free ortho- and para-excitons~\cite{Takahata2018}. Since these resonances were present in naturally occurring samples, but absent in synthetic material, it was concluded~\cite{Jang2006} that they are associated with unknown metallic impurities. The free 1S para-exciton is predicted to have a radiative lifetime of 7~\si{\milli\second} in pure samples~\cite{OHara1999}; our lifetime measurements would therefore be consistent with exciton binding resulting in an enhanced radiative decay rate, as discussed in~\cite{Jang2006}. Lastly, we note that the broadband emission underneath the bound exciton peaks observed by~\cite{Jang2006} also appears to be present in Fig.~\ref{Fig:EmissionFigure}(b).
The results presented in the following sections were obtained with the bandpass filter in place to isolate only the SHG signal; the PL from the 1S exciton and its phonon replicas as well as the long-lived states were filtered out.
\begin{figure}
\centering
\includegraphics[width=85mm]{Figures/TRFigureForPaper2.pdf}
\caption{Histograms of photon counts following TPE. (a) shows the histogram obtained following a 50~\si{\nano\second} excitation pulse resonant with the 9S state at a repetition rate of 1 \si{\kilo\hertz} and peak pulse power of 600~\si{\milli\watt}. The bandpass filter was removed such that all features seen in Fig.\,\ref{Fig:EmissionFigure} contribute. The photon counting card used 51.2~\si{\nano\second} bins, re-binned into 2.5~\si{\micro\second} bins in this figure. The dark blue data points and the light blue shading represent the mean and range of counts in the new bins, respectively. The red solid line is an exponential fit with a constant offset. The intense spike at $t = 0$ coincides with the laser pulse. (b) shows a higher resolution histogram obtained with the bandpass filter shown in Fig.\,\ref{Fig:EmissionFigure} in place. The original bins here were 25~\si{\pico\second}, re-binned into 670~\si{\pico\second} bins. The shaded region indicates the temporal profile of the excitation pulse, whose peak power was increased to 1.2~\si{\watt} in order to fill the shorter time bins more quickly. Note that the apparent decay in intensity on the top of this pulse profile is a signature of pile-up~\cite{Davies1970} due to the 77~\si{\nano\second} dead time of the SPAD.}
\label{Fig:TRHistograms}
\end{figure}
\subsection{High resolution SHG spectroscopy}
\begin{figure*}
\centering
\includegraphics[width=170mm]{Figures/SpectralFigureLorentzians.pdf}
\caption{(a) A representative spectrum of the intensity of SHG emitted by cuprite under TPE for $n = 5$ up to the band edge. (b) shows a zoomed view of the high energy region of (a). The black data points and their error bars are, respectively, the mean value and standard error of a bin containing 4 measurements at each spectral point. The error bars are mostly smaller than the data points so a typical data point in (b) is shown with its error bar expanded by a factor of 30. The red line is a global fit composed of the incoherent sum of Lorentzian lineshapes. The shaded peaks show the contribution from each angular momentum series. The lower panels of (a) and (b) show the residuals of the fit normalised to the associated errors. (b) also shows high $n$ peaks of the P series measured using one-photon absorption in the same sample. The peaks are annotated with their principal quantum numbers and the vertical grey lines indicate the corresponding point of zero quantum defect. The excitation light was 50~\si{\nano\second} pulses of 160~\si{\milli\watt} peak power at a repetition rate of 5~\si{\mega\hertz}. The laser was tuned stepwise through the region and the emission light collected for 1~\si{\second} four times per step through a bandpass filter of width 78~\si{\milli\electronvolt} centred at 2.140~\si{\electronvolt}.}
\label{Fig:SpectralFigure}
\end{figure*}
To obtain a high-resolution exciton spectrum the laser was scanned under computer control across the region of interest. For these experiments, the pulse duration was set to 50~\si{\nano\second}, and the repetition rate to 5~\si{\mega\hertz}. The step size was varied from 1.2~\si{\micro\electronvolt}, to capture fine detail at high $n$, up to 3.6~\si{\micro\electronvolt} at lower $n$. At each step, the wavelength was measured and the SHG signal was accumulated on the SPAD four times for 1~\si{\second} each time. Fourteen identical scans were recorded sequentially under the same conditions to produce an estimate of the experimental uncertainties. An example spectrum, acquired in a single mode-hop free scan across the region $n=5 \rightarrow \infty$ spanning 2.37~\si{\milli\electronvolt} (574~\si{\giga\hertz}) is shown in Fig.\,\ref{Fig:SpectralFigure} (a). A complex series of overlapping resonances extending to $n\approx 12$ is observed. It is apparent that odd parity P states appear, in addition to the expected S and D resonances. The amplitude of the peaks has an unusual dependence on $n$, first increasing to a maximum at $n = 7$, before decreasing abruptly to zero at higher $n$.
To gain further insight, we fit the entire spectrum with a series of resonances labelled by their principal quantum number $n$ and angular momentum $0\leq L \leq 3$, labelled S, P, D, F respectively. As discussed in previous works~\cite{Uihlein1981,Schweiner2017}, these angular momentum labels are only approximate; we discuss the detailed assignment of the lines in terms of the irreducible symmetry representations later.
The measured SHG intensity is given by
\begin{equation}
I_{\mathrm{SHG}} = A I_{\mathrm{IN}}^2 \left| \sum_{n,L} \chi^{(2)}_{n,L}\right|^2.
\label{Eqn:CoSum}
\end{equation}
Here $A$ is a scaling factor that includes the detection efficiency and sample thickness, $I_{\mathrm{IN}}$ is the intensity of the IR excitation light, and
\begin{equation}
\chi^{(2)}_{n,L} = \frac{S_{n,L}}{i \mathit{\Gamma}_{n,L} + (E-E_{n,L})}
\end{equation}
is the contribution to the susceptibility associated with each resonance. The quantity $S_{n,L}$ is a complex amplitude that contains the matrix elements for excitation and emission.
Fitting the spectrum with equation~(\ref{Eqn:CoSum}) is an example of an inverse problem; since the intensity is a scalar, there is not enough information to unambiguously determine the real and imaginary parts of the amplitudes $S_{n,L}$~\cite{Sehmi2017}. Indeed, it has been shown~\cite{Busson2009} that for a spectrum containing $N$ resonances, there are $2^N$ sets of parameters that produce identical spectra. The impact of this effect on the parameters (positions, widths) extracted from the fit is considered in detail in Appendix~\ref{App:Fitting}. To avoid this ambiguity, we use a sum of symmetric Lorentzian profiles
\begin{equation}
I_{\mathrm{SHG}}= A I_\mathrm{\mathrm{IN}}^2 \sum_{n,L} \frac{\bar{S}_{n,L}^{2}}{\mathit{\Gamma}_{n,L}^{2} + (E - E_{n,L})^{2}}.
\label{Eqn:IncoSum}
\end{equation}
We note that this incoherent sum neglects the cross terms in equation (\ref{Eqn:CoSum}), and that the resulting amplitudes $\bar{S}_{n,L}$ are real.
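As an illustration, a least-squares fit of this form can be sketched in a few lines of Python; the starting values below are purely illustrative, the measured spectrum is assumed to be stored in arrays \texttt{E}, \texttt{I\_SHG} and \texttt{sigma\_I}, and the full analysis additionally applies the weighting scheme described next.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentz_sum(E, *params):
    """Incoherent Lorentzian sum: a global scale A followed by
    (S, Gamma, E0) triples, one per resonance."""
    A, triples = params[0], np.reshape(params[1:], (-1, 3))
    I = np.zeros_like(E)
    for S, G, E0 in triples:
        I += S**2 / (G**2 + (E - E0)**2)
    return A * I

# illustrative starting guesses (amplitude, width / eV, position / eV)
p0 = [1.0,
      1e-3, 5e-6, 2.16789,   # e.g. 5S
      1e-3, 5e-6, 2.16929]   # e.g. 6S ... one triple per n, L
popt, pcov = curve_fit(lorentz_sum, E, I_SHG, p0=p0, sigma=sigma_I)
\end{verbatim}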
As shown in Fig.\,\ref{Fig:SpectralFigure}, the fit based on this Lorentzian lineshape is in very good agreement with the measured data across the full energy range. The experimental uncertainties were weighted by the proximity of the data to the band edge, to reduce the impact of the deviations at low $n$ that dominate the residuals. These deviations are due to a small asymmetry of the S peaks with $n<8$, which is not captured by the model. This asymmetry may be due to residual effects of the interference terms present in equation~(\ref{Eqn:CoSum}) but neglected in equation~(\ref{Eqn:IncoSum}). From this fit, robust values were extracted for the parameters $\bar{S}_{n,L}$, $\mathit{\Gamma}_{n,L}$ and $E_{n,L}$. Uncertainties on the parameters were obtained from the standard error of the distribution of values produced by fitting each of the 14 spectral scans individually.
The results for each value of $n$ and $L$ are shown in Figs.\,\ref{Fig:PeakLocationAnalysis} - \ref{Fig:PeakAreaAnalysis}. The resonance energies $E_{n,L}$ (Fig.\,\ref{Fig:PeakLocationAnalysis}) can be described by the Rydberg formula
\begin{equation}
E_{n,L} = E_\mathrm{g}-\frac{R_{y}}{(n-\mathit{\delta}_{L})^{2}},
\label{Eqn:energies}
\end{equation}
where $E_\mathrm{g}$ is the bandgap, $R_{y}$ is the excitonic Rydberg constant and $\mathit{\delta}_{L}$ is the quantum defect~\cite{Kazimierczuk2014,Krueger2020a,Schoene2016}. With the bandgap energy and Rydberg constant constrained to be the same for all four series, fits using equation~(\ref{Eqn:energies}) are in very good agreement with the resonance positions. These fits yielded $E_\mathrm{g} = 2.1720780$~\si{\electronvolt} $\pm~0.4$~\si{\micro\electronvolt} and $R_{y} = 82.40 \pm 0.05$~\si{\milli\electronvolt}, and the corresponding $\delta_{L}$ are listed in Table~\ref{Tab:QDefects}. The values for the bandgap energy and Rydberg constant agree well with those obtained from recent studies of the P odd-parity excitons \cite{Kazimierczuk2014,Versteegh2021}.
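As an explicit example, these parameters place the 5S resonance at a binding energy of $R_{y}/(5-\delta_{\mathrm{S}})^{2} = 82.40/(5-0.564)^{2} \approx 4.19$~\si{\milli\electronvolt}, i.e. at $E_{5,\mathrm{S}} \approx 2.16789$~\si{\electronvolt}.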
\begin{figure}
\centering
\includegraphics[width=85mm]{Figures/PeakLocationAnalysisLorentzOnly.pdf}
\caption{Analysis of the binding energies of the exciton peaks fitted in Figure~\ref{Fig:SpectralFigure}. One representative error bar has been expanded by a factor of 100 for ease of viewing. Global values for the bandgap (2.1720780~\si{\electronvolt} $\pm$ 0.4~\si{\micro\electronvolt}) and Rydberg energy (82.40 $\pm$ 0.05~\si{\milli\electronvolt}) were extracted from the fit of all the angular momenta series simultaneously. These parameters then give rise to the quantum defect for each angular momentum series, $\delta_{L}$.}
\label{Fig:PeakLocationAnalysis}
\end{figure}
Considering the quantum defects, previous studies by Kr\"{u}ger and Sch\"{o}ne~\cite{Krueger2020a,Schoene2016} measured $\delta_{\mathrm{S}} \approx 0.5, \delta_{\mathrm{P}} \approx 0.3, \delta_{\mathrm{D}} \approx 0.2$, and $\delta_{\mathrm{F}} \approx 0.1$. Our values are in reasonable agreement with these previous measurements, except for $\delta_{\mathrm{D}}$, for which we obtain a smaller value.
\begin{table}
\centering
\begin{tabular}{c c c c c}
\hline \hline
\vspace{-9pt} \\
$L$ & \hspace{10pt} & Symmetry & \hspace{10pt} & $\delta_{L}$ \\
\hline
\vspace{-9pt} \\
S & & $\mathrm{\Gamma}_{5}^{+}$ & & 0.564 $\pm$ 0.002\\
P & & $\mathrm{\Gamma}_{8}^{-}$ & & 0.310 $\pm$ 0.003\\
D & & $\mathrm{\Gamma}_{5}^{+}$ & & 0.085 $\pm$ 0.003\\
F & & $\mathrm{\Gamma}_{8}^{-}$ & & 0.185 $\pm$ 0.007\\
\hline
\hline
\end{tabular}
\caption{Quantum defects for each angular momentum series extracted from the global fit in Figure~\ref{Fig:SpectralFigure} and plotted in Figure~\ref{Fig:PeakLocationAnalysis}. The symmetry representations were obtained from Schweiner~\textit{et al.}~\cite{Schweiner2017}.}
\label{Tab:QDefects}
\end{table}
At this point it is useful to consider the line assignments in Table~\ref{Tab:QDefects} in more detail. Details of the possible irreducible symmetry representations that may be associated with each angular momentum label $L$ are provided by Schweiner~\textit{et al.}~\cite{Schweiner2017}.
The $n$D states are shown to consist of a multiplet of states. The $\mathrm{\Gamma}_{3}^{+}, \mathrm{\Gamma}_{4}^{+}$ and the lower energy $\mathrm{\Gamma}_{5}^{+}$ state are close together, split by $< 50$~\si{\micro\electronvolt} at $n = 5$, while the remaining $\mathrm{\Gamma}_{5}^{+}$ state lies 190~\si{\micro\electronvolt} above its low energy counterpart at $n = 5$. This places it above the F multiplet in energy and makes it the highest energy state at a given $n$~\cite{Heckoetter2021}. The value of $\delta_{\mathrm{D}} \approx 0.2$ reported by Kr\"{u}ger and Sch\"{o}ne~\cite{Krueger2020a,Schoene2016} was obtained by fitting the centre of mass of the multiplet of lines observed in absorption spectroscopy in an applied electric field. In contrast, in our two-photon experiments we are only sensitive to the D states with $\mathrm{\Gamma}_{5}^{+}$ symmetry and in particular, the higher energy state, which is predicted to have a two-photon oscillator strength $\sim 100 \times$ greater than that of its lower energy counterpart~\cite{Schweiner2017}. This explains why we measure $\delta_{\mathrm{D}} < \delta_{\mathrm{F}}$ even though our values for $\delta_{\mathrm{S}}$, $\delta_{\mathrm{P}}$ and $\delta_{\mathrm{F}}$ are in reasonable agreement with~\cite{Krueger2020a,Schoene2016}.
For the odd parity states, we note that the $n$P excitons were observed under two-photon excitation in previous studies~\cite{Mund2018}, and that the process has been shown to be weakly allowed~\cite{Mund2019}. Since the location of the P states is well-known from absorption studies, their assignment is straightforward. The assignment of the weaker F series is more problematic. We cannot rule out that what we have assigned as the F series is in fact the lower energy $\mathrm{\Gamma}_{5}^{+}$ D excitons discussed above. The two states are very close in energy~\cite{Thewes2015,Schweiner2017,Heckoetter2021} and their oscillator strengths under TPE are both expected to be very small compared to the S and D states, which is borne out in the spectrum. This assignment remains uncertain without performing experiments under external electric fields.
Next we consider the measured resonance widths, shown in Fig.\,\ref{Fig:PeakWidthAnalysis}. For comparison, we show the predicted $n^{-3}$ trend for non-radiative decay~\cite{Kazimierczuk2014}. For $n>8$, all the series show significant extra broadening, with the width almost saturating rather than decreasing with $n$. Similar excess broadening for $n>8$ was also observed in one-photon absorption spectroscopy of the $n$P states, even in high quality samples at $< 1$~\si{\kelvin} temperature~\cite{Heckoetter2020}. However, the effect appears much more severe in our spectra, particularly for the S states. The consequence of this saturation is that the peaks begin to merge as they approach each other at high $n$, as is clearly visible in the spectrum.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/PeakWidthAnalysisLorentzOnly.pdf}
\caption{The fitted widths of the excitonic peaks in Figure~\ref{Fig:SpectralFigure} as a function of $n$. One representative error bar has been expanded by a factor of 20 for ease of viewing. The red lines represent an $n^{-3}$ trend.}
\label{Fig:PeakWidthAnalysis}
\end{figure}
Lastly, we consider the peak area, which is a measure of the strength of the SHG signal associated with each resonance. The area under each peak shown in Fig.~\ref{Fig:SpectralFigure} was obtained by numerical integration of the fit function. The results are shown in Fig.~\ref{Fig:PeakAreaAnalysis}, where they are compared to an $n^{-3}$ scaling.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/PeakAreaAnalysisLorentzOnly.pdf}
\caption{The fitted areas of the excitonic peaks in Figure~\ref{Fig:SpectralFigure} as a function of $n$. One representative error bar has been expanded by a factor of 20 for ease of viewing. The red lines are trends fitted to the data ($n > 6$ for S and D) weighted by the corresponding error bars. The power index $m_{L}$ is indicated for each series.}
\label{Fig:PeakAreaAnalysis}
\end{figure}
For SHG at the even parity states, the amplitude $S_{n,L} \propto Q^{nl\rightarrow\mathrm{VB}}M^{\mathrm{VB}\rightarrow nl}$~\cite{Gallagher2021}. Here $Q^{nl\rightarrow\mathrm{VB}}$ is the quadrupole matrix element between the Rydberg state and the valence band, which is predicted to scale as $|Q^{nl\rightarrow\mathrm{VB}}|^2 \propto n^{-3}$~\cite{Schweiner2017,Rommel2021}. The quantity $M^{\mathrm{VB}\rightarrow nl}$ is an effective matrix element for the two-photon excitation. It contains products of dipole matrix elements $D^{\mathrm{VB}\rightarrow\alpha}D^{\alpha\rightarrow nl}$, where $D^{\mathrm{VB}\rightarrow\alpha}$ connects the valence band to the intermediate state $\alpha$, and $D^{\alpha\rightarrow nl}$ connects $\alpha$ to the even-parity excitonic Rydberg state $nl$. Since the dominant intermediate states $\alpha$ are far-off-resonant, compact states belonging to the blue and violet series, we predict that $D^{\mathrm{VB}\rightarrow\alpha}$ is independent of $n$, while the second factor scales as $|D^{\alpha\rightarrow nl}|^2 \propto n^{-3}$. Taking $\mathit{\Gamma} \propto n^{-3}$ we obtain an overall $n^{-3}$ scaling for the peak area, while if the width is constant the predicted scaling is $n^{-6}$.
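Explicitly, these two limits follow because the integrated area of each Lorentzian in equation~(\ref{Eqn:IncoSum}) is
\begin{equation*}
\int_{-\infty}^{\infty} \frac{\bar{S}_{n,L}^{2}}{\mathit{\Gamma}_{n,L}^{2} + (E - E_{n,L})^{2}}\,\mathrm{d}E = \frac{\pi \bar{S}_{n,L}^{2}}{\mathit{\Gamma}_{n,L}},
\end{equation*}
and the scalings above combine to give $\bar{S}_{n,L}^{2} \propto n^{-3} \times n^{-3} = n^{-6}$.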
The trends visible in Fig.\,\ref{Fig:PeakAreaAnalysis} are surprising. The data for the {\it even} parity states are in strong disagreement with the predicted scaling, with the area initially increasing with $n$, before dropping much faster than predicted at high $n$. Above $n=12$ no states can be resolved, and the SHG signal goes to zero between 2.1715 and 2.1717~\si{\electronvolt}. There is a corresponding bump in the residuals in Fig.~\ref{Fig:SpectralFigure} in this region, but no discernible lines.
At low $n$, the behaviour may be qualitatively understood as a suppression of the amplitude of the lowest $n$ peaks, which is predicted to occur due to state mixing with the lowest member (1S) of the ``green'' exciton series~\cite{Schweiner2017}. A faster than expected reduction of the oscillator strength for $n>17$ was previously observed in one-photon absorption experiments on the $n$P series in high-quality samples. We compare the behaviour of the even parity states to that of the $n$P states, observed both in SHG (Fig.\,\ref{Fig:PeakAreaAnalysis}) and in one-photon absorption in the same sample, obtained by frequency doubling the IR laser in the ppLN crystal (Fig.\,\ref{Fig:Setup Figure}). In both cases the scaling with $n$ appears more gradual than for the even parity states, particularly in the one-photon absorption measurements, where states up to $n=15$ are visible.
The apparently abrupt cut-off of the SHG spectrum of the even parity states has important implications for the utility of SHG and the even parity states in quantum optics applications. The factors governing the visibility of high-$n$ Rydberg excitons have been extensively studied in absorption experiments on the $n$P series~\cite{Lynch2021,Heckoetter2018,Heckoetter2018b,Versteegh2021}, and include Rydberg blockade~\cite{Kazimierczuk2014}, thermally or optically excited free carriers~\cite{Heckoetter2018,Semkat2019}, charged defects~\cite{Lynch2021,Krueger2020,Heckotter2020}, and local strain and disorder~\cite{Heckoetter2018,Versteegh2021}. Therefore, to gain further insight, we carried out additional experiments on the power dependence and spatial variation of the SHG spectrum.
\subsection{Power dependence}
Power-dependent measurements of the high-$n$ region of the SHG spectrum are shown in Fig.~\ref{Fig:PowerDependence}. By normalising the intensity of the 8S peak, we remove the quadratic dependence of the SHG intensity on pump power, and reveal the power dependent changes in the shape of the spectrum. We observe a clear shift of the exciton resonances to lower energy with increasing power, as well as an increasing suppression of the high-$n$ peaks. As shown in~\cite{Heckoetter2018b}, this combination of changes is consistent with increasing temperature, which leads to both a shift of the bandgap and an increased density of thermally excited free carriers. In contrast, non-thermal production of free carriers (by optical excitation or exciton-exciton interactions)~\cite{Heckoetter2018, Heckoetter2018b}, has been shown to lead to a suppression, but not a shift~\cite{Semkat2019}. Using the temperature dependence of the bandgap~\cite{Ito1997}, we can convert the measured energy shifts into a local change in temperature. The result is plotted in Fig.~\ref{Fig:PowerDependence}, where we have constrained the lowest temperature to that obtained from fitting the 1S phonon replica in Fig.~\ref{Fig:EmissionFigure}. These data suggest that the IR light can lead to significant local heating of the sample, which affects the visibility of the highest $n$ excitons. However, while it is clear that there is a significant effect at the highest powers, it is not clear from the data that this effect fully accounts for the observed cut-off of the even-parity spectrum at low power.
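A sketch of this conversion is given below (in Python); here \texttt{bandgap(T)} stands in for the empirical temperature dependence of the gap from~\cite{Ito1997}, whose parametrisation we do not reproduce, and the function names are illustrative.
\begin{verbatim}
from scipy.optimize import brentq

def temperature_from_shift(delta_E, bandgap, T_ref, T_max=100.0):
    """Find the local temperature implied by a measured red shift
    delta_E (in eV) of the 8S resonance, by inverting E_g(T).
    bandgap(T) is assumed monotonically decreasing over the range."""
    target = bandgap(T_ref) - delta_E          # red shift lowers E_g
    return brentq(lambda T: bandgap(T) - target, T_ref, T_max)
\end{verbatim}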
\begin{figure}
\centering
\includegraphics[width=85mm]{Figures/TempEnergyShiftv2.pdf}
\caption{Power dependent excitation spectra. (a) Excitation spectra taken at different excitation powers. Three of the five acquired spectra are shown. Each has been normalised to the height of the 8S resonance. The spectra show a red shift with increasing power. (b) Extracted sample temperature, assuming the shift of the 8S resonance is due to the bandgap shift. (c) Zoom of the high $n$ region with the spectral shift applied such that peaks are at the same energy, to aid comparison. The logarithmic intensity scale reveals that, as the excitation power increases, the highest $n$ resonances become unresolvable and the SHG intensity vanishes more rapidly.}
\label{Fig:PowerDependence}
\end{figure}
\subsection{Spatially resolved two-photon spectroscopy}
Previous experiments have also shown that the visibility of high-$n$ excitons varies locally even in the highest quality samples, with some regions of the sample exhibiting both stronger lines and a higher maximum value of $n$ \cite{Heckoetter2020}. We have previously shown~\cite{Lynch2021} that our natural samples exhibit variation in the exciton spectrum on a length scale of tens of \si{\micro\meter}. Therefore, we exploited the spatial resolution of our SHG spectroscopy setup to measure the SHG spectrum and the SHG intensity at different locations across the sample.
Firstly, we acquired a spectrum similar to that shown in Fig.\,\ref{Fig:SpectralFigure} at a different location in the sample, 100~\si{\micro\meter} away. This individual spectrum, shown in green in Fig.\,\ref{Fig:SpatialMaps} (a), was very similar to the previous 14 spectral scans at the original location, one of which is shown in purple. The deviation in the resonance positions between the two locations was $< 10$~\si{\micro\electronvolt} for the exciton states with $n > 8$. Every peak at the second location was slightly red-shifted relative to those at the first location. This confirmed our previous absorption measurements~\cite{Lynch2021}, which showed the crystalline environment varying slightly across the sample and giving rise to subtle but measurable changes in peak energies. However, although the amplitudes and widths of the peaks changed dramatically in the spectrum at the new location, the rapid drop to zero of the SHG signal above $n=12$ was still present.
\begin{figure}
\centering
\includegraphics{Figures/MapImagePaperFigurev2.pdf}
\caption{Spatial variation in the sample. (a) shows two SHG spectra acquired at different locations separated by 100~\si{\micro\meter}. The sample was excited by 50~\si{\nano\second} excitation pulses of 160~\si{\milli\watt} peak power at a repetition rate of 5~\si{\mega\hertz}. (b) Image of the sample acquired with the CMOS camera under K\"{o}hler illumination from the LED; (c) map of the SHG emission acquired with the SPAD, with the laser tuned to resonance with the 9D excitonic state and the emission passed through the Rydberg bandpass filter. (b) and (c) show the same region of the sample. (d) shows histograms of the photon counts in (c). The blue data correspond to the whole image and the red data to the small region bounded by the red circle in (c).}
\label{Fig:SpatialMaps}
\end{figure}
The sample position was then rastered on a 2~\si{\micro\meter} grid by the 3-axis piezo positioning stage to create a spatial map. A 2D map of the full SHG spectrum across the whole of the sample surface is currently beyond our technical capabilities, due to long-term instabilities in the laser scanning. Therefore, to cover more of the sample, we fixed the excitation energy to 2.17102~\si{\electronvolt}, which corresponds to the energy of the 9D state, and measured the SHG intensity as a function of position. The resulting data are shown in Fig.\,\ref{Fig:SpatialMaps} (c) along with a visible-light image of the surface of the sample (b). The visible light image reveals the presence of localised voids and inclusions, as well as interference fringes that suggest that the sample is reasonably flat over longer length scales. As shown in Fig.\,\ref{Fig:SpatialMaps} (c) and (d), the SHG intensity varies by over an order of magnitude across the sample. Some of the regions where the SHG is less intense can be correlated with features in (b). The extremely bright spots at the lower left come from the jagged edge of the crystal and are not included in the histogram. To reduce the effect of these localised imperfections, we zoom in on the 50~\si{\micro\meter} diameter region bounded by the red circle in (c), which appears quite uniform in both Fig.\,\ref{Fig:SpatialMaps} (b) and Fig.\,\ref{Fig:SpatialMaps} (c). Here the histogram was much narrower, although the distribution was still broader than can be accounted for by the measured variation in the 9D line position. In our previous measurements of the spatial variation of the absorption spectrum~\cite{Lynch2021}, the peak energy at $n = 6$ in gemstone material of a similar quality varied by $< 10$~\si{\micro\electronvolt} over a typical 50~\si{\micro\meter} region. The oscillator strength did not change substantially. Inspecting Fig.\,\ref{Fig:SpectralFigure}, the corresponding noise in count rate that a peak shift of 10~\si{\micro\electronvolt} would introduce to the SHG signal with the laser tuned near 9D would be on the order of 10\%. The distribution of counts indicated by the red histogram in Fig.\,\ref{Fig:SpatialMaps} (d) extends substantially more than 10\% from the mode. Taken together, these results suggest that the observed fluctuations in SHG intensity arise primarily from a spatial variation in the SHG susceptibility, rather than merely a shift in the energy of the exciton. Comparable spatial fluctuations were observed previously in SHG arising from the 1S exciton~\cite{Mund2019}, where the authors showed that strain modifies the local SHG selection rules as well as giving rise to an energy shift. We conclude that SHG spectroscopy is more sensitive to the local sample environment than absorption spectroscopy. High resolution imaging is therefore an essential tool to identify regions of good crystal quality as well as sites of potential functional microstructures~\cite{Krueger2018,Konzelmann2019,Steinhauer2020}.
Concerning the disappearance of high-$n$ states, we note that the same rapid drop-off is observed in both locations probed in Fig.\,\ref{Fig:SpatialMaps} (a), meaning that this behaviour must originate from sample fluctuations on a length scale of $<50$~\si{\micro\meter}. While inhomogeneous broadening due to strain could cause the high $n$ states to be unresolvable, broadening on its own would not cause the observed reduction in oscillator strength. However, if higher $n$ states are more susceptible to strain-induced changes in the selection rules, then local fluctuations in the crystal quality could lead to a more rapid suppression of the high-$n$ peaks in second harmonic generation.
\section{Discussion}
We have shown that the combination of high-intensity nanosecond pulses and time-resolved single photon detection is a powerful way to study second harmonic generation and related processes in cuprous oxide. In particular, time-resolved measurements revealed the existence of an exceptionally long radiative lifetime, which we associate with in-gap resonances at 1.99~\si{\electronvolt}. While bound excitons had been previously observed at this energy~\cite{Jang2006}, this appears to be the first measurement of their radiative lifetime. Future experiments could confirm the assignment of individual resonances to ortho- and para-bound states made in~\cite{Jang2006} by performing spectrally resolved lifetime measurements across the energy range associated with these narrow resonances in Fig.\,\ref{Fig:EmissionFigure}.
Turning to the main results on second harmonic generation, we note that compared to recent experiments~\cite{Mund2018}, the use of longer bandwidth-limited pulses has enabled direct measurements of the SHG lineshapes without the requirement for a spectrometer. The result is a significant improvement in resolution, and an extension of the observed upper limit for even parity excitons to $n=12$. We find that the observed spectrum is in quantitative agreement with a model based on a sum of Lorentzian lineshapes, with possible deviations due to interference terms~\cite{Gruenwald2016} restricted to low-$n$ where individual resonances are well resolved.
Compared to studies of the odd-parity exciton spectrum using one-photon absorption, we find that the spectrum becomes very crowded at high $n$, with many overlapping resonances. This is partly because there are two strongly allowed even-parity series, in contrast to the odd-parity case where the spectrum is dominated by the P states only. However, there is also a strong contribution from the odd parity states. As observed in absorption spectroscopy, the resonances broaden faster than the expected $n^{-3}$ trend for $n > 8$.
A striking feature of the SHG spectrum is the extremely rapid decrease in the intensity of the even-parity peaks for $n>7$, which we do not observe in absorption spectroscopy in the same sample. The observed spatial inhomogeneity of the coupling strength may play a role, as the data are averaged through the thickness of the sample. However, while this effect may wash out the peaks, it does not explain why the intensity drops to zero rather than a finite value. Instead our measurements of the power dependence strongly suggest that local heating plays an important role. Empirically, we observe that the amount of heating is strongly dependent on the purity of the sample; in samples with a significant concentration of copper vacancies~\cite{Lynch2021}, the heating from below-bandgap absorption of the IR pump light can be sufficient to preclude the observation of SHG. We must also consider that our back-scattered collection geometry may affect the high-$n$ end of the spectrum. As the signal light must first pass through the sample before it can be collected, the increasing optical depth near the band edge will suppress that portion of the spectrum to some degree. A forward collection geometry may be advantageous in reducing the path length of the signal light through the sample.
Regarding the observation of higher principal quantum numbers, it appears challenging to use SHG to match the $n>25$ observed in absorption spectroscopy of the odd-parity excitons~\cite{Kazimierczuk2014,Versteegh2021}. Nevertheless, our results show there is clear scope for improvement. Previous studies have suggested that charged impurities set the ultimate limit on the highest observable $n$, as well as contributing a background electric field that causes the anomalous broadening of the resonances~\cite{Heckoetter2020a,Semkat2019,Krueger2020}. Using higher-quality samples with strain-free mounting and careful surface treatment thus appears essential~\cite{Versteegh2021}. Better control of the input polarization would also enable us to fully exploit the selection rules to reduce the intensity of the odd-parity peaks and maximise the SHG efficiency, which in turn would reduce the pump power and the resultant heating. Two-photon photoluminescence, where the signal is incoherent emission from the 1S state~\cite{Versteegh2021}, may be more useful than SHG for observing the highest $n$ states as the weak quadrupole emission from the Rydberg state necessary for SHG is eliminated. In future, our experiment can perform simultaneous time-resolved measurements of both PL and SHG with appropriate filters, allowing these techniques to be compared. Lastly, we note that the highly coherent, narrowband nature of the SHG produced in our experiment makes it very useful for applications, even within the currently observed range of principal quantum numbers. An example is the recent observation of the coherent modulation of the second harmonic by an applied microwave electric field~\cite{Gallagher2021}.
\section{Conclusion}
We have carried out high-resolution laser spectroscopy of the Rydberg excitonic states of cuprous oxide using second harmonic generation. We have assigned even parity excitonic states up to $n=12$ and measured their linewidth for the first time. We have also described a versatile experimental platform for studying the properties of Rydberg excitons in cuprous oxide, which has sufficient spectral, temporal and spatial resolution to study the SHG process in detail, and to reveal the existence of extremely long-lived bound exciton states.
\section{Acknowledgement}
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), United Kingdom, through research grants EP/P011470/1 and EP/P012000/1. The authors also acknowledge seedcorn funding from Durham University. LG acknowledges financial support from the UK Defence and Scientific Technology Laboratory via an EPSRC Industrial Case award. We are grateful to Ian Chaplin and Sophie Edwards (Durham University, Department of Earth Sciences) for the slicing and polishing of the samples used in this work, and to Patric Rommel for discussions of the scaling laws for quadrupole emission. Information on the data underpinning the results presented here, including how to access them, can be found in the Durham Research Online repository at [DOI].
We are concerned with \emph{Boolean-valued} functions $f:\{-1,1\}^{n}\rightarrow \{-1,1\}$ which form a subset of \emph{Boolean} functions $f:\{-1,1\}^{n}\rightarrow \mathbb{R}$. Every Boolean function $f:\{-1,1\}^{n}\rightarrow \mathbb{R}$ has a unique \emph{Fourier expansion} given by
$$f(x)=\sum_{S\subseteq [n]}\widehat{f}(S)\prod_{i\in S}x_{i},$$
where the real numbers $\widehat{f}(S)$ are the \emph{Fourier coefficients} of $f$ given by the formula
$$ \widehat{f}(S)=\textbf{E}\left[f(x)\prod_{i\in S}x_{i}\right].$$
(Here and everywhere else in the paper, the expectation $\textbf{E}[\cdot]$ is with respect to the uniform probability distribution on $\{-1,1\}^{n}$.) \emph{Parseval's identity} is the fact that $\sum_{S\subseteq [n]}\widehat{f}(S)^{2}=\textbf{E}[f(x)^{2}].$ In particular, if $f$ is Boolean-valued then this implies that $\sum_{S\subseteq [n]}\widehat{f}(S)^{2}=1$.\par
Given $f:\{-1,1\}^{n}\rightarrow \mathbb{R}$ and $i\in [n]$, we define the \emph{discrete derivative} $\partial_{i}f:\{-1,1\}^{n}\rightarrow \mathbb{R}$ by
$$\partial_{i}f(x)=\frac{f(x_{1},\cdots ,x_{i-1},1,x_{i+1},\cdots ,x_{n})-f(x_{1},\cdots ,x_{i-1},-1,x_{i+1},\cdots ,x_{n})}{2}.$$
The \emph{influence} of the $i$th coordinate on $f$ is defined by
$$\textbf{Inf}_{i}[f]=\textbf{E}[(\partial_{i}f)^{2}]=\sum_{S\ni i}\widehat{f}(S)^{2}.$$
In the particular case when $f$ is Boolean-valued, the derivative $\partial_{i}f$ is $\{-1,0,1\}$-valued. The \emph{total influence} of $f$ is
$$\textbf{Inf}[f]=\sum_{i=1}^{n}\textbf{Inf}_{i}[f]=\sum_{S\subseteq [n]}|S|\cdot \widehat{f}(S)^{2}.$$
The \emph{total degree} of $f$ is defined by
$$\text{deg}(f)=\max\{|S|:\widehat{f}(S)\neq 0\}.$$
Note that for a Boolean-valued function $f:\{-1,1\}^{n}\rightarrow \{-1,1\}$
$$\textbf{Inf}[f]=\sum_{S\subseteq [n]}|S|\cdot \widehat{f}(S)^{2}\leq \text{deg}(f)\cdot \sum_{S\subseteq [n]}\widehat{f}(S)^{2}=\text{deg}(f),$$
where we used Parseval's identity to deduce the last step.\par
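For example, $\text{Maj}_{3}(x)=\text{sgn}(x_{1}+x_{2}+x_{3})=\frac{1}{2}(x_{1}+x_{2}+x_{3})-\frac{1}{2}x_{1}x_{2}x_{3}$ has $\textbf{Inf}_{i}[\text{Maj}_{3}]=\left(\frac{1}{2}\right)^{2}+\left(\frac{1}{2}\right)^{2}=\frac{1}{2}$ for each coordinate $i$, so that $\textbf{Inf}[\text{Maj}_{3}]=\frac{3}{2}\leq 3=\text{deg}(\text{Maj}_{3})$, illustrating this bound.\par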
The linear coefficients of $f$ are the $n$ coefficients $\widehat{f}(\{1\}), \widehat{f}(\{2\}),\cdots,\widehat{f}(\{n\})$; from here on we omit the curly braces when denoting them. We are concerned with the following conjecture here:
\begin{conjecture}[Gopalan-Servedio \cite{open}]
\label{conj1}
Let $f:\{-1,1\}^{n}\rightarrow \{-1,1\}$ have total degree $d$. Let $\text{Maj}_{d}:\{-1,1\}^{d}\rightarrow \{-1,1\}$ be defined by $\text{Maj}_{d}(x)=\text{sgn}(x_{1}+x_{2}+\cdots+x_{d})$ with $\text{sgn}(0)=-1$. Then
$$\normalfont \sum_{i=1}^{n}\widehat{f}(i)\leq \sum_{j=1}^{d}\widehat{\text{Maj}}_{d}(j).$$
\end{conjecture}
\begin{remark}
The conjecture is trivial when $\text{deg}(f)=n$ since
$$\sum_{i=1}^{n}\widehat{f}(i)=2^{-n}\sum_{x\in \{-1,1\}^{n}}f(x)\cdot (x_{1}+x_{2}+\cdots+x_{n})\leq 2^{-n}\sum_{x\in \{-1,1\}^{n}}|x_{1}+x_{2}+\cdots+x_{n}|=\sum_{j=1}^{n}\widehat{\text{Maj}}_{n}(j).$$
\end{remark}
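For small $n$ the conjecture can also be checked numerically by brute force; a short Python sketch is given below, computing the linear Fourier coefficients directly from their defining expectation.
\begin{verbatim}
import itertools
import numpy as np

def linear_coeffs(f, n):
    """Linear Fourier coefficients f_hat(i) = E[f(x) x_i]."""
    pts = np.array(list(itertools.product([-1, 1], repeat=n)))
    vals = np.array([f(x) for x in pts])
    return pts.T @ vals / 2**n       # one coefficient per coordinate

def maj(x):
    return -1 if sum(x) <= 0 else 1  # sgn(0) = -1, as in the conjecture

# degree-3 example on n = 3 bits: parity has zero linear part
d, n = 3, 3
f = lambda x: x[0] * x[1] * x[2]
assert linear_coeffs(f, n).sum() <= linear_coeffs(maj, d).sum() + 1e-12
\end{verbatim}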
\section{Alternative Formalisms of the conjecture}
\begin{theorem}
The Conjecture \ref{conj1} is equivalent to each of the following inequalities
$$\normalfont \textbf{Inf}[f]-\textbf{Inf}[\text{Maj}_{d}] \leq 2 \cdot \sum_{k=1}^{n}\textbf{Pr}\left[\partial_{k} f=-1\right],$$
$$\normalfont \sum_{k=1}^{n}\left(\textbf{Pr}\left[\partial_{k} f=1\right]-\textbf{Pr}\left[\partial_{k} \text{Maj}_d=1\right]\right)\leq \sum_{j=1}^{n}\textbf{Pr}\left[\partial_{j} f=-1\right],$$
$$\normalfont 2\cdot \sum_{k=1}^{n}\textbf{Pr}\left[\partial_{k} f=1\right]\leq \textbf{Inf}[f]+\textbf{Inf}[\text{Maj}_d].$$
\end{theorem}
\begin{proof}
Let
$$ f(x)=\sum_{\substack{S\subseteq [n]}}\widehat{f}(S)\prod_{i\in S}x_{i},$$
and
$$\partial_{i}f(y):=\frac{f(y_{1},\cdots ,y_{i-1},1,y_{i},\cdots ,y_{n-1})-f(y_{1},\cdots ,y_{i-1},-1,y_{i},\cdots ,y_{n-1})}{2}$$
for all $y\in \{-1,1\}^{n-1}$; this derivative takes the values $+1,0,-1$. We have
$$ \textbf{Pr}[\partial_{i}f=0]=2^{-(n-1)}\sum_{y\in \{-1,1\}^{n-1}}(1+\partial_{i}f(y))\cdot (1-\partial_{i}f(y))=1-\textbf{Inf}_{i}[f],$$
$$\textbf{Pr}[\partial_{i}f=1]=2^{-(n-1)}\sum_{y\in \{-1,1\}^{n-1}}\frac{(\partial_{i}f(y)+1)\cdot \partial_{i}f(y)}{2}=\frac{1}{2}(\textbf{Inf}_{i}[f]+\widehat{f}(i)),$$
$$\textbf{Pr}[\partial_{i}f=-1]=2^{-(n-1)}\sum_{y\in \{-1,1\}^{n-1}}\frac{(\partial_{i}f(y)-1)\cdot \partial_{i}f(y)}{2}=\frac{1}{2}(\textbf{Inf}_{i}[f]-\widehat{f}(i)).$$
where we used the fact that $\widehat{f}(i)=\textbf{E}[\partial_{i}f].$ Each of the assertions follows easily from the above equations.
\end{proof}
\bibliographystyle{amsplain}
With applications in mobile phone unlocking, access control, payment and other security scenarios, biometric systems are widely used in our daily lives. Among the most popular biometric modalities, fingerprint and face play vital roles in numerous access control applications. However, several reported studies \cite{ramachandra2017presentation,singh2020survey} have demonstrated that the existing systems are easily spoofed by presentation attacks (PAs) made from some low-cost materials, e.g. rigid masks for face \cite{heusch2020deep} and silica gel for fingerprint \cite{9457215}. These issues raise wide concerns about the vulnerability of biometric systems incorporated in access control applications. Therefore, it is essential to detect presentation attacks in order to achieve reliable biometric applications.
To reliably address the vulnerability of biometric systems to presentation attack instruments (PAIs), several Presentation Attack Detection (PAD) methods have been proposed \cite{ramachandra2017presentation}, which can be divided into hardware- and software-based methods.
Hardware-based solutions \cite{9335499,heusch2020deep,7018027} employ special types of sensors to capture liveness characteristics. For instance, Heusch et al. \cite{heusch2020deep} adopt short wave infrared (SWIR) imaging technology to detect face PAs, which shows superior performance over similar models working on color images. A light field camera (LFC) is introduced by Raghavendra et al.~\cite{7018027} to detect PAs by exploring the variation of the focus between multiple depth images. For fingerprint, an optical coherence tomography (OCT)-based PAD system is designed by Liu et al. \cite{9335499} to obtain the depth information of fingerprints. Generally speaking, hardware-based solutions are sensor-specific, resulting in strong security but weak applicability because of usability or cost limitations.
\begin{figure*}
\centering
\includegraphics[width=.88\textwidth]{motivation}
\caption{ The groups of software based presentation attack detection. Group 1 is Input Preprocessing, Group 2 refers to Model Designing and Group 3 is Loss Function Designing. Different from existing groups, the proposed method investigates the initialization of the PA detector, which can be concluded as an independent solution, i.e. Group 4: Parameter Initializing.
}
\label{fig:motivation}
\end{figure*}
Fig. \ref{fig:motivation} illustrates the recent progress on the software based PAD algorithms, which can be categorized into three groups: 1) Input Preprocessing, 2) Model Designing and 3) Loss Function Designing. In the case of Input Preprocessing \cite{liu2018learning,8616677,9079541,wang2021rgb,guo2019improving}, Larbi et al. \cite{8616677} propose a model, namely DeepColorFASD, which adopts various color spaces (RGB, HSV, YCbCr) as input to achieve reliable PAD performance. Despite the improvement, the additional color spaces need to be processed, which leads to extra computation. Meanwhile, the contribution of different color spaces to improving the generalization against unexpected PAIs is limited.
For fingerprint anti-spoofing, a Generative Adversarial Network (GAN) based data augmentation, called Universal Material Generator (UMG), is proposed by Chugh et al. \cite{9079541}. UMG transfers the style (texture) characteristics between fingerprint images to train a robust PA detector. However, such method needs the images from the target material or target capture device to obtain the style characteristics, which might be inaccessible.
Different from Input Preprocessing, the Model Designing approaches pay more attention to specific CNN-based architectures. Many prior works adopt hand-crafted features such as LBP \cite{de2013can}, HoG \cite{yang2013face}, SIFT \cite{patel2016secure} and SURF \cite{boulkenafet2016face}, and then employ traditional classifiers such as LDA and SVM.
However, hand-crafted features are sensitive to noise and illumination, and have poor generalization performance. Consequently, structure-specific methods based on convolutional neural networks (CNNs) have been proposed. In particular, Liu et al. \cite{liu2018learning} propose a CNN-RNN model to learn auxiliary features, including depth and rPPG, for PAD. A Deep Tree Network (DTN) is proposed by Liu et al. \cite{liu2019deep} to partition the spoof samples into semantic sub-groups and detect PAs by routing test samples into the similar clusters. Yu et al. \cite{yu2020searching} propose a Central Difference Convolution (CDC) layer to capture intrinsic detailed patterns via aggregating both intensity and gradient information. By adopting Neural Architecture Search (NAS), the CDC based network can achieve superior performance. Although competitive results can be achieved through specially designed architectures, the cross-domain performance of CNN-based methods is still limited, restricting their application in real scenarios.
To tackle the cross-domain issue, existing works try to improve the generalization by training models with specific learning objectives. For example, the one-class loss proposed by George et al. \cite{george2020learning} tries to learn a compact embedding space for the bona fide class, treating any attack merely as an out-of-class sample. Furthermore, the one-class loss only considers the domain-invariant features and ignores the differences among domains. Hence, Jia et al. \cite{jia2020single} propose an Asymmetric Triplet Loss to mine the PAD features and design a Single-Side Adversarial Loss to align the mined features between the different domains. However, considering that CNN-based PA detectors are trained in a sophisticated way, the learning objectives might not be successfully optimized.
An important interference factor is the initialization of the PA detector. As a general rule, training from scratch and pre-training using ImageNet \cite{russakovsky2015imagenet} are two common methods. However, in PAD, it is challenging to collect large-scale data; hence, without any prior knowledge, it is difficult for a model trained from scratch (i.e. from random initialization) to learn discriminative features. On the other hand, taking a model pre-trained on ImageNet as initialization is also not a proper choice. Since ImageNet is a large-scale dataset, the time and computation required to pre-train newly proposed PAD architectures on it is an over-heavy workload. Meanwhile, as face and fingerprint images are quite different from natural images in both texture and context, the pre-trained model is not an ideal starting point for the PAD task.
\begin{figure*}
\centering
\includegraphics[width=.96\textwidth]{pipeline}
\caption{The pipeline of the proposed self-supervised learning based method (IF-OM). The feature extractor $D(\cdot)$ is trained by De-Folding and De-Mixing tasks simultaneously. In De-Folding task, the input image is folded and the target of the model is to reconstruct the image by maximizing the cycle consistency. In De-Mixing task, two images are mixed by a random weight $\epsilon$, and the learning objective is to guarantee the topological/operational consistency between the input space and feature space embedded by $D(\cdot)$. Note that both $x_i$ and $x_j$ are employed for De-Folding, but only the pipeline of $x_i$ is shown in the figure, due to the same processing for $x_i$ and $x_j$.
}
\label{fig:pipeline}
\end{figure*}
As shown in Fig.~\ref{fig:motivation}, in order to solve the aforementioned problems, a self-supervised learning based method, denoted as IF-OM, is proposed in this paper. Without any annotations, two pretext tasks are designed to train the network, which is finally set as the initialization of the PA detector.
Based on the chirality and symmetry of the face and fingerprint, a Cycle Consistency based De-Folding task is designed to force the CNN-based model to reconstruct folded images by learning the specific patterns among various regions in faces or fingerprints.
To further strengthen the representation, a Topological Consistency based De-Mixing task is then proposed to facilitate the extraction of global features by disentangling the mixed images from two samples.
Without any extra training samples or annotations, the proposed method can obtain an ideal initialization, which improves the fingerprint PAD baseline from 73.92\% to 90.96\% in TDR (True Detection Rate) @ 1\% FDR (False Detection Rate) and reduces the EER (Equal Error Rate) of the face PAD baseline by 8.12\%. In summary,
\begin{itemize}
\item A Cycle Consistency based De-Folding task is designed for fingerprint and face PAD to explore the specific patterns among different regions.
\item As a complementary task, Topological Consistency based De-Mixing task is proposed to learn more hierarchical features to better represent images.
\item The proposed method, which works in a lightweight and unsupervised mode, achieves impressive performance in terms of face and fingerprint PAD.
\end{itemize}
\section{Related Works}
Since this paper mainly focuses on a PAD solution based on self-supervised learning, we review the most representative works of such technology as below.
Self-supervised learning refers to learning methods in which CNNs are trained with automatically generated labels and then transferred to other computer vision tasks \cite{jing2020self}. Based on the categories of the generated labels, self-supervised learning can be roughly divided into generative and contrastive learning. However, neither kind of method can be directly used in PAD. As fingerprint and face images for recognition lack color information and are generally of low resolution, generative learning, such as image colorization \cite{zhang2016colorful} and image super resolution \cite{ledig2017photo}, cannot be conducted in this case.
Meanwhile, for image in-painting \cite{pathak2016context} and GANs \cite{goodfellow2014generative,zhu2017unpaired}, large-scale data is required to establish a compact feature space, while the datasets available for PAD cannot meet this requirement.
On the other hand, face images have a strong spatial structure after alignment, while the spatial structure of fingerprints is weak.
Hence, contrastive learning, like predicting the relative position \cite{doersch2015unsupervised} and rotation \cite{gidaris2018unsupervised}, is easy for face but too hard for fingerprint.
Another group of contrastive learning is instance discrimination, like MoCo \cite{he2020momentum,chen2020improved}, SimCLR \cite{chen2020simple} and BYOL \cite{grill2020bootstrap}. Through embedding each instance/image into a different class, the mentioned studies have shown solid improvements on natural images. However, in the PAD task, the bijective relation between images and predictions leads the model to learn identification rather than PAD features, resulting in poor generalization performance \cite{wang2020cross}.
In this paper, a novel self-supervised learning method, namely IF-OM, is proposed to improve the performance of PAD. Unlike existing methods, the proposed method is free of any annotations and extra data, and pays more attention to the specific characteristics of faces and fingerprints. In particular, two pretext tasks, De-Mixing and De-Folding, are proposed to find a reasonable initialization for the PA detector. As the in-image part, De-Folding exploits properties of faces and fingerprints such as chirality and symmetry, while in the out-of-image part, De-Mixing requires the model to embed images into a distinguishable feature space with the same topological structure as the input space. By applying De-Folding and De-Mixing simultaneously, adequate PAD features are extracted, which are useful to detect PAs. Extensive experiments clearly show significant improvements in the performance of face and fingerprint PAD.
\section{Proposed Method}
\label{sec:method}
Figure~\ref{fig:pipeline} presents the block diagram of the proposed IF-OM, which adopts De-Folding and De-Mixing to reliably capture the hierarchical features useful for PAD. The goal of De-Folding is to reconstruct the raw image from the folded image. Since the folded image and the corresponding ground truth can be easily obtained, the model in this task is directly trained by maximizing Cycle Consistency in an explicit way. In contrast, the De-Mixing task is an ill-posed problem, in which a single mixed image corresponds to two different images (irrespective of order). Hence, we introduce a new loss function called Topological Consistency to train the model for De-Mixing in an implicit way. In the following sections, we present a detailed discussion of the proposed method.
\subsection{In-Image De-Folding}
\label{sec:ccdf}
The patterns of faces and fingerprints are quite different from those of natural images. A typical example is that fingerprints and faces exhibit a symmetric distribution in the global view of the images, but chirality in local patterns such as texture and reflection. In the field of PAD, printed photos, replayed videos and 3D masks are typical attacks on biometric recognition systems. Although the attacks are similar to bona fide samples from the view of human vision, the texture of the attacks is generally unusual, with anomalous reflection due to the characteristics of the presentation attack instruments.
As the features for PAD largely coincide with the chiral features, a chirality-related pretext task, denoted as De-Folding, is proposed in this paper.
As shown in Fig.~\ref{fig:IF}, we propose two folding strategies for the two modalities. In the case of faces, a vertical line cuts the input image $x_i$ into two patches, $A_1$ and $A_2$, which are then randomly selected for horizontal flipping to obtain $A'_{1}$ and $A'_{2}$. Through resizing and averaging $A'_{1}$ and $A'_{2}$, the folded image $f_i$ is obtained, which then serves as the input of the following part. Unlike faces, fingerprints show chirality in both the vertical and horizontal directions. Hence, $x_i$ is cropped into four patches, $\{A_1,A_2,A_3,A_4\}$, by vertical and horizontal lines, and flips along the corresponding directions are applied to generate $\{A'_1,A'_2,A'_3,A'_4\}$. To increase the difficulty of the task and prevent the model from overfitting, the cutting lines are randomly located rather than fixed at the middle of the image.
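For illustration, the following is a minimal PyTorch-style sketch of the face-folding strategy, assuming a single image tensor of shape $(C,H,W)$; the cut range and the per-patch flip probability are our own assumptions, not hyper-parameters specified in this paper.
\begin{verbatim}
import torch
import torch.nn.functional as Fnn

def fold_face(x):
    """Fold a face image x of shape (C, H, W): cut vertically,
    randomly flip each patch, resize back and average."""
    C, H, W = x.shape
    cut = torch.randint(W // 4, 3 * W // 4, (1,)).item()  # random cut
    patches = [x[:, :, :cut], x[:, :, cut:]]
    folded = []
    for p in patches:
        if torch.rand(1).item() < 0.5:          # random horizontal flip
            p = torch.flip(p, dims=[2])
        p = Fnn.interpolate(p.unsqueeze(0), size=(H, W),
                            mode='bilinear', align_corners=False)
        folded.append(p.squeeze(0))
    return 0.5 * (folded[0] + folded[1])        # folded image f_i
\end{verbatim}
The fingerprint case extends this sketch with an additional horizontal cut and vertical flips.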
Since the paired data, i.e. ($f_i$,$x_i$) for De-Folding can be generated easily, the model is trained in an explicit way by maximizing cycle consistency. In particular, given $f_i$ as input, a feature extractor $D(\cdot)$ is adopted to embed $f_i$ into a latent representation $z_i$, while a generator $G(\cdot)$ is employed to reconstruct $z_i$ to $y_i$. By following such cycle pipeline $x_i \rightarrow f_i \rightarrow y_i$, $D(\cdot)$ and $G(\cdot)$ are trained end-to-end
by the learning objective,
\begin{equation}
\label{eq:1}
\begin{aligned}
&\min_{G,D} \underset{\substack{x_i \sim \mathcal{X}_t \\ f_i \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_r}(y_i,x_i) + \underset{\substack{x_i \sim \mathcal{X}_t \\ f_j \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_g}(y_j,x_i) \\
&\underset{\substack{x_i \sim \mathcal{X}_t \\ f_i \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_r}(y_i,x_i) \!=\! \underset{\substack{x_i \sim \mathcal{X}_t \\ f_i \sim \mathcal{T}(\mathcal{X}_t)}}{\mathbb{E}}\! || y_i - x_i ||_2 \\
&\underset{\substack{x_i \sim \mathcal{X}_t \\ f_j \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_g}(y_j,x_i) \!=\!
\underset{x_i \sim \mathcal{X}_t}{\mathbb{E}}\![F(x_i)] - \underset{f_j \sim \mathcal{T}(\mathcal{X}_t)}{\mathbb{E}}\![F(y_j)]
\end{aligned}
\end{equation}
where $\mathcal{T}(\cdot)$ is the image folding operation and $\mathcal{X}_t$ refers to the training set.
$F(\cdot)$ is a discriminator, which has the same architecture as $D(\cdot)$ and is trained by maximizing $\mathcal{L}_g$. $\mathcal{L}_r$ drives $y_i$ to be similar to $x_i$ in a supervised mode, while $\mathcal{L}_g$ adopts the loss of WGAN \cite{arjovsky2017wasserstein} to ensure the realness of $y_i$ in an unsupervised setting.
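The following sketch illustrates one possible implementation of this objective, assuming $D$, $G$ and $F$ are PyTorch modules and that, as in standard WGAN training, the critic update uses detached reconstructions; these implementation details are our assumptions.
\begin{verbatim}
import torch

def defolding_losses(D, G, F, x, f):
    """Cycle loss L_r plus WGAN loss L_g for a batch of raw
    images x and folded images f, both of shape (B, C, H, W)."""
    y = G(D(f))                                  # cycle x -> f -> y
    loss_r = (y - x).flatten(1).norm(dim=1).mean()   # ||y - x||_2
    loss_g = F(x).mean() - F(y).mean()           # WGAN critic gap
    loss_GD = loss_r + loss_g                    # minimized over G and D
    loss_F = -(F(x).mean() - F(y.detach()).mean())   # F maximizes L_g
    return loss_GD, loss_F
\end{verbatim}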
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{inImageDeFolding}
\caption{The pipeline of folding inputs in the De-Folding task. (a) presents the face case and (b) the fingerprint case. The fingerprint sample is selected from LivDet2017 \cite{mura2018livdet} and the face image is cropped from \cite{gecer2019ganfit}.
}
\label{fig:IF}
\end{figure}
\subsection{Out-of-Image De-Mixing}
\label{sec:tcdm}
Since De-Folding is an in-image task, the model learns to represent images with region-specific features for reconstruction.
Such a pretext task pays more attention to local patterns but neglects the differences between samples. As a result, different samples can be embedded into similar representations. However, PAD is a binary classification task, in which the ideal embedding space is compact but distinguishable for different samples. Therefore, another pretext task, denoted as De-Mixing, is proposed to further enhance the discrimination among different samples. The model is not only required to reconstruct folded images but also to disentangle images mixed from different samples.
In the De-Mixing task, two samples $x_i$ and $x_j$ are mixed into $M_{ij}$ by
\begin{align}
\label{eq:2}
M_{ij} = \underset{{\epsilon \sim \mathbb{U}(0,1)}}{\Delta}(x_i, x_j) =\epsilon x_i + (1-\epsilon) x_j
\end{align}
where $\mathbb{U}(0,1)$ is the uniform distribution on $(0,1)$, and $\epsilon$ is a scalar sampled from $\mathbb{U}(0,1)$ for mixing. Given $M_{ij}$ as input, the feature extractor $D(\cdot)$ is required to disentangle $M_{ij}$ into $x_i$ and $x_j$. However, such a requirement makes the task an ill-posed problem, which is hard to train end-to-end. Considering the ground truth of De-Mixing, both $\{ x_i,x_j\}$ and $\{x_j,x_i\}$ are correct results, but for $D(\cdot)$, a change of order in the ground truth is regarded as a different label. To overcome this problem, the De-Mixing task is trained in an implicit way using the topological consistency loss $\mathcal{L}_t$,
\begin{equation}
\label{eq:3}
\begin{aligned}
\underset{x_i,x_j \sim \mathcal{X}_t}{\mathcal{L}_t}(x_i,x_j) &=
\underset{x_i,x_j \sim \mathcal{X}_t}{\mathbb{E}}||z_{ij} - \hat{z}_{ij} + \delta||_2 \\
\hat{z}_{ij} &= \Delta(z_i,z_j)
\end{aligned}
\end{equation}
where $\epsilon$ is identical for $M_{ij}$ and $\hat{z}_{ij}$, $z_i$, $z_j$ and $z_{ij}$ are the outputs of $D(x_i)$, $D(x_j)$ and $D(M_{ij})$ respectively, and $\delta$ is a random noise sampled from a Gaussian distribution with zero mean and 0.1 standard deviation. By minimizing the distance between $z_{ij}$ and $\hat{z}_{ij}$, the mixing operation becomes identical in both the image and the embedding space.
Since $\{z_i,z_j,z_{ij}\}$ has the same topological structure as $\{x_i,x_j,M_{ij}\}$, $M_{ij}$ can be de-mixed easily in the embedding space of $D(\cdot)$, which approximately meets the target of De-Mixing. Note that topological consistency has a trivial solution, e.g., embedding the same code for all images; $\delta \sim \mathcal{N}(0,0.1)$ is thus added into $\mathcal{L}_t$ to enhance the gradients against such collapsed cases.
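A minimal sketch of this loss is given below, assuming $D$ returns a flat feature vector of shape $(B,d)$ and that $\epsilon$ is drawn per sample; both choices are our assumptions.
\begin{verbatim}
import torch

def demixing_loss(D, x_i, x_j, sigma=0.1):
    """Topological consistency loss L_t for the De-Mixing task."""
    B = x_i.size(0)
    eps = torch.rand(B, device=x_i.device)       # epsilon ~ U(0, 1)
    m = (eps.view(-1, 1, 1, 1) * x_i
         + (1 - eps).view(-1, 1, 1, 1) * x_j)    # mixed image M_ij
    z_i, z_j, z_m = D(x_i), D(x_j), D(m)
    z_hat = eps.view(-1, 1) * z_i + (1 - eps).view(-1, 1) * z_j
    delta = sigma * torch.randn_like(z_m)        # noise against collapse
    return (z_m - z_hat + delta).norm(dim=1).mean()
\end{verbatim}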
\subsection{IF-OM based Presentation Attack Detection}
\label{sec:PAD}
Considering the complementarity between De-Folding and De-Mixing, the proposed method trains $D(\cdot)$ with both pretext tasks simultaneously, and the total learning objective can be summarized as
\begin{align}
&\min_{G,D} \underset{\substack{x_i \sim \mathcal{X}_t \\ f_i \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_r}(y_i,x_i) + \underset{\substack{x_i \sim \mathcal{X}_t \\ f_j \sim \mathcal{T}(\mathcal{X}_t)}}{\mathcal{L}_g}(y_j,x_i) + \underset{x_i,x_j \sim \mathcal{X}_t}{\mathcal{L}_t}(x_i,x_j)
\end{align}
After training, $D(\cdot)$ is employed as the initialization for a presentation attack detector $H(\cdot)$. Compared with $D(\cdot)$, $H(\cdot)$ has an additional fully-connected layer that maps $z_i$ into a single scalar $v_i$, i.e., the spoofness score, which reflects the category probability (PA or not) of the given sample $x_i$. $H(\cdot)$ is trained through a common cross-entropy based objective as follows:
\begin{align}
\label{eq:5}
\underset{x_i \in \mathcal{X}_t}{\mathcal{L}_c}(x_i,v_i) = - \underset{x_i \in \mathcal{X}_t}{\mathbb{E}}\left[u_i \log(v_i) + (1-u_i) \log(1-v_i)\right]
\end{align}
where $v_i = H(x_i)$ and $u_i$ is the category annotation of $x_i$. For clarity, the proposed method is summarized in Algorithm~\ref{algo:1}.
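Before turning to the full algorithm, a sketch of this fine-tuning step is given below, assuming the label convention $u_i=1$ for PAs and a sigmoid on the scalar output of $H(\cdot)$; both are our assumptions.
\begin{verbatim}
import torch

def spoofness_step(H, x, u):
    """Binary cross-entropy on the spoofness score v = H(x)."""
    v = torch.sigmoid(H(x)).squeeze(1)           # v_i in (0, 1)
    return torch.nn.functional.binary_cross_entropy(v, u.float())
\end{verbatim}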
\begin{algorithm}[!htb]
\label{algo:1}
\caption{Presentation Attack Detection using IF-OM}
\begin{algorithmic}
\Require\\
Feature Extractor $D(\cdot)$; Generator $G(\cdot)$; Discriminator $F(\cdot)$; Training Set $\mathcal{X}_t$; Presentation Attack Detector $H(\cdot)$;
\Ensure \State Trained $H(\cdot)$;
\end{algorithmic}
\begin{algorithmic}[1]
\While{ $D(\cdot)$ has not converged}
\For{$x_i$,$x_j$ in $\mathcal{X}_t$ }
\State Derive folded input $f_i$ from $\mathcal{T}(x_i)$;
\State Reconstruct $f_i$ to $y_i$ through $G(D(f_i))$;
\State Update $G(\cdot)$ and $D(\cdot)$ by minimizing Eq.(\ref{eq:1});
\State Update $F(\cdot)$ by maximizing $\mathcal{L}_g$;
\State Calculate $M_{ij}$ from $x_i$ and $x_j$ through Eq.(\ref{eq:2});
\State Update $D(\cdot)$ by minimizing Eq.(\ref{eq:3})
\EndFor
\EndWhile
\While{ $H(\cdot)$ has not converged}
\State Adopt $D(\cdot)$ as the initialization of $H(\cdot)$;
\For{$x_i$ in $\mathcal{X}_t$ }
\State Obtain spoofness score $v_i$ by $H(x_i)$;
\State Update $H(\cdot)$ by minimizing Eq.(\ref{eq:5});
\EndFor
\EndWhile
\State Return $H(\cdot)$;
\end{algorithmic}
\end{algorithm}
\begin{table*}[]
\centering
\Huge
\resizebox{1.\textwidth}{!}{
\begin{tabular}{ccc|ccccccccc|ccc}
\hline
 & & & \multicolumn{3}{c|}{GreenBit} & \multicolumn{3}{c|}{DigitalPersona} & \multicolumn{3}{c|}{Orcanthus} & \multicolumn{3}{c}{\textbf{Mean $\pm$ s.d.}} \\ \cline{4-15}
\multirow{-2}{*}{Baseline} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}In-Image\\ De-Folding\end{tabular}} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Out-of-Image\\ De-Mixing\end{tabular}} & EER(\%) & AUC(\%) & \multicolumn{1}{c|}{TDR(\%) } & EER(\%) & AUC(\%) & \multicolumn{1}{c|}{TDR(\%) } & EER(\%) & AUC(\%) & TDR(\%) & EER(\%) & AUC(\%) & TDR(\%) \\ \hline
$\surd$ & $\times$ & $\times$ & 3.99 & 99.13 & 81.67 & 5.08 & 98.80 & 68.64 & 3.23 & 99.24 & 71.46 & 4.10 $\pm$ 0.93 & 99.06 $\pm$ 0.23 & 73.92 $\pm$ 6.86 \\ \hline
$\surd$ & $\surd$ & $\times$ & 3.17 & 99.38 & 86.01 & 3.90 & 99.03 & 72.78 & 2.88 & 99.24 & 77.45 & 3.32 $\pm$ 0.53 & 99.22 $\pm$ 0.18 & 78.75 $\pm$ 6.71 \\ \hline
$\surd$ & $\times$ & $\surd$ & 3.58 & 99.19 & 84.95 & \textbf{3.44} & 99.14 & 74.56 & 2.66 & 99.61 & 90.98 & 3.23 $\pm$ 0.50 & 99.31 $\pm$ 0.26 & 83.50 $\pm$ 8.31 \\ \hline
\rowcolor[HTML]{EFEFEF}
$\surd$ & $\surd$ & $\surd$ & \textbf{2.88} & \textbf{99.65} & \textbf{94.14} & 3.49 & \textbf{99.26} & \textbf{81.36} & \textbf{1.40} & \textbf{99.75} & \textbf{97.37} & \textbf{2.59 $\pm$ 1.07} & \textbf{99.55 $\pm$ 0.26} & \textbf{90.96 $\pm$ 8.47} \\ \hline
\end{tabular}
}
\caption{Performance of the Proposed Method with or without Each Component in Terms of EER (\%) $\downarrow$, AUC (\%) $\uparrow$ and TDR(\%)@FDR=1.0\% $\uparrow$ under the Cross-Material Setting on LivDet2017.}
\label{tab:ablation_finger}
\end{table*}
\begin{table*}[]
\centering
\setlength\tabcolsep{6pt}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{ccc|cccccc|ccc}
\hline
& & & \multicolumn{3}{c|}{{[}O,M{]} to {[}C, I{]}} & \multicolumn{3}{c|}{{[}C,I{]} to {[}O, M{]}} & \multicolumn{3}{c}{\textbf{Mean $\pm$ s.d.}} \\ \cline{4-12}
\multirow{-2}{*}{Baseline} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}In-Image\\ De-Folding\end{tabular}} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Out-of-Image\\ De-Mixing\end{tabular}} & EER(\%) & AUC(\%) & \multicolumn{1}{c|}{TDR(\%) } & EER(\%) & AUC(\%) & TDR(\%) & EER(\%) & AUC(\%) & TDR(\%) \\ \hline
$\surd$ & $\times$ & $\times$ & 25.65 & 79.14 & 4.07 & 28.14 & 79.05 & 18.66 & 26.90 $\pm$ 1.76 & 79.10 $\pm$ 0.06 & 11.37 $\pm$ 10.32 \\ \hline
$\surd$ & $\surd$ & $\times$ & 20.33 & 84.51 & 9.79 & 25.38 & 81.16 & 26.70 & 22.86 $\pm$ 3.57 & 82.84 $\pm$ 2.37 & 18.25 $\pm$ 11.96 \\ \hline
$\surd$ & $\times$ & $\surd$ & 21.07 & 85.17 & 11.93 & 26.33 & 80.25 & 27.49 & 23.70 $\pm$ 3.72 & 82.71 $\pm$ 3.48 & 19.71 $\pm$ 11.00 \\ \hline
\rowcolor[HTML]{EFEFEF}
$\surd$ & $\surd$ & $\surd$ & \textbf{18.96} & \textbf{89.48} & \textbf{30.48} & \textbf{18.60} & \textbf{89.76} & \textbf{30.30} & \textbf{18.78 $\pm$ 0.25} & \textbf{89.62 $\pm$ 0.20} & \textbf{30.39 $\pm$ 0.13} \\ \hline
\end{tabular}%
}
\caption{Performance of the Proposed Method with or without Each Component in Terms of EER (\%) $\downarrow$, AUC (\%) $\uparrow$ and TDR(\%)@FDR=1.0\% $\uparrow$ under the Cross-Dataset Setting on OULU-NPU (O), CASIA-FASD (C), Idiap Replay-Attack (I) and MSU-MFSD (M).}
\label{tab:ablation_face}
\end{table*}
\section{Experimental Results and Analysis}
To evaluate the performance of the proposed method,
extensive experiments are carried out on publicly available datasets, including LivDet2017 \cite{mura2018livdet}, OULU-NPU \cite{boulkenafet2017oulu}, CASIA-FASD \cite{zhang2012face}, Idiap Replay-Attack \cite{chingovska2012effectiveness} and MSU-MFSD \cite{wen2015face}. We first introduce the datasets and the corresponding implementation details. Then, the effectiveness of the proposed method is validated by analyzing the contribution of each component. Since this is, to the best of our knowledge, the first time self-supervised learning is adopted for PAD, we finally compare the proposed method with both existing self-supervised methods and PA detectors to further demonstrate its superiority.
\subsection{Datasets and Implementation Details}
As the proposed method is evaluated in two modalities, including fingerprint and face, we separately introduce the details of the corresponding protocols as follows:
\subsubsection{Fingerprint.} Owing to its comprehensive experimental settings, LivDet 2017 \cite{mura2018livdet} is used to evaluate the methods on fingerprint PAD. The dataset is composed of over 17,500 fingerprint images captured with three different readers, i.e., Green Bit, Orcanthus and Digital Persona. For each sensor, about 1,760 fingerprint images are used for training, 440 images for validation and 3,740 images for testing. To evaluate the generalization of the competing methods, cross-material and cross-sensor settings \cite{9457215} are used in this paper. In the cross-material case, the spoof materials available in the test set are deemed unknown materials, which are inaccessible during training. The partition of materials follows the setting in \cite{9457215}. In the cross-sensor protocol, PA detectors are trained on images collected with a randomly selected sensor, and then tested on images from the other sensors. Equal Error Rate (EER), Area Under Curve (AUC) and True Detection Rate (TDR) @ False Detection Rate (FDR)=1\% are used to evaluate detection performance.
In terms of network architecture, MobileNet V2 \cite{sandler2018mobilenetv2} is selected as the backbone for the feature extractor and the discriminator, while the corresponding generator follows the U-Net architecture \cite{ronneberger2015u}. The model is trained by Adam with $\beta_1=0.9$ and $\beta_2=0.999$. The learning rate is 1e-6 with 5e-4 weight decay. The batch size for training is 12. When it comes to training the PA detector based on the proposed method, the batch size is set to 128, the learning rate is 1e-4 and the other parameters are identical to those of IF-OM. Note that, in order to test the capacity of feature extraction and reduce the dependence on data scale, only the training set adopted for PAD is used to train the proposed method.
We compare the proposed method with both self-supervised learning based methods and presentation attack detectors. For self-supervised learning based methods, the GAN based discriminator \cite{isola2017image} and the auto-encoder based encoder \cite{isola2017image} are set as the baselines of generative learning, while MoCo V2 \cite{chen2020improved} is selected as the representative method of contrastive learning. In terms of PA detectors, the LivDet 2017 winner \cite{mura2018livdet} and FSB \cite{8306930} are adopted as competing methods. For a more comprehensive analysis of the proposed method, multiple-model based PA detectors, including RTK-PAD \cite{9457215} and FSB + UMG Wrapper \cite{9079541}, are also included for reference.
\subsubsection{Face.} To test the performance of face PAD, four datasets, including OULU-NPU \cite{boulkenafet2017oulu} (denoted as O), CASIA-FASD \cite{zhang2012face} (denoted as C), Idiap Replay-Attack \cite{chingovska2012effectiveness} (denoted as I) and MSU-MFSD \cite{wen2015face} (denoted as M), are adopted in this paper for evaluation using a cross-dataset protocol. In particular, [O,M] and [C,I] are set as two groups. The model is trained on one group and tested on the other.
In this case, the MTCNN algorithm \cite{zhang2016joint} is adopted for face detection and alignment. All the detected faces are resized to (256,256). ResNet18 \cite{He_2016_CVPR} is set as the backbone for the feature extractor and the discriminator. The model is trained with a batch size of 32 using the Adam optimizer ($\beta_1=0.9,\beta_2=0.999$). The learning rate is 1e-4 and the weight decay is 5e-4. For PAD, the feature extractor is fine-tuned by SGD with a 1e-2 learning rate, 0.9 momentum and 5e-4 weight decay.
Besides the mentioned self-supervised methods, state-of-the-art PA detectors, including DeepPixBiS \cite{george2019deep}, SSDG-R \cite{jia2020single} and CDC \cite{yu2020searching}, are evaluated in this paper. The effectiveness is validated by the improvement obtained when such methods adopt the proposed method as initialization.
All experiments are implemented on the public platform PyTorch using a workstation with 2.8 GHz CPUs, 512 GB RAM and NVIDIA Tesla V100 GPUs.
\begin{table}[]
\centering
\Huge
\resizebox{.48\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{c|ccc|ccc}
\hline
& \multicolumn{3}{c|}{Fingerprint} & \multicolumn{3}{c}{Face} \\ \cline{2-7}
\multirow{-2}{*}{} & EER(\%) & AUC(\%) & TDR(\%) & EER(\%) & AUC(\%) & TDR(\%) \\ \hline
Baseline & 13.26 & 93.46 & 29.28 & 42.30 & 59.51 & 4.14 \\ \hline
GAN based Discriminator & 12.59 & 93.66 & 34.33 & 38.99 & 63.38 & 4.34 \\
AE based Encoder & 11.20 & 94.83 & 27.78 & 35.65 & 65.79 & 2.70 \\ \hline
MoCo V2 & 20.87 & 86.18 & 13.10 & 40.77 & 64.58 & 6.04 \\ \hline
\rowcolor[HTML]{EFEFEF}
Ours: IF-OM & \textbf{8.87} & \textbf{96.55} & \textbf{56.42} & \textbf{31.28} & \textbf{73.68} & \textbf{10.06} \\ \hline\hline
Pre-Trained from ImageNet & 4.10 & 99.06 & 73.92 & 26.90 & 79.10 & 11.37 \\ \hline
\rowcolor[HTML]{EFEFEF}
Ours: IF-OM (ImageNet) & \textbf{2.59} & \textbf{99.55} & \textbf{90.96} & \textbf{18.78} & \textbf{89.62} & \textbf{30.39} \\ \hline
\end{tabular}
\begin{tablenotes}
\item More details of the results in each case are given in the appendix.
\end{tablenotes}
\end{threeparttable}
}
\caption{Performance Comparison between the Proposed Method and Self-Supervised Methods in Terms of EER(\%) $\downarrow$, AUC(\%) $\uparrow$ and TDR(\%)@FDR=1.0\% $\uparrow$ under the Cross-Material Setting on LivDet2017 and the Cross-Dataset Setting on [O, M] and [C, I]. }
\label{tab:SSL}
\end{table}
\begin{table*}[]
\centering
\setlength\tabcolsep{8pt}
\resizebox{1.\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{c|c|cccc|cc}
\hline
\multicolumn{2}{c|}{} & \multicolumn{2}{c}{Cross-Material Case} & \multicolumn{2}{c|}{Cross-Sensor Case} & \multicolumn{2}{c}{\textbf{Mean}} \\ \cline{3-8}
\multicolumn{2}{c|}{\multirow{-2}{*}{}} & \multicolumn{1}{c}{ACE(\%)} & \multicolumn{1}{c}{TDR(\%)} & \multicolumn{1}{c}{ACE(\%)} & TDR(\%) & \multicolumn{1}{c}{ACE(\%)} & TDR(\%) \\ \hline
& LivDet 2017 Winner & 4.75 $\pm$ 1.40 & - & - & - & - & - \\
& F.S.B. & 4.56 $\pm$ 1.12 & 73.32 $\pm$ 15.52 & 32.40$\pm$16.92 & 21.26$\pm$28.06 & 18.48 & 50.69 \\
\multirow{-3}{*}{Single Model} & \cellcolor[HTML]{EFEFEF}Ours: IF-OM & \cellcolor[HTML]{EFEFEF}\textbf{2.48 $\pm$ 0.98} & \cellcolor[HTML]{EFEFEF} \textbf{90.96 $\pm$ 8.47} & \cellcolor[HTML]{EFEFEF}\textbf{19.82 $\pm$ 9.80} & \cellcolor[HTML]{EFEFEF}\textbf{33.43 $\pm$ 24.12} & \cellcolor[HTML]{EFEFEF}\textbf{11.15} & \cellcolor[HTML]{EFEFEF}{\textbf{62.20}} \\\hline\hline
& F.S.B. + UMG Wrapper & \multicolumn{1}{c}{4.12 $\pm$ 1.34} & \multicolumn{1}{c}{80.74 $\pm$ 10.02} & \multicolumn{1}{c}{\textbf{20.37 $\pm$ 12.88}} & \textbf{43.23 $\pm$ 28.31} & 12.25 & 61.99 \\
\multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Multiple Models\\ (Ensemble Learning)\end{tabular}} & RTK-PAD & \multicolumn{1}{c}{\textbf{2.12 $\pm$ 0.72}} & \multicolumn{1}{c}{\textbf{91.20 $\pm$ 7.59}} & \multicolumn{1}{c}{21.87 $\pm$ 10.48} & 34.70 $\pm$ 25.30 & \textbf{12.00} & \textbf{62.95} \\ \hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item The competing methods only report results in ACE and TDR@FDR=1.0\%; we thus also evaluate the proposed method in ACE.
\item More details of the results in each case are given in the appendix.
\end{tablenotes}
\end{threeparttable}
}
\caption{Performance Comparison between the Proposed Method and the State-Of-The-Art Methods on LivDet2017 under the Cross-Material and Cross-Sensor Settings in Terms of Average Class Error (\%) $\downarrow$ and TDR(\%)@FDR=1.0\%$\uparrow$. }
\label{tab:finger_compare}
\end{table*}
\begin{table*}[]
\centering
\setlength\tabcolsep{6pt}
\resizebox{1.\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{c|cccccc|ccc}
\hline
& \multicolumn{3}{c|}{{[}O,M{]} to {[}C, I{]}} & \multicolumn{3}{c|}{ {[}C,I{]} to {[}O, M{]}} & \multicolumn{3}{c}{\textbf{ Mean $\pm$ s.d.}} \\ \cline{2-10}
& EER(\%) & AUC(\%) & \multicolumn{1}{c|}{ TDR(\%)} & EER(\%) & AUC(\%) & TDR(\%) & EER(\%) & AUC(\%) & TDR(\%) \\ \hline
Baseline: DeepPixBiS & 22.93 & 79.13 & 0.00 & 22.45 & 85.70 & 24.37 & 22.69 $\pm$ 0.34 & 82.42 $\pm$ 4.65 & 12.19 $\pm$ 17.23 \\
\rowcolor[HTML]{EFEFEF}
Ours:DeepPixBiS + IF-OM & \textbf{15.94} & \textbf{92.60} & \textbf{42.34} & \textbf{22.14} & \textbf{86.36} & \textbf{34.08} & \textbf{19.04 $\pm$ 4.38} & \textbf{89.48 $\pm$ 4.41} & \textbf{38.21 $\pm$ 5.84} \\ \hline \hline
Baseline: SSDG-R & 20.92 & 88.07 & 9.72 & 22.57 & 85.61 & 15.95 & 21.75 $\pm$ 1.17 & 86.84 $\pm$ 1.74 & 12.84 $\pm$ 4.41 \\
\rowcolor[HTML]{EFEFEF}
Ours: SSDG-R + IF-OM & \textbf{18.60} & \textbf{88.80} & \textbf{15.52} & \textbf{18.92} & \textbf{88.59} & \textbf{47.90} & \textbf{18.76 $\pm$ 0.23} & \textbf{88.70 $\pm$ 0.15} & \textbf{31.71 $\pm$ 22.90} \\ \hline \hline
Baseline: CDC & 28.94 & 78.96 & 13.93 & 23.30 & 83.42 & 25.83 & 26.12 $\pm$ 3.99 & 81.19 $\pm$ 3.15 & 19.88 $\pm$ 8.41 \\
\rowcolor[HTML]{EFEFEF}
Ours: CDC + IF-OM & \textbf{26.00} & \textbf{81.73} & \textbf{14.76} & \textbf{21.86} & \textbf{85.77} & \textbf{34.95} & \textbf{23.93 $\pm$ 2.93} & \textbf{83.75 $\pm$ 2.86} & \textbf{24.86 $\pm$ 14.28} \\ \hline
\end{tabular}%
\begin{tablenotes}
\footnotesize
\item $^*$ This paper adopts ResNet-18 as the backbone for CDC.
\end{tablenotes}
\end{threeparttable}
}
\caption{Performance of Various PAD Methods with or without the Proposed Method as Initialization in Terms of EER(\%) $\downarrow$, AUC(\%) $\uparrow$ and TDR(\%)@FDR=1.0\% $\uparrow$ under the Cross-Dataset Setting of Face. }
\label{tab:face_compare}
\end{table*}
\subsection{Effectiveness Analysis of the Proposed Method}
To quantify the contributions of Out-of-Image De-Mixing and In-Image De-Folding, we test the performance of PAD with and without the corresponding pretext task. Table \ref{tab:ablation_finger} and Table \ref{tab:ablation_face} show the results for the fingerprint and face cases, respectively. The baseline is set as the model pre-trained on ImageNet for PAD. Compared with the baseline, both De-Folding and De-Mixing provide a more reasonable initialization. Specifically, an increase of 9.58\% in mean TDR@FDR=1.0\% is achieved by adopting De-Mixing as the pretext task in Table \ref{tab:ablation_finger}. When it comes to faces, De-Folding improves the EER of the baseline from 26.90\% to 22.86\%. This indicates that both components of the proposed method can promote PAD effectively. Among all the cases, the most significant improvement is obtained when all the designed components are adopted, i.e., IF-OM reaches 18.78\% and 2.59\% mean EER on faces and fingerprints respectively, significantly outperforming the baseline.
\subsection{Comparison with Related Methods}
\subsubsection{Comparison with Self-Supervised Methods}
Due to the differences between natural images and face/fingerprint images, directly adopting existing self-supervised methods for PAD is not a proper choice.
To the best of our knowledge, this is the first time a self-supervised method is designed specifically for PAD.
To validate the effectiveness of the proposed method, we compare IF-OM with existing self-supervised methods. As listed in Table \ref{tab:SSL}, the proposed method outperforms existing methods significantly. In terms of faces, when trained from scratch, our method achieves an EER of 31.28\%, exceeding other self-supervised methods by around 4$\sim$10\% in absolute terms. Meanwhile, the proposed method can further improve the performance of the model pre-trained on ImageNet. Typically, in the fingerprint case, IF-OM reaches 90.96\% TDR at FDR=1.0\%, which outperforms the ImageNet initialization by a large margin, i.e., 90.96\% vs. 73.92\%. Note that the data scale of PAD is limited; hence, contrastive learning cannot reach competitive results and may lead the model to learn features useful for identification but useless for PAD.
\subsubsection{Comparison with Presentation Attack Detectors}
To further verify the effectiveness of the proposed method, we compare it with state-of-the-art methods. As listed in Table \ref{tab:finger_compare}, under the cross-material and cross-sensor settings, the proposed method outperforms other single-model based methods by a large margin. In the cross-sensor case, compared with FSB, a reduction of 12.58\% in average classification error (ACE) is obtained by IF-OM. Analyzing the cross-material and cross-sensor protocols together, IF-OM promotes the PA detector to 11.15\% mean ACE, even exceeding the multiple-model based methods, which convincingly proves the advantage of IF-OM.
When it comes to faces, we re-implement several well-known PA detectors and investigate the improvement brought by IF-OM to each of them. As listed in Table \ref{tab:face_compare}, IF-OM improves detection performance by around 5$\sim$25\% in mean TDR@FDR=1.0\%. When DeepPixBiS is used as the detector, IF-OM improves the AUC of PAD from 82.42\% to 89.48\%. The experimental results indicate that the proposed method is general and can be integrated with various PA detectors.
\section{Conclusion}
In this paper, we proposed a self-supervised learning based method to improve the generalization performance of PA detectors.
The De-Folding and De-Mixing pretext tasks included in the method work together as a local-global strategy. That is, De-Folding requires the model to reconstruct the folded image to the raw one by extracting region-specific features, i.e., local information, while De-Mixing drives the model to derive instance-specific features, i.e., global information, by disentangling the mixed samples. The generalization ability is finally improved by this comprehensive local-global view of the training samples.
The effectiveness of the proposed method is verified in terms of face and fingerprint PAD on five publicly available datasets: LivDet2017, OULU-NPU, CASIA-FASD, Idiap Replay-Attack and MSU-MFSD. In the future, we will further investigate the application of the proposed method in other tasks, such as fingerprint/face recognition and face detection/alignment.
\section{Acknowledgment}
The work is partially supported by the National Natural Science Foundation of China under grants no. 62076163 and 91959108, the Shenzhen Fundamental Research fund JCYJ20190808163401646, JCYJ20180305125822769, Tencent “Rhinoceros Birds”-Scientific Research Foundation for Young Teachers of Shenzhen University and the Research Council of Norway (No. 321619 Project “OffPAD”).
\section{Introduction}
\label{intro}
Since the first detection of the afterglows \citep{cos97,par97,fra97} and the
host galaxies \citep{blo98,blo99,fru99b} of gamma-ray bursts (GRBs), it has
by now been well established that long-duration GRBs are cosmological events
occurring in star-forming galaxies
\citep[and references therein]{pac98a,fru99a,ber01,fra02,chr04,sol05,fru06},
and are most likely produced by the core-collapse of massive stars
(Woosley, Heger \& Weaver 2002; Piran 2004; Zhang \& M\'esz\'aros 2004;
Woosley \& Heger 2006a, and references therein). This scenario has received
strong support from the cumulative evidence that some, if
not all, long-duration GRBs are associated with supernovae (SNe), either
from direct observations of supernova features in the spectra of GRB
afterglows, or from indirect observations of rebrightening and/or flattening
(called ``red bumps'') in GRB afterglows which are interpreted as the
emergence of the underlying supernova lightcurves
\citep[and references therein]{del06,woo06x,woo06}. The discovery of the
connection between GRBs and supernovae has been one of the most exciting
developments in the fields of GRBs and supernovae in the past decade.
Interestingly, all the supernovae that have been spectroscopically confirmed
to be associated with GRBs, including SN 1998bw/GRB 980425 \citep{gal98},
SN 2003dh/GRB 030329 \citep{hjo03,sta03}, SN 2003lw/GRB 031203 (Malesani et
al. 2004; Sazonov, Lutovinov \& Sunyaev 2004), and the most recent one,
SN 2006aj/GRB 060218
\citep{mas06,mod06,cam06,sol06,pia06,mir06,cob06}, are Type Ic having
no detectable hydrogen and helium lines. However, the supernovae that are
associated with GRBs also remarkably differ from ordinary Type Ibc supernovae:
they have extremely smooth and featureless spectra indicating very large
expansion velocity, are much more energetic (i.e., involving much larger
explosion energy), and eject a significantly larger amount of nickel
\citep{ham04,del06,woo06}, except SN 2006aj/GRB 060218 which is somewhat
closer to normal SNe Ibc (see below; Mazzali et al. 2006). For these reasons,
they are often called ``hypernovae'' to be distinguished from normal
supernovae \citep{iwa98,pac98a,pac98b}. A correlation between the peak
spectral energy of GRBs and the peak bolometric luminosity of the underlying
supernovae is found by \citet{li06}, based on the multi-wavelength
observations of the above four pairs of GRBs-SNe.
The discovery of GRB-SN connection has provided us with important clues to
the progenitors of GRBs, since it is broadly believed that Type Ibc
supernovae are produced by the core-collapse of Wolf-Rayet (WR) stars that
have lost their hydrogen (possibly also helium) envelopes due to strong
stellar winds or interaction with companions
\citep[and references therein]{sma02,woo02,fil04,woo06a}. In fact, for
several GRBs, observations with high quality optical spectra have identified
the presence of highly ionized lines with high relative velocities most
likely coming from shells or clumps of material from WR stars, supporting
WR stars as the GRB progenitors
\citep[see, however, Hammer et al. 2006]{mir03,sch03,klo04,che06}.
A systematic study on the GRB afterglows carried out by \citet{zeh04}
suggested that all long-duration GRBs are associated with supernovae. However,
it appears that only a small fraction of Type Ic supernovae are able to
produce GRBs, since the rates of GRBs and hypernovae are several orders of
magnitude lower than the rate of core-collapse supernovae \citep{pod04}.
Although both long-duration GRBs and core-collapse supernovae are found in
star-forming galaxies, their location in the hosts and the morphology and
luminosities of their host galaxies are significantly different as most
clearly revealed by the recent study of \citet{fru06} with {\it Hubble Space
Telescope} ({\it HST}) imaging. The core-collapse supernovae trace the blue-light
of their hosts that are approximately equally divided between spiral and
irregular galaxies, while long GRBs are
far more concentrated on the brightest regions of faint and irregular
galaxies. \citet{fru06} argued that their results may be best understood if
GRBs are formed from the collapse of extremely massive and low-metallicity
stars.
The preference of long-duration GRBs to low-metallicity galaxies
\citep{fyn03,hjo03a,lef03,sol05,fru06} has been strengthened by the recent
paper of \citet{sta06}, in which a strong anti-correlation between the
isotropic energy of five nearby SN-connected GRBs and the oxygen abundance
in their host galaxies was found, which was used to argue that life in
the Milky Way is protected from GRBs by metals. \citet{sta06} have
suggested that long-duration GRBs do not trace star formation, but trace the
metallicity.
The discovery of GRB 060218 and its association with SN 2006aj by {\it Swift}\, has
shed more light on the GRB-SN connection as well as on the nature of GRBs.
GRB~060218 has a cosmological redshift $z = 0.0335$ corresponding to a
luminosity distance of $147 {\rm Mpc}$ ($\Omega_m = 0.3$, $\Omega_\Lambda =
0.7$, and $H_0 = 70 ~{\rm km~s}^{-1} {\rm Mpc}^{-1}$), which makes it the
second nearest GRB among those having determined redshifts (about four times
the distance of GRB 980425 at $z=0.0085$; Campana et al. 2006; Pian et al.
2006; Sollerman et al. 2006). GRB 060218 is very unusual in several aspects.
It has an extremely long duration, about $2,100$ s. Its spectrum is very soft,
with a photon index $2.5\pm 0.1$ and peak energy $E_{\rm peak} = 4.9_{-0.3}^{+0.4}\,
{\rm keV}$ in the GRB frame. The isotropic equivalent energy is $E_{\rm iso} =
(6.2\pm 0.3)\times 10^{49}$ ergs extrapolated to the $1$--$10,000$ keV in
the rest frame energy band \citep{cam06}, which is at least 100 times fainter
than normal cosmological GRBs but among a population of under-energetic
GRBs \citep{saz04,lia06}.
Although the supernova associated with GRB 060218, i.e. SN 2006aj, is broadly
similar to those previously discovered GRB-connected supernovae, it also shows
some remarkably unusual features \citep{pia06,sol06,maz06a}. Among the four
GRB-connected supernovae mentioned above, SN 2006aj is the faintest one,
although still brighter than normal Type Ibc supernovae. Its lightcurve rises
more rapidly, and its expansion velocity indicated by the spectrum is
intermediate between that of other GRB-connected supernovae and that of
normal SNe Ibc. Modeling of the spectra and the lightcurve of SN 2006aj
reveals that SN 2006aj
is much less energetic compared to other GRB-connected supernovae: it had an
explosion energy $E_{\rm in} \approx 2\times 10^{51}$ ergs, ejected a mass
$M_{\rm ej} \approx 2 M_\odot$, compared to $E_{\rm in} \sim 3$--$6\times 10^{52}$
ergs, and $M_{\rm ej}\sim 10 M_\odot$ of the others \citep{maz06a}. This suggests
that SN 2006aj is closer to normal Type Ibc supernovae than to the other
GRB-connected supernovae, and there does not exist a clear gap between
hypernovae and normal Type Ibc supernovae \citep{li06}.
The X-ray afterglow observation by the X-Ray Telescope (XRT) on board {\it Swift}\,
on GRB 060218 started 159~s after the burst trigger. A very interesting
feature in the early X-ray afterglow is that it contains a soft black-body
component which has a temperature about $0.17$ keV and comprises about
$20\%$ of the total X-ray flux in the $0.3$--$10$ keV range, lasting from
159~s up to $\sim 10,000$~s. The black-body component was not detected in
later XRT observations \citep{cam06}. The total energy contained in the
black-body component, as estimated by Campana (private communication), is
$\approx 10^{49}$ ergs. \citet{cam06} interpreted it as supernova shock
breakout from a dense wind surrounding the progenitor WR star of the
supernova.
\citet{but06} conducted an analysis on the early X-ray afterglows of a
sample ($>70$) of GRBs observed by the XRT/{\it Swift}. He found that although most of
the afterglow spectra can be fitted with a pure power law with extinction, a
small fraction of them show appreciable soft thermal components at $5$--$10\%$
level. His reanalysis on GRB 060218 showed that the black-body component
contains energy as much as $2.3\times 10^{50}$ ergs and has a duration
$\approx 300$ s. According to Butler's analysis, the soft black-body component
even dominates the flux after $\sim 1,000$~s from the burst trigger.
Flashes from shock breakout in core-collapsed supernovae were first predicted
by \citet{col68} almost forty years ago, originally proposed for GRBs that had
not been discovered yet. However, they have not been unambiguously detected in
supernova observations yet \citep{cal04}. This is mainly due to the transient
nature of the event. It is generally expected that the flash from shock
breakout precedes the supernova, is much brighter and harder than the
supernova radiation but has a very short time-duration.
According to the general theory of core-collapsed supernova explosion, the
liberation of explosive energy in the interior of a progenitor star generates
a shock wave. The shock wave propagates outward. However, the external
appearance of the star remains unaltered, until the shock wave reaches a
point (the shock breakout point) near the stellar surface where the diffusion
velocity of photons begins to exceed the shock velocity. The postshock
radiation can then leak out in a burst of ionizing radiation, producing a
brilliant flash in the UV/X-ray band \citep[for a comprehensive review see
Matzner \& McKee 1999]{kle78,che79,ims89}.
For the famous Type II SN 1987A, theoretical calculations have shown that
the shock emergence from the surface of the progenitor (Sk~1, a blue
supergiant) would have produced a radiation of $\sim 10^{47}$ ergs in the
extreme UV to soft X-ray band, lasting 1--3 minutes
\citep{ims88,ims89,ens92,bli00}.
In fact, in the observed bolometric lightcurve of SN 1987A, there was a fast
initial decline phase which could be the tail of the lightcurve produced by
the shock breakout \citep{ims89}. If the shock breakout interpretation of
the soft black-body component in GRB 060218 is confirmed, it would have
important impact on the theories of both GRBs and supernovae. The case of
GRB 060218/SN 2006aj would also be the first unambiguous detection of a shock
breakout event from supernovae.
Although the propagation of a strong shock in a supernova and the appearance
of shock emergence (shock breakout) have been intensively studied both
analytically and numerically (Klein \& Chevalier 1978; Imshennik \& Nad\"ezhin
1988, 1989; Ensman \& Burrows 1992; Blinnikov et al. 1998, 2000, 2002;
Matzner \& McKee 1999; Tan, Matzner \& McKee 2001), in the
situation of supernovae produced from stars with dense stellar winds they have
not been fully explored yet. If the stellar wind of the progenitor is very
optically thick---which is indeed the case for Type Ibc supernovae whose
progenitors are believed to be WR stars---the shock breakout will occur in
the wind region after the shock passes through the surface of the
star, instead of in the region inside the star. Since a stellar wind has a
mass density profile very different from that of a star, the model
that has been developed for the shock emergence in supernovae with progenitors
without stellar winds cannot be directly applied to the case of progenitors
with dense stellar winds.
In this paper, we present a simple model for semi-analytically computing
the propagation of a strong shock in a dense stellar wind, and estimating
the characteristic quantities for the transient event from the shock breakout
in SNe Ibc. The model is obtained by an extension of the existing model for
the shock propagation and breakout in supernovae produced by the core-collapse
of stars without dense
stellar winds. Then, we apply the model to SN 2006aj and examine if the
soft black-body component in the early X-ray afterglow of GRB 060218 can
be interpreted by the supernova shock breakout.
The paper is organized as follows. In Sec.~\ref{wr}, we describe a simple
but general model for the mass density and velocity profile for the wind
around a WR star, and calculate the optical depth in the wind. In
Sec.~\ref{shock}, we model the propagation of a supernova shock wave in a
stellar wind, taking into account the relativistic effects. In
Sec.~\ref{energy}, we analyze the evolution of the shock front, and
the radiation energy contained in it. In Sec.~\ref{emergence}, we present
a procedure for calculating the quantities characterizing the transient event
arising from the shock breakout, including the released energy, the
temperature, the time-duration, and the momentum of the shock front at the
time of shock breakout. In Sec.~\ref{result}, we present our numerical
results. In Sec.~\ref{application}, we apply our model to
GRB 060218/SN 2006aj. In Sec.~\ref{sum}, we summarize our results and draw
our conclusions.
Appendix~\ref{b1} is devoted to the formulae for computing the optical depth
of a wind in the framework of the standard stellar wind model.
Appendix~\ref{star} lists the formulae for computing the characteristic
quantities for supernova shock breakout from a star without winds, in the
trans-relativistic regime. Appendix~\ref{correlation} presents a correlation
in the WR star parameters.
\section{Mass Density Profile of the Wind of a Wolf-Rayet Star and the Optical
Depth in the Wind}
\label{wr}
Wolf-Rayet stars are very luminous, hot, and massive stars that are nearly
at the end of their stellar lives. Based on their spectra, WR stars are often
classified as WN stars (nitrogen dominant) and WC stars (carbon dominant).
WR stars are characterized by extremely dense stellar winds, losing mass at
a rate of $10^{-6}$--$10^{-4} M_\odot$ yr$^{-1}$ with a wind terminal velocity
of $700$--$3,000$ km s$^{-1}$. The mass lost from WR stars by stellar
winds is so enormous that most (if not all) of the hydrogen of WR stars has
been lost. This is a main reason for the general belief that WR stars are the
progenitors of Type Ibc supernovae.
Some basic relations in the physical parameters of WR stars can be found
in \citet{lan89}, \citet{sch92}, and \citet{nug00}.
The wind of a WR star is usually extremely dense. This is characterized by
the fact that, for the majority of WR stars, the ratio of the momentum of
the wind ($\dot{M} v_\infty$, where $\dot{M}$ is the mass-loss rate,
$v_\infty$ is the terminal velocity of the wind) to the momentum of
radiation ($L/c$, where $L$ is the luminosity of the star, $c$ is the speed
of light) is much larger than unity, indicating that on average each photon
leaving the star must be scattered several times and the wind must be
optically thick. As a result, the photospheric radius ($R_{\rm ph}$, the radius
where the optical depth $\tau_w = 2/3$) often differs from the core radius
of the star ($R_\star$, the radius where $\tau_w = 20$ by definition)
by a factor $> 2$.
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.47]{rph_rstar.eps}
\caption{Photospheric radius versus stellar core radius, for 86 Galactic WRs
(filled circles for WC-type, open circles for WN-type) and 6 LMC WRs
(triangles, WC-type only). The dashed lines show the relation of $R_{\rm ph}
= R_\star$, $R_{\rm ph} = 2 R_\star$, and $R_{\rm ph} = 10 R_\star$ (upward).
}
\label{rph}
\end{figure}
In Fig.~\ref{rph}, we plot the photospheric radius against the core radius
for 86 Galactic WC-type and WN-type stars (Hamann, Koesterke \& Wessolowski
1995; Koesterke \& Hamann 1995) and 6 LMC WC-type stars \citep{gra98},
determined with the ``standard model'' of stellar winds. For many WRs,
especially those of WC-type, we have $R_{\rm ph} > 2 R_\star$.
The mass density of a steady and spherically symmetric wind is related to
the mass-loss rate and the wind velocity by
\begin{eqnarray}
\rho(r) = \frac{\dot{M}}{4\pi r^2 v_r(r)} \;, \label{rho_w}
\end{eqnarray}
where $r$ is the radius from the center of the star, and $v_r$ is the radial
velocity of the wind. We model the velocity of the wind by
\citep{sch96,ign00,nug02}
\begin{eqnarray}
v_r(r) = v_\infty \left(1-\frac{\alpha R_\star}{r}\right)^b \;,
\label{vr}
\end{eqnarray}
where $\alpha<1$ and $b\ge 1$ are free parameters. The presence of $\alpha$
in equation~(\ref{vr}) is to ensure that the mass density of the wind is
regular at the stellar radius $r=R_\star$.
In the ``standard model'' of stellar winds the value of $b$ is assumed to
be unity, as in the case of O-stars. However, it has been argued that for WR
stars $b$ can be significantly larger \citep{rob94,sch97,lep99}. According
to the calculations of \citet{nug02}, $b$ is typically in the range of
$4$--$6$.
The value of $\alpha$ can be determined by the radial velocity of the wind
at $r=R_\star$. If we define $\varepsilon = v_\star/v_\infty$, where
$v_\star \equiv v_r(R_\star)$, then
\begin{eqnarray}
\alpha = 1-\varepsilon^{1/b} \;.
\label{alp}
\end{eqnarray}
Typically, $v_\star$ has the order of the sound speed at $R_\star$, and
$\varepsilon \sim 0.01$ \citep{sch96}.
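As a concrete sketch of this wind model (cgs units, with fiducial parameter values that are our own illustrative choices):
\begin{verbatim}
import numpy as np

M_SUN, R_SUN, YR = 1.989e33, 6.957e10, 3.156e7   # cgs

def wind_profile(r, Mdot=5e-5 * M_SUN / YR, v_inf=2e8,
                 R_star=3 * R_SUN, b=5, eps=0.01):
    """Wind velocity v_r(r) and mass density rho(r)."""
    alpha = 1 - eps**(1.0 / b)                   # from v_r(R_star) = eps*v_inf
    v_r = v_inf * (1 - alpha * R_star / r)**b    # velocity law
    rho = Mdot / (4 * np.pi * r**2 * v_r)        # mass conservation
    return v_r, rho
\end{verbatim}
At $r=R_\star$ this returns $v_r=\varepsilon v_\infty$ by construction, while at large radii it recovers $v_r\approx v_\infty$ and $\rho\propto r^{-2}$.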
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.475]{slope.eps}
\caption{The log-slope of the wind density, $s\equiv d\ln\rho/d\ln r = -2 -b
(r/\alpha R_\star -1)^{-1}$, as a function of the radius $r$. As $r\rightarrow
\infty$, $s$ approaches $-2$ (the upper dashed line). For small $r$, $s$ is
significantly smaller than $-2$. A shock wave accelerates only in the region
of $s<-3$ (below the lower dashed line; see Sec.~\ref{shock}). Left to right:
$b= 1$--$10$ with $\Delta b= 1$.
}
\label{slope}
\end{figure}
In the outer wind region, where $r\gg R_\star$ and $v_r\approx v_\infty$,
the wind density $\rho\sim r^{-2}$. In the region close to the stellar
surface (i.e., $r\sim R_\star$), the wind density has a much steeper
log-slope (Fig.~\ref{slope}). As will be seen in Sec.~\ref{shock}, it is the
very steep mass density profile near the surface of the star that makes it
possible for a shock wave to accelerate in the wind region. We will also see
in Sec.~\ref{result} that shock breakout takes place at a radius not far
from the surface of $r=R_\star$. So we adopt equation~(\ref{vr}) for the
wind velocity profile since its asymptotic form $v_r = v_\infty$ (and $\rho
\propto r^{-2}$) is not accurate for describing the wind velocity (and hence
the mass density) near $r=R_\star$.
The opacity $\kappa_w$ in the WR wind region is complex and generally a
function of radius \citep{nug02,gra05}. However, compared to the mass density,
the opacity changes very slowly with radius. For example, at the sonic point
in the wind, we have $d\ln\kappa_w/d\ln r \sim 0.001$--$0.03$ \citep{nug02},
while $|d\ln\rho/d\ln r| \ga 2$ always. Hence, to calculate the optical depth
in the wind, we can approximate $\kappa_w$ by a constant although its value is
uncertain at some level. Then, the optical depth in the wind is
\begin{eqnarray}
\tau_w \equiv \int_r^\infty \kappa_w\rho dr = \frac{A}{(b-1)
\alpha R_\star} \left[\left(1-\frac{\alpha}{y}\right)^{1-b}
-1\right] \;,
\label{tau}
\end{eqnarray}
where $y\equiv r/R_\star$ and $A\equiv \kappa_w\dot{M}/(4\pi v_\infty)$.
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.475]{alpha_yph.eps}
\caption{The solution for $\alpha$ and $y_{\rm ph} = R_{\rm ph}/R_\star$, for
$\varepsilon=10^{-5}-10^{-1}$. Top to bottom: $b= 1 - 10$ with $\Delta b
= 1$. The filled circles on each curve label the values of $\varepsilon
= 10^{-5}$, $10^{-4}$, $10^{-3}$, $10^{-2}$, and $10^{-1}$ from left to
right.
}
\label{alpha}
\end{figure}
As commonly adopted in the literature, we define the stellar core radius
$R_\star$ of the WR star to be the radius where $\tau_w = 20$. Then, we can
rewrite the optical depth as
\begin{eqnarray}
\tau_w = \tau_0 \left[\left(1-\frac{\alpha}{y}\right)^{1-b}
-1\right] \;,
\label{tau_r}
\end{eqnarray}
where
\begin{eqnarray}
\tau_0 \equiv\frac{A}{(b-1)\alpha R_\star} =
\frac{20}{(1-\alpha)^{1-b} -1} \;.
\label{tau0}
\end{eqnarray}
By definition, the boundary of the photosphere is at the photospheric
radius where $\tau_w = 2/3$. Then, we can solve for $y_{\rm ph} \equiv R_{\rm ph}/
R_\star$ from equation~(\ref{tau_r})
\begin{eqnarray}
y_{\rm ph} = \alpha \left[1-\left(1+\frac{2}{3\tau_0}
\right)^{1/(1-b)}\right]^{-1} \;.
\label{yph}
\end{eqnarray}
For a given $b$, $y_{\rm ph}$ is a decreasing function of $\alpha$. As $\alpha
\rightarrow 1$, we have $\tau_0\rightarrow 0$ and $y_{\rm ph}\rightarrow 1$. As
$\alpha\rightarrow 0$, we have $\tau_0\approx 20/[(b-1)\alpha]$ and $y_{\rm ph}
\rightarrow 30$. Thus, in general we must have $1<y_{\rm ph} <30$. By
equation~(\ref{alp}), $\alpha$ is a decreasing function of $\varepsilon$.
The above results for the optical depth are valid only for $b>1$. The
corresponding formulae for $b=1$ (the ``standard model'') are given in
Appendix~\ref{b1}.
In Fig.~\ref{alpha}, we plot $\alpha$ versus $y_{\rm ph}$, solved from
equations~(\ref{alp}) and (\ref{yph}) (or eq.~\ref{yph1} when $b=1$) for a
set of discrete values of $b$ (from 1 to 10) and continuous values of
$\varepsilon$ (from $10^{-5}$ to $10^{-1}$). The value of $y_{\rm ph}$ sensitively
depends on $\varepsilon$. For the same value of $b$, $y_{\rm ph}$ drops quickly as
$\varepsilon$ decreases. When $\varepsilon$ is fixed, $y_{\rm ph}$ decreases
if $b$ increases from $b=1$, very fast for small values of $\varepsilon$.
However, for $b>3$, the effect of the variation in $b$ on the value of
$y_{\rm ph}$ is not dramatic.
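In practice, for $b>1$ these relations can be evaluated directly; the following sketch (with $b=5$ and $\varepsilon=10^{-2}$ as illustrative inputs) reproduces the behavior shown in Fig.~\ref{alpha}:
\begin{verbatim}
def photosphere(b, eps):
    """alpha, tau_0 and y_ph = R_ph/R_star for b > 1."""
    alpha = 1 - eps**(1.0 / b)                   # velocity-law parameter
    tau0 = 20.0 / ((1 - alpha)**(1 - b) - 1)     # from tau_w(R_star) = 20
    y_ph = alpha / (1 - (1 + 2 / (3 * tau0))**(1.0 / (1 - b)))
    return alpha, tau0, y_ph

print(photosphere(b=5, eps=1e-2))   # alpha ~ 0.60, y_ph ~ 3.2
\end{verbatim}
so that $R_{\rm ph}\approx 3 R_\star$, comparable to the WC stars in Fig.~\ref{rph}.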
The opacity $\kappa_w$, and the corresponding optical depth $\tau_w$, are
for the optical photons in the wind and are hence valid for calculating the
mass density profile of the wind before a supernova shock passes through it. As
will be seen in the following sections, the radiation generated by the
supernova shock wave in the wind of a WR star is in the X-ray band and we
also need consider the opacity and the optical depth to the X-ray photons for
computing the thickness of the shock front and the emergence of the shock
wave (Secs.~\ref{energy} and \ref{emergence}).
The X-ray opacity of a gas strongly depends on the ionization state of the
gas \citep{kro84}. The radiation generated by a supernova shock wave in
the wind of a WR star has a luminosity $L_{\rm X} \ga 10^{45}$ ergs s$^{-1}$
(Secs.~\ref{emergence} and \ref{result}), which contains enough photons to
fully ionize the surrounding gas. This fact can be seen from the ionization
parameter, defined as the ratio of the photon number density to the particle
number density, $\Xi \equiv L_{\rm X}/\left(4\pi c r^2 n_{\rm H} \varepsilon_{\rm
ph}\right)$, where $\varepsilon_{\rm ph}$ is the energy of photons. Using
equation~(\ref{rho_w}) (where $v_r\approx v_\infty$) and $n_{\rm H} = \rho/
\mu_{\rm H} m_{\rm H}$, where $m_{\rm H}$ is the mass of proton, and
$\mu_{\rm H} \approx 2$ is the mean molecular weight per proton, we get
\begin{eqnarray}
\Xi &\approx& \frac{\mu_{\rm H} m_{\rm H} L_{\rm X} v_\infty}{\dot{M}
\varepsilon_{\rm ph} c} \nonumber \\
&\approx& 4.4\times 10^6\;\mu^{-1}
\left(\frac{L_{\rm X}}{10^{45}{\rm ergs}\, {\rm s}^{-1}}\right)
\left(\frac{\varepsilon_{\rm ph}}{1\,{\rm keV}}\right)^{-1} \;,
\label{xxi}
\end{eqnarray}
where
\begin{eqnarray}
\mu \equiv \left(\frac{\dot{M}}{5\times 10^{-5}
M_\odot\,{\rm yr}^{-1}}\right) \left(\frac{
v_\infty}{2,000\,{\rm km}\,{\rm s}^{-1}}
\right)^{-1} \;.
\label{mu2}
\end{eqnarray}
Hence, for typical parameters we have $\Xi\ga 10^6$, which means that a tiny
fraction of the radiation would be enough to fully ionize the gas in the
wind. Then, the absorption opacity is negligible \citep{kro84}. The opacity
in the wind to the X-ray photons is then dominated by the electron scattering
opacity, $\kappa_{\rm es} = 0.2$ cm$^2$ g$^{-1}$, which is insensitive to
the photon energy if the photon energy is much smaller than the electron
mass energy \citep{akh65}.
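The quoted coefficient can be checked numerically with a short script (cgs units; fiducial values only):
\begin{verbatim}
M_SUN, YR = 1.989e33, 3.156e7
m_H, c, keV = 1.6726e-24, 2.998e10, 1.602e-9    # cgs

L_X, eps_ph = 1e45, keV                          # ergs/s, photon energy
Mdot = 5e-5 * M_SUN / YR                         # g/s
v_inf, mu_H = 2e8, 2.0                           # 2,000 km/s

Xi = mu_H * m_H * L_X * v_inf / (Mdot * eps_ph * c)
print(f"Xi = {Xi:.1e}")                          # ~ 4.4e6
\end{verbatim}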
The X-ray optical depth in the wind is
\begin{eqnarray}
\tau_{\rm X} \equiv \kappa_{\rm es} \int_r^\infty \rho dr = \iota\tau_w \;,
\hspace{1cm} \iota \equiv \frac{\kappa_{\rm es}}{\kappa_w} \;.
\label{taux}
\end{eqnarray}
The X-ray photospheric radius, defined by $\tau_{\rm X}=2/3$, is then
\begin{eqnarray}
y_{{\rm ph},{\rm X}} = \alpha \left[1-\left(1+\frac{2}{3\iota\tau_0}
\right)^{1/(1-b)}\right]^{-1} \;.
\label{yphx}
\end{eqnarray}
\section{Propagation of a Strong Shock Wave in the Wind}
\label{shock}
The propagation of a strong shock wave in a gas is determined by two competing
processes: the collection of mass from the ambient gas makes the shock wave
decelerate, and the steep downward gradient of the gas mass density makes the
shock wave accelerate. Based on previous self-similar analytical solutions
and numerical works, \citet{mat99} have proposed a continuous and simple form
for the shock velocity that accommodates both spherical deceleration and
planar acceleration
\begin{eqnarray}
v_s \propto \left(\frac{E_{\rm in}}{m}\right)^{1/2}
\left(\frac{\rho r^3}{m}\right)^{-\beta_1} \;,
\label{mm}
\end{eqnarray}
where $\beta_1\approx 0.2$. In the above equation, $E_{\rm in}$ is the explosion
kinetic energy, $m(r)\equiv M(r) - M_{\rm rem}$, $M_{\rm rem}$ is the mass of the
material that will become the supernova remnant, and $M(r)$ is the mass of the
material contained in radius $r$.
After the shock has collected a sufficient amount of mass so that $m(r)$ does
not change significantly any more, we have $v_s \propto\left(\rho r^3
\right)^{-\beta_1}$, and
the behavior of the shock is purely determined by the profile of the mass
density in the region that the shock is plowing into. Then, for a spherically
symmetric gas, the shock wave accelerates when $d\left(\rho r^3\right)/dr <0$,
and decelerates when $d\left(\rho r^3\right)/dr >0$.
To generalize the formalism to the case of a relativistic shock wave,
\citet{gna85} has suggested to replace the shock velocity $v_s$ on the
left-hand side of equation~(\ref{mm}) by $\Gamma_s \beta_s$, where
$\beta_s\equiv v_s/c$, $c$ is the speed of light, and $\Gamma_s \equiv
\left(1-\beta_s^2\right)^{-1/2}$ is the Lorentz factor. The equation
so obtained quite accurately describes both the limits of non-relativistic
($\beta_s^2 \ll 1$, i.e., $\Gamma_s\approx 1$) and ultra-relativistic
($\beta_s^2\approx 1$, i.e., $\Gamma_s\gg 1$) shocks. However, \citet{tan01}
have shown that it is not accurate in the trans-relativistic regime
($\beta_s^2$ close to $1$ but $\Gamma_s$ not large enough). \citet{tan01}
have suggested the following formula for both non-relativistic and
trans-relativistic shocks
\begin{eqnarray}
\Gamma_s \beta_s &=& p\left(1+p^2\right)^{0.12} \;, \nonumber\\
p &\equiv& 0.736 \left(\frac{E_{\rm in}}{m c^2}\right)^{1/2}
\left(\frac{\rho r^3}{m}\right)^{-0.187} \;.
\label{tan}
\end{eqnarray}
With numerical simulation, \citet{tan01} have verified equation~(\ref{tan})
for trans-relativistic and accelerating shocks with $\Gamma_s \beta_s$ up to
a value $\sim 10$. However, the limited numerical resolution in their
code has not allowed them to follow the acceleration of a non-relativistic
shock into the ultra-relativistic regime \citep{tan01}.
Although equation~(\ref{tan}) has never been tested on relativistic and
decelerating shock waves, in the non-relativistic limit it returns to the
formula of Matzner \& McKee, i.e. equation~(\ref{mm}), which applies to both
accelerating and decelerating shocks. Hence, we assume that
equation~(\ref{tan}) applies to both accelerating and decelerating
relativistic shocks.
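For concreteness, equation~(\ref{tan}) is straightforward to evaluate
numerically. The following minimal Python sketch (the variable names are
ours and purely illustrative) returns $\Gamma_s \beta_s$ for given
$E_{\rm in}$, $m$, and $\rho r^3$ in cgs units:
\begin{verbatim}
import numpy as np

C = 2.998e10  # speed of light [cm s^-1]

def shock_momentum(E_in, m, rho_r3):
    # Gamma_s * beta_s from the fitting formula, eq. (tan):
    # E_in [erg], m [g], rho_r3 = rho * r^3 [g].
    p = 0.736 * np.sqrt(E_in / (m * C**2)) * (rho_r3 / m)**(-0.187)
    return p * (1.0 + p**2)**0.12
\end{verbatim}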
Because of the compactness of WR stars, the problem that we are solving
here is right in the trans-relativistic regime (as will be confirmed later
in this paper). Thus we will use equation~(\ref{tan}) to calculate the
momentum of a shock wave propagating in a wind of a WR star. In addition,
since the wind contains a negligible amount of mass, at the radius where shock
breakout takes place (either inside the star but close to its surface, or
in the wind region), we have $m\approx M_{\rm ej}$, where $M_{\rm ej}$ is the ejected
mass.
Although the equation for the shock movement that we use in this paper is
the same as that used by \citet{mat99} and \citet{tan01}, the mass density
profile in the wind of a star is very different from that in the interior
of a star. In the outer layer of a star the mass density drops quickly as
the radius increases by a very small amount, as described by
equation~(\ref{rho_star}). Hence, as the shock wave approaches the surface
of the star, it always accelerates according to $v_s\propto \rho^{-\beta_1}$
since $m\approx M_{\rm ej}$ and $r\approx R_\star$.
The situation is very different in a stellar wind. A shock wave propagating
in a wind with a density given by
equations~(\ref{rho_w}) and (\ref{vr}) accelerates in the region near the
stellar surface $r = R_\star$, but decelerates at large radius since $\rho
\propto r^{-2}$ and $d(\rho r^3)/dr >0$ for $r\gg R_\star$ (Fig.~\ref{slope}).
The transition from acceleration to deceleration occurs at a radius determined
by $d(\rho r^3)/dr = 0$, where the shock velocity reaches the maximum. The
transition radius is found to be
\begin{eqnarray}
R_a = (1+b) \alpha R_\star \;. \label{ra}
\end{eqnarray}
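As an illustrative number, for the fiducial wind parameters $b=5$ and
$\varepsilon = 0.01$ (for which $\alpha \approx 0.6$; see eq.~\ref{alp}),
equation~(\ref{ra}) yields $R_a \approx 3.6 R_\star$, so the shock keeps
accelerating over a few stellar radii into the wind.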
After passing the transition radius, the shock wave starts decelerating. The
maximum $\Gamma_s \beta_s$ is then obtained by substituting $r=R_a$ into
equation~(\ref{tan})
\begin{eqnarray}
\left(\Gamma_s \beta_s\right)_{\max}
&=& p_{\max}\left(1+p_{\max}^2\right)^{0.12} \;,
\nonumber\\[1mm]
p_{\max} &=& 1.181\, [\alpha f(b)]^{-0.187} \nonumber\\
&& \times \left(\frac{E_{\rm in}}
{M_{\rm ej} c^2}\right)^{1/2} \left(\frac{\Psi}{M_{\rm ej}}
\right)^{-0.187} \;,
\label{pmax0}
\end{eqnarray}
where $f(b) \equiv (1+b)\left(1+1/b\right)^b$, and the mass function $\Psi$
is defined by
\begin{eqnarray}
\Psi \equiv \frac{\dot{M} R_\star}{v_\infty}
= 1.654\times 10^{-9} M_\odot\, \mu
\left(\frac{R_\star}{3 R_\odot}\right) \;,
\label{psi}
\end{eqnarray}
where $\mu$ is defined by equation~(\ref{mu2}).
The function $\Psi$ is an estimate of the mass contained in the photosphere
region of the wind. A correlation between $\Psi$ and $R_{\rm ph}$ is presented in
Appendix~\ref{correlation}.
Substituting fiducial numbers, we get
\begin{eqnarray}
p_{\max} &=& 1.137\mu^{-0.187}\, \left[\frac{\alpha f(b)}{f(5)}
\right]^{-0.187}\left(\frac{E_{\rm in}}{10^{52} {\rm ergs}}
\right)^{0.5} \nonumber\\
&&\times \left(\frac{M_{\rm ej}}{10 M_\odot}\right)^{-0.313}
\left(\frac{R_\star}{3 R_\odot}\right)^{-0.187} \;.
\label{pmax}
\end{eqnarray}
Thus, typically, the shock wave is trans-relativistic.
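As a quick illustration of this statement, setting $\mu = 1$ and $\alpha
f(b) = f(5)$ in equation~(\ref{pmax}), with the fiducial $E_{\rm in}$,
$M_{\rm ej}$, and $R_\star$, gives $p_{\max}\approx 1.14$ and hence
\begin{eqnarray}
\left(\Gamma_s \beta_s\right)_{\max} \approx 1.14\left(1+1.14^2
\right)^{0.12} \approx 1.26 \;, \nonumber
\end{eqnarray}
corresponding to $\beta_s \approx 0.78$ and $\Gamma_s \approx 1.6$, i.e.
neither Newtonian nor ultra-relativistic.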
\section{Energy of the Radiation Contained in the Shock Front}
\label{energy}
The gas pressure behind a relativistic shock front, measured in the
frame of the shocked gas, is \citep{bla76}
\begin{eqnarray}
p_2 = (\gamma_2-1)(\hat{\gamma}_2\gamma_2+1)\rho c^2 \;,
\end{eqnarray}
where $\hat{\gamma}_2$ is the polytropic index of the shocked gas, $\gamma_2$
is the Lorentz factor of the shocked gas, and $\rho$ is the mass density of
the unshocked gas. The Lorentz factor $\gamma_2$ is related to the Lorentz
factor of the shock front $\Gamma = \Gamma_s$ by the equation~(5) of
\citet{bla76}. Since WR winds are radiation-dominated, we have
$\hat{\gamma}_2 = 4/3$. Then, we can approximate $p_2$ by
\begin{eqnarray}
p_2 \approx F_p(\Gamma_s v_s)\, \rho \Gamma_s^2 v_s^2 \;,
\end{eqnarray}
where $F_p(\Gamma_s v_s)\sim 1$ is defined by
\begin{eqnarray}
F_p(x) \equiv \frac{2}{3} + \frac{4}{21\left(1+0.4252\,
x^2\right)^{0.4144}} \;,
\label{fapprox}
\end{eqnarray}
which has the correct asymptotic values as $\Gamma_s v_s\rightarrow
\infty$ (ultra-relativistic limit) and $\Gamma_s v_s\rightarrow 0$
(non-relativistic limit), and has a fractional error $<0.3\%$ in the
trans-relativistic regime.
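Explicitly, equation~(\ref{fapprox}) gives
\begin{eqnarray}
F_p(0) = \frac{2}{3}+\frac{4}{21} = \frac{6}{7} \;, \hspace{1cm}
F_p(\infty) = \frac{2}{3} \;, \nonumber
\end{eqnarray}
so that the approximation reproduces the non-relativistic strong-shock jump
$p_2 = [2/(\hat{\gamma}_2+1)]\,\rho v_s^2 = (6/7)\rho v_s^2$ for
$\hat{\gamma}_2 = 4/3$, and the ultra-relativistic result $p_2 =
(2/3)\Gamma_s^2 \rho c^2$ \citep{bla76}.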
Denoting the temperature of the radiation behind the shock front by
$T_2$, then the pressure of the radiation measured in the frame of the
shocked gas is
\begin{eqnarray}
\frac{1}{3} a T_2^4 \approx p_2 \approx F_p(\Gamma_s v_s)\,
\rho \Gamma_s^2 v_s^2 \;,
\label{p_r}
\end{eqnarray}
where $a$ is the radiation density constant.
A strong shock has a very narrow front. In the non-relativistic limit, the
geometric thickness of the shock front is \citep{ims88,ims89}
\begin{eqnarray}
\Delta r_s \approx \frac{c}{\kappa_{\rm es}\rho v_s} \;, \label{ims}
\end{eqnarray}
where $\kappa_{\rm es}$ is the electron scattering opacity (see Sec.~\ref{wr}).
That is, the thickness of the shock front is equal to the mean free path
of photons multiplied by the optical depth of the shock
\begin{eqnarray}
\tau_s = \frac{c}{v_s} \;. \label{tau_s}
\end{eqnarray}
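For example, a trans-relativistic shock with $\beta_s = 0.9$ has $\tau_s =
1/\beta_s \approx 1.1$, i.e. the shock front is only about one photon mean
free path thick.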
For an ultra-relativistic blast wave, the total energy stored in the shock
wave is proportional to $\Gamma_s^2 r^3$ and hence the thickness of the shell
of shocked particles is $\sim r/\Gamma_s^2$ \citep{bla76}. That is, in the
ultra-relativistic limit, the thickness of the shock front measured in the
rest frame is $\propto \Gamma_s^{-2}$. Hence, using the optical depth of the
shock and that of the wind, we can estimate the geometric thickness of a
relativistic shock front in the rest frame by
\begin{eqnarray}
\Delta r_s \approx \xi\frac{\tau_s}{\tau_{\rm X}}\frac{r}{\Gamma_s^2} \;,
\label{ims_g}
\end{eqnarray}
where $\tau_{\rm X}$ is the X-ray optical depth (eq.~\ref{taux}), and
\begin{eqnarray}
\xi &\equiv& \frac{\tau_{\rm X}}{\kappa_{\rm es}\rho r}
= \left|\frac{\partial\ln\tau_{\rm X}}{\partial\ln r}\right|^{-1}
\nonumber\\
&=& \frac{y}{\alpha(b-1)}\left[1-\frac{\alpha}{y}-
\left(1-\frac{\alpha}{y}\right)^b\right] \;.
\label{xi}
\end{eqnarray}
Equation~(\ref{ims_g}) reduces to equation~(\ref{ims}) in the non-relativistic
limit. The function $\xi(y)$ is an increasing function of $y$. As $y
\rightarrow\infty$, we have $\xi\rightarrow 1$. As $y\rightarrow \alpha$, we
have $\xi\rightarrow 0$. (When $b=1$, $\xi$ is given by eq.~\ref{xi1} in
Appendix~\ref{b1}.)
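The limit $\xi \rightarrow 1$ can be verified by expanding
equation~(\ref{xi}) for $y\gg\alpha$: with $(1-\alpha/y)^b \approx
1-b\alpha/y+b(b-1)\alpha^2/(2y^2)$, we obtain
\begin{eqnarray}
\xi \approx 1-\frac{b\alpha}{2y} \rightarrow 1 \;. \nonumber
\end{eqnarray}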
The total energy of the radiation contained in the shock front, measured
in the frame of the shocked gas, is then
\begin{eqnarray}
E_R \approx \frac{1}{3} \left(aT_2^4\right) 4\pi r^2 (\gamma_2
\Delta r_s) \approx \frac{4\pi\tau_s\gamma_2}{3\tau_{\rm X}
\Gamma_s^2} \xi \left(aT_2^4\right) r^3 \;,
\label{e_r}
\end{eqnarray}
where the factor $1/3$ accounts for the fact that the energy density is
not uniform (concentrated at the boundary of the shock), and the factor
$\gamma_2$ in $(\gamma_2 \Delta r_s)$ accounts for the Lorentz contraction.
Substituting equation~(\ref{p_r}) into equation~(\ref{e_r}), we get
\begin{eqnarray}
E_R \approx \frac{4\pi \tau_s\gamma_2}{\tau_{\rm X} \Gamma_s^2} \xi
F_p(\Gamma_s v_s)\,\rho r^3 \Gamma_s^2 v_s^2 \;.
\label{e_r2}
\end{eqnarray}
Using the definition of $\xi$, we have
\begin{eqnarray}
E_R \sim \frac{4\pi\gamma_2c}{\Gamma_s^2 \kappa_{\rm es} v_s} F_p r^2
\Gamma_s^2 v_s^2 \propto r^2 \Gamma_s v_s \;,
\nonumber
\end{eqnarray}
since $F_p\sim 1$ and $\gamma_2/\Gamma_s \sim 1$.
In the accelerating region ($r<R_a$), $\Gamma_s v_s$ and $\gamma_2$ increase
with $r$. Hence, the total energy contained in the shock front as measured
by the rest observer, $\gamma_2 E_R$, increases with $r$.
In the decelerating region ($r>R_a$, $\rho\sim r^{-2}$), by
equation~(\ref{tan}) we have, approximately, $\Gamma_s v_s \propto r^{-0.2}$,
thus $E_R \propto r^{1.8}$. In the non-relativistic limit, $\gamma_2 E_R
\approx E_R \propto r^{1.8}$. In the
ultra-relativistic limit, $\gamma_2 E_R\propto \gamma_2 r^{1.8} \propto
r^{1.6}$ since $\gamma_2 = \Gamma_s/\sqrt{2}\propto r^{-0.2}$. Hence,
in the region of $r>R_a$, we also expect that the total energy contained in
the shock front, $\gamma_2 E_R$, increases with $r$ although the shock is
decelerating. This is caused by the fact that the volume contained in the
shock front increases with $r$.
\section{Emergence of the Shock and the Characteristic Quantities}
\label{emergence}
Inside the star or deep inside the wind, because of the large optical depth
in the gas, photons have a diffusion velocity that is smaller than the
velocity of the shock front, so that the radiation generated by the shock wave
is trapped inside the boundary of the shock. As the shock wave moves toward
the boundary of the photosphere, the optical depth in the gas drops, until a
radius is reached where the diffusion velocity of photons begins to exceed
the velocity of the shock front. Then, the radiation generated by the shock
wave starts to escape from the star to produce a bright flash, and the shock
becomes visible to a remote observer.
Thus, the shock emerges at a radius where the optical depth of the gas to the
radiation generated by the shock is equal to the optical depth
of the shock
\begin{eqnarray}
\tau_{\rm X} = \frac{c}{v_s} \;, \label{br_con}
\end{eqnarray}
since beyond that radius photons diffuse outward faster than the shock front
moves \citep{ims88,ims89,mat99}. Since $v_s<c$ always, the shock must emerge
at a radius where $\tau_{\rm X} >1$. By equations~(\ref{tau_r}) and (\ref{taux}),
the maximum breakout radius (determined by $\tau_{\rm X}=1$) is at
\begin{eqnarray}
y_{\max} = \alpha\left[1-\left(1+\frac{1}{\iota\tau_0}
\right)^{1/(1-b)}\right]^{-1} \;,
\end{eqnarray}
which is approached by an ultra-relativistic shock. [When $b=1$,
$y_{\max}$ is given by equation~(\ref{ymax1}).]
The evolution of $\Gamma_s \beta_s$ is determined by equation~(\ref{tan}),
which can be recast into
\begin{eqnarray}
\Gamma_s \beta_s &=& p\left(1+p^2\right)^{0.12} \;, \nonumber\\
p &=& 1.181 \left(\frac{E_{\rm in}}{M_{\rm ej} c^2}\right)^{1/2}\left(
\frac{\Psi}{M_{\rm ej}}\right)^{-0.187} \nonumber\\
&& \times y^{-0.187}\left(1-\frac{\alpha}{y}
\right)^{-0.187 b} \;,
\label{tan2}
\end{eqnarray}
where equations~(\ref{rho_w}) and (\ref{vr}) have been used, and $\Psi$ is
the mass function defined by equation~(\ref{psi}).
With equations~(\ref{tan2}), (\ref{taux}), and (\ref{tau_r}) (or \ref{tau1}
if $b=1$), we can calculate the radius where the shock breakout takes place,
$R_{\rm br} \equiv y_{\rm br} R_\star$, by numerically solving the algebraic
equation~(\ref{br_con}).
Once $y_{\rm br}$ is known, the momentum of the shock wave at $y=y_{\rm br}$
follows from equation~(\ref{tan2}).
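To make this procedure concrete, the following minimal Python sketch
solves equation~(\ref{br_con}) by bracketed root finding. All parameter
values are purely illustrative (in practice $\alpha$ and $\tau_0$ follow
from $\varepsilon$, $b$, and $\kappa_w$), and the optical-depth profile
used below, $\tau_{\rm X}(y) = \iota\tau_0\left[(1-\alpha/y)^{1-b}-1\right]$,
is the form implied by equation~(\ref{yphx}) and the expression for
$y_{\max}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative (hypothetical) parameters; alpha ~ 0.6 roughly
# corresponds to the fiducial epsilon = 0.01, b = 5.
b, alpha = 5.0, 0.6
tau0     = 50.0        # wind optical-depth scale
iota     = 0.2 / 0.7   # kappa_es / kappa_w
p0       = 1.0         # 1.181 (E_in/M_ej c^2)^0.5 (Psi/M_ej)^-0.187

def gamma_beta(y):
    # Shock momentum Gamma_s*beta_s at y = r/R_star, eq. (tan2).
    p = p0 * y**(-0.187) * (1.0 - alpha / y)**(-0.187 * b)
    return p * (1.0 + p**2)**0.12

def beta_s(y):
    x = gamma_beta(y)
    return x / np.sqrt(1.0 + x**2)

def tau_X(y):
    # X-ray optical depth of the wind outside radius y.
    return iota * tau0 * ((1.0 - alpha / y)**(1.0 - b) - 1.0)

# Maximum breakout radius, where tau_X = 1.
y_max = alpha / (1.0 - (1.0 + 1.0/(iota*tau0))**(1.0/(1.0 - b)))

# Breakout condition tau_X = c/v_s = 1/beta_s, eq. (br_con).
y_br = brentq(lambda y: tau_X(y) - 1.0/beta_s(y), 1.0, y_max)
print("y_br =", y_br, " Gamma_s beta_s =", gamma_beta(y_br))
\end{verbatim}
The breakout values of $E_{\rm br}$, $T_{\rm br}$, and $t_{\rm br}$ then
follow by direct substitution of $y_{\rm br}$ into
equations~(\ref{ebr_eq})--(\ref{tbr_eq}).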
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=0,scale=0.75]{breakout_kappa.eps}
\caption{Characteristic quantities of shock emergence as functions of
the opacity in the stellar wind. The solid line corresponds to $\varepsilon
=10^{-2}$. The dashed line corresponds to $\varepsilon=10^{-3}$. Other
parameters are: $b = 5$, $E_{\rm in} = 10^{52} {\rm ergs}$, $M_{\rm ej} = 10
M_\odot$, and $R_\star = 3 R_\odot$.
}
\label{breakout_kappa}
\end{figure*}
Since the shock breakout occurs at a radius where $\tau_s = \tau_{\rm X}$
(eq.~\ref{br_con}), by equation~(\ref{e_r2}), the total energy of the
radiation generated by the shock breakout as measured by a rest observer is
\begin{eqnarray}
E_{\rm br} \equiv \left[\gamma_2 E_R\right]_{r=R_{\rm br}}
\approx \left.4\pi \xi F_\gamma^2 F_p\, \rho r^3
\Gamma_s^2 v_s^2\right|_{r=R_{\rm br}} \;,
\label{ebr0}
\end{eqnarray}
where $F_p = F_p(\Gamma_s v_s)\sim 1$, $F_\gamma = F_\gamma(\Gamma_s)\equiv
\gamma_2/\Gamma_s\sim 1$. The Lorentz factors $\gamma_2$ and $\Gamma_s$ are
related by the equation~(5) of \citet{bla76}. For the case of $\hat{\gamma}_2
=4/3$, $F_\gamma$ can be approximated by
\begin{eqnarray}
F_\gamma(x) \approx \frac{1}{\sqrt{2}}+\frac{1-1/\sqrt{2}}
{[1+0.9572\, (x - 1)]^{0.9325}} \;,
\nonumber
\end{eqnarray}
which gives the correct asymptotic values at the non-relativistic limit
($\Gamma_s\rightarrow 1$) and the ultra-relativistic limit ($\Gamma_s
\rightarrow \infty$), and has a fractional error $<0.08\%$ in the
trans-relativistic case.
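At the two ends, this expression gives $F_\gamma(1) = 1$, so that $\gamma_2
\rightarrow \Gamma_s$ in the non-relativistic limit, and $F_\gamma(\infty) =
1/\sqrt{2}$, recovering the ultra-relativistic relation $\gamma_2 =
\Gamma_s/\sqrt{2}$ of \citet{bla76}.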
Substituting equations~(\ref{rho_w}) and (\ref{vr}) into equation~(\ref{ebr0})
and making use of equation~(\ref{psi}), we get
\begin{eqnarray}
E_{\rm br} &\approx& \Psi c^2 \left[\xi F_\gamma^2 F_p \Gamma_s^2
\beta_s^2\right]_{y=y_{\rm br}} y_{\rm br}\left(1-\frac{\alpha}
{y_{\rm br}}\right)^{-b} \nonumber\\
&\approx& 1.48 \times 10^{46} {\rm ergs}\; \mu
\left(\frac{R_\star}{3 R_\odot}\right)
\left(\frac{y_{\rm br}}{5}\right)
\left(1-\frac{\alpha}{y_{\rm br}}\right)^{-b} \nonumber\\
&&\times \left[\xi F_\gamma^2 F_p \Gamma_s^2 \beta_s^2
\right]_{y=y_{\rm br}} \;.
\label{ebr_eq}
\end{eqnarray}
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=0,scale=0.75]{breakout_ein.eps}
\caption{Characteristic quantities of shock emergence as functions of
the explosion kinetic energy. The solid line corresponds to $\varepsilon=
10^{-2}$. The dashed line corresponds to $\varepsilon=10^{-3}$. Other
parameters are: $b = 5$, $\kappa_w = 0.7$ cm$^2$ g$^{-1}$, $M_{\rm ej} = 10
M_\odot$, and $R_\star = 3 R_\odot$. The dotted line shows the solution
for shock breakout from a star without a wind, with the same $M_{\rm ej}$,
$R_\star$, and $\kappa_\star= 0.2$ cm$^2$ g$^{-1}$, $\zeta=1$.
}
\label{breakout_ein}
\end{figure*}
Similarly, from equation~(\ref{p_r}), we can obtain the temperature of the
radiation measured in a rest frame
\begin{eqnarray}
T_{\rm br} &\equiv& \left[\gamma_2 T_2\right]_{r=R_{\rm br}} \nonumber\\
&\approx& \left(\frac{3\Psi c^2}{4\pi a R_\star^3}
\right)^{1/4} \left[F_\gamma F_p^{1/4}\Gamma_s^{3/2}
\beta_s^{1/2}\right]_{y=y_{\rm br}} \nonumber\\
&&\times y_{\rm br}^{-1/2}\left(1-\frac{\alpha}{y_{\rm br}}
\right)^{-b/4} \nonumber\\
&\approx& 0.800\times 10^6 {\rm K}\; \mu^{0.25}
\left(\frac{R_\star}{3 R_\odot}\right)^{-0.5}
\left(\frac{y_{\rm br}}{5}\right)^{-0.5} \nonumber\\
&&\times\left(1-\frac{\alpha}{y_{\rm br}}\right)^{-b/4}
\left[F_\gamma F_p^{1/4}\Gamma_s^{3/2} \beta_s^{1/2}
\right]_{y=y_{\rm br}} \;.
\label{tembr_eq}
\end{eqnarray}
The time-duration of the shock breakout event is set by the time spent by
a photon to diffuse out to the surface of the photosphere from the breakout
radius. Since at the radius of shock breakout the diffusion velocity of
photons is equal to the velocity of the shock wave, we have
\begin{eqnarray}
t_{\rm br} &\approx& \frac{R_{{\rm ph},{\rm X}}-R_{\rm br}}{v_{s,{\rm br}}} = \frac{R_\star
}{\beta_{s,{\rm br}} c} \left(y_{{\rm ph},{\rm X}}-y_{\rm br}\right) \nonumber\\
&\approx& 6.96\, {\rm s}\, \left(\frac{R_\star}{3 R_\odot}
\right)\beta_{s,{\rm br}}^{-1} \left(y_{{\rm ph},{\rm X}}-y_{\rm br}\right) \;,
\label{tbr_eq}
\end{eqnarray}
where $R_{{\rm ph},{\rm X}} = y_{{\rm ph},{\rm X}} R_\star$ is the X-ray photospheric radius
(eqs.~\ref{yphx} and \ref{yphx1}), and $v_{s,{\rm br}} = \beta_{s,{\rm br}} c
\equiv v_s(r=R_{\rm br})$ is the speed of the shock wave at the time of breakout.
\section{Results}
\label{result}
Unlike in the cases of non-relativistic and ultra-relativistic shock waves,
where the quantities characterizing the transient event from the shock breakout
can be expressed with factorized scaling relations of input parameters (e.g.,
the eqs.~36--38 of Matzner \& McKee 1999), in the trans-relativistic case
here we must numerically solve the relevant equations for the characteristic
quantities.
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=0,scale=0.75]{breakout_mej.eps}
\caption{Characteristic quantities of shock emergence as functions of
the ejected mass. The solid line corresponds to $\varepsilon=10^{-2}$.
The dashed line corresponds to $\varepsilon=10^{-3}$. Other parameters are:
$b = 5$, $\kappa_w = 0.7$ cm$^2$ g$^{-1}$, $E_{\rm in} = 10^{52}$ ergs, and
$R_\star = 3 R_\odot$. The dotted line shows the solution for shock breakout
from a star without a wind, with the same $E_{\rm in}$, $R_\star$, and
$\kappa_\star= 0.2$ cm$^2$ g$^{-1}$, $\zeta=1$.
}
\label{breakout_mej}
\end{figure*}
All the relevant equations are in Sec.~\ref{emergence}, supplemented by the
formulae for the wind mass function $\Psi$ and the optical depth in
Secs.~\ref{wr} and \ref{shock} (and Appendix~\ref{b1} when $b=1$). We can
eliminate $\dot{M}$ and $v_\infty$ from the equations by using
\begin{eqnarray}
\Psi = \frac{80\pi (b-1) \alpha R_\star^2}{\kappa_w\left[(1-\alpha)
^{1-b}-1\right]} \;,
\label{psi_ka}
\end{eqnarray}
which is obtained by substituting equation~(\ref{tau0}) into the definition of
$\Psi$ (eq.~\ref{psi}). [When $b=1$, the corresponding equation is
(\ref{psi1}).] Since $\alpha$ is a function of $\varepsilon$ and $b$
(eq.~\ref{alp}), we can then choose the input parameters to be $E_{\rm in}$,
$M_{\rm ej}$, $R_\star$, $\varepsilon$, $b$, and $\kappa_w$.
Note that two opacities are involved in our model: $\kappa_w$, the optical opacity
in the wind of a WR star, which is used to calculate the mass density profile
in the wind before the shock wave passes through it; $\kappa_{\rm X} = \kappa_{\rm es}$,
the X-ray opacity in the wind, which is used to calculate the interaction of
the X-ray photons generated by the shock wave with particles in the wind
(see Sec.~\ref{wr}). Since $\kappa_{\rm es} = 0.2$ cm$^2$ g$^{-1}$ is a constant
but $\kappa_w$ is somewhat uncertain, we treat $\kappa_w$ as an input
parameter.
Compared to the case of shock breakout from a star without a wind
\citep[and Appendix~\ref{star} in this paper]{mat99}, here we have two
additional parameters: $\varepsilon$ and $b$, both describing the shape of
the wind velocity profile (eqs.~\ref{vr} and \ref{alp}). However, in the case
of a star, the opacity is fairly well determined so there are essentially
only three parameters: the explosion energy $E_{\rm in}$, the ejected mass $M_{\rm ej}$,
and the stellar radius $R_\star$. Although there is yet another parameter
$\zeta \equiv \rho_1/\rho_\star$ (see Appendix~\ref{star}), which
is typically $0.2$ for blue supergiants and $0.5$ for red supergiants
\citep{cal04}, the characteristic quantities (at least the energy, the
temperature, and the shock momentum) at shock breakout are very
insensitive to $\zeta$ \citep{mat99}. For the problem considered here, i.e., a
dense stellar wind surrounding a star, the opacity $\kappa_w$ is poorly known.
Modeling of the WR winds indicates that $\kappa_w$ is in the range of
$0.3$--$0.9$ cm$^2$ g$^{-1}$ at the sonic point, and slightly larger at
larger radii \citep{nug02}.
The parameter $\varepsilon$, which is the ratio of the wind velocity at the
stellar surface to the terminal velocity of the wind, is usually thought to
be in the range of $0.001$--$0.1$, and typically around $0.01$ \citep{sch96}.
The parameter $b$, which characterizes the log-slope of the wind velocity in
the region near the stellar surface, is taken to be unity in the ``standard
model'' of stellar winds. However, as already mentioned in Sec.~\ref{wr}, for
the winds of WR stars $b$ can be much larger than unity as argued by
\citet{rob94}, \citet{sch97}, and \citet{lep99}, and is typically in the
range of $4$--$6$ \citep{nug02}.
In our numerical modeling, we allow $\kappa_w$ to vary from $0.2$ to $0.9$
cm$^2$ g$^{-1}$, $\varepsilon$ to vary from $10^{-5}$ to $10^{-1}$, and
$b$ from 1 to 10. We allow $E_{\rm in}$ to vary from $10^{51}$ ergs (for normal
core-collapse supernovae) to $10^{53}$ ergs (for hypernovae), and $M_{\rm ej}$
to vary from $1 M_\odot$ to $20 M_\odot$. Although WR stars are compact and
have small radii,
to fully explore the effect of variation in the stellar radius on the
results, we allow $R_\star$ to vary from $1 R_\odot$ to $30 R_\odot$.
Whenever numbers are quoted, we refer to the fiducial values $\kappa_w= 0.7$
cm$^2$ g$^{-1}$, $\varepsilon = 0.01$, $b=5$, $E_{\rm in} = 10^{52}$ ergs,
$M_{\rm ej} = 10 M_\odot$, and $R_\star = 3 R_\odot$, unless otherwise stated.
Our numerical results for the characteristic quantities of the shock breakout,
including the total energy ($E_{\rm br}$, eq.~\ref{ebr_eq}), the temperature
($T_{\rm br}$, eq.~\ref{tembr_eq}), the time-duration ($t_{\rm br}$, eq.~\ref{tbr_eq}),
and the shock momentum ($\Gamma_{s,{\rm br}}\beta_{s,{\rm br}}$, eq.~\ref{tan2} with
$y=y_{\rm br}$) are presented in Figs.~\ref{breakout_kappa}--\ref{breakout_b}.
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=0,scale=0.75]{breakout_rstar.eps}
\caption{Characteristic quantities of shock emergence as functions of
the core radius of the star. The solid line corresponds to $\varepsilon=
10^{-2}$. The dashed line corresponds to $\varepsilon=10^{-3}$. Other
parameters are: $b = 5$, $\kappa_w = 0.7$ cm$^2$ g$^{-1}$, $E_{\rm in} = 10^{52}$
ergs, and $M_{\rm ej} = 10 M_\odot$. The dotted line shows the solution for
shock breakout from a star without a wind, with the same $E_{\rm in}$, $M_{\rm ej}$,
and $\kappa_\star = 0.2$ cm$^2$ g$^{-1}$, $\zeta=1$.
}
\label{breakout_rstar}
\end{figure*}
Figure~\ref{breakout_kappa} shows $E_{\rm br}$, $T_{\rm br}$, $t_{\rm br}$, and
$\Gamma_{s,{\rm br}} \beta_{s,{\rm br}}$ as functions of the opacity $\kappa_w$.
Solid lines correspond to $\varepsilon = 10^{-2}$. Dashed lines correspond
to $\varepsilon =10^{-3}$. Other parameters take the fiducial values, as
indicated in the figure caption. For $\varepsilon = 10^{-2}$, $E_{\rm br}$ is a
slowly varying but non-monotonic function of $\kappa_w$. For $\varepsilon = 10^{-3}$,
$E_{\rm br}$ increases with $\kappa_w$. As $\kappa_w$ increases from $0.2$
to $0.9$ cm$^2$ g$^{-1}$, $E_{\rm br}$ increases by a factor $\approx 1.2$ when
$\varepsilon =10^{-2}$, and $\approx 2.6$ when $\varepsilon =10^{-3}$. The
temperature $T_{\rm br}$ increases by a factor $\approx 4$ in both cases. The
opacity $\kappa_w$ also has an effect on $t_{\rm br}$, which decreases by a
factor $\approx 4$ when $\varepsilon =10^{-2}$, and $\approx 2.6$ when
$\varepsilon =10^{-3}$. Similar to the temperature, the momentum of the shock
front is also an increasing function of $\kappa_w$.
As $\kappa_w$ increases from $0.2$ to $0.9$ cm$^2$ g$^{-1}$, $\Gamma_{s,{\rm br}}
\beta_{s,{\rm br}}$ increases by a factor $\approx 2.2$ (for both $\varepsilon =
10^{-2}$ and $\varepsilon = 10^{-3}$). Similar to the case of breakout from
a star, the results do not change dramatically with the opacity if $b$ is
around 5. Thus, our poor knowledge of the opacity in stellar winds will
not affect our results drastically.
All the dependence on $\kappa_w$ manifests itself through the mass function
$\Psi$ in equation~(\ref{psi_ka}) (and eq.~\ref{psi1} when $b=1$), which
shows that $\Psi\propto \kappa_w^{-1}$. Then, from the condition for the
shock breakout (eq.~\ref{br_con}), it can be checked that $y_{\rm br}$ decreases
with $\kappa_w$, and $(1-\alpha/y_{\rm br})^{-b}$ increases with $\kappa_w$.
From the dependence of $E_{\rm br}$, $T_{\rm br}$, $t_{\rm br}$, and $\Gamma_{s,{\rm br}}
\beta_{s,{\rm br}}$ on $\Psi$ and $y_{\rm br}$, it is not hard to understand the trend
shown in Fig.~\ref{breakout_kappa}. First, equation~(\ref{tan2}) implies
that $\Gamma_{s,{\rm br}} \beta_{s,{\rm br}}$ is a strongly increasing function of
$\kappa_w$. Then, equation~(\ref{tbr_eq}) implies that $t_{\rm br}$ is a
decreasing function of $\kappa_w$. In equation~(\ref{ebr_eq}), $\Psi$ and
$y_{\rm br}$ decrease with $\kappa_w$, but $\Gamma_{s,{\rm br}}^2 \beta_{s,{\rm br}}^2$
and $(1-\alpha/y_{\rm br})^{-b}$ increase with $\kappa_w$. The overall result on
$E_{\rm br}$ is that shown in the top-left panel in Fig.~\ref{breakout_kappa}.
Since the radius of shock breakout decreases with $\kappa_w$, the temperature
$T_{\rm br}$ must increase with $\kappa_w$.
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=0,scale=0.75]{breakout_b.eps}
\caption{Characteristic quantities of shock emergence as functions of
the wind parameter $b$. The solid line corresponds to $\varepsilon=10^{-2}$.
The dashed line corresponds to $\varepsilon=10^{-3}$. Other parameters
are: $b = 5$, $\kappa_w = 0.7$ cm$^2$ g$^{-1}$, $E_{\rm in} = 10^{52}$ ergs,
$M_{\rm ej} = 10 M_\odot$, and $R_\star = 3 R_\odot$.
}
\label{breakout_b}
\end{figure*}
Figure~\ref{breakout_ein} shows the same set of characteristic quantities
as functions of the explosion kinetic energy. Symbols and values of parameters
are similar to Fig.~\ref{breakout_kappa} and are explained in the figure
caption. To compare with the results for a star without a wind, we show with
dotted lines the corresponding characteristic quantities calculated for the
shock breakout from a star with the formulae in Appendix~\ref{star}, for the
same values of $M_{\rm ej}$ and $R_\star$, and $\kappa_\star = 0.2$ cm$^2$
g$^{-1}$, $\zeta = 1$.
As the explosion energy increases from $10^{51}$ ergs to $10^{53}$ ergs, the
breakout energy increases by a factor $\approx 117$ when $\varepsilon =
$10^{-2}$, and $\approx 188$ when $\varepsilon = 10^{-3}$. This rate of
increase is much faster than in the case of breakout from a star, in which
the breakout energy increases only by a factor of $\approx 22$. The increase
in the temperature is also faster: by a factor of $\approx 33$ when
$\varepsilon = 10^{-2}$ and $\approx 55$ when $\varepsilon = 10^{-3}$ for a
star with a dense wind, but only $\approx 4.6$ for a star without a wind.
As for the breakout time-duration, in the case of a stellar wind it does
not change rapidly as $E_{\rm in}$ increases, in contrast to the case of a
star. This is because when a star is surrounded by a dense stellar wind
the shock wave has more space for acceleration, and hence at the time of
emergence the shock front is more relativistic (see the panel for the
shock momentum), with its velocity approaching the speed of light. As we have seen in
Sec.~\ref{emergence}, when the shock velocity approaches the speed of light,
the breakout radius approaches $y_{\max}$. The distance between $y_{\max}$
and $y_{\rm ph}$ does not change with the explosion energy.
The momentum of the shock front varies with $E_{\rm in}$ at about the same rate
for the case of a stellar wind and the case of a star.
The curvature of the curves in Fig.~\ref{breakout_ein} confirms our claim
at the beginning of this section that in the trans-relativistic case the
characteristic quantities of shock breakout in general cannot be written
as factorized scaling formulae of input parameters.
Figure~\ref{breakout_mej} shows the characteristic quantities as functions
of the ejected mass. As the ejected mass $M_{\rm ej}$
increases from $1 M_\odot$ to $20 M_\odot$, the breakout energy decreases by
a factor $\approx 7.8$ when $\varepsilon =10^{-2}$, and $\approx 9.1$ when
$\varepsilon = 10^{-3}$, faster than in the case of a star, for which the
decrease is by a factor of $\approx 4.4$. The temperature also drops faster. The
variation in the breakout time-duration is slow in both the case of
stellar winds and that of stars. That is, the time-duration of the shock breakout
is not very sensitive to the ejected mass. The momentum of the shock front
drops slightly more slowly than in the case of a star.
Figure~\ref{breakout_rstar} shows the characteristic quantities as functions
of the core radius of the star, which, as in the case of a star \citep{mat99},
is the parameter that most dramatically affects the values of the
characteristic quantities. As $R_\star$ increases from $1 R_\odot$ to $30
R_\odot$, the breakout energy increases by a factor $\approx 69$ when
$\varepsilon =10^{-2}$, and $\approx 51$ when $\varepsilon = 10^{-3}$.
However, this factor is smaller than that in the case of a star
without a wind, which is $\approx 277$. The temperature drops very fast,
caused by the increase in the area of the surface emitting the radiation. As
$R_\star$ increases from $1 R_\odot$ to $30 R_\odot$, the temperature drops
by a factor of $\approx 16$ when $\varepsilon = 10^{-2}$, and $\approx 21$
when $\varepsilon = 10^{-3}$, in contrast to the factor $\approx 5.8$ in the
case of a star. The variation in the stellar radius also has a dramatic effect
on the breakout time-duration, although the effect is less prominent than
in the case of a star. The factor by which the breakout time-duration
increases is $\approx 43$ when $\varepsilon =10^{-2}$, and $\approx 32$
when $\varepsilon = 10^{-3}$, in contrast to the factor $\approx 590$ in the case
of a star. The momentum of the shock front drops by a factor $\approx 4.8$
when $\varepsilon =10^{-2}$, and $\approx 4.5$ when $\varepsilon = 10^{-3}$,
similar to the factor $\approx 3$ in the case of a star.
\begin{figure*}
\vspace{2pt}
\includegraphics[angle=-90,scale=0.741]{tbr_obs.eps}
\caption{The observed time-duration of shock breakout (defined by
eq.~\ref{tbr_obs}) for the models in
Figs.~\ref{breakout_kappa}--\ref{breakout_b} (from left to right). The solid
line corresponds to $\varepsilon = 10^{-2}$. The dashed line corresponds to
$\varepsilon = 10^{-3}$. The thin lines are the breakout time-duration
$t_{\rm br}$ without the light-travel-time correction (eq.~\ref{tbr_eq}). The
figure shows that in all models the light-travel-time correction is not
important. In some models the thin and thick lines are not even visually
distinguishable.
}
\label{t_obs}
\end{figure*}
Finally, in Fig.~\ref{breakout_b} we show the characteristic quantities as
functions of $b$. The breakout energy $E_{\rm br}$ increases with $b$. As $b$
increases from $1$ to $10$, $E_{\rm br}$ increases by a factor of $\approx 2.9$
when $\varepsilon =10^{-2}$, and $\approx 12$ when $\varepsilon = 10^{-3}$.
The smaller $\varepsilon$ is, the faster $E_{\rm br}$ increases with $b$.
However, if $b$ is in the range of 4--6, the change in $E_{\rm br}$ is not
substantial, only by a factor of $\approx 1.2$--$1.3$. The temperature has a
similar trend. When $\varepsilon =10^{-2}$, the time-duration
$t_{\rm br}$ decreases with $b$. When $\varepsilon =10^{-3}$, $t_{\rm br}$ decreases
with $b$ until $b$ grows to a value $\approx 3$, beyond which $t_{\rm br}$
increases, but only slowly. The momentum of the shock front increases with $b$,
caused by the fact that a larger $b$ results in a steeper density profile
and an enhanced acceleration of the shock.
From Figs.~\ref{breakout_kappa}--\ref{breakout_rstar}, and
Fig.~\ref{breakout_b} at $b=4$--$6$, the effects of variation in $\varepsilon$
from $10^{-2}$ to $10^{-3}$ can be summarized as follows: the breakout energy
$E_{\rm br}$ increases by a factor of $1.8$--$4$; the temperature $T_{\rm br}$ increases
by a factor of $3$--$5$; the shock momentum $\Gamma_{s,{\rm br}} \beta_{s,{\rm br}}$
increases by a factor of $2.3$--$2.5$; and the time-duration $t_{\rm br}$ decreases
by a factor of $2.3$--$4.3$.
Figures~\ref{breakout_ein}--\ref{breakout_rstar} also show that, for a star
with a dense wind the shock breakout is more energetic than that for a star
without a wind. This is not surprising, since a star with a dense wind has
effectively a larger radius so that the shock wave has more space and more
time for acceleration. For the same set of common parameters
($E_{\rm in}$, $M_{\rm ej}$, $R_\star$, but not the opacity) and typical parameter
values, the total energy of the radiation from shock breakout is larger by a
factor $>10$ if the star is surrounded by a dense wind. The momentum of the shock
front is also larger by a factor $\sim 10$. The temperature does not show
a universal trend because of the increase in the shock breakout radius, but
generally it is larger if the progenitor is surrounded by a dense wind,
owing to the great enhancement in the breakout energy. The time-duration is
larger for the case of stellar winds as an obvious result of increase in
the effective radius of the star.
The shock breakout occurs inside the maximum acceleration radius $R_a$
(eq.~\ref{ra}) in all the models presented in
Figs.~\ref{breakout_kappa}--\ref{breakout_b}.
In the above calculations of the breakout time-duration, the light-travel-time
has not been taken into account. In other words, $t_{\rm br}$ is the duration
measured in the supernova frame. The duration observed by a remote observer,
$t_{{\rm br},{\rm obs}}$, differs from $t_{\rm br}$ by an effect caused by the travel-time of
light---which arises from the fact that an observer will see more distant
annuli of the stellar disk with a time-delay \citep{ens92}. The effect of
light-travel-time could be extremely important when the $t_{\rm br}$ calculated by
equation~(\ref{tbr_eq}) is short, which is definitely true here since WR
stars are compact. We approximate the observed time-duration of the shock
breakout event by
\begin{eqnarray}
t_{{\rm br},{\rm obs}} = \sqrt{t_{\rm br}^2+t_{\rm light}^2} \;,
\label{tbr_obs}
\end{eqnarray}
where $t_{\rm light}$ is the light-travel-time.
In the calculation of the light-travel-time $t_{\rm light}$, the relativistic beaming
effect must be taken into account since the shock wave in our models is
relativistic \citep{kat94}. In the ultra-relativistic limit, the beaming
angle is $\theta\sim 1/\Gamma_{{\rm ph},X}$, where $\Gamma_{{\rm ph},X}$ is the Lorentz
factor of the photosphere, which can be estimated by $\Gamma_{{\rm ph},X}\approx
\Gamma_s$. In the non-relativistic limit, we should have $\theta=\pi/2$.
Hence, we use an interpolation formula for $\theta$
\begin{eqnarray}
\theta = \frac{\pi}{\pi(\Gamma_s-1)+2} \;. \label{theta}
\end{eqnarray}
Then, the light-travel-time is
\begin{eqnarray}
t_{\rm light} = \frac{R_{{\rm ph},{\rm X}}}{c}(1-\cos\theta) \;.
\end{eqnarray}
When $\Gamma_s\gg 1$, equation~(\ref{theta}) gives $\theta\approx 1/\Gamma_s$,
so $1-\cos\theta\approx\theta^2/2$ and we have $t_{\rm light} \approx
R_{{\rm ph},{\rm X}}/(2\Gamma_s^2 c)$.
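As an illustrative end-to-end check, take the (rounded) entries of Model 1
in Table~\ref{model} below: $y_{{\rm ph},{\rm X}} = 1.73$, $y_{\rm br} = 1.45$,
$\Gamma_{s,{\rm br}}\beta_{s,{\rm br}} = 1.98$, and $R_\star = 3 R_\odot$. These give
$\beta_{s,{\rm br}} \approx 0.89$ and $\Gamma_{s,{\rm br}} \approx 2.22$, so
equation~(\ref{tbr_eq}) yields $t_{\rm br} \approx 6.96\,{\rm s}\times 0.28/0.89
\approx 2.2$~s, while equation~(\ref{theta}) gives $\theta \approx 0.54$ and
hence $t_{\rm light} = (y_{{\rm ph},{\rm X}} R_\star/c)(1-\cos\theta) \approx
12.0\,{\rm s}\times 0.14 \approx 1.7$~s. Equation~(\ref{tbr_obs}) then gives
$t_{{\rm br},{\rm obs}} \approx \sqrt{2.2^2+1.7^2}\;{\rm s} \approx 2.8$~s, in
agreement with the tabulated value.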
\begin{table*}
\centering
\begin{minipage}{160mm}
\caption{Models of Type Ibc supernova explosion and the predicted
characteristic parameters for the shock breakout. Input parameters: $E_{\rm in}$,
$M_{\rm ej}$, $R_\star$, $\kappa_w$, $\varepsilon$, and $b$. Output parameters:
$y_{{\rm ph},{\rm X}}$, $y_{\rm br}$, $(\Gamma_s\beta_s)_{{\rm br}} = \Gamma_{s,{\rm br}}\beta_{s,
{\rm br}}$, $E_{\rm br}$, $T_{\rm br}$, $t_{{\rm br},{\rm obs}}$, and $\mu$. Models 1--4 are normal
SNe Ibc. Models 5--7 are hypernovae.
}
\label{model}
\begin{tabular}{llllllllllllll}
\hline
Model\hspace{0.cm} & ${E_{\rm in}}^{\rm a}$\hspace{0.cm} &
${M_{\rm ej}}^{\rm b}$\hspace{0.cm} & ${R_\star}^{\rm c}$\hspace{0.cm} &
${\kappa_w}^{\rm d}$\hspace{0.08cm} & ${\varepsilon}^{\rm e}$\hspace{0.42cm} &
${b}^{\rm f}$\hspace{0.08cm} & ${y_{{\rm ph},{\rm X}}}^{\rm g}$\hspace{0.cm} &
${y_{\rm br}}^{\rm h}$\hspace{0.27cm} & ${(\Gamma_s\beta_s)_{{\rm br}}}^{\rm i}$ &
${E_{\rm br}}^{\rm j}$\hspace{0.19cm} & ${T_{\rm br}}^{\rm k}$\hspace{0.cm} &
${t_{{\rm br},{\rm obs}}}^{\rm l}$\hspace{0.cm} & ${\mu}^{\rm m}$\\
\hline
1 & 1 & 3 & 3 & 0.7 & 0.01 & 5 & 1.73 & 1.45 & 1.98 & 1.3 & 5.4 & 2.8 & 0.30\\
2 & 1 & 4 & 3 & 0.2 & 0.02 & 5 & 4.24 & 2.45 & 0.818 & 1.2 & 1.9 & 25 & 1.7\\
3 & 1.5 & 6 & 5 & 0.5 & 0.01 & 1 & 3.11 & 1.61 & 0.760 & 1.3 & 1.7 & 35 & 2.4\\
4 & 2 & 2 & 5 & 0.7 & 0.001 & 5 & 1.31 & 1.22 & 6.72 & 19 & 28 & 1.0 & 0.095\\
5 & 40 & 10 & 3 & 0.2 & 0.01 & 5 & 3.21 & 2.53 & 5.73 & 22 & 16 & 4.9 & 1.0\\
6 & 50 & 10 & 10 & 0.7 & 0.01 & 5 & 1.73 & 1.50 & 7.52 & 140 & 22 & 5.5 & 0.98\\
7 & 60 & 15 & 10 & 0.7 & 0.002 & 5 & 1.39 & 1.28 & 13.7 & 320 & 62 & 2.6 & 0.32\\
\hline
\end{tabular}
$^{\rm a}$Explosion kinetic energy in units of $10^{51}$ ergs.\\
$^{\rm b}$Ejected mass in units of $M_\odot$.\\
$^{\rm c}$Core radius of the progenitor (the radius at the optical depth of
20), in units of $R_\odot$.\\
$^{\rm d}$Optical opacity in the wind, in units of cm$^2$ g$^{-1}$.\\
$^{\rm e}$Ratio of the wind velocity at the stellar surface (where $r=
R_\star$) to the terminal velocity of the wind (eq.~\ref{alp}).\\
$^{\rm f}$Parameter specifying the profile of the wind velocity
(eq.~\ref{vr}).\\
$^{\rm g}$Radius of the X-ray photosphere in units of $R_\star$.\\
$^{\rm h}$Radius of the shock front at the time of shock breakout in units of
$R_\star$.\\
$^{\rm i}$Momentum of the shock front (eq.~\ref{tan2}) at the time of shock
breakout.\\
$^{\rm j}$Total energy of the radiation from the shock breakout, in units
of $10^{46}$ ergs.\\
$^{\rm k}$Temperature of the radiation from the shock breakout, in units of
$10^6\, {\rm K} = 0.08617$ keV.\\
$^{\rm l}$Observed time-duration of the shock breakout event, in units of
seconds.\\
$^{\rm m}$Observable defined by the mass-loss rate and the terminal velocity
of the stellar wind through eq.~(\ref{mu2}).\\
\end{minipage}
\end{table*}
For the models presented in Figs.~\ref{breakout_kappa}--\ref{breakout_b}, we
have calculated the light-travel-time correction to the observed time-duration
of the shock breakout event. The results are shown in Fig.~\ref{t_obs}. It
turns out that the light-travel-time correction is not important. This is
caused by the fact that for relativistic shock breakout the light-travel-time
is significantly reduced by the relativistic beaming effect.
Numerical results for a set of supernova and WR star models are presented in
Table~\ref{model}. From these results we find that the efficiency of
converting the supernova explosion energy to the shock breakout energy,
defined by the ratio of the breakout energy to the explosion energy, is
typically in the range of $10^{-4}$--$10^{-5}$. This efficiency is smaller
than that in the case of Type II supernovae, which is typically $\sim
10^{-3}$ if the progenitor
is a red supergiant, or $\sim 10^{-4}$ if the progenitor is a blue supergiant.
This is again caused by the fact that WR stars have much smaller radii than
red and blue supergiants.
\section{Application to GRB 060218/SN 2006aj}
\label{application}
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.463]{ebr_tem_wrs.eps}
\caption{Energy versus temperature of shock breakout in Type Ibc supernovae
produced by core-collapse of a sample of WR stars with $R_{\rm ph}/R_\star >2$.
}
\label{ebr_tem_wrs}
\end{figure}
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.47]{gsbr_tbr_wrs2.eps}
\caption{Shock momentum versus the observed time-duration of shock breakout
in Type Ibc supernovae produced by core-collapse of the same sample of WRs
in Fig.~\ref{ebr_tem_wrs}.
}
\label{gsbr_tbr_wrs2}
\end{figure}
As stated in the Introduction, recently it has been claimed that supernova
shock breakout has been observed in the early X-ray emission of GRB 060218,
based on the observation that a fraction ($\approx 20\%$) of the radiation
in the lightcurve (from 159~s up to $\sim 10,000$~s after the trigger of the
burst) is a soft black-body of temperature $\approx 0.17$ keV \citep{cam06}.
The total energy estimated for this black-body component is $\approx 10^{49}$
ergs in the $0.3$--$10$ keV band, and $\approx 2\times 10^{49}$ ergs bolometric
(S. Campana, private communication). A reanalysis carried out by \citet{but06}
revealed an even larger energy in the black-body, which is $\approx 2 \times
10^{50}$ ergs, with a duration $\approx 300$~s.
The overall constraint on the black-body component in the early X-ray
afterglow of GRB 060218 is summarized as follows: the total energy is
$\ga 10^{49}$ ergs, the temperature is in the range of $0.1$--$0.19$ keV
(i.e., $1.2$--$2.2\times 10^6$ K), and the duration is $\ga 300$~s
\citep{cam06,but06}.
In this Section, we apply the procedure developed in previous sections to
calculate the characteristic quantities of the shock breakout event for
SN 2006aj with the assumption that the supernova was produced by the
core-collapse of a WR star surrounded by a dense wind, and examine whether the
black-body component in GRB 060218 can be explained by the shock
breakout in SN 2006aj.
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.476]{tembr_ebr.eps}
\caption{The breakout energy versus the breakout temperature (with $b=5$).
Solid lines correspond to $\kappa_w=0.2$ cm$^2$ g$^{-1}$. Dashed lines
correspond to $\kappa_w=0.9$ cm$^2$ g$^{-1}$. Different solid lines (and
different dashed lines) correspond to different values of $\varepsilon$:
$10^{-2}$, $10^{-3}$, $10^{-4}$ and $10^{-5}$ (upward). Along each line the
stellar radius $R_\star$ varies from $1 R_\odot$ (the triangle) to $100
R_\odot$ (the point). The supernova explosion energy $E_{\rm in} = 2\times 10^{51}
{\rm ergs}$. The ejected mass $M_{\rm ej} = 2 M_\odot$. The region bounded by the
dotted line indicates the observational constraint on the total energy and
the temperature of the black-body component in the X-ray afterglow of
GRB 060218.
}
\label{tembr_ebr}
\end{figure}
\begin{figure}
\vspace{2pt}
\includegraphics[angle=0,scale=0.476]{tembr_ebr2.eps}
\caption{Same as Fig.~\ref{tembr_ebr} but with $b=1$.
}
\label{tembr_ebr2}
\end{figure}
First, we apply the procedure to the WR stars in Fig.~\ref{rph}, which are
among the best-studied WR stars with model-determined
stellar and photospheric radii \citep{ham95,koe95,gra98}. We select only
stars with $y_{\rm ph} = R_{\rm ph}/R_\star>2$, since otherwise the $\varepsilon$
given by our simplified model would be too small (Fig.~\ref{alpha}). The
sample so selected consists of 36 stars in total, including 20 Galactic
WCs, 10 Galactic WNs, and 6 LMC WCs. The majority of WC stars have been
included. Since these stars were modeled by the standard stellar wind model
(Appendix~\ref{b1}), we choose $b=1$. The $\alpha$ parameter is then obtained
from the published values of $y_{\rm ph}=R_{\rm ph}/R_\star$ in the papers cited above
by equations~(\ref{tau0_1}) and (\ref{yph1}). The value of $\varepsilon$ is
calculated with equation~(\ref{alp}). Then, with the published values of
$\dot{M}$, $v_\infty$, and $R_\star$, the constant mean opacity $\kappa_w$
can be calculated with equation~(\ref{psi1}).
For the supernova explosion energy and the ejected mass, we take the values
obtained by modeling the spectra and the lightcurve of SN 2006aj:
$E_{\rm in} = 2\times 10^{51}$ ergs, and $M_{\rm ej} = 2 M_\odot$ \citep{maz06a}.
Then, for each star we have all of the six parameters needed for calculating
the characteristic quantities of shock breakout.
Our results are shown in Fig.~\ref{ebr_tem_wrs} for the breakout energy
versus the breakout temperature, and in Fig.~\ref{gsbr_tbr_wrs2} for
the breakout shock momentum versus the breakout time-duration. Note that here
the breakout time-duration includes the light-travel-time
(eq.~\ref{tbr_obs}), so it corresponds to the observed time-duration. From
Fig.~\ref{ebr_tem_wrs} we see that, although the temperature is in the range
of the black-body component in GRB 060218 for several WC stars (on the left
end), the total energy of the radiation arising from the shock breakout never
exceeds $10^{47}$ ergs, i.e., always smaller than the total energy of the
observed black-body component in GRB 060218 by more than two orders of
magnitude.
From Fig.~\ref{gsbr_tbr_wrs2}, the time-duration of the shock breakout never
exceeds 100 s, also well below the observational limit on the black-body
component in GRB 060218.
Thus, it appears that none of the stars in the considered sample of WRs is
able to produce a supernova with shock breakout energy that is large enough
to explain the black-body component observed in the early X-ray afterglow of
GRB 060218.
Of course, GRBs are rare events compared to supernovae \citep{pod04}, so they
may require progenitors that are in more extreme conditions than the WRs in
our sample. To test the possibility for explaining the black-body component
in GRB 060218 with shock breakout from WR stars in a larger parameter space,
we have calculated a number of models with a large range of parameters.
The results are shown in Fig.~\ref{tembr_ebr} ($b=5$) and
Fig.~\ref{tembr_ebr2} ($b=1$). The explosion energy and the ejected mass are
fixed at $E_{\rm in} = 2\times 10^{51}$ ergs and $M_{\rm ej} = 2 M_\odot$, as obtained
by modeling the spectra and the lightcurve of SN 2006aj \citep{maz06a}. We
allow
$\varepsilon$ to vary from $10^{-2}$ to $10^{-5}$. For the opacity $\kappa_w$,
we choose two extreme values: $0.2$ cm$^2$ g$^{-1}$ (solid lines) and $0.9$
cm$^2$ g$^{-1}$ (dashed lines). The observational bound on the total energy
and the temperature of the black-body component in the early X-ray emission
of GRB 060218 is shown in the figures by the region bounded by the dotted
lines.
The radius of the star, $R_\star$, which is the parameter that the
characteristic quantities of the shock breakout are most sensitive to, is
allowed to vary from $1 R_\odot$ to $100 R_\odot$, covering a space of
radii that is more than enough for WR stars.
Figures~\ref{tembr_ebr} and \ref{tembr_ebr2} show that to explain the
black-body component observed in the early X-ray emission of GRB 060218, the
radius of the progenitor WR star must be $\ga 100 R_\odot$. It is very
unlikely that there exist WR stars with such large stellar radii. Although
it is possible to get $E_{\rm br} > 10^{49}$ ergs with $R_\star<100 R_\odot$ if
$\varepsilon$ is very small and/or $\kappa_w$ is very large, the corresponding
$T_{\rm br}$ would be too high to be consistent with the temperature of the
black-body component in GRB 060218.
\section{Summary, Conclusions, and Discussions}
\label{sum}
We have presented a simple model for calculating the characteristic
quantities (total energy, temperature, time-duration, and shock momentum)
for the flashes arising from shock breakout in Type Ibc supernovae produced
by the core-collapse of Wolf-Rayet stars surrounded by dense stellar
winds. The wind velocity is modeled by equation~(\ref{vr}), a profile that
is often adopted in the study of stellar winds. However, in contrast to the
case for O-stars where the parameter $b$ is close to unity, for WR star
winds $b$ can be much larger and is usually in the range of $4$--$6$
\citep{nug02}. The opacity in the wind, $\kappa_w$, is assumed to be a
constant, which is a reasonable approximation for the calculation of the
optical depth since the opacity varies with radius very slowly compared to
the mass density of the wind \citep{nug02}. Modeling of the opacity in the
winds of WR stars indicates that $\kappa_w$ is in the range of $0.3$--$0.9$
cm$^2$ g$^{-1}$ \citep{nug02}.
Our model is an extension of the existing model for calculating the
characteristic quantities for supernova shock breakout from a star without
a wind, which is suitable for Type II supernovae
\citep{ims88,ims89,mat99,tan01}. Due to the compactness of WR stars, the
shock momentum is expected to be trans-relativistic at the time of breakout.
Thus, we have followed \citet{bla76} and \citet{tan01} to take into account
the relativistic effects.
Because of the large optical depth in the wind, the supernova shock breakout
occurs in the wind region rather than in the interior of the star. This is
equivalent to saying that the presence of a dense stellar wind effectively
increases the radius of the star. As a result, the shock has more space and
more time for acceleration, and the shock breakout appears to be more energetic
than in the case of the same star when the effect of the stellar wind is
not taken into account (see, e.g., Blinnikov et al. 2002).
The formulae for determining the radius where the shock breakout occurs
and those for computing the characteristic quantities for the radiation
arising from the shock breakout are collected in Sec.~\ref{emergence}. They
include equations~(\ref{br_con}), determining the breakout radius;
(\ref{tan2}), evaluating the momentum of the shock;
(\ref{ebr_eq}), (\ref{tembr_eq}), and (\ref{tbr_eq}), calculating the
energy, temperature, and the time-duration of the radiation from shock
breakout. Although exact and analytic solutions are impossible because of
the trans-relativistic nature of the problem, all the equations are
algebraic and a simple numerical program is able to calculate all the
characteristic quantities. The model contains six input parameters:
the explosion kinetic energy ($E_{\rm in}$), the ejected mass ($M_{\rm ej}$),
the core radius of the star ($R_\star$, the radius where the
optical depth $\tau_w=20$), the opacity in the wind ($\kappa_w$), the
parameter $b$ specifying the wind velocity profile, and the ratio of
the wind velocity at the stellar surface (where $r=R_\star$) to the terminal
velocity of the wind ($\varepsilon$).
Our numerical results are summarized in
Figs.~\ref{breakout_kappa}--\ref{breakout_b} and Table~\ref{model}.
Figs.~\ref{breakout_kappa}--\ref{breakout_b} illustrate how the
characteristic quantities vary with the input parameters. As in the case
of shock breakout from a star without a wind, the core radius of the star is
the most important parameter affecting the results. That is, the
characteristic quantities are most sensitive to the variation in the
stellar radius. This feature leads to the possibility for distinguishing
the progenitors of supernovae by observing the flashes from the shock
breakout \citep{cal04}. In addition, in the case of dense stellar winds,
the results are more sensitive to the variation in the supernova explosion
kinetic
energy. For example, roughly speaking, $E_{\rm br} \propto E_{\rm in}$ when the star
has a dense wind, in contrast to $E_{\rm br} \propto E_{\rm in}^{0.6}$ in the case of
a star without a wind. Overall, the shock breakout from a star with a dense
wind is more energetic than that from a star without a wind.
For a star of the same radius, and for the same explosion kinetic energy
and ejected mass, the total energy released by the shock breakout is larger
by a factor $> 10$ if the star is surrounded by a thick wind. The
time-duration is also larger, and the shock momentum at the time of
breakout is more relativistic.
For explosion energy $E_{\rm in}=10^{51}$ ergs, ejected mass $M_{\rm ej}=3 M_\odot$,
and stellar radius $R_\star = 3 R_\odot$ (typical values for normal SNe Ibc),
we get breakout energy $E_{\rm br} \approx 1.3\times 10^{46}$ ergs,
temperature $T_{\rm br} \approx 5.4\times 10^6$ K $\approx 0.46$ keV, and
observed time-duration $t_{{\rm br},{\rm obs}}\approx 2.8$ s if other parameters take
fiducial values ($\kappa_w = 0.7$ cm$^2$ g$^{-1}$, $b=5$, and $\varepsilon =
0.01$). For $E_{\rm in}=5\times 10^{52}$ ergs, $M_{\rm ej}=10 M_\odot$, and $R_\star
= 10 R_\odot$ (typical values for hypernovae), we get $E_{\rm br} \approx
1.4\times 10^{48}$ ergs, $T_{\rm br} \approx 2.2\times10^7$ K $ \approx 1.9$ keV,
and $t_{{\rm br},{\rm obs}}\approx 5.5$ s. More numerical results are shown in
Table~\ref{model}.
We have applied our model to GRB 060218/SN 2006aj, in which a soft
black-body component has been observed in the early X-ray emission of the
GRB and has been interpreted as evidence for the supernova shock breakout
\citep{cam06}. We take the values of the supernova explosion energy and the
ejected mass obtained by modeling the spectra and the lightcurve of the
supernova \citep{maz06a}. We find that the energy released by the supernova
shock breakout in a thick wind of a WR progenitor star is generally too
small to explain the black-body radiation in GRB 060218. To obtain a
breakout energy and temperature that are consistent with the observational
constraint, the core radius of the progenitor WR star has to be
$> 100 R_\odot$, which is much too large for a WR star. Thus, we conclude
that the black-body component in the X-ray afterglow of GRB 060218 cannot
be explained by the shock breakout in the underlying supernova. Instead, it
must originate from other processes which might be related to the GRB
outflow (see, e.g., Fan, Piran \& Xu 2006).
This conclusion is in agreement with the analysis by Ghisellini, Ghirlanda
\& Tavecchio (2006).
One may argue that GRB-connected supernovae should be highly aspherical, so
that our spherical model might have underestimated the energy of the shock
breakout. The effect of explosion asymmetry can be estimated as follows.
Assume that the explosion produces a shock wave in a solid angle $\Omega\equiv
4\pi\omega <4\pi$ with a kinetic explosion energy $E_{\rm in}$, which ejects a mass
$M_{\rm ej}$ from the progenitor. The shock wave is symmetric in the azimuthal
direction and does not expand to the outside of $\Omega$. The motion of the
shock wave would then be the same as that of a spherical shock wave
($\omega=1$) with a kinetic explosion energy $\omega^{-1} E_{\rm in}$ and an
ejected mass $\omega^{-1} M_{\rm ej}$, assuming that the progenitor is spherically
symmetric. Then, by equation~(\ref{tan2}), $p \propto E_{\rm in}^{1/2}
M_{\rm ej}^{-0.313} \omega^{-0.187} = \left(\omega^{-0.374} E_{\rm in}\right)^{1/2}
M_{\rm ej}^{-0.313}$. That is, the motion of the asymmetric shock wave can be
calculated by equation~(\ref{tan2}) but with $E_{\rm in}$ replaced by a larger
$E_{\rm in}^\prime = \omega^{-0.374} E_{\rm in}$. Then, by Fig.~\ref{breakout_ein},
the temperature $T_{\rm br}$, the shock momentum $\Gamma_{s,{\rm br}} \beta_{s,{\rm br}}$,
and the {\em isotropic-equivalent} energy $E_{\rm br}$ of the asymmetric shock
breakout are larger than that in a spherical explosion with the same $E_{\rm in}$
and $M_{\rm ej}$. However, the time-duration $t_{\rm br}$ is not sensitive to $\omega$.
Indeed, an aspherical explosion has
been claimed to be observed in the luminous Type Ic SN 2003jd, in which
the double-lined profiles in the nebular lines of neutral oxygen and
magnesium revealed in later-time observations by Subaru and Keck are
explained as the result of observing an aspherical supernova along a
direction almost perpendicular to the axis of the explosion \citep{maz05}.
However, for SN 2006aj, there is no evidence for an aspherical explosion.
Observations of the radio afterglow and its modeling indicate that the
outflow associated with GRB 060218 is mildly relativistic and so should be more
or less spherical (Soderberg et al. 2006; Fan, Piran \& Xu 2006, see also
Li 2006).
We should also remark that whether the progenitors of GRBs are surrounded by
dense winds is still an open question. Although a wind-type density
profile is naturally expected for the environment surrounding a GRB as
its progenitor is broadly thought to be a massive star, observations of
GRB afterglows have revealed that most of the afterglow data are consistent
with a constant density external medium and only a handful of bursts can
be well modeled by the wind model
\citep[and references therein]{ber03,zha04,pan05,fry06}. For the case of
GRB 060218, modeling of its radio afterglow also does not favor a dense
circum-burst wind profile \citep{sod06,fan06}.
A theoretical argument against strong winds surrounding GRB progenitors
comes from the consideration of angular momentum
\citep[and references therein]{yoo05,woo06a}. For a black hole formed from
the core-collapse of a massive star to have a disk rotating around it and
to launch a relativistic jet, the progenitor star must rotate rapidly with the
specific angular momentum in the core $j\ga 3\times 10^{16}$ cm$^2$ s$^{-1}$
\citep{mac99}. To satisfy this requirement, the progenitor star should not
have had a phase with an intense stellar wind since a dense wind is very
effective in removing angular momentum. Given the fact that the mass-loss
rate of a star sensitively depends on its metallicity \citep{vin05} and the
observations that GRBs prefer to occur in galaxies with low metallicity
\citep{fyn03,hjo03a,lef03,sol05,fru06,sta06}, it is reasonable to expect that
the progenitors of GRBs should
not have dense stellar winds surrounding them. Even in this situation,
however, the radius of the massive progenitor star is also very unlikely to
be large enough ($>100 R_\odot$) to explain the black-body component in
GRB 060218, since its progenitor star has a mass of only $\sim 20 M_\odot$ as
obtained by modeling the supernova lightcurve and spectra \citep{maz06a}.
In addition,
if the progenitor does not have a thick wind, then in calculating the results
for the shock breakout one should use the formulae in Appendix~\ref{star}
for a star without a wind. But in Sec.~\ref{result} we have seen that the
formulae for a star without a wind lead to a smaller total energy in the
radiation from the shock breakout than the formulae for a star with a dense
wind.
In spite of the disappointing result on GRB 060218/SN 2006aj, our model is
expected to have important applications to Type Ibc supernovae, whose
progenitors are broadly believed to be WR stars. In addition, some Type II
supernovae also appear to be related to progenitor stars with intense
stellar winds, e.g. SNe IIn (also called IIdw) \citep{ham04}. Observations
of the transient events from supernova shock breakout will be the most
powerful approach for diagnosing the progenitors of supernovae. For this goal
we would like to mention {\it LOBSTER}, an upcoming space observatory
dedicated to detecting soft X-ray flashes from shock breakout in supernovae
\citep{cal04}.
\section*{Acknowledgments}
The author thanks S. Campana for useful communications and sharing data,
and B. Paczy\'nski for many inspiring discussions on gamma-ray bursts,
supernovae, and shock breakout. He also thanks an anonymous referee for
a very helpful report which has led to significant improvements to the paper.
\section{Introduction}
The majority of stars do not form in isolation but in environments where stellar densities are higher than in the Galactic field \citep{RN25, RN59}. Depending on the initial conditions in these star-forming regions, they either evolve into bound clusters or unbound groups. Most stars born in clusters do not remain there past an age of 10 million years (Myr) and only about 10 per cent are observed in gravitationally bound clusters at this age \citep{RN25}. Young stars that are not members of bound clusters are usually observed in unbound groups of stars \citep[i.e. associations,][]{RN19} before they disperse into the Galactic field.
For the last two decades, the prevailing view has been that bound star clusters are fundamental units of star formation - in that most stars form in these dense, embedded environments until gas exhaustion \citep{RN292} or residual gas expulsion concludes star formation. Gas expulsion can also lead to the cluster's dissolution \citep{RN283,RN284,RN285,RN49,RN292,RN286}. In this scenario, associations must have formed as single or multiple clusters and expanded into their unbound state \citep[so-called monolithic star formation - e.g.][]{RN268,RN274}. An alternative view is that depending on the initial conditions of the molecular clouds, clusters or associations are formed when smaller clustered regions with differing stellar densities assemble hierarchically. These smaller groups of stars are the result of hierarchical fragmentation of the molecular clouds. In this scenario, star formation can lead to the formation of a dense, bound star cluster but can also result in lower-density, unbound associations \citep[so-called hierarchical star formation - e.g.][]{RN275,RN267}. Recent work has shown that associations are unlikely to be dissolved clusters, supporting the latter star formation scenario \citep[e.g.][]{RN6, RN80, RN186}.
One way of testing these two scenarios observationally is to determine the initial density (i.e. spatial structure) and virial ratio (i.e. velocity structure) of star-forming regions. This remains difficult as dynamical evolution can lead to significant changes to star-forming regions over a short period of time \citep[e.g.][]{RN4, RN258} such as a rapid reduction in density in regions with initially high stellar densities \citep[e.g.][]{RN277,RN8}. Initial substructure can be erased \citep[e.g.][]{RN14,RN4,RN248}, the location of the most massive stars can change due to dynamical mass segregation \citep[e.g][]{RN15,RN5} and primordial binary systems can be destroyed \citep[e.g.][]{RN261,RN256,RN277,RN257}. A dense phase can also have disruptive effects on protoplanetary discs and young planetary systems around stars in star-forming regions \citep[e.g.][]{RN271,RN272,RN269,RN270,2019arXiv190211094N}.
Our earlier work showed that information from the spatial distribution of star-forming regions can be used to distinguish the initial bound/unbound state (initial virial ratio) \citep[][Paper I]{RN5}. \citet[][Paper II]{RN1} showed that using radial velocity dispersion in combination with a spatial structure diagnostic \citep[Q-parameter,][]{RN16} can help constrain initial conditions in star-forming regions with high local densities. In this paper, we will focus on stars that become unbound from young star-forming regions.
Observationally, unbound stars are easiest to identify when their velocities are higher than those of their surroundings and they have high mass and therefore high luminosity, such as OB stars. These stars have a mass of at least 8 M$_{\sun}$ and short lifetimes of up to a few tens of Myr, with the most massive stars undergoing core-collapse supernovae after only a few Myr \citep{RN10}. Most star-forming regions dissolve after only a few tens of Myr but they can still outlive the massive stars located within them \citep{RN21}. As a consequence, OB stars should not be found outside these regions. However, there are OB stars found far outside star-forming regions moving at much higher velocities than normally expected. These OB stars have been termed runaway stars by \citet{RN67}. They show a peculiar velocity (i.e. velocity relative to a rest frame) in excess of $\sim$30-40\,km\,s$^{-1}$ and/or are located at large distances from any star-forming regions or the Galactic plane \citep[e.g.][]{RN255, RN67, RN276, RN50, RN190, RN137,RN293}. Almost all currently identified runaway stars are high-mass stars \citep[see the recent catalogue in][]{RN263}. Low-mass runaway star detections remain rare \citep{RN227, RN252}. A lower velocity limit for runaway stars has been suggested by \citet{RN137} at $\sim$5\,km\,s$^{-1}$, as stars ejected with this velocity can still travel a considerable distance in just a few Myr and end up tens of pc from any star-forming regions, satisfying distance-based runaway star definitions \citep[e.g.][]{RN190}. This subset of slower runaway stars has recently been termed walkaway stars \citep{RN136}.
Runaway and walkaway stars are thought to be the result of the same two ejection mechanisms. \citet{RN67} suggested that ejection of these stars is due to the binary supernova mechanism. This posits that in a close binary, the secondary star is ejected when the more massive primary reaches the end of its life and undergoes a core-collapse supernova. With a high enough kick velocity from the supernova, the main-sequence companion gets ejected due to binary disruption, almost always leaving an ejected singleton star. \citet{RN189} suggested an alternative mechanism due to dynamical interaction. In dense star-forming regions, stars interact with each other dynamically and close encounters between three or even four stars can lead to the ejection of one or two of them. When a single, massive star interacts with a binary where the secondary is a lower-mass star, the single star can replace this secondary binary star, which is then ejected from the star cluster with a maximum velocity similar to that in the binary supernova mechanism \citep{RN54}.
In this paper we use pure $N$-body simulations with differing initial conditions to investigate if the number and velocity distributions of unbound stars can allow us to place constraints on the initial density and velocity structure in star-forming regions. We aim to make predictions for observations of fast unbound stars from young star-forming regions that can be probed with \textit{Gaia} Data Release 2 (DR2). DR2 contains five-parameter astrometry (position, parallax and proper motion) for over 1.3 billion sources down to an apparent G-magnitude limit of G $\approx$ 21, whereas radial velocity information is only available for brighter sources ($\sim$7.2 million) \citep{RN238}. Our simulations provide us with 6D-parameter space results (position and velocity), but we focus on the 2D-plane and 2D-velocity, i.e. tangential velocity, which is calculated from proper motion and distance (or parallax) in observations.
This paper is organised as follows. In section 2, we present the initial conditions used for the $N$-body simulations and our definition of unbound stars. Section 3 is dedicated to the results with a discussion of these results in section 4, followed by our conclusions in section 5.
\section{Methods}
\subsection{Initial conditions}
Our simulated star-forming regions are set up with 1000 systems per simulation distributed across an initial radius of 1 pc. All systems are initially single stars (no primordial binaries) and their masses are randomly sampled for every single simulation from the \citet{RN203} initial mass function (IMF). This form of the IMF combines the Salpeter (\citeyear{RN204}) power-law slope for stars with masses above 1 M$_{\sun}$ with a Chabrier (\citeyear{RN200}) log-normal IMF approximation at the lower-mass end. The Maschberger IMF is described by a probability density function with three parameters $\alpha$ = 2.3 (power-law exponent for higher mass stars), $\beta$ = 1.4 (describing the IMF slope of lower-mass stars) and $\mu$ = 0.2 (average stellar mass) \citep{RN203}:
\begin{equation}
p(m) \propto \cfrac{\left(\cfrac{m}{\mu}\right)^{-\alpha}}{\left(1+\left(\cfrac{m}{\mu}\right)^{1-\alpha}\right)^\beta}
\label{eq:MaschbergerIMF}
\end{equation}
\noindent We sample stellar masses \textit{m} between 0.1 M$_{\sun}$ (we do not include brown dwarfs) and 50 M$_{\sun}$, resulting in total masses between $\sim$500-700 M$_{\sun}$ for each of our star-forming regions.
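Since the auxiliary function $G(m) = \left[1 + (m/\mu)^{1-\alpha}\right]^{1-\beta}$ of Eq.~\ref{eq:MaschbergerIMF} is proportional to the cumulative distribution and has a closed-form inverse, masses can be drawn by inverse transform sampling. The following minimal Python sketch illustrates this (our own illustration; the function name and defaults are ours, and this is not the code used for the simulations):
\begin{verbatim}
import numpy as np

def sample_maschberger_imf(n, m_min=0.1, m_max=50.0,
                           alpha=2.3, beta=1.4, mu=0.2, rng=None):
    # Inverse transform sampling of the Maschberger IMF via its
    # closed-form auxiliary function G(m), proportional to the CDF.
    rng = np.random.default_rng() if rng is None else rng
    G = lambda m: (1.0 + (m / mu)**(1.0 - alpha))**(1.0 - beta)
    u = rng.uniform(size=n)
    g = G(m_min) + u * (G(m_max) - G(m_min))  # uniform in CDF space
    return mu * (g**(1.0 / (1.0 - beta)) - 1.0)**(1.0 / (1.0 - alpha))

masses = sample_maschberger_imf(1000)
print(masses.sum())  # typically ~500-700 M_sun, as quoted above
\end{verbatim}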
The spatial structure is set up following the method described in \citet{RN14}. It uses fractal distributions to define the observed substructure in young star-forming regions using a single parameter, the fractal dimension $D$. Starting with a cube with side $N_{\rm{div}}$ = 2, a parent particle is placed at the centre. This first parent cube is subdivided into equal-sized $N_{\rm{div}}^3$ sub-cubes with a first-generation descendant in each centre. Depending on the survival probability $N_{\rm{div}}^{(D-\rm{3})}$ that is set by the fractal dimension $D$, these descendants can become parents themselves. For a low fractal dimension fewer descendants become parents, whereas more descendants survive when using a high fractal dimension. Descendants that do not survive are deleted along with their parent. The positions of the surviving particles are adjusted by adding a small amount of noise. This process continues until more stars than required are generated within the original cube. We cut a sphere from this cube and reduce the remaining stars down to the required number by random deletion.
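A compact Python sketch of this box-fractal construction follows (our own illustration; the noise amplitude, the over-generation factor and the restart logic are assumptions, not the production set-up code):
\begin{verbatim}
import numpy as np

def fractal_positions(n_stars, D=1.6, noise=0.05, rng=None):
    # Box-fractal method of Goodwin & Whitworth (2004) with N_div = 2:
    # each parent spawns 2^3 children, each surviving with probability
    # 2^(D-3); surviving children are jittered and become parents.
    rng = np.random.default_rng() if rng is None else rng
    p_survive = 2.0**(D - 3.0)
    offsets = np.array([[x, y, z] for x in (-1, 1)
                        for y in (-1, 1) for z in (-1, 1)], float)
    while True:                            # restart if the fractal dies out
        parents, half = np.zeros((1, 3)), 1.0      # root cube of side 2
        while 0 < len(parents) < 2 * n_stars:
            half /= 2.0                    # half-side of the child cubes
            keep = rng.uniform(size=(len(parents), 8)) < p_survive
            kids = (parents[:, None, :] + offsets * half
                    + rng.normal(scale=noise * half,
                                 size=(len(parents), 8, 3)))
            parents = kids[keep]           # survivors become new parents
        stars = parents[np.linalg.norm(parents, axis=1) <= 1.0]
        if len(stars) >= n_stars:          # cut a sphere, thin randomly
            return stars[rng.choice(len(stars), n_stars, replace=False)]
\end{verbatim}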
We use a set of four different fractal dimensions for our simulations to investigate a wide parameter space. Starting with highly substructured star-forming regions ($D$ = 1.6), we then gradually reduce the level of substructure ($D$ = 2.0 and $D$ = 2.6) finishing with a roughly uniform, smooth sphere ($D$ = 3.0).
Like the spatial structure, the velocity structure in our simulations is also set up to mimic observed star-formation environments. Molecular gas clouds show turbulence that can be passed down to the stars that form from them. In molecular clouds, the velocity dispersion increases with the size of the cloud: large dispersions occur on large scales, whereas on small scales the dispersions are smaller, i.e. nearby gas has similar velocities \citep{RN27}. Star formation occurs in filamentary structures within these gas clouds, where the velocity dispersion is low \citep{RN281}. To represent this velocity structure in our simulations we follow \citet{RN14}, which results in close stars with similar velocities and distant stars with different velocities. The process starts by assigning a random velocity to the parents. The next generation inherits this velocity, adjusted by a random component that gets smaller with every following generation. The velocities of the stars are finally scaled to five different global virial ratios. The global virial ratio $\alpha_{\rm{vir}}$ describes the ratio of the total kinetic energy $T$ of all stars to the modulus of the total potential energy $\varOmega$ of all stars, $\alpha_{\rm{vir}}$ = $ T/|\varOmega|$. A star-forming region in virial equilibrium has a global virial ratio $\alpha_{\rm{vir}}$ = 0.5, with subvirial regions at values below and supervirial ones above.
In our parameter space, we investigate star-forming regions initially in virial equilibrium as well as two regions that are initially subvirial ($\alpha_{\rm{vir}}$ = 0.1 and $\alpha_{\rm{vir}}$ = 0.3) and two supervirial ($\alpha_{\rm{vir}}$ = 1.0 and $\alpha_{\rm{vir}}$ = 1.5) initial settings. These global virial ratios describe the bulk motion of the stars as a whole. On local scales stars have similar, correlated velocities, meaning star-forming regions can be locally subvirial even if they are not subvirial on a global scale. This can lead to local, but not global collapse during the early dynamical evolution of the star-forming region \citep{RN4,RN1}.
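The final velocity-scaling step amounts to multiplying all velocities by a single factor. A minimal sketch (our own illustration, assuming masses in M$_{\sun}$, positions in pc and velocities in km\,s$^{-1}$ relative to the region's rest frame):
\begin{verbatim}
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def scale_to_virial(vel, masses, pos, alpha_target):
    # Rescale all velocities so that T/|Omega| equals the target
    # global virial ratio.
    T = 0.5 * np.sum(masses * np.sum(vel**2, axis=1))
    omega = 0.0
    for i in range(len(masses) - 1):      # sum over unique pairs i < j
        r = np.linalg.norm(pos[i+1:] - pos[i], axis=1)
        omega -= G * np.sum(masses[i] * masses[i+1:] / r)
    return vel * np.sqrt(alpha_target * abs(omega) / T)
\end{verbatim}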
We use the $N$-body integrator {\tt kira} from the {\tt Starlab} package \citep{RN236,RN193} to evolve our star-forming regions over a defined time period. The integrator uses an input $N$-body system defined by our initial conditions and evolves it over time giving output at different snapshots. The motion of the stars in the simulations is followed using a fourth-order, block-time-step Hermite scheme \citep{RN209}.
With four different initial fractal dimensions and five different initial virial ratios, we run 20 simulations of each of the 20 combinations for a time period of 10 Myr to cover the early phases of the evolution of a star-forming region. The only changes within the simulations sharing the same initial conditions are the random number seed used to initiate the creation of the fractal (i.e. initial positions and velocities of stars) and the sampling of stellar masses from the IMF. For each set of initial conditions, we combine the results of all 20 simulations, thus creating a larger data set for analysis. Our star-forming regions do not have a gas potential and there is no external/tidal field applied. The stars do not undergo stellar evolution and are not in primordial binaries or initially mass-segregated. This allows us to identify the effects of different initial spatial and velocity substructure on the unbound population from young star-forming regions. In future work, we will include both stellar evolution and primordial binaries to quantify the effect each has on stars becoming unbound from these regions.
\subsection{Unbound stars and fractions by mass class}
We consider a star $i$ to be unbound once it has positive total energy (i.e. its kinetic energy $T_{i}$ is larger than the modulus of its potential energy $\varOmega_{i}$). Its kinetic energy is given by:
\begin{equation}
T_{i} = \frac{1}{2} m_{i} |\mathbfit{v}_{i} - \mathbfit{v}_{cr}|^{2},
\label{eq:kin_en}
\end{equation}
where $m_{i}$ is the mass of star $i$ and $\mathbfit{v}_{i}$ and $\mathbfit{v}_{cr}$ are the velocity vectors of this star and of the centre of the region, respectively. The potential energy of the star $i$ is given by the sum of the potential energy between star $i$ and every other star $j$:
\begin{equation}
\varOmega_{i} = - \sum_{j \neq i} \cfrac{G m_{i} m_{j}}{r_{ij}},
\label{eq:pot_en}
\end{equation}
where $G$ is the gravitational constant, $m_{i}$ and $m_{j}$ are the stellar masses of $i$ and $j$ and $r_{ij}$ is the distance between them.
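These per-star energies translate directly into an unbound flag. A minimal Python sketch of Eqs.~\ref{eq:kin_en} and \ref{eq:pot_en} (our own illustration, with the same assumed units as above):
\begin{verbatim}
import numpy as np

G = 4.301e-3  # pc (km/s)^2 / M_sun

def unbound_mask(masses, pos, vel, v_centre):
    # Flag stars with positive total energy, i.e. T_i > |Omega_i|.
    T = 0.5 * masses * np.sum((vel - v_centre)**2, axis=1)
    omega = np.zeros(len(masses))
    for i in range(len(masses)):
        r = np.linalg.norm(pos - pos[i], axis=1)
        r[i] = np.inf             # exclude the self term (j != i)
        omega[i] = -G * masses[i] * np.sum(masses / r)
    return T > np.abs(omega)
\end{verbatim}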
After identifying all unbound stars in each snapshot, we divide them up into two mass classes (MC): low/intermediate-mass (<8 M$_{\sun}$) and high-mass ($\geq$8 M$_{\sun}$) stars. We then calculate unbound fractions by normalising the number of unbound stars (UB) by the total number of stars (TOT) in that specific mass class:
\begin{equation}\label{eq:Unbound fraction}
\text{Unbound fraction} = \cfrac{N_{MC,UB}}{N_{MC,TOT}}
\end{equation}
We estimate the standard error of the mean (SE) as a representation of the uncertainty connected to the unbound fractions, where $s$ is the sample standard deviation and $n$ is the number of simulations:
\begin{equation}\label{eq:StandardError}
SE = \frac{s}{\sqrt{n}}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Method_fig1.eps}
\caption{Unbound fractions from ten simulations (initially subvirial, $\alpha_{\rm{vir}}$ = 0.3, with a high level of initial substructure, $D$ = 1.6) showing the spread of the unbound fractions between statistically identical simulations.}
\label{fig:Ejected2}
\end{figure}
The uncertainty is caused by the stochastic nature of the underlying dynamical evolution \citep{RN4,RN5}. In our parameter space study, this different evolution is evident in the different unbound fractions from statistically identical, individual simulations, as shown in Fig. \ref{fig:Ejected2}. This figure illustrates how different the unbound fractions can be for ten simulations with the same initial conditions (initially subvirial, $\alpha_{\rm{vir}}$ = 0.3, with a high level of substructure, $D$ = 1.6). The different lines represent the fractions of unbound stars as a function of time; in this example they increase over the simulation time to values between $\sim$18 and 48 per cent after 10 Myr.
\section{Results}
For the following analysis of velocities, we focus on 2D-velocities to allow us to make predictions for proper motion observations, such as from the recent \textit{Gaia} DR2 \citep{RN238}. In observations we have a fixed two-dimensional plane, whereas the choice of 2D-plane from simulations is arbitrary. The 2D-velocity results shown in this section represent the tangential velocity in the xy-plane (i.e. calculated as the motion across the sky would be in observations); any other choice of 2D-plane gives the same results to within statistical noise.
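For reference, this projection is a simple operation on the simulation output (a sketch, assuming a velocity array \texttt{vel} of shape (N, 3) in km\,s$^{-1}$):
\begin{verbatim}
import numpy as np
# Tangential (2D) speed in the xy-plane from the full 3D velocities.
v2d_xy = np.hypot(vel[:, 0], vel[:, 1])
\end{verbatim}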
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Cumulative_fig2.eps}
\caption{Cumulative 2D-velocity distributions at four different simulation times (in columns: 0 Myr, 1 Myr, 5 Myr, 10 Myr) and for the different initial condition sets. Each row represents a different fractal dimension from $D$ = 1.6 (top row) to $D$ = 3.0 (bottom row). The five different initial virial ratios ($\alpha_{\rm{vir}}$ = 0.1 (blue), $\alpha_{\rm{vir}}$ = 0.3 (orange), $\alpha_{\rm{vir}}$ = 0.5 (green), $\alpha_{\rm{vir}}$ = 1.0 (red), $\alpha_{\rm{vir}}$ = 1.5 (purple)) are shown in each panel for each fractal dimension and time.}
\label{fig:Cumulative1}
\end{figure*}
\subsection{Cumulative 2D-velocity distributions of all stars}
We first focus on the cumulative distributions of the 2D-velocities and analyse how these evolve over the time period covered by our simulations. For each set of initial conditions, the cumulative distributions contain all stars from 20 simulations. In Fig. \ref{fig:Cumulative1}, we see the evolution of the cumulative distributions of the 2D-velocity at four different times from the left to the right column (0, 1, 5 and 10 Myr). Comparing the rows at 0 Myr, the four different fractal dimensions show almost identical cumulative velocity distributions for each of the five initial virial ratios. This is to be expected, as it is the virial ratio, not the spatial structure, that acts as the scaling factor for the initial velocities.
During the first 1 Myr, star-forming regions that are initially highly to moderately substructured ($D \leq$ 2.0) collapse and undergo violent relaxation \citep[e.g.][]{RN60,RN297,RN296,RN298,RN299,RN4,RN300} with subvirial regions ($\alpha_{\rm{vir}} <$ 0.5) collapsing rapidly to form bound, spherical clusters \citep{RN5}. Some of the initially virialised regions ($\alpha_{\rm{vir}}$ = 0.5) undergo a local collapse in regions of high substructure. Even though they are initially virialised on a global scale, they can be subvirial locally resulting in a localised collapse. Star-forming regions with little or no initial substructure ($D \geq$ 2.6) collapse only when they are also initially subvirial.
At 1 Myr (second column), the velocity distributions of different initial virial ratios show similar velocities for identical levels of initial substructure. Initially highly subvirial regions ($\alpha_{\rm{vir}}$ = 0.1), which are slowest at the start of the simulations, attain similar velocities to initially virialised and supervirial regions when $D \leq$ 2.0, or higher velocities when $D \geq$ 2.6. Violent relaxation leads to an increase in velocity, which is largest for highly subvirial, substructured initial conditions.
After 5 Myr and 10 Myr (third and fourth columns), the evolution of the cumulative distributions in initially more substructured regions ($D \leq$ 2.0) follows a similar pattern. The bound, initially subvirial or virialised regions ($\alpha_{\rm{vir}} \leq$ 0.5) have very similar velocity distributions, as the initially subvirial regions approach virial equilibrium after violent relaxation. Initially supervirial regions ($\alpha_{\rm{vir}} >$ 0.5) remain unbound and at higher average velocities. The difference between the subvirial/virial and supervirial distributions becomes clearer the older the simulated regions get, as the initially subvirial/virialised regions slow down compared to the initially supervirial ones.
Star-forming regions with less substructure initially ($D \geq$ 2.6) do not show the clear separation of velocity distributions between subvirial/virial and supervirial initial ratios. Only initially highly supervirial regions ($\alpha_{\rm{vir}}$ = 1.5) have a velocity distribution at later times that can be distinguished from those with lower virial ratios. The initially smooth, sphere-like regions ($D$ = 3.0) still show a grouping together of the velocity distributions after 5 Myr. The two initially supervirial distributions ($\alpha_{\rm{vir}}$ = 1.0 and 1.5) are located on either side of the initially subvirial and virialised ones. Despite both being supervirial, they exhibit considerably different velocity distributions. Moderately supervirial regions ($\alpha_{\rm{vir}}$ = 1.0) have the slowest, whereas highly supervirial regions ($\alpha_{\rm{vir}}$ = 1.5) have the fastest cumulative 2D-velocities. This behaviour continues for the remaining 5 Myr, and at the end of our simulations the moderately supervirial cases are still indistinguishable from the initially subvirial/virialised ($\alpha_{\rm{vir}} \leq$ 0.5) cases.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Cumulative_fig3.eps}
\caption{Long-term evolution of the cumulative 2D-velocity distributions at four different simulation times (10, 25, 50 and 100 Myr) for the five different initial virial ratios ($\alpha_{\rm{vir}}$ = 0.1 (blue), $\alpha_{\rm{vir}}$ = 0.3 (orange), $\alpha_{\rm{vir}}$ = 0.5 (green), $\alpha_{\rm{vir}}$ = 1.0 (red), $\alpha_{\rm{vir}}$ = 1.5 (purple)) and constant fractal dimension ($D$ = 3.0).}
\label{fig:Long-term_evolution}
\end{figure*}
\subsubsection{Long-term evolution of initially smooth star-forming regions}
For these initially smooth star-forming regions ($D$ = 3.0), we follow the evolution of their cumulative distributions over a longer time period. We evaluate whether they evolve differently or just more slowly than initially more substructured star-forming regions. The evolution of these smooth regions is shown at 10 Myr, 25 Myr, 50 Myr and 100 Myr in Fig. \ref{fig:Long-term_evolution}.
The cumulative distributions for initially subvirial and virialised regions ($\alpha_{\rm{vir}} \leq$ 0.5) continue to be similar as they are in a state of virial equilibrium. The velocity distribution for the moderately supervirial regions ($\alpha_{\rm{vir}}$ = 1.0, red) starts to become distinguishable from the initially subvirial/virialised regions after 50 Myr, as the latter slow down compared to the moderately supervirial one.
But even after 100 Myr, the velocities of moderately supervirial regions are still much closer to those of initially subvirial/virialised star-forming regions than to the highly supervirial scenario. Initially smooth, supervirial star-forming regions appear to evolve in a similar fashion to the more substructured regions, but on a much longer timescale. The long-term evolution of the cumulative distributions shows that the average velocities of initially subvirial/virialised regions decrease at later times, as the global gravitational field of the bound clusters causes stars to decelerate.
\subsection{Unbound fractions of stars from initially subvirial and virialised regions}
In this section we turn to unbound fractions for initially subvirial and virialised star-forming regions ($\alpha_{\rm{vir}} \leq$ 0.5). We exclude the two supervirial scenarios as in these globally unbound, expanding regions most stars are born unbound. In our simulations, we do not have any stellar evolution, so stars can only become unbound due to dynamical interactions with other stars \citep{RN189} and not from supernova kicks \citep{RN67}. In the absence of an external tidal field, lower-mass stars mainly become unbound due to effects of 2-body relaxation \citep{RN161} whereas high-mass stars require dynamical interactions with other high-mass stars in binaries or higher order multiple systems (e.g. trapezium-like) to become unbound \citep{RN38,RN17}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Unbound_fig4.eps}
\caption{Unbound fractions by mass class for initially subvirial and virialised star-forming regions ($\alpha_{\rm{vir}} \leq$ 0.5). Each row represents a different fractal dimension starting from $D$ = 1.6 (top row) to $D$ = 3.0 (bottom row). The columns show the three subvirial and virial initial ratios. The red points represent the unbound fraction of low/intermediate-mass stars (<8 M$_{\sun}$) over the simulation time, whereas the yellow points represent the unbound fraction of high-mass stars ($\geq$8 M$_{\sun}$). The uncertainties of the fractions are calculated using the standard error of the mean (Eq. \ref{eq:StandardError}).}
\label{fig:Ejected1}
\end{figure*}
\subsubsection{Effects of different levels of substructure in regions with the same initial virial ratio}
In Fig. \ref{fig:Ejected1}, unbound fractions for star-forming regions with an initially highly subvirial ratio ($\alpha_{\rm{vir}}$ = 0.1) are shown in the first column. With high levels of initial substructure ($D$ = 1.6, first row) stars in both mass classes show similar unbound fractions from 5 Myr to the end of the simulations. These regions, regardless of initial degree of substructure, will undergo rapid collapse and violent relaxation. While low/intermediate-mass stars become unbound early in the simulations, high-mass stars show a more gradual increase and match the lower-mass unbound fraction at $\sim$5 Myr. The lower-mass unbound fraction decreases with less initial substructure and settles on the same level of $\sim$20 per cent for more moderate amounts of initial substructure ($D$ = 2.0-3.0) after 10 Myr. At the end of our simulations, the high-mass unbound fractions in the four initial substructure scenarios reach final values between 22$\,\pm\,$3 per cent and 28$\,\pm\,$4 per cent.
We see a delay in high-mass stars becoming unbound that increases the lower the level of initial substructure (i.e. higher fractal dimension $D$) in initially highly subvirial regions ($\alpha_{\rm{vir}}$ = 0.1, first column). In these simulations the degree of collapse reduces with lower amounts of initial substructure, resulting in a longer formation time for multiple star systems that can eject massive stars. The low/intermediate-mass stars also show a delay in stars becoming unbound for $D$ = 2.6-3.0. The delay is most obvious in regions with no initial substructure ($D$ = 3.0, bottom row). On average, only 7 stars (all are low/intermediate-mass) per simulation become unbound in the first $\sim$0.5 Myr. The lack of initial substructure combined with the low initial virial ratio appears to result in a ``balanced'' collapse that keeps virtually all stars in a bound configuration for a short period of time ($\sim$0.5 Myr).
In initially moderately subvirial simulations ($\alpha_{\rm{vir}}$ = 0.3, second column) the star-forming regions undergo an initial collapse but the degree of collapse is lower when compared to the highly subvirial simulations. We decrease the level of initial substructure and see a significant decrease in the low/intermediate-mass unbound fraction for every change in fractal dimension. After 10 Myr, the high-mass unbound fractions only slightly decrease (from 20$\,\pm\,$4 per cent to 19$\,\pm\,$3 per cent) for regions with initially high or moderate levels of substructure ($D \leq$ 2.0). Further decreasing the initial substructure reduces the high-mass unbound fraction to 16$\,\pm\,$2 per cent ($D$ = 2.6) and 12$\,\pm\,$3 per cent ($D$ = 3.0). The high-mass unbound fractions are only different for simulations with higher ($D \leq$ 2.0) and no initial substructure ($D$ = 3.0).
In regions with initial fractal dimensions $D$ = 2.0-3.0, high-mass stars do not become unbound early in the simulations. The collapse happens fastest in initially highly substructured star-forming regions ($D$ = 1.6) and high-mass stars can become unbound much earlier than in less substructured star-forming regions. The lower the level of initial substructure, the longer it takes to form dynamical multiples that can eject high-mass stars \citep{RN38}. Our simulations suggest that it can take over 3 Myr longer for high-mass stars to become unbound when there is a lack of initial substructure in moderately subvirial initial conditions (lower, middle panels).
In all simulations that are initially virialised ($\alpha_{\rm{vir}}$ = 0.5, third column), regardless of substructure, the unbound fraction of low/intermediate-mass stars is at least double the fraction of unbound high-mass stars after 10 Myr, which reaches $16\,\pm\,3$ per cent in initially highly substructured regions ($D$ = 1.6). In these star-forming regions 37$\,\pm\,$3 per cent of all low/intermediate-mass stars become unbound at the end of our simulations.
Initially virialised, highly substructured star-forming regions can collapse locally and binary clusters can form \citep{RN210}. Binary clusters are a pairing of star clusters that are physically close to each other in space \citep{RN278,RN288,RN287,RN290,RN291}. They can be a result of the dynamical evolution of young star-forming regions as shown by \citet{RN210}. We see these binary clusters at the end of more than half of the 20 simulations and they appear to have an effect on the unbound fractions. The presence of binary clusters lowers the sub-cluster potential energy, effectively creating two smaller clusters with smaller potential wells. In consequence, stars require lower kinetic energy to become unbound. This increases the low/intermediate-mass unbound fraction, but does not affect the high-mass unbound fraction in the same way. Due to the form of the IMF, there is a much smaller number of high-mass stars present in our simulations. During the localised collapse into binary clusters, these high-mass stars can move to different local regions, reducing the likelihood of creating dynamical multiple systems that can eject high-mass stars from our regions.
We do find a higher unbound fraction for high-mass stars than low/intermediate-mass stars in several simulations with initially virialised, highly substructured conditions ($\alpha_{\rm{vir}}$ = 0.5, $D$ = 1.6) that do not result in the creation of binary clusters. In individual simulations where binary clusters are present, we see a higher than average fraction ($\sim$30 per cent compared to the average value of $\sim$16 per cent) of unbound high-mass stars when the low/intermediate-mass unbound fraction is high as well ($\sim$40-70 per cent) or when the absolute number of high-mass stars is high to begin with (i.e. 9 or more high-mass stars per simulation). This increases the chances of forming high-mass dynamical multiple systems, which would lead to more ejections.
Lower levels of initial substructure or smooth regions ($D$ = 2.0-3.0) that are initially virialised ($\alpha_{\rm{vir}}$ = 0.5) do not form binary clusters \citep{RN210}. In our simulations, this considerably reduces the unbound fractions. Star-forming regions that are initially in virial equilibrium and smooth ($D$ = 3.0) undergo very little dynamical evolution and most of the stars ($\sim$87 per cent) remain bound throughout the simulations.
\subsubsection{Effects of different initial virial ratios in regions with the same levels of substructure}
For star-forming regions with a high degree of initial substructure ($D$ = 1.6, first row in Fig. \ref{fig:Ejected1}) increasing the initial virial ratio has the opposite effect on the unbound fractions in the two mass classes. The increase in initial kinetic energy (higher virial ratio) in the regions decreases the fraction of unbound high-mass stars whereas it increases the fraction of low/intermediate-mass unbound stars. While an initially highly subvirial region ($\alpha_{\rm{vir}}$ = 0.1) has the same unbound fraction after 10 Myr in both mass classes, the more virialised a highly substructured region is initially the higher its unbound fraction of low/intermediate-mass stars and the lower its high-mass unbound fraction. The low/intermediate-mass unbound fraction is highest in initially virialised regions due to the presence of binary clusters.
In regions with a lower level of initial substructure ($D$ = 2.0, second row) differences in initial virial ratio have no effect on the low/intermediate-mass unbound fractions, which are virtually the same for all three initial virial ratio scenarios (values between 19$\,\pm\,$1 per cent and 21$\,\pm\,$2 per cent) at 10 Myr. The high-mass unbound fraction is highest in the initially most subvirial regions ($\alpha_{\rm{vir}}$ = 0.1). The degree of collapse is highest here and unstable multiple star systems can form quickly. After about 6 Myr, the high-mass unbound fraction reaches 21$\,\pm\,$3 per cent and starts to level out (22$\,\pm\,$3 per cent at 10 Myr), suggesting that unstable multiple star systems are no longer present or do not lead to any further high-mass star ejections. The initially more moderate, subvirial ($\alpha_{\rm{vir}}$ = 0.3) simulations have a high-mass unbound fraction similar to the virialised case in the first $\sim$6 Myr of the simulations (10$\,\pm\,$2 per cent vs. 9$\,\pm\,$2 per cent). The difference in initial virial ratio ($\alpha_{\rm{vir}}$ = 0.3 vs. 0.5) appears to have no effect on the early evolution of these simulated regions. Later in the simulation, the initially moderately subvirial ($\alpha_{\rm{vir}}$ = 0.3) regions continue to eject high-mass stars and reach an unbound fraction of 19$\,\pm\,$3 per cent after 10 Myr, which is similar to that in the highly subvirial case ($\alpha_{\rm{vir}}$ = 0.1), whereas the high-mass unbound fraction in initially virialised regions levels out after $\sim$7 Myr and remains at 10$\,\pm\,$2 per cent until the end of the simulations at 10 Myr.
At low levels of or no initial substructure ($D$ = 2.6 and 3.0, third and fourth row) the low/intermediate-mass unbound fractions are highest when the regions are initially highly subvirial ($\alpha_{\rm{vir}}$ = 0.1) as these regions collapse initially. Even though the moderately subvirial ($\alpha_{\rm{vir}}$ = 0.3) regions initially collapse, this does not result in a higher low/intermediate-mass unbound fraction than in the initially virialised regions that do not undergo collapse. When there is little or no initial substructure, star-forming regions will only collapse when the initial virial ratio is subvirial. The collapse increases the likelihood that unstable multiple systems form, which facilitates the ejection of high-mass stars. With higher initial virial ratios, these multiple systems take longer to form or do not form at all. As a result, high-mass stars take longer to become unbound and the final unbound fractions at 10 Myr are lower the more virialised and smooth the regions are initially.
\subsection{2D-velocity of unbound stars from initially subvirial and virialised star-forming regions}
\subsubsection{Cumulative 2D-velocity distributions}
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{Cumulative_unbound_fig5.eps}
\caption{Cumulative distributions for unbound stars showing the 2D-velocities at 10 Myr with initial fractal dimension $D$ = 1.6 for initially subvirial and virialised clusters. The distributions for $\alpha_{\rm{vir}}$ = 0.1 (blue), $\alpha_{\rm{vir}}$ = 0.3 (orange) and $\alpha_{\rm{vir}}$ = 0.5 (green) are shown zoomed in to the central part of the curve, highlighting that for these initial conditions, the velocities of the unbound stars do not differ much between different virial ratios for the same degree of initial spatial and kinematic substructure.}
\label{fig:Cumulative2}
\end{figure}
In Fig. \ref{fig:Cumulative2}, we show the 2D-velocity cumulative distributions for unbound stars from initially subvirial/virialised regions ($\alpha_{\rm{vir}} \leq$ 0.5) with a high level of initial substructure ($D$ = 1.6) at 10 Myr. As we have seen for all (bound and unbound) stars in Fig. \ref{fig:Cumulative1}, the cumulative distributions in initially subvirial/virialised simulations are very similar for all four initial fractal dimensions. The distributions of unbound stars in Fig. \ref{fig:Cumulative2} paint a similar picture for a fractal dimension of $D$ = 1.6 (initially highly substructured regions). Even for these initially substructured regions, where we see a more dynamic early evolution (i.e. violent relaxation and initial collapse), it is difficult to distinguish between different initial virial ratios at the end of the simulations.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{Cumulative_unbound_fig6.eps}
\caption{Cumulative distributions for unbound stars showing the 2D-velocities at 10 Myr for an initially highly substructured and subvirial region with $D$ = 1.6 and $\alpha_{\rm{vir}}$ = 0.1 (blue) and an initially almost smooth and virialised region with $D$ = 2.6 and $\alpha_{\rm{vir}}$ = 0.5 (green). The comparison illustrates that the more substructured and subvirial a star-forming region is initially, the faster the unbound stars escape.}
\label{fig:Cumulative3}
\end{figure}
\citet{RN3} analysed the spatial and velocity distributions of very different initial conditions after 4 Myr with a smaller, but similar, set of initial conditions. He found that the cumulative velocity distributions differ between the initial conditions and that the initially moderately subvirial, substructured simulations result in higher-velocity unbound stars compared with initially virialised simulations with a low level of substructure. Fig. \ref{fig:Cumulative3} illustrates the cumulative velocity distributions of very different initial conditions after 10 Myr: initially highly substructured and highly subvirial simulations ($D$ = 1.6, $\alpha_{\rm{vir}}$ = 0.1, blue) compared to simulations with a low level of substructure that are initially virialised ($D$ = 2.6, $\alpha_{\rm{vir}}$ = 0.5, green). We also find that unbound stars from substructured, subvirial regions are moving at higher 2D-velocities (after 10 Myr); however, the differences between the distributions are not quite as large as in \citet{RN3}. This highlights that cumulative velocity distributions can only distinguish between vastly different initial spatial and velocity conditions.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Violin_fig7.eps}
\caption{Violin plots showing the 2D-velocity distributions of unbound stars at 10 Myr from all initially subvirial ($\alpha_{\rm{vir}}$ = 0.1 (blue), $\alpha_{\rm{vir}}$ = 0.3 (orange)) and virialised ($\alpha_{\rm{vir}}$ = 0.5 (green)) regions. The violins are scaled by count: the wider a violin is at any point, the more stars in our regions have this 2D-velocity. The larger the violins overall, the more stars have become unbound during the simulation time. The thick vertical bar in the centre shows the interquartile range, with the white dot representing the median. The long vertical line represents the 95 per cent confidence interval.}
\label{fig:Violin}
\end{figure*}
\subsubsection{Violin plots of 2D-velocity distributions}
Violin plots are a data visualisation technique that combines a box plot with a density trace or a kernel density estimate \citep{RN207}. Like box plots, violin plots show the median and interquartile range of a variable, as well as any asymmetries and outlier data. They can be useful when comparing distributions of a variable (2D-velocity) over different categories (initial conditions for star-forming regions). Unlike box plots, violin plots include all data from the underlying distribution and give information about the shape of the distribution. They show all peaks, their positions and amplitudes, and give insight into any clustering present in the data. The outer shape represents all data, with the widest parts corresponding to the value (i.e. 2D-velocity) with the highest probability of occurring in the population \citep{RN207}, which can also be interpreted as the most common 2D-velocity in our case.
Fig. \ref{fig:Violin} shows the 2D-velocity distributions on a log-scale for all initially subvirial and virialised regions (left to right) and all four fractal dimensions (decreasing degree of initial substructure - top to bottom) after 10 Myr. The plots include all unbound stars from 20 simulations combined and represent an average. The wider each of the violin plots is at any point, the more stars are likely to have this 2D-velocity. For each fractal dimension (in each row), the width of the violin plot is scaled by the total number of unbound stars for this initial virial ratio. For two violin plots with the same total number of unbound stars, the widest part will have the same width. However, the absolute number of stars with this velocity can be different, e.g. for fractal dimension $D$ = 2.0 (second row) the blue ($\alpha_{\rm{vir}}$ = 0.1) and green ($\alpha_{\rm{vir}}$ = 0.5) violin plots both contain a total of $\sim$4100 unbound stars from 20 simulations each, resulting in the widest part of their violin plots having the same width in Fig. \ref{fig:Violin}. Due to differences in the distributions, there are $\sim$80 more stars at the most common velocity for the initially virialised (green) violin plot.
The thick vertical bar in the centre represents the interquartile range, with the white dot representing the median. The thin long vertical line represents the 95 per cent confidence interval. We use a bandwidth following the \citet{RN289} reference rule to smooth our data for the violin plots\footnote{https://seaborn.pydata.org/generated/seaborn.violinplot.html}. The violin plots are cut at the low-velocity end and only show the actual data points there, instead of the tails of the underlying Gaussian kernel density estimate. This allows us to identify the lowest actual 2D-velocity directly from the plot and avoids the appearance of negative 2D-velocities.
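A sketch of how such a plot can be produced with seaborn follows (our own illustration; the parameter names follow older seaborn releases, and the data frame is hypothetical):
\begin{verbatim}
import numpy as np, pandas as pd, seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical frame: one row per unbound star, with its initial
# virial ratio and 2D-velocity in km/s.
df = pd.DataFrame({"virial": np.repeat([0.1, 0.3, 0.5], 300),
                   "v2d": np.random.lognormal(0.3, 0.8, 900)})

ax = sns.violinplot(data=df, x="virial", y="v2d",
                    scale="count",  # width proportional to star counts
                    cut=0,          # do not extend past the data range
                    bw="scott")     # kernel bandwidth reference rule
ax.set_yscale("log")
ax.set_ylabel(r"2D-velocity (km s$^{-1}$)")
plt.show()
\end{verbatim}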
\begin{figure*}
\centering
\begin{minipage}[t]{1.0\columnwidth}
\centering
\vspace{0pt}
\includegraphics[width=0.95\linewidth]{Violin_fig8.eps}
\caption{Violin plots showing 2D-velocity (xy-plane) distributions of unbound stars at three simulation times for two selected initial conditions (initially subvirial, substructured (blue) and initially virialised with no substructure (green)).}
\label{fig:Violin_2D}
\end{minipage}
\hfill{}
\begin{minipage}[t]{1.0\columnwidth}
\centering
\vspace{0pt}
\includegraphics[width=0.95\linewidth]{Violin_fig9.eps}
\caption{Violin plots showing 3D-velocity distributions of unbound stars at three simulation times for two selected initial conditions (initially subvirial, substructured (blue) and initially virialised with no substructure (green)).}
\label{fig:Violin_3D}
\end{minipage}
\end{figure*}
Initially highly substructured regions ($D$ = 1.6, Fig. \ref{fig:Violin} first row) have a large number of unbound stars for all three initial virial ratios. The fastest stars are ejected from initially highly subvirial regions ($\alpha_{\rm{vir}}$ = 0.1, blue) with the peak velocity reaching $\sim$70\,km\,s$^{-1}$. These regions have fewer unbound stars ($\sim$260 per simulation) in total and fewer stars at similar velocities with a wider spread of velocities around $\sim$1\,km\,s$^{-1}$ compared to the two higher virial ratio scenarios. Despite these differences, the median velocity is similar ($\sim$1.5\,km\,s$^{-1}$) to the other two scenarios ($\sim$1.3\,km\,s$^{-1}$ - both for $\alpha_{\rm{vir}}$ = 0.3 and 0.5). A large number of unbound stars from highly substructured, moderately subvirial regions ($\alpha_{\rm{vir}}$ = 0.3, orange) move at a similar 2D-velocity of $\sim$1\,km\,s$^{-1}$ after 10 Myr, creating noticeable arms in the violin plots. The total number of unbound stars increases to $\sim$300 per simulation. The arms become most pronounced in the initially virialised case ($\alpha_{\rm{vir}}$ = 0.5, green) with $\sim$370 unbound stars per simulation. Despite the increase in the total number of unbound stars, the most common velocity remains around $\sim$1\,km\,s$^{-1}$. The higher the initial virial ratio in initially highly substructured regions, the more likely it is that unbound stars are moving with more similar velocities, whereas unbound stars are more evenly spread over different velocities in initially more subvirial regions.
With a lower level of initial substructure ($D$ = 2.0, second row) the shape of the distributions changes for all three initial virial ratios. The shape of the velocity distributions of the two initially subvirial scenarios ($\alpha_{\rm{vir}}$ < 0.5) is now almost identical. The violin plot for highly subvirial regions is wider than the moderately subvirial scenario, meaning more stars become unbound (20 more stars per simulation). We see the pronounced arms in the violin plots now also for highly subvirial regions with less spread in the 2D-velocities. The fastest stars from the two subvirial regions now only reach $\sim$30\,km\,s$^{-1}$ and their median velocities are almost identical ($\sim$1.3\,km\,s$^{-1}$). In initially virialised regions ($\alpha_{\rm{vir}}$ = 0.5, green), the arms in the 2D-velocity become even more pronounced at $\sim$1\,km\,s$^{-1}$. The maximum velocity is lower ($\sim$13\,km\,s$^{-1}$) than in the subvirial cases ($\sim$30\,km\,s$^{-1}$), however the median velocity remains similar ($\sim$1.2\,km\,s$^{-1}$).
In regions with little or no initial substructure ($D \geq$ 2.6, third and fourth row), initially highly subvirial regions ($\alpha_{\rm{vir}}$ = 0.1) show a similar violin shape to the more substructured regions ($D$ = 2.0) and also have a similar number of unbound stars ($\sim$200-210 unbound stars per simulation). The sizes of the violins shrink considerably (i.e. fewer unbound stars) for initially moderately subvirial ($\alpha_{\rm{vir}}$ = 0.3) and virialised ($\alpha_{\rm{vir}}$ = 0.5) regions and we see $\sim$90-130 unbound stars per simulation. This indicates a much less dynamical early evolution with the number of unbound stars only $\sim$30 per cent of what they are in simulations with the highest level of initial substructure. Despite this, the violins retain their overall familiar shape of having arms around the most common velocity of $\sim$1\,km\,s$^{-1}$ and a median velocity ($\sim$1.2-1.3\,km\,s$^{-1}$), which is similar to all other initial conditions.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Violin_fig10.eps}
\caption{Violin plots showing the 2D-velocity distributions of unbound stars at 10 Myr split by mass class (low/intermediate-mass - left half, high-mass - right half) from all initially subvirial ($\alpha_{\rm{vir}}$ = 0.1 (blue), $\alpha_{\rm{vir}}$ = 0.3 (orange)) and virialised ($\alpha_{\rm{vir}}$ = 0.5 (green)) regions. All plots are scaled to have the same width, as there is only a very small number of unbound high-mass stars. The widest part of each violin half represents the 2D-velocity with the highest probability. Dashed lines represent the median and the interquartile range; the 95 per cent confidence interval is no longer shown.}
\label{fig:Violin_mass}
\end{figure*}
Our unbound definition is based on stars reaching the escape velocity (total positive energy) from the star-forming regions, which is $\sim$3\,km\,s$^{-1}$ in our simulated regions. In Fig. \ref{fig:Violin} we see that the minimum 2D-velocity of unbound stars can be as low as $\sim$0.03\,km\,s$^{-1}$ after 10 Myr. Once unbound stars leave the denser parts of a star-forming region, they interact with fewer or no other stars and slow down gradually. However, the apparent slow-down in our simulations by up to two orders of magnitude is likely due to projection effects. Fig. \ref{fig:Violin_2D} shows violin plots for two very different initial conditions (blue - initially highly subvirial, substructured and green - initially virialised, no substructure) at three different times during the simulations. Already after 1 Myr, a low-velocity tail forms in 2D-space that extends to velocities an order of magnitude lower than the escape velocity. In full 3D-velocity space in Fig. \ref{fig:Violin_3D}, we see that after 1 Myr the lowest velocities are only 1-2\,km\,s$^{-1}$ lower than the escape velocity. This suggests that unbound stars slow down, but not to the extent suggested by the 2D-velocities.
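As a rough consistency check (our own back-of-envelope estimate, not a value taken from the simulations), the escape velocity from a region of total mass $\sim$600 M$_{\sun}$ and radius 1 pc is of the quoted order:
\begin{verbatim}
G = 4.301e-3                   # pc (km/s)^2 / M_sun
M, R = 600.0, 1.0              # typical total mass (M_sun), radius (pc)
v_esc = (2 * G * M / R)**0.5   # ~2.3 km/s, of order the quoted ~3 km/s
\end{verbatim}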
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Runaway_fig11.eps}
\caption{Runaway stars (2D-velocity > 30\,km\,s$^{-1}$) by mass over the simulation time for the four fractal dimensions and the three different initial virial ratios: subvirial ($\alpha_{\rm{vir}}$ = 0.1 (blue) and $\alpha_{\rm{vir}}$ = 0.3 (orange)) and virialised ($\alpha_{\rm{vir}}$ = 0.5 (green)). The y-axis is limited to 1 M$_{\sun}$, as all of our runaway stars have very low mass.}
\label{fig:Runaway}
\end{figure*}
This 2D-projection effect could affect cluster membership identification when observing proper motion (or 1D-radial velocity) in isolation. Depending on their position relative to the cluster, these ``slow'' unbound stars could be identified as not having originated from the cluster at all due to being too far away, or as still bound due to their central location in the star-forming region. However, our simulations suggest that only $\sim$1 per cent of these unbound stars with low 2D-velocities are located in the central parts of star-forming regions after 10 Myr. This limits the extent of mistakenly assigning membership to ``slow'' unbound stars when only proper motion information is available.
In Fig. \ref{fig:Violin_mass} we use split violin plots to show the 2D-velocities separately for the two mass-classes. The plots are now scaled to the same width as we have at most $\sim$40 unbound high-mass stars compared with over 7000 lower-mass unbound stars from a set of 20 simulations. The widest part of each half still represents the 2D-velocity with the highest probability of occurring. Dashed lines represent the median and the interquartile range, the 95 per cent confidence interval is no longer identified on the plots. The violin plots are again cut at the low-velocity end and only show the actual data points, instead of the tails of the underlying Gaussian kernel density estimate. This allows us to identify the lowest actual 2D-velocity directly from the plot and avoids the appearance of negative 2D-velocities.
In Fig. \ref{fig:Violin_mass} we see that the shape of the low/intermediate-mass violins is nearly identical to the shape of the total population of unbound stars in Fig. \ref{fig:Violin} as most unbound stars are lower mass. Due to the low number of unbound high-mass stars the velocity distributions of unbound high-mass stars can have a jagged outline depending on the bandwidth used. We use the same bandwidth setting \citep[following][]{RN289} as in Fig. \ref{fig:Violin} resulting in the right half (unbound high-mass stars) of our split violin plots in Fig. \ref{fig:Violin_mass} appearing as a smooth distribution despite the small sample size. A small sample size can make conclusions from violin plots unreliable and we limit our interpretation of them to general differences in median, minimum and maximum velocity between the two mass-classes. To gain more insight into the velocity distributions of unbound high-mass stars using violin plots would require an increase in the sample size, i.e. a much higher number of simulations.
For all initial condition scenarios at 10 Myr, high-mass unbound stars have a higher median (and interquartile range) than the low/intermediate-mass stars and also a much higher minimum 2D-velocity. The mechanism for high-mass stars to become unbound is different to that of low/intermediate-mass stars. High-mass stars will only become unbound from our star-forming regions after a dynamical interaction with other massive stars in multiples. These dynamical interactions make unbound high-mass stars move faster on average; however, the fastest stars are in fact from the low-mass end. The differences in 2D-velocities between the mass classes are present in all initial condition combinations, and are thus not affected by the initial spatial or velocity structure in the star-forming regions.
\subsection{Runaway and walkaway stars}
Finally, we analyse how effective star-forming regions with different initial conditions are at ejecting runaway and walkaway stars. We only use 2D-velocity, with the lower boundary value of 30\,km\,s$^{-1}$ \citep[e.g.][]{RN255, RN276, RN190, RN137} for our runaway definition and velocities between 5 and 30\,km\,s$^{-1}$ for walkaways \citep{RN137, RN136}.
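In code, this classification is a simple threshold cut on the tangential speed (a sketch; the boundary values are those defined above):
\begin{verbatim}
import numpy as np

def classify(v2d):
    # Label stars by 2D-speed: walkaway 5-30 km/s, runaway > 30 km/s.
    labels = np.full(v2d.shape, "other", dtype=object)
    labels[(v2d >= 5.0) & (v2d <= 30.0)] = "walkaway"
    labels[v2d > 30.0] = "runaway"
    return labels
\end{verbatim}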
Fig. \ref{fig:Runaway} shows all stars from 20 simulations per initial condition moving with a 2D-velocity (xy-plane) above 30\,km\,s$^{-1}$. All of them are from the low end of the mass spectrum; not a single runaway star is more massive than 0.5 M$_{\sun}$. We have the highest number of runaway stars from initially highly substructured, subvirial regions ($\alpha_{\rm{vir}}$ = 0.1, $D$ = 1.6), regardless of the choice of 2D-plane. Only the fastest one is present in all three 2D-planes and is moving with a 2D-velocity between 50 and 70\,km\,s$^{-1}$ depending on the choice of plane. The other two runaways have lower velocities between 30 and 40\,km\,s$^{-1}$. With at most three ejected runaways from a set of 20 simulations, we see that regardless of the initial velocity or spatial structure, runaway stars are rare from our chosen initial conditions.
\begin{figure*}
\centering
\begin{minipage}[t]{0.95\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{Walkaway_fig12a.eps}
\end{minipage}
\begin{minipage}[t]{0.95\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{Walkaway_fig12b.eps}
\end{minipage}
\begin{minipage}[t]{0.95\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{Walkaway_fig12c.eps}
\end{minipage}
\caption{Walkaway stars (2D-velocity: 5-30\,km\,s$^{-1}$) by mass (using a log-scale) over the simulation time for the four fractal dimensions and the three different initial virial ratios: subvirial ($\alpha_{\rm{vir}}$ = 0.1 (blue, top row) and $\alpha_{\rm{vir}}$ = 0.3 (orange, middle row)) and virialised ($\alpha_{\rm{vir}}$ = 0.5 (green, bottom row)). A few stars (single points) are only identified as walkaways for a few snapshots; this is because they are ejected close to the lower walkaway velocity boundary and slow down to below it shortly after ejection.}
\label{fig:Walkaway}
\end{figure*}
Going to walkaway velocities (5-30\,km\,s$^{-1}$) produces a few high-mass walkaways and a large number of low-mass walkaways across all initial conditions. Fig. \ref{fig:Walkaway} shows all walkaways from the 20 simulations for each initial condition set. The more violent the early evolution of a star-forming region, the higher the number of walkaway stars. In the most violently evolving initial condition set-up (initially highly substructured, $D$ = 1.6, and highly subvirial, $\alpha_{\rm{vir}}$ = 0.1), we have on average $\sim$0.5 high-mass walkaways and $\sim$20 low/intermediate-mass walkaways per simulation.
The lower the initial level of substructure (larger fractal dimension $D$) the lower the overall number of walkaway stars, with initially more subvirial regions (Fig. \ref{fig:Walkaway} top row) producing more walkaway stars, which are also ejected earlier in the simulations. We see a number of temporary walkaways that appear as walkaways only for a few snapshots. These are stars ejected just at the minimum walkaway velocity. After ejection they slow down and disappear from our plots once they drop below 5\,km\,s$^{-1}$ (minimum walkaway velocity), however this does not mean that they have been recaptured by the star-forming region. Initially virialised star-forming regions with no substructure ($\alpha_{\rm{vir}}$ = 0.5, $D$ = 3.0 - bottom right panel) produce on average only 2 low/intermediate-mass walkaways per simulation. This is an order of magnitude fewer walkaway stars than in the initial condition scenario (initially highly substructured ($D$ = 1.6), highly subvirial ($\alpha_{\rm{vir}}$ = 0.1) - top left panel) that produces the largest number of walkaway stars.
\section{Discussion}
We summarise the results of our $N$-body simulations as follows. Cumulative velocity distributions of star-forming regions with different initial conditions have limited usefulness in clearly distinguishing between different initial spatial and velocity structure. When comparing the long-term evolution of regions with different levels of initial substructure, regions with high levels of initial substructure evolve very quickly kinematically, with supervirial regions (unbound by definition) showing the fastest 2D-velocities. The cumulative velocity distributions of unbound stars from initially subvirial and virialised simulations are difficult to distinguish after 10 Myr and only show differences for extremely different initial conditions (see Fig. \ref{fig:Cumulative3}).
The unbound fraction differs considerably for different combinations of initial spatial and velocity structure. This suggests that the unbound population around young, bound star clusters could possibly be used to draw conclusions about the initial conditions. Around initially smooth ($D$ = 3.0), virialised ($\alpha_{\rm{vir}}$ = 0.5) star-forming regions, we find a low number of ejected stars (slow walkaways, but no runaways) and virtually no unbound high-mass stars after 10 Myr. Around initially substructured, subvirial regions that have undergone violent relaxation, we find a large number of unbound low/intermediate-mass stars. We also find a few high-mass ejected stars (at walkaway velocities) and one low-mass runaway star in three of the 20 simulations. The unbound fractions are possibly influenced by our choice of initial density as higher densities increase the likelihood of encountering and interacting with other stars.
Initial densities can differ greatly from those currently observed due to the amount of dynamical evolution that a region undergoes. The level of spatial substructure in a region can constrain the dynamical evolution of regions with different initial densities: the higher the initial density, the quicker substructure is erased \citep{RN8}. Our simulated star-forming regions have been set up with a high, median local initial density (10$^{3}$-10$^{4}$ M$_{\sun}$ pc$^{-3}$).
After about 1 Myr, regions with initial spatial substructure have evolved into smooth, centrally concentrated regions, whose densities can be directly compared to observed star-forming regions. The density in our simulations after a few Myr is 10$^{1}$-10$^{3}$ M$_{\sun}$ pc$^{-3}$ \citep{RN8}, comparable to many nearby star-forming regions where observed present-day densities do not exceed $\sim$400 M$_{\sun}$ pc$^{-3}$ \citep[e.g.][]{RN277}.
High-mass stars are less likely to become unbound than low/intermediate-mass stars if a region is not initially very subvirial. When they do escape from their birth environment, they do so at higher velocities and become at least walkaway stars (> 5\,km\,s$^{-1}$). With our chosen initial conditions, high-mass stars do not reach the velocity regime of runaway stars. Only the evolution of star-forming regions that are initially subvirial ($\alpha_{\rm{vir}}$ < 0.5) and/or substructured ($D \leq$ 2.0) is dynamic enough to produce any runaway stars, all of which are low-mass. This is in apparent contrast to observations, where, due to observational bias, predominantly high-mass runaways are found \citep{RN263}, as they are more luminous and easier to observe. Historically, the definition of runaway stars has been based on OB stars \citep{RN67}; following \citet{RN263}, we suggest extending this definition to lower-mass stars. Lower-mass stars appear to reach runaway velocities more often than higher-mass stars, and such stars could be found around many young star-forming regions when testing our predictions with \textit{Gaia} DR2.
Using data from \textit{Gaia} DR1 \citep{GaiaDR1}, \citet{RN294} report that two of the most massive stars (HD46223 and HD46106) in NGC2244 are moving away from each other and from the centre of this young cluster at a larger velocity than the other cluster stars. They suggest that HD46223 has been ejected from the cluster, possibly due to dynamical interactions with other massive stars in the centre. The inferred velocity of 1.38\,km\,s$^{-1}$ from its proper motion \citep{RN294} is far below the lower velocity boundary for walkaway stars, and it is unclear if this star is actually unbound. Our simulated star-forming regions (1000 single stars) have an escape velocity of $\sim$3\,km\,s$^{-1}$. NGC2244 is estimated to have $\sim$2000 members \citep{RN295}, suggesting that HD46223 might not have reached escape velocity and might still be bound to the cluster despite its apparent ejection. In our simulations we also see massive stars moving outwards after dynamical interactions at velocities higher than their surroundings. If they are moving more slowly than the escape velocity, they will remain bound to the cluster, slow down and eventually return in the direction of the cluster centre.
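As a rough consistency check of the quoted escape velocity, the snippet below evaluates $v_{\rm esc}=\sqrt{2GM/R}$; the adopted mass and radius are illustrative values only (of the order of 1000 stars with a mean mass of $\sim$0.5 M$_{\sun}$ within $\sim$0.5 pc), not the exact parameters of our simulations.
\begin{verbatim}
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def v_escape(mass_msun, radius_pc):
    """Escape velocity in km/s for total mass (Msun), radius (pc)."""
    return np.sqrt(2.0 * G * mass_msun / radius_pc)

print(v_escape(500.0, 0.5))  # ~2.9 km/s, close to the quoted ~3 km/s
\end{verbatim}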
Violin plots show that the velocity distributions do indeed differ between initial conditions, particularly when the regions are initially highly substructured. These distributions also indicate that the vast majority of low/intermediate-mass stars become unbound at just around the escape velocity. We show that the 2D-velocity appears to underestimate the full 3D-velocity for a proportion of unbound stars. This can have implications for membership determination in young star-forming regions, where the full velocity parameter space is not available. The \textit{Gaia} DR2 data set contains a much larger number of stars with only proper-motion data, lacking radial velocities for many fainter stars. If the 2D-velocity is indeed an underestimate of the full space velocity for some stars, we might mistakenly assign cluster membership to stars with slow proper motions or be unable to trace stars back to their birth cluster.
Escaping, ejected or unbound stars from simulations have been studied previously \citep[e.g.][]{RN239,RN46,RN3,RN235,RN241,RN222}. \citet{RN3} found a similar connection between unbound stars (i.e. number, velocity, spatial distribution) and the initial substructure and virial ratio with a more limited set of initial conditions. Other studies \citep{RN239,RN241} used Plummer spheres \citep{RN198} to set up the initial spatial distribution of the clusters and included primordial binaries. The conclusion from these studies was that the number and mass fraction of unbound stars depend strongly on the initial cluster radius or initial density and to a lesser extent on the parameters of the primordial binaries \citep{RN239, RN241} or the initial virial ratio \citep{RN239}. With their inclusion of primordial binaries, the results of these studies are not directly comparable to ours.
Our results show that differences in the initial spatial substructure can have a considerable effect on the fraction, the velocities and the masses of unbound stars. Due to the lack of stellar evolution in our short simulation time of 10 Myr, we miss the effects of supernova kicks causing stars to become unbound via the binary supernova ejection mechanism \citep{RN67}. In our simulations, binaries only form dynamically (i.e. are not present from the beginning of our simulations) and we may therefore be underestimating the impact of the dynamical ejection mechanism \citep{RN189}, as we only find a few lower-mass runaway stars. \citet{RN235} showed that a higher fraction of primordial binaries increases the number of higher-velocity (20-100\,km\,s$^{-1}$) stars.
\section{Conclusions}
In this paper, we present $N$-body simulations of star-forming regions set up with a range of different initial spatial and velocity structures. We investigate whether the dynamical evolution results in differences in the unbound population after 10 Myr. The conclusions from our simulations are summarised as follows.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item Cumulative 2D-velocity distributions of all stars in simulated star-forming regions cannot provide strong insights into the long-term evolution of star-forming regions with differing initial spatial and velocity structure. When focussing on unbound stars, clear differences in the cumulative distributions are only found when comparing vastly different initial conditions.
\item Unbound fractions of stars of different masses show clear differences between the initial conditions and could prove useful to distinguish between initial spatial and velocity structures. Only when a region is initially very subvirial can we expect to find a higher fraction of unbound high-mass stars than low/intermediate-mass stars in the vicinity of the region.
\item If high-mass stars manage to escape their birth region, they are likely to reach at least walkaway velocities. However, based on our simulations, not every young star-forming region will create a high-mass runaway or walkaway star.
\item Most low/intermediate-mass stars leave the regions at velocities just above the escape velocity. However, the fastest stars from our simulations are also low/intermediate-mass stars. We see a number of low/intermediate-mass walkaway stars from every initial condition set. This number increases for regions that evolve more dynamically (more initial substructure and lower virial ratio). As a result, we should find at least a small number of these stars around virtually every young and high-density star-forming region. The fact that most observed fast stars are still high-mass is very likely due to observational bias/limitations. This changes with \textit{Gaia} DR2, where five-parameter astrometry for stars down to sub-solar masses is already available for nearby star-forming regions \citep{RN238}. We will use these data to search for walkaway and runaway stars from young star-forming regions in a future paper.
\end{enumerate}
\section*{Acknowledgements}
CS acknowledges PhD funding from the 4IR STFC Centre for Doctoral Training in Data Intensive Science and thanks ESTEC/ESA in Noordwijk, in particular the \textit{Gaia} team, for hosting her as a visiting researcher for six months. RJP acknowledges support from the Royal Society in the form of a Dorothy Hodgkin Fellowship.
\bibliographystyle{mnras}
|
2,869,038,154,369 | arxiv | \section{Introduction} It is expected that space-time will have quantum fluctuations in it$[1]$. These fluctuations are expected to dominate as we measure space-time at smaller and smaller lengths, finally becoming of order one at the Planck length $[2]$. Thus space-time acquires a foam-like structure near the Planck length, called space-time foam$[3]$. Two models have been proposed so far to study space-time foam. The first model proposed to study space-time foam was called the wormhole picture$[4]$. In it we take space-time to be multiply connected, so the space-time manifold has a large value of the first Betti number $B_1$; we take $B_2 = 0$ in this case $[5]$. However, it was realized that this picture leads to the wrong value of the Q.C.D. $\theta$-parameter $[6]$. Macroscopic wormholes also lead to the wrong value of the black hole entropy $[5]$. This picture was therefore discarded and an alternative picture of space-time foam was proposed by S.W. Hawking, called the quantum bubble picture $[5]$. In it we take $B_1 = B_3 = 0$ together with a very large value of the second Betti number $B_2$; thus the space-time manifold is simply connected in this case. The quantum bubble picture predicts the correct value of the Q.C.D. $\theta$-parameter. In this picture there is also an elegant way to describe black hole evaporation: macroscopic real black holes evaporate down to Planck size by emitting Hawking radiation; at this stage they are left with no energy or charge and then disappear into a sea of virtual black holes. Thus this picture seems to be a more realistic picture of space-time foam.\\
In this paper we will analyze some aspects of the quantum bubble picture. We begin by reviewing the basic bubbles that are used in the construction of space-time foam.
\section{Basic Quantum Bubbles} There are five basic bubbles that can be taken to be the building blocks of space-time foam, i.e., $S^2 \times S^2\ (\chi=4, \tau=0)$, $CP^2\ (\chi=3, \tau=1)$, $\overline{CP^2}\ (\chi=3, \tau=-1)$, $K3\ (\chi=24, \tau=16)$ and $\overline{K3}\ (\chi=24, \tau=-16)$. However, the need for space-time to have a spin structure rules out $CP^2$ and $\overline{CP^2}$, and the contributions of $K3$ and $\overline{K3}$ are suppressed by the Atiyah-Singer theorem as they contain fermion zero modes$[6]$. So we are left with $S^2 \times S^2$ as the basic building block of space-time foam. $S^2 \times S^2$ has the symmetry group $SO(3) \times SO(3)$, whereas $S^2 \times S^2 - \{p\}$ has the symmetry group $U(1) \times U(1)$. Here $p$ is a point on the bubble. If we take a spatial section of $S^2 \times S^2 - \{p\}$ we get $S^2 \times S^1 - \{p\}$. The topology of $S^2 \times S^1 - \{p\}$ looks like the mouths of two wormholes. We also know that a spatial section of a black hole looks like a wormhole, so the topology $S^2 \times S^2 -\{p\}$ resembles the topology of a pair of black holes. Further, we know that in the Ernst solution a pair of black holes can be created by electric or magnetic fields$[7]$, similar to pair creation in ordinary particle physics. Now, as the quantum bubbles are basically quantum fluctuations in the metric, we can interpret the bubble with topology $S^2 \times S^2 - \{p\}$ as a virtual black hole loop. Here we deform the loop on which the black hole moves into an oval, with the bottom representing the creation and the top representing the annihilation of a pair of black holes.
\section{Virtual Black Holes} We need electric or magnetic charges to produce a pair of black holes from the Ernst solution. However, the quantum bubbles form even in the absence of any field. The reason is that virtual black holes are not classical geometries and hence need not satisfy Einstein's equations. They are rather solutions of the Wheeler-DeWitt equation, whose solution is a wave function on the space of geometries. The Wheeler-DeWitt equation is given by $[8]$
\begin{equation}
\left[ m_p G_{ijkl}\frac{\delta^2}{\delta h_{ij} \delta h_{kl}} - m_p h^{\frac{1}{2}}\, {}^{3}K + h^\frac{1}{2}T^m_m\left[\phi_o,\frac{-\delta}{\delta \phi_o}\right]\right] \psi(h_{ij},\phi_o)=0
\end{equation}
Here $m_p$ is the Planck mass, $T^m_m$ is the energy-momentum tensor, ${}^{3}K$ is the intrinsic curvature of the three-metric $h_{ij}$, and $G_{ijkl}$ is given by
\begin{equation}
G_{ijkl} = \frac{1}{2}\sqrt{h}(h_{ik}h_{jl} + h_{il}h_{jk} - h_{ij}h_{kl})
\end{equation}
They also need to satisfy the following momentum constraint equation
\begin{equation}
\left[-2im_p\left[\frac{\delta}{\delta h_{ij}}\right]_{|j} +T^m_m\left[\phi_o,\frac{\delta}{\delta \phi_o}\right]\right]\psi(h_{ij},\phi_o)=0
\end{equation}
Now we can expand $\psi(h_{ij},\phi_o)$ in terms of creation and annihilation operators.
\begin{equation}
\psi(h_{ij},\phi_o) = \sum_i (\beta_{1i}a_i + \beta^{\dagger}_{2i}a_i^{\dagger})
\end{equation}
And
\begin{equation}
\overline{\psi}(h_{ij},\phi_o) = \sum_i (\beta_{2i}a_i + \beta^{\dagger}_{1i}a_i^{\dagger})
\end{equation}
Here $a_i^{\dagger}$ is the creation operator for a black hole and $\beta_{1i}$ and $\beta_{2i}$ form a complete set of bases for the conformally invariant wave equation. Now we can form a closed
loop from these creation and annihilation operators. If $|0 \rangle$ is the vacuum state of geometries from which we can generate the black hole geometry, and $ \alpha$ is the amplitude for the formation of a quantum bubble, then we can write
\begin{equation}
\alpha = \langle0|\overline{a^{\dagger}(h_{ij},\phi_o)a(h'_{ij}, \phi'_o)} \ \
\overline{a^{\dagger}(h'_{ij},\phi'_o) a(h_{ij}, \phi_o)} |0 \rangle
\end{equation}
However, it is difficult to solve the Wheeler-DeWitt equation as it is faced with the problem of factor ordering: while it does not matter in which order momenta appear in the classical equations, it does matter once they become operators. Nevertheless, we can gain some insight into virtual black holes from the fact that they satisfy the Wheeler-DeWitt equation. That they satisfy the Wheeler-DeWitt equation and need not satisfy Einstein's equations means that they need not carry any of the electric or magnetic charges that would be required by the Ernst solution for pair creation of black holes. Even though they are not classical geometries, we can expect that they have the basic features of real black holes, such as an event horizon, and hence that the no-hair theorem holds for them as well. A particle that falls into a virtual black hole loses all information except that which is allowed by the no-hair theorem$[9]$. So this particle will be radiated as a combination of some other particles before the annihilation of the pair of virtual black holes. This causes loss of information to take place universally in space-time as an intrinsic property of space-time.
\section{Loss of Quantum Coherence} It is well known that in the case of real black holes the falling of certain states into the black hole leads to non-unitary evolution and hence loss of quantum coherence. A similar effect happens for virtual black holes. There is an elegant formalism, the Superscattering formalism given by S.W. Hawking, to describe it$[5]$. In it we start by taking complete sets of creation operators for states of a quantum field theory at past and future infinity. Let these sets of creation operators at past and future infinity be $P_i$ and $F_i$ respectively. We define a Superscattering operator as one that maps a density matrix at past infinity to one at future infinity. So if $\rho_-$ and $\rho_+$ are the density matrices at past and future infinity respectively and ${\$}$
is the Superscattering operator, then we can write
\begin{equation}
\rho_+ = {\$}\rho_-
\end{equation}
If the initial state is $\vert \psi_1 \rangle \langle \psi_2 \vert$ and the final state is $\vert \psi_3 \rangle \langle \psi_4 \vert$, then we have
\begin{equation}
{\$} = \langle P_2^{\dagger} F_3 F_4^{\dagger}P_1 \rangle
\end{equation}
If the set of creation operators are a complete set at both past and future infinity then we have
\begin{equation}
\langle P_2^{\dagger} F_3 F_4^{\dagger}P_1 \rangle =\langle P_2^{\dagger} F_3 \rangle \langle F_4^{\dagger}P_1 \rangle
\end{equation}
Here $\langle P_2^{\dagger} F_3 \rangle $ is the $S$-matrix and $\langle F_4^{\dagger}P_1 \rangle$ is its adjoint. So we can write
\begin{equation}
{\$} = SS^{\dagger}
\end{equation}
Thus we can write
\begin{equation}
{\$}\rho = S\rho S^{\dagger}
\end{equation}
Then using the conservation of probability we get
\begin{equation}
Tr({\$}\rho)=1 \ \mbox{if} \ \ Tr(\rho)=1
\end{equation}
Then we have
\begin{equation}
1= Tr({\$}\rho)=Tr(SS^{\dagger}\rho)= Tr(\rho)
\end{equation}
This will hold if we have
\begin{equation}
SS^{\dagger} = 1
\end{equation}
This means that the $S$-matrix is unitary.
Now, to see how the purity of the density matrix evolves, we take the trace of the square of the future density matrix, which is given by
\begin{equation}
Tr(\rho_+^2)=Tr(({\$}\rho_-)({\$}\rho_-))
\end{equation}
As the Superscattering matrix factorizes, we can write
\begin{equation}
Tr(({\$}\rho_-)({\$}\rho_-)) = Tr(S\rho_-S^{\dagger}S\rho_-S^{\dagger})
\end{equation}
This can be written as
\begin{equation}
Tr(S\rho_-S^{\dagger}S\rho_-S^{\dagger}) = Tr(S\rho^2_-S^{\dagger}) = Tr(\rho_-^2)
\end{equation}
This is the trace of the square of the past density matrix, and thus the evolution here is unitary. \\
Thus for the case where the Superscattering matrix factorizes we have unitary evolution and hence no loss of quantum coherence. However, if the creation operators do not form a complete set at past or future infinity, then the Superscattering matrix will not factorize, the evolution will be non-unitary, and quantum coherence will be lost. In the case of real black holes the $F_i$ do not form a complete set of bases, and in the case of a virtual black hole neither the $F_i$ nor the $P_i$ form complete sets of bases. So, like real black holes, virtual black holes also lead to loss of quantum coherence. As everywhere in space-time there is a certain probability for the existence of virtual black holes, we can never build a truly unitary theory, and quantum coherence is lost universally.\\
Now we take the states inside and outside the event horizon at past infinity as $|IP_i \rangle$ and $|OP_j \rangle$ and the states inside and outside the event horizon at future infinity as $|IF_k \rangle $ and $|OF_l \rangle$ respectively. Then the state at past infinity $|P \rangle$ and at future infinity $|F \rangle$ can be written as follows
\begin{equation}
|P \rangle = \lambda^{ij}|IP_i \rangle \times |OP_j \rangle
\end{equation}
And
\begin{equation}
|F \rangle = \lambda^{kl}|IF_k \rangle \times |OF_l \rangle
\end{equation}
Now the extent to which quantum coherence is lost at past and future infinity is obtained by taking the trace over $|IP_i \rangle$ and $|IF_k \rangle$ respectively.\\
So if $\tau_P$ and $\tau_F$ denote the measures of loss of quantum coherence at past and future infinity, then we have
\begin{equation}
\tau_P = 1 - Tr(\rho_-^2) = 1 - \lambda^{ij} \lambda^{ab}\overline{\lambda}_{ib}\overline{\lambda}_{aj}
\end{equation}
And
\begin{equation}
\tau_F = 1 - Tr(\rho_+^2) = 1 - \lambda^{kl} \lambda^{cd}\overline{\lambda}_{kd}\overline{\lambda}_{cl}
\end{equation}
Now if we define the total information lost as $\tau$, then it is given by
\begin{equation}
\tau = \tau_P + \tau_F
\end{equation}
If no quantum coherence is lost, then $\tau = 0$ and the Superscattering matrix factorizes. For any other value of $\tau$ the Superscattering matrix does not factorize.
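As a simple illustration (our own example, not part of the original formalism), consider a past state maximally entangled across the horizon in two modes, $\lambda^{ij} = \delta^{ij}/\sqrt{2}$. Then
\[
\lambda^{ij} \lambda^{ab}\overline{\lambda}_{ib}\overline{\lambda}_{aj} = \frac{1}{4}\sum_{j,b}\delta^{j}_{\ b}\,\delta^{b}_{\ j} = \frac{1}{2} \ , \qquad \tau_P = 1 - \frac{1}{2} = \frac{1}{2} \ ,
\]
so tracing over the interior states leaves a maximally mixed exterior state and half of the purity is lost.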
To proceed further we first review certain topological invariants in quantum gravity$[10]$. Apart from the two well-known topological invariants in quantum gravity, i.e., the signature and the Euler number of the space-time manifold, there is another topological invariant, called the topological invariant volume. The total action $\textbf{S}$ for the space-time manifold is given by
\begin{equation}
\textbf{S} = \frac{-\Lambda}{8\pi L_p^2}Vol(M,g)
\end{equation}
Here $L_p$ is the Planck length and $Vol(M,g)$ is the volume of the manifold and $\Lambda$ is the cosmological constant.\\
If we redefine the metric as follows:
\begin{equation}
g_{\mu\nu} = \frac{3}{|\Lambda|} \overline{g}_{\mu\nu} \ \ \mbox{with} \ \ \Lambda = \pm 3
\end{equation}
Then the action becomes
\begin{equation}
\textbf{S} = \frac{-9}{8\pi L_p^2 \Lambda} \overline{v}(M,\overline{g})
\end{equation}
Here $\overline{v}(M,\overline{g})$ is the topological invariant volume. We can also write the partition function in terms of $\overline{v}$ as follows:
\begin{equation}
Z = \sum_{\overline{v}}\rho(\overline{v}) exp\left[ \frac{9}{8\pi\Lambda L_p^2}\overline{v} \right]
\end{equation}
Here $\rho(\overline{v})$ is a measure of the density of topologies.\\
Now we can see how much quantum coherence is lost for a given topological invariant volume.\\
If the total flux of the wave function that enters $\overline{v}$ is $\Gamma_1$ and the total flux that leaves $\overline{v}$ is $ \Gamma_2$, then the total information lost in $\overline{v}$, $\eta(\overline{v})$, is given by
\begin{equation}
\eta(\overline{v}) = \frac{\Gamma_1 - \Gamma_2}{\Gamma_1}
\end{equation}
Now as the amplitude for formation of a virtual black hole is $\alpha$, we can write
\begin{equation}
\eta(\overline{v}) = \tau\alpha^2\overline{v}
\end{equation}
Now we can write $ \Gamma_2$ in terms of $ \Gamma_1$ as follows
\begin{equation}
\Gamma_2 = \Gamma_1( 1 - \tau \alpha^2 \overline{v})
\end{equation}
It is obvious that if no quantum coherence is lost then $ \Gamma_1 = \Gamma_2$, as $\tau=0$ in this case.
\section{Entropy of Space-time}
The fact that the virtual black hole picture endows space-time with an intrinsic entropy can be seen in various ways.\\
The fact that it leads to a mixed state implies that space-time has an entropy given by
\begin{equation}
S = \overline{v} \alpha^2\left[ Tr(\rho_- \ln\rho_-^{-1}) + Tr(\rho_+ \ln\rho_+^{-1}) \right]
\end{equation}
This vanishes for a pure state when no quantum coherence is lost.\\
Another way of looking at the intrinsic entropy of space-time is to observe that space-time acquires a coarse structure at the Planck length due to the presence of virtual black holes. As different microstates here lead to the same macrostate, i.e., the smooth space-time manifold at lengths much larger than the Planck length, space-time has an intrinsic entropy, given by
\begin{equation}
S = k \ ln W
\end{equation}
Here $W$ is the number of microstates generated by the virtual black holes and $k$ is the Boltzmann constant.\\
But probably the best way to look at the intrinsic entropy of space-time generated by virtual black holes is via black hole thermodynamics. It is well known that real black holes have entropy proportional to one quarter of the surface area of the event horizon $[9]$. If real black holes have entropy, then so do virtual black holes. This directly implies that space-time has an intrinsic entropy.
If $A$ is the area of a single virtual black hole, then we can write the intrinsic entropy of space-time per invariant topological volume as
\begin{equation}
S= \frac{AN}{4L_p^2}
\end{equation}
Here $N$ is the number of virtual black holes per invariant topological volume $\overline{v}$; it equals twice the product of the probability for the formation of a virtual black hole loop and the invariant topological volume.
\begin{equation}
N= 2\alpha^2 \overline{v}
\end{equation}
We know that the action here is proportional to the volume of the space-time manifold, so for large volumes the action also becomes large. Thus the contributions from virtual black holes with radius larger than the Planck length are suppressed in the partition function. \\
It can also be shown that the quantum fluctuations in the metric are given by $[1]$
\begin{equation}
\Delta g = -1 + \sqrt{1 +\frac{L_p^2}{L^2}}
\end{equation}
Here $\Delta g$ are the fluctuations in the metric and $L$ is the length at which we measure the geometry.
Thus at the Planck length the quantum fluctuations in the geometry become of order one, so it is not possible to define geometries at lengths smaller than the Planck length. However, if we assumed the radius of virtual black holes to be less than the Planck length, that is exactly what we would be implying. So the virtual black holes cannot have a radius smaller than the Planck length.\\
Thus the radius of the virtual black holes should be taken equal to the Planck length. So we can write the area of the quantum bubble with topology $ S^2\times S^2 - \{p\}$ as follows:
\begin{equation}
A = 16 \pi^2 L_p^4
\end{equation}
Thus we can write the intrinsic entropy of space-time per topological invariant volume as follows:
\begin{equation}
S = 8 \pi^2 L_p^2 \alpha^2 \overline{v} = \frac{8 \pi^2 L_p^2}{\tau}\eta(\overline{v})
\end{equation}
This is the intrinsic entropy of space-time calculated from black hole thermodynamics.\\
Thus it can be shown in more than one way that virtual black holes lead space-time to have an intrinsic entropy.
\section{Conclusion}
The quantum bubble picture seems to be the more correct picture of space-time foam compared to the wormhole picture. The bubble of topology $ S^2 \times S^2 -\{p\}$ is the building block of space-time foam and represents a virtual black hole loop. Virtual black holes are solutions to the Wheeler-DeWitt equation and hence need not carry any electric or magnetic charges, as would be required if they satisfied Einstein's equations. Virtual black holes lead to the loss of quantum coherence, which we have calculated in this paper. Virtual black holes also lead space-time to have an intrinsic entropy.
\section{References}
\begin{enumerate}
\item
L.G.Garay - gr-qc/9911002
\item
M.A.Markov - Quantum violation of topology in small spatial regions - Preprint P-0187 of Inst. Nucl. Res., Moscow - 1980
\item
S.W.Hawking - Nucl. Phys - B114, 349 - 1978
\item
S.W.Hawking - Phys. Rev - D37, 904 - 1988
\item
S.W.Hawking - hep-th/9510029
\item
J.Preskill, S.P.Trivedi and M.B.Wise - Phys. Lett - B223, 26 - 1989
\item
F.J.Ernst - J. Math. Phys - 17, 515 - 1976
\item
J.B.Hartle and S.W.Hawking - Phys. Rev - D28, 2960 - 1983
\item
S.W.Hawking - Commun. Math. Phys - 43, 199 - 1975
\item
S.Carlip - Class. Quantum. Grav - 15, 2629 - 1998
\end{enumerate}
\end{document}
|
2,869,038,154,370 | arxiv | \section{Introduction}
\label{sec:introduction}
Magnetic resonance~(MR) imaging has been widely utilized to diagnose patients, as it is
non-ionizing, non-invasive, and has a range of contrast mechanisms.
However, MR images do not directly provide electron density information, which is
essential for some applications such as MR-based radiotherapy
treatment planning or attenuation correction in hybrid PET/MR
scanners. A straightforward solution is to separately scan a computed
tomography~(CT) image, but this is time-consuming, costly, potentially harmful to patients,
and requires accurate MR/CT registrations. Therefore, to
avoid the CT scan, a variety of approaches have been proposed to
synthesize CT images from available MR images~\cite{chartsias2017,
hofmann2011, Roy2017, wolterink2017, zhang2018}.
For example, by using paired MR and CT atlases, atlas-based methods~\cite{hofmann2011} first register multiple atlas MR images to a subject MR image, and then the warped atlas CT images are combined to synthesize a subject CT image.
Deep learning-based methods~\cite{Roy2017} have designed different convolutional neural network~(CNN) structures to directly learn the MR-to-CT mapping.
\begin{figure*}[!tp]
\setlength{\abovecaptionskip}{2mm}
\setlength{\belowcaptionskip}{-3mm}
\centering
\subfloat[]{
\label{fig:motivation_gt}
\begin{minipage}[t]{0.225\textwidth}
\centering
\includegraphics[width=2.6cm]{Fig-motivation-gt.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:motivation_syn}
\begin{minipage}[t]{0.225\textwidth}
\centering
\includegraphics[width=2.6cm]{Fig-motivation-syn.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:motivation_diff}
\begin{minipage}[t]{0.24\textwidth}
\centering
\includegraphics[width=3.07cm]{Fig-motivation-diff.pdf}
\end{minipage}
}
\caption{\small Visual example of a cycleGAN result. We show (a) ground-truth CT image and input MR image, (b) synthetic CT image and reconstructed MR image, and (c) the relative errors between the ground-truth/synthetic CT images (upper) and the input/reconstructed MR images (lower) .}
\label{fig:motivation}
\end{figure*}
Although these methods can produce good synthetic images, they rely on
a large number of paired CT and MR images, which are hard to obtain in
practice, especially for specific MR tissue contrasts. To relax the
requirement of paired data, Wolterink et al.~\cite{wolterink2017} and
Chartsias et al.~\cite{chartsias2017} used a cycleGAN~\cite{zhu2017} for
MR-to-CT synthesis on unpaired data with promising results. They
used a CNN to learn the MR-to-CT mapping with the help of an
adversarial loss, which forces synthetic CT images to be
indistinguishable from real CT images. To ensure the synthetic CT
image correctly corresponds to an input MR image, another CNN is
utilized to map synthetic CT back to the MR domain and the
reconstructed image should be identical to the input MR image
(i.e., cycle-consistency loss).
However, due to a lack of direct constraints between the synthetic and
input images, the cycleGAN cannot guarantee structural consistency
between these two images.
As shown in Fig.~\ref{fig:motivation}, the reconstructed MR image is almost identical to the input MR image, indicating the cycle consistency is well kept, but the synthetic CT image is quite different from the ground-truth, especially for the skull region, which illustrates that the structure of the synthetic CT image is not consistent with that of the input MR image.
To overcome this, Zhang et al.~\cite{zhang2018} trained two auxiliary CNNs respectively for segmenting MR and CT images and also defined a loss to force the segmentation of the synthetic image to be the same as the ground-truth segmentation of the input image. This requires a training dataset with ground-truth segmentations of MR and CT images, which further complicates the training data requirements.
In this work, we propose a structure-constrained cycleGAN to constrain structural consistency without requiring ground-truth segmentations. By using the modality independent neighborhood descriptor~\cite{heinrich2012}, we define a structure-consistency loss enforcing the extracted features in the synthetic image to be voxel-wise close to the ones extracted in
the input image. Additionally, we use a position-based selection strategy for selecting training images instead of a completely random selection scheme. Experimental results on synthesizing CT images from brain MR images show that our method achieves significantly better results compared to a conventional cycleGAN with various metrics, and approximates the cycleGAN trained with paired data.
\begin{figure}[!tp]
\setlength{\abovecaptionskip}{2mm}
\setlength{\belowcaptionskip}{-4mm}
\centering
\includegraphics[width=9cm]{Fig-network.pdf}
\caption{\small Illustration of our proposed structure-constrained cycleGAN. Two generators (i.e., $G_{\mbox{\tiny{CT}}}$ and $G_{\mbox{\tiny{MR}}}$) learn cross-domain mappings between CT and MR domains. The training of these mappings is supervised by adversarial, cycle-consistency, and structure-consistency losses.}
\label{fig:sturcture}
\end{figure}
\section{Method}
In this section, we introduce our proposed structure-constrained cycleGAN. As shown in Fig.~\ref{fig:sturcture}, our method
contains two generators $G_{\mbox{\tiny{CT}}}$ and $G_{\mbox{\tiny{MR}}}$, which provide the MR-to-CT and CT-to-MR mappings, respectively. In addition, discriminator $D_{\mbox{\tiny{CT}}}$ is
used to distinguish between real and synthetic CT images, and
discriminator $D_{\mbox{\tiny{MR}}}$ is for MR images. Our training loss includes three
types of terms: an adversarial loss~\cite{Goodfellow2014} for matching the distribution of synthetic images to target CT or MR domain; a cycle-consistency loss~\cite{zhu2017} to prevent generators from producing synthetic images that are irrelevant to the inputs; and a structure-consistency loss to constrain structural consistency between input and synthetic images.
\subsection{Adversarial loss}
The adversarial loss~\cite{Goodfellow2014} is applied to both
generators. For the generator $G_{\mbox{\tiny{CT}}}$ and its discriminator $D_{\mbox{\tiny{CT}}}$, the
adversarial loss is defined as
\begin{equation}
\mathcal{L}_{\mbox{\tiny{GAN}}} (G_{\mbox{\tiny{CT}}}, D_{\mbox{\tiny{CT}}}) = D_{\mbox{\tiny{CT}}} (G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}}))^2 + \left( 1- D_{\mbox{\tiny{CT}}}
(I_{\mbox{\tiny{CT}}}) \right) ^2 \ ,
\end{equation}
where $I_{\mbox{\tiny{CT}}}$ and $I_{\mbox{\tiny{MR}}}$ denote the unpaired input CT and MR images.
During the training phase, $G_{\mbox{\tiny{CT}}}$ tries to generate a synthetic CT image
$G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}})$ close to a real CT image, i.e., $\max_{G_{\mbox{\tiny{CT}}}}
\mathcal{L}_{\mbox{\tiny{GAN}}} (G_{\mbox{\tiny{CT}}}, D_{\mbox{\tiny{CT}}})$, while $D_{\mbox{\tiny{CT}}}$ is to distinguish
between a synthetic CT image $G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}})$ and a real image $I_{\mbox{\tiny{CT}}}$,
i.e., $\min_{D_{\mbox{\tiny{CT}}}} \mathcal{L}_{\mbox{\tiny{GAN}}} (G_{\mbox{\tiny{CT}}}, D_{\mbox{\tiny{CT}}})$. Similarly,
the adversarial loss for $G_{\mbox{\tiny{MR}}}$ and $D_{\mbox{\tiny{MR}}}$ is defined as
\begin{equation}
\mathcal{L}_{\mbox{\tiny{GAN}}} (G_{\mbox{\tiny{MR}}}, D_{\mbox{\tiny{MR}}}) = D_{\mbox{\tiny{MR}}} (G_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{CT}}})) ^ 2 + \left( 1-
D_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{MR}}}) \right) ^2 \ .
\end{equation}
\subsection{Cycle-consistency loss}
To prevent the generators from producing synthetic images that are irrelevant to the
inputs, a cycle-consistency loss~\cite{zhu2017} is utilized for
$G_{\mbox{\tiny{CT}}}$ and $G_{\mbox{\tiny{MR}}}$ forcing the reconstructed images $G_{\mbox{\tiny{CT}}}
\left( G_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{CT}}}) \right)$ and $G_{\mbox{\tiny{MR}}} \left( G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}}) \right)$ to
be identical to their inputs $I_{\mbox{\tiny{CT}}}$ and $I_{\mbox{\tiny{MR}}}$. This loss is written as
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mbox{\tiny{cycle}}} (G_{\mbox{\tiny{CT}}}, G_{\mbox{\tiny{MR}}}) & = \Vert G_{\mbox{\tiny{CT}}} \left( G_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{CT}}}) \right) - I_{\mbox{\tiny{CT}}}
\Vert_1 \\ & \quad + \Vert G_{\mbox{\tiny{MR}}} \left( G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}}) \right) - I_{\mbox{\tiny{MR}}}
\Vert_1 \ .
\end{aligned}
\end{equation}
\begin{figure*}[!tp]
\setlength{\abovecaptionskip}{2mm}
\setlength{\belowcaptionskip}{-1mm}
\centering
\subfloat[]{
\label{fig:MIND_a}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.9cm]{Fig-MIND-a.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:MIND_b}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.9cm]{Fig-MIND-b.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:MIND_c}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.9cm]{Fig-MIND-c.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:MIND_d}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.9cm]{Fig-MIND-d.pdf}
\end{minipage}
}
\caption{\small Illustration of the MIND feature. (a) To extract the MIND feature at $x$, a patch around $x+\alpha$ is compared with a patch around $x$ for each $x+\alpha \in R_{nl}$; (b) comparison between $x$ and $x+\alpha$ of $I$ in (a) equals a comparison of $I$ and $I'(\alpha)$ at $x$; (c) the CT image paired with MR image in (a); (d) visual examples of MIND features extracted at voxels $A,B,C$ within paired MR and CT images in (a) and (c).}
\label{fig:MIND}
\end{figure*}
\subsection{Structure-consistency loss}
Since the cycle-consistency loss does not necessarily ensure structural consistency (as discussed in Sec.~\ref{sec:introduction}), our method uses an extra structure-consistency loss between the synthetic and input images.
However, as these two images are respectively in the MR and CT domains, we first map them into a common feature domain by using a modality-independent structural feature, and then measure the structural consistency between the synthetic and input images in this feature domain.
In this work, we use the modality independent neighborhood descriptor~(MIND)~\cite{heinrich2012} as the structural feature. MIND is defined using non-local patch-based self-similarity and depends on local image structure rather than intensity values. It has been previously applied to MR/CT image registration as a similarity metric.
Figure~\ref{fig:MIND}(d) shows visual examples of MIND features extracted at different voxels in MR and CT images.
In the following paragraphs, we introduce the MIND feature and our structure-consistency loss in detail.
The MIND feature extracts distinctive image structure by comparing each patch with all its neighbors in a non-local region.
As shown in Fig.~\ref{fig:MIND}(a), for voxel $x$ in image $I$, the MIND feature $F_{x}$ is an $|R_{nl}|$-length vector, where $R_{nl}$ denotes a non-local region around voxel $x$, and each component $F_{x} ^{(\alpha)}$ for a voxel $x + \alpha \in R_{nl}$ is defined as
\begin{equation}
F_{x} ^{(\alpha)} (I) = \frac{1}{Z} \exp \left(- \frac{D_{\mathcal{P}} (I, x, x+\alpha)}{V(I,x)} \right) \ ,
\end{equation}
where $Z$ is a normalization constant so that the maximal component of $F_{x}$ is 1.
$D_{\mathcal{P}} (I, x, x+\alpha)$ denotes the $L_2$ distance between two image patches $\mathcal{P}$ respectively centered at voxel $x$ and voxel $x+\alpha$ in image $I$, and $V(I, x)$ is an estimation of local variance at voxel $x$, which can be written as
\begin{eqnarray}
\label{eqn:Dp}
D_{\mathcal{P}} (I, x, x+\alpha) & = & \sum_{p \in \mathcal{P}} \left( I(x+p) - I(x+\alpha + p) \right) ^ 2 \ , \\
V(I, x) & = & \frac{1}{4} \sum_{n \in \mathcal{N} } D_{\mathcal{P}} (I, x, x+n) \ ,
\end{eqnarray}
where $\mathcal{N}$ is the 4-neighborhood of voxel $x$.
It is difficult to directly compute the operation $D_{\mathcal{P}}$ and its gradient using Eqn.~\ref{eqn:Dp} in a deep network.
Instead, as shown in Fig.~\ref{fig:MIND}(b), $D_{\mathcal{P}}$ can be equivalently computed
by using a convolutional operation as
\begin{equation}
\label{equ:gpu_implement}
D_{\mathcal{P}} (I, x, x+ \alpha) = C * (I - I'(\alpha)) ^ 2 \ ,
\end{equation}
where $C$ is an all-one kernel of the same size as patch $\mathcal{P}$, and $I'(\alpha)$ denotes $I$ translated by $\alpha$.
By doing this, the structural feature can be extracted via several simple operations and the gradients of these operations can be easily computed.
Based on the MIND feature introduced above, the structure-consistency loss in our method is defined to enforce the extracted MIND features in the synthetic images $G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}})$ or $G_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{CT}}})$ to be voxel-wise close to the ones extracted in their inputs $I_{\mbox{\tiny{MR}}}$ or $I_{\mbox{\tiny{CT}}}$, which can be written as
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mbox{\tiny{structure}}} (G_{\mbox{\tiny{CT}}}, G_{\mbox{\tiny{MR}}}) = & \frac{1}{N_{\mbox{\tiny{MR}}} |R_{nl}|} \sum_{x} \Vert
F_x (G_{\mbox{\tiny{CT}}} (I_{\mbox{\tiny{MR}}})) - F_x (I_{\mbox{\tiny{MR}}}) \Vert _1 \\
& + \frac{1}{N_{\mbox{\tiny{CT}}} |R_{nl}|} \sum_{x} \Vert
F_x (G_{\mbox{\tiny{MR}}} (I_{\mbox{\tiny{CT}}})) - F_x (I_{\mbox{\tiny{CT}}}) \Vert _1 \ , \\
\end{aligned}
\end{equation}
where $N_{\mbox{\tiny{MR}}}$ and $N_{\mbox{\tiny{CT}}}$ respectively denote the number of voxels in
input images $I_{\mbox{\tiny{MR}}}$ and $I_{\mbox{\tiny{CT}}}$, and $\Vert \cdot \Vert_1$ is the $L_1$ norm.
In this work, we use a $9 \times 9$ non-local region and a $7 \times
7$ patch for computing structure-consistency loss.
Furthermore, instead of an all-one kernel $C$, we utilize a Gaussian kernel $C_{\sigma}$ with standard deviation $\sigma = 2$ to reweight the importance of voxels within patch $\mathcal{P}$ in Eqn.~\ref{equ:gpu_implement}.
In preliminary experiments, we tried different non-local regions, patch sizes, and $\sigma$ values, but did not observe improved performance.
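To make the computation above concrete, the following is a minimal PyTorch sketch of the MIND extraction and the resulting loss; the choice of PyTorch, the function names, and the circular-shift boundary handling are illustrative assumptions rather than details of the implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def patch_ssd(img, offset, kernel):
    """D_P(I, x, x+alpha) computed as C * (I - I'(alpha))^2, Eqn. (7)."""
    shifted = torch.roll(img, shifts=offset, dims=(2, 3))
    pad = kernel.shape[-1] // 2
    return F.conv2d((img - shifted) ** 2, kernel, padding=pad)

def mind(img, patch_size=7, nl_size=9, sigma=2.0, eps=1e-6):
    # Gaussian kernel C_sigma reweighting the voxels within patch P
    c = torch.arange(patch_size, dtype=img.dtype,
                     device=img.device) - patch_size // 2
    g = torch.exp(-c ** 2 / (2 * sigma ** 2))
    kernel = (g[:, None] * g[None, :]).view(1, 1, patch_size, patch_size)
    kernel = kernel / kernel.sum()
    # local variance V(I, x): mean of D_P over the 4-neighbourhood
    v = sum(patch_ssd(img, o, kernel)
            for o in [(-1, 0), (1, 0), (0, -1), (0, 1)]) / 4.0
    # one feature channel per offset alpha in the non-local region R_nl
    half = nl_size // 2
    feats = [torch.exp(-patch_ssd(img, (dy, dx), kernel) / (v + eps))
             for dy in range(-half, half + 1)
             for dx in range(-half, half + 1)]
    feats = torch.cat(feats, dim=1)          # shape (B, nl_size**2, H, W)
    # normalise so that the maximal component of F_x equals 1
    return feats / feats.amax(dim=1, keepdim=True).clamp_min(eps)

def structure_loss(x, y):
    """Voxel-wise L1 distance between MIND features, cf. Eqn. (8)."""
    return (mind(x) - mind(y)).abs().mean()
\end{verbatim}
Here \texttt{img} is a single-channel image batch of shape $(B,1,H,W)$; the mean over voxels and feature channels implements the normalization by $N$ and $|R_{nl}|$.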
\subsection{Training loss}
Given the definitions of adversarial, cycle-consistency, and
structure-consistency losses above, the training loss of our proposed
method is defined as:
\begin{equation}
\begin{aligned}
\mathcal{L} (G_{\mbox{\tiny{CT}}}, G_{\mbox{\tiny{MR}}}, D_{\mbox{\tiny{CT}}}, & D_{\mbox{\tiny{MR}}}) = \mathcal{L}_{\mbox{\tiny{GAN}}} (G_{\mbox{\tiny{CT}}}, D_{\mbox{\tiny{CT}}}) + \mathcal{L}_{\mbox{\tiny{GAN}}}
(G_{\mbox{\tiny{MR}}}, D_{\mbox{\tiny{MR}}})\\
& + \lambda_1 \mathcal{L}_{\mbox{\tiny{cycle}}} (G_{\mbox{\tiny{CT}}}, G_{\mbox{\tiny{MR}}}) + \lambda_2 \mathcal{L}_{\mbox{\tiny{structure}}} (G_{\mbox{\tiny{CT}}}, G_{\mbox{\tiny{MR}}}) \ ,\\
\end{aligned}
\end{equation}
where $\lambda_1$ and $\lambda_2$ control the relative importance of
the loss terms. During training, $\lambda_1$ is set to 10
as per~\cite{wolterink2017, zhu2017} and $\lambda_2$ is set to 5.
To optimize $\mathcal{L}$, we alternately update $D_{\mbox{\tiny{MR/CT}}}$ (with $G_{\mbox{\tiny{MR/CT}}}$ fixed) and $G_{\mbox{\tiny{MR/CT}}}$ (with $D_{\mbox{\tiny{MR/CT}}}$ fixed).
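A sketch of one generator-side objective, with the discriminators held fixed, is given below; it reuses \texttt{structure\_loss} from the sketch in the previous subsection and adopts the least-squares targets of the cycleGAN implementation~\cite{zhu2017}, so the names and the exact adversarial targets are our assumptions.
\begin{verbatim}
def generator_objective(g_ct, g_mr, d_ct, d_mr, i_mr, i_ct,
                        lam1=10.0, lam2=5.0):
    """Total generator loss combining the three terms of Eqn. (9)."""
    syn_ct, syn_mr = g_ct(i_mr), g_mr(i_ct)
    # adversarial terms (least-squares targets as in cycleGAN)
    adv = ((d_ct(syn_ct) - 1) ** 2).mean() + \
          ((d_mr(syn_mr) - 1) ** 2).mean()
    # cycle-consistency terms, Eqn. (3)
    cyc = (g_ct(syn_mr) - i_ct).abs().mean() + \
          (g_mr(syn_ct) - i_mr).abs().mean()
    # structure-consistency terms, Eqn. (8)
    struct = structure_loss(syn_ct, i_mr) + structure_loss(syn_mr, i_ct)
    return adv + lam1 * cyc + lam2 * struct
\end{verbatim}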
\subsection{Network structure}
Our method is composed of four trainable neural networks, i.e.,
two generators, $G_{\mbox{\tiny{CT}}}$ and $G_{\mbox{\tiny{MR}}}$, and two discriminators, $D_{\mbox{\tiny{CT}}}$ and $D_{\mbox{\tiny{MR}}}$, and
we use the same network structures as~\cite{zhu2017,
wolterink2017} in this work. That is, two generators, $G_{\mbox{\tiny{CT}}}$ and $G_{\mbox{\tiny{MR}}}$, are
2D fully convolutional networks (FCNs) with two stride-2 convolutional
layers, nine residual blocks, and two fractionally-strided convolutional
layers with stride $\frac{1}{2}$. The two discriminators, $D_{\mbox{\tiny{CT}}}$ and $D_{\mbox{\tiny{MR}}}$, are 2D FCNs consisting of five convolutional layers to classify whether $70 \times 70$ overlapping image patches are real or synthetic. For further details, please refer to~\cite{zhu2017}.
\subsection{Position-based selection strategy}
\label{sec:input_selection_strategy}
Although our input MR and CT slices are unpaired, we know the positions of the slices within their volumes. Slices in the middle of the volume necessarily contain more brain tissue than peripheral slices. Thus, instead of feeding in slices at extremely different positions of the brain, e.g., a peripheral CT slice and a medial MR slice, we input training slices at similar positions; this is referred to as a position-based selection~(PBS) strategy.
That is, the MR and CT slices are linearly aligned considering their respective numbers of slices within the volumes, and given the $i$-th MR slice in its volume, the index $T(i)$ of corresponding CT slice selected by our method is determined by
\begin{equation}
T(i) = \left\{
\begin{array}{rcl}
\left[ i \cdot \frac{K_{\mbox{\tiny{CT}}} - 1}{K_{\mbox{\tiny{MR}}} - 1} \right] + \ m \ , & \quad & \mbox{if} \ 5 \leq \left[ i \cdot \frac{K_{\mbox{\tiny{CT}}} - 1}{K_{\mbox{\tiny{MR}}} - 1} \right] < K_{\mbox{\tiny{CT}}} - 5 , \\
\left[ i \cdot \frac{K_{\mbox{\tiny{CT}}} - 1}{K_{\mbox{\tiny{MR}}} - 1} \right] \ , & \quad & \mbox{otherwise} ,
\end{array} \right .
\end{equation}
where $K_{\mbox{\tiny{MR}}}$ and $K_{\mbox{\tiny{CT}}}$ respectively denote the number of slices in unpaired MR and CT volumes. $[ \cdot ]$ denotes the rounding function, and $m$ is a random integer within the range of $[-5,5]$.
This strategy forces the discriminators to be stronger at distinguishing synthetic images from real ones, thus avoiding mode collapse. This in turn forces our generators to be better in order to \textit{trick} our discriminators.
We evaluate this position-based selection strategy in Sec.~\ref{sec:experiment}.
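A minimal Python sketch of this selection rule follows; the function name and the use of Python's \texttt{random} module are illustrative.
\begin{verbatim}
import random

def select_ct_slice(i, k_mr, k_ct):
    """Index T(i) of the CT slice paired with the i-th MR slice,
    following Eqn. (10)."""
    j = round(i * (k_ct - 1) / (k_mr - 1))
    if 5 <= j < k_ct - 5:
        j += random.randint(-5, 5)  # random integer m in [-5, 5]
    return j
\end{verbatim}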
\section{Experiments}
\label{sec:experiment}
\subsection{Data set}
The MR and CT volumes are respectively obtained using a Siemens
Magnetom Espree 1.5T scanner~(Siemens Medical Solutions, Malvern, PA)
and a Philips Brilliance Big Bore scanner~(Philips Medical Systems,
Netherlands) under a routine clinical protocol for brain cancer patients. Geometric distortions in MR volumes are corrected using a 3D correction algorithm in the Siemens Syngo console workstation. All MR volumes are N4 corrected and normalized by aligning the white matter peak identified by fuzzy C-means.
The data set contains the brain MR and CT volumes of 45 patients, which were divided into a training set containing MR and CT volumes of 27 patients, a validation set of 3 patients for model and epoch selection, and a test set of 15 patients for performance evaluation.
As in~\cite{wolterink2017}, the experiments were performed on 2D sagittal image slices.
Each MR or CT volume contains about 270 sagittal images, which are resized and padded to $384 \times 256$ while maintaining the aspect ratio, and the intensity ranges are respectively $[-1000,3500]$ HU for CT and $[0,3500]$ for MR.
To augment the training set, each image is padded to $400 \times 284$ and then randomly cropped to $384 \times 256$ as training samples.
\begin{figure*}[!tp]
\setlength{\abovecaptionskip}{2mm}
\setlength{\belowcaptionskip}{-4mm}
\centering
\subfloat[MAE]{
\label{fig:box_a}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.8cm]{Fig-boxplot-a.pdf}
\end{minipage}
}
\subfloat[PSNR]{
\label{fig:box_b}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.8cm]{Fig-boxplot-b.pdf}
\end{minipage}
}
\subfloat[SSIM]{
\label{fig:box_c}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.8cm]{Fig-boxplot-c.pdf}
\end{minipage}
}
\subfloat[SSIM(HG)]{
\label{fig:box_d}
\begin{minipage}[t]{0.23\textwidth}
\centering
\includegraphics[width=2.8cm]{Fig-boxplot-d.pdf}
\end{minipage}
}
\caption{\small Comparison of different methods on synthesizing CT images as boxplots, where the diamond and the number in blue denote the mean and $\ast$ denotes $p<0.001$ compared to the conventional cycleGAN using a paired-sample t-test.}
\label{fig:boxplot}
\end{figure*}
\subsection{Experimental results}
We compare the proposed method to the conventional
cycleGAN~\cite{zhu2017,wolterink2017} (denoted as ``cycleGAN'') and a
cycleGAN trained with paired data (denoted as ``cycleGAN
(paired)''), which represents the best that a cycleGAN can achieve.
To evaluate the position-based selection strategy
in Sec.~\ref{sec:input_selection_strategy}, a cycleGAN using this
strategy during training, denoted as ``cycleGAN (PBS)'', is also included in comparison.
As in \cite{zhu2017, wolterink2017}, the learning rate is set to 0.0002 for all compared methods.
To quantitatively compare these methods, we use mean absolute
error~(MAE), peak signal-to-noise ratio~(PSNR), and structural
similarity~(SSIM) between the ground-truth CT volume and the synthetic
one, which are computed within the head region mask and averaged over
15 test subjects. Furthermore, SSIM over regions with high gradient magnitudes (denoted as ``SSIM(HG)'') is also computed to measure the quality of bone regions in synthetic images.
The maximum value in PSNR and the dynamic range in SSIM are set to 4500, as the range of our CT data is $[-1000,3500]$ HU.
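For reference, these metrics are straightforward to reproduce; a minimal sketch (NumPy assumed, names illustrative) of the masked MAE and PSNR with a peak value of 4500 HU is:
\begin{verbatim}
import numpy as np

def mae_psnr(ct_gt, ct_syn, mask, peak=4500.0):
    """MAE and PSNR computed inside the head-region mask."""
    diff = (ct_gt - ct_syn)[mask]
    mae = np.abs(diff).mean()
    psnr = 10.0 * np.log10(peak ** 2 / np.mean(diff ** 2))
    return mae, psnr
\end{verbatim}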
As shown in Fig.~\ref{fig:boxplot}, our proposed method achieves significantly better performance than the conventional cycleGAN in all metrics~($p<0.001$) and produces results similar to those of the cycleGAN trained with paired data.
Compared to randomly selecting training slices at any position, our proposed position-based selection strategy produces a significantly higher SSIM(HG) score~($p<0.001$) with marginal improvements in the other three metrics. Figure~\ref{fig:visual} shows visual examples of synthetic CT images produced by the different methods for a test subject.
\begin{figure*}[!tp]
\setlength{\abovecaptionskip}{2mm}
\setlength{\belowcaptionskip}{-2mm}
\subfloat[]{
\label{fig:visual_a}
\begin{minipage}[t]{0.178\textwidth}
\centering
\includegraphics[width=2.35cm]{Fig-visual-a.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:visual_b}
\begin{minipage}[t]{0.178\textwidth}
\centering
\includegraphics[width=2.35cm]{Fig-visual-b.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:visual_c}
\begin{minipage}[t]{0.178\textwidth}
\centering
\includegraphics[width=2.35cm]{Fig-visual-c.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:visual_d}
\begin{minipage}[t]{0.178\textwidth}
\centering
\includegraphics[width=2.35cm]{Fig-visual-d.pdf}
\end{minipage}
}
\subfloat[]{
\label{fig:visual_e}
\begin{minipage}[t]{0.178\textwidth}
\centering
\includegraphics[width=2.60cm]{Fig-visual-e.pdf}
\end{minipage}
}
\caption{\small Visual comparison of synthetic CT images using different
methods. For one test subject, we show (a)~the ground-truth CT
image and input MR image; the synthetic CT image and its
difference image (compared to ground-truth CT image) generated
by (b)~cycleGAN, (c)~cycleGAN (PBS), (d)~cycleGAN (paired), and
(e)~proposed method. The small text in each sub-image is the corresponding accuracy on this test subject.}
\label{fig:visual}
\end{figure*}
\section{Conclusion}
We propose a structure-constrained cycleGAN for brain MR-to-CT synthesis using unpaired data. Compared to the conventional cycleGAN~\cite{zhu2017,wolterink2017}, we define an extra structure-consistency loss based on the modality independent neighborhood descriptor to constrain structural consistency and also introduce a position-based selection strategy for selecting training images.
The experiments show that our method generates better synthetic CT
images than the conventional cycleGAN and produces results similar to a cycleGAN trained with paired data.
\subsubsection*{Acknowledgments.}
This work is supported by the NSFC (11622106, 11690011, 61721002) and the China Scholarship Council.
\bibliographystyle{splncs04}
|
2,869,038,154,371 | arxiv | \section{Introduction}
Let $G$ be a Lie group and $(\pi,E)$ be a Banach representation of $G$, that is,
a morphism of groups $\pi: G \to \GL(E)$ such that the orbit maps
$$ \gamma_v: G \to E, \ \ g\mapsto \pi(g)v ,$$
are continuous for all $v\in E$.
\par We say that a vector $v$ is $k$-times differentiable if $\gamma_v \in C^k(G, E)$
and write $E^k\subset E$ for the corresponding subspace. The smooth vectors are then defined by $E^\infty=\bigcap_{k=0}^\infty E^k$.
\par The space $E^k$ carries a natural Banach structure: if $p$ is a defining norm
for the Banach structure on $E$, then a $k$-th Sobolev norm of $p$ on $E^k$
is defined as follows:
\begin{equation} \label{def StSob} p_k (v):=\Big(\sum_{m_1 +\ldots +m_n\leq k} p(d\pi(X_1^{m_1}\cdot \ldots \cdot X_n^{m_n})v)^2\Big)^{1\over 2}\quad (v \in E^k)\, .\end{equation}
Here $X_1, \ldots, X_n$ is a fixed basis for the Lie algebra $\gf$ of $G$, and
$d\pi: \U(\gf)\to \operatorname{End}(E^\infty)$ is, as usual, the derived representation for the
universal enveloping algebra $\U(\gf)$ of $\gf$. Then $E^k$, endowed with the norm $p_k$, is
a Banach space and defines a Banach representation of $G$. Furthermore, $E^\infty$ carries a natural Fr\'echet structure, induced by the Sobolev norms $(p_k)_{k \in \N_0}$.
The corresponding $G$-action on $E^\infty$ is smooth and of moderate growth, i.e.~an
$SF$-representation in the terminology of \cite{bk}.
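\par For orientation we mention an elementary example (not needed in the sequel): for $G=\R$ acting on $E=C_0(\R)$ by translations, $(\pi(t)f)(x)=f(x+t)$, one has $d\pi(X)=\frac{d}{dx}$ for the basis $X=1$ of $\gf=\R$, and the first Sobolev norm of the sup-norm $p$ reads
$$ p_1(f) = \big( \|f\|_\infty^2 + \|f'\|_\infty^2\big)^{1\over 2} \qquad (f \in E^1)\, ,$$
with $E^1$ consisting of those $f\in C^1(\R)$ for which both $f$ and $f'$ vanish at infinity.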
\par In case $(\pi, \Hc)$ is a unitary representation on a Hilbert space $\Hc$, there is
an efficient way to define the Fr\'echet structure on $\Hc^\infty$
via a Laplace element
\begin{equation}\label{def D1} \Delta = -\sum_{j=1}^n X_j^2\end{equation}
in $\U(\gf)$. More specifically, one defines the $2k$-th Laplace Sobolev norm in this case
by
\begin{equation}\label{Lap-Sob} {}^\Delta p_{2k} (v) := p (d\pi(({\bf 1}+ \Delta )^k) v) \qquad (v\in E^{2k})\, .\end{equation}
The unitarity of the action then implies that the standard Sobolev norm
$p_{2k}$ is equivalent to ${}^\Delta p_{2k}$.
\par For a general Banach representation $(\pi, E)$ we still have
$E^\infty=\bigcap_{k=0}^\infty \operatorname{dom}(d\pi(\Delta^k))$, but it is no longer true that
${}^\Delta p_{2k}$, as defined in \eqref{Lap-Sob}, is equivalent to $p_{2k}$:
it typically fails that $p_{2k}$ is dominated by $^{\Delta}p_{2k}$, for example if
$-1\in \operatorname{spec} (d\pi(\Delta))$ or if elliptic regularity fails as in Remark \ref{regremark} below.
\par In the following we use $\Delta$ for the expression \eqref{def Delta}, a first-order
modification of $\Delta$ as defined in \eqref{def D1}, in order to make $\Delta$ selfadjoint
on $L^2(G)$. In case $G$ is unimodular, we remark that the two notions \eqref{def Delta} and
\eqref{def D1} coincide.
One of the main results of this note is that every Banach representation
$(\pi, E)$ admits a constant $R=R(E)>0$ such that the operator $d\pi(R^2+ \Delta) : E^\infty \to E^\infty$ is invertible, see Corollary \ref{cor spectral gap}. The constant $R$ is closely
related to the growth rate of the representation, i.e.~the growth of the weight
$w_\pi(g)=\|\pi(g)\|$.
More precisely, for the Laplace Sobolev norms defined as
\begin{equation}\label{Rost1} {}^\Delta p_{2k} (v) := p (d\pi((R^2 + \Delta)^k) v) \qquad (v\in E^{2k})\, ,\end{equation}
we show that the families $(p_{2k})_k $ and $\left({}^\Delta p_{2k}\right)_k$ are equivalent in the
following sense: Let $m$ be the smallest even integer greater than or equal to $1+\dim G$. Then there
exist constants $c_k, C_k>0$ such that
$$ c_k \cdot {}^\Delta p_{2k}(v)\leq p_{2k}(v) \leq C_k \cdot {}^\Delta p_{2k+m}(v)\qquad (v\in E^\infty)\, .$$
\par The above-mentioned results follow from a novel factorization
of the delta dis\-tri\-bution $\delta_{\bf 1}$ on $G$, see Proposition \ref{factorize} in the main text
for the more technical statement.
This in turn is a consequence of the functional calculus for $\sqrt{\Delta}$, developed
in \cite{cgt}, and previously applied to representation theory in \cite{gkl} to derive factorization results for
analytic vectors.
The functional calculus allows us to define Laplace Sobolev norms for any order $s\in \R$ by
$${}^\Delta p_{s} (v) := p (d\pi((R^2+\Delta)^{s\over 2}) v) \qquad (v \in E^\infty)\,.$$
On the other hand \cite{bk} provided another definition of Sobolev norms
for any order $s\in \R$; they were denoted $Sp_s$ and termed induced Sobolev norms
there. The norms $Sp_s$ were based on a noncanonical localization to a neighborhood of $\e \in G$, identified with the unit ball in $\R^n$, and used the $s$-Sobolev norm on $\R^n$. We show that the two notions ${}^\Delta p_{s}$ and $Sp_s$ are equivalent up to
constant shift in the parameter $s$, see Proposition \ref{prop compare}. The more geometrically defined norms ${}^\Delta p_{s}$ may therefore replace the norms $Sp_s$ in \cite{bk}.
\par Our motivation for this note stems from harmonic analysis on homogeneous spaces, see
for example \cite{b} and \cite{dkks}.
Here one naturally encounters the dual representation of some $E^k$, and in this context it is often quite cumbersome to estimate the dual norm of
$p_k$ because of the many terms in the definition \eqref{def StSob}. On the other hand,
the dual norm of ${}^\Delta p_s$, as defined by one operator $d\pi((R^2+\Delta)^{s\over 2})$, is easy to control and simplifies a variety of technical issues.
\section{Some geometric analysis on Lie groups}\label{SectionGeometricAnalysis}
Let $G$ be a Lie group of dimension $n$ and $\metric$ a left invariant Riemannian metric on $G$.
The Riemannian measure $dg$ is a left
invariant Haar measure on $G$.
We denote by $d(g,h)$ the distance function associated to $\metric$
(i.e.~the infimum of the lengths of all paths connecting group elements $g$ and $h$), by $B_r(g) = \{x \in G\ |\ d(x,g)<r\}$ the ball of radius $r$ centred at $g$,
and we set
$$ d(g) :=d(g,\e) \qquad (g\in G) \, .$$
Here are two key properties of $d(g)$, which will be relevant later, see \cite{garding}:
\begin{lem}\label{lem=sm} If
$w : G \to \R_+$ is locally bounded and submultiplicative (i.e.~$w(gh) \leq w(g)w(h)$), then there exist
$c_1,C_1>0$ such that
$$w(g) \leq C_1 e^{c_1 d(g)} \qquad (g\in G)\ .$$
\end{lem}
\begin{lem} There exists $c_G>0$ such that for all $C>c_G$, $\int_G e^{-C d(g)} \ dg < \infty$.
\end{lem}
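\par For example, for $G=(\R^n,+)$ with the Euclidean metric one has $d(g)=|g|$ and $\int_{\R^n} e^{-C|g|}\ dg<\infty$ for every $C>0$, so that $c_G$ can be chosen arbitrarily small; for a group of exponential volume growth, on the other hand, $c_G$ is necessarily positive.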
Convolution in this article is always left convolution, i.e.~for integrable functions
$\varphi, \psi\in L^1(G)$ we define $\varphi\ast\psi\in L^1(G)$ by
$$ \varphi\ast \psi(g)= \int_G \varphi(x)\psi(x^{-1} g) \ dx\qquad (g\in G)\, .$$
If we denote by $\mathcal{D}'(G)$ the space of distributions, resp.~by $\mathcal{E}'(G)$ the subspace of
compactly supported distributions, then the convolution above naturally extends to distributions provided
one of them is compactly supported, i.e.~lies in $\mathcal{E}'(G)$.
Denote by $\VG$ the space of left-invariant vector fields on $G$.
It is common to identify the Lie algebra $\g$ with $\VG$ where $X\in \g$ corresponds to the
vector field $\widetilde X$ given by
$$(\widetilde{X} f) (g) = \frac{d}{dt}\Big|_{t=0} f(g \exp(t X))\ \qquad (g\in G, f\in C^\infty(G))\, .$$
We note that the adjoint of $\widetilde X$ on the Hilbert space
$L^2(G)$ is given by
$$\widetilde{X}^*= -\widetilde{X} - \tr (\ad X)\, ,$$
and $\widetilde X^*=-\widetilde X$ in case $\g$ is unimodular. Let us
fix an orthonormal basis ${\mathcal B}=\{ X_1, \dots, X_n\} $ of $\g$ with respect
to $\metric$. Then the Laplace--Beltrami operator $\Delta=d^*d$
associated to $\metric$ is given explicitly by
\begin{equation} \label{def Delta}\D = \sum_{j=1}^n (-\widetilde{X_{j}} - \tr (\ad X_j))\ \widetilde{X_{j}}\, .\end{equation}
As $(G, \metric)$ is complete, $\D$ is essentially selfadjoint with spectrum contained in $[0,\infty)$. We
denote by
$$\SD = \int \lambda \ dP(\lambda)$$
the corresponding spectral resolution. It provides us with a
measurable functional calculus, which allows us to define
$$f(\SD)= \int f(\lambda)\ dP(\lambda)$$
as an unbounded operator $f(\SD)$ on $L^2(G)$ with domain
$$D(f(\SD))=\left\{\func \in L^2(G) \mid \int |f(\lambda)|^2\ d\langle P(\lambda)\func, \func\rangle < \infty \right\}.$$
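\par For example, the bounded even function $f(\lambda)=e^{-t\lambda^2}$, $t>0$, produces the heat semigroup $f(\SD)=e^{-t\Delta}$, defined on all of $L^2(G)$; for unbounded $f$ one obtains genuinely unbounded operators with the domain described above.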
We are going to apply the above calculus to
a certain class of functions. To do so, for $R',\vartheta>0$ we define the region
\begin{eqnarray*}
\WRpT = \left\{z \in \C \mid |\operatorname{Im} z| < R' \right\} \cup \left\{z \in
\C \mid |\operatorname{Im} z| < \vartheta |\operatorname{Re} z| \right\}.
\end{eqnarray*}
For $R>0$, $s \in \R$, the relevant function space is then defined as
\begin{eqnarray*}\mathcal{F}_{R,s} &= \{f \in C^\infty(\mathbb{R}, \mathbb{C}) \mid f \text{ even, } \exists \vartheta>0\ \exists R'>R: f \in \mathcal{O}(\WRpT) ,\\ & \qquad \forall k\in \N: \sup_{z \in \WRpT} |\partial_z^k f(z)|(1+|z|)^{k-s}<\infty\}\ .\end{eqnarray*}
See the Appendix to \S2 in \cite{cgt} for a related space of functions.\\
The resulting operators $f(\SD)$ are given by a distributional kernel $K_f \in \mathcal{D}'(G\times G)$, $\langle f(\SD)\ \func, \psi\rangle= \langle K_f, \func \otimes \psi\rangle$ for all $\func,\psi \in C^\infty_c(G)$. $K_f$ has the following properties:
\begin{itemize}
\item smooth outside the diagonal: For $\Delta(G)=\{(g,g) | g \in G\}$, $K_f \in C^\infty(G
\times G \setminus \Delta(G))$,
\item left invariant: $K_f(gx,gy) = K_f(x,y)$,
\item hermitian: $K_f(x,y) = \overline{K_f(y,x)}$\ .
\end{itemize}
By left invariance $f(\SD)$ is a convolution operator with kernel $\kappa_f(x^{-1}y) := K_f(\e,x^{-1}y) = K_f(x,y)$:
\begin{equation} \label{conv0}\langle f(\SD)\ \func, \psi\rangle= \langle K_f, \func \otimes \psi\rangle = \langle \kappa_f(x^{-1}y) , (\func \otimes \psi)(x,y)\rangle =
\langle \func \ast \kappa_f, \psi \rangle\end{equation}
for all $\func,\psi \in C^\infty_c(G)$.
The distribution $\kappa_f\in \mathcal{D}'(G)$ is a smooth function on $G\setminus \{\e\}$. Because $K_f$ is hermitian, the
kernel $\kappa_f$ is involutive in the sense that
\begin{equation} \label{kappa symmetric} \kappa_f(x) = \overline{\kappa_f(x^{-1})} \qquad (x \in G). \end{equation}
In particular, $\kappa_f$ is left differentiable at $x \in G \setminus\{\e\}$ if and only if it is right differentiable at $x$.
\par We define the weighted $L^1$-Schwartz space on $G$ by
$${\mathcal S}_R(G):=\{ f \in C^\infty(G)\mid \forall u, v \in {\mathcal U} (\g): \ (\tilde u_l \otimes \tilde v_r) f \in L^1(G, e^{Rd(g)} dg) \}\ ,$$
where $\tilde u_l$, resp.~$\tilde v_r$, is the left, resp.~right, invariant differential operator
on $G$ associated with $u, v\in {\mathcal U}(\g)$.
A theorem by Cheeger, Gromov and Taylor
\cite{cgt} allows us to describe the global behavior of $\kappa_f$:
\begin{thm}\label{Kernel}
Let $R,\varepsilon>0$, $s \in \mathbb{R}$ and $f \in \mathcal{F}_{R,s}$. Then $\kappa_f = \kappa_1 + \kappa_2$, where
\begin{enumerate}
\item $\kappa_1 \in \mathcal{E}'(G)$ is supported in $B_\varepsilon(\e)$, and $K_1(x,y) = \kappa_1(x^{-1}y)$ is the kernel of a pseudodifferential operator on $G$ of order $s$,
\item $\kappa_2 \in \mathcal{S}_R(G)$.
\end{enumerate}
\end{thm}
Part (1) is the content of Theorem 3.3 in \cite{cgt}. For (2), the pointwise decay of $\kappa_2$ is stated in (3.45) there, while the Schwartz estimates are obtained as in their Appendix to \S2.
From part (1) and the kernel estimates for pseudodifferential operators, we obtain $\kappa_1 \in C^{-s-n-\varepsilon}_c(G)$ for $\varepsilon>0$ small enough, provided $-s-n-\varepsilon>0$. Here $C^{\alpha}_c(G)$ denotes the space of H\"{o}lder continuous functions of order $\alpha>0$, with compact support.
Applying the theorem to the function $f(z) = (R'^2+z^2)^{-m}$ for $m\in \N$, which lies
in ${\mathcal F}_{R,-2m}$ for any $R<R'$, we conclude the following factorization of the Dirac distribution $\delta_\e$:
\begin{pro}\label{factorize}
Let $R'>R>0$, $m \in \mathbb{N}$. Then
\begin{equation}\label{factorization}\delta_\e = [(R'^2 + \Delta)^m \delta_\e] \ast \kappa
,\end{equation}
where $\kappa =\kappa_1 + \kappa_2$ has the properties from Theorem \ref{Kernel} with $s=-2m$.
\end{pro}
\begin{proof} Set $T:=f(\sqrt{\Delta})$ and $S:=\frac{1}{f}(\sqrt{\Delta})$. Notice that $S(\varphi) = (R'^2 + \Delta)^m\varphi\in C_c^\infty(G)$ and thus
$TS(\varphi)=\varphi$ for all $\varphi \in C_c^\infty(G)$ by the functional calculus. In particular,
\begin{equation} \label{conv1} \varphi= [(R'^2+\Delta)^m\varphi] \ast \kappa\end{equation}
since $T$ is given by right convolution with $\kappa=\kappa_f$, see \eqref{conv0}.
Choose a Dirac sequence $\varphi_n \to\delta_\e$. Passing to the limit in \eqref{conv1} yields
\begin{equation} \label{conv2} \delta_\e = [(R'^2 + \Delta)^m \delta_\e] \ast \kappa, \end{equation}
as asserted. \end{proof}
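\par As a consistency check in the simplest case, let $G=(\R,+)$ and $m=1$. Then \eqref{factorization} reduces to the classical Green's function identity: for $\kappa(x)={1\over 2R'}\,e^{-R'|x|}$ one has $\big(R'^2-{d^2\over dx^2}\big)\kappa=\delta_0$ in the sense of distributions, that is, $\delta_0=[(R'^2+\Delta)\delta_0]\ast\kappa$.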
\section{Banach representations of Lie groups}
In this section we briefly recall some basics on Banach representations of Lie groups
and apply Proposition \ref{factorize} to the factorization of vectors in $E^k$.
\par For a Banach space $E$ we denote by $GL(E)$ the associated
group of topological linear isomorphisms. By a {\it
Banach representation} $(\pi, E)$ of a Lie group $G$ we understand a group homomorphism
$\pi: G \to GL(E)$ such that the action
$$ G\times E \to E, \ \ (g, v)\mapsto \pi(g)v,$$
is continuous. For a vector $v\in E$ we denote by
$$\gamma_v: G\to E, \ \ g\mapsto \pi(g)v,$$
the corresponding continuous orbit map. Given $k \in \mathbb{N}_0$, the subspace $E^{k} \subset E$ consists of all $v\in E$ for which $\gamma_v \in C^k$. We write $E^\infty = \bigcap_k E^{k}$ and refer to $E^\infty$ as the space of smooth vectors. Note that all
$E^k$ for $k\in \N_0\cup\{\infty\}$ are $G$-stable.
\begin{rem} \label{B-rep} Let $(\pi, E)$ be a Banach representation.
The uniform boundedness principle implies that
the function
$$w_\pi: G \to \R_+, \ \ g\mapsto \|\pi(g)\|,$$
satisfies the assumptions of Lemma \ref{lem=sm}. \end{rem}
Let
$$c_\pi:=\inf\{c >0\mid \exists C>0: \ w_\pi(g)\leq C e^{c d(g)}\}\, .$$
For $R>0$ we introduce the exponentially weighted spaces
$$\RnG := L^1(G, w_R dg), \ \ w_R(g) = e^{R d(g)}\ .$$
Notice that $\RnG \subset \mathcal{R}_{R'}(G)$ for $R>R'$ and that the corresponding Fr\'echet algebra
$\mathcal{R}(G):=\bigcap_{R>0} \RnG$ is independent of the particular choice of the metric
$\metric$.
Denote by $\pi_l$ the left regular representation of $G$ on $\RnG$, and by $\pi_r$ the right regular representation.
A simple computation shows that $\RnG$ becomes a Banach algebra
under left convolution
$$\func*\psi(g)=\int_G \func(x)\ [\pi_l(x)\psi](g) \ dx \qquad (\func, \psi \in \RnG, g\in G)\, $$
for $R>c_G$.
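\par For the sake of completeness we record the computation: since $\metric$ is left invariant, $d(gh)\leq d(g)+d(h)$ and hence $w_R(gh)\leq w_R(g)w_R(h)$. Therefore
$$\int_G |\func\ast\psi(g)|\, w_R(g)\ dg \leq \int_G\int_G |\func(x)|\, w_R(x)\ |\psi(x^{-1}g)|\, w_R(x^{-1}g)\ dx\ dg\, ,$$
which equals the product of the weighted $L^1$-norms of $\func$ and $\psi$ by Fubini and the left invariance of $dg$.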
More generally, whenever $(\pi, E)$ is a Banach representation, Lemma \ref{lem=sm} and Remark \ref{B-rep} imply
that
$$\Pi(\func)v:= \int_G \func(g)\ \pi(g)v\ dg \qquad (\func\in \RnG, v\in E)$$
defines an absolutely convergent Banach space valued integral for $R>R_E:=c_\pi+c_G$. Hence the prescription
$$\RnG\times E\to E, \ \ (\func, v)\mapsto \Pi(\func)v,$$
defines a continuous algebra action of $\RnG$ (here continuous refers to the continuity of the
bilinear map $\RnG\times E\to E$).
As an example, the left-right representation $\pi_l\otimes \pi_r$ of $G\times G$ also induces a Banach representation on $\RnG$.
\par Our concern is now with the space of $k$-times differentiable vectors $\RnG^{k}$
of $(\pi_l\otimes \pi_r, \RnG)$.
It is clear that $\RnG^{k}$
is a subalgebra of $\RnG$ and that
$$\Pi(\RnG^{k})\ E \subset E^{k}\ ,$$
whenever $(\pi, E)$ is a Banach representation and $R>R_E$.
\begin{thm} Let $R>0$ and $k=2m$ for $m\in \N$ be such that $k':=k - \dim G - 1 \geq 0$. Then there exists a
$\kappa\in \RnG^{k'}$ such that: For all Banach representations $(\pi, E)$ with
$R>R_E$ one has the following factorization of $k$-times differentiable vectors
\begin{equation}\label{vfactorization} v = \Pi (\kappa) d\pi ((R^2 + \Delta)^m)v \qquad (v\in E^k)\, .\end{equation}
\end{thm}
\begin{proof}
Recall the factorization \eqref{factorization} of $\delta_\e$,
$$\delta_\e = [(R^2 + \Delta)^m \delta_\e]\ast \kappa\ . $$
We claim $\kappa \in \RnG^{k'}$. Indeed, for $s=-2m$, $n= \dim G$ and $\varepsilon \in (0,1)$, Theorem \ref{Kernel} shows that $\kappa_1 \in C^{2m-\dim G - \varepsilon}_c(G) \subset \RnG^{k'}$ and $\kappa_2 \in \mathcal{S}_R(G) \subset \RnG^{k'}$.
We then obtain that
$$ \gamma_v = [(R^2 + \Delta)^m \gamma_v]\ast \kappa\ ,$$
see also \eqref{conv1}, and evaluation at $g=\e$ gives
$$ v = \gamma_v(\e) = \int_G \kappa(g^{-1}) \pi (g) d\pi ((R^2 + \Delta)^m)v \ dg\ . $$
Now recall from \eqref{kappa symmetric} that $\kappa(g)= \overline{\kappa(g^{-1})}$ and that with our choice of $f(z) = (R^2+z^2)^{-m}$ from
before the kernel $\kappa$ is in fact real-valued. Hence
$$v = \Pi(\kappa) d\pi((R^2 + \Delta)^m) v\ , $$
as asserted.
\end{proof}
\begin{cor} \label{cor spectral gap} Let $R>R_E$. Then
$$d\pi\left(R^2+\Delta\right): E^\infty\to E^\infty$$
is invertible.
\end{cor}
\begin{rem} (Spectral gap for Banach representations) We can interpret
Corollary \ref{cor spectral gap} as a spectral gap theorem for Banach representations in terms
of $R_E= c_G + c_\pi$. However, we note that the bound $R>R_E$ can be improved for special classes
of representations. For example,
if $(\pi, E)$ is a unitary representation, then
$$\operatorname{Re} \langle d\pi(\Delta) v, v\rangle \geq 0$$
for all $v \in E^\infty$, and hence
$d\pi(\Delta) + R^2$ is injective for all $R>0$. Moreover, the Lax-Milgram theorem implies
that $d\pi(\Delta)+R^2$ is in fact invertible. On the other hand, our bound in Corollary
\ref{cor spectral gap} gives information about the convolution kernel of the inverse of $d\pi(\Delta)+R^2$ for $R>c_G$.
\end{rem}
\section{Sobolev norms for Banach representations}
\subsection{Standard and Laplace Sobolev norms}
As before, we let $(\pi, E)$ be a Banach representation. On $E^\infty$, the space
of smooth vectors, one usually defines Sobolev norms as follows.
Let $p$ be the norm underlying $E$. We fix a basis ${\mathcal B}=\{ X_1, \ldots, X_n\}$ of
$\g$ and set
\begin{align*}p_k(v)&:=\Big[\sum_{m_1+\ldots+m_n\leq k} p(d\pi (X_1^{m_1}\cdot \ldots \cdot X_n^{m_n})v)^2 \Big]^{1\over 2}\qquad (v \in E^\infty)\, . \end{align*}
Strictly speaking this notion depends on the choice of the basis ${\mathcal B}$ and
$p_{k, {\mathcal B}}$ would be the more accurate notation. However, a different
choice of basis, say ${\mathcal C}=\{ Y_1, \ldots, Y_n\}$ leads to
an equivalent family of norms $p_{k, {\mathcal C}}$, i.e.~for all $k$ there exist constants
$c_k , C_k>0$ such that
\begin{equation} \label{sandwich} c_k \cdot p_{k, {\mathcal C}}(v) \leq p_{k,{\mathcal B}}(v)\leq C_k \cdot p_{k, {\mathcal C}}(v)\qquad (v\in E^\infty)\, .\end{equation}
Having said this, we now drop the subscript ${\mathcal B}$ in the definition of $p_k$ and
simply refer to $p_k$ as the {\it standard $k$-th Sobolev norm} of $(\pi, E)$.
Note that $p_k$ is Hermitian (i.e.~obtained from a Hermitian inner product) if $p$ is Hermitian.
\par The completion of $(E^\infty, p_k)$ yields $E^k$. In particular, $(E^k, p_k)$ is a
Banach space for which the natural action $G\times E^k \to E^k$ is continuous, i.e.~defines a Banach representation.
\par The family $(p_k)_{k\in \N}$ induces a Fr\'echet structure on $E^\infty$ (in view of \eqref{sandwich} of course independent of the choice of ${\mathcal B}$) such that the natural
action $G\times E^\infty\to E^\infty$ becomes continuous.
\par Now we introduce a family of {\it Laplace Sobolev norms}, first of even order
$k \in 2 \N_0$, as follows. Let $R>R_E$ and set
\begin{align*}
{}^{\Delta} p_{k}(v)&:= p\left( d\pi\left((R^2+\Delta)^{k/2}\right) v\right) \qquad (v\in E^\infty) \ .
\end{align*}
Of course, a more accurate notation would include $R>0$, i.e.~write ${}^{\Delta,R} p_{k}$ instead of ${}^{\Delta} p_{k}$. In addition, $\Delta$ also depends on the basis ${\mathcal B}$. For purposes of readability we decided to suppress this data in the notation.
\begin{pro} {\rm (Comparison of the families $(p_{2k})_{k\in\N_0}$ and
$({}^{\Delta} p_{2k})_{k \in \N_0}$)} \label{prop Pb}\\
For all $k\in \N_0$ there exist $c_k, C_k>0$ such that for all $v \in E^{\infty}$
$$ c_k \cdot {}^\Delta p_{2k}(v)\leq p_{2k}(v) \leq C_k \cdot {}^\Delta p_{2k+m}(v)\, ,$$
where $m$ is the smallest even integer greater than or equal to $1+\dim G$.
\end{pro}
\begin{proof}
The first inequality follows directly from the definitions of $p_{2k}$, ${}^{\Delta} p_{2k}$. The second is a consequence of the factorization \eqref{vfactorization}.
\end{proof}
\begin{rem}\label{regremark} In general it is not true that $p_{2k}$ is smaller than a multiple of
${}^\Delta p_{2k}$. In other words, an index shift as in Proposition \ref{prop Pb},
is actually needed. As an example we consider $E=C_0(\R^2)$ of continuous functions on $\R^2$
which vanish at infinity, endowed with the sup-norm $p(f)=\sup_{x\in \R^2} |f(x)|$.
Then $E$ becomes a Banach representation for the regular action
of $G=(\R^2, +)$ by translation in the arguments. In this situation there exists a function $u\in
E$ such that $\Delta u \in E$ but $\partial_y^2 u\not\in E$, see \cite[Problem 4.9]{gt}. Hence $p_2(u)=\infty$, while
${}^\Delta p_2(u)<\infty$.
\end{rem}
\subsection{Sobolev norms of continuous order $s\in\R$}
\subsubsection{Induced Sobolev norms}
In \cite{bk} Sobolev norms for a Banach representation $(\pi, E)$ were defined for all
parameters $s \in \R$. We briefly recall their construction.
We endow the continuous dual $E'$ of $E$ with the dual norm
$$p'(\lambda):=\sup_{p(v)\leq 1} |\lambda(v)| \qquad (\lambda\in E')\, .$$
For $\lambda \in E'$ and $v \in E^\infty$ we define the matrix coefficient
$$m_{\lambda, v}(g) =\lambda(\pi(g) v) \qquad (g\in G)\ ,$$
which is a smooth function on $G$. Given an open relatively compact neighborhood $B\subset G$ of $\e$, diffeomorphic to the
open unit ball in $\R^n$, we fix $\phi \in C_c^\infty(G)$ such that $\operatorname{supp} (\phi)\subset B$
and $\phi(\e)=1$. The function $\phi \cdot m_{\lambda, v}$ is
then supported in $B$ and upon identifying $B$ with the open unit ball in $\R^n$, say
$B_{\R^n}$, we denote by $\|\phi \cdot m_{\lambda, v}\|_{H^s(\R^n)}$ the corresponding Sobolev
norm.
We then set
$$Sp_s(v):=\sup_{\lambda \in E'\atop p'(\lambda)\leq 1} \| \phi \cdot m_{\lambda, v}\|_{H^s(\R^n)}
\qquad (v\in E^\infty)\, .$$
In the terminology of \cite{bk} this defines a $G$-continuous norm on $E^\infty$.
\subsubsection{Laplace Sobolev norms}
For $R>R_E$ and $s\in \R$, on the other hand the functional calculus for $\SD$ also gives rise to a $G$-continuous norm on $E^\infty$: We define
\begin{equation} \label{Rost2}{}^{\Delta} p_{s}(v) := p((R^2+\Delta)^{s/2} \gamma_v(g) |_{g=\e})\qquad (v\in E^\infty) \, .\end{equation}
\subsubsection{Comparison results}
\begin{pro} {\rm (Comparison of the families $(Sp_{s})_{s \geq 0}$ and
$({}^{\Delta} p_{s})_{s \geq 0}$)} \label{prop compare}
Let $R>R_E$. Then for all $s \geq 0$ and $\varepsilon>0$ there exist $c_s, C_s>0$ such that for all $v \in E^{\infty}$
$$c_s\cdot Sp_{s}(v) \leq {}^{\Delta} p_{s}(v) \leq C_s\cdot Sp_{s+\frac{n}{2}+\varepsilon}(v) \ .$$
\end{pro}
\begin{proof}
The first inequality was shown in \cite{bk} for even integer values of $s$. It follows for all $s \geq 0$ by interpolation.
For the second inequality, we apply the standard Sobolev embedding theorem for
$\R^n$ and obtain that
$$\|\phi \cdot m_{\lambda, v} \|_{H^{s+\frac{n}{2}+\varepsilon}(\R^n)} \gtrsim \| \phi\cdot m_{\lambda, v}\|_{C^{s}(B_{\R^n})} \gtrsim \Big|\lambda\Big((R^2+\Delta)^{s/2} \gamma_v(g)\big|_{g=\e}\Big)\Big|\ .$$
The assertion follows by taking the supremum over $\lambda \in E'$ with $p'(\lambda)\leq 1$.
\end{proof}
\subsection{Sobolev norms of order $s\leq 0$}
The natural way to define negative Sobolev norms is by duality. For a Banach representation
$(\pi, E)$ with defining norm $p$ and $k\in\N_0$ we let $p_k'$ be the norm
of $(E')^k$ and define $p_{-k}$ as the dual norm of $p_{k}'$, i.e.
$$p_{-k}:= (p_k')' \, .$$
The norm $p_{-k}$ is naturally defined on $((E')^k)'$. Now observe that the natural inclusion
$(E')^k \hookrightarrow E'$ is continuous with dense image and thus
yields a continuous dual inclusion $E''\hookrightarrow ((E')^k)'$. The double-dual
$E''$ contains $E$ in an isometric fashion. Hence $p_{-k}$ gives rise to a
natural norm on $E$, henceforth denoted by the same symbol, and the completion of $E$ with respect to $p_{-k}$ will be denoted by $E^{-k}$.
\begin{rem} In case $E$ is reflexive, i.e.~$E''\simeq E$, the space $E^{-k}$ is isomorphic to the
strong dual of $(E')^k$.
\end{rem}
\par On the other hand we have seen that the families $(p_k)_k$ and $\left({}^\Delta p_k\right)_k$ are equivalent. In this regard we note that
${}^\Delta p_{-k}$ as defined in \eqref{Rost2} coincides with the dual norm of ${}^\Delta p'_k$.
As a corollary of Proposition \ref{prop Pb} (and interpolation to also non-even indices
$k\in \N_0$) we have:
\begin{cor} For all $k\in \N_0$ there exist constants $c_k, C_k>0$ such that
\begin{equation} c_k\cdot p_{-k}(v) \leq {}^\Delta p_{-k}(v) \leq C_k \cdot p_{-k+n+1}(v) \qquad (v\in E^\infty)\, .\end{equation}
\end{cor}
\section{Introduction}
One of the most important ideas in modern stable homotopy theory is
the notion of a structured ring spectrum, an enhancement of the
representing object for a multiplicative cohomology theory. A
structured ring spectrum is a spectrum equipped with a
homotopy-coherent multiplication; classically the coherence data is
packaged up in an operad. When the multiplication is coherently
commutative (as in the familiar examples of $H\mathbb{Z}$, $ku$, and
$MU$), the classical operadic description of the multiplication
involves an $E_\infty$ operad.
May originally observed that all $E_\infty$ operads are equivalent up
to a zig-zag of maps of operads~\cite{Maygils} and showed that
equivalent $E_\infty$ operads have equivalent homotopical categories
of algebras. As an elaboration of this basic insight it is now
well-understood that all possible notions of commutative ring spectrum
agree. For instance, in the symmetric monoidal categories of EKMM
$S$-modules~\cite{EKMM} and of diagram spectra~\cite{MMSS} (i.e.,
symmetric spectra and orthogonal spectra), the associated categories
of commutative monoids are homotopically equivalent to the classical
category of $E_\infty$-ring spectra~\cite{MayQuinnRay, LMS}.
Moreover, the homotopy theories of the categories of commutative
monoids are equivalent to the homotopy theory of the category of
algebras over any reasonable $E_\infty$ operad~\cite[\S II.4]{EKMM}.
Our focus in this paper is on equivariant generalizations of
$E_\infty$ ring spectra. At first blush, it might seem that we can
give an analogous account of the situation. After all, for any
compact Lie group $G$ and universe $U$ of finite dimensional
$G$-representations, there is the classical notion of an equivariant
$E_\infty$ ring spectrum structured by the equivariant linear
isometries operad on $U$~\cite{LMS}. For each $U$, there are
equivariant analogues of the modern categories of spectra (i.e.,
equivariant orthogonal spectra and equivariant $S$-modules) that are
symmetric monoidal categories \cite{MM, HHR}. Moreover, once again
commutative monoids in these categories are equivalent to classical
equivariant $E_\infty$ ring spectra (see~\cite[\S4-5]{MM}).
However, this is not the whole story. Fix a symmetric monoidal
category $\mathcal Sp_G$ of equivariant spectra that is tensored over
$G$-spaces and is a model of the equivariant stable homotopy category
specified by a complete universe $U$. For any operad ${\mathcal O}$ of
$G$-spaces, we can form the category of ${\mathcal O}$-algebras in $\mathcal Sp_G$.
There are many different $G$-operads ${\mathcal O}$ such that the underlying
non-equivariant operad is $E_\infty$; for instance, for any universe
$U'$, the equivariant linear isometries operad over $U'$ provides an
example. Any operad with that property might be entitled to be
thought of as a $G$-$E_\infty$ operad. However, operadic algebras in
$\mathcal Sp_G$ over different such operads can look very different, as the
following example illustrates.
\begin{motivatingexample}
Let ${\mathcal E}$ be an $E_{\infty}$ operad in spaces, and view it as an
operad in $G$-spaces by giving it the trivial $G$-action. Thus the
$n$\textsuperscript{th} space is equivalent to $E\Sigma_{n}$ with a
trivial $G$-action. Let ${\mathcal E}_{G}$ denote any $E_{\infty}$ operad in
$G$-spaces for which the $n$\textsuperscript{th} space $({\mathcal E}_G)_n$ is
a universal space for $(G\times\Sigma_{n})$-bundles in $G$-spaces
(e.g., the $G$-linear isometries operad for a complete universe $U$).
Then algebras over ${\mathcal E}$ and algebras over ${\mathcal E}_{G}$ in orthogonal
$G$-spectra are different. In fact, for almost all positive cofibrant
orthogonal $G$-spectra $E$,
\[
{\mathcal E}_{n+}\wedge_{\Sigma_{n}}E^{\wedge n}\not\simeq ({\mathcal E}_G)_{n+}\wedge_{\Sigma_{n}}E^{\wedge n}.
\]
The easiest way to see this generic inequality is by computing the
$G$-geometric fixed points. If $E=\Sigma^{\infty}G_{+}$, then for all
$n$, $E^{\wedge n}$ is a free $G$-spectrum. This means, in particular,
that the geometric fixed points of the free ${\mathcal E}$-algebra on $E$ are
$S^{0}$. However, if $n=|G|$, then $({\mathcal E}_{G})_n$ has cells of the form
$(G\times\Sigma_{n})/\Gamma$, where $\Gamma$ is the graph of the
homomorphism $G\to\Sigma_{n}$ describing the left action of $G$ on
itself. The $G$-spectrum
\[
\big((G\times\Sigma_{n})/\Gamma\big)_{+}\wedge_{\Sigma_{n}} E^{\wedge n}
\]
is the Hill-Hopkins-Ravenel norm $N_{e}^{G}(E)$, and in particular,
the geometric fixed points are non-trivial.
\end{motivatingexample}
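For the trivial subgroup the Hill-Hopkins-Ravenel norm admits a concrete description which may help orient the reader: $N_{e}^{G}(E)$ is the smash power $E^{\wedge |G|}$ with $G$ permuting the smash factors according to the left translation action of $G$ on itself.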
Moreover, it turns out that there are many intermediate classes of
$G$-operads that structure equivariant commutative ring spectra which
are richer than ${\mathcal E}$-algebras but are not ${\mathcal E}_G$-algebras. Our
interest in these different notions of equivariant commutative ring
spectra was motivated by recent work of Hopkins and the second author
which showed that equivariantly, Bousfield localization does not
necessarily take ${\mathcal E}_G$-algebras to ${\mathcal E}_G$-algebras. For formal
reasons, the Bousfield localization of any equivariant commutative
ring spectrum must have a multiplication that makes it an ${\mathcal E}$-algebra, but
that is all that is guaranteed. An antecedent of this general result
appears in work of McClure~\cite{mccluretate} which shows that the
Tate spectrum of an ${\mathcal E}_G$-algebra only necessarily has a
multiplication that is structured by ${\mathcal E}$ and is usually not itself
an ${\mathcal E}_G$-algebra.
Our goal in this paper is to provide conceptual descriptions of these
intermediate multiplications on equivariant spaces and spectra in
terms of the Hill-Hopkins-Ravenel norm. We do this via a careful
study of the $G$-operads that structure intermediate multiplications,
which we characterize in terms of the allowable norms on algebras over
them, as suggested by the example above. For this reason, we refer to
such operads as $N_\infty$ operads.
Fix a finite group $G$. A $G$-operad ${\mathcal O}$ consists of a sequence of
$G \times \Sigma_{n}$ spaces ${\mathcal O}_{n}$, $n \geq 0$, equipped with a
$G$-fixed identity element $1 \in {\mathcal O}_{1}$ and a composition map
satisfying equivariant analogues of the usual axioms (see
Definition~\ref{def:goper} for details).
\begin{definition}\label{def:introgeinfop}
An $N_\infty$ operad is a $G$-operad such that
\begin{enumerate}
\item The space ${\mathcal O}_{0}$ is $G$-contractible,
\item The action of $\Sigma_{n}$ on ${\mathcal O}_{n}$ is free, and
\item ${\mathcal O}_{n}$ is a universal space for a family ${\mathcal F}_{n}({\mathcal O})$
of subgroups of $G\times\Sigma_{n}$ which contains all subgroups of
the form $H\times\{1\}$.
\end{enumerate}
In particular, the space ${\mathcal O}_{1}$ is also $G$-contractible.
\end{definition}
Forgetting the $G$-action, an $N_\infty$ operad yields a
non-equivariant $E_\infty$ operad. Examples include the equivariant
little isometries operads and equivariant little disks operads; see
Definition~\ref{def:operexa} for details.
Our first main theorem is a classification of $N_\infty$ operads in
terms of the relationship between the universal spaces ${\mathcal O}_{n}$
forced by the operadic structure maps. Associated to an $N_\infty$
operad, there is a naturally defined collection (indexed by the
subgroups of $G$) of categories of finite sets, called admissible
sets. We can organize the admissible sets as follows. Define a
symmetric monoidal coefficient system to be a contravariant functor
$\underline{{\mathcal C}}$ from the orbit category of $G$ to the category of
symmetric monoidal categories and strong symmetric monoidal functors.
There is a canonical coefficient system that assigns to the orbit
$G/H$ the category of finite $H$-sets and $H$-maps, with symmetric
monoidal product given by disjoint union. We have a poset $\cI$ of
certain sub-coefficient systems of the canonical coefficient system,
ordered by inclusion (i.e., the ones closed under Cartesian product
and induction, see Definition~\ref{def:poset}). Let $\cN_{\infty}\text{-}\mbox{Op}$ denote the
category of $N_\infty$ operads, regarded as a full subcategory of
$G$-operads and $G$-operad maps.
\begin{theorem}
There is a functor
\[
{\underline{\cC}}\colon \cN_{\infty}\text{-}\mbox{Op} \to \cI
\]
which descends to a fully-faithful embedding
\[
{\underline{\cC}}\colon \Ho(\cN_{\infty}\text{-}\mbox{Op})\to\cI,
\]
where the homotopy category is formed with respect to the maps of
$G$-operads which are levelwise $G \times \Sigma_n$-equivalences.
\end{theorem}
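For example (as an illustration of the statement, not an ingredient of its proof), take $G=C_p$ for a prime $p$. An indexing system for $C_p$ is determined by whether or not the free orbit $C_p/e$ belongs to its value at $C_p$, and both choices are allowed, so the poset $\cI$ has exactly two elements; accordingly, up to the weak equivalences appearing in the theorem, there are at most two $N_\infty$ operads for $C_p$, and both possibilities are realized, by a trivial $E_\infty$ operad and by a complete one such as ${\mathcal E}_{C_p}$.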
We conjecture that in fact this embedding is an equivalence of
categories; as we explain in Section~\ref{sec:fullness}, there are
natural candidate $N_\infty$ operads to represent each object in
$\cI$. An interesting question is to determine if all homotopy
types are realized by equivariant little disks or linear isometries
operads.
\begin{remark}
The proof of the preceding theorem involves a calculation of the
derived mapping space between $N_\infty$ operads (see
Proposition~\ref{prop:mapcontract}); in particular, we show that the
space of endomorphisms of an $N_\infty$ operad is contractible.
\end{remark}
The import of this classification theorem is that it establishes that
$N_\infty$ operads are essentially completely controlled by the
isotropy condition in the definition. This allows for very surprising
results about the cofree spectra with an action of an $N_\infty$
operad.
\begin{theorem}
If ${\mathcal O}$ is an $N_\infty$ operad and $R$ is an ${\mathcal O}$-algebra with the
property that the natural map
\[
R\to F(EG_{+},R)
\]
is an equivariant equivalence, then $R$ is equivalent (as
an ${\mathcal O}$-algebra) to an ${\mathcal E}_G$-algebra.
\end{theorem}
Our other main theorems provide
a characterization of structures on algebras over $N_\infty$ operads.
The indexed product construction that underlies the norm makes sense
in the symmetric monoidal category of $G$-spaces with the Cartesian
product, where the resulting functor is simply coinduction. In this
situation, we show in Theorem~\ref{thm:GeneralTransfers} that an
algebra over an $N_\infty$ operad has precisely those transfers $H \to
G$ such that $G/H$ is an admissible $G$-set. Specifically, we have
the following result.
\begin{theorem}
For an algebra $X$ in $G$-spaces over a suitable $N_\infty$ operad,
the abelian group valued coefficient system
\[
\underline{\pi_{k}(X)}\colon\underline{\Set}\to\mathcal Ab
\]
defined by
\[
(T\in\mathcal Set^{H})\mapsto \pi_{k}\big(F(T,X)^{H}\big)
\]
has transfer maps
\[
f_{\ast}\colon\pi_{k}\big(F(T,i_{H}^{\ast}X)^{H}\big)\to \pi_{k}\big(F(S,i_{H}^{\ast}X)^{H}\big)
\]
for any $H$-map $f \colon T \to S$ of admissible $H$-sets and all
$k\geq 0$. Moreover, for the little disks and Steiner operads, these
transfers maps agree with the classical transfers.
\end{theorem}
These are therefore incomplete Mackey functors, studied by Lewis
during his analysis of incomplete universes \cite{LewisHurewicz,
LewisChange}.
\begin{remark}
In the result above, ``suitable'' refers to a certain technical
property of $N_\infty$ operads that we prove for the equivariant
Steiner and linear isometries operads in Section~\ref{sec:inter}.
\end{remark}
In orthogonal $G$-spectra, we show in
Theorem~\ref{thm:ExistenceofNorms} that an algebra over a suitable
$N_\infty$ operad is characterized as a $G$-spectrum equipped with maps
\[
G_{+}\wedge_{H}N^{T} i_{H}^{\ast} R\to R
\]
for the admissible $H$-sets $T$. This gives rise to the following
characterization:
\begin{theorem}
If $R$ is an algebra in orthogonal $G$-spectra over an $N_\infty$
operad ${\mathcal O}$, then
\[
\underline{\pi_{0}(R)}
\]
is a commutative Green functor.
If the ${\mathcal O}$ action interchanges with itself, then for any admissible
$H$-set $H/K$ we have a ``norm map''
\[
\underline{\pi_{0}(R)}(G/K)\xrightarrow{n_{K}^{H}}\underline{\pi_{0}(R)}(G/H)
\]
which is a homomorphism of commutative multiplicative monoids.
The maps $n_{K}^{H}$ satisfy the multiplicative version of the Mackey
double-coset formula.
\end{theorem}
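For instance, in the simplest case $G=C_2$ with generator $g$, the admissible set $C_2/e$ yields a norm map $n_{e}^{C_2}\colon\underline{\pi_{0}(R)}(C_2/e)\to\underline{\pi_{0}(R)}(C_2/C_2)$, and the double-coset formula specializes to the familiar identity $i_{e}^{\ast}\,n_{e}^{C_2}(x)=x\cdot(g\cdot x)$; we spell this out only as an illustration of the structure.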
Thus just as the homotopy groups of algebras in spaces over the
Steiner operad on an incomplete universe gave incomplete Mackey
functors with only some transfers, the zeroth homotopy group of an
algebra in spectra over the linear isometries operad on an incomplete
universe gives incomplete Tambara functors with only some norms.
The paper is organized as follows. In Section~\ref{sec:conv}, we
explain our assumptions and conventions about the kinds of operadic
actions and categories of equivariant spectra we are working with. We
introduce the notion of $N_\infty$ operads in Section~\ref{sec:defs}.
We use this to explain in Section~\ref{sec:Admissibles} that
associated to an $N_\infty$ operad, there is a naturally defined
collection (indexed by the subgroups of $G$) of categories of finite
sets, called admissible sets, and that if two operads have the same
admissible sets, then they are equivalent. In Section~\ref{sec:LUDU},
we perform a surprising computation: we show that for a generic
incomplete universe, the little disks operad and the linear isometries
operad are different. In Section~\ref{sec:hocat} we discuss the
connection between the homotopy category of $N_\infty$ operads and the
poset $\cI$. In Section~\ref{sec:Algebras}, we then show that the
admissible sets correspond to indexed products that an algebra over
the operad must have. In Section~\ref{sec:TransfersAndNorms}, we work
out this characterization in equivariant spaces and spectra. In the
case of algebras in $G$-spaces over $N_\infty$ operads, this
perspective explains the transfers that arise in $G$-equivariant
infinite loop space theory. In the case of equivariant spectra, this
structure controls which norms occur in a ring spectrum. Finally, in
the appendix we collect some miscellaneous technical results: in
Section~\ref{sec:monadicalgebras} we show that weakly equivalent
$N_\infty$ operads have equivalent homotopical categories of algebras
and we explain the comparison to rigid realizations of $N_\infty$
operadic algebras in terms of equivariant EKMM spectra, and in
Section~\ref{sec:appfix} we describe geometric fixed points of
algebras.
\subsection{Acknowledgements}
We want to thank Mike Hopkins, Mike Mandell, and Peter May for much
useful advice and many helpful conversations. The paper benefited
enormously from a careful reading of a previous draft by Peter May.
The paper was also improved by comments from Aaron Royer, Anna-Marie
Bohmann, Emanuele Dotto, Justin Noel, Tomer Schlank, and David White.
\section{Conventions on operadic algebras in equivariant spectra}\label{sec:conv}
Fix a finite group $G$ and a complete universe $U$ of
$G$-representations. Let $\mathcal Sp_{G}$ denote the category of orthogonal
$G$-spectra~\cite{MM}. We will always regard $\mathcal Sp_G$ as equipped with
the homotopy theory specified by the weak equivalences detected by the
equivariant stable homotopy groups indexed by $U$~\cite[III.3.2]{MM};
$\mathcal Sp_G$ is a model of the equivariant stable category and all
representation spheres are invertible~\cite[III.3.8]{MM}. However,
the multiplicative structures we study are often described by linear
isometries operads over other universes and in general the language of
incomplete universes is very useful in describing $N_\infty$ operads.
The key point we want to emphasize is that although the multiplicative
structure varies, the additive structure does not.
We now want to be clear about what we mean by an operadic algebra in
$\mathcal Sp_{G}$. Since $\mathcal Sp_{G}$ is tensored over $G$-spaces (with the
tensor of a $G$-space $A$ and an orthogonal $G$-spectrum $E$ computed
as $A_+ \sma E$), we can define the category $\mathcal Sp_{G}[{\catsymbfont{O}}]$ of
${\catsymbfont{O}}$-algebras for any operad ${\catsymbfont{O}}$ in $G$-spaces. This is the notion
of operadic algebra we study in this paper. However, there is
the potential for terminological confusion: even when ${\catsymbfont{O}}$ is a
classical $G$-$E_\infty$ operad, for instance the $G$-linear
isometries operad, the category $\mathcal Sp_{G}[{\catsymbfont{O}}]$ {\em is not} equivalent
to the classical category of $G$-$E_\infty$ ring spectra~\cite{LMS}.
The latter is defined using the category of ``coordinate-free''
$G$-spectra and the twisted half-smash product, and requires of
necessity operads augmented over the $G$-linear isometries operad.
(This terminological point is clearly explained
in~\cite[\S13]{Mayrant}.) Note, however, that the homotopy
categories of $\mathcal Sp_{G}[{\mathcal O}]$ and the classical category of
$G$-$E_\infty$ ring spectra are equivalent.
See Appendix~\ref{sec:monadicalgebras} for further discussion of such
comparison results.
We could also have worked with the equivariant analogues of EKMM
$S$-modules (e.g., see~\cite{ElmendorfMay} for a discussion of this
category) based on $U$. However, since we rely at various points on
the homotopical analysis of the norm from~\cite[App.~B]{HHR}, it is
convenient for our purposes to work with orthogonal $G$-spectra. We
have no doubt that our theorems are independent of the specific model
of the equivariant stable category, however.
Finally, we note that our results have analogues in the situation when
the (additive) homotopy theory on $\mathcal Sp_G$ is indexed on an incomplete
universe. However, in this situation some care must be taken. The
underlying analysis was begun by Lewis~\cite{lewisincomplete}, who
analyzed the homotopy theory of $G$-spectra on incomplete
universes, and various subtleties about the connections between the
additive and multiplicative structures are known to experts. We leave
the elaboration in this setting to the interested reader. However, we
note that our analysis in Section~\ref{sec:LUDU} of the linear
isometries operads also provides a criterion for the special case when
both the additive and multiplicative universes are the same but
potentially incomplete.
\section{Equivariant operads and indexing systems}\label{sec:defs}
In this section, we define $N_\infty$ operads and give a number of
examples. We then move on to introduce definitions and notations for
{\em indexing systems}, which allows us to precisely state our main
result describing the homotopy category of $N_\infty$ operads in terms
of a certain poset.
\subsection{Equivariant $N_\infty$ operads}\label{ssec:Ninfty}
In this section we review the definitions and standard examples of
$G$-operads that we will work with.
\begin{definition}\label{def:goper}
A $G$-operad ${\mathcal O}$ consists of a sequence of $G \times \Sigma_{n}$
spaces ${\mathcal O}_{n}$, $n \geq 0$, such that
\begin{enumerate}
\item There is a $G$-fixed identity
element $1 \in {\mathcal O}_{1}$, and
\item there are $G$-equivariant composition maps
\[
{\mathcal O}_{k}\times{\mathcal O}_{n_{1}}\times\dots\times{\mathcal O}_{n_{k}}\to{\mathcal O}_{n_{1}+\dots+n_{k}}
\]
which satisfy the usual compatibility conditions with each other and
with the action of the symmetric groups
(see~\cite[2.1]{costenoblewaner}). In particular, if
$n_{1}=\dots=n_{k}=n$, then the map is actually
$(G\times \Sigma_{k}\wr\Sigma_{n})$-equivariant.
\end{enumerate}
When ${\mathcal O}_0 = *$, we say that ${\mathcal O}$ is a reduced operad.
\end{definition}
\begin{remark}
Note that in contrast to the usual convention, we will treat
$G$-operads as having left actions of symmetric groups via the
inversion, as this makes certain formulas easier to understand. It
also allows a simultaneous equivariant treatment of the $G$ and
$\Sigma_n$-actions.
\end{remark}
We will primarily be interested in the equivariant analogues of
$E_\infty$ operads. For this, we need the notion of a family and of a
universal space for a family.
\begin{definition}\label{def:Family}
A family for a group $G$ is a collection of subgroups closed under
passage to subgroups and under conjugation.
If $\mathcal F$ is a family, then a universal space for $\mathcal F$
is a $G$-space $E\mathcal F$ such that for all subgroups $H$,
\[
(E\mathcal F)^H\simeq \begin{cases}
\ast & H\in\mathcal F, \\
\varnothing & H\not\in\mathcal F.
\end{cases}
\]
\end{definition}
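For example, if $\mathcal F=\{e\}$ consists only of the trivial subgroup, then $E\mathcal F$ is the familiar free contractible $G$-space $EG$, while for the family of all subgroups of $G$ one may take $E\mathcal F=\ast$.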
For later purposes, there is an equivalent definition that is more
categorical.
\begin{definition}
A sieve in a category ${\mathcal C}$ is a full subcategory $\mathcal D$ such
that if $B\to C$ is in $\mathcal D$ and if $A\to B$ is in ${\mathcal C}$, then
the composite $A\to C$ is in $\mathcal D$.
\end{definition}
With this, we have two equivalent formulations of a family.
\begin{proposition}\label{prop:Families}\mbox{}
\begin{enumerate}
\item A family of subgroups ${\mathcal F}$ determines a sieve in the orbit
category by considering the full subcategory generated by the objects
$G/H$ for $H\in{\mathcal F}$. Similarly, the collection of all $H$ such that
$G/H$ is in a given sieve in the orbit category forms a family.
\item A family of subgroups ${\mathcal F}$ is also equivalent to a sieve
$\mathcal Set_{{\mathcal F}}$ in the category of finite $G$-sets, where again the
identification specifies that $T$ is in the sieve if and only if the
stabilizers of points of $T$ are in the family.
\end{enumerate}
\end{proposition}
\begin{remark}
An equivalent condition to condition (ii) in
Proposition~\ref{prop:Families} is that the sieve in $G$-sets is the
full subcategory generated by those $G$-sets $T$ such that the space
of $G$-equivariant maps from $T$ to $E{\mathcal F}$ is contractible.
\end{remark}
\begin{definition}\label{def:geinfop}
An $N_\infty$ operad is a $G$-operad such that
\begin{enumerate}
\item The space ${\mathcal O}_{0}$ is $G$-contractible,
\item The action of $\Sigma_{n}$ on ${\mathcal O}_{n}$ is free, and
\item ${\mathcal O}_{n}$ is a universal space for a family ${\mathcal F}_{n}({\mathcal O})$
of subgroups of $G\times\Sigma_{n}$ which contains all subgroups of
the form $H\times\{1\}$.
\end{enumerate}
In particular, the space ${\mathcal O}_{1}$ is also $G$-contractible.
\end{definition}
Historically, most sources have focused on the situation where
${\mathcal O}_{n}$ is a universal principal $(G,\Sigma_n)$-bundle; i.e.,
${\mathcal O}_{n}^{\Lambda}$ is nonempty and contractible for those $\Lambda$ which intersect $\Sigma_{n}$ trivially, and empty otherwise
(e.g., see~\cite{costenoblewaner}). As we shall recall, this is the
analogue of restricting attention to a complete universe. We will
refer to such an $N_\infty$ operad as ``complete'' and follow the
literature in calling these $E_{\infty}$ $G$-operads. For any
$H \subset G$, there is a forgetful functor from $N_\infty$ operads on
$G$ to $N_\infty$ operads on $H$. When $G = \{e\}$, it is clear from the
definition that an $N_\infty$ operad is an ordinary $E_{\infty}$
operad.
\begin{lemma}
The underlying non-equivariant operad for any $N_\infty$ operad is an
$E_\infty$ operad.
\end{lemma}
The category $\cN_{\infty}\text{-}\mbox{Op}$ of $N_\infty$ operads, regarded as a full
subcategory of the category of $G$-operads and $G$-operad maps, is a
category with weak equivalences. The weak equivalences are ultimately
lifted from the homotopy theory on $G$-spaces where a map $f \colon
X \to Y$ of $G$-spaces is a $G$-equivalence if the induced maps
$f^H \colon X^H \to Y^H$ on $H$-fixed points are nonequivariant weak
equivalences for each (closed) subgroup $H \subset G$.
\begin{definition}\label{defn:weak}
A map ${\mathcal O} \to {\mathcal O}'$ of $G$-operads is a weak equivalence if each
map ${\mathcal O}_{n}^{\Gamma} \to ({\mathcal O}'_{n})^{\Gamma}$ is a weak equivalence for all $n$ and all subgroups $\Gamma
\subseteq G \times \Sigma_n$.
\end{definition}
Note that this definition of weak equivalence does not generalize the
usual weak equivalences on operads (i.e., the maps of operads which
are underlying equivalences of spaces for each $n$) when $G = e$;
rather, this is a generalization of Rezk's
notion of weak equivalence of operads~\cite[\S 3.2.10]{Rezk}. The
generalization of the usual notion would lead to a weak equivalence of
$N_\infty$ operads being a levelwise $G$-equivalence of spaces, and
under this definition the linear isometries operad on a genuine
universe and any $G$-trivial $E_\infty$ operad would be equivalent via
a zig-zag.
\begin{remark}
One can also ask for a weaker notion of weak equivalence wherein one checks only the fixed points for subgroups of $G\times\Sigma_{n}$ which intersect $\Sigma_{n}$ trivially. This arises for instance in work of Dotto and Schlank considering $G$-operads in terms of presheaves on certain subcategories of the orbit category. For $N_\infty$ operads, the two notions coincide, since all of the other fixed points are assumed to be empty; for this reason, we do not discuss this further.
\end{remark}
We now turn to examples. The $N_\infty$ operads which arise most
frequently in equivariant algebraic topology are the linear isometries
operad on a universe $U$ and variants of the little disks operad on a
universe $U$. To be precise, let $U$ denote a countably
infinite-dimensional real $G$-inner product space which contains each
of its finite-dimensional subrepresentations infinitely often and for which
the subspace $U^G$ of $G$-fixed vectors is nonzero. We emphasize that $U$ is not
assumed to be complete. Our presentation is heavily based on the
excellent treatment of~\cite[\S 10]{guilloumay}; we refer the
interested reader to that paper for more discussion.
\begin{definition}{\mbox{}}\label{def:operexa}
\begin{enumerate}
\item The {\emph{linear isometries}} operad $ {\mathcal L}(U)$ has
$n$\textsuperscript{th} space $ {\mathcal L}(U^{n},U)$ of (nonequivariant)
linear isometries from $U^{n}$ to $U$. The $G\times\Sigma_{n}$-action
is by conjugation and the diagonal action. The distinguished element
$1 \in {\mathcal L}(U,U)$ is the identity map, and the structure maps are
induced from composition.
\item The {\emph{little disks}} operad $\mathcal{D}(U)$ has
$n$\textsuperscript{th} space $\mathcal{D}(U)_n$ given as the colimit of
embeddings of $n$ copies of the disk in the unit disk of a finite
subrepresentation $V$ in $U$. Precisely, let $D(V)$ denote the unit
disk in $V$. A little disk is a (nonequivariant) affine map $D(V) \to
D(V)$. We define $\mathcal{D}_{V}(U)_n$ as the space of $n$-tuples of
nonoverlapping little disks, where $G$ acts by conjugation on each
disk and $\Sigma_n$ in the obvious way. The distinguished element
$1 \in \mathcal{D}_{V}(U)_1$ is the identity map and the structure maps are
induced from composition. For $V \subseteq W$, there is a map induced
by taking the disk $v \mapsto av + b$ to the disk $w \mapsto aw + b$.
We define $\mathcal{D}(U) = \colim_V \mathcal{D}_V(U)$.
\item The {\emph{embeddings}} operad can be defined as follows. Fix a
real representation $V \subset U$ with $G$-invariant inner product,
and let $\mathcal{E}(V)_{n}$ be the $G$-space of $n$-tuples of topological
embeddings $V \to V$ with disjoint image (topologized as a
$G$-subspace of the space of all embeddings with $G$ acting by
conjugation). The distinguished element $1 \in \mathcal{E}(V)_1$ is the
identity map and the structure maps are induced by composition and
disjoint union. As above, we can pass to the colimit over $V$.
\item The {\emph{Steiner}} operad $\mathcal{K}(U)$ is a (superior) variant of
the little disks operad $\mathcal{D}(U)$. Fix a real representation $V \subset
U$ with $G$-invariant inner product. Define $R_{V} \subset \mathcal{E}(V)_1$ to
be the $G$-subspace of distance-reducing embeddings $f \colon V \to
V$. A Steiner path is a map $h \colon I \to R_V$ with $h(1) = \id$.
Let $P_V$ denote the $G$-space of Steiner paths (with $G$-action
coming from the action on $R_V$). There is a natural projection map
$\pi \colon P_V \to R_V$ given by evaluation at $0$. Define $\mathcal{K}(V)_n$
to be the $G$-space of $n$-tuples of Steiner paths $\{h_i\}$ such that
the projections $\pi(h_i)$ have disjoint images. The Steiner operad
is defined to be $\mathcal{K}(U) = \colim_V \mathcal{K}(V)$.
\end{enumerate}
\end{definition}
\begin{remark}
The equivariant little disks operad is unfortunately extremely poorly
behaved; products of disks are not necessarily disks, and as observed
in~\cite[\S 3]{Mayrant}, the colimit over inclusions $V \subseteq W$
that defines $\mathcal{D}(U)$ is not compatible with the colimit of
$\Omega^V \Sigma^V$. These problems are fixed by the Steiner operad,
and for these reasons the equivariant Steiner operad is preferable in
most circumstances. Moreover, the Steiner operad is necessary for
capturing multiplicative structures (i.e., $E_\infty$ ring spaces) via
operad pairings --- there are equivariant operad pairs
$\big(\mathcal{K}(V), {\mathcal L}(V)\big)$ for each $V \subset U$ and $\big(\mathcal{K}(U), {\mathcal L}(U)\big)$.
In contrast, it does not seem possible to have an operad pairing
involving the little disks operad. See~\cite[10.2]{guilloumay} for
further discussion of this point.
\end{remark}
We have the following result about the $G$-homotopy type of the little
disks and Steiner operads~\cite[9.7, 10.1]{guilloumay}.
\begin{proposition}
Let $V \subset U$ be a real representation with $G$-invariant inner
product. Then the $n$th spaces $\mathcal{D}(V)_n$ and $\mathcal{K}(V)_n$ are
$G \times \Sigma_n$-equivalent to the equivariant configuration space
$F(V, n)$.
\end{proposition}
Passing to colimits, this has the following corollary:
\begin{corollary}
The $G$-operads $\mathcal{D}(U)$ and $\mathcal{K}(U)$ are $N_\infty$ operads for any
universe $U$.
\end{corollary}
The classical argument about contractibility of the spaces of equivariant isometries shows the following lemma.
\begin{lemma}
The $G$-operad ${\mathcal L}(U)$ is an $N_\infty$ operad for any universe $U$.
\end{lemma}
One of our original motivations for this paper was to understand the
relationship between $\mathcal{D}(U)$ or $\mathcal{K}(U)$ and $ {\mathcal L}(U)$ in the case of a
general universe $U$. We give an answer in the spirit of Lewis'
beautiful work relating dualizability of an orbit $G/H$ to whether it
embeds in the universe $U$~\cite{lewisincomplete}. The surprising
conclusion of our study will be just how far apart $\mathcal{K}(U)$ and
$ {\mathcal L}(U)$ can be for an incomplete universe $U$; see
Section~\ref{sec:LUDU}.
\subsection{Indexing systems}\label{ssec:Indexing}
There is a close connection between our $N_\infty$ operads and certain
subcategories of the categories of finite $G$-sets. However, as is
often the case in equivariant homotopy, we never want to consider just
the group $G$; instead we should consider all subgroups on equal
footing. This motivates the following replacement for a category.
\begin{definition}
A {\emph{categorical coefficient system}} is a contravariant functor
$\underline{{\mathcal C}}$ from the orbit category of $G$ to the category of
small categories.
\end{definition}
As we will almost never be talking about abelian group valued
coefficient systems in this paper, we will often abusively drop the
prefix ``categorical''.
\begin{definition}
A {\emph{symmetric monoidal coefficient system}} is a contravariant
functor $\underline{{\mathcal C}}$ from the orbit category of $G$ to the
category of symmetric monoidal categories and strong symmetric
monoidal functors.
If $\underline{{\mathcal C}}$ is a symmetric monoidal coefficient system, then
the {\emph{value at $H$}} is $\underline{{\mathcal C}}(G/H)$, and will often be
denoted $\underline{{\mathcal C}}(H)$.
For a symmetric monoidal coefficient system $\underline{{\mathcal C}}$, let
\[
i_{H}^{\ast}\colon\underline{{\mathcal C}}(G)\to\underline{{\mathcal C}}(H)
\]
denote the restriction map associated to the natural map $G/H\to G/G$.
\end{definition}
We can also consider ``enriched'' coefficient systems that take values
in enriched categories. Most of the naturally arising categories in
equivariant homotopy actually sit in enriched symmetric monoidal
coefficient systems.
\begin{definition}
Let $\underline{\mathcal Top}_{(-)}$ be the enriched coefficient system of
spaces. The value at $H$ is $\mathcal Top_{H}$, the category of $H$-spaces and
all (not just equivariant) maps. Similarly, let
$\underline{\mathcal Top}^{(-)}$ be the associated ``level-wise fixed
points'', the value at $H$ is $\mathcal Top^{H}$, the category of $H$-spaces
and $H$-maps. There are two compatible symmetric monoidal structures:
disjoint union and Cartesian product.
Let $\underline{\mathcal Sp}_{(-)}$ be the enriched coefficient system of
spectra. The value at $H$ is $\mathcal Sp_{H}$, the category of $H$-spectra
and all maps. Let $\underline{\mathcal Sp}^{(-)}$ be the associated
coefficient system whose value at $H$ is the category of $H$-spectra
and $H$-maps. We again have two symmetric monoidal structures we can
consider: wedge sum and smash product.
\end{definition}
The most important category for our study of $N_\infty$ operads is the
coefficient system of finite $G$-sets.
\begin{definition}
Let $\underline{\Set}$ be the symmetric monoidal coefficient system of finite
sets. The value at $H$ is $\mathcal Set^{H}$, the category of finite $H$-sets
and $H$-maps. The symmetric monoidal operation is disjoint union.
\end{definition}
We will associate to every $N_\infty$ operad a subcoefficient system of
$\underline{\Set}$. The operadic structure gives rise to additional structure on
the coefficient system.
\begin{definition}
We say that a full sub symmetric monoidal coefficient system ${\mathcal F}$ of
$\underline{\Set}$ is {\emph{closed under self-induction}} if whenever
$H/K\in {\mathcal F}(H)$ and $T\in {\mathcal F}(K)$, $H\times_{K} T\in {\mathcal F}(H)$.
\end{definition}
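In the special case of orbits this says: if $H/K\in {\mathcal F}(H)$ and $K/L\in {\mathcal F}(K)$, then $H\times_{K}(K/L)\cong H/L$ lies in ${\mathcal F}(H)$.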
\begin{definition}
Let ${\mathcal C}\subset{\mathcal D}$ be a full subcategory. We say that ${\mathcal C}$ is a
{\emph{truncation subcategory}} of ${\mathcal D}$ if whenever $X\to Y$ is monic
in ${\mathcal D}$ and $Y$ is in ${\mathcal C}$, then $X$ is also in ${\mathcal C}$.
A truncation sub coefficient system of a symmetric monoidal
coefficient system $\underline{{\mathcal D}}$ is a sub coefficient system that
is levelwise a truncation subcategory.
\end{definition}
In particular, for finite $G$-sets, truncation subcategories are those
which are closed under passage to subobjects.
\begin{definition}
An {\emph{indexing system}} is a truncation sub symmetric monoidal
coefficient system $\underline{\cF}$ of $\underline{\Set}$ that contains all trivial sets and is closed under
self-induction and Cartesian product.
\end{definition}
\begin{definition}\label{def:poset}
Let $\mathcal Coef(\mathcal Set)$ be the poset of all subcoefficient systems of
$\underline{\Set}$, ordered by inclusion. Let $\cI$ be the poset of all
indexing systems.
\end{definition}
With this, we can state our main result describing the homotopy
category of $N_\infty$ operads.
\begin{theorem}
There is a functor
\[
{\underline{\cC}}\colon \cN_{\infty}\text{-}\mbox{Op} \to \cI
\]
which descends to a fully-faithful embedding of categories
\[
{\underline{\cC}}\colon \Ho(\cN_{\infty}\text{-}\mbox{Op})\to\cI.
\]
\end{theorem}
\section{Admissible sets and $N_\infty$ operads}\label{sec:Admissibles}
The construction of the functor ${\underline{\cC}}$ proceeds in two steps. We first
define a functor, also called ${\underline{\cC}}$, from symmetric sequences with an
analogous universal property for their constituent spaces to the poset
$\mathcal Coef(\mathcal Set)$. We then show that if a symmetric sequence arises from
an operad, then the resulting value of ${\underline{\cC}}$ actually lands in
$\cI$.
\subsection{Symmetric sequences and the functor ${\underline{\cC}}$}
We begin looking very generally at what sorts of families of subgroups
can arise, using only at the universal space property of the spaces
in an $N_\infty$ operad and the freeness of the $\Sigma_{n}$-action.
\begin{definition}
An $N_\infty$ symmetric sequence is a symmetric sequence ${\mathcal O}$ in
$G$-spaces such that for each $n$,
\begin{enumerate}
\item ${\mathcal O}_{n}$ is a universal space for a family ${\mathcal F}_{n}({\mathcal O})$ of subgroups of $G\times\Sigma_{n}$ and
\item $\Sigma_{n}$ acts freely on ${\mathcal O}_{n}$.
\end{enumerate}
\end{definition}
In particular, the underlying symmetric sequence for an $N_\infty$
operad is always of this form.
Our entire analysis hinges on a standard observation about the
structure of subgroups of $G\times\Sigma_{n}$ which intersect
$\Sigma_{n}$ trivially.
\begin{proposition}\label{prop:graph}
If $\Gamma\subset G\times\Sigma_{n}$ is such that $\Gamma\cap
(\{1\}\times\Sigma_{n})=\{1\}$, then there is a subgroup $H$ of $G$
and a homomorphism $f\colon H\to \Sigma_{n}$ such that $\Gamma$ is the
graph of $f$.
\end{proposition}
Thus the subgroup $\Gamma$ is equivalent to an
$H$-set structure on $\underline{n}=\{1,\dots,n\}$. It will be
essential to our future analysis to recast the whole story in terms of
$H$-sets.
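To fix ideas, consider the smallest non-trivial case $G=C_{2}$ and
$n=2$. The subgroups of $C_{2}\times\Sigma_{2}$ intersecting
$\Sigma_{2}$ trivially are the trivial subgroup, $C_{2}\times\{e\}$,
and the diagonal $\Delta$. The subgroup $C_{2}\times\{e\}$ is the
graph of the trivial homomorphism $C_{2}\to\Sigma_{2}$ and corresponds
to the trivial $C_{2}$-set of cardinality $2$, while $\Delta$ is the
graph of the isomorphism $C_{2}\cong\Sigma_{2}$ and corresponds to the
free $C_{2}$-set $C_{2}$.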
\begin{definition}
For an $H$-set $T$, let $\Gamma_{T}$ denote the graph of the
homomorphism $H\to\Sigma_{|T|}$ defining the $H$-set structure. We
say that an $H$-set $T$ is {\emph{admissible}} for ${\mathcal O}$ if
$\Gamma_{T}\in{\mathcal F}_{|T|}({\mathcal O})$.
\end{definition}
The requirements associated to the stipulation that ${\mathcal F}_{\ast}({\mathcal O})$
forms a family (closure under subgroups and conjugacy) translate into
the following observation in terms of admissibility:
\begin{proposition}\label{prop:admitfam}
If an $H$-set $T$ of cardinality $n$ is admissible, then
\begin{enumerate}
\item for all subgroups $K\subset H$, $i_{K}^{\ast}(T)$ is admissible,
\item the $gHg^{-1}$-set $g\cdot T$ is admissible, and
\item every $H$-set isomorphic to $T$ (as an $H$-set) is admissible.
\end{enumerate}
\end{proposition}
Proposition~\ref{prop:admitfam} actually shows that the admissible
sets assemble into a subcoefficient system of $\underline{\Set}$. This allows
us to define the functor ${\underline{\cC}}$.
\begin{definition}
Let ${\underline{\cC}}({\mathcal O})$ denote the full subcoefficient system of $\underline{\Set}$ whose
value at $H$ is the full subcategory of $\mathcal Set^{H}$ spanned by the
admissible $H$-sets.
\end{definition}
\begin{proposition}
If ${\mathcal O}\to{\mathcal O}'$ is a map of $N_\infty$ symmetric sequences, then
\[
{\underline{\cC}}({\mathcal O})\subset{\underline{\cC}}({\mathcal O}').
\]
\end{proposition}
\begin{proof}
Let $T$ be an admissible set for ${\mathcal O}$. By definition, this means that
${\mathcal O}_{|T|}^{\Gamma_{T}}\neq\emptyset$. Since we have a
$G\times\Sigma_{|T|}$-equivariant map from ${\mathcal O}_{|T|}$ to ${\mathcal O}'_{|T|}$, we
know that the $\Gamma_{T}$ fixed points of ${\mathcal O}'_{|T|}$ cannot be
empty.
\end{proof}
To refine our map, we recall the relevant notion of weak equivalence
for $G$-symmetric sequences.
\begin{definition}
A map $f\colon {\mathcal O}\to{\mathcal O}'$ between $G$-symmetric sequences is a weak
equivalence if for each $n$ it induces a weak equivalence of
$(G \times \Sigma_n)$-spaces.
\end{definition}
Notice that a weak equivalence of $N_\infty$ operads gives rise to a
weak equivalence of underlying $N_\infty$ symmetric sequences. Unpacking
the definition immediately gives the following proposition.
\begin{proposition}
If $f\colon {\mathcal O}\to{\mathcal O}'$ is a weak equivalence between
$N_\infty$-symmetric sequences, then ${\underline{\cC}}({\mathcal O})={\underline{\cC}}({\mathcal O}')$.
\end{proposition}
\subsection{Symmetric monoidal structure of ${\underline{\cC}}({\mathcal O})$ and the operadic structure}\label{sec:Properties}
For an $N_\infty$ operad ${\mathcal O}$, the spaces ${\mathcal O}_{n}$ do not exist in
isolation, and the structure maps on ${\mathcal O}$ assemble to show that
${\underline{\cC}}({\mathcal O})$ has extra structure. We first show that ${\underline{\cC}}({\mathcal O})$ is never
empty.
\begin{proposition}\label{prop:TrivialSets}
For all subgroups $H$ and for all finite sets $T$ of cardinality $n$,
the trivial $H$-set $T$ is admissible.
\end{proposition}
\begin{proof}
This follows from condition (iii) of Definition~\ref{def:geinfop}.
\end{proof}
\begin{lemma}\label{lem:DisjointUnion}
The coefficient system ${\underline{\cC}}({\mathcal O})$ is closed under (levelwise)
coproducts, and is thus a symmetric monoidal subcoefficient system of
$\underline{\Set}$.
\end{lemma}
\begin{proof}
We give the proof for the case of $S\amalg T$; other cases are
analogous. Let $m_{1}=|S|$ and $m_{2}=|T|$. By definition, the fact
that $S$ and $T$ are admissible $H$-sets means that there exist
subgroups $\Gamma_1 \subset G \times \Sigma_{m_1}$ and
$\Gamma_2 \subset G \times \Sigma_{m_2}$ which are the graphs of
homomorphisms
\[
f_1 \colon H \to \Sigma_{m_1} \qquad \textrm{and} \qquad f_2 \colon
H \to \Sigma_{m_2}
\]
respectively.
Since ${\catsymbfont{O}}$ is an operad, we know there exists a composition map
\[
\gamma \colon {\catsymbfont{O}}_2 \times {\catsymbfont{O}}_{m_1} \times {\catsymbfont{O}}_{m_2} \to {\catsymbfont{O}}_{m_1 + m_2}
\]
which is at least $\big(G \times
(\{e\} \times \Sigma_{m_1} \times \Sigma_{m_2})\big)$-equivariant. Let
$\Gamma \subset G \times \Sigma_{m_1+m_2}$ be the subgroup specified
by the graph
\[
\Gamma = \{(h, f_1(h) \amalg f_2(h)) \, | \, h \in H\}.
\]
Consider the map $\gamma^{\Gamma}$ induced by passage to fixed points.
On the left hand side, by hypothesis we know that the fixed points are
contractible --- this is true for ${\catsymbfont{O}}_{m_1}$ and ${\catsymbfont{O}}_{m_2}$ by
admissibility, and for ${\catsymbfont{O}}_{2}$ by
Proposition~\ref{prop:TrivialSets}. Hence ${\catsymbfont{O}}_{m_1 +
m_2}^{\Gamma}$ cannot be empty, and since ${\catsymbfont{O}}_{m_1+m_2}$ is a
universal space, it is in fact contractible.
Translating, this means precisely that $S \amalg T$ is an admissible
$H$-set.
\end{proof}
So far, we have neglected some structure on the category of finite
$G$-sets. In addition to the disjoint union, there is a Cartesian
product. This is a form of the disjoint union, however, as $G/K\times
G/H$ is the ``disjoint union'' of $G/H$ indexed by the $G$-set $G/K$:
\[
G/K\times G/H \cong \coprod_{G/K}G/H,
\]
where $G$ acts on both the indexing set and the summands.
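For example, taking $K=\{e\}$ gives
\[
G/\{e\}\times G/H\cong \coprod_{[G:H]}G/\{e\},
\]
since the diagonal action on the product is free.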
Induction has a similar formulation as an indexed coproduct, and our
admissible sets are closed under some forms of each operation.
\begin{lemma}\label{lem:CartesianProduct}
For each $H$, the category ${\mathcal C}_{H}({\mathcal O})$ is closed under Cartesian
product, and thus ${\underline{\cC}}({\mathcal O})$ inherits the structure of a symmetric
bimonoidal category levelwise.
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that $H=G$, and let $S$ be
an admissible $G$-set of cardinality $m$ and $T$ one of cardinality
$n$. Associated to $S$ is a subgroup $\Gamma_{S}$ which is the graph
of $f\colon G\to\Sigma_{m}$, and associated to $T$, we have a similar
subgroup $\Gamma_{T}$ and homomorphism $h\colon G\to\Sigma_{n}$. Now there
is an embedding
\[
\Delta\colon\Sigma_{m}\times\Sigma_{n}\to\Sigma_{m}\wr\Sigma_{n}
\]
which is just the diagonal on the $\Sigma_{n}$ factor, and we let
$F\colon G\to\Sigma_{m}\wr\Sigma_{n}$ be $\Delta\circ(f\times
h)$. Finally, let $\Gamma_{S\times T}$ be the graph of $F$.
We now need to show two things:
\begin{enumerate}
\item that $({\mathcal O}_{m}\times{\mathcal O}_{n}^{m})^{\Gamma_{S\times T}}$ is non-empty (which in turn forces the $\Gamma_{S\times T}$-fixed points of ${\mathcal O}_{mn}$ to be non-empty) and
\item that the function $F$ classifies the $G$-set $S\times T$.
\end{enumerate}
For the first part, we observe that $\Gamma_{S\times T}$ acts on
${\mathcal O}_{m}\times{\mathcal O}_{n}^{m}$ via its natural action on the two named
factors. Thus
\[
({\mathcal O}_{m}\times{\mathcal O}_{n}^{m})^{\Gamma_{S\times
T}}={\mathcal O}_{m}^{\Gamma_{S\times T}}\times ({\mathcal O}_{n}^{m})^{\Gamma_{S\times
T}}.
\]
The action on the ${\mathcal O}_{m}$ term factors through the canonical
quotient map
\[
G\times\Sigma_{m}\wr\Sigma_{n}\to G\times\Sigma_{m},
\]
and the image of $\Gamma_{S\times T}$ under this quotient map is
$\Gamma_{S}$. By assumption, ${\mathcal O}_{m}^{\Gamma_{S}}$ is contractible,
and hence so is ${\mathcal O}_{m}^{\Gamma_{S\times T}}$.
The action on the second factor is slightly more complicated. We make
the following observation: the diagonal map ${\mathcal O}_{n}\to{\mathcal O}_{n}^{m}$ is
$(G\times\Sigma_{m}\times\Sigma_{n})$-equivariant, where $\Sigma_{m}$
acts trivially on the first factor and where we have identified
$\Sigma_{m}\times\Sigma_{n}$ with its image under $\Delta$. The group
$\Gamma_{S\times T}$ is contained in the subgroup
$G\times \textrm{Im}(\Delta)$, and so the diagonal map is $\Gamma_{S\times
T}$-equivariant. By construction, the action of $\Gamma_{S\times T}$
on ${\mathcal O}_{n}$ is via $\Gamma_{T}$, and we therefore have fixed
points. This implies that ${\mathcal O}_{n}^{m}$ has $\Gamma_{S\times T}$-fixed
points as well.
For the second part, we make a simple observation: in the arrow
category of finite sets, the automorphism group of the canonical
projection $S\times T\to S$ is isomorphic to
$\Sigma_{m}\wr\Sigma_{n}$. The $\Sigma_{m}$ acts by permuting the
base, and then the $\Sigma_{n}^{m}$ acts as the automorphisms of the
fibers. By our construction of $F$, the resulting $G$-set is the one
in which the base is the $G$-set $S$, and where all of the fibers are
the $G$-set $T$.
\end{proof}
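As a simple sanity check, if $G=C_{2}$ and $S=T=C_{2}$ is the free
orbit, then $S\times T\cong C_{2}\amalg C_{2}$ as $C_{2}$-sets, and in
this case the conclusion of the lemma also follows from
Lemma~\ref{lem:DisjointUnion}.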
\begin{lemma}\label{lem:SelfInduction}
The symmetric monoidal coefficient system ${\underline{\cC}}({\mathcal O})$ is closed under
self-induction.
\end{lemma}
\begin{proof}
Without loss of generality, we may assume $H=G$, since in the proof
below we may simply replace all instances of $G$ with $H$. Now assume
that $G/K$ is in ${\mathcal C}_{G}({\mathcal O})$, and let $T$ be in ${\mathcal C}_{K}({\mathcal O})$. Let
$n$ be the cardinality of $T$, and let $m$ be the index of $K$ in $G$.
Associated to $T$ is a homomorphism $\pi\colon K\to \Sigma_{n}$, and
by assumption, ${\mathcal O}_{n}^{\Gamma_{T}}\simeq \ast$. Finally, let
$g_{1},\dots, g_{m}\in G$ be a complete set of coset representatives
for $G/K$, and let $\sigma\colon G\to\Sigma_{m}$ be the homomorphism
induced by the left action of $G$ on $G/K$. Again, by assumption,
${\mathcal O}_{m}^{\Gamma_{G/K}}\simeq \ast$.
To prove the result, we must explicitly describe the induced set
$G\times_{K}T$. The argument is standard. Since $\{g_{1},\dots,
g_{m}\}$ is a complete set of coset representatives for $G/K$, we have
a homomorphism
\[
\big(\sigma,(k_1,\dots,k_m)\big)\colon G\to \Sigma_{m}\wr K,
\]
where $\sigma$ and the functions $k_{i}$, for $1\leq i\leq m$, are
defined by
\[
g\cdot g_{i}=g_{\sigma(g)(i)}\,k_{i}(g).
\]
The homomorphism $G\to\Sigma_{nm}$ describing the induced set
$G\times_{K}T$ arises from this homomorphism via the map $\pi$:
\[
Ind(g)=\Big(\sigma(g),\big(\pi(k_{1}(g)), \dots,\pi(k_{m}(g))\big)\Big)\in \Sigma_{m} \wr \Sigma_{n}.
\]
We need to now analyze the fixed points of $\Gamma$, the graph of
$Ind$, on ${\mathcal O}_{m}\times\big({\mathcal O}_{n}\big)^{m}$. The group
$G\times\Sigma_{m}\wr\Sigma_{n}$ acts independently on ${\mathcal O}_{m}$ and
on ${\mathcal O}_{n}^{m}$. On ${\mathcal O}_{m}$, it acts via the canonical quotient to
$G\times\Sigma_{m}$, and on ${\mathcal O}_{n}^{m}$, $G$ acts diagonally while
$\Sigma_{m}\wr\Sigma_{n}$ has the obvious action. Thus
\[
\Big({\mathcal O}_{m}\times\big({\mathcal O}_{n}\big)^{m}\Big)^{\Gamma}={\mathcal O}_{m}^{\Gamma_{G/K}}\times\big({\mathcal O}_{n}^{m}\big)^{\Gamma}.
\]
It will suffice to show that these fixed points are non-empty. The
first factor is actually contractible, by assumption, so we need only
produce a fixed point for the second factor. Since the
$\Gamma_{T}$-fixed points of ${\mathcal O}_{n}$ are non-empty, we can find a
point $x\in{\mathcal O}_{n}$ such that
\[
(k,\pi k)\cdot x=x
\]
for all $k\in K$. We claim that
\[
y=\big((g_{1},1)\cdot x,\dots,(g_{m},1)\cdot x\big)
\]
is a $\Gamma$-fixed point. To streamline notation, let
$\sigma=\sigma(g)$ and $k_{i}=k_{i}(g)$, and let
\[
\gamma=\Big(g,\sigma,\big(\pi(k_{1}), \dots,\pi(k_{m})\big)\Big).
\]
Then we have a chain of equalities
\begin{align*}
\gamma\cdot y
&= (g,1)\cdot \Big(\big(g_{\sigma^{-1}(1)},\pi
k_{\sigma^{-1}(1)} \big)\cdot x,\dots, \big(g_{\sigma^{-1}(m)},\pi
k_{\sigma^{-1}(m)}\big)\cdot x\Big) \\ &=
\Big(\big(g\cdot g_{\sigma^{-1}(1)},\pi k_{\sigma^{-1}(1)}\big)\cdot x,\dots, \big(g\cdot g_{\sigma^{-1}(m)},\pi k_{\sigma^{-1}(m)}\big)\cdot x\Big) \\
&=
\Big((g_{1},1)\big(k_{\sigma^{-1}(1)},\pi k_{\sigma^{-1}(1)}\big)\cdot x,\dots,(g_{m},1)\big(k_{\sigma^{-1}(m)},\pi k_{\sigma^{-1}(m)}\big)\cdot x\Big) \\
&= \big((g_{1},1)\cdot x,\dots,(g_{m},1)\cdot x\big)=y.
\end{align*}
Thus we conclude that $\big({\mathcal O}_{m}\times({\mathcal O}_{n})^{m}\big)^{\Gamma}$
is non-empty, and therefore so is ${\mathcal O}_{nm}^{\Gamma}$.
\end{proof}
One way to package
Lemmata~\ref{lem:DisjointUnion}, \ref{lem:CartesianProduct},
and \ref{lem:SelfInduction} is via the $G$-symmetric monoidal
structure on the category of finite $G$-sets. Induction is actually a
special kind of disjoint union: we simply allow the group $G$ to act
on the indexing set (in this case $G/H$) for the disjoint
union. Working more generally, we see that we can easily make sense of
a disjoint union of $(-)$-sets $S_{t}$ indexed by a $G$-set $T$
provided
\begin{enumerate}
\item $S_{t}$ is a $Stab(t)$-set and
\item $S_{g\cdot t}$ is in bijective correspondence with $S_{t}$ and the action of $g$ intertwines the $Stab(t)$ and $gStab(t)g^{-1}$ actions.
\end{enumerate}
Our lemmas can then be repackaged in this language.
\begin{corollary}\label{cor:GDisjointUnion}
If $T\in{\mathcal C}_{G}({\mathcal O})$ and if for all $t\in T$, we have an admissible
$Stab(t)$-set $S_{t}$ satisfying the compatibility condition above,
then
\[
\coprod_{t\in T}S_{t}\in {\mathcal C}_{G}({\mathcal O}).
\]
\end{corollary}
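For example, taking $T=G/K$ recovers self-induction as an indexed
coproduct: for an admissible orbit $K/L\in{\mathcal C}_{K}({\mathcal O})$ we have
\[
G\times_{K}(K/L)\cong G/L,
\]
so $G/L\in{\mathcal C}_{G}({\mathcal O})$ whenever $G/K\in{\mathcal C}_{G}({\mathcal O})$ and
$K/L\in{\mathcal C}_{K}({\mathcal O})$.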
\begin{warning}
While it is true that ${\underline{\cC}}({\mathcal O})$ forms a coefficient system and is
closed under some indexed coproducts, it is not true that ${\underline{\cC}}({\mathcal O})$
is always closed under {\emph{arbitrary}} induction (which would make
it a kind of category-valued Mackey functor). The norm machinery
described in
Section~\ref{sec:NormsandInduction} can be used to produce operads
which close up ${\underline{\cC}}({\mathcal O})$ under certain inductions.
\end{warning}
Thus far we have used only the composition structure of the operad
(and hence, all of this would work in a non-unital context). For the
last piece of structure, we must make use of the operadic unit.
\begin{lemma}\label{lem:Summands}
The coefficient system ${\underline{\cC}}({\mathcal O})$ is a truncation subcoefficient
system of $\underline{\Set}$: if $Z=S\amalg T$ is an admissible $G$-set, then
both $S$ and $T$ are admissible.
\end{lemma}
\begin{proof}
We use the unit map to show this. The admissibility of $Z$ shows that
there is a map $f\colon G\to \Sigma_{|Z|}$ and
${\mathcal O}_{|Z|}^{\Gamma_{Z}}\simeq \ast$. The disjoint union decomposition
of $Z$ into $S\amalg T$ shows that we can choose this map to factor
through the inclusion
$\Sigma_{|S|}\times\Sigma_{|T|}\subset\Sigma_{|Z|}$ (in fact, the
subgroup $\Gamma_{Z}$ corresponding to $Z$ need not have this
property; however, a conjugate of $\Gamma_{Z}$ will). In this case,
the projection of $\Gamma_{Z}$ onto $G\times\Sigma_{|S|}$ realizes the
subgroup $\Gamma_{S}$ corresponding to $S$, and similarly for $T$.
We now use the composition and the identity to deduce the
result. Consider the composition:
\[
{\mathcal O}_{|Z|}\times{\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|}\to{\mathcal O}_{|S|}.
\]
This map is $(G\times\Sigma_{|S|}\times\Sigma_{|T|})$-equivariant,
where on the first factor, the action is via the obvious inclusion and
where the action on the target is via the quotient to
$G\times\Sigma_{|S|}$. Since the map defining the $G$-action on $Z$
factors through $\Sigma_{|S|}\times\Sigma_{|T|}$, the group
$\Gamma_{Z}$ is actually a subgroup of
$G\times\Sigma_{|S|}\times\Sigma_{|T|}$. The $\Gamma_{Z}$-action on
${\mathcal O}_{|S|}$ is via the quotient $\Gamma_{S}$, so
\[
{\mathcal O}_{|S|}^{\Gamma_{Z}}={\mathcal O}_{|S|}^{\Gamma_{S}}.
\]
Since the spaces in the operad are universal spaces for a family, it
will again suffice to show that
\[
\big({\mathcal O}_{|Z|}\times{\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|}\big)^{\Gamma_{Z}}={\mathcal O}_{|Z|}^{\Gamma_{Z}}\times({\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|})^{\Gamma_{Z}}\neq\emptyset.
\]
By assumption, the first factor is non-empty. For the second, the
diagonal map
\[
{\mathcal O}_{1}\times{\mathcal O}_{0}\to{\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|}
\]
is $\Sigma_{|S|}\times\Sigma_{|T|}$-equivariant, with the image being
the fixed points. The space ${\mathcal O}_{1}\times{\mathcal O}_{0}$ is
$G$-equivariantly contractible, so we know that in fact
\[
\emptyset\neq({\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|})^{G\times\Sigma_{|S|}\times\Sigma_{|T|}}\subset ({\mathcal O}_{1}^{|S|}\times{\mathcal O}_{0}^{|T|})^{\Gamma_{Z}}.\qedhere
\]
\end{proof}
\begin{corollary}
The coefficient system ${\underline{\cC}}({\mathcal O})$ is closed under finite limits.
\end{corollary}
\begin{proof}
Equalizers are subobjects in $\underline{\Set}$, and
Lemma~\ref{lem:CartesianProduct} shows that each category is also
closed under finite products.
\end{proof}
Putting together all of these lemmas, we deduce the following theorem.
\begin{theorem}\label{thm:FunctorC}
The functor ${\underline{\cC}}$ descends to a functor from the homotopy
category of $N_\infty$ operads to the poset $\cI$.
\end{theorem}
\subsection{Application: Linear isometries and little disks}\label{sec:LUDU}
We pause here to provide a surprising application: for all but three
finite groups $G$, there are universes $U$ such that the linear
isometries and little disks (or Steiner) operads associated to $U$ are
inequivalent. To show this, we need only apply our functor ${\underline{\cC}}$.
\begin{theorem}\label{thm:LUFamily}
For the equivariant linear isometries operad on $U$, the admissible
$H$-sets are those $T$ such that there is an $H$-equivariant embedding
\[
{\mathbb{Z}}[T]\otimes U\to U.
\]
\end{theorem}
\begin{proof}
In fact, the statement of the theorem is a restatement of the
definition of the linear isometries operad. If $T$ is an admissible
$H$-set, then by definition
\[
{\mathcal L}(U^{\oplus n},U)^{\Gamma_{T}}={\mathcal L}_{\Gamma_{T}}(U^{\oplus n},U)\neq \emptyset.
\]
The group $\Gamma_{T}$ acts on $U$ via the quotient $H$. The only
question is how it acts on
\[
U^{\oplus n}={\mathbb{Z}}\{1,\dots,n\}\otimes U.
\]
On the tensor factor $U$, the $\Gamma_{T}$-action is again via the
quotient $H$. On the other tensor factor, by the definition of $T$,
the $\Gamma_{T}$-action is the $H$-action on ${\mathbb{Z}}[T]$. This gives the
result.
\end{proof}
The truncation and disjoint union conditions on our indexing systems
show that admissibility is completely determined by the admissibility
of orbits $H/K$. The condition for admissibility for ${\mathcal L}(U)$ then is
that there is an $H$-equivariant embedding
\[
\Ind_{K}^{H}i_{K}^{\ast}U\to i_{H}^{\ast}U.
\]
This requirement is actually a ``cofamily'' condition in $H$: if $K$
is subconjugate to some $K'$ in $H$, then ${\mathbb{Z}}[H/K']\otimes U$
$H$-embeds into $U$ whenever ${\mathbb{Z}}[H/K]\otimes U$ does.
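The two extreme cases illustrate this condition. If $U$ is the
complete universe, then every irreducible representation appears in
$U$ with infinite multiplicity, so ${\mathbb{Z}}[T]\otimes U$ embeds in $U$
for every $T$ and every finite $H$-set is admissible. If $U$ is the
trivial universe, then an $H$-equivariant embedding
${\mathbb{Z}}[T]\otimes U\to U$ forces $H$ to act trivially on ${\mathbb{Z}}[T]$, so
only the trivial $H$-sets are admissible.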
\begin{theorem}\label{thm:DUFamily}
For the equivariant little disks operad on $U$, the admissible
$H$-sets are those $T$ such that there is an $H$-equivariant embedding
\[
T\to U.
\]
\end{theorem}
\begin{proof}
This is essentially due to Lewis. An embedding of $T$ into $U$ can be
fattened into a tiny equivariant neighborhood of $T$ embedded into
$U$. This is an embedding of $T\times D$ into $U$ which is
$H$-equivariant, and this is exactly what an element of the
$\Gamma_{T}$-fixed points of
\[
\mathcal{D}\left(\coprod_{1}^{n}D,D\right)=\mathcal{D}(\{1,\dots,n\}\times D,D)
\]
looks like. Just as in the linear isometries case, the existence of a
single embedding is sufficient to have a contractible space.
\end{proof}
\begin{corollary}
For any universe $U$, there is a map in the homotopy category of operads
\[
{\mathcal L}(U)\to \mathcal{D}(U).
\]
\end{corollary}
\begin{proof}
For any finite $H$-set $T$, $T$ always $H$-equivariantly embeds into
$\mathbb R\{T\}$, which sits inside ${\mathbb{Z}}[T]\otimes U$ as
${\mathbb{Z}}[T]\otimes{\mathbb{R}}$ for any trivial summand ${\mathbb{R}}\subset U$. Thus if
$T$ is admissible for ${\mathcal L}(U)$, then composing with an embedding
${\mathbb{Z}}[T]\otimes U\to U$ shows that $T$ embeds in $U$, so $T$ is
also admissible for $\mathcal{D}(U)$.
\end{proof}
Since the condition on the category ${\underline{\cC}}({\mathcal L}(U))$ described in
Theorem~\ref{thm:LUFamily} is much more stringent than the one for the
category ${\underline{\cC}}(\mathcal{D}(U))$ described in Theorem~\ref{thm:DUFamily}, there is, a priori, no reason that the two operads need be equivalent for a particular universe. We will show that in fact, they can be
different (and for most groups, hugely different, as explained in
Theorem~\ref{thm:VeryDifferent} below). We first show an important
example in which they coincide.
\begin{theorem}
If $N$ is a normal subgroup in $G$ and if $U_{N}$ is the universe
generated by ${\mathbb{R}}[G/N]$, then $ {\mathcal L}(U_{N})$ and $\mathcal{D}(U_{N})$ are
equivalent.
\end{theorem}
This universe is the $N$-fixed points of the complete universe, and
this statement should be viewed as an analogue of the symmetric
monoidal embedding of $G/N$-spectra in $G$-spectra.
\begin{proof}
We just have to show that the admissible sets are the same in both
cases, and these are the sets with stabilizer containing $N$. Since
$N$ is normal in $G$, there is no difference between restricting to
$H$ and restricting to $HN$, and in this case, $U_{N}$ restricts to
$U_{N}$ but with $G$ replaced by $HN$. It therefore suffices to look
at those $G$-sets which are admissible.
The admissible $G$-sets for $\mathcal{D}(U_{N})$ are those with stabilizer $H$
such that $G/H$ embeds in $U_{N}$. Since $G$ is finite,
\[
\textnormal{Emb}_{G}(G/H,U_{N})=U_{N}^{H}-\bigcup_{H<K} U_{N}^{K},
\]
where $H<K$ means $H$ is properly subconjugate to $K$. For all
subgroups $H$, the $H$-fixed points are equal to the $HN$-fixed
points, and so if $H$ does not contain $N$, there are no embeddings of
$G/H$ into $U_{N}$. On the other hand, if $H$ does contain $N$, then
the transfer splits the projection ${\mathbb{R}}[G/N]\to{\mathbb{R}}[G/H]$, so that
${\mathbb{R}}[G/H]$ is a summand of ${\mathbb{R}}[G/N]$ and hence embeds in
$U_{N}$. Since $G/H$ visibly embeds in ${\mathbb{R}}[G/H]$, it follows that
$G/H$ embeds in $U_{N}$. Thus the
admissible $G$-sets for $\mathcal{D}(U_{N})$ are those with stabilizers
containing $N$.
For ${\mathcal L}(U_{N})$, we need only determine those $H$ such that
${\mathbb{Z}}[G/H]\otimes U_{N}$ embeds in $U_{N}$. The universe $U_{N}$ has
the defining feature that $N$ acts trivially on $U_{N}$, while every
element of $G$ not in $N$ moves points. If $H$ contains $N$, then the
desired condition obviously holds. If $H$ does not contain $N$, then
$N$ does not act trivially on ${\mathbb{Z}}[G/H]$, hence not on
${\mathbb{Z}}[G/H]\otimes U_{N}$, and therefore there are no embeddings.
Thus in both cases, the admissible $G$-sets are precisely those whose
stabilizers contain $N$, and $\mathcal{D}_{n}(U_{N})$ and ${\mathcal L}_{n}(U_{N})$ are
equivalent.
\end{proof}
We can now prove the main result in this subsection: for all but
three groups, there are universes $U$ such that ${\mathcal L}(U)$ and $\mathcal{D}(U)$ are
inequivalent.
\begin{theorem}\label{thm:Failing}
If $G$ is a finite group of order bigger than $3$, then there is a
universe $U$ such that ${\mathcal L}(U)$ and ${\mathcal D}(U)$ are not equivalent.
\end{theorem}
The proof follows immediately from a small representation-theoretic
lemma.
\begin{lemma}\label{lem:FaithfulSubRep}
If $G$ is a finite group of order bigger than $3$, then there is a
representation $V$ such that
\begin{enumerate}
\item $G$ embeds into $V$, and
\item there is a non-trivial irreducible representation $W$ of $G$ such that $W$ is not a summand of $V$.
\end{enumerate}
\end{lemma}
In fact, $V$ can be chosen as a faithful representation.
\begin{proof}
First note that if $G$ is a simple group of order at least $5$, then
every non-trivial representation of $G$ is faithful. By the class
equation, there are more than $2$ non-trivial irreducible complex
representations, and hence at least $2$ non-trivial, irreducible real
representations. Any one such representation will satisfy the
conditions of the lemma.
Now assume that $N$ is a non-trivial, proper normal subgroup of
$G$. Let $\bar{\rho}_{N}$ denote the quotient of the real regular
representation of $N$ by the trivial summand. Then we claim that
$V=\Ind_{N}^{G}\bar{\rho}_{N}$ satisfies the conditions of the
lemma.
The reduced regular representation is faithful and induction
preserves this property. Since the representation is faithful, the collection of all vectors with non-trivial stabilizer is a finite union of proper subspaces of $V$ (the fixed subspaces $V^{g}$ for $g\neq e$), and a real vector space is never a finite union of proper subspaces, so this collection is a proper subset of $V$. Thus $G$ embeds into $V$ as the orbit of any vector with trivial stabilizer.
For the second condition, let $\lambda$
denote a non-trivial irreducible real representation of the quotient
group $G/N$, regarded as a representation of $G$ by inflation.
Frobenius reciprocity shows that there are no non-trivial maps
between the complexifications of $V$ and $\lambda$ (since the
restriction of $\lambda$ to $N$ is trivial, while $\bar{\rho}_{N}$
contains no trivial summand), and thus $\lambda$ is not a
summand of $V$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Failing}]
Let $V$ be a faithful representation of $G$ satisfying the conditions
of Lemma~\ref{lem:FaithfulSubRep}, and let $U=\infty(1+V)$. Then by
assumption, $G$ embeds into $U$, so $G/\{e\}$ is an admissible $G$-set
for ${\mathcal D}(U)$. However, ${\mathbb{Z}}[G]\otimes U$ contains the regular
representation and hence every irreducible representation of $G$,
while $U$ contains no copy of $W$. Thus ${\mathbb{Z}}[G]\otimes U$ does not
embed in $U$, and so $G/\{e\}$ is not an admissible $G$-set for
${\mathcal L}(U)$.
\end{proof}
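For a concrete instance, take $G=C_{4}$ and $N=C_{2}$. Then
$\bar{\rho}_{N}$ is the sign representation of $C_{2}$, and a
character computation identifies
$V=\Ind_{C_{2}}^{C_{4}}\bar{\rho}_{N}$ with the faithful
two-dimensional representation $\lambda$ on which a chosen generator
acts by rotation by $\pi/2$. The irreducible representation missing
from $V$ is the sign representation of $C_{4}$, inflated from
$C_{4}/C_{2}$, and so for $U=\infty(1+\lambda)$ the free orbit
$C_{4}/\{e\}$ is admissible for ${\mathcal D}(U)$ but not for ${\mathcal L}(U)$.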
If $G$ is non-trivial of order $2$ or $3$, then this fails: there are
only two irreducible real representations, the trivial one and the one
given by multiplication by the corresponding root of unity. Thus in
these cases there are only two universes, the trivial universe and the
complete universe.
With slightly more care, we can refine the above theorem.
\begin{theorem}\label{thm:VeryDifferent}
If $G$ is not simple, then there is a universe $U$ such that $\mathcal{D}(U)$
is not equivalent to ${\mathcal L}(W)$ for any universe $W$.
\end{theorem}
\begin{proof}
Let $N$ be a non-trivial, proper normal subgroup of $G$, and let
$V=\Ind_N^G\bar{\rho}_N$ as in the proof of
Lemma~\ref{lem:FaithfulSubRep}. Since this is a faithful
representation of $G$, we know that $G$ embeds in $V$. If $G$ is
admissible for ${\mathcal L}(W)$, then by Theorem~\ref{thm:LUFamily}, $W$ must be
the complete universe. In particular, ${\underline{\cC}}({\mathcal L}(W))=\underline{\Set}$.
We prove the theorem by showing that $G/N$ itself does not embed in
$U=\infty(1+V)$. Obviously, any such embedding lands in a finite
subrepresentation, so we show that $G/N$ does not embed into $k(1+V)$
for any $k$. A $G$-equivariant map
\[
G/N\to k(1+V)
\]
is the same as an $N$-fixed point of $k(1+V)$. However, since $N$ is a
normal subgroup,
\[
i_N^\ast V=[G:N]\bar{\rho}_N,
\]
which has no non-zero $N$-fixed vectors. Thus any such map lands
entirely in the trivial factor, and hence is constant on $G/N$; since
$G/N$ has more than one element, it is not an embedding.
\end{proof}
\begin{remark}
We do not know for which simple groups Theorem~\ref{thm:VeryDifferent}
holds. For cyclic groups of prime order, it fails: there are only two
indexing systems, the trivial one and the complete one, and both are
realized by little disks and by linear isometries operads. For $A_{2n+1}$,
the restriction of the quotient of the defining representation for
$\Sigma_{2n+1}$ by the trivial summand generates a universe in which
$A_{2n+1}/D_{4n+2}$ does not embed, showing that for $A_{2n+1}$,
Theorem~\ref{thm:VeryDifferent} holds.
\end{remark}
\section{The homotopy category of $N_\infty$ operads}\label{sec:hocat}
In this section, we show that the functor ${\underline{\cC}}$ is a fully-faithful
embedding and explain why we believe that it is in fact an equivalence.
\subsection{Faithfulness}
We begin by recording some easy results about the relationships
between coefficient systems that correspond to natural constructions
on operads.
\begin{proposition}
If ${\mathcal O}$ and ${\mathcal O}'$ are $N_\infty$ operads, then ${\mathcal O}\times{\mathcal O}'$ is an
$N_\infty$ operad, and
\[
{\underline{\cC}}({\mathcal O}\times{\mathcal O}')={\underline{\cC}}({\mathcal O})\cap{\underline{\cC}}({\mathcal O}').
\]
\end{proposition}
\begin{proof}
The only part that requires any proof is the second part; the operadic
properties are straightforward. The second part is actually a standard
observation in equivariant homotopy theory: if $E{\mathcal F}$ and $E{\mathcal F}'$ are
universal spaces for families ${\mathcal F}$ and ${\mathcal F}'$ respectively, then
$E{\mathcal F}\times E{\mathcal F}'$ is a universal space for ${\mathcal F}\cap{\mathcal F}'$. This
follows immediately from consideration of the fixed points. The
translation to the categorical version is then as above.
\end{proof}
\begin{corollary}\label{cor:OperadComparison}
If ${\underline{\cC}}({\mathcal O})\subset{\underline{\cC}}({\mathcal O}')$, then the natural projection
\[
{\mathcal O}\times{\mathcal O}'\to{\mathcal O}
\]
is a weak equivalence.
\end{corollary}
\begin{proof}
For all $n$, both $({\mathcal O}\times{\mathcal O}')_{n}$ and ${\mathcal O}_{n}$ are universal
spaces for the same family of subgroups.
\end{proof}
\begin{corollary}
If ${\underline{\cC}}({\mathcal O})={\underline{\cC}}({\mathcal O}')$, then in the homotopy category, ${\mathcal O}$ and
${\mathcal O}'$ are isomorphic.
\end{corollary}
\begin{proof}
Apply Corollary~\ref{cor:OperadComparison} twice to the zig-zag ${\mathcal O}
\leftarrow {\mathcal O} \times {\mathcal O}' \to {\mathcal O}'$.
\end{proof}
\begin{corollary}
If ${\underline{\cC}}({\mathcal O})\subset{\underline{\cC}}({\mathcal O}')$, then in the homotopy category, we have
a map
\[
{\mathcal O}\to{\mathcal O}'.
\]
\end{corollary}
In order to go further, we calculate the derived space of maps between
two operads ${\mathcal O}$ and ${\mathcal O}'$.
\begin{proposition}\label{prop:mapcontract}
The derived mapping space from any $G$-operad ${\catsymbfont{O}}$ to an $N_\infty$
operad ${\catsymbfont{O}}'$ is either empty or contractible.
\end{proposition}
\begin{proof}
We perform the calculation in the category of $G$-operads in
simplicial sets. Since $G$ is discrete, there is a model structure on
$G \times \Sigma_n$-simplicial sets where the weak equivalences and
fibrations are detected on passage to fixed point spaces (and the
cofibrations are the monomorphisms)~\cite[3.1.9]{Rezk}. Let
$\mathrm{SymSeq}_{\textrm{GSet}^{\Delta^{\op}}}$ denote the category of symmetric sequences of
$G$-simplicial sets. Since this is equivalent to the product (over
$n \geq 0$) of the categories of $G \times \Sigma_n$-simplicial sets,
there is a levelwise model structure on $\mathrm{SymSeq}_{\textrm{GSet}^{\Delta^{\op}}}$ in which the
weak equivalences and fibrations are detected pointwise. The
forgetful functor from the category of $G$-operads in simplicial sets
to $\mathrm{SymSeq}_{\textrm{GSet}^{\Delta^{\op}}}$ has a left adjoint free functor, and the transfer
argument of~\cite[3.2.10]{Rezk} applies to lift the model structure on
$\mathrm{SymSeq}_{\textrm{GSet}^{\Delta^{\op}}}$ to one on $G$-operads in simplicial sets. Note that
these model structures are simplicial and cofibrantly-generated.
Let $G\text{-}{\mbox{Op}}({\catsymbfont{T}})$ denote the category of $G$-operads in topological
spaces and let $G\text{-}{\mbox{Op}}(\textrm{Set}^{\Delta^{\op}})$ denote the category of $G$-operads in
simplicial sets. The geometric realization and singular complex
functors preserve products and so induce an adjoint pair
\[
\Sing \colon G\text{-}{\mbox{Op}}({\catsymbfont{T}}) \rightleftarrows G\text{-}{\mbox{Op}}(\textrm{Set}^{\Delta^{\op}}) \colon |-|.
\]
Furthermore, since both of these functors preserve weak
equivalences~\cite[3.1.10]{Rezk}, we can compute the derived mapping
space in either category. More precisely, the fact that $|-|$ and
$\Sing$ preserve equivalences and are such that the unit and co-unit
of the adjunction are natural weak equivalences implies that there is
a weak equivalence
\[
L^H G\text{-}{\mbox{Op}}({\catsymbfont{T}}) ({\catsymbfont{O}}, {\catsymbfont{O}}') \simeq L^H G\text{-}{\mbox{Op}}(\textrm{Set}^{\Delta^{\op}})
(\Sing {\catsymbfont{O}}, \Sing {\catsymbfont{O}}'),
\]
where $L^H$ denotes the Dwyer-Kan simplicial mapping space.
The latter can be computed as the internal mapping space in the model
category of operads in $G$-simplicial sets after replacing the source
with a cofibrant object and the target with a fibrant object. In this
model structure a cofibrant replacement of a $G$-operad can be
computed as a retract of a cell operad. Moreover, the fibrant objects
are precisely the levelwise fibrant objects and so in particular
$\Sing {\catsymbfont{O}}'$ is fibrant.
Thus, we can compute the mapping space by resolving the $G$-operad
$\Sing {\catsymbfont{O}}$ as a cell object. That is, $\Sing {\catsymbfont{O}} = \colim_n X_n$,
where each stage $X_n$ can be described as the (homotopy) pushout
\[
\xymatrix{
\mathbb{F} A_n \ar[r] \ar[d] & X_{n-1} \ar[d] \\
\mathbb{F} B_n \ar[r] & X_{n}.
}
\]
Here $\mathbb{F}$ is the free functor from $\mathrm{SymSeq}_{\textrm{GSet}^{\Delta^{\op}}}$ to
$G\text{-}{\mbox{Op}}(\textrm{Set}^{\Delta^{\op}})$. Therefore, there is an equivalence
\[
\Map(\Sing {\catsymbfont{O}},-) \simeq \holim_n \Map(X_n, -).
\]
It now suffices to show that each $\Map(X_n, \Sing {\catsymbfont{O}}')$ is either
empty or contractible. Inductively, we can use the pushout description
of $X_n$ above to reduce to the case of free $G$-operads. Finally,
observe that the space of maps from a free operad into any $N_\infty$
operad is either empty or contractible: by adjunction, such maps are
computed on the level of symmetric sequences, and any $N_\infty$
operad is made up of universal spaces.
\end{proof}
\begin{corollary}
The functor ${\underline{\cC}}$ is a faithful embedding of $\Ho(\cN_{\infty}\text{-}\mbox{Op})$ into
$\cI$.
\end{corollary}
\subsection{Towards fullness}\label{sec:fullness}
We now explain why we believe that in fact ${\underline{\cC}}$ is an equivalence of
categories. We will use the categorical Barratt-Eccles operad of
Guillou-May~\cite[2.3]{guilloumay}. To produce operads in spaces, we
simply take the geometric realization of the nerve.
\begin{definition}
The categorical Barratt-Eccles operad is defined by
\[
\mathbb O_{n}=i^*Map(G,\Sigma_{n}),
\]
where $i^*\colon \mathcal Set\to\Cat$ is the right adjoint to the ``object''
functor.
The operadic structure maps are simply induced by the embeddings of
products of symmetric groups into bigger ones.
\end{definition}
The functor $i^*$ assigns to each set the category whose objects are
the elements of the set and in which there is a unique morphism in
each direction between any pair of objects.
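In particular, for any non-empty set $S$, the nerve of $i^*S$ is
contractible, and for $S=\Sigma_{n}$ the geometric realization
$|N(i^*\Sigma_{n})|$ is the standard model for $E\Sigma_{n}$.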
\begin{remark}
The operad $\mathbb O$ is the norm from trivial categories to
$G$-categories of the Barratt-Eccles operad $\underline{\Sigma}$,
defined by
\[
\underline{\Sigma}_{n}=i^*\Sigma_{n}.
\]
From this perspective, it is immediate that $\mathbb O_{n}$ has fixed
points for all subgroups $H$ of $G\times\Sigma_{n}$ for which
$H\cap\Sigma_{n}=\{e\}$.
\end{remark}
Associated to an element of $\mathcal Coef(\mathcal Set)$ is a collection of families
${\mathcal F}_{n}$ of subgroups of $G\times\Sigma_{n}$: $T$ is an $H$-set in
our coefficient system if and only if $\Gamma_{T}$ is in
${\mathcal F}_{|T|}$. Using this, we can build a sub-symmetric sequence of
$\mathbb O$ in categories.
\begin{definition}
If ${\mathcal F}_{\ast}$ is a sequence of families of subgroups of
$G\times\Sigma_{\ast}$, then let
\[
\mathbb O^{{\mathcal F}}_{n}=i^*\{f\in Map(G,\Sigma_{n}) \,|\, Stab(f)\in{\mathcal F}_{n}\}.
\]
\end{definition}
Since the family is closed under conjugation, for each $n$, $\mathbb
O^{{\mathcal F}}_{n}$ is a $(G\times\Sigma_{n})$-subcategory of $\mathbb O_{n}$. By
construction, the geometric realization of $\mathbb O^{{\mathcal F}}$ is an
$N_\infty$ symmetric sequence, and similarly, we immediately have the
following.
\begin{proposition}
Let ${\mathcal F}_{\ast}$ be the sequence of families of subgroups associated
to an $N_\infty$ symmetric sequence ${\mathcal O}$. Then we have
\[
{\underline{\cC}}(|\mathbb O^{{\mathcal F}}|)={\underline{\cC}}({\mathcal O}).
\]
\end{proposition}
We make the following conjecture, which would establish an equivalence
of categories between $\Ho(\cN_{\infty}\text{-}\mbox{Op})$ and $\cI$.
\begin{conjecture}\label{thm:Surjective}
If ${\mathcal C}$ is an indexing system and if ${\mathcal F}$ is the associated sequence
of families of subgroups, then $\mathbb O^{{\mathcal F}}$ is a sub-operad of
$\mathbb O$.
\end{conjecture}
An interesting question (about which we do not have a conjectural
answer) is whether or not all homotopy types in $\cN_{\infty}\text{-}\mbox{Op}$ are realized by
the operads that ``arise in nature'', i.e., the equivariant Steiner
and linear isometries operads.
\section{The structure of $N_\infty$-algebras}\label{sec:Algebras}
Although we can consider algebras over an $N_\infty$ operad ${\catsymbfont{O}}$ in
any symmetric monoidal category enriched over $G$-spaces, we are most
interested in the examples of orthogonal $G$-spectra with the smash
product and $G$-spaces with the Cartesian product. In both of these
examples, the notion of weak equivalence of operads given in
Definition~\ref{defn:weak} is validated by the fact that a weak
equivalence of $N_\infty$ operads induces a Quillen equivalence of the
associated categories of algebras. (See
Appendix~\ref{sec:monadicalgebras} for details.) Therefore, the
associated data of the coefficient system captures all of the relevant
structure. We now turn to describing this structure in geometric
terms.
Specifically, the name $N_\infty$ refers to the additional structure
encoded by an $N_\infty$ operad: norms, or more precisely indexed
products. In spectra with the smash product, these arise as the
Hill-Hopkins-Ravenel norm, and the operadic structure encodes the
analogue of the counit of the adjunction between the norms and the
forgetful functors for commutative ring spectra. In spaces with the
Cartesian product, these arise as coinduction, and the operadic
structure maps encode the transfer in algebras over the Steiner
operads.
In the following definition, we use the technical device of exploiting
the equivalence of categories between orthogonal $G$-spectra on the
complete universe and orthogonal $G$-spectra on a trivial
universe~\cite[\S VI.1]{MM}, as pioneered in the Hill-Hopkins-Ravenel
construction of the norm. Specifically, given an orthogonal
$G$-spectrum $X$ on a complete universe, we forget to the trivial
universe, perform the construction indicated in the formula, and then
left Kan extend back to the complete universe.
\begin{definition}
Let $T$ be a $G$-set.
\begin{enumerate}
\item If $E$ is an orthogonal $G$-spectrum, then let
\[
N^{T}E=\left((G\times\Sigma_{|T|})/\Gamma_{T}\right)_{+}\wedge_{\Sigma_{|T|}}
E^{\wedge |T|}.
\]
\item If $X$ is a $G$-space, then let
\[
N^{T}X=\left((G\times\Sigma_{|T|})/\Gamma_{T}\right)\times_{\Sigma_{|T|}}
X^{\times |T|}.
\]
\end{enumerate}
\end{definition}
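As a consistency check, if $T$ is the trivial $G$-set of cardinality
$n$, then $\Gamma_{T}=G\times\{e\}$ and
$(G\times\Sigma_{n})/\Gamma_{T}$ is a free $\Sigma_{n}$-set with
trivial $G$-action, so that
\[
N^{T}E\cong E^{\wedge n}
\quad\text{ and }\quad
N^{T}X\cong X^{\times n}.
\]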
As stated, there is a potential conflict of notation --- $N^{T}E$
could refer to the preceding definition or to the Hill-Hopkins-Ravenel
norm. This ambiguity is resolved by the following proposition, which
uses the fact that $G$-spaces and orthogonal $G$-spectra are tensored
over $G$-spaces. If $X$ and $Y$ are $G$-spaces, we write $F(X,Y)$ to
denote the space of all continuous maps from $X$ to $Y$, given the
conjugation $G$-action.
\begin{proposition}
Let $T$ be an $H$-set.
\begin{enumerate}
\item Let $E$ be an orthogonal $G$-spectrum. Then a decomposition
$T=\coprod_{i} H/K_{i}$ gives a homeomorphism
\[
\left((G\times\Sigma_{|T|})/\Gamma_{T}\right)_{+}\wedge_{\Sigma_{|T|}} E^{\wedge |T|}\cong G_{+}\wedge_{H}\bigwedge_{i} N_{K_{i}}^{H}i_{K_{i}}^{\ast} E,
\]
where $N_{K_{i}}^{H}$ is the Hill-Hopkins-Ravenel norm.
\item Let $X$ be a $G$-space. Then we have a homeomorphism
\[
\left((G\times\Sigma_{|T|})/\Gamma_{T}\right)\times_{\Sigma_{|T|}} X^{\times |T|}\cong G\times_{H}F(T,X).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
The first statement is essentially the definition of the norm. The second follows
immediately from the Cartesian product endowing $G$-spaces with a
symmetric monoidal structure.
\end{proof}
\begin{proposition}
The assignments
\[
(T,E)\mapsto N^{T}(E)
\quad\text{ and }\quad
(T,X)\mapsto N^{T}(X)
\]
specify strong symmetric monoidal functors in both variables, and
moreover we have natural isomorphisms
\[
N^{S\times T}(E)\cong N^{S}N^{T}(E)
\quad\text{ and }\quad
N^{S\times T}(X)\cong N^{S}N^{T}(X).
\]
\end{proposition}
\begin{proof}
The first part is immediate from the definition. For the second,
unpacking the proof of Lemma~\ref{lem:CartesianProduct} makes the
above isomorphisms clear: the identification of the subgroup of
$\Sigma_{|S\times T|}$ associated to $\Gamma_{S\times T}$ shows that
the two sides agree.
\end{proof}
\subsection{The structure of ${\mathcal O}$-algebras}
We focus on the general structure of ${\mathcal O}$-algebras in $G$-spaces and
orthogonal $G$-spectra. For brevity of exposition, we will describe
all of our maps and structure for orthogonal $G$-spectra herein, using
the smash product. Everything we say holds {\emph{mutatis mutandis}}
for $G$-spaces using the Cartesian product.
We start with the most basic structure: an algebra over an $N_\infty$
operad looks like an ordinary, classical algebra over a
non-equivariant $E_{\infty}$ operad.
\begin{proposition}\label{prop:Naive}
If $R$ is an ${\mathcal O}$-algebra in spectra, then $R$ is a naive
$E_{\infty}$ ring spectrum in the sense that $R$ has a multiplication
that is unital, associative, and commutative up to all higher
homotopy.
\end{proposition}
\begin{proof}
Choose an ordinary, non-equivariant $E_{\infty}$ operad ${\mathcal E}$ and
endow it with a trivial $G$-action. Since ${\underline{\cC}}({\mathcal E})$ is the initial
object in $\cI$, we know that we have a map from (an operad
equivalent to) ${\mathcal E}$ to ${\mathcal O}$. Thus any ${\mathcal O}$-algebra is by
restriction an ${\mathcal E}$-algebra.
\end{proof}
The other admissible sets appear as extra structure.
\begin{construction}\label{cons:OperadMaps}
Let $E$ be an orthogonal $G$-spectrum and let $T$ be an admissible
$H$-set for ${\mathcal O}$ with associated subgroup $\Gamma_{T}$. By the
definition of admissibility, we are given a contractible space of
$(G\times\Sigma_{|T|})$-equivariant maps
\[
(G\times\Sigma_{|T|})/\Gamma_{T}\to{\mathcal O}_{|T|},
\]
and smashing over $\Sigma_{|T|}$ with $E^{\wedge |T|}$ yields a
contractible space of maps
\[
G_{+}\wedge_{H}N^{T}i_{H}^{\ast}E\to {\mathcal O}_{|T|+}\wedge_{\Sigma_{|T|}}E^{\wedge
|T|}.
\]
\end{construction}
This contractible space of maps gives us extra structure for an
${\mathcal O}$-algebra.
\begin{lemma}\label{lem:Norms}
If $R$ is an ${\mathcal O}$-algebra and $T$ is an admissible $H$-set, then
there is a contractible space of maps
\[
G_{+}\wedge_{H}N^{T}i_{H}^{\ast}R\to R
\]
built from the maps of Construction~\ref{cons:OperadMaps}.
\end{lemma}
\begin{proof}
The maps in question are the composite
\[
G_{+}\wedge_{H}N^{T}i_{H}^{\ast}R\to {\mathcal O}_{|T|+}\wedge_{\Sigma_{|T|}}R^{\wedge
|T|}\to R,
\]
where the first map is any of the maps in
Construction~\ref{cons:OperadMaps} arising from the contractible space
\[
F_{G\times\Sigma_{|T|}}(G\times\Sigma_{|T|}/\Gamma_{T},{\mathcal O}_{|T|})={\mathcal O}_{|T|}^{\Gamma_{T}}.\qedhere
\]
\end{proof}
\begin{remark}
By convention, we assume that the empty set is always admissible. In
this case, we can again construct a contractible space of maps
\[
G_{+}\wedge_{H} N^{\emptyset}i_{H}^{\ast}R\to R,
\]
since by assumption, $N^{\emptyset} i_{H}^{\ast} R$ is the symmetric
monoidal unit.
\end{remark}
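For instance, in spectra $N^{\emptyset}i_{H}^{\ast}R$ is the sphere
spectrum, so that $G_{+}\wedge_{H}N^{\emptyset}i_{H}^{\ast}R\simeq
\Sigma^{\infty}(G/H_{+})$ and these maps are unit maps along the
orbits $G/H$.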
We can strengthen these results. Recall that the category of algebras
over an $E_{\infty}$ operad is homotopically tensored over finite sets
in the sense that given an algebra $R$ and a map $T\to S$ of finite
sets, we have a contractible space of maps $R^{|T|}\to R^{|S|}$
encoding the multiplication. An analogous result holds in this
context, where here the algebras over an $N_\infty$ operad ${\mathcal O}$ are
homotopically tensored over ${\mathcal C}_{G}({\mathcal O})$.
\begin{theorem}\label{thm:NormsasTransfers}
If $T$ and $S$ are admissible $G$-sets and $f\colon T\to S$ is a
$G$-map, then for any ${\mathcal O}$-algebra $R$, we can construct a
contractible space of maps $N^{T}R\to N^{S}R$ encoding the
multiplication.
\end{theorem}
\begin{proof}
For $S$ a trivial $G$-set, this is the content of
Lemma~\ref{lem:Norms}. For the general case, we observe that a general
map between $G$-sets can be written as a disjoint union of surjective
maps onto orbits inside $S$. Disjoint unions correspond to external
smash products, and hence, it suffices to consider $S$ a single orbit
and $T\to S$ surjective. This, however, can be rewritten as
\[
T\to \coprod_{|T/G|}S\to S,
\]
where the first map is the disjoint union of the surjection restricted
to each orbit of $T$ and the second is just the fold map. It will
therefore suffice to show two things:
\begin{enumerate}
\item that associated to the fold map we can construct a contractible
space of maps, and
\item that associated to a surjective map $G/H\to G/K$, we can construct a
contractible space of maps.
\end{enumerate}
The fold map in turn is just $S$ times the fold map sending $|T/G|$
points with trivial $G$-action to a single point. We have a
contractible space of maps
\[
R^{\wedge |T/G|}\to R
\]
by Lemma~\ref{lem:Norms} again, applied to the trivial $G$-set. Taking
the norm $N^{S}(-)$ of these produces the required contractible space
of maps for the fold.
Now consider $T=G/H$ and $S=G/K$. By possibly composing with an
automorphism of $T$, we may assume that $H$ is a subgroup of $K$ and
that the map is the canonical quotient. In this case, the map is
\[
G\times_{K}(K/H\to K/K).
\]
Since $K/H$ is a summand of $i_{K}^{\ast}(G/H)$, we know that $K/H$ is
an admissible $K$-set. Lemma~\ref{lem:Norms} gives us a contractible
space of maps
\[
N^{K/H}(i_{K}^{*}R)\to i_{K}^{*}R.
\]
Applying the functor $N_{K}^{G}$ produces a contractible space of maps
\[
N^{G/H}(R)\to N^{G/K}(R),
\]
as required.
\end{proof}
\begin{remark}\label{rem:OperadsonGSets}
One way of interpreting Theorem~\ref{thm:NormsasTransfers} is that
equivariant operads should really be indexed on finite $G$-sets, not
just (a skeleton of) finite sets. Such a definition is very natural
using the perspective on $\infty$-operads developed in Lurie~\cite{HA}
--- instead of working with fibrations over Segal's category $\Gamma$,
equivariant $\infty$-operads should be defined as fibrations over the
equivariant analogue $\Gamma_G$. We intend to return to explore this
perspective in future work.
\end{remark}
\begin{corollary}\label{cor:Functorial}
If $S$, $S'$, and $S''$ are finite admissible $G$-sets, and
\[
S\xrightarrow{f} S'\xrightarrow{f'} S''
\]
are maps of $G$-sets and if $R$ is an ${\mathcal O}$-algebra, then for any
choice of maps coming from Theorem~\ref{thm:NormsasTransfers}, the
following diagram commutes up to homotopy
\[
\xymatrix{{N^{S}} R\ar[r]^{f_{\sharp}} \ar[rd]_{(f'\circ f)_{\sharp}} & {N^{S'}R} \ar[d]^{f'_{\sharp}} \\
& {N^{S''}R.}}
\]
\end{corollary}
\begin{theorem}\label{thm:ExistenceofNorms}
An ${\mathcal O}$-algebra $R$ is an orthogonal $G$-spectrum with maps
\[
G_{+}\wedge_{H}N^{T}i_{H}^{\ast}R\to R
\]
for all admissible $H$-sets $T$ such that the following conditions
hold.
\begin{enumerate}
\item For all admissible $G$-sets $S$ and $T$, the following diagram
homotopy commutes
\[
\xymatrix{
{N^{S\amalg T}R\simeq N^{S}R\wedge N^{T}R}\ar[r]\ar[rd] & {R\wedge
R}\ar[d] \\ {} & {R.}}
\]
\item For all admissible $G$-sets $S$ and $T$, the following diagram
homotopy commutes
\[
\xymatrix{
{N^{S\times T}R\simeq N^{S}N^{T}R}\ar[r]\ar[rd] & {N^{S}R}\ar[d] \\ {}
& {R.}}
\]
\item For all admissible sets $S$ and $T$ such that for some $K\subset
G$, $i_{K}^{\ast}(S)\cong i_{K}^{\ast}(T)$, the following diagram
homotopy commutes
\[
\xymatrix{
{i_{K}^{\ast}N^{S}R\simeq N^{i_{K}^{\ast}S}i_{K}^{\ast}
R}\ar@{<->}[rr] \ar[dr] & & {N^{i_{K}^{\ast}T}i_{K}^{\ast}R\simeq
i_{K}^{\ast}N^{T}R}\ar[dl] \\ {} & {i_{K}^{\ast}R.} & {}}
\]
\end{enumerate}
In fact, all of these diagrams commute up to coherent homotopy; this
coherence data is precisely the information encoded by the operad.
\end{theorem}
The first two conditions express compatibility with the multiplication
and with the other norms. The third part shows that the structure is
well-behaved upon passage to fixed points. We spell out a short,
illuminating example.
\begin{example}
Let $G=C_{2}$ (although any finite group will work here), and let
${\mathcal O}$ denote an $N_\infty$ operad weakly equivalent to the Steiner
operad on the complete universe. By assumption, ${\mathcal O}_{2}$ is the
universal space $E_{C_{2}}\Sigma_{2}$ for $\Sigma_{2}$-bundles in
$C_{2}$-spaces. If we let $\rho_{2}$ denote the regular representation
of $C_{2}$ and $\tau$ denote the sign representation of $\Sigma_{2}$,
then a cofibrant model for ${\mathcal O}_{2}$ is given by
\[
S\big(\infty (\rho_{2}\otimes\tau)\big)=\colim_{n}
S(n\rho_{2}\otimes\tau).
\]
Inside of this is of course $S(\rho_{2}\otimes\tau)$. This has a cell
structure given by
\[
\big((C_{2}\times\Sigma_{2})/C_{2}\amalg (C_{2}\times\Sigma_{2})/\Delta\big)\cup_{f} (C_{2}\times\Sigma_{2})\times e^{1},
\]
where $\Delta$ is the diagonal copy of $C_{2}=\Sigma_{2}$, and $f$ is
the canonical quotient
\[
f\colon (C_{2}\times\Sigma_{2})\times
S^{0}=(C_{2}\times\Sigma_{2})\amalg (C_{2}\times\Sigma_{2})\to
(C_{2}\times\Sigma_{2})/C_{2}\amalg (C_{2}\times\Sigma_{2})/\Delta.
\]
Thus if we have an ${\mathcal O}$-algebra $R$, then the zero cells together
give a map
\[
R^{\wedge 2}\vee N^{C_{2}}i_{e}^{\ast} R\to R,
\]
while the attaching map for the one-cell identifies the restriction of
the map on the first factor with the restriction of the map on the
second factor.
\end{example}
\subsection{Norms, coinductions, and cotensors of $N_\infty$ operads}\label{sec:NormsandInduction}
We now describe the behavior of $N_\infty$ operads and
characterizations of their collections of admissible sets under
various standard functors. Our basic tool is the following standard
result:
\begin{proposition}\label{prop:lax}
Let $F \colon {\catsymbfont{C}} \to {\catsymbfont{D}}$ be a lax symmetric monoidal functor between
symmetric monoidal categories ${\catsymbfont{C}}$ and ${\catsymbfont{D}}$. Given an operad ${\catsymbfont{O}}$
in ${\catsymbfont{C}}$, $F {\catsymbfont{O}}$ is an operad in ${\catsymbfont{D}}$, and $F$ induces a
functor
\[
{\catsymbfont{C}}[{\catsymbfont{O}}] \to {\catsymbfont{D}}[F{\catsymbfont{O}}]
\]
connecting the categories of ${\catsymbfont{O}}$-algebras and $F{\catsymbfont{O}}$-algebras.
\end{proposition}
\begin{proof}
The fact that $F {\catsymbfont{O}}$ forms an operad in ${\catsymbfont{D}}$ is a standard
consequence of regarding operads as monoids in symmetric sequences;
e.g., see~\cite[3.3]{westerland} for a more detailed discussion. To
see that $F$ induces a functor on algebras, it suffices to exhibit a
natural map
\[
(F {\catsymbfont{O}}) FX \to F({\catsymbfont{O}} X)
\]
in ${\catsymbfont{D}}$, where $(F {\catsymbfont{O}}) X$ denotes the free $F {\catsymbfont{O}}$-algebra on
$X$. Writing this out, we want a natural map
\[
\coprod^{\infty}_{n = 0} F{\catsymbfont{O}}(n) \otimes_{\Sigma_n} (FX)^{n} \to
F\left(\coprod^{\infty}_{n=0} {\catsymbfont{O}}(n) \otimes_{\Sigma_n} X^n\right).
\]
The lax symmetric monoidal structure of $F$ induces a composite
\[
F{\catsymbfont{O}}(n) \otimes (FX)^{n} \to F{\catsymbfont{O}}(n) \otimes F(X^n) \to
F({\catsymbfont{O}}(n) \otimes X^n),
\]
and now we map this into the orbits and then the coproduct. By the
universal property of the coproduct, as $n$ varies these maps assemble
into the desired map.
\end{proof}
\subsubsection{Coinduction and $N_\infty$ operads}
Just as the restriction of an $N_\infty$ operad is again an $N_\infty$
operad, coinduction takes $N_\infty$ operads to $N_\infty$ operads.
\begin{definition}
If ${\mathcal O}$ is an $H$-$N_\infty$ operad, then let $N_{H}^{G}{\mathcal O}=F_{H}(G,{\mathcal O})$ be the
$N_\infty$ operad defined by
\[
F_{H}(G,{\mathcal O})_{n}=F_{H}(G,{\mathcal O}_{n})\cong
F_{H\times\Sigma_{n}}(G\times\Sigma_{n},{\mathcal O}_{n}).
\]
\end{definition}
These spaces assemble into an operad using the diagonal map on $G$ to
see that coinduction is lax symmetric monoidal. The last equality
shows that this is actually a universal space for a family of
subgroups of $G\times\Sigma_{n}$. Identifying the family is fairly
straightforward and lets us identify the admissible sets.
\begin{proposition}
For any finite group $G$, if ${\mathcal F}$ is a family of subgroups of
$H\subset G$, then $F_{H}(G,E{\mathcal F})$ is a universal space for the family
of subgroups of $G$ corresponding to the sieve
${i_{H}^{\ast}}^{-1} \mathcal Set_{{\mathcal F}}$.
\end{proposition}
\begin{proof}
By the adjunction, for any finite $G$-set $T$ (in fact, for any
$G$-space), we have a homeomorphism
\[
F^{G}(T,F_{H}(G,E{\mathcal F}))\cong F^{H}(i_{H}^{\ast} T,E{\mathcal F}).
\]
This space is either contractible or empty according to whether
$i_{H}^{\ast} T$ is in $\mathcal Set_{{\mathcal F}}$ or not, respectively.
\end{proof}
Specializing to the families which arise from an $N_\infty$ operad, we
conclude the following.
\begin{proposition}
Let ${\mathcal O}$ be an $H$-$N_\infty$ operad. For any $K\subset G$, a $K$-set
$T$ is admissible if and only if for all $g\in G$,
\[
i_{H\cap gKg^{-1}}^{\ast}\, g\cdot T\in {\mathcal C}({\mathcal O})(H\cap gKg^{-1}).
\]
\end{proposition}
\begin{proof}
Let $n$ be the cardinality of a finite $K$-set $T$. Consider
$G\times\Sigma_{n}/\Gamma_{T}$. By the previous proposition, we need
only check that the restriction of this to $H\times\Sigma_{n}$ is in
the family associated to ${\mathcal O}_{|T|}$. By the double coset formula,
this restriction is a disjoint union of $H\times\Sigma_{n}$-sets of
the form
\[
H\times\Sigma_{n}/\big(H\times\Sigma_{n}\cap (g,\sigma)\Gamma_{T}
(g,\sigma)^{-1}\big).
\]
The conjugates of $\Gamma_{T}$ are all again graphs of homomorphisms. In
this case, the conjugate of $\Gamma_{T}$ is the graph of the function
describing the $gKg^{-1}$-set $g\cdot T$ (with $\sigma$ here just
providing an isomorphism of this $gKg^{-1}$-set with
another). Intersecting this with $H\times\Sigma_{n}$ is again the
graph of a homomorphism, this one with domain $H\cap gKg^{-1}$. The
result follows.
\end{proof}
\begin{corollary}
An $H$-set $T$ is admissible for $F_{H}(G,{\mathcal O})$ if and only if it is
admissible for ${\mathcal O}$.
\end{corollary}
\begin{corollary}
If $N$ is a normal subgroup of $G$, then the condition is simply that
a $K$-set is admissible if and only if its restriction to $N\cap K$ is
admissible.
\end{corollary}
\begin{corollary}
If $N$ is a normal subgroup of $G$, then ${\mathcal C}(N_{N}^{G}{\mathcal O})$ is the
closure of ${\mathcal C}({\mathcal O})$ under the operation
\[
\Ind_{(-)\cap N}^{(-)}\colon \mathcal Set^{(-)\cap N}\to \mathcal Set^{(-)}.
\]
\end{corollary}
We can now explain the connection between norms of algebras and algebras over the norm of an $N_\infty$ operad ${\mathcal O}$. One of the defining features of the norm in spectra is a natural isomorphism
\[
N_{H}^{G} \Sigma^{\infty}(X_{+}) \cong \Sigma^{\infty}\big(F_{H}(G,X)_{+}\big),
\]
which follows immediately from the fact that $\Sigma^{\infty}_{+}$ is
a symmetric monoidal functor from spaces with Cartesian product to
spectra with the smash product. Thus we expect a close connection
between algebras in spaces or spectra over an $N_\infty$ operad and
those over its norm. The following corollary is an immediate
consequence of Proposition~\ref{prop:lax}.
\begin{corollary}\label{cor:NormsofAlgebras}
If $R$ is an ${\mathcal O}$-algebra in spaces or spectra for an $N_\infty$
$H$-operad ${\mathcal O}$, then $N_{H}^{G}(R)$ is naturally an
$N_{H}^{G}{\mathcal O}$-algebra.
\end{corollary}
\subsubsection{Cotensoring and $N_\infty$ operads}
We close this subsection with a small result of independent interest:
cofree naive commutative $G$-ring spectra are automatically genuine
commutative $G$-ring spectra. This follows from the cotensoring
operation of spaces on $N_\infty$ operads.
\begin{proposition}\label{prop:UniversalCoTensor}
Let $E$ be a universal space for a family of subgroups of a finite
group $G$. If $X$ is a $G$-space, then
$
F(X,E)
$
is again a universal space for a family of subgroups of $G$.
\end{proposition}
\begin{proof}
A universal space is determined by the property that for any $G$-space $Y$, the space of $G$-equivariant maps
\[
F(Y,E)^{G}
\]
is either empty or contractible. Using the adjunction
\[
F\big(Y,F(X,E)\big)^{G}\cong F(Y\times X,E)^{G},
\]
we see that $F(X,E)$ again has the desired property.
\end{proof}
\begin{proposition}
If $X$ is a non-empty $G$-space, then for any $N_\infty$ operad ${\mathcal O}$, there is
an $N_\infty$ operad $F(X,{\mathcal O})$ defined by
\[
F(X,{\mathcal O})_n=F(X,{\mathcal O}_n),
\]
where $X$ is viewed as a $G\times\Sigma_{n}$-space with trivial $\Sigma_{n}$ action and with coordinatewise structure maps.
\end{proposition}
\begin{proof}
Since the cotensor is lax monoidal (using the diagonal map on $X$), Proposition~\ref{prop:lax} implies
that $F(X,{\mathcal O})$ forms an operad. Proposition~\ref{prop:UniversalCoTensor} then implies that all space are universal spaces for some family of subgroups of $G\times\Sigma_{n}$. We need only show that $\Sigma_{n}$ acts freely.
Let $H\subset\Sigma_{n}$ be non-trivial, and consider the $H$-fixed points of $F(X,{\mathcal O}_{n})$. Since $\Sigma_{n}$ acted trivially on $X$ and since $X$ was non-empty, the restriction of $X$ to $H$ is built entirely out of cells with stabilizer $H$. Since $i_{\Sigma_{n}}^{\ast}{\mathcal O}_{n}=E\Sigma_{n}$, freeness follows.
\end{proof}
Naturality of the function object immediately gives the following
proposition.
\begin{proposition}\label{prop:NaturalityofOperadNorms}
If $f\colon X\to Y$ is a map of non-empty $G$-spaces, then
$f^{\ast}$ is a map of $G$-operads
\[
F(Y,{\mathcal O})\to F(X,{\mathcal O}),
\]
and hence any $F(X,{\mathcal O})$-algebra is naturally a $F(Y,{\mathcal O})$-algebra.
\end{proposition}
In particular, the map to the terminal space $\ast$ shows that any
$F(X,{\mathcal O})$-algebra is naturally an ${\mathcal O}$-algebra.
When the $N_\infty$ $H$-operad is the restriction of an $N_\infty$
$G$-operad, then we can combine
Proposition~\ref{prop:NaturalityofOperadNorms}
and Corollary~\ref{cor:NormsofAlgebras}.
\begin{corollary}\label{cor:PreservationofAlgebra}
If $R$ is an ${\mathcal O}$-algebra in spaces or spectra for an $N_\infty$
$G$-operad ${\mathcal O}$, then $N_{H}^{G}i_{H}^{\ast}(R)$ is again naturally
an ${\mathcal O}$-algebra.
More generally, if $T$ is a finite $G$-set, then $N^{T}(R)$ is
naturally an ${\mathcal O}$-algebra.
\end{corollary}
There is an extremely important (and somewhat surprising) case when $X
= EG$ --- the cotensor $F(EG,{\catsymbfont{O}})$ is then a genuine $G$-$E_{\infty}$ operad
for any ${\catsymbfont{O}}$. To make sense of this claim, consider the mapping
space $F(EG,E\Sigma_n)$, regarded as a $G \times \Sigma_n$-space where
$G \times \Sigma_n$ acts on $EG$ via the projection to $G$ and on
$E\Sigma_n$ via the projection to $\Sigma_n$. Regarded as a universal
space for a family of subgroups of $G \times \Sigma_n$, $E\Sigma_n$
can admit maps only from spaces with isotropy contained entirely in
$G \times \{1\}$ --- but this is precisely the case for $EG$.
\begin{proposition}\label{prop:whoa}
For any $N_\infty$ operad ${\mathcal O}$, the $N_\infty$ operad $F(EG,{\mathcal O})$ is a
$G$ $E_\infty$ operad.
\end{proposition}
\begin{proof}
It suffices to show this for the trivial $N_\infty$ operad ${\mathcal O}^{tr}$,
since $F(EG,{\mathcal O}^{tr})$ maps to $F(EG,{\mathcal O})$ for any other $N_\infty$
operad ${\mathcal O}$.
Let $\Gamma$ be any subgroup of $G\times\Sigma_n$ that intersects
$\Sigma_n$ trivially. To show that the $\Gamma$ fixed points of the
cotensor are nonempty, by adjunction we need only show that
\[
(G\times\Sigma_n/\Gamma)\times EG
\]
can be built out of cells of the form $G/H\times\Sigma_n$. The
cellular filtration of $EG$ shows that it in turn suffices to show
that $G\times\Sigma_n$-equivariantly, we have an isomorphism
\[
(G\times\Sigma_n/\Gamma)\times (G\times\Sigma_{n}/\Sigma_{n})\cong \coprod G\times\Sigma_n.
\]
This follows immediately from the equivalences
\begin{align*}
G\times (G\times\Sigma_n/\Gamma)&\cong (G\times\Sigma_n/\{e\}\times\Sigma_n)\times (G\times\Sigma_n/\Gamma)\\
&\cong G\times\Sigma_n\times_{\{e\}\times\Sigma_n} i_{\{e\}\times\Sigma_n}^{\ast} (G\times\Sigma_n/\Gamma).
\end{align*}
Since $\{e\}\times\Sigma_n$ is normal and since by assumption
\[
\Gamma\cap \{e\}\times\Sigma_{n}=\{e\},
\]
we have an equivariant isomorphism
\[
i_{\{e\}\times\Sigma_n}^{\ast} (G\times\Sigma_n/\Gamma) \cong \coprod_{|G/H|} \{e\}\times\Sigma_n,
\]
where $H$ is the image of $\Gamma$ under the projection to $G$.
\end{proof}
This now gives the following theorem.
\begin{theorem}
If $R$ is an algebra in orthogonal $G$-spectra over any $N_\infty$
operad ${\mathcal O}$, then the cofree orthogonal $G$-spectrum
\[
F(EG_{+},R)
\]
is automatically an algebra over the terminal $N_\infty$
operad. Moreover, the map
\[
R\to F(EG_{+},R)
\]
is a map of ${\mathcal O}$-algebras, where the target is an ${\mathcal O}$-algebra by
the diagonal map ${\mathcal O}\to F(EG_{+},{\mathcal O})$. Analogous results hold for
an algebra over ${\mathcal O}$ in $G$-spaces.
\end{theorem}
\begin{proof}
We give the proof for spectra; the case of spaces is analogous.
First, observe that $F(EG_+,R)$ is an algebra over the operad in
spectra specified by the cotensor $F(EG_+,\Sigma^{\infty}_+ {\catsymbfont{O}})$,
since $F(EG_+,-)$ is lax monoidal (using the diagonal map on $EG$).
Next, there is a natural map of operads
\[
\Sigma^{\infty} F(EG,{\catsymbfont{O}})_+ \to F(\Sigma^\infty
EG_+, \Sigma^\infty {\catsymbfont{O}}_+)
\]
induced by the continuity of the functor $\Sigma^\infty (-)_+$. The
first assertion now follows from Proposition~\ref{prop:whoa}, and the
second is immediate.
\end{proof}
\subsection{Multiplicative action maps}\label{sec:inter}
Based on the example of algebras over the commutative operad, one
expects that the operations parametrized by $N_\infty$ operads are
multiplicative in the sense that for any point $o \in {\catsymbfont{O}}(n)$, the
induced map
\[
\mu_{o} \colon X^{\sma n} \to X
\]
is itself a map of ${\catsymbfont{O}}$-algebras, where the domain is given the
diagonal action of ${\catsymbfont{O}}$. More generally, we would expect this also
to hold equivariantly, where now the maps described in Lemma~\ref{lem:Norms}
and Theorem~\ref{thm:NormsasTransfers} are maps of appropriate
algebras.
Classically, this situation is described via the formalism of
interchange of operads~\cite[\S 1]{Dunn}, which we review below. To
study the case of Theorem~\ref{thm:NormsasTransfers}, wherein we
consider the norm of a map of ${\mathcal O}$-algebras, we need to also address
the connection between algebras over the norm of an operad and the
norm of algebras over an operad.
Recall that given an object $X$ which is simultaneously an
${\catsymbfont{O}}$-algebra and an ${\catsymbfont{O}}'$-algebra, we say that the two actions
interchange if for each point $x \in {\catsymbfont{O}}_n$, the map $X^{n} \to X$ is
a map of ${\catsymbfont{O}}'$-algebras and vice-versa. We can express this
relationship by requiring that the diagram
\[
\xymatrix{
(X^{n})^{m} \ar[r]^{\cong} \ar[d]^{\alpha^m} & (X^{m})^{n} \ar[r]^{\beta^n} & X^n \ar[d]^{\alpha} \\
X^m \ar[rr]^{\beta} & & X\\
}
\]
commute for each $\alpha \in {\catsymbfont{O}}(n)$ and $\beta \in {\catsymbfont{O}}'(m)$, where
the homeomorphism is given by the permutation that takes one lexicographic order to the other.
Interchange of operads is described by the tensor product of operads; by construction, $X$ is an ${\catsymbfont{O}}$-algebra and an ${\catsymbfont{O}}'$-algebra such that the actions interchange if and only if $X$ is an ${\catsymbfont{O}} \otimes {\catsymbfont{O}}'$-algebra~\cite[\S 1]{Dunn}. The universal property of the tensor product of operads can also be described in terms of the
theory of pairings of operads~\cite{Maypairing} (see~\cite[\S 6.1]{guilloumay} for a discussion in the equivariant setting); a pairing
\[
({\catsymbfont{O}}, {\catsymbfont{O}}') \to {\catsymbfont{O}}''
\]
is a collection of suitable coherent maps ${\catsymbfont{O}}_n \times {\catsymbfont{O}}'_m \to {\catsymbfont{O}}''_{nm}$. In this language, the tensor product is the universal recipient for pairings.
The $N_\infty$-condition is a homotopical one, parameterizing (as we
saw above) the ways to coherently multiply elements where we allow the
group to act on both the elements and on the coordinates. We therefore
expect that the tensor product of $N_\infty$ operads will always be
$N_\infty$:
\begin{conjecture}
If ${\mathcal O}$ and ${\mathcal O}'$ are $N_\infty$ operads, then (subject to suitable
cofibrancy conditions) ${\mathcal O}\otimes {\mathcal O}'$ is an $N_\infty$ operad and moreover
\[
{\underline{\cC}}({\mathcal O}\otimes{\mathcal O}')={\underline{\cC}}({\mathcal O})\vee{\underline{\cC}}({\mathcal O}'),
\]
where $\vee$ denotes the least upper bound in the poset $\cI$.
\end{conjecture}
In particular, the conjecture implies that for any algebra over an
$N_\infty$ operad ${\mathcal O}$, the operad action interchanges with itself.
An immediate corollary of the definition of interchange is that when
the operadic action interchanges with itself, the maps in
Lemma~\ref{lem:Norms} are maps of ${\mathcal O}$-algebras:
\begin{proposition}
Let $R$ be an algebra over an $N_\infty$ operad ${\mathcal O}$, and assume that
the ${\mathcal O}$-action interchanges with itself. Then for any surjective
maps $S\to T$ of admissible $H$-sets, the structure maps in
Theorem~\ref{thm:NormsasTransfers}
\[
N^{S} i_{H}^{\ast}R\to N^{T}i_{H}^{\ast} R
\]
are maps of $N^{T}i_{H}^{\ast}{\mathcal O}$-algebras.
\end{proposition}
We intend to return to a general analysis of the theory of the tensor
product of $G$-operads elsewhere. However, for the cases of most
interest in applications, namely the equivariant Steiner and linear
isometries operads, it is possible to verify the necessary interchange
relations directly.
In~\cite[\S 10]{guilloumay}, it is shown that there is a pairing of
operads
\[
\big(\mathcal{K}(V), \mathcal{K}(W)\big) \to \mathcal{K}(V \oplus W),
\]
relying on an interchange map
\[
\theta \colon \mathcal{K}_n(U) \times \mathcal{K}_m(U) \to \mathcal{K}_{nm}(U \oplus U)
\]
that takes $n$ Steiner paths $\{k_1, \ldots, k_n\}$ and $m$ Steiner
paths $\{k'_1, \ldots , k'_m\}$ to the collection of the $nm$
product paths
\[
k_i \times k_j' \colon I \to R_U \times R_U \subset R_{U \oplus U},
\]
ordered lexicographically. Choosing an equivariant homeomorphism
$U \oplus U \to U$, we deduce the following consequence:
\begin{proposition}\label{prop:Steiner}
Let $X$ be an algebra over the equivariant Steiner operad on $U$.
Then the operad action satisfies interchange with itself.
\end{proposition}
\begin{corollary}\label{cor:TransfersAreLoopMaps}
If $X$ is an algebra over ${\mathcal K}(U)$, then for any admissible $H$-set
$T$, the structure maps
\[
N^{T}i_{H}^{\ast}X\to i_{H}^{\ast}X
\]
are maps of ${\mathcal K}(U)$-algebras.
\end{corollary}
Essentially the same construction works for the linear isometries
operad. To be precise, given $f \in {\mathcal L}_n(U)$ and $g \in {\mathcal L}_m(U)$,
we can decompose these into their components --- $f \colon U^n \to U$
gives rise to $f_1, f_2, \ldots, f_n \colon U \to U$ and $g \colon
U^m \to U$ gives rise to $g_1, g_2, \ldots, g_m \colon U \to U$. The
interchange map here takes $\{f_i\}, \{g_j\}$ to the map
\[
(U \oplus U)^{mn} \to U\oplus U
\]
given by the lexicographic pairings $\{f_i \oplus g_j\}$.
Therefore, using again a chosen homeomorphism $U \oplus U \to U$, we
have the following result.
\begin{proposition}
Let $R$ be an algebra over the equivariant linear isometries operad on $U$.
Then the operad action satisfies interchange with itself.
\end{proposition}
\begin{corollary}\label{cor:NormsAreRingMaps}
If $R$ is an algebra over ${\mathcal L}(U)$, then for any admissible $H$-set $T$, the structure maps
\[
N^{T}i_{H}^{\ast}R\to i_{H}^{\ast}R
\]
are maps of ${\mathcal L}(U)$-algebras.
\end{corollary}
\section{$N_\infty$-spaces and $N_\infty$-ring spectra: Transfers and
norms}\label{sec:TransfersAndNorms}
In this section, we interpret the structure on algebras over $N_\infty$
operads in the two cases of most interest: $G$-spaces and orthogonal
$G$-spectra. In the former, the admissible sets control which
transfer maps exist; this provides a conceptual interpretation of the
way in which $N_\infty$ operads control the structure of equivariant
infinite loop spaces. In the latter, the admissible sets control
which norms exist; this provides a conceptual interpretation of the
way in which $N_\infty$ operads control the structure of equivariant
commutative ring spectra.
\subsection{$N_\infty$ algebras in spaces and the
transfer}\label{sec:transfers}
We begin by applying the machinery developed above to produce the
transfer in algebras over an $N_\infty$ operad in spaces. The most
important examples of $N_\infty$ operads from the point of view of
spaces are the equivariant Steiner operads ${\mathcal K}(U)$, which model
equivariant infinite loop spaces. The goal of this section is to
describe how the transfer naturally arises from the operadic structure
maps.
In this section, we state our results in terms of an operad ${\mathcal O}$ such
that the action of ${\mathcal O}$ on any ${\mathcal O}$-algebra $X$ interchanges with
itself. (Recall that Proposition~\ref{prop:Steiner} tells us this is
true for ${\mathcal K}(U)$.) The following is a restatement of
Theorem~\ref{thm:NormsasTransfers} in the context of $G$-spaces.
\begin{theorem}\label{thm:GeneralTransfers}
If ${\mathcal O}$ is an $N_\infty$ operad, $S$ and $T$ are admissible $H$-sets,
and $f\colon T\to S$ is an $H$-map, then for any ${\mathcal O}$-algebra $X$ in
$G$-spaces, we have a contractible space of maps
\[
F(T,i_{H}^{\ast}X)\to F(S,i_{H}^{\ast}X),
\]
and if the map $f$ is surjective, then any choice is homotopic to a map of $N^{S}({\mathcal O})$-algebras.
\end{theorem}
Applying fixed points and passing to homotopy groups, we produce
interesting maps:
\begin{theorem}\label{thm:TransfersinSpaces}
If $S$ and $T$ are admissible $H$-sets, if $f\colon T\to S$ is an
$H$-map, and if $X$ is an ${\mathcal O}$-algebra in $G$-spaces, then there is
a unique, natural (in $X$) map of abelian monoids
\[
f_{\ast}\colon\pi_{k}\big(F(T,i_{H}^{\ast}X)^{H}\big)\to \pi_{k}\big(F(S,i_{H}^{\ast}X)^{H}\big)
\]
for all $k\geq 0$.
\end{theorem}
\begin{proof}
Without loss of generality, we may assume that $H=G$. If the map $f$
is not surjective, then we may use the splitting
\[
S=\operatorname{Im}(f)\amalg S'
\]
to produce a decomposition
\[
F(S,X)\cong F(\operatorname{Im}(f),X)\times F(S',X).
\]
Proposition~\ref{prop:Naive} guarantees that for all $k\geq 0$, the
induced decomposition on homotopy groups of fixed points is a
splitting of abelian monoids. Our map $f_{\ast}$ is the composite of the map induced by $f\colon T\to \operatorname{Im}(f)$ with the inclusion of the summand associated to $\operatorname{Im}(f)$. We therefore may assume that $f$ is
surjective.
Since the spaces in our operad are contractible, there is a unique homotopy class for the structure map given by Theorem~\ref{thm:GeneralTransfers}
\[
f_{\sharp}^{G}\colon F(T,X)^{G}\to F(S,X)^{G},
\]
which gives rise to a unique map of homotopy groups:
\[
f_{\ast}\colon \pi_{k}\big(F(T,X)^{G}\big)\to \pi_{k}\big(F(S,X)^{G}\big).
\]
Proposition~\ref{prop:Naive} guarantees that for all $k\geq 0$, the homotopy groups of all fixed points of $N^{T}(X)$ are abelian monoids. It is obvious that $f_{\ast}$ is a map of abelian groups for $k\geq 1$. Since we may assume that $f_{\sharp}$ comes from a surjective map, our interchange assumption guarantees that $f_{\sharp}$ is a map of $N^{S}({\mathcal O})$-algebras. Thus $f_{\ast}$ is a map of abelian monoids for all $k$.
\end{proof}
\begin{corollary}
If $H/K$ is an admissible $H$-set, then associated to the canonical
projection map
\[
\pi_{K}^{H}\colon H/K\to H/H
\]
we have a natural map of abelian monoids
\[
tr_{K}^{H}=\pi_{K*}^{H}\colon \pi_{k} X^{K}\to \pi_{k}X^{H}.
\]
\end{corollary}
This map has the feel of the transfer map: on homotopy groups, we have
a map that goes from the fixed points for a subgroup back to the fixed
points for a larger group. We shall shortly verify that, upon passage to spectra, this does give the usual transfer. Before doing so,
we deduce some very nice structural corollaries from
Theorem~\ref{thm:ExistenceofNorms}.
\begin{proposition}\label{prop:DoubleCoset}
If $H/K$ is an admissible $H$-set, then the double coset formula determining the restriction of $tr_{K}^{H}$ to any subgroup $K'$ of $H$ holds:
\[
res_{K'}^{H}tr_{K}^{H}=\bigoplus_{g\in K'\backslash H/K} tr_{K'\cap gKg^{-1}}^{K'} res_{K'\cap gKg^{-1}}^{K}.
\]
\end{proposition}
This proposition, often called the ``Mackey double coset formula,'' really has a simpler interpretation: the restriction to a subgroup $K'$ of the transfer associated to an $H$-set $T$ is the transfer associated to the $K'$-set $i_{K'}^{\ast} T$. As such, this is an immediate consequence of Theorem~\ref{thm:ExistenceofNorms}~(iii).
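For illustration, consider a standard special case: let $H=C_{p^{2}}$ be cyclic of order $p^{2}$ and $K=K'=C_{p}$ its unique subgroup of order $p$, assuming $H/K$ is admissible. Since $H$ is abelian, the double cosets $C_{p}\backslash C_{p^{2}}/C_{p}$ are exactly the $p$ cosets in $C_{p^{2}}/C_{p}$, and each summand is $tr_{C_{p}}^{C_{p}}res_{C_{p}}^{C_{p}}=\mathrm{id}$, so the formula specializes to
\[
res_{C_{p}}^{C_{p^{2}}}tr_{C_{p}}^{C_{p^{2}}}=p\cdot\mathrm{id}.
\]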
\begin{corollary}
For an ${\mathcal O}$-algebra $X$ for which the ${\mathcal O}$-action interchanges with itself, the abelian group valued coefficient system
\[
\underline{\pi_{k}(X)}\colon\underline{\Set}\to\mathcal Ab
\]
defined by
\[
(T\in\mathcal Set^{H})\mapsto \pi_{k}\big(F(T,X)^{H}\big)
\]
has transfers for any admissible sets.
\end{corollary}
These are therefore incomplete Mackey functors, studied by Lewis during his analysis of incomplete universes \cite{LewisHurewicz, LewisChange}.
\begin{remark}
The forgetful functor on abelian group valued coefficient systems has a right adjoint: coinduction. By the universal property of the product, we have a natural isomorphism
\[
\underline{\pi_{k} F(G/H,X)}\cong \textnormal{CoInd}_{H}^{G}i_{H}^{\ast} \underline{\pi_{k}(X)}.
\]
This can be further simplified, using the construction of coinduction:
\[
\textnormal{CoInd}_{H}^{G}i_{H}^{\ast}\underline{M}(T)=\underline{M}(G/H\times T).
\]
This final formulation has an obvious extension to more general $G$-sets than orbits, and we follow Lewis's notation
\[
\underline{M}_{S}(T)=\underline{M}(S\times T)
\]
for a fixed $G$-set $S$.
The ${\mathcal O}$-algebra structure that interchanges with itself endows the homotopy coefficient system of an ${\mathcal O}$-algebra $X$ with natural transformations
\[
\underline{\pi_{k}(X)}_{T}\to\underline{\pi_{k}(X)}
\]
for all admissible sets $T$ and which commute with restriction. If all sets are admissible, then this is equivalent to a Mackey functor structure on $\underline{\pi_{k}(X)}$~\cite{HillHopkins}.
\end{remark}
\begin{remark}
One of the classical ways to package the data of a Mackey functor is
via additive functors from the Burnside category of spans of finite
$G$-sets into some other category. There is an ``incomplete'' version
of these that can be used in our context. The appropriate notion of a
``span'' for our incomplete Mackey functors is an isomorphism class of
a pair of maps $S\leftarrow U\rightarrow T$, where $U\to T$ is a
pull-back of a map between admissible sets. These objects form a
subcategory of the Burnside category. A full treatment of this
approach also engages with the issues from
Remark~\ref{rem:OperadsonGSets} of indexing our operads on finite
$G$-sets rather than on natural numbers. We intend to return to this
issue in a subsequent paper.
\end{remark}
Having seen that the homotopy groups of an ${\mathcal O}$-algebra in $G$-spaces
have transfers analogous to those possessed by the homotopy groups of
genuine spectra, we restrict attention to ${\mathcal O}={\mathcal K}(U)$ or ${\mathcal O}={\mathcal D}(U)$
for a universe $U$ and show that we are in fact constructing the usual
transfer. Recall that an equivariant ${\catsymbfont{O}}$-algebra $X$ is
``group-like'' if $\pi_{0}(X^{H})$ is an abelian group for all
$H\subset G$. We have the following delooping result:
\begin{proposition}[\cite{costenoblewaner}]
If $X$ is a group-like ${\mathcal K}(U)$-algebra or ${\mathcal D}(U)$-algebra then there
is an equivariant spectrum $\mathfrak X$ indexed on $U$ for which $X$
is the zero space. Similarly, a map of ${\mathcal K}(U)$-algebras $X\to Y$
deloops to a map $\mathfrak X\to\mathfrak Y$ of spectra indexed on
$U$.
\end{proposition}
We can now deloop any of our structure maps since
Corollary~\ref{cor:TransfersAreLoopMaps} implies that they are
infinite loop maps.
\begin{corollary}
Fix some universe $U$, and let $H/K$ be an admissible $H$-set for
${\mathcal K}(U)$. If $X$ is a group-like ${\mathcal K}(U)$-algebra, then we have a map
of spectra indexed by $U$:
\[
F_{K}(H,\mathfrak X)\to i_{H}^{\ast}\mathfrak X,
\]
where $\mathfrak X$ is the spectrum whose zero space is $X$, and where
$F_{K}(H,\mathfrak X)$ is the coinduced spectrum. Moreover, the
homotopy class is unique.
\end{corollary}
In this context, we see another interpretation of
Theorem~\ref{thm:ExistenceofNorms} (iii). The relevant spaces in the
operad ${\mathcal O}$ parameterize the homotopies making the diagrams
\[
\xymatrix{
{i_{K}^{\ast}N^{S}\mathfrak X\simeq N^{i_{K}^{\ast}S}i_{K}^{\ast} \mathfrak X}\ar@{<->}[rr] \ar[dr] & & {N^{i_{K}^{\ast}T}i_{K}^{\ast}\mathfrak X\simeq i_{K}^{\ast}N^{T}\mathfrak X}\ar[dl] \\
{} & {i_{K}^{\ast}\mathfrak X} & {}}\]
commute. This is again an incarnation of the double coset formula.
When ${\mathcal O}$ is ${\mathcal K}(U)$ for some universe $U$, these transfers recover the classical transfers.
\begin{proposition}
If $X$ is a group-like ${\mathcal K}(U)$-algebra, then the operadic transfer
map associated to an admissible set $G/H$ gives rise to the
ordinary transfer.
\end{proposition}
\begin{proof}
This identification essentially follows from the definition of the
action of the little disks operad on $\Omega^V S^V$. Due to the
problems with suspension in the context of the little disks operad, we
will have to shift between ${\mathcal K}(U)$ and ${\mathcal D}(V)$ in the following
argument.
First, observe that if $G/H$ is an admissible $G$-set for ${\mathcal K}(U)$,
then it is an admissible $G$-set for ${\mathcal D}(U)$ and so for some finite
dimensional subspace $V \subset U$, we have a $G$-equivariant embedding
\[
G/H\times D(V)\hookrightarrow D(V).
\]
For a particular subspace $V$, these choices can be inequivalent, but
letting the dimension grow yields our contractible space of maps
\[
G/H\times D(U)\hookrightarrow D(U).
\]
Thus in the limit, any choices we made become equivalent, and we can
restrict attention to some finite dimensional $V$ and the $V$-fold
loops.
Since $X$ is a ${\mathcal K}(U)$-space, delooping~\cite{costenoblewaner}
implies that $X \simeq \Omega^V Y$ as a ${\mathcal K}(U)$-space for some $Y$.
Changing operads, we can regard $X$ as having a ${\mathcal D}(V)$ action which
is compatible with the ${\mathcal K}(U)$ action. Any embedding of the form
$G/H\times D(V)\hookrightarrow D(V)$ induces a Pontryagin-Thom map
\[
S^{V}\to G/H_{+}\wedge S^{V}.
\]
Taking maps out of this produces a map of algebras
\[
F_H(G_+,i_H^{\ast}\Omega^{V}Y)\cong F(G/H_+ \sma
S^{V},Y)\to \Omega^{V}Y,
\]
which in this case manifestly represents the same homotopy class as
the map constructed in Theorem~\ref{thm:NormsasTransfers}; the
Pontryagin-Thom collapse yields precisely the operadic structure map
in this case. But of course this collapse is also the same as the
classical construction of the transfer map~\cite{Adams}.
\end{proof}
\begin{remark}
One can also deduce the preceding comparison of transfers from the description of the transfer as the composite of the inverse of the Wirthm\"uller isomorphism and the action map $G \sma_H X \to X$~\cite[4.15]{Schwede}. Specifically, the result follows from this
characterization along with the fact that the delooping of the
operadic multiplication of a group-like ${\catsymbfont{O}}$-space produces the fold
map of $G$-spectra.
\end{remark}
\subsection{$N_\infty$-ring spectra and the norm}
We now study the case of $N_\infty$ algebras in orthogonal $G$-spectra.
The arguments are essentially the same as in the preceding subsection,
but the interpretation is different. The proof of
the following is identical to the proof of
Theorem~\ref{thm:TransfersinSpaces} and
Proposition~\ref{prop:DoubleCoset}, so we omit it.
\begin{theorem}
If $R$ is an algebra over an $N_\infty$ operad ${\mathcal O}$, then
\[
\underline{\pi_{0}(R)}
\]
is a commutative Green functor.
If the ${\mathcal O}$ action interchanges with itself, then for any admissible
$H$-set $H/K$ we have a ``norm map''
\[
\underline{\pi_{0}(R)}(G/K)\xrightarrow{n_{K}^{H}}\underline{\pi_{0}(R)}(G/H)
\]
which is a homomorphism of commutative multiplicative monoids.
The maps $n_{K}^{H}$ satisfy the multiplicative version of the Mackey double-coset formula.
\end{theorem}
Thus just as the homotopy groups of algebras in spaces over the Steiner operad on an incomplete universe gave incomplete Mackey functors with only some transfers, the zeroth homotopy group of an algebra in spectra over the linear isometries operad on an incomplete universe gives incomplete Tambara functors with only some norms.
\label{introduction}
Combinatorial optimization problems aim to find the optimal combination satisfying certain constraints. In practice, it is crucial to solve combinatorial optimization problems quickly. However, a computational explosion occurs when the number of combinations becomes enormous; a brute-force method cannot solve such problems.
Simulated and quantum annealing principles in physics play a crucial role in developing specialized machines for combinatorial optimization problems. Typical examples include D-Wave Advantage by D-Wave Systems Inc.\cite{d-wave}, Digital Annealer by Fujitsu Ltd.\cite{digital_annealer}, and CMOS Annealing Machine by Hitachi Ltd.\cite{cmos} The input form of these annealing machines is a quadratic form of binary variables called the quadratic unconstrained binary optimization (QUBO) formulation, which is equivalent to the Ising model. Hence, one should convert the original combinatorial optimization problems to the QUBO formulation. Then, the specialized machines quickly explore the optimal solution based on simulated or quantum annealing principles. Reference \cite{ising_formulation} gives various examples of the Ising models and QUBO formulations for famous combinatorial optimization problems.
Although specialized machines give only approximate solutions, we can use them in various practical situations. Many research institutes and companies are investigating applications of annealing machines to address social issues. Such applications include machine learning \cite{qloss, k-means, black-box}, financial optimization \cite{portfolio}, space debris removal \cite{space_debri}, traffic flow \cite{AGV, traffic}, medical imaging \cite{IMRT}, and E-commerce \cite{e-commerce}. As mentioned above, the annealing machines can solve combinatorial optimization problems that are difficult to solve with conventional computers within reasonable calculation time. Hence, they are attracting much attention as a new computational technology.
Annealing machines with the QUBO formulation can also solve problems with continuous variables; binary expansions of the original continuous variables are used \cite{encoding}. The corresponding QUBO formulation is then derived. There are studies on the binary expansion method with which annealing machines solve linear regression problems \cite{linear_regression} and support vector machines \cite{support_vector_machine}. Annealing machines reduce the computational time by a factor of 2.8 for relatively large linear regression problems \cite{linear_regression}; this study showed the superiority of annealing machines.
However, annealing machines have the limitation that the number of available variables is not very large. Practical optimization problems require many variables, which can make it impossible to embed the QUBO formulation into the hardware. For example, D-Wave Advantage can only solve problems with up to about 5000 qubits. As mentioned above, continuous optimization problems require the discretization of variables. Previous studies used a naive discretization with a fixed number of binary variables for each continuous variable. Hence, the number of necessary binary variables increases proportionally with the number of continuous variables, which is problematic for large-scale optimization problems with continuous variables. A rough discretization decreases the prediction accuracy because of insufficient resolution.
We propose an efficient discretization method for annealing machines that exploits the correlations between continuous variables. We carry out a short sampling for the original problem with continuous variables in advance. Then, the information on the correlations leads to a problem-specific discretization. The basic idea is as follows: we share binary variables between continuous variables that tend to take roughly the same value. Thus, the number of binary variables can be reduced without degrading the prediction accuracy. We investigated the effectiveness of the proposed method through a numerical experiment on linear regression problems.
The remainder of the paper is organized as follows. In Sec.~2, we give a brief review of the formulation for annealing machines. We review the linear regression problem and its expression as a QUBO formulation in Sec.~3. In Sec.~4, we present the proposed efficient correlation-based discretization method. In Sec.~5, we explain the numerical experiment that investigates the effectiveness of the proposed method. Finally, we conclude the paper in Sec.~6.
\section{Basic Formulation for Annealing Machines}
\subsection{Ising model and QUBO formulation}
The Ising model is one of the most fundamental models to describe the properties of magnetic materials. Let $\sigma_i \in \{-1, 1\}$ denote a variable for the $i$-th spin. Then, the energy function of the Ising model with the state vector $\bm{\sigma}$ is defined as follows:
\begin{align}
\label{eq:ising_model}
E(\bm{\sigma}) = -\sum_{i < j} J_{ij} \sigma_i \sigma_j - \sum_i h_i \sigma_i,
\end{align}
where $J_{ij} \in \mathbb{R}$ corresponds to the two-body interaction between the spins $\sigma_i$ and $\sigma_j$, and $h_i \in \mathbb{R}$ is the external magnetic field on $\sigma_i$.
The QUBO formulation has the following cost function for the state vector $\bm{z}$:
\begin{align}
E(\bm{z}) = -\sum_{i, j}Q_{ij}z_iz_j,
\label{eq:qubo}
\end{align}
where $z_i \in \{0,1\}$ is the $i$-th binary variable in $\bm{z}$, and $Q_{ij} \in \mathbb{R}$ is the strength of the interaction between the binary variables $z_i$ and $z_j$. This formulation is equivalent to the Ising model via the variable transformation with
\begin{align}
z_i = \frac{1 + \sigma_i}{2}.
\label{eq:variable_change}
\end{align}
Conversion between $\{J_{ij}\}, \{h_i\}$ and $\{Q_{ij}\}$ is also possible.
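To make the correspondence concrete, the following minimal NumPy sketch converts $\{Q_{ij}\}$ into $\{J_{ij}\}$, $\{h_i\}$, and a constant offset under the sign conventions of Eqs.~\eqref{eq:ising_model}, \eqref{eq:qubo}, and \eqref{eq:variable_change}; the function name and the explicit offset are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def qubo_to_ising(Q):
    # Convert E(z) = -z^T Q z with z_i in {0,1} into
    # E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i + offset
    # with s_i in {-1,1}, via z_i = (1 + s_i)/2.
    Qs = (Q + Q.T) / 2.0        # work with the symmetric part of Q
    J = Qs / 2.0                # couplings; only the i < j entries are used
    np.fill_diagonal(J, 0.0)
    h = Qs.sum(axis=1) / 2.0    # external fields
    offset = -(Qs.sum() + np.trace(Qs)) / 4.0  # constant energy shift
    return J, h, offset
\end{verbatim}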
From the computational viewpoint, annealing machines are domain-specific, and their role is simple: to find the ground state that minimizes Eq.~\eqref{eq:ising_model} or \eqref{eq:qubo}. Despite this domain-specific characteristic, we can solve various combinatorial optimization problems with annealing machines. The reason is that many optimization problems can be converted into the QUBO formulation \cite{ising_formulation}; we formulate the Ising model or QUBO formulation so that the ground state coincides with the optimal solution of the original combinatorial optimization problem. Hence, we can solve combinatorial optimization problems with annealing machines.
As stated above, the Ising model and the QUBO formulation are equivalent. However, the QUBO formulation is more convenient for computation, and we therefore employ the QUBO formulation below.
\subsection{Two types of annealing machines}
\label{seq:anenaling}
The QUBO formulation is widely applicable to different types of annealing machines. There are two main types of annealing machines currently under development: one uses simulated annealing (SA), and the other uses quantum annealing (QA). Although the mechanisms of the search process differ, both types accept the QUBO formulation as the input to their specific hardware. We briefly review why the QUBO formulation is suitable for both types of machines.
SA is an algorithm based on thermal fluctuations. When the temperature is high, the thermal fluctuations are large, and the algorithm searches a large area of the state space. A gradual decrease in temperature lets the system settle down toward the ground state. Numerically, the probability $P$ of a state change is often defined as follows:
\begin{align}
P = \min\left[1, \exp\left(-\frac{\Delta E}{T}\right)\right],
\label{eq:probability}
\end{align}
where $\Delta E$ is the energy difference when a certain state is changed, and $T$ is the temperature. The ground state is adequately obtained if the temperature decreases slowly enough \cite{gibbs}. Note that $\Delta E$ in Eq.~\eqref{eq:probability} is evaluated using Eq.~\eqref{eq:qubo}; the algorithm uses the cost function of the QUBO formulation, which indicates that the QUBO formulation is suitable as the input for such machines.
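As an illustration, the following minimal NumPy sketch implements one single-bit-flip Metropolis update with the acceptance rule of Eq.~\eqref{eq:probability}; the function name is an illustrative choice of ours, $Q$ is assumed to be symmetric, and \texttt{rng} denotes a NumPy random generator such as \texttt{np.random.default\_rng()}.
\begin{verbatim}
import numpy as np

def metropolis_step(z, Q, T, rng):
    # One single-bit-flip Metropolis update for the QUBO cost
    # E(z) = -z^T Q z with a symmetric Q; the energy difference
    # dE of the candidate flip is evaluated in O(n).
    k = rng.integers(len(z))
    dz = 1 - 2 * z[k]  # +1 when flipping 0 -> 1, -1 when flipping 1 -> 0
    dE = -dz * (Q[k, k] + 2.0 * (Q[k] @ z - Q[k, k] * z[k]))
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):
        z[k] = 1 - z[k]  # accept with probability min[1, exp(-dE/T)]
    return z
\end{verbatim}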
QA was proposed by Kadowaki and Nishimori and uses quantum fluctuations \cite{quantum_annealing}. Instead of the classical binary spin variables, we need quantum spins described by the Pauli matrices. Introducing the state vector of the $z$ components of the Pauli matrices, $\hat{\bm{\sigma}}^z$, and the $x$ component for the $i$-th spin, $\hat{\sigma}_i^x$, the Hamiltonian is defined as follows:
\begin{align}
\widehat{H} = E(\hat{\bm\sigma}^z)-\Gamma\sum_i \hat{\sigma}^x_i,
\label{eq:qa_ising_model}
\end{align}
where $\Gamma$ is a control parameter. The second term on the right-hand side (r.h.s.) of Eq.~\eqref{eq:qa_ising_model} corresponds to the quantum fluctuations. The initial state is prepared as the ground state under quantum fluctuations. By gradually weakening the quantum effect, i.e., $\Gamma$, QA seeks the ground state determined by the first term on the r.h.s. of Eq.~\eqref{eq:qa_ising_model}. QA can use more rapid scheduling than SA \cite{quantum_annealing_scheduling}. Such a computation method is also called adiabatic quantum computing \cite{adiabatic}.
The annealing mechanism of QA differs from that of SA. However, the final ground state is defined only by the first term on the r.h.s. of Eq.~\eqref{eq:qa_ising_model}. It is straightforward to obtain the first term from the QUBO formulation in Eq.~\eqref{eq:qubo}. Hence, practical QA machines, such as the D-Wave Advantage, accept the QUBO formulation as the input.
The main topic of this paper is the discretization of continuous variables. Although the annealing mechanisms of SA and QA differ, the following discussions with the QUBO formulation apply to both SA and QA machines. For simplicity of notation, only the classical case, i.e., the SA case, is considered in the following sections; we do not consider any quantum effects.
\section{Linear Regression and QUBO Formulation}
\subsection{Linear regression problems}
Linear regression is one of the fundamental methods in data analysis, with which we seek the relationship between a response variable and explanatory variables. Although matrix operations are sufficient to solve simple linear regression problems, further computational effort is necessary in large-scale cases or when there are regularization terms. Hence, the use of an annealing machine is desirable. With these problems in mind, Date and Potok discussed the implementation and speedup of simple linear regression using an annealing machine \cite{linear_regression}. Linear regression problems also yield a good example with which to investigate discretization methods for continuous variables. We briefly review Date and Potok's discussion.
A linear regression model is defined as
\begin{align}
y = w_1 + w_2 x_{2} + \cdots + w_D x_{D},
\label{eq_regression_model}
\end{align}
where $w_d \in \mathbb{R} \,\, (d = 1, \dots, D)$ is the $d$-th parameter, and $x_{d} \in \mathbb{R} \,\, (d = 2, \dots, D)$ is the $d$-th input variable. They are summarized in the following forms:
\begin{align}
\bm{w} &= [w_1, w_2, \cdots, w_D]^\mathrm{T} \in \mathbb{R}^{D}, \\
\bm{x} &= [1, x_{2}, \cdots, x_{D}]^\mathrm{T} \in \mathbb{R}^{D},
\end{align}
where the first component in $\bm{x}$ is a dummy variable; the dummy variable is convenient for the vector expression of Eq.~\eqref{eq_regression_model} as follows:
\begin{align}
y = \bm{w}^\mathrm{T} \bm{x}.
\end{align}
In linear regression problems, we seek the model parameters $\bm{w}$ in Eq.~\eqref{eq_regression_model} that are suitable for our obtained data. Let $\{(\bm{x}_i,y_i):i=1,2,\cdots,N\}$ be a set of $N$ training data, where $\bm{x}_i = [1, x_{i2}, \cdots, x_{iD}]^\mathrm{T} \in \mathbb{R}^{D}$ is a $D$-dimensional input vector, and $y_i \in \mathbb{R}$ is an output for $i$-th data. We introduce an output vector $\bm{y} = [y_1, y_2, \cdots, y_N]^\mathrm{T}$ and matrix $X \in \mathbb{R}^{N \times D}$ defined as
\begin{align}
X = \begin{bmatrix}
\, 1 & x_{12} & \cdots & x_{1D} \\
\, 1 & x_{22} & \cdots & x_{2D} \\
\, \vdots & \vdots & \ddots & \vdots \\
\, 1 & x_{N2} & \cdots & x_{ND} \\
\end{bmatrix}.
\end{align}
Then, a conventional linear regression problem has the following cost function:
\begin{align}
\widetilde{E}(\bm{w}) = \|\bm{y}-X\bm{w}\|^2=\bm{y}^\mathrm{T}\bm{y}-2\bm{w}^\mathrm{T}X^\mathrm{T}\bm{y}+\bm{w}^\mathrm{T}X^\mathrm{T}X\bm{w}.
\label{eq:pre_reg_cost}
\end{align}
In Eq.~\eqref{eq:pre_reg_cost}, $\bm{y}^\mathrm{T}\bm{y}$ is constant with respect to $\bm{w}$ and is thus irrelevant to the optimization. Hence, Eq.~\eqref{eq:pre_reg_cost} reduces to the following problem:
\begin{align}
E(\bm{w}) = \bm{w}^\mathrm{T}X^\mathrm{T}X\bm{w}-2\bm{w}^\mathrm{T}X^\mathrm{T}\bm{y}.
\label{eq:reg_cost}
\end{align}
Of course, it is possible to find the minimum of Eq.~\eqref{eq:reg_cost} via simple matrix operations. However, these matrix operations include a matrix inversion with a high computational cost. Annealing machines directly solve the optimization problem in the form of Eq.~\eqref{eq:reg_cost}.
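For reference, the closed-form solution looks as follows in NumPy; the $D\times D$ linear solve is the costly step mentioned above (the function name is an illustrative choice of ours).
\begin{verbatim}
import numpy as np

def least_squares(X, y):
    # Closed-form minimizer of E(w) = w^T X^T X w - 2 w^T X^T y,
    # i.e., w = (X^T X)^{-1} X^T y; the D x D solve dominates the cost.
    return np.linalg.solve(X.T @ X, X.T @ y)
\end{verbatim}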
\subsection{QUBO formulation for linear regression}
\label{sec:pre_research}
As noted in Sec.~\ref{introduction}, Date and Potok showed that the QUBO formulation and annealing machines enable faster computation than conventional classical computers even for simple linear regression problems without any regularization term \cite{linear_regression}. We now explain the QUBO formulation for simple linear regression problems.
First, we introduce $\bm{b} = [b_1, b_2, \dots, b_K]^\mathrm{T} \,\, (K \in \mathbb{N})$ as a basis vector. For later use, the components of $\bm{b}$ are in ascending order of absolute value. An example of this vector is $\bm{b}=[\frac{1}{2}, -\frac{1}{2}, 1, -1, 2, -2]^\mathrm{T}$.
Second, we introduce a $D \times K$ binary matrix $\widetilde{\bm{z}}\in\{0, 1\}^{D \times K}$. Then, the discretization of the linear regression weight $w_i$ is
\begin{align}
w_i = \sum_{k=1}^{K}b_k\widetilde{z}_{ik} \quad (i \in 1,2,\dots,D),
\label{eq:w_i}
\end{align}
which indicates that the binary variable $\widetilde{z}_{ik}$ acts like a flag that determines whether the basis component $b_k$ is used in the expression. For the above example of $\bm{b}=[\frac{1}{2}, -\frac{1}{2}, 1, -1, 2, -2]^\mathrm{T}$, we can express the values $\{-\frac{7}{2}, -3, \dots, 3, \frac{7}{2}\}$. Note that there may not be a one-to-one correspondence between $w_i$ and $\widetilde{\bm{z}}$; we can use redundant expressions for the original variables. Although a redundant basis could yield good annealing performance, this point is not our main research topic.
Third, let $\bm{z}=[\widetilde{z}_{11},\cdots,\widetilde{z}_{1K},\widetilde{z}_{21},\cdots,\widetilde{z}_{2K},\cdots,\widetilde{z}_{D1},\cdots,\widetilde{z}_{DK}]^T$ denote the flattened one-dimensional vector derived from the binary matrix $\widetilde{\bm{z}}$. Then, we rewrite Eq.~\eqref{eq:w_i} in a matrix form as follows:
\begin{align}
\bm{w} = B\bm{z},
\label{eq:w}
\end{align}
where $B = I_D \otimes \bm{b}^\mathrm{T}$ is the basis matrix derived from the Kronecker product of the $D$-dimensional unit matrix $I_D$ and the row vector $\bm{b}^\mathrm{T}$. An example of the matrix $B$ is given later.
Finally, we obtain the QUBO formulation of linear regression from Eqs.~\eqref{eq:reg_cost} and \eqref{eq:w} as follows:
\begin{align}
E(\bm{z}) = \bm{z}^\mathrm{T}B^\mathrm{T}X^\mathrm{T}XB\bm{z}-2\bm{z}^\mathrm{T}B^\mathrm{T}X^\mathrm{T}\bm{y}.
\label{eq:lr_qubo}
\end{align}
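The construction of Eq.~\eqref{eq:lr_qubo} is easily mechanized. The following minimal NumPy sketch (the function name is ours) assembles the quadratic and linear coefficients into a single matrix $A$, folding the linear part into the diagonal via $z_i^2=z_i$.
\begin{verbatim}
import numpy as np

def regression_qubo(X, y, b):
    # Assemble E(z) = z^T B^T X^T X B z - 2 z^T B^T X^T y as one
    # matrix A with E(z) = z^T A z, using z_i^2 = z_i for binaries.
    D = X.shape[1]
    B = np.kron(np.eye(D), np.atleast_2d(b))  # D x (D*K), B = I_D kron b^T
    A = B.T @ X.T @ X @ B                     # quadratic coefficients
    lin = -2.0 * (B.T @ (X.T @ y))            # linear coefficients
    return A + np.diag(lin)                   # fold linear part into diagonal
\end{verbatim}
Under the sign convention of Eq.~\eqref{eq:qubo}, the matrix handed to the machine is $-A$.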
\section{Proposed Efficient Correlation-based Discretization Method}
\label{sec:bit_cut_method}
\subsection{Problems with naive method}
\label{sec:problems_of_naive_method}
As mentioned in Sec.~\ref{sec:pre_research}, the size of the binary vector $\bm{z}$ is $D \times K$ because we assign $K$ binary variables to each continuous variable with the naive discretization method. For large-scale problems, however, this method is likely to exceed the size limit of the annealing machine. How, then, can we reduce the number of binary variables to fit within this limit?
The simplest method is to reduce the number of components of the basis used in the binary expansion. However, this reduction degrades the precision of the original continuous variables and narrows the ranges that the variables can represent.
One may instead employ a different discretization for each continuous variable. Such flexible discretization may yield good performance. However, we cannot determine the required expression for each original continuous variable before solving the problem.
It is also possible to randomly choose the continuous variables whose bases are reduced; this randomized method might yield good accuracy on average. However, it could degrade the accuracy of a given realization.
In summary, reducing the basis degrades the precision of each original continuous variable and narrows the representable ranges. Hence, such a reduction could lead to poor prediction accuracy for the linear regression.
\subsection{Basic idea of proposed method}
To avoid the degradation in precision and the narrower ranges, we exploit shared binary variables in the discretization. For example, $\bm{b}=[\frac{1}{2}, -\frac{1}{2}, 1, -1, 2, -2]^\mathrm{T}$ expresses $-2.5$ and $-1.5$ as
\begin{align*}
-2.5 &= \begin{bmatrix} 0,1,0,0,0,1 \end{bmatrix} \bm{b} \\
&= 0 \times b_1 + 1 \times b_2 + 0 \times b_3 + \cdots + 1 \times b_6, \\
-1.5 &= \begin{bmatrix} 1,0,0,0,0,1 \end{bmatrix} \bm{b} \\
&= 1 \times b_1 + 0 \times b_2 + 0 \times b_3 + \cdots + 1 \times b_6.
\end{align*}
When we consider two such large negative values, $b_6$ is used in both expansions; here, the coefficient of $b_6$ is $1$ in both cases. If two variables are strongly correlated, the coefficients of $b_6$ for them will tend to be the same; the coefficient of $b_6$ largely determines the value of each variable, so agreement there reflects the strong correlation between them. Hence, we can use the same binary variable for $b_6$ in this example.
Note that sharing a binary variable does not degrade the precision of the approximation, and the representable range is unchanged; these properties differ from those of the naive methods in Sec.~\ref{sec:problems_of_naive_method}. Note also that we must find pairs of continuous variables with strong correlations. Hence, we need a preliminary evaluation of the correlations between the original continuous variables. Although this evaluation takes some computational effort, only rough estimates of the correlations are sufficient; the computational cost is not high.
The basic idea of the proposed method is summarized as follows: evaluate correlations, find strongly correlated pairs, and reduce redundant binary variables. In the following sections, we first explain how to reduce the number of binary variables and then give the entire procedure to derive the QUBO formulation.
\subsection{Choice of basis matrix}
\label{sec:choice_of_basis_matrix}
As reviewed in Sec.~\ref{sec:pre_research}, continuous parameters $\bm{w}$ are discretized using Eq.~\eqref{eq:w}. For example, we here consider $D=2$, $K=3$, $\bm{b} = [b_1, b_2, b_3]^\mathrm{T}$. Then, Eq.~\eqref{eq:w} yields the following discretization:
\begin{align}
\begin{bmatrix}
w_{1} \\
w_{2} \\
\end{bmatrix}
=
\begin{bmatrix}
b_{1} & b_{2} & b_{3} & 0 & 0 & 0\\
0 & 0 & 0 & b_{1} & b_{2} & b_{3}\\
\end{bmatrix}
\begin{bmatrix}
\widetilde{z}_{11} \\
\widetilde{z}_{12} \\
\widetilde{z}_{13} \\
\widetilde{z}_{21} \\
\widetilde{z}_{22} \\
\widetilde{z}_{23} \\
\end{bmatrix}.
\label{eq:matrix_w_example}
\end{align}
That is, $3$ bits are assigned to each of the continuous variables $w_1$ and $w_2$, and the total number of binary variables is $6$.
We assume that the continuous variables $w_1$ and $w_2$ are strongly correlated. We then modify matrix $B$ by considering the strong correlation between the continuous variables. In accordance with the modification, the size of $\bm{z}$ is also reduced. The result of the modification is as follows:
\begin{align}
\begin{bmatrix}
w_{1} \\
w_{2} \\
\end{bmatrix}
=
\begin{bmatrix}
b_{1} & b_{2} & 0 & 0 & b_{3}\\
0 & 0 & b_{1} & b_{2} & b_{3}\\
\end{bmatrix}
\begin{bmatrix}
\widetilde{z}_{11} \\
\widetilde{z}_{12} \\
\widetilde{z}_{21} \\
\widetilde{z}_{22} \\
\widetilde{z}_{23} \\
\end{bmatrix}.
\label{eq:bit_cut_example}
\end{align}
Note that the redundant binary variable $\widetilde{z}_{13}$ is deleted by sharing the binary variable $\widetilde{z}_{23}$ with $w_1$ and $w_2$. If further reduction is necessary, we can use the following expression:
\begin{align}
\begin{bmatrix}
w_{1} \\
w_{2} \\
\end{bmatrix}
=
\begin{bmatrix}
b_{1} & 0 & b_{2} & b_{3}\\
0 & b_{1} & b_{2} & b_{3}\\
\end{bmatrix}
\begin{bmatrix}
\widetilde{z}_{11} \\
\widetilde{z}_{21} \\
\widetilde{z}_{22} \\
\widetilde{z}_{23} \\
\end{bmatrix}.
\label{eq:bit_cut_example2}
\end{align}
In summary, we seek a reduced matrix $B'$ and the corresponding vector for the binary variables $\bm{z}'$ with the following form:
\begin{align}
\bm{w} = B'\bm{z}'.
\label{eq:bitcut_w}
\end{align}
Note that the binary variables corresponding to the basis components with the largest absolute values are shared first. As mentioned in Sec.~\ref{sec:pre_research}, $\bm{b}$ is in ascending order. Therefore, the order of sharing is $b_K, b_{K-1}, \dots, b_1$. For a linear regression problem, there is a change in the optimization variables, and Eq.~\eqref{eq:lr_qubo} is modified as
\begin{align}
E(\bm{z}') = (\bm{z}')^\mathrm{T} (B')^\mathrm{T} X^\mathrm{T} X (B') \bm{z}' - 2 (\bm{z}')^\mathrm{T} (B')^\mathrm{T} X^\mathrm{T}\bm{y}.
\label{eq:lr_qubo_modified}
\end{align}
Note that the cost function is still in the QUBO formulation.
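A minimal NumPy sketch of the construction of $B'$ follows; it assumes that the selected pairs are disjoint, and the function name and argument layout are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def reduced_basis(b, D, pairs, n_share):
    # Build B' by letting each correlated pair (i, j) share the binary
    # variables of the n_share basis components of largest absolute
    # value (the last n_share entries of b, which is in ascending order).
    K = len(b)
    B = np.kron(np.eye(D), np.atleast_2d(b))  # start from the full B
    keep = np.ones(D * K, dtype=bool)
    for i, j in pairs:                 # pairs are assumed to be disjoint
        for k in range(K - n_share, K):
            B[i, j * K + k] = b[k]     # row i reuses the column of (j, k)
            keep[i * K + k] = False    # and drops its own column (i, k)
    return B[:, keep]
\end{verbatim}
With $D=2$, $K=3$, the single pair $(0,1)$, and one or two shared variables, this sketch reproduces the matrices in Eqs.~\eqref{eq:bit_cut_example} and \eqref{eq:bit_cut_example2}, respectively.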
\subsection{Procedure to obtain reduced QUBO formulation}
To reduce the redundant binary variables, we must find strongly correlated pairs in advance. Hence, we carry out a short sampling procedure for the original problem with continuous variables on a conventional computer. We then choose the correlated pairs that share the same binary variables; we determine the correlated pairs using the correlation matrix with a certain threshold.
Note that the threshold is determined by how many pairs we want to make, which in turn is affected by the limitation on the number of spins in the annealing machine.
The procedure is summarized as follows:
\begin{enumerate}
\item[(i)] Sample short-time series data of continuous variables using Monte Carlo methods.
\item[(ii)] Calculate the correlations between continuous variables from the time series data.
\item[(iii)] Define a basis matrix $B'$ and binary vector $\bm{z}'$ using the information on the correlations. That is, reduce matrix $B$ and $\bm{z}$ for the variable pairs whose correlation values exceed a certain threshold.
\item[(iv)] Derive the QUBO formulation by using matrix $B'$ and the reduced $\bm{z}'$.
\end{enumerate}
We give some comments on this procedure.
First, we need additional samplings in step (i). If the sampling step takes a long time, this procedure would not be practical even if the optimization is fast on annealing machines. However, we do not need highly accurate information on the correlations; only a rough estimation is sufficient for the pairing in step (ii). Hence, the short-time sampling is sufficient, and its computational cost is low.
Second, one may obtain the correlations analytically in some cases, for example, in Gaussian random Markov fields. In such cases, we can skip the Monte Carlo samplings in step (i).
Third, the number of reduced binary variables depends on the threshold with which we define the strongly correlated pairs. As stated above, we simply set the threshold in accordance with how much we want to reduce the number of variables.
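As an illustration of step (iii), the following sketch greedily selects disjoint pairs whose absolute correlations exceed the threshold, strongest first; this greedy rule is one plausible implementation, and the function name is ours.
\begin{verbatim}
import numpy as np

def select_pairs(corr, threshold):
    # Greedily pick disjoint variable pairs whose absolute correlation
    # exceeds the threshold, strongest first; a variable that already
    # belongs to a pair is not reused.
    rows, cols = np.triu_indices(corr.shape[0], k=1)
    order = np.argsort(-np.abs(corr[rows, cols]))
    used, pairs = set(), []
    for idx in order:
        i, j = rows[idx], cols[idx]
        if np.abs(corr[i, j]) < threshold:
            break
        if i not in used and j not in used:
            pairs.append((i, j))
            used.update((i, j))
    return pairs
\end{verbatim}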
\section{Numerical Experiment}
\label{sec:experiment}
As mentioned above, continuous variables with strong correlations tend to take roughly the same value. Hence, the proposed method reduces the number of binary variables without a significant loss in prediction accuracy.
To verify the performance of the proposed method, we applied it to a linear regression problem. We generated artificial data and calculated the mean absolute errors, comparing the performance of the proposed method with that of a random reduction method.
\subsection{Artificial data generation}
We use the following linear function to generate artificial data:
\begin{align}
f(\bm{x}) =& 15.5+15.5x_1+10.0x_2+10.0x_3+5.0x_4+5.0x_5 \nonumber \\
& -0.5x_6-0.5x_7-15.5x_8-15.5x_9.
\label{eq:target_problem}
\end{align}
Hence, the number of parameters is $D = 10$. First, we generate $x_d \, (d = 1, \dots, 9)$ from the uniform distribution on $[-1,1]$ and set them as the $i$-th input data $\bm{x}_i$. Observation noise $\eta_i \sim \mathcal{N}(0,1)$ is then sampled from the standard normal distribution. By adding $\eta_i$ to the true output $f(\bm{x}_i)$ as
\begin{align}
y_i = f(\bm{x}_i) + \eta_i,
\end{align}
we have the $i$-th data $(\bm{x}_i, y_i)$.
We repeat this process $1000$ times and split the generated data into two sets, i.e., training and test. The size of the training data set is $100$, and the remaining $900$ data pairs are used as the test data set. Since the problem is not difficult to solve, we use a small amount of training data.
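The data generation can be summarized by the following NumPy sketch; the seed is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)                      # arbitrary seed
w_true = np.array([15.5, 15.5, 10.0, 10.0, 5.0,
                   5.0, -0.5, -0.5, -15.5, -15.5])  # coefficients of f(x)

X = np.hstack([np.ones((1000, 1)),                  # dummy variable column
               rng.uniform(-1.0, 1.0, size=(1000, 9))])
y = X @ w_true + rng.standard_normal(1000)          # add N(0,1) noise
X_train, y_train = X[:100], y[:100]                 # 100 training pairs
X_test, y_test = X[100:], y[100:]                   # 900 test pairs
\end{verbatim}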
\subsection{Samplings and evaluating the correlations}
Once we fix the training data, the cost function in Eq.~\eqref{eq:reg_cost} is determined. As a preparation step, we then obtain short time-series data from Monte Carlo sampling of the linear regression cost function in Eq.~\eqref{eq:reg_cost}, which has the original continuous variables. Samples are generated from the Gibbs-Boltzmann distribution with the aid of the Metropolis rule. We generate the candidates for the Metropolis rule by adding a random variable drawn from $\mathcal{N}(0,0.5)$ to a randomly selected parameter. The sampling interval is $2 \times D = 20$ steps, and the length of the time-series data is $100$. The sampling temperature is $0.1$, although we confirmed that other temperatures are also suitable for obtaining a rough estimation of the correlations.
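A minimal sketch of this preparation step follows; we interpret $\mathcal{N}(0,0.5)$ as having standard deviation $0.5$, and the function name and seed are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def weight_correlations(X, y, n_samples=100, interval=20, T=0.1, seed=0):
    # Short Metropolis sampling of the continuous weights w from the
    # Gibbs-Boltzmann distribution exp(-E(w)/T), followed by the
    # correlation matrix of the recorded time series.
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    G, c = X.T @ X, X.T @ y
    energy = lambda w: w @ G @ w - 2.0 * w @ c
    w, e = np.zeros(D), 0.0            # the energy of w = 0 is 0
    samples = []
    for t in range(n_samples * interval):
        w_new = w.copy()
        w_new[rng.integers(D)] += rng.normal(0.0, 0.5)  # candidate move
        e_new = energy(w_new)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / T):
            w, e = w_new, e_new
        if (t + 1) % interval == 0:
            samples.append(w.copy())
    return np.corrcoef(np.array(samples), rowvar=False)
\end{verbatim}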
\begin{figure}[b]
\centering
\includegraphics[keepaspectratio,width=90mm]{fig1.eps}
\caption{(Color online) Correlations between continuous variables. Short-time series data from the Monte Carlo samplings yields the correlations. Rough estimation is sufficient.}
\label{fig:correlation}
\end{figure}
Figure~\ref{fig:correlation} shows an example of the correlation matrices. There are several correlated pairs. The most highly correlated one is $(w_1, w_3)$. Although $(w_1,w_2)$ and $(w_2,w_3)$ are also candidates, we do not use them because $w_1$ and $w_3$ already appear in the pair $(w_1, w_3)$. We then select the next pairs $(w_0, w_5)$ and $(w_8, w_9)$. We set the threshold to $0.8$. Hence, we reduced the binary variables for these three pairs, i.e., $(w_1,w_3)$, $(w_0, w_5)$, and $(w_8, w_9)$, for the case in Fig.~\ref{fig:correlation}.
\subsection{Creating basis matrix and binary vector}
Although we do not know detailed information about the parameters $\{w_i\}$ in advance, it is necessary to define a basis vector in Eq.~\eqref{eq:w_i}. Since our aim is to show the effectiveness of the proposed method, we choose the basis vector $\bm{b}$ as follows:
\begin{align}
\bm{b} =
\begin{bmatrix}
0.5 & -0.5 & 1 & -1 & 2 & -2 & 4 & -4 & 8 & -8
\end{bmatrix}^{\mathrm{T}}.
\label{eq:b}
\end{align}
Hence, $K=10$ in Eq.~\eqref{eq:w_i}. Note that this choice of the basis vector $\bm{b}$ is sufficient to represent the target function $f(\bm{x})$ in Eq.~\eqref{eq:target_problem}. This $\bm{b}$ yields the basis matrix $B$ via $B=I_D \otimes \bm{b}^\mathrm{T}$.
Next, we redefine the matrix as $B'$ with the aid of the above-mentioned highly correlated pairs. For the selected pairs, we share certain binary variables, as exemplified in Sec.~\ref{sec:choice_of_basis_matrix}; the sharing proceeds from the basis component with the highest absolute value to that with the lowest. In the experiment, we changed the number of shared binary variables per pair and investigated the resulting decrease in the prediction accuracy.
Finally, we redefine $\bm{z}'$ in accordance with the redefined matrix $B'$. We then obtain the final QUBO formulation in Eq.~\eqref{eq:lr_qubo_modified} for the annealing machines.
\subsection{Annealing procedure}
We use a simple SA algorithm to find optimal solutions $\bm{z}'$ of the derived QUBO formulation. The SA simulations ran on a MacBook Air with an Intel Core i5. In SA, the temperature schedule was as follows:
\begin{align}
T_t = T_0 \times \gamma^{t},
\end{align}
where $T_t$ is the temperature at the $t$-th iteration, $T_0$ is the initial temperature, and $\gamma$ is the decay rate. One iteration is defined as $2 \times \textrm{(\# of spins)}$ updates in accordance with the conventional Metropolis rule. After the iterations, we output the state that minimized the QUBO cost function during the search procedure. Table~\ref{tb:annealing_parameter} shows the SA parameters.
The obtained $\bm{z}'$ leads to the final estimation for the continuous variables $\{ w_i \}$ via Eq.~\eqref{eq:bitcut_w}.
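The following self-contained NumPy sketch implements this procedure with the parameters of Table~\ref{tb:annealing_parameter} for a QUBO cost $E(\bm{z})=-\bm{z}^\mathrm{T}Q\bm{z}$ with symmetric $Q$; for the regression problem, $Q=-A$ with $A$ as assembled in Sec.~\ref{sec:pre_research}. It is a reference implementation rather than an optimized one, and the function name and seed are our own choices.
\begin{verbatim}
import numpy as np

def simulated_annealing(Q, n_iter=10**6, T0=500.0, gamma=0.99996, seed=0):
    # SA with the geometric schedule T_t = T0 * gamma^t; one iteration
    # performs 2 x (number of spins) single-bit-flip Metropolis updates,
    # and the best state seen during the search is returned.
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    z = rng.integers(0, 2, size=n)
    e = -z @ Q @ z
    best_z, best_e = z.copy(), e
    T = T0
    for t in range(n_iter):
        for k in rng.integers(n, size=2 * n):
            dz = 1 - 2 * z[k]
            dE = -dz * (Q[k, k] + 2.0 * (Q[k] @ z - Q[k, k] * z[k]))
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                z[k] = 1 - z[k]
                e += dE
        if e < best_e:
            best_z, best_e = z.copy(), e
        T *= gamma
    return best_z, best_e
\end{verbatim}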
\begin{table}[b]
\centering
\caption{Parameters for annealing procedure}
\begin{tabular}{l|c} \hline\hline
Number of iterations & $10^6$ \\
Initial temperature $T_0$ & $500$ \\
Decay rate $\gamma$ for temperature scheduling & $0.99996$ \\ \hline\hline
\end{tabular}
\label{tb:annealing_parameter}
\end{table}
\subsection{Numerical results}
\subsubsection{Performance in prediction accuracy}
\begin{figure}[t]
\includegraphics[keepaspectratio, width=90mm]{fig2.eps}
\caption{(Color online) Mean absolute error of annealing results. Filled squares show errors by employing the reduction method using randomly selected pairs. Filled circles correspond to those with the proposed method.}
\label{fig:regression_compare}
\end{figure}
First, we investigated how the number of reductions affects the prediction accuracy. For comparison, we used a random reduction method, with which we randomly select the same number of pairs of continuous variables. We conducted $10$-fold cross-validation and evaluated the mean absolute errors (MAEs) and standard deviations. Note that the correlation matrix and the selected pairs vary with the training data set. Hence, the number of selected pairs varied with each training data set.
Figure~\ref{fig:regression_compare} shows the numerical results. The horizontal axis represents the number of reduced binary variables for each pair; for example, `2bit cut' indicates that a selected pair of continuous variables shares two binary variables. Note that the total number of reduced binary variables was generally larger than the value on the horizontal axis because we selected several pairs of variables. The filled squares show the errors with the reduction method using randomly selected pairs. The filled circles correspond to those with the proposed method. The error bars indicate the standard deviations.
Since we added noise with the standard normal distribution, the minimum value of the MAE was about $1$. Figure~\ref{fig:regression_compare} shows that the random reduction method performed worse as the number of shared bits increased. Even the $1$-bit cut resulted in a sudden decrease in the prediction accuracy.
In contrast, we see only a moderate increase in the MAE for the proposed method, indicating its validity. Even the $10$-bit cut case yielded better accuracy than the $1$-bit cut case with the random reduction method, although this extreme case is not practical.
Therefore, it is possible to reduce several shared binary variables without significant degradation in prediction accuracy.
\subsubsection{Performance in variable reduction and computational time}
\begin{table}[tb]
\centering
\caption{Total number of binary variables and computation time}
\begin{tabular}{c|cc} \hline\hline
\begin{minipage}{20mm}
\begin{center}
Number of \\reduced binary variables per pair
\end{center}
\end{minipage}
&
\begin{minipage}{25mm}
\begin{center}
Total number of\\ binary variables\\ (average for $10$ trials)
\end{center}
\end{minipage}
&
\begin{minipage}{25mm}
\begin{center}
Computational time [sec]\\ (average for $10$ trials)
\end{center}
\end{minipage} \\
\hline
$0$ & $100 \pm 0$ & $331.7 \pm 1.7$\\
$1$ & $96.5 \pm 0.7$ & $309.0 \pm 4.5$\\
$2$ & $93.0 \pm 1.3$ & $287.4 \pm 8.0$\\
$3$ & $89.5 \pm 2.0$ & $266.6 \pm 10.9$\\
$4$ & $86.0 \pm 2.7$ & $247.2 \pm 14.6$\\
$5$ & $82.5 \pm 3.4$ & $228.7 \pm 17.2$\\
$6$ & $79.0 \pm 4.0$ & $210.2 \pm 20.0$\\
$7$ & $75.5 \pm 4.7$ & $193.0 \pm 22.4$\\
$8$ & $72.0 \pm 5.4$ & $176.2 \pm 24.3$\\
$9$ & $68.5 \pm 6.0$ & $160.2 \pm 26.3$\\
$10$ & $65 \pm 6.7$ & $145.3 \pm 26.8$\\
\hline\hline
\end{tabular}
\label{tb:number_of_bit}
\end{table}
Table~\ref{tb:number_of_bit} shows the number of binary variables and computational time; the values in the table are the average for ten trials in the cross-validation procedure. The first row in the table corresponds to the naive application of the QUBO formulation without any reduction.
We see that the reduction with $6$ variables per pair yielded about a 20\% reduction in the total number of variables. Even in the $6$-bit cut case, the prediction accuracy did not degrade much, as shown in Fig.~\ref{fig:regression_compare}. As stated in Sec.~\ref{introduction}, the limitation on available variables is severe with annealing machines. From a practical viewpoint, the proposed method will therefore be crucial for embedding the QUBO formulation into annealing machines. The reduction in variables also contributed to faster computation, as shown in Table~\ref{tb:number_of_bit}.
\section{Conclusion}
We proposed an efficient correlation-based discretization method for annealing machines. The numerical experiment showed that it is possible to reduce the number of binary variables without a large loss in prediction accuracy. Recall that there are the following two constraints with annealing machines:
\begin{itemize}
\item[1.] We need the QUBO formulation as the input; i.e., only binary variables are acceptable.
\item[2.] There is a severe limitation on the number of binary variables; annealing machines can handle only a limited number of them.
\end{itemize}
Compared with naive methods used in previous studies, the proposed method adequately addresses these two constraints. Note that we can flexibly increase or decrease the total number of binary variables by changing the correlation threshold. When one wants to use annealing machines for large-scale problems, the proposed method could be of great help in practice.
This study was the first attempt at a correlation-based reduction idea. Although the proposed method was verified, further studies are needed with other examples and more practical ideas. There are remaining issues. First, the proposed method handles only correlated pairs, and there are various other ways to combine correlated variables; we will investigate which type of combination yields good performance. Second, we assumed fully connected annealing machines. Some annealing machines are sparsely connected, such as the D-Wave Advantage and the CMOS Annealing Machine, and another procedure is needed to embed the derived QUBO formulation into such machines.
\section{Introduction} \label{sec:intro}
Exoplanets orbiting close to their host stars are subjected to intense X-ray (1--100\,$\rm\AA$) and extreme ultraviolet (100--912\,$\rm\AA$) radiation (hereafter XUV). Under this intense XUV irradiation, gas-rich planets are likely to undergo hydrodynamic atmospheric escape that pushes gas beyond their Roche lobes \citep{2003APJL...598..L121,2007Natur...450..854,2013Icar...226..1678}. By studying the escape of the atmosphere, we can understand the composition and structure of exoplanetary atmospheres. Understanding this process is also imperative for probing the habitability of exoplanets because water can be lost through hydrodynamic escape \citep{1996JGR...101..26039,2007Icar...191..453,2010MNRAS...409..963,2013APJ...765..131,2017arxiv...1705..05535, 2019ApJ...872..99}.
Observations of transiting systems have revealed excess absorption of the stellar spectrum beyond the optical occultation by the planet itself. The first such detection was in the ultraviolet Ly$\alpha$ line of the hot Jupiter HD 209458b, for which \citet{2003Nature...422..143} found a $\sim$15\% Ly$\alpha$ absorption by analysing medium-resolution HST/STIS observations. The Ly$\alpha$ absorption was confirmed by \citet{2007ApJ...671..L61,2008ApJ...688..1352}, who reported a lower absorption of $\sim$8.9$\%$ $\pm$ 2.1$\%$. Subsequently, \citet{2010ApJ...709..1284} reanalysed the HST/STIS observations of HD 209458b and found a Ly$\alpha$ absorption depth of 6.6$\%$ $\pm$ 2.3$\%$ in the wavelength range [1212 $\rm\AA$, 1220 $\rm\AA$]. However, the optical occultation by HD 209458b is only $\sim$1.5\% \citep{2000APJ...529..L41,2000APJ...529..L45}. An extended neutral hydrogen envelope around the planet is therefore a good interpretation of the excess absorption. Excess Ly$\alpha$ absorption has also been detected for HD 189733b and GJ 436b. For HD 189733b, an absorption of $\sim$5$\%$ has been detected by \citet{2010A&A...514..72,2012A&A...543..L4} and \citet{2013A&A...551A..A63}. Observations of the warm Neptune GJ 436b also indicate an extended exosphere \citep{2014APJ...786..132}: a large comet-like tail of hydrogen surrounds GJ 436b, and the planet obscures almost 50$\%$ of the Ly$\alpha$ emission of its host star \citep{2011A&A...529..A80,2015Nature...522..459}.
Moreover, \citet{2007Nature...445..511} revealed an excess $\sim$0.03\% absorption in the Balmer jump and continuum of HD 209458b. \citet{2012ApJ...751..86} reported the detection of excess H$\alpha$ absorption for HD 189733b, which hints at the presence of excited hydrogen atoms in its atmosphere. Recently, \citet{2018NatAs...2..714Y} found excess H$\alpha$ absorption in KELT-9b. Owing to the very high temperature of KELT-9b (4600\,K), the excited atoms produce an extended hydrogen envelope of 1.64 planetary radii, which implies the escape of hydrogen. Excess absorption has also been found in helium and heavier elements. \citet{2004APJ...604..L69} detected O $_{\rm\uppercase\expandafter{\romannumeral1}}$ and C $_{\rm\uppercase\expandafter{\romannumeral2}}$ in the atmosphere of HD 209458b. Subsequently, \citet{2013A&A...553A..A52} also detected oxygen atoms and possibly C $_{\rm\uppercase\expandafter{\romannumeral2}}$ in the upper atmosphere.
Generally, the excess absorptions are attributed to the escape of gas in the extended envelope. Hydrodynamic simulations including radiative transfer and photochemistry have been implemented to study the mechanism of atmospheric escape \citep{2004Icar...170..167,2005APJ...621..1409,2007P&SS...55..1426,2008P&SS...56..1260,2009APJ...693..23,2011ApJ...733..98, 2013ApJ...766..102,
2016MNRAS...460..1300, 2016A&A...586..75, 2018A&A...619A.151K, 2018ApJ...866...47S}, and such models can explain the observed Ly$\alpha$ absorption of HD 209458b and HD 189733b to some extent \citep{2013Icar...226..1678,2016ApJ...818..107,2019arXiv190310772O}. On the other hand, it is believed that planetary magnetic fields play an important role in controlling atmospheric escape \citep{2010APJ...709..670,2011APJ...730..27,2011APJ...728..152,2012APJL...753..L4,2014MNRAS...444..3761,2015APJ...813..50,2017MNRAS.470.4330E,2019MNRAS.483.2600D}.
Thousands of exoplanets have been discovered to date; however, only a few of them have been detected undergoing atmospheric escape, and most theoretical studies focus on the well-known transit systems mentioned above. It is therefore important to characterize the as-yet unexplored atmospheric escape of other systems. To explore the properties of escaping atmospheres, it is necessary to know how the mass loss rates depend on the physical parameters of exoplanets, since the mass loss rate determines the level of Ly$\alpha$ absorption to a certain extent (the absorption also depends on the degree of ionization). The mass loss rates are related to many physical parameters, such as the masses and radii, but to a large extent they are determined by the XUV fluxes received by the planets and the mean densities of the planets, namely, $\dot{M}\varpropto \frac{F_{xuv}}{\rho}$ \citep{2003APJL...598..L121,2009AA...506..399}. The energy-limited equation presented by \citet{2003APJL...598..L121} and \citet{2009AA...506..399} has been widely used to estimate the mass loss rates. This hints that one can obtain the general trend of mass loss rates if the distributions of the XUV fluxes and the mean densities of many planets are known.
In this paper, we aim to compare the atmospheric properties of exoplanets with different XUV fluxes and densities and to inspect whether the energy-limited equation is suitable for them. Furthermore, we investigate the absorption of stellar Ly$\alpha$ by the atmospheres of exoplanets for a variety of samples ranging from Earth-like to Jupiter-like planets. As an exploratory work, the absorption by the interstellar medium is not included because the goal of this paper is to discuss how stellar Ly$\alpha$ is absorbed by the atmosphere of an exoplanet rather than to predict observable signals. To discuss the properties of the atmospheres and the absorption of Ly$\alpha$, we select a sample from the confirmed planets (Section~2.1) and calculate the mass loss rates in a large parameter space. We obtain the XUV spectra by using the method of \citet{2011A&A...532..6} (Section~2.2). The hydrodynamic model and the calculation of the Ly$\alpha$ absorption are presented in Sections~2.3 and 2.4. In Section~\ref{sec:result2}, we present the results for our selected sample and give a statistical analysis of them. In Section~\ref{sec:discu}, we discuss the limitations of our work. Finally, in Section~\ref{sec:summ}, we summarize the results.
\section{Method and Model} \label{sec:memo}
\subsection{Sample selected } \label{subsec:samples}
The observations show that the radii of most exoplanets are smaller than 2 R$_J$, where R$_J$ is the radius of Jupiter (http://exoplanet.eu). Thus, we confined the planets to radii less than 2 R$_J$. In addition, for exoplanets with high masses the atmospheres can be compact, and the escape of species is relatively difficult. We thus set the upper limit of the mass to 2 M$_J$ (M$_J$ is the mass of Jupiter). Finally, we further selected the sample by gravitational potential. The calculations of \citet{2016A&A...585..2s} found that hydrodynamic escape of the atmosphere is difficult for exoplanets with high gravitational potentials ($>$4$\times$10$^{13}$ erg g$^{-1}$), which means that in order to undergo hydrodynamic escape, exoplanets with high masses should have large radii (or relatively low mean densities). Thus, the gravitational potentials of the sample planets are smaller than 4$\times$10$^{13}$ erg g$^{-1}$.
According to \citet{2014ApJ...794..133}, planets with radii of 0.0885-0.177 R$_J$ are super-Earths, 0.177-0.354 R$_J$ are mini-Neptunes, and 0.354-0.531 R$_J$ are super-Neptunes. In this investigation, we also classified the planets by size: planets with radii smaller than 0.2 R$_J$ are Earth-like planets, 0.2-0.6 R$_J$ are Neptune-like planets, 0.6-1.0 R$_J$ are Saturn-like planets, and planets with radii larger than 1.0 R$_J$ are Jupiter-like planets.
\citet{2007Natur...450..854} suggested that exoplanets with separations smaller than 0.15 AU undergo significant mass loss. Hence, we selected planets whose separations are less than 0.1 AU. The separations are in the range of 0.01-0.09 AU for Earth-like and Neptune-like planets, 0.04-0.07 AU for Saturn-like planets, and 0.02-0.08 AU for Jupiter-like planets. The host stellar masses of the systems are in the range of 0.08-1.105 M$_{\sun}$ (M$_{\sun}$ is the mass of the Sun) for Earth-like, 0.15-1.223 M$_{\sun}$ for Neptune-like, 0.816-1.3 M$_{\sun}$ for Saturn-like, and 0.8-1.46 M$_{\sun}$ for Jupiter-like planets. Even though the stellar masses in some Earth-like and Neptune-like systems are relatively small, these systems account for a small proportion of each group.
Based on the conditions above, we selected 90 exoplanets from the real systems (http://exoplanet.eu and https://exoplanetarchive.ipac.caltech.edu/), supplemented by artificially constructed planets derived from the real ones. For the sake of diversity, we investigated planets from Earth-sized to Jupiter-sized. We emphasize that it is not the goal of this paper to predict the real mass loss rates and Ly$\alpha$ absorptions of real planets because accurate characterizations are difficult owing to the lack of some important physical inputs for the host stars. For example, the XUV spectral energy distributions (SEDs) cannot be well determined from observations because of obscuration by the interstellar medium (ISM). In addition, the compositions of Earth-like planets can be different from the hydrogen-dominated atmospheres of giant planets. Here we assumed that they hold a hydrogen-rich envelope. The hydrogen in their atmospheres can originate from the dissociation of H$_{2}$O \citep[]{1983Icarus...53..479,2019ApJ...872..99}. Furthermore, a hydrogen-rich atmosphere can appear in the early phase of Earth-like planets owing to outgassing or accretion \citep[]{2008APJ...685..1237}. The observational signals in Ly$\alpha$ for Earth-like planets have been discussed by \citet{2019A&A...622..46} and \citet{2019A&A...623A.131K}.
\begin{figure}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.in]{fig1a.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=2.9in]{fig1b.pdf}
\end{minipage}
\caption{The sample of this work. In the left panel, the x-axis is the planetary mass, the y-axis is the planetary radius and the contours are the planetary mean density plotted in log$_{10}$ form. The planets in the left panel are the observed ones. In the right panel, the x-axis is the planetary mean density and y-axis is integrated XUV flux. The dark blue asterisks represent the planets corresponding to the same planets in the left panel and the light blue diamonds represent the artificially made planets.\label{sample12} }
\end{figure}
The left panel of Figure~1 shows the masses and radii of the sample planets (in the panel, log $\rho$ denotes the logarithm of the planetary mean density). One can see that our sample covers various exoplanets ranging from Jupiter-like to Earth-like planets. We neglected Earth-like planets with large radii because such planets may be composed of gas rather than rock. As discovered by \citet{2003APJL...598..L121} and \citet{2009AA...506..399}, the mass loss rates are controlled essentially by the XUV flux and the mean density of the planets. Furthermore, the Ly$\alpha$ absorption by the planetary atmosphere can be estimated after the mass loss rates are obtained. Therefore, we plot the sample in the Fxuv-$\overline{\rho}$ diagram (the right panel of Figure~1; for more details of Fxuv, see Section~\ref{subsec:xuv}). The dark blue asterisks represent the planets of the radius-mass diagram. As we can see, most planets are located near the center of the Fxuv-$\overline{\rho}$ diagram, and their mean densities are similar to or lower than that of Jupiter. The planets with high densities are located at the right of the panel. In order to investigate the influence of Fxuv and $\overline{\rho}$ on the mass loss rates and the absorption of stellar Ly$\alpha$ over a larger Fxuv-$\rho$ range, we added many artificially constructed planets, represented by the light blue diamonds. Most of the artificial planets were made by changing the input Fxuv, and the rest were made by changing the planetary mass and radius. Most of the mock planets made by changing the masses and radii are Earth-like or Neptune-like planets, since fewer planets in this range were initially selected than Jupiter-like planets. By modifying the masses and radii of those Earth-like to Neptune-like planets, the sample planets cover the Fxuv-$\overline{\rho}$ diagram uniformly. In total, about 450 planets are included.
In our solar system, the mean densities of the rocky planets are higher than those of the gaseous planets. The distribution of the mean densities of our sample is similar to that of the solar system. In our sample, the mean densities of the Jupiter-like and Saturn-like planets are in the range of 0.08-1.4 g/cm$^{3}$. For the Neptune-like planets, the mean densities are in the range of 0.3-2.65 g/cm$^{3}$. The smallest density of the Earth-like planets is 0.9 g/cm$^{3}$, while the largest is 8.2 g/cm$^{3}$. Such a distribution means that the densities of most Jupiter-like planets are lower than those of Earth-like planets. Furthermore, the distributions of the gravitational potentials differ among the planet classes. For Earth-like and Neptune-like planets, the gravitational potentials are smaller than 3$\times$10$^{12}$ erg g$^{-1}$. However, the gravitational potentials of Saturn-like and Jupiter-like planets cover a large range (1$\times$10$^{12}$-3$\times$10$^{13}$ erg g$^{-1}$) and are generally greater than those of smaller planets, although there is overlap around 3$\times$10$^{12}$ erg g$^{-1}$. The highest flux we set is 4$\times$10$^{5}$ erg/cm$^{2}$/s, which corresponds to a planet orbiting the Sun at 0.003 AU. The lowest XUV flux is 2$\times$10$^{2}$ erg/cm$^{2}$/s, below which hydrodynamic escape can be difficult. Our calculations cover a large range in the Fxuv-$\rho$ diagram, and at any given flux level there are many planets with different mean densities. By investigating the dependence of the mass loss rates on the XUV flux and mean density, we can infer the general trend of Ly$\alpha$ absorption and explore which factors determine the absorption levels.
\subsection{The XUV spectra of stars } \label{subsec:xuv}
As discussed above, the mass loss rates are related to the properties of the planet and the XUV irradiation. It is difficult to obtain accurate stellar spectra because the ISM obscures the EUV emission of the stars. \citet{2011A&A...532..6} fitted observations of late-F to early-M dwarfs and found that the X-ray and EUV luminosities (from which the integrated XUV flux received at a planetary orbit follows) can be described by the empirical relations:
\begin{equation}
\log L_{\rm euv}=(29.12\pm0.11)-(1.24\pm0.15)\log t
\end{equation}
\begin{equation}
L_{x}=\left\{
\begin{array}{ll}
6.3\times10^{-4}\,L_{\rm bol} & (t \leq t_i) \\
1.89\times10^{28}\,t^{-1.55} & (t > t_i)
\end{array}
\right.
\end{equation}
with $t_i=2.03\times10^{20}\,L_{\rm bol}^{-0.65}$, where $t$ is the stellar age in Gyr, and $L_{\rm euv}$ and $L_{x}$ (erg/s) are the luminosities in the EUV and X-ray bands, respectively.
This means that the integrated X-ray and EUV fluxes can be obtained if the age of the star is known. In this paper, we only chose systems with given ages, obtained from http://exoplanet.eu. Even though the ages are usually given with relatively large uncertainties, an accurate age is not necessary because the purpose of the present work is to explore the response of the mass loss rates and the Ly$\alpha$ absorption depth to the XUV flux; the age only provides an estimate of the XUV flux. In order to simulate the XUV spectra emitted by the stellar coronae, we used the APEC model in XSPEC over the wavelength range of 1 $\rm\AA$ to 912 $\rm\AA$. The free parameters in APEC are the metallicity and the coronal temperature. We set the metallicity to the solar value \citep{2009ARA&A...47..481}, so the spectra only depend on the coronal temperature. To obtain the XUV spectra of all stars, we calculated a grid of spectra with XSPEC-APEC. By comparing the theoretical and empirical ratios of L$_x$ to L$_{euv}$ at a given age, the optimum coronal temperatures of the stars can be determined, and the XUV spectra of all stars are then obtained. We inspected the profiles of those XUV spectra and found that the $\beta$ indexes of all spectra are $\sim$0.9 ($\beta$=F$_{1-400{\rm\AA}}$/F$_{1-912{\rm\AA}}$; for details see \citet{2016ApJ...818..107}). Thus, the influence of the profiles of the XUV spectra is slight (we evaluated the variations of the mass loss rates produced by the different profiles and found the changes are smaller than 5\%). We emphasize that the XUV fluxes obtained from Equations (1) and (2) can deviate from the true values for real planets owing to the uncertainties of the ages and the models. However, the empirical spectra capture the essential features of those stars, so as a theoretical exploration the spectra can be used to describe the response of a planetary atmosphere to different levels of Fxuv. On this premise, we explore how the mass loss rates and the Ly$\alpha$ absorptions are affected by the XUV flux of the stars and the properties of the planets.
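For reference, the relations above are straightforward to evaluate. The following Python sketch (the function names are ours and purely illustrative; ages are in Gyr and luminosities in erg s$^{-1}$) converts a stellar age and bolometric luminosity into $L_{euv}$, $L_{x}$, and the integrated XUV flux at a given orbital distance:
\begin{verbatim}
import numpy as np

AU_CM = 1.495978707e13  # 1 AU in cm

def xuv_luminosities(age_gyr, L_bol):
    """L_euv and L_x (erg/s) from the empirical relations, Eqs. (1)-(2)."""
    L_euv = 10.0 ** (29.12 - 1.24 * np.log10(age_gyr))
    t_i = 2.03e20 * L_bol ** (-0.65)  # transition age (Gyr)
    L_x = 6.3e-4 * L_bol if age_gyr <= t_i else 1.89e28 * age_gyr ** (-1.55)
    return L_euv, L_x

def f_xuv(age_gyr, L_bol, a_au):
    """Integrated XUV flux (erg cm^-2 s^-1) at orbital distance a_au (AU)."""
    L_euv, L_x = xuv_luminosities(age_gyr, L_bol)
    return (L_euv + L_x) / (4.0 * np.pi * (a_au * AU_CM) ** 2)

# Example: a star of solar age and luminosity, planet at 0.05 AU.
print(f_xuv(4.6, 3.828e33, 0.05))  # ~3e3 erg cm^-2 s^-1
\end{verbatim}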
\subsection{Hydrodynamic simulations } \label{subsec:hdsim}
We used the 1D hydrodynamic model of \citet{2018CHA&A...42..81} to simulate the atmospheric structures of our planet sample. Compared with the earlier models of \citet{2013ApJ...766..102} and \citet{2016ApJ...818..107}, there are two improvements in the model of \citet{2018CHA&A...42..81}. First, the former models use the solar EUV (100-912 $\rm\AA$) as the stellar radiation spectrum, while in the latter model the radiation spectrum is extended to the XUV (1-912 $\rm\AA$). The photoionization cross sections in the X-ray band are smaller than those in the EUV, so the heating by X-rays is not remarkable compared with the EUV; however, photoionization produced by X-rays can be important for heavy elements. Second, the former models only include hydrogen, helium, and electrons, whereas the latter also includes heavier elements such as C, N, O, and Si. Specifically, the latter model includes 18 kinds of particles: 7 kinds of neutral particles, 10 kinds of ions, and electrons. The photochemical reactions include photoionization, photodissociation, impact ionization, recombination, charge exchange, and other important reactions (Table~1). For more details of the hydrodynamic model, the reader is referred to the papers above.
In the simulations, the metallicities of the planets are consistent with those of their host stars. For planets with unknown metallicities, we used the solar metallicity. In order to express the average of the energy received by the planet over the whole surface, the XUV fluxes are divided by a factor of 4. We chose the outer boundaries of the simulations to be the radii of the host stars. For systems whose host stellar radii far exceed the planetary radii (such as Earth-like planets), the computational outer boundaries are set to 15 times the planetary radii, and the results are extrapolated to the radii of the host stars in order to cover the stellar disk.
\begin{longtable}{lll}
\caption{The chemical reactions and their rate coefficients.}
\label{table:label} \\
\hline
H${_2}$ + $h\nu$ $\rightarrow$ H${_2^+}$ + e & & \citet{2016ApJ...818..107} \\
H${_2}$ + $h\nu$ $\rightarrow$ H${^+}$ + H + e & & \citet{2016ApJ...818..107} \\ H + $h\nu$ $\rightarrow$ H${^+}$ + e & & \citet{2002APJ...575..33} \\
He + $h\nu$ $\rightarrow$ He${^+}$ + e & & \citet{2002APJ...575..33} \\
C + $h\nu$ $\rightarrow$ C${^+}$ + e & & \citet{1996ApJ...465..487V}\\
N + $h\nu$ $\rightarrow$ N${^+}$ + e & & \citet{1996ApJ...465..487V} \\
O + $h\nu$ $\rightarrow$ O${^+}$ + e & & \citet{1996ApJ...465..487V} \\
Si + $h\nu$ $\rightarrow$ Si${^+}$ + e & & \citet{1996ApJ...465..487V} \\
Si$^+$ + $h\nu$ $\rightarrow$ Si$^{2+}$ + e & & \citet{1996ApJ...465..487V} \\
H${_2}$ + M $\rightarrow$ H + H + M & 1.5 $\times$ 10$^{-9}$e$^{-48000/T}$ & \citet{1992JPCRD..21..411B} \\
H + H + M $\rightarrow$ H${_2}$ + M & 8.0 $\times$ 10$^{-33}$(300/T)$^{0.6}$ & \citet{1970JChPh..53.4395H}\\
H${_2^+}$ + H${_2}$ $\rightarrow$ H${_3^+}$ + H & 2.0 $\times$ 10$^{-9}$ & \citet{1974JChPh..60.2840T}\\
H${_3^+}$ + H $\rightarrow$ H${_2^+}$ + H${_2}$ & 2.0 $\times$ 10$^{-9}$ & \citet{2004Icar...170..167}\\
H${_2^+}$ + H $\rightarrow$ H${^+}$ + H${_2}$ & 6.4 $\times$ 10$^{-10}$ & \citet{1979JChPh..70.2877K}\\
H${^+}$ + H${_2}$(v $\geq$ 4) $\rightarrow$ H${_2^+}$ + H & 1.0 $\times$ 10$^{-9}$e$^{-21900/T}$ & \citet{2004Icar...170..167}\\
He${^+}$ + H${_2}$ $\rightarrow$ HeH${^+}$ + H & 4.2 $\times$ 10$^{-13}$ & \citet{1989JChPh..91.4593S} \\
He${^+}$ + H${_2}$ $\rightarrow$ H${^+}$ + H + He & 8.8 $\times$ 10$^{-14}$ & \citet{1989JChPh..91.4593S}\\
HeH${^+}$ + H${_2}$ $\rightarrow$ H${_3^+}$ + He & 1.5 $\times$ 10$^{-9}$ & \citet{1980JChPh..73.4976B}\\
HeH${^+}$ + H $\rightarrow$ H${_2^+}$ + He & 9.1 $\times$ 10$^{-10}$ & \citet{1979JChPh..70.2877K}\\
H${^+}$ + e $\rightarrow$ H + $h\nu$ & 4.0 $\times$ 10$^{-12}$(300/T)$^{0.64}$ & \citet{1995MNRAS.272...41S}\\
He${^+}$ + e $\rightarrow$ He + $h\nu$ & 4.6 $\times$ 10$^{-12}$(300/T)$^{0.64}$ & \citet{1995MNRAS.272...41S}\\
H${_2^+}$ + e $\rightarrow$ H + H & 2.3 $\times$ 10$^{-8}$(300/T)$^{0.4}$ &\citet{1977JPhB...10.3797A}\\
H${_3^+}$ + e $\rightarrow$ H${_2}$ + H & 2.9 $\times$ 10$^{-8}$(300/T)$^{0.65}$ & \citet{1994Sci...263..785S}\\
H${_3^+}$ + e $\rightarrow$ H + H + H & 8.6 $\times$ 10$^{-8}$(300/T)$^{0.65}$ & \citet{1995PhRvL..74R4099D}\\
HeH${^+}$ + e $\rightarrow$ He + H & 1.0 $\times$ 10$^{-8}$(300/T)$^{0.6}$ & \citet{1989PhRvA..40.4318Y}\\
H + e $\rightarrow$ H${^+}$ + e + e & 2.91 $\times$ 10$^{-8}$ ($\rm\frac{1}{0.232+U}$)U$^{0.39}$exp(-U), U=13.6/$E_e$eV & \citet{1997ADNDT..65....1V}\\
He + e $\rightarrow$ He${^+}$ + e + e & 1.75 $\times$ 10$^{-8}$ ($\rm\frac{1}{0.180+U}$)U$^{0.35}$exp(-U),U=24.6/$E_e$eV& \citet{1997ADNDT..65....1V} \\
H + He${^+}$ $\rightarrow$ H${^+}$ + He & 1.25 $\times$ 10$^{-15}$(300/T)$^{-0.25}$ & \citet{2007ApJ...666....1G}\\
H${^+}$ + He $\rightarrow$ H + He${^+}$ & 1.75 $\times$ 10$^{-11}$(300/T)$^{0.75}$exp(-128000/T)& \citet{2007ApJ...666....1G}\\
O + e $\rightarrow$ O${^+}$ + e + e & 3.59 $\times$ 10$^{-8}$ ($\rm\frac{1}{0.073+U}$)U$^{0.34}$exp(-U),U=13.6/$E_e$eV & \citet{1997ADNDT..65....1V}\\
C + e $\rightarrow$ C${^+}$ + e + e & 6.85 $\times$ 10$^{-8}$ ($\rm\frac{1}{0.193+U}$)U$^{0.25}$exp(-U),U=11.3/$E_e$eV& \citet{1997ADNDT..65....1V}\\
O${^+}$ + e $\rightarrow$ O + $h\nu$ & 3.25 $\times$ 10$^{-12}$(300/T)$^{0.66}$ & \citet{2007AA...466.1197W}\\
C${^+}$ + e $\rightarrow$ C + $h\nu$ & 4.67 $\times$ 10$^{-12}$(300/T)$^{0.60}$ & \citet{2007AA...466.1197W}\\
C${^+}$ + H $\rightarrow$ C + H${^+}$ & 6.30 $\times$ 10$^{-17}$(300/T)$^{-1.96}$exp(-170000/T)& \citet{1998ApJ...502.1006S}\\
C + H${^+}$ $\rightarrow$ C${^+}$ + H & 1.31 $\times$ 10$^{-15}$(300/T)$^{-0.213}$ & \citet{1998ApJ...502.1006S}\\
C + He${^+}$ $\rightarrow$ C${^+}$ + He & 2.50 $\times$ 10$^{-15}$(300/T)$^{-1.597}$ & \citet{2007ApJ...666....1G}\\
O${^+}$ + H $\rightarrow$ O + H${^+}$ & 5.66 $\times$ 10$^{-10}$(300/T)$^{-0.36}$exp(8.6/T)& \citet{2007AA...466.1197W}\\
O + H${^+}$ $\rightarrow$ O${^+}$ + H & 7.31 $\times$ 10$^{-10}$(300/T)$^{-0.23}$exp(-226.0/T) & \citet{2007AA...466.1197W}\\
N + e $\rightarrow$ N${^+}$ + e + e & 4.82 $\times$ 10$^{-8}$ ($\rm\frac{1}{0.0652+U}$)U$^{0.42}$exp(-U),U=14.5/$E_e$eV & \citet{1997ADNDT..65....1V}\\
N${^+}$ + e $\rightarrow$ N + $h\nu$ & 3.46 $\times$ 10$^{-12}$(300/T)$^{0.608}$ & \citet{1973AA....25..137A}\\
Si + e $\rightarrow$ Si${^+}$ + e + e & 1.88 $\times$ 10$^{-7}$ ($\rm\frac{1+\sqrt{U}}{0.376+U}$)U$^{0.25}$exp(-U),U=8.2/$E_e$eV & \citet{1997ADNDT..65....1V}\\
Si${^+}$ + e $\rightarrow$ Si + $h\nu$ & 4.85 $\times$ 10$^{-12}$(300/T)$^{0.60}$ & \citet{1973AA....25..137A}\\
Si${^+}$ + e $\rightarrow$ Si${_2^+}$ + e + e & 6.43 $\times$ 10$^{-8}$ ($\rm\frac{1+\sqrt{U}}{0.632+U}$)U$^{0.25}$exp(-U),U=16.4/$E_e$eV &\citet{1997ADNDT..65....1V} \\
Si${_2^+}$ + e $\rightarrow$ Si${^+}$ + $h\nu$ & 1.57 $\times$ 10$^{-11}$(300/T)$^{0.786}$ & \citet{1973AA....25..137A}\\
H${^+}$ + Si $\rightarrow$ H + Si${^+}$ & 7.41 $\times$ 10$^{-11}$(300/T)$^{-0.848}$ & \citet{2007ApJ...666....1G}\\
He${^+}$ + Si $\rightarrow$ He + Si${^+}$ & 3.30 $\times$ 10$^{-9}$ & \citet{2007AA...466.1197W}\\
C${^+}$ + Si $\rightarrow$ C + Si${^+}$ & 2.10 $\times$ 10$^{-9}$ & \citet{2007AA...466.1197W}\\
H + Si${_2^+}$ $\rightarrow$ H${^+}$ + Si${^+}$ & 2.20 $\times$ 10$^{-9}$(300/T)$^{-0.24}$ & \citet{1996ApJS..106..205K}\\
H${^+}$ + Si${^+}$ $\rightarrow$ H + Si${_2^+}$ & 7.37 $\times$ 10$^{-10}$(300/T)$^{-0.24}$ & \citet{1996ApJS..106..205K}\\
\hline
\end{longtable}
\subsection{Radiative transfer} \label{subsec:raditran}
In order to calculate the absorption of stellar Ly$\alpha$ by the atmospheres of the planets, we use the following radiative transfer equations:
\begin{equation}
F_{out}=F_{in}e^{-\tau}
\end{equation}
\begin{equation}
\tau=\int_{Z_{0}}^{Z} n\sigma dl
\end{equation}
Finally, the absorption depth by the planet and its atmosphere can be expressed as
\begin{equation}
Absorption\,depth=\frac{F_{in}-F_{out}}{F_{in}}
\end{equation}
In Equations (3)-(5), F$_{in}$ is the incident intrinsic stellar flux (note that it is different from the stellar Fxuv) and F$_{out}$ is the emergent flux after the occultation by the planet and the absorption by its surrounding atmosphere along the ray path. The factor $\tau$ represents the optical depth, which depends on the atmospheric structure through the number density n and the cross section $\sigma$ of the absorbing particles.
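To make Equations (3)-(5) concrete, the following Python sketch (our own illustrative code, not the production implementation; it assumes a spherically symmetric density profile of the absorber, a uniformly bright stellar disk, and a cross section already evaluated at the wavelength of interest) integrates the optical depth along parallel chords and averages the transmission over the stellar disk:
\begin{verbatim}
import numpy as np

def absorption_depth(r, n, sigma, Rp, Rstar, n_b=300):
    """Eq. (5) for a spherically symmetric atmosphere.

    r, n      : radial grid (cm) and absorber number density (cm^-3)
    sigma     : absorption cross section (cm^2)
    Rp, Rstar : planetary and stellar radii (cm)
    """
    b = np.linspace(0.0, Rstar, n_b)   # impact parameters of stellar chords
    trans = np.ones_like(b)
    trans[b <= Rp] = 0.0               # chords blocked by the opaque disk
    for k, bk in enumerate(b):
        if bk <= Rp or bk >= r[-1]:
            continue
        mask = r > bk
        l = np.sqrt(r[mask]**2 - bk**2)           # path along the chord
        tau = 2.0 * np.trapz(n[mask] * sigma, l)  # both sides of midplane
        trans[k] = np.exp(-tau)
    # Area-weighted mean transmission over the (uniform) stellar disk.
    F_ratio = np.trapz(2.0 * np.pi * b * trans, b) / (np.pi * Rstar**2)
    return 1.0 - F_ratio
\end{verbatim}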
\subsection{Cross section---Ly$\alpha$ }\label{subsubsec:crosslh}
Ly$\alpha$ absorption occurs in a hydrogen atom when the electron absorbs 10.2 eV of energy and jumps from the n=1 level to the n=2 level, where n is the principal quantum number. The cross section for Ly$\alpha$ absorption can be evaluated via
\begin{equation}
\sigma_{12}=\frac{\pi e^2f_{12}\phi_{\nu}}{m_ec}
\end{equation}
in $\rm cm^2$, where e is the elementary charge, m$_e$ is the electron mass, and $f_{12}$ is the oscillator strength, which is 0.4162 at 1215.67 $\rm\AA$ \citep{Mihalas1978}. $\phi_\nu$ is the Voigt profile, which combines the Doppler and Lorentz profiles and is related to the Voigt function H(a,u) through the following equations \citep{Rybicki2004}:
\begin{equation}
\phi_\nu=(\Delta\nu_D)^{-1}\pi^{-1/2}H(a,u)
\end{equation}
\begin{equation}
H(a,u)\equiv\frac{a}{\pi}\int\limits_{-\infty}^{+\infty}\frac{e^{-y^2}dy}{a^2+(u-y)^2}
\end{equation}
\begin{equation}
a\equiv\frac{\Gamma}{4\pi\Delta\nu_D}, u\equiv\frac{\nu-\nu_0}{\Delta\nu_D}
\end{equation}
\begin{equation}
\Gamma=\gamma+2\nu_{col}, \Delta\nu_D=\frac{\nu_0}{c}\sqrt{\frac{2kT}{m_a}}
\end{equation}
where a is the damping parameter, u is the frequency offset, $\nu_0$ is the line-center frequency, $\Delta\nu_D$ is the Doppler width (assuming no turbulence), and $\Gamma$ is the transition rate. Here we set the damping parameter a to $4.699\times10^{-4}$.
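In practice, the Voigt function can be evaluated through the Faddeeva function, $H(a,u)=\mathrm{Re}[w(u+ia)]$, which is available as scipy.special.wofz. The Python sketch below (our own illustrative code; it uses the CGS prefactor $\pi e^{2}/m_{e}c\approx0.02654$ cm$^{2}$ Hz and assumes atomic hydrogen with no turbulence) evaluates Equations (6)-(10) with the fixed damping parameter quoted above:
\begin{verbatim}
import numpy as np
from scipy.constants import c, k, m_p   # SI constants
from scipy.special import wofz

NU0 = c / 1215.67e-10   # Ly-alpha line-center frequency (Hz)
F12 = 0.4162            # oscillator strength

def lya_cross_section(nu, T, a=4.699e-4):
    """Ly-alpha cross section (cm^2) at frequency nu (Hz), temperature T (K)."""
    dnu_D = NU0 / c * np.sqrt(2.0 * k * T / m_p)   # Doppler width (Hz)
    u = (nu - NU0) / dnu_D
    H = wofz(u + 1j * a).real                      # Voigt function H(a, u)
    phi = H / (dnu_D * np.sqrt(np.pi))             # Voigt profile (Hz^-1)
    return 0.02654 * F12 * phi                     # pi e^2 f12 / (m_e c), CGS

# At line center and T = 10^4 K this gives ~5.9e-14 cm^2.
\end{verbatim}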
\section{\textbf{Results}} \label{sec:result2}
\subsection{\textbf{The mass loss rates}\label{result21}}
\subsubsection{\textbf{The dependence of $\dot{M}$ on Fxuv and $\overline{\rho}$}\label{result21}}
We investigated 442 systems, of which 151 are Jupiter-like, 78 Saturn-like, 104 Neptune-like, and 109 Earth-like planets. The mass loss rate predicted by the hydrodynamic model is defined as:
\begin{equation}
\dot{M}=4\pi r^{2}\rho \upsilon.
\end{equation}
where $\rho$ and $\upsilon$ are the density and the velocity of the escaping gas.
In addition, the absorbed XUV irradiation is converted into heat and does work on the particles of the atmosphere to overcome the gravitational potential and supply the particles' kinetic and thermal energy. According to \citet{2007A&A...472..329} and \citet{2009AA...506..399}, the mass loss rates can be expressed as
\begin{equation}
\rm\dot{M}=\frac{\pi F_{xuv} \eta R_{xuv}^2}{\Delta\phi+\frac{\upsilon^2_{R_L}-\upsilon^2_{R_0}}{2}+ c_p(T_{R_L}-T_{R_0})}
\end{equation}
where R$_{xuv}$ is the XUV absorption radius (at which the mean optical depth is 1) and $\eta$ is the heating efficiency, i.e., the ratio of the energy that heats the gas to the total XUV energy input. $\Delta\phi$, $\upsilon^2_{R_L}-\upsilon^2_{R_0}$, and $c_p(T_{R_L}-T_{R_0}$) are the variations (from the lower boundary of the atmosphere to the Roche lobe) of the gravitational potential, kinetic energy, and thermal energy, respectively. $R_0$ and $R_L$ are the locations of the lower atmospheric boundary and the Roche lobe, $\upsilon$ and T are the velocity and temperature of the particles, and $c_p$ is the specific heat at constant pressure per unit mass. In the energy-limited approximation \citep{2003APJL...598..L121, 2007A&A...472..329,2009AA...506..399}, the kinetic and thermal energy of the escaping particles is far smaller than the gravitational potential. Thus, the energy-limited equation is expressed as
\begin{equation}
\dot{M}= \frac{\pi F_{xuv} \eta R_{xuv}^2}{\Delta\phi}
\end{equation}
When the effect of stellar tidal force is included, $\Delta\phi = \frac{G M_P}{R_P} K(\xi) $ \citep{2007A&A...472..329}. Therefore the energy-limited equation can be further expressed as
\begin{equation}
\rm\dot{M}=\frac{3\beta_{xuv}^{2}\eta Fxuv}{4 K(\xi) G\rho},
\end{equation}
where $\beta_{xuv}$ is the ratio of the XUV absorption radius to the planetary radius (here we use the subscript xuv to distinguish it from the spectral index $\beta$), $\rho$ is the planetary mean density, and G is the gravitational constant. K($\xi$) is the potential energy reduction factor due to the stellar tidal forces \citep{2007A&A...472..329},
\begin{equation}
K(\xi)=1-\frac{3}{2\xi}+\frac{1}{2\xi^3}
\end{equation}
with
\begin{equation}
\xi=(\frac{Mp}{3M_\star})^{\frac{1}{3}}\frac{a}{Rp}
\end{equation}
where M$_{p}$ and M$_\star$ are the planetary and stellar masses, a is the separation between the planet and its host star, and R$_{p}$ is the planetary radius. By comparing Equation (12) with Equation (14), one can see that the energy-limited formula can be revised by the terms of the kinetic and thermal energy. In this paper, Equation (12) is referred to as the revised energy-limited formula. We will discuss the effect of the kinetic and thermal energy in Section 3.1.2.
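As a worked sketch of Equations (14)-(16) (our own illustrative Python code; the numbers in the example are representative inputs rather than fitted values for any particular planet):
\begin{verbatim}
import numpy as np

G = 6.674e-8  # gravitational constant (cgs)

def K_xi(Mp, Mstar, a, Rp):
    """Potential reduction factor K(xi), Eqs. (15)-(16); inputs in cgs."""
    xi = (Mp / (3.0 * Mstar)) ** (1.0 / 3.0) * a / Rp
    return 1.0 - 1.5 / xi + 0.5 / xi ** 3

def mdot_energy_limited(F_xuv, rho_mean, beta_xuv, eta, K):
    """Eq. (14): energy-limited mass loss rate (g/s).

    F_xuv in erg cm^-2 s^-1, rho_mean in g cm^-3,
    beta_xuv = R_xuv / R_p, eta = heating efficiency.
    """
    return 3.0 * beta_xuv ** 2 * eta * F_xuv / (4.0 * K * G * rho_mean)

# Illustrative hot-Jupiter-like inputs: K ~ 0.7, beta_xuv ~ 1.15, eta ~ 0.2.
print(mdot_energy_limited(1.0e4, 1.0, 1.15, 0.2, 0.7))  # ~4e10 g/s
\end{verbatim}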
The energy-limited equation hints that the mass loss rates are a function of the XUV flux and the mean density.
Figure~2 shows the dependence of the mass loss rate on the properties of the exoplanets and the XUV flux. First, it is clear that the mass loss rates increase with increasing XUV irradiation, as expected. The mass loss rates of the Jupiter-like planets are of the order of 10$^{9}$-10$^{10}$ g/s when the integrated XUV flux is about 200 erg/cm$^{2}$/s. In the case of F$_{xuv}$=4$\times$10$^{5}$ erg/cm$^{2}$/s, the mass loss rates of the Jupiter-like planets increase by a factor of $\sim$1000, to the order of 10$^{13}$ g/s. Such a trend is also found for the Saturn-like, Neptune-like, and Earth-like planets. For example, the mass loss rates of the Earth-like planets vary almost linearly with F$_{xuv}$ from 10$^{8}$ g/s to 10$^{11}$ g/s. Second, we note that at a given integrated flux, the mass loss rates of the planets decrease with decreasing size. For the Saturn-like planets, the mass loss rates decline by a factor of a few compared with those of the Jupiter-like planets, while for smaller planets the mass loss rates can decrease by an order of magnitude or more. Such behaviour reflects the fact that the atmospheric escape is inversely related to the mean density.
\begin{figure}
\centering
\includegraphics[width=4in,height=3.in]{fig2.pdf}
\caption{The mass loss rates of planets at different XUV fluxes. Cross: Jupiter-like planets. Triangles: Saturn-like planets. Circles: Neptune-like planets. Squares: Earth-like planets. \label{fdtdmass} }
\end{figure}
To clarify this, we selected some planets of our sample and calculated their mass loss rates at different XUV fluxes. The corresponding values of the fluxes are 200, 400, 1000, 2000, 4000, 2$\times$10$^{4}$, 4$\times$10$^{4}$, 2$\times$10$^{5}$, and 4$\times$10$^{5}$ erg cm$^{-2}$ s$^{-1}$. We show the correlation between the mean density and the mass loss rate in Figure~3. Note that the flux in the calculation is divided by a factor of 4, so the mass loss rates of a few Jupiter-like planets could not be calculated hydrodynamically at the lowest XUV fluxes; the mass loss rates of those planets were calculated by using the energy-limited equation (the value of $\beta_{XUV}^2\eta$ is obtained from our fitting formulae; for details, see Section~4.1). Evidently, the mass loss rates of the planets with lower densities are higher than those of the planets with higher densities. Thus, the hydrodynamic simulations confirm the physical validity of the energy-limited assumption.
\begin{figure}
\centering
\includegraphics[width=4in,height=3.in]{fig3.pdf}
\caption{The mass loss rates of planets with different density. Cross: Jupiter-like planets. Triangles: Saturn-like planets. Circles: Neptune-like planets. Squares: Earth-like planets. $\oplus$: the mass loss rates calculated by using the energy-limited equation. \label{fdtdmass} }
\end{figure}
However, we note that the mass loss rates of different planets are slightly different even if the densities of the planets are the same. For example, at $\overline{\rho}$ $\approx$ 0.28 g cm$^{-3}$ the mass loss rates of the Jupiter-like planets are higher than those of the Saturn-like planets although the XUV fluxes are the same. Such behaviour also occurs at $\overline{\rho}$ $\approx$ 1 g cm$^{-3}$ and $\overline{\rho}$ $\approx$ 1.8 g cm$^{-3}$.
This reflects the fact that the energy available for driving the escape of the atmosphere is different for different planets. Three factors can cause such variations of the mass loss rate: first, the heating efficiency differs from planet to planet; second, the effective area of energy deposition differs; and third, the tidal forces of the host stars differ. The effect of tidal forces has been discussed by \citet{2007A&A...472..329}, who found that stellar tidal forces can enhance the mass loss rates. We discuss the first two factors below.
\subsubsection{\textbf{The heating efficiency and the XUV absorption radius }\label{result21}}
A main factor in determining the mass loss rates is the fraction of the XUV radiation that heats the gas. The net heating efficiency $\eta$ is defined as:
\begin{equation}
\eta=(H_{\rm heat}-L_{\rm cooling})/\sum_{\nu} F_{\nu}
\end{equation}
where $F_{\nu}$ is the input XUV energy at frequency $\nu$, and H$_{\rm heat}$ and L$_{\rm cooling}$ are the radiative heating and cooling rates, respectively.
We calculated the heating efficiency of our $\sim$400 planets and found that the heating efficiencies are insensitive to any individual parameter but depend on the product of the XUV flux and the gravitational potential (hereafter log(F$_{xuv}GM_p/R_p$)). The left panel of Figure~4 shows the heating efficiency with respect to log(F$_{xuv}GM_p/R_p$). The triangles represent the planets with gravitational potentials lower than 1.5$\times$10$^{13}$ erg g$^{-1}$, and the crosses (plus signs) are the planets with gravitational potentials higher than 1.5$\times$10$^{13}$ erg g$^{-1}$. For the planets with gravitational potentials smaller than 1.5$\times$10$^{13}$ erg g$^{-1}$, the larger the log(F$_{xuv}GM_p/R_p$), the higher the heating efficiency $\eta$. For the planets concentrated in the range 14 $<$ log(F$_{xuv}GM_p/R_p$) $<$ 16, the heating efficiencies of most planets are lower than 0.3, and only a few planets show higher heating efficiencies. When 16 $<$ log(F$_{xuv}GM_p/R_p$) $<$ 18, the heating efficiency rises from about 0.13 to 0.45. For planets with log(F$_{xuv}GM_p/R_p$) larger than 18, $\eta$ varies in the range of 0.3-0.45. The behaviour of $\eta$ is, however, different for the planets with relatively large gravitational potentials (the crosses in Figure~4). In fact, the gravitational potential separates the values of $\eta$: the left panel of Figure~4 shows that the heating efficiency is generally lower for the planets with very large gravitational potentials than for those with relatively small gravitational potentials. We further show the dependence of $\eta$ on the gravitational potential in the right panel of Figure~4. For the planets with relatively small gravitational potentials, the heating efficiency can vary by a factor of 9 and shows an increasing trend with increasing gravitational potential. At the same time, the values of $\eta$ show a transition around $\sim$1$\times$10$^{13}$ erg g$^{-1}$, above which the heating efficiency decreases with the gravitational potential. Such behaviour causes the lower mass loss rates of some Jupiter-like planets. As shown in Figure~2, the mass loss rates of some Jupiter-like planets deviate from the general trend: they are generally smaller than those of the other Jupiter-like planets and even smaller than those of some Neptune-like and Earth-like planets. The mass loss rates are related to the mean densities, and the mean densities of these planets are around 1 g/cm$^3$; the higher mean densities reduce the mass loss rates by a factor of a few compared with Jupiter-like planets of lower mean density. At the same time, these planets all have relatively large gravitational potentials ($>$1.5$\times$10$^{13}$ erg/g), which leads to relatively small heating efficiencies $\eta$ (mostly in the range of 0.05-0.1). Therefore, the mass loss rates of these planets can be about 10 times lower than those of the other Jupiter-like planets. We also note that \citet{2016A&A...585..2s} found a rapid decrease of the evaporation efficiencies (by assuming a heating efficiency $\eta$=1 in the energy-limited mass loss rates, they defined the evaporation efficiency $\eta_{eva}=\frac{\dot{M}_{model}}{\dot{M}_{energy-limited}}$ and found $\eta$=1.2 $\eta_{eva}$) when the gravitational potential is higher than $\sim$1.3$\times$10$^{13}$ erg g$^{-1}$, which is similar to our results.
However, we did not find a linear dependence of $\eta$ on the gravitational potential when the gravitational potential is lower than $\sim$1.3$\times$10$^{13}$ erg g$^{-1}$ (cf. Figure 2 of \citet{2016A&A...585..2s}). Our calculations, based on a larger sample, suggest that the heating efficiency can vary by a factor of a few even if the gravitational potentials of the planets are the same.
\begin{figure}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig4a.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig4b.pdf}
\end{minipage}
\caption{Left panel: the relation between the heating efficiency and log(F$_{xuv}GM_p/R_p$). Right panel: the relation between the heating efficiency and $GM_p/R_p$. Triangles: planets with gravitational potentials lower than 1.5$\times$10$^{13}$ erg g$^{-1}$. Crosses: planets with gravitational potentials greater than 1.5$\times$10$^{13}$ erg g$^{-1}$.}
\end{figure}
Furthermore, one needs to know the mean XUV absorption radius R$_{XUV}$ in order to calculate the mass loss rates predicted by the energy-limited equation. Here R$_{XUV}$ is defined as:
\begin{equation}
R_{XUV}=\frac{\sum_{\nu} R_{\nu}(\tau_{\nu}=1) F_{\nu}}{F_{XUV}}.
\end{equation}
In Equation (18), R$_{\nu}(\tau_{\nu}=1)$ is the radius where the optical depth at frequency $\nu$ is unity, and F$_{\nu}$ is the XUV flux at frequency $\nu$. We calculated the XUV absorption radius via Equation (18) for all the planets. The variations of R$_{XUV}$ are shown in Figure~5.
The upper panel shows R$_{XUV}$ with respect to the planetary radius. For all the planets, R$_{XUV}$ is in the range of 1.05-1.7 R$_{p}$. We found that the values of R$_{XUV}$ are related to the sizes of the planets. For the Jupiter-like planets, R$_{XUV}$ is mainly concentrated in a small range, 1.1-1.2 R$_{p}$. The values of R$_{XUV}$ vary from 1.1 to 1.5 R$_{p}$ when the sizes of the planets are in the range of 0.2-0.8 R$_{J}$. For planets with radii less than 0.2 R$_{J}$, R$_{XUV}$ can reach 1.4-1.7 R$_{p}$. Compared with the Earth-like planets, the R$_{XUV}$ of larger planets such as Jupiter-like planets can thus be smaller by a factor of up to 1.5.
The middle panel shows R$_{XUV}$ with respect to the planetary mass. R$_{XUV}$ is basically negatively correlated with the planetary mass. For the planets with masses less than 0.1 M$_{J}$, R$_{XUV}$ tends to be larger, especially for those less than 0.01 M$_{J}$. When the masses are higher than 0.2 M$_{J}$, R$_{XUV}$ is mainly in the range of 1.1-1.2 R$_p$. Both the upper and middle panels suggest that smaller planets such as Earth-like planets tend to have larger R$_{XUV}$, i.e., their XUV absorption radii are relatively far from the planetary surface. The lower panel of Figure~5 shows how R$_{XUV}$ varies with the gravitational potential. It is clear that the absorption radii increase with decreasing gravitational potential. For the planets with high gravitational potentials, the values of R$_{XUV}$ are in the range of 1.1-1.2 R$_p$. The increase of R$_{XUV}$ is dramatic when the gravitational potentials are lower than $\sim$10$^{12}$ erg g$^{-1}$. This is reasonable because the gravitational potentials of these planets are so low that their atmospheres can expand to higher altitudes to absorb the stellar XUV irradiation.
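Equation (18) amounts to a flux-weighted average of the $\tau_{\nu}=1$ radii over the stellar spectrum; a one-function Python sketch (our own illustrative code, assuming the spectrum is given on a discrete frequency grid) is:
\begin{verbatim}
import numpy as np

def r_xuv(F_nu, r_tau1):
    """Flux-weighted XUV absorption radius, Eq. (18).

    F_nu   : XUV flux in each frequency bin (erg cm^-2 s^-1)
    r_tau1 : radius where tau_nu = 1 in each bin
    """
    F_nu, r_tau1 = np.asarray(F_nu), np.asarray(r_tau1)
    return np.sum(r_tau1 * F_nu) / np.sum(F_nu)
\end{verbatim}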
\begin{figure}
\centering
\includegraphics [width=4.in,height=2.5in] {fig5a.pdf}
\centering
\includegraphics [width=4.in,height=2.5in] {fig5b.pdf}
\centering
\includegraphics [width=4.in,height=2.5in] {fig5c.pdf}
\caption{The upper panel: the variations of R$_{XUV}$ with the radii. The middle panel: the variations of R$_{XUV}$ with the masses. The lower panel shows the dependence of R$_{XUV}$ on the gravitational potential.}
\end{figure}
In Figures~6a and 6b we compare the mass loss rates predicted by our hydrodynamic model (y-axis) with those of Equations (12) and (14) (x-axis). The mass loss rates of Equations (12) and (14) are calculated by using our $\beta_{XUV}$ and $\eta$. The mass loss rates in Figure~6b are corrected for the kinetic and thermal energy of the escaping atmosphere (Equation 12). Different symbols represent the mass loss rates of different types of planets. Before correcting for the kinetic and thermal energy of the escaping atmosphere (see Figure~6a), the hydrodynamic mass loss rates are basically consistent with the energy-limited ones when the mass loss rates are lower than a ``critical value'' for each type of planet: about 10$^{10}$-10$^{11}$ g/s for Earth-like and Neptune-like planets and about 10$^{11}$-10$^{12}$ g/s for Saturn-like and Jupiter-like planets. By comparing with Figure~2, we found that the corresponding XUV levels of the critical mass loss rates are about 2$\times$10$^4$-3$\times$10$^4$ erg/cm$^2$/s for Earth-like and Neptune-like planets and $\sim$4$\times$10$^4$ erg/cm$^2$/s for Saturn-like and Jupiter-like planets. Above these XUV levels, the hydrodynamic mass loss rates are lower than the energy-limited ones, especially for the smaller planets such as Earth-like and Neptune-like planets. The deviation between the two kinds of mass loss rates comes from the kinetic and thermal energy of the escaping atmosphere. Specifically, we show the influence of the kinetic and thermal energy in Figure~6b. After correcting for the kinetic and thermal energy of the escaping atmosphere, the mass loss rates of our model are generally consistent with those predicted by the revised energy-limited equation (Equation 12) for all kinds of planets.
The uncertainties of the mass loss rates obtained from the energy-limited formula are mainly attributed to the unknown heating efficiencies and XUV absorption radii. Many studies have applied fixed heating efficiencies for some types of planets. In our study, the heating efficiencies are different for different planets even when they are of the same type. At the same time, the absorption radii also vary with the physical parameters of the planets (Figure~5). In Figures~6a and 6b, we used the heating efficiencies and the absorption radii of our hydrodynamic models to remove these uncertainties. However, the deviations are still evident for the energy-limited case, which highlights the importance of the kinetic and thermal energy in using the energy-limited equation. As shown in Figures~6a and 6b, a portion of the energy of the XUV irradiation is converted into the kinetic and thermal energy of the escaping atmosphere. Neglecting the kinetic and thermal energy will result in an overestimate of the mass loss rates when the energy-limited formula is used. We further show the dependence of the kinetic and thermal energy on the planetary parameters in Figure~6c. It is clear that the sum of the kinetic and thermal energy increases with increasing log(F$_{xuv}GM_p/R_p$). The increasing trend is obvious for all types of planets although there is a relatively large spread for Earth-like and Jupiter-like planets. Finally, we show the ratios of the kinetic and thermal energy to the gravitational potential in Figure~6d. The ratios are smaller than unity when log(F$_{xuv}GM_p/R_p$) is smaller than 16. With the increase of log(F$_{xuv}GM_p/R_p$), the ratios become greater than unity for most Earth-like and Neptune-like planets, while they remain smaller than unity for most Jupiter-like planets. The ratios of the Saturn-like planets show a mixed variation at high log(F$_{xuv}GM_p/R_p$). The same trend also appears in Figure~6a: the mass loss rates of Jupiter-like planets predicted by the energy-limited formula are more consistent with those of the hydrodynamic model because the gravitational potential is dominant, while for smaller planets the deviation becomes obvious with increasing mass loss rate. Thus, the results obtained from the energy-limited formula should be revised by the kinetic and thermal energy of the escaping atmosphere, especially for planets with high F$_{xuv}GM_p/R_p$.
\begin{figure}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig6a.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig6b.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig6c.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.6in,height=3.0in]{fig6d.pdf}
\end{minipage}
\caption{Panels (a) and (b) show the comparison of the mass loss rates predicted by our model with those of the energy-limited equation, calculated by using our $\beta_{XUV}$ and $\eta$. In panel (a), $\dot{M}$(el) is the mass loss rate calculated by the energy-limited equation without correcting for the kinetic and thermal energy of the escaping atmosphere. In panel (b), $\dot{M}$(el-revised) is corrected for the kinetic and thermal energy of the escaping atmosphere. Panel (c) shows the sum of the kinetic and thermal energy. The ratios of the kinetic and thermal energy to the gravitational potential are shown in panel (d).}
\end{figure}
\subsection{\textbf{The dependence of the absorption depth of Lyman $\alpha$ on F$_{XUV}$ and $\overline{\rho}$}}\label{result21}
We calculated the absorption of stellar Ly$\alpha$ for our sample and found that the excess absorption levels of Ly$\alpha$ are higher for planets with lower mean densities and relatively higher integrated Fxuv. Specifically, Figure~\ref{fdtdmass} shows the absorption depths in the Fxuv-$\rho$ diagram. The x-axis is the planetary mean density and the y-axis is the integrated Fxuv. The absorption depths are divided into three or four levels depending on the absorption depth range, distinguished by different colored symbols. For the left and right panels of Figure~7, the velocity ranges are [-150, -50]$\cup$[50, 150] km/s and [-150, 150] km/s from the Ly$\alpha$ line center, respectively. In the [-50, 50] km/s range from the line center, the stellar Ly$\alpha$ can be contaminated by the ISM \citep{2019A&A...622..46} and by geocoronal Ly$\alpha$ emission, so we exclude this range in the left panel of Figure~7.
\begin{figure}
\includegraphics[width=3.5in,height=2.85 in]{fig7a.pdf}
\includegraphics[width=3.5in,height=3 in]{fig7b.pdf}
\caption{The statistical distribution of the absorption depths. The x-axis is the planetary mean density and the y-axis is the integrated XUV flux. Different colored symbols represent different absorption levels. The left and right panels show the distributions of the Ly$\alpha$ absorption depths (not including the optical occultation) calculated in [-150,-50]$\cup$[50,150] km/s and [-150,150] km/s from the line center at 1215.67 $\rm\AA$, respectively. \label{fdtdmass} }
\end{figure}
Figure~7 shows that, regardless of the wavelength range, the average absorption levels have a similar distribution. The regions of stronger Ly$\alpha$ absorption are in the upper left of the Fxuv-$\rho$ diagram, where the planets receive higher irradiation and have lower mean densities, indicating higher mass loss rates from those planets. The left panel of Figure~7 shows that for planets with lower mean densities, decreasing Fxuv causes the absorption levels to decrease. For planets with medium or high mean densities ($>$1 g/cm$^{3}$), the dependence of the Ly$\alpha$ absorption on the XUV flux shows an intermediate sensitivity. Compared with the planets of lower densities, there is a mixed region in which the absorption levels vary over a large range; for instance, at F$_{XUV}$=10$^{4}$ erg/cm$^{2}$/s the absorptions produced by the atmospheres vary with the mean densities of the planets from $>$5\% to $<$1\%. Finally, the absorption levels are very low for planets with high densities ($\rho >$1 g/cm$^{3}$) and low XUV fluxes (lower than 10$^{4}$ erg/cm$^{2}$/s): in the lower right region, almost all absorptions are lower than 1\%. Generally, the planets with higher densities are the Earth-like or Neptune-like planets.
It is also clear from the left panel of Figure~7 that the absorptions depend on the XUV flux. Most absorption levels are higher than 1\% if the flux is higher than 2$\times$10$^{4}$ erg/cm$^{2}$/s. In the range of 10$^{3}$-10$^{4}$ erg/cm$^{2}$/s, the absorptions decrease with increasing density; for planets with high densities, the absorptions can be a factor of a few smaller than those of planets with lower densities. Below 10$^{3}$ erg/cm$^{2}$/s, only planets with low densities show significant absorption. Such behaviour can also be found in the right panel of Figure~7. Although the absorption of the line center is included there, the distributions of the Ly$\alpha$ absorptions reflect the same trend as in the left panel; the difference between the two cases is that the absorption levels of the right panel are far larger than those of the left panel because of the strong absorption in the line center. \citet{2019A&A...623A.131K} modeled the in-transit Ly$\alpha$ absorption for terrestrial planets with nitrogen- and hydrogen-dominated atmospheres under different levels of stellar irradiation. They also found deeper absorption for planets with higher XUV fluxes and smaller densities, which is consistent with our results.
\begin{figure}
\centering
\includegraphics[width=3.6in,height=2.6in]{fig8.pdf}
\caption{The Ly$\alpha$ absorption in the range of [-150,-50]$\cup$[50,150] km/s as a function of mass loss rates.}
\end{figure}
The absorption levels are proportional to the mass loss rates. We show the absorption in the range [-150,-50]$\cup$[50,150] km/s in Figure~8. It is clear from Figure~8 that higher absorption levels correspond to higher mass loss rates. For instance, the absorption depths of most planets attain 1\% if the mass loss rates are higher than 10$^{10}$ g/s. For the planets with mass loss rates higher than 10$^{11}$ g/s, the absorption level can attain 10\% or higher. For mass loss rates lower than 10$^{10}$ g/s, most planets show low absorption levels, with a few exceptions. Finally, the absorption can be neglected if the mass loss rates are lower than 10$^{9}$ g/s.
\section{Discussion} \label{sec:discu}
\subsection{\textbf{The fit to the parameters of the (revised) Energy-limited equation }}\label{subsec:result22}
The (revised) energy-limited equation is a convenient way to evaluate the planetary mass loss rates \citep{2003APJL...598..L121}. In fact, Figure 2 and Figure 3 can be explained to a certain extent by the energy-limited assumption. However, it is not easy to evaluate the planetary mass loss rates by using Equation (12) and Equation (14) because the values of $\beta_{xuv}$ and $\eta$ must be specified. For convenience, one only needs to know the product of $\beta_{xuv}^{2}$ and $\eta$ when evaluating the mass loss rates. Therefore, we fitted the product $\beta_{xuv}^{2}\eta$ for the planets with gravitational potentials smaller than 1.5$\times$10$^{13}$ erg g$^{-1}$. The planets are classified into four categories by their gravitational potentials. We used different functions to fit the values of $\beta_{xuv}^{2}\eta$ and show the fits in Figure 9. The fit of $\beta_{xuv}^{2}\eta$ to log(GM$_{p}$F$_{xuv}$/R$_{p}$) can be expressed as:
\[\beta_{xuv}^{2} \eta=\begin{cases}
7.980(\pm 2.315)-1.091(\pm 0.292) \theta +0.0383(\pm 0.0092) \theta^{2} & GM_{p}/R_{p} < 1.5\times 10^{12}\\
-1.901(\pm 0.0537)+0.135(\pm 0.0033) \theta & 1.5\times 10^{12} \leqslant GM_{p}/R_{p} < 5\times 10^{12}\\
-0.581(\pm 0.1218)+0.061(\pm 0.0072) \theta & 5\times 10^{12} \leqslant GM_{p}/R_{p} < 1\times 10^{13}\\
-1.232(\pm 0.0192)+0.093(\pm 0.0110) \theta & 1\times 10^{13} \leqslant GM_{p}/R_{p} < 1.5\times 10^{13},
\end{cases}\]
where $\theta$=log(GM$_{p}$F$_{xuv}$/R$_{p}$) and the case boundaries refer to the gravitational potential GM$_{p}$/R$_{p}$ in erg g$^{-1}$, following the grouping used in Figure 9.
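For readers who wish to apply the fit directly, a minimal numerical sketch is given below (in Python; the function name and interface are ours, only the central fit values are used, and the quoted uncertainties are omitted). It classifies a planet by its gravitational potential, as in Figure 9, and evaluates $\beta_{xuv}^{2}\eta$ from $\theta$, assuming cgs units throughout.
\begin{verbatim}
import math

G = 6.674e-8  # gravitational constant in cgs units

def beta2_eta(mp, rp, fxuv):
    # mp: planetary mass [g]; rp: planetary radius [cm]
    # fxuv: XUV flux at the planet [erg/cm^2/s]
    # Returns the central value of the fitted beta_xuv^2 * eta.
    phi = G * mp / rp                # gravitational potential [erg/g]
    theta = math.log10(phi * fxuv)   # theta = log10(G M_p F_xuv / R_p)
    if phi < 1.5e12:
        return 7.980 - 1.091 * theta + 0.0383 * theta ** 2
    elif phi < 5.0e12:
        return -1.901 + 0.135 * theta
    elif phi < 1.0e13:
        return -0.581 + 0.061 * theta
    elif phi < 1.5e13:
        return -1.232 + 0.093 * theta
    raise ValueError("fit valid only for potentials below 1.5e13 erg/g")
\end{verbatim}
The returned product can then be inserted into the (revised) energy-limited equation in place of the separate factors $\beta_{xuv}$ and $\eta$.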
\begin{figure}
\centering
\includegraphics[width=3.6in,height=2.6in]{fig9.pdf}
\caption{The fit of $\beta_{xuv}^{2}\eta$ for planets with gravitational potentials smaller than 1.5$\times$10$^{13}$ erg g$^{-1}$. The x-axis is the product of the stellar irradiation and the planetary gravitational potential, and the y-axis is $\beta_{xuv}^{2}\eta$. Black squares: planets with gravitational potentials smaller than 1.5$\times$10$^{12}$ erg g$^{-1}$. Cyan circles: gravitational potentials between 1.5$\times$10$^{12}$ erg g$^{-1}$ and 5$\times$10$^{12}$ erg g$^{-1}$. Green triangles: gravitational potentials between 5$\times$10$^{12}$ erg g$^{-1}$ and 10$^{13}$ erg g$^{-1}$. Blue crosses: gravitational potentials between 1$\times$10$^{13}$ erg g$^{-1}$ and 1.5$\times$10$^{13}$ erg g$^{-1}$. The colored lines are the fits to the corresponding $\beta_{xuv}^{2}\eta$.}
\end{figure}
When GM$_{p}$F$_{XUV}$/R$_{p}$ is given, the values of $\beta_{xuv}^{2}\eta$ tend to decrease with increasing gravitational potential, except for the third group. We found that the distribution of $\beta_{xuv}^{2}\eta$ for the third group is higher than those of the adjacent groups. The gravitational potentials of these planets (5$\times$10$^{12}$ erg g$^{-1}$ - 10$^{13}$ erg g$^{-1}$) are in an intermediate range, but the slope of this line is the smallest. To explain this, we checked the dependence of the XUV absorption radius $\beta_{xuv}$ and the heating efficiency $\eta$ on the gravitational potential (see Figure 4 and Figure 5). For planets with gravitational potentials smaller than 1.5$\times$10$^{12}$ erg g$^{-1}$, $\beta_{xuv}$ is in the range 1.1-1.7. Their values of $\eta$ vary from 0.05 to 0.45. Thus, $\beta_{xuv}^{2}\eta$ covers a broad range. For planets with gravitational potentials higher than 1.5$\times$10$^{12}$ erg g$^{-1}$, $\beta_{xuv}$ is in a small range (1.03-1.3). However, the heating efficiencies differ from planet to planet. For the second and fourth groups, the distributions of heating efficiency are similar to those of the first group. Due to the smaller $\beta_{xuv}$, their $\beta_{xuv}^{2}\eta$ values are smaller than those of the first group. Furthermore, $\beta_{xuv}$ decreases with increasing gravitational potential, so that the values of $\beta_{xuv}^{2}\eta$ of the fourth group are smaller than those of the second group. However, the planets in the third group (most of which are Jupiter-like planets) are close to the transitional region of heating efficiency (see the right panel of Figure 4), where the heating efficiencies maintain higher values (0.3-0.45). Compared with the second and fourth groups, their XUV absorption radii are similar. Thus, the high heating efficiency of the third group causes the high values of $\beta_{xuv}^{2}\eta$ and the smaller slope of the third line.
\subsection{\textbf{The influence of the outer boundary on absorption depth}}
In the simulations, the atmospheric outer boundaries of the planets are set equal to the stellar radii. However, the real boundaries cannot be determined well because the atmosphere of a planet can be confined to a few or tens of planetary radii by the stellar wind. For planets with the size of Jupiter, the total pressure of the stellar wind can be balanced at about a few planetary radii by the ram pressure and the thermal pressure of the planet \citep{2009APJ...693..23}. Thus, the outer boundaries can roughly be denoted by the radius of the host star. However, the computational outer boundaries of the Earth-like planets could be far larger than their real boundaries or magnetospheres due to the decrease of the planetary pressure with increasing radius. Here we inspect the dependence of the Ly$\alpha$ absorption on the atmospheric outer boundary for some planets of our sample. The effect of the outer boundary on the Ly$\alpha$ absorption is shown in Figure~\ref{boundarytransit}. In the left panel of Figure~\ref{boundarytransit}, we investigate the cases of Jupiter-like and Saturn-like planets. The right panel shows the dependence of the absorption on the outer boundary for Neptune-like and Earth-like planets.
As we can see, the absorptions increase with increasing atmospheric boundary. For most Jupiter-like planets, the increase with the outer boundary is prominent, so that the maximum absorption can attain 30\%-50\%. For planets with the size of Neptune and Earth, the absorptions produced within 10 R$_{p}$ are smaller than 10\% (the absorptions of some Earth-like planets are even negligible). With the increase of the radii, the absorption of some planets can attain 15\%. If the real boundaries or magnetospheres of Earth-like planets are smaller than the radii of their host stars, this hints that the absorptions of Earth-like planets in Figure 7 can be overestimated. In addition, the absorptions of some Earth-like planets are almost insensitive to the outer boundary. As shown in the right panel of Figure~\ref{boundarytransit}, an increase of a factor of a few in the outer boundary only results in a rise of a few percent in the Ly$\alpha$ absorption. Compared to the case of Jupiter-like planets, it is obvious that detecting the Ly$\alpha$ absorption signals of Earth-like planets is not easy.
\begin{figure}
\includegraphics[scale=0.45,trim=10 320 10 0]{fig10a.pdf}
\includegraphics[scale=0.45,trim=10 320 1130 0]{fig10b.pdf}
\caption{Ly$\alpha$ transit depths in the line wings ([1215.06 $\rm\AA$, 1215.47 $\rm\AA$] $\cup$ [1215.87 $\rm\AA$, 1216.28 $\rm\AA$]) versus the outer boundary. The left panel shows the average transit depths for Jupiter-like and Saturn-like planets. The right panel shows the average transit depths for Neptune-like and Earth-like planets. \label{boundarytransit}}
\end{figure}
\subsection{\textbf{The limitation of this work}}\label{subsec:futurework}
In our work, the mechanism of atmospheric escape is thermal escape driven by intense stellar XUV radiation. We account for the photochemical interactions of planetary particles. However, other aspects, such as the interplay between the stellar wind and the planetary wind and the impact of planetary magnetism, are not taken into consideration. Moreover, this work is based on 1D simulations of hydrodynamic atmospheric escape. For tidally locked planets, the results may show some deviations. Therefore, in future studies, we will take these factors into account. Furthermore, in this paper, we only focus on the absorption of Ly$\alpha$ by the atmosphere of the planet. It is the first step towards predicting the observable signals, because we do not include the influence of charge exchange and the extinction of the ISM. In addition, an accurate XUV spectrum is also needed if the observable signals of some special targets need to be known. Finally, the excess absorption depth of stellar Ly$\alpha$ by the planetary atmosphere that we predict here can provide some clues for future observations.
\section{Summary} \label{sec:summ}
In this paper, we investigated about 450 transit systems. We obtained the atmospheric structures of our selected planets based on our 1D hydrodynamic atmospheric escape simulations \citep{2011ApJ...733..98,2013ApJ...766..102,2016ApJ...818..107,2018CHA&A...42..81} and simulated the absorption of stellar Ly$\alpha$ by these planets' atmospheres. Based on the simulations, we found that the mass loss rates depend on the mean density and the XUV irradiation. Our results suggest that the energy-limited assumption reflects the essential physics of the hydrodynamic escape of the atmosphere. However, the energy-limited equation can overestimate the mass loss rates due to the neglect of the kinetic and thermal energy of the escaping atmosphere. We found that the overestimation is prominent for planets with smaller sizes. For Jupiter-like planets, the deviations of the mass loss rates are lower due to their large gravitational potentials. By correcting for the kinetic and thermal energy and using the heating efficiency and the absorption radius of the XUV irradiation from our hydrodynamic model, our hydrodynamic mass loss rates are consistent with those of the revised energy-limited equation. We calculated the heating efficiency and the XUV absorption radius for each planet. The heating efficiencies are almost proportional to the logarithm of the product of the XUV flux and the gravitational potential (i.e., log(F$_{xuv}GM_p/R_p$)). The R$_{XUV}$ tends to be higher when the planetary radii and masses are smaller. Finally, in order to make the energy-limited equation easy to use, we fitted $\beta^{2}_{XUV}\eta$ by using our results.
In addition, we obtained some statistical properties of the distribution of the Ly$\alpha$ absorption depth. We found that the absorption depth is larger if the planetary mean densities are lower and the integrated XUV flux is higher. This means that planets with lower mean densities that are subjected to more intense F$_{XUV}$ are likely to show larger excess absorption depths.
Moreover, different absorption levels can be approximately divided by different mass loss rates. The higher the mass loss rates, the higher the absorption depths. Obvious absorptions appear when the mass loss rates are higher than 10$^{11}$ g/s. For lower mass loss rates, the absorption decreases to a low level. Finally, the strong absorption levels appear for planets with large sizes, which can be attributed to the higher mass loss rates of those planets.
We thank the referee for their comments and suggestions, which helped to improve the quality of this work. This work is supported by the National Natural Science Foundation of China (Nos. 11273054 and 11333006) and by the project ``Technology of Space Telescope Detecting Exoplanet and Life'' from the National Defense Science and Engineering Bureau civil spaceflight advanced research project (D030201). This work has made use of the MUSCLES Treasury Survey High-Level Science Products; doi:10.17909/T9DG6F.
We are concerned here with the fractional Hardy inequality in an arbitrary
domain $\Omega\subsetneq \mathbb{R}^N$, which states that if $1<p<\infty$ and $0<s<1$ with $ps>1$, then
\begin{align}\label{eq:hardy}
\iint_{\Omega\times\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy
\geq \mathcal D_{N,p,s} \int_{\Omega} \frac{|u(x)|^p}{m_{ps}(x)^{ps}}\,dx
\end{align}
for all $u\in$ \textit{\r{W}}$^s_p(\Omega)$, the closure of $C_c^\infty(\Omega)$ with respect to the left side of \eqref{eq:hardy}.
The pseudodistance $m_{ps}(x)$ is defined in (\ref{eq:malpha}); its most important property for the present discussion is that for \emph{convex} domains $\Omega$ we have $m_{ps}(x) \leq \dist(x, \Omega^c)$. We denote by $\mathcal D_{N,p,s}$ the \emph{sharp} constant in~\eqref{eq:hardy}, which was recently found by Loss and Sloane \cite{LossSloane} and is explicitly given in \eqref{eq:hardyconst} below. This constant is independent of $\Omega$ and coincides with that on the halfspace which was earlier found in \cite{KBBD-bc,FrankSeiringer}.
By the (well-known) Sobolev inequality the left side of \eqref{eq:hardy} dominates an $L_q$-norm of $u$. Our main result, the fractional Hardy--Sobolev--Maz'ya (HSM) inequality, states that the left side of \eqref{eq:hardy}, even after subtracting the right side, is still strong enough to dominate this $L_q$-norm. More precisely, we shall prove
\begin{thm}\label{thm:HSM}
Let $N\geq 2$, $2\leq p<\infty$ and $0<s<1$ with $1<ps<N$. Then there is a constant $\sigma_{N,p,s}>0$ such that
\begin{align}\label{eq:main}
\iint_{\Omega\times\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy
- \mathcal D_{N,p,s} \int_{\Omega} \frac{|u(x)|^p}{m_{ps}(x)^{ps}}\,dx
\geq \sigma_{N,p,s} \, \left( \int_{\Omega} |u(x)|^q \,dx \right)^{p/q}
\end{align}
for all open $\Omega\subsetneq\mathbb{R}^N$ and all $u\in$ \textit{\r{W}\,}$^s_p(\Omega)$, where $q=Np/(N-ps)$.
\end{thm}
Inequality \eqref{eq:main} has been conjectured in \cite{FrankSeiringer} in analogy to the local HSM inequalities \cite{M,BFT}.
Recently, Sloane \cite{Sloane} found a remarkable proof of \eqref{eq:main} for $p=2$ and $\Omega$ being a half-space. Our result generalizes this to any $p\geq 2$ and any $\Omega$. We emphasize that our constant $\sigma_{N,p,s}$ can be chosen independently of $\Omega$. Therefore Theorem \ref{thm:HSM} is the fractional analog of the main inequality of \cite{FLHSM}, which treats the local case.
We now explain the notation in \eqref{eq:main}. The sharp constant \cite{LossSloane} in \eqref{eq:hardy} is
\begin{equation}\label{eq:hardyconst}
\mathcal D_{N,p,s} = 2 \pi^{\frac{N-1}2} \frac{\Gamma(\frac{1+ps}2)}{\Gamma(\frac{N+ps}2)}
\int_0^1 \left( 1 - r^{(ps-1)/p} \right)^p \frac{dr}{(1-r)^{1+ps}} \,.
\end{equation}
In the special case $p=2$ we have
\[
\mathcal D_{N,2,s} =
2\pi^{\frac{N-1}{2}} \frac{\Gamma(\frac{1+2s}{2})}{\Gamma(\frac{N+2s}{2})}
\frac{B\left(\frac{1+2s}{2}, 1-s \right)
-2^{2s}}{ 2^{2s+1}s} = 2\kappa_{N,2s},
\]
where $\kappa_{N,2s}$ is the notation used in \cite{KBBD-bc, LossSloane, DydaHint}. We denote
\begin{align}
d_{\omega}(x) &= \inf\{ |t| : x+t\omega \not\in \Omega \}, \quad x\in \mathbb{R}^N,\, \omega \in \mathbb S^{N-1},
\end{align}
where $\mathbb S^{N-1}=\{x\in \mathbb{R}^N: |x|=1\}$ is the $(N-1)$-dimensional unit sphere. Following \cite{LossSloane} we set for $\alpha>0$
\begin{align}\label{eq:malpha}
m_\alpha(x)
&= \left( \frac{2\pi^{\frac{N-1}{2}} \Gamma(\frac{1+\alpha}{2}) }{\Gamma(\frac{N+\alpha}{2}) } \right)^{\frac{1}{\alpha}}
\left(\int_{\mathbb S^{N-1}} \frac{d\omega}{d_\omega(x)^\alpha} \right)^{-\frac{1}{\alpha}},
\end{align}
which is analogous to the pseudodistance $m(x)$ of Davies \cite[Theorem~5.3.5]{Davies}.
We recall that for convex domains $\Omega$, we have $m_\alpha(x) \leq d(x)$,
see \cite{LossSloane}.
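As an aside for readers who wish to evaluate these constants, the integral in \eqref{eq:hardyconst} is elementary to compute numerically. The following sketch (Python with scipy; it is merely a cross-check and plays no role in the proofs, and the integrand has an integrable singularity at $r=1$ which \texttt{quad} typically handles for these parameters) compares the general formula against the Beta-function expression for $p=2$.
\begin{verbatim}
import math
from scipy.integrate import quad

def D(N, p, s):
    # numerical evaluation of the sharp constant D_{N,p,s}
    pref = (2 * math.pi ** ((N - 1) / 2)
            * math.gamma((1 + p * s) / 2) / math.gamma((N + p * s) / 2))
    f = lambda r: (1 - r ** ((p * s - 1) / p)) ** p / (1 - r) ** (1 + p * s)
    return pref * quad(f, 0, 1)[0]

def D2(N, s):
    # closed form for p = 2 via the Beta function
    pref = (2 * math.pi ** ((N - 1) / 2)
            * math.gamma((1 + 2 * s) / 2) / math.gamma((N + 2 * s) / 2))
    B = math.gamma(s + 0.5) * math.gamma(1 - s) / math.gamma(1.5)
    return pref * (B - 2 ** (2 * s)) / (2 ** (2 * s + 1) * s)

print(D(3, 2, 0.75), D2(3, 0.75))  # the two values agree
\end{verbatim}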
This paper is organized as follows. In the next three sections we present three independent proofs of \eqref{eq:main}, of which only the last one works in full generality. In Section~\ref{sec:proof2}, we use the ground state representation for half-spaces as the starting point. This allows us to obtain \eqref{eq:main} for half-spaces and any $p\geq 2$. In Section~\ref{sec:proof1} we derive a fractional Hardy inequality (Lemma~\ref{Hball}) for balls with two additional terms, and then deduce \eqref{eq:main} in the case when $p=2$ and $\Omega$ is a ball or a half-space. In the last section, we extend the method developed in \cite{FLHSM} and use results from \cite{GRR} and \cite{LossSloane} to prove Theorem~\ref{thm:HSM} for arbitrary domains.
\subsection*{Acknowledgment}
The authors would like to thank M. Loss and C. Sloane for useful discussions.
\section{The inequality on a halfspace}\label{sec:proof2}
In this section, we prove Theorem~\ref{thm:HSM} in the particular case when $\Omega=\mathbb{R}^N_+=\{x\in\mathbb{R}^N:\ x_N>0\}$. Our starting point is the inequality
\begin{equation}
\label{eq:gsr}
\iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy
- \mathcal D_{N,p,s} \int_{\mathbb{R}^N_+} \frac{|u(x)|^p}{x_N^{ps}}\,dx
\geq c_p J[v] \,,
\end{equation}
where $c_p$ is an explicit, positive constant (for $p=2$ this is an identity with $c_2=1$),
$$
J[v] := \iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|v(x)-v(y)|^p}{|x-y|^{N+ps} } (x_N y_N)^{(ps-1)/2} \,dx\,dy \,,
$$
and $v(x):=x_N^{-(ps-1)/p} u(x)$. This inequality was derived in \cite{FrankSeiringer}, using the `ground state representation' method from \cite{FrSe1}. We note that $m_{ps}(x)=x_N$ in the case of a halfspace, as a quick computation shows (see also \cite[(7)]{LossSloane}).
In order to derive a lower bound on $J[v]$ we make use of the bound
\begin{equation*}
(x_N y_N)^a \geq \min\{x_N^{2a},y_N^{2a}\} = 2a \int_0^\infty \chi_{(t,\infty)}(x_N) \chi_{(t,\infty)}(y_N) t^{2a-1} \,dt
\end{equation*}
for $a>0$. Combining this inequality with the fractional Sobolev inequality (see Lemma \ref{sob} below) and Minkowski's inequality, we can bound
\begin{align*}
J[v] & \geq (ps-1) \int_0^\infty \iint_{\{x_N>t,\,y_N>t\}} \frac{|v(x)-v(y)|^p}{|x-y|^{N+ps} } \,dx\,dy \ t^{ps-2}\,dt \\
& \geq (ps-1) \mathcal C_{N,p,s} \int_0^\infty \left( \int_{\{x_N>t\}} |v(x)|^q \,dx \right)^{p/q} \,t^{ps-2}\,dt \\
& \geq (ps-1) \mathcal C_{N,p,s} \left( \int_{\mathbb{R}^N_+} |v(x)|^q \left( \int_0^{x_N} t^{ps-2}\,dt \right)^{q/p} dx \right)^{p/q} \\
& = \mathcal C_{N,p,s} \left( \int_{\mathbb{R}^N_+} |v(x)|^q \, x_N^{q(ps-1)/p} \,dx \right)^{p/q} \,.
\end{align*}
Recalling the relation between $u$ and $v$ we arrive at \eqref{eq:main}. This completes the proof of Theorem \ref{thm:HSM} when $\Omega=\mathbb{R}^N_+$.
\qed
\medskip
In the previous proof we used the Sobolev inequality on half-spaces for functions which do not necessarily vanish on the boundary. For the sake of completeness we include a short derivation of this inequality. The precise statement involves the closure $\dot W^s_p(\mathbb{R}^N_+)$ of $C_c^\infty(\overline{\mathbb{R}^N_+})$ with respect to the left side of \eqref{eq:hardy}.
\begin{lem}\label{sob}
Let $N\geq 1$, $1\leq p<\infty$ and $0<s<1$ with $ps<N$. Then there is a constant $\mathcal C_{N,p,s}>0$ such that
\begin{align*}\label{eq:sob}
\iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy
\geq \mathcal C_{N,p,s} \, \left( \int_{\mathbb{R}^N_+} |u(x)|^q \,dx \right)^{p/q}
\end{align*}
for all $u\in\dot W^s_p(\mathbb{R}^N_+)$, where $q=Np/(N-ps)$.
\end{lem}
\begin{proof}
If $\tilde u$ denotes the even extension of $u$ to $\mathbb{R}^N$, then
\begin{align*}
\iint_{\mathbb{R}^N\times\mathbb{R}^N} \frac{|\tilde u(x)-\tilde u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy
&= \, 2 \iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+ 2 \iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|u(x)-u(y)|^p}{(|x'-y'|^2+(x_N+y_N)^2)^{(N+ps)/2}} \,dx\,dy \\
& \leq \, 4 \iint_{\mathbb{R}^N_+\times\mathbb{R}^N_+} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy \,.
\end{align*}
On the other hand, by the `standard' fractional Sobolev inequality on $\mathbb{R}^N$ (see, e.g., \cite{FrSe1} for explicit constants) the left side is an upper bound on
\begin{equation*}
\mathcal S_{N,p,s} \, \left( \int_{\mathbb{R}^N} |\tilde u(x)|^q \,dx \right)^{p/q}
= 2^{p/q} \mathcal S_{N,p,s} \, \left( \int_{\mathbb{R}^N_+} |u(x)|^q \,dx \right)^{p/q} \,. \qedhere
\end{equation*}
\end{proof}
\begin{rem}
The above proof of the fractional HSM inequality works analogously in the local case, that is, to show that
\begin{equation}\label{eq:hsm}
\int_{\mathbb{R}^N_+} |\nabla u|^p \,dx - \left(\frac{p-1}{p}\right)^p \int_{\mathbb{R}^N_+} \frac{|u|^p}{x_N^{p}}\,dx
\geq \sigma_{N,p,1} \left( \int_{\mathbb{R}^N_+} |u|^q \,dx \right)^{p/q},
\ q= \frac{Np}{N-p},
\end{equation}
for $u\in$ \textit{\r{W}\,}$^1_p(\mathbb{R}^N_+)$ when $N\geq 3$ and $2\leq p<N$. Again, the starting point \cite{FrSe1} is to bound the left side from below by an explicit constant $c_p>0$ times
$$
\int_{\mathbb{R}^N_+} |\nabla v|^p x_N^{p-1} \,dx \,,
\qquad v= x_N^{-(p-1)/p} u \,.
$$
(For $p=2$, this is an identity with $c_2=1$.) Next, we write $x_N^a = a \int_0^\infty \chi_{(t,\infty)}(x_N) t^{a-1} \,dt$ and use Sobolev's inequality on the half-space $\{x_N>t\}$ together with Minkowski's inequality. Note that the sharp constants in this half-space inequality are known explicitly (namely, given in terms of the whole-space constants via the reflection method of Lemma \ref{sob}).
\end{rem}
The sharp constant in \eqref{eq:hsm} for $p=2$ and $N=3$ was found in \cite{BFL}. We think it would be interesting to investigate this question for the non-local inequality \eqref{eq:main} and we believe that \cite{Sloane} is a promising step in this direction.
\section{The inequality on a ball}\label{sec:proof1}
Our goal in this section is to prove a fractional Hardy--Sobolev--Mazya inequality on the ball $B_r\subset\mathbb{R}^N$, $N\geq 2$, of radius $r$ centered at the origin. The argument follows that from the previous section, but is more involved. More precisely, we shall prove
\begin{prop}\label{ballprop}
Let $N\geq 2$, $p=2$ and $\frac{1}{2}<s<1$. Then there is a constant $c=c(s,N)>0$ such that for every $0<r<\infty$ and $u\in$\textit{\r{W}\,}$^s_2(B_r)$,
\begin{align}
\int_{B_r} \! \int_{B_r}
\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}} \,dx\,dy & - \mathcal D_{N,2,s} \int_{B_r} \frac{(2r)^{2s}}{(r^2-|x|^2)^{2s}} |u(x)|^2 \,dx
\nonumber\\
& \geq c \left(\int_{B_r} {|u(x)|^{q}}\,dx \right) ^ {2/q}, \label{HSMball}
\end{align}
where $q=2N/(N-2s)$.
\end{prop}
This proves Theorem \ref{thm:HSM} in the special case $\Omega=B_r$ and $p=2$ with $m_{2s}(x)$ replaced by $(r^2-|x|^2)/2r$. We note that $(r^2-|x|^2)/2r \leq \dist (x,B_r^c)$ for $x\in B_r$. (As an aside we note, however, that it is \emph{not always} true that $(r^2-|x|^2)/2r$ is greater than $m_{2s}(x)$. Indeed, take $x=0$ and $N=2$.)
We also note that Proposition \ref{ballprop} implies Theorem \ref{thm:HSM} for $\Omega=\mathbb{R}^N_+$ (and $p=2$). Indeed, by translation invariance the proposition implies the inequality also on balls $B(a_r, r)$ centered at $a_r=(0,\ldots,0,r)$. We have $\dist(x,B(a_r, r)^c) \leq \dist(x, (\mathbb{R}^N_+)^c)$, and hence the result follows by taking $r \to\infty$.
The crucial ingredient in our proof of Proposition \ref{ballprop} is
\begin{lem}\label{Hball}
Let $N\geq 2$, $\frac{1}{2}<s<1$ and define $w_N(x)=(1-|x|^2)^{\frac{2s-1}{2}}$ for $x\in B_1\subset\mathbb{R}^N$. Then for all $u\in$\textit{\r{W}\,}$^s_2(B_1)$
\begin{align}
\int_{B_1} \! \int_{B_1}
\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}} \,dx\,dy & - \mathcal D_{N,2,s} \int_{B_1} \frac{2^{2s}}{(1-|x|^2)^{2s}} |u(x)|^2 \,dx
\nonumber\\
& \geq \tilde J[v] + c \int_{B_1} |v(x)|^2 \,dx \,,
\label{hardyinball}
\end{align}
where $v= u/w_N$,
$$
\tilde J[v] = \int_{B_1} \!\int_{B_1} |v(x)-v(y)|^2 \frac{w_N(x)w_N(y)}{|x-y|^{N+2s}} \,dx\,dy
$$
and $c= s^{-1}(2^{2s-1}-1)|\mathbb S^{N-1}|>0$.
\end{lem}
This inequality is somewhat analogous to \eqref{eq:gsr} in the previous proof. We emphasize, however, that there are two terms on the right side of \eqref{hardyinball} and we will need both of them. Accepting this lemma for the moment, we now complete the
\begin{proof}[Proof of Proposition \ref{ballprop}]
By scaling, we may and do assume that $r=1$; that is, we consider only the unit ball $B_1\subset \mathbb{R}^N$. We put $v=u/w_N$ with $w_N$ defined in Lemma \ref{Hball}. According to that lemma, the left side of \eqref{HSMball} is bounded from below by
\begin{align}
\tilde J[v] + c \int_{B_1} |v(x)|^2 w_N(x)^2 \,dx \label{eq:reduction}
\end{align}
(here we also used that $w_N\leq 1$).
For $x,y\in B_1$ we have
\begin{align*}
w_N(x)w_N(y) &\geq \min\{(1-|x|^2)^{2s-1}, (1-|y|^2)^{2s-1}\} \nonumber\\
&= (2s-1) \int_0^1 \chi_{(t,1]}(1-|x|^2) \chi_{(t,1]}(1-|y|^2) t^{2s-2}\,dt \,,
\end{align*}
and therefore,
\begin{align*}
& \tilde J[v] + c \int_{B_1} |v(x)|^2 w_N(x)^2\,dx \nonumber\\
& \geq (2s-1)
\int_0^1 \left( \int_{B_{\sqrt{1-t}}} \! \int_{B_{\sqrt{1-t}}} \frac{|v(x)-v(y)|^2 }{|x-y|^{N+2s}} \,dx\,dy
+ c \int_{B_{\sqrt{1-t}}} |v(x)|^2 \,dx \right) t^{2s-2}\,dt \,.
\end{align*}
The fractional Sobolev inequality \cite[(2.3)]{ChenKumagai} and a scaling argument imply that there is a $\tilde c>0$ such that for all $r>0$,
\begin{equation*}
r^{2s} \int_{B_r} \! \int_{B_r}
\frac{|v(x)-v(y)|^2 }{|x-y|^{N+2s}} \,dx\,dy
+ c \int_{B_r} |v(x)|^2 \,dx \geq \tilde c r^{2s} \left( \int_{B_r} |v(x)|^{q}\,dx \right)^{2/q}.
\end{equation*}
Combining the last two relations and applying Minkowski's inequality, we may bound
\begin{align}
& \tilde J[v] + c \int_{B_1} |v(x)|^2 w_N(x)^2\,dx \nonumber\\
& \geq (2s-1) \tilde c \int_0^1 \left( \int_{B_{\sqrt{1-t}}} |v(x)|^{q} \,dx \right)^{\frac{2}{q}} (\sqrt{1-t})^{2s} t^{2s-2}\,dt \nonumber\\
&\geq (2s-1) \tilde c
\left( \int_{B_1} |v(x)|^{q} \left( \int_0^{1-|x|^2} (1-t)^s t^{2s-2}\,dt \right)^{\frac{q}{2}} dx \right)^{\frac{2}{q}}. \label{suma}
\end{align}
We observe that
\[
\int_0^{1-|x|^2} (1-t)^s t^{2s-2}\,dt \geq B(s+1,2s-1) (1-|x|^2)^{2s-1},
\]
which follows from the fact that $y\mapsto \int_0^y (1-t)^s t^{2s-2}\,dt / \int_0^y t^{2s-2}\,dt$
is decreasing on $(0,1)$.
This allows us to bound the expression in (\ref{suma}) from below by
\begin{align*}
(2s-1)& B(s+1,2s-1) \tilde c \left( \int_{B_1} |v(x)|^{q} (1-|x|^2)^{(s-1/2)q} dx \right)^{\frac{2}{q}}\\
&=
(2s-1) B(s+1,2s-1) \tilde c \left( \int_{B_1} |u(x)|^{q} dx \right)^{\frac{2}{q}} \,,
\end{align*}
and we are done.
\end{proof}
This leaves us with proving Lemma \ref{Hball}. We need to introduce some notation. The regional fractional Laplacian (see, e.g., \cite{MR2214908})
on an open set $\Omega\subset\mathbb{R}^N$ is, up to a multiplicative constant, given by
\[
L_\Omega u(x) = \lim_{\varepsilon\to 0^+}
\int_{\Omega \cap \{|y-x|>\varepsilon\}}\frac{u(y)-u(x)}{|x-y|^{N+2s}} \,dy.
\]
This operator appears naturally in our context since
$$
\int_\Omega \overline{u(x)} (L_\Omega u)(x) \,dx
= -\frac12 \int_\Omega \! \int_\Omega \frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}} \,dx\,dy \,.
$$
Our proof of Lemma \ref{Hball} relies on a pointwise estimate for $L_{B_1} w_N$. In dimension $N=1$ this can be computed explicitly and we recall from \cite[Lemma 2.1]{DydaHint} that
\[
- L_{(-1,1)}w_1(x) = \frac{(1-x^2)^{\frac{-2s-1}{2}}}{2s} \left(
B( s+{\textstyle \frac{1}{2}} ,1-s) - (1 - x)^{2s} - (1 + x)^{2s} \right) \,.
\]
Hence, by \cite[(2.3)]{DydaHint},
\begin{equation}\label{Lw1}
- L_{(-1,1)}w_1(x) \geq c_1 (1-x^2)^{\frac{-2s-1}{2}} + c_2 (1-x^2)^{\frac{-2s+1}{2}},
\end{equation}
where
\[
c_1=\frac{B( s+{\textstyle \frac{1}{2}} ,1-s) - 2^{2s}}{2s},\quad
c_2=\frac{2^{2s}-2}{2s} \,.
\]
\begin{lem}\label{laplasjanupball}
Let $N\geq 2$ and let $w_N$ be as in Lemma \ref{Hball}. Then
\[
-L_{B_1} w_N(x) \geq \frac{c_1}{2} \int_{\mathbb S^{N-1}}|h_N|^{2s} dh
\cdot (1-|x|^2)^{-\frac{2s+1}{2}}
+ \frac{c_2}2|\mathbb S^{N-1}| \cdot (1-|x|^2)^{-\frac{2s-1}{2}} \,.
\]
\end{lem}
\begin{proof}
By rotation invariance we may assume that $\mathbf{x}=(0,0,\ldots,0,x)$. With the notation $p=\frac{2s-1}{2}$ we have
\begin{align*}
-L_{B_1}w_N(\mathbf{x}) &= p.v. \int_{B_1} \frac{(1-|\mathbf{x}|^2)^p - (1-|y|^2)^p}
{|\mathbf{x}-y|^{N+2s}}\,dy\\
&=\frac{1}{2} \int_{\mathbb S^{N-1}} dh
\; p.v.\int_{-xh_N - \sqrt{x^2h_N^2-x^2+1}}^{-xh_N + \sqrt{x^2h_N^2-x^2+1}}
\frac{ (1-|x|^2)^p - (1-|x+h t|^2)^p}{|t|^{1+2s}} \,dt \,.
\end{align*}
We calculate the inner principal value integral by changing the variable
$t=-xh_N + u \sqrt{x^2h_N^2-x^2+1}$
\begin{align*}
g(x,h)&:=
p.v.\int_{-xh_N - \sqrt{x^2h_N^2-x^2+1}}^{-xh_N + \sqrt{x^2h_N^2-x^2+1}}
\frac{ (1-|x|^2)^p - (1-|x+h t|^2)^p}{|t|^{1+2s}} \,dt\\
&=p.v. \int_{-1}^1
\frac{(1-x^2)^p - (1-u^2)^p(1-x^2+x^2h_N^2)^p}
{|-xh_N + u \sqrt{x^2h_N^2-x^2+1}|^{1+2s}}\,
\sqrt{x^2h_N^2-x^2+1}\,du\\
&=
(1-x^2+x^2h_N^2)^{p-s}
p.v.\int_{-1}^1 \frac{ (1- \frac{x^2h_N^2}{1-x^2+x^2h_N^2})^p - (1-u^2)^p}
{ |u - \frac{xh_N}{\sqrt{1-x^2+x^2h_N^2}}|^{1+2s}}\,du\\
&=(1-x^2+x^2h_N^2)^{-1/2}
(-L_{(-1,1)}w_1)(\frac{xh_N}{\sqrt{1-x^2+x^2h_N^2}}) \,.
\end{align*}
Hence by (\ref{Lw1}) we have
\begin{align*}
g(x,h) &\geq
(1-x^2+x^2h_N^2)^{-1/2} \bigg(
c_1 (1- \frac{x^2h_N^2}{1-x^2+x^2h_N^2})^{\frac{2s-1}{2}-2s}\\
&\qquad\qquad\qquad\qquad\qquad\quad
+ c_2 (1- \frac{x^2h_N^2}{1-x^2+x^2h_N^2})^{\frac{2s-1}{2}-2s+1}
\bigg)\\
&= c_1(1-x^2+x^2h_N^2)^{s}(1-x^2)^{-\frac{2s+1}{2}}+
c_2(1-x^2+x^2h_N^2)^{s-1}(1-x^2)^{-\frac{2s-1}{2}}\\
&\geq c_1 |h_N|^{2s} (1-x^2)^{-\frac{2s+1}{2}}
+ c_2 (1-x^2)^{-\frac{2s-1}{2}} \,.
\end{align*}
Thus
\begin{align*}
-L_{B_1}w_N(\mathbf{x}) &=
\frac{1}{2} \int_{\mathbb S^{N-1}}g(x,h) dh \\
&\geq
\frac{c_1}{2} \int_{\mathbb S^{N-1}}|h_N|^{2s} dh
\cdot (1-x^2)^{-\frac{2s+1}{2}}
+ \frac{c_2}2|\mathbb S^{N-1}| \cdot (1-x^2)^{-\frac{2s-1}{2}} \,,
\end{align*}
and we are done.
\end{proof}
Finally, we are in position to give the
\begin{proof}[Proof of Lemma \ref{Hball}]
We use the ground state representation formula \cite{FrSe1}, see also \cite[Lemma 2.2]{DydaHint},
\begin{align*}
\int_{B_1} \! \int_{B_1}
\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}} \,dx\,dy + 2\int_{B_1} \frac{Lw_N(x)}{w_N(x)} |u(x)|^2 \,dx
= \tilde J[v]
\end{align*}
with $u=w_N v$ and $\tilde J$ as defined in the lemma. The assertion now follows from Lemma \ref{laplasjanupball}, which implies that
$$
-2 \frac{Lw_N(x)}{w_N(x)} \geq \mathcal D_{N,2,s} \frac{2^{2s}}{(1-|x|^2)^{2s}} + c (1-|x|^2)^{-2s+1}
$$
with $c=c_2 |\mathbb S^{N-1}|>0$. Indeed, here we used $2^{2s-1} \mathcal D_{1,2,s} = c_1$ and
\[
\mathcal D_{N,2,s} = \mathcal D_{1,2,s}\cdot \frac{1}{2}\int_{\mathbb S^{N-1}} |h_N|^{2s}\,dh \,,
\]
as a quick computation shows.
\end{proof}
\section{The inequality in the general case}\label{sec:proof3}
In this section we shall give a complete proof of Theorem \ref{thm:HSM}. Our strategy is somewhat reminiscent of the proof of the Hardy--Sobolev--Maz'ya inequality in the local case in \cite{FLHSM}. As in that paper we use an averaging argument \`a la Gagliardo--Nirenberg to reduce the multi-dimensional case to the one-dimensional case. We describe this reduction in Subsection \ref{sec:reduc} and establish the required 1D inequality in Subsection \ref{sec:key}.
\subsection{Reduction to one dimension}\label{sec:reduc}
The key ingredient in our proof of Theorem \ref{thm:HSM} is the following pointwise estimate of a function on an interval.
\begin{lem}\label{lem:dim1}
Let $0<s<1$, $q\geq 1$ and $p\geq 2$ with $ps>1$. Then there is a $c=c(s,q,p)<\infty$ such that for all $f\in C_c^\infty(-1,1)$
\begin{equation}\label{eq:aim}
\|f\|_\infty^{p+q(ps-1)} \leq
c \left( \int_{-1}^1 \int_{-1}^1 \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps}}\,dy\,dx -\mathcal D_{1,p,s} \int_{-1}^1 \frac{|f(x)|^p}{(1-|x|)^{ps}}\,dx
\right)
\|f\|_q^{q(ps-1)}.
\end{equation}
\end{lem}
Due to the particular form of the exponents this inequality has a scale-invariant form.
\begin{cor}\label{cor:key}
Let $0<s<1$, $q\geq 1$ and $p\geq 2$ with $ps>1$. Then, with the same constant $c=c(p,s,q)<\infty$ as in Lemma~\ref{lem:dim1}, we have for all open sets $\Omega\subsetneq \mathbb{R}$ and all $f\in C_c^\infty(\Omega)$
\begin{equation}\label{eq:aimOmega}
\|f\|_\infty^{p+q(ps-1)} \leq
c \left( \int_\Omega \int_\Omega \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps}}\,dy\,dx -\mathcal D_{1,p,s} \int_\Omega \frac{|f(x)|^p}{d(x)^{ps}}\,dx
\right)
\|f\|_q^{q(ps-1)}
\end{equation}
where $d(x)=\dist(x,\Omega^c)$.
\end{cor}
\begin{proof}
From Lemma~\ref{lem:dim1}, by translation and dilation, we obtain (\ref{eq:aimOmega}) for any interval and half-line.
The extension to arbitrary open sets is straightforward.
\end{proof}
We prove Lemma \ref{lem:dim1} in Subsection \ref{sec:key}. Now we show how this corollary allows us to deduce our main theorem. Taking advantage of an averaging formula of Loss and Sloane \cite{LossSloane} the argument is almost the same as in \cite{FLHSM}, but we reproduce it here to make this paper self-contained.
\begin{proof}[Proof of Theorem~\ref{thm:HSM}]
Let $\omega_1$, \ldots, $\omega_N$ be an orthonormal basis in $\mathbb{R}^N$. We write $x_j$ for the $j$-th coordinate of $x\in \mathbb{R}^N$ in this basis, and $\tilde{x}_j = x- x_j\omega_j$.
By skipping the $j$-th coordinate of $\tilde{x}_j$ (which is zero), we may regard $\tilde{x}_j$ as an element of $\mathbb{R}^{N-1}$.
For a~given domain $\Omega\subsetneq\mathbb{R}^N$
we write
\[
d_j(x) = d_{\omega_j}(x) = \inf\{ |t| : x+t\omega_j \not\in \Omega \}.
\]
If $u\in C_c^\infty(\Omega)$, then Corollary~\ref{cor:key} yields
\[
|u(x)| \leq C (g_j(\tilde{x}_j) h_j(\tilde{x}_j) )^{\frac{1}{p+q(ps-1)}}
\]
for any $1\leq j \leq N$, where
\begin{align*}
g_j(\tilde{x}_j) = & \int_{\tilde x_j +a\omega_j\in \Omega} \! da \int_{\tilde x_j +b\omega_j\in\Omega} \! db \ \frac{ |u(\tilde x_j+a\omega_j)-u(\tilde{x}_j+b\omega_j)|^p }{|a-b|^{1+ps}} \\
& - \mathcal D_{1,p,s} \int_{\mathbb{R}} da\, \frac{|u(\tilde x_j + a\omega_j)|^p}{d_j(\tilde x_j + a\omega_j)^{ps}}
\end{align*}
and
\begin{align*}
h_j(\tilde{x}_j) = \left( \int_{\mathbb{R}} da\, |u(\tilde x_j + a\omega_j)|^q \right)^{ps-1} .
\end{align*}
Thus
\[
|u(x)|^N \leq C^N \prod_{j=1}^N (g_j(\tilde{x}_j) h_j(\tilde{x}_j) )^{\frac{1}{p+q(ps-1)}} \,.
\]
We now pick $q=\frac{pN}{N-ps}$ and rewrite the previous inequality as
\[
|u(x)|^q \leq C^q \prod_{j=1}^N (g_j(\tilde{x}_j) h_j(\tilde{x}_j) )^{\frac{1}{ps(N-1)}}.
\]
By a standard argument based on repeated use of H\"older's inequality (see, e.g., \cite[Lemma~2.4]{FLHSM}) we deduce that
\[
\int_{\mathbb{R}^N} |u(x)|^q\,dx \leq C^q \prod_{j=1}^N
\left( \int_{\mathbb{R}^{N-1}} g_j(y)^{\frac{1}{ps}} h_j(y)^{\frac{1}{ps}}\,dy \right)^{\frac{1}{N-1}} \,.
\]
We note that
\[
\| h_j^{\frac{1}{ps-1}} \|_{L^1(\mathbb{R}^{N-1})} = \| u \|_{L^q(\mathbb{R}^N)}^q \qquad \text{for every } j=1,\ldots, N
\]
and derive from the H\"older and the arithmetic--geometric mean inequality that
\begin{align*}
\prod_{j=1}^N \int_{\mathbb{R}^{N-1}} g_j(y)^{\frac{1}{ps}} h_j(y)^{\frac{1}{ps}}\,dy
&\leq
\prod_{j=1}^N \|g_j\|_1^{\frac{1}{ps}} \|h_j^{\frac{1}{ps-1}} \|_1^{\frac{ps-1}{ps}}
=\| u \|_q^{\frac{q(ps-1)N}{ps}} \prod_{j=1}^N \|g_j\|_1^{\frac{1}{ps}}\\
&\leq \| u \|_q^{\frac{q(ps-1)N}{ps}} \left( N^{-1} \sum_{j=1}^N \|g_j\|_1 \right)^{\frac{N}{ps}}.
\end{align*}
To summarize, we have shown that
\[
\|u\|_q^p \leq C^{\frac{p^2s(N-1)}{N-ps}} N^{-1} \sum_{j=1}^N \|g_j\|_1.
\]
We now average this inequality over all choices of the coordinate system $\omega_j$. We recall the Loss--Sloane formula \cite[Lemma~2.4]{LossSloane}
\begin{align*}
& \int_{\Omega}\int_{\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy \\
& = \frac12 \int_{\mathbb S^{N-1}} \!\!d\omega \int_{\{x:\, x\cdot\omega=0\}} \!\!d\mathcal L_\omega(x) \int_{x+a\omega\in\Omega} \!\!da
\int_{x+b\omega\in\Omega} \!\!db \ \frac{|u(x+a\omega)-u(x+b\omega)|^p}{|a-b|^{1+ps}},
\end{align*}
where $\mathcal L_\omega$ is $(N-1)$-dimensional Lebesgue measure on the hyperplane $\{x:\, x\cdot\omega=0\}$. Thus we arrive at
\begin{align*}
\|u\|_q^p \leq \frac{2\, C^{\frac{p^2s(N-1)}{N-ps}}}{|\mathbb S^{N-1}|} \bigg(&
\int_{\Omega}\int_{\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+ps} } \,dx\,dy\\
& - \mathcal D_{1,p,s}\frac{\pi^{\frac{N-1}{2}} \Gamma(\frac{1+ps}{2}) }{\Gamma(\frac{N+ps}{2}) }
\int_\Omega \frac{|u(x)|^p}{m_{ps}(x)^{ps}} \,dx \bigg) \,.
\end{align*}
Recalling the definition of $\mathcal D_{N,p,s}$ we see that this is the inequality claimed in Theorem \ref{thm:HSM}.
\end{proof}
\subsection{Proof of the key inequality}\label{sec:key}
Our first step towards the proof of Lemma \ref{lem:dim1} is a Hardy inequality on an interval with a remainder term. Note the similarity to Lemma \ref{Hball}.
\begin{lem}\label{lem:prem}
Let $0<s<1$ and $p\geq 2$ with $ps>1$. Then
\begin{align*}
& \int_0^1 \int_0^1 \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps} } \,dx\,dy - \mathcal D_{1,p,s} \int_0^1 \frac{|f(x)|^p}{x^{ps}}\,dx \\
& \quad \geq c_p \int_0^1 \int_0^1 \frac{|v(x)-v(y)|^p}{|x-y|^{1+ps} } \omega(x)^{p/2}\omega(y)^{p/2} \,dx\,dy
+ \int_0^1 W_{p,s}(x) |v(x)|^p \omega(x)^p\,dx
\end{align*}
for all $f$ with $f(0)=0$ (and no boundary condition at $x=1$). Here $\omega(x)=x^{(ps-1)/p}$ and $f=\omega v$. The function $W_{p,s}$ is bounded away from zero and satisfies
\[
W_{p,s}(x) \approx x^{-(p-1)(ps-1)/p} \qquad\text{for}\ x\in(0,1/2]
\]
and
\[
W_{p,s}(x) \approx
\begin{cases}
1 & \text{if}\ p-1-ps>0 \,, \\
|\ln(1-x)| & \text{if} \ p-1-ps=0 \,, \\
(1-x)^{-1-ps+p} & \text{if} \ p-1-ps<0 \,,
\end{cases}
\qquad \text{for}\ x\in [1/2,1).
\]
\end{lem}
\begin{proof}
The general ground state representation \cite{FrSe1} reads
\begin{align*}
\int_0^1 \int_0^1 \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps} } \,dx\,dy
&\geq \int_0^1 V(x) |f(x)|^p \\
&+ c_p \int_0^1 \int_0^1 \frac{|v(x)-v(y)|^p}{|x-y|^{1+ps} } \omega(x)^{p/2}\omega(y)^{p/2} \,dx\,dy
\end{align*}
with
$$
V(x) := 2 \omega(x)^{-p+1} \int_0^1 \left(\omega(x) -\omega(y)\right)
\left|\omega(x) - \omega(y) \right|^{p-2} |x-y|^{-1-ps}\,dy
$$
(understood as principal value integral). We decompose
\begin{align*}
V(x) & = 2 \omega(x)^{-p+1} \int_0^\infty \left(\omega(x) -\omega(y)\right)
\left|\omega(x) - \omega(y) \right|^{p-2} |x-y|^{-1-ps}\,dy \\
& \qquad - 2 \omega(x)^{-p+1} \int_1^\infty \left(\omega(x) -\omega(y)\right)
\left|\omega(x) - \omega(y) \right|^{p-2} |x-y|^{-1-ps}\,dy \\
& = \frac{\mathcal D_{1,p,s}}{x^{ps}} + W_{p,s}(x) \,.
\end{align*}
(The computation of the first term is in
\cite[Lemma 2.4]{FrankSeiringer}.) For $x\in(0,1)$, the second term is positive; indeed,
$$
W_{p,s}(x) = 2 \omega(x)^{-p+1} \int_1^\infty \left(\omega(y) -\omega(x)\right)^{p-1} (y-x)^{-1-ps}\,dy \,.
$$
Note that at $x=0$
$$
\int_1^\infty \omega(y)^{p-1} y^{-1-ps}\,dy = c_{p,s} <\infty
$$
since $ps-(p-1)(ps-1)/p>0$. Hence $W_{p,s}(x) \sim 2c_{p,s} x^{-(p-1)(ps-1)/p}$ as $x\to 0$. On the other hand, at $x=1$, we have
$$
\int_1^\infty \left(\omega(y) -1\right)^{p-1} (y-1)^{-1-ps}\,dy =\tilde c_{p,s} <\infty
\qquad \text{if}\ p-1-ps>0 \,.
$$
Hence $W_{p,s}(x) \to 2\tilde c_{p,s}$ as $x\to 1$ in that case. In the opposite case, one easily finds that for $x=1-\epsilon$, to leading order only $y$'s with $y-1$ of order $\epsilon$ contribute. Hence $W_{p,s}(x) \sim 2\tilde c_{p,s} (1-x)^{-1-ps+p}$ as $x\to 1$ if $p-1-ps<0$ and $W_{p,s}(x) \sim 2\tilde c_{p,s} |\ln(1-x)|$ if $p-1-ps=0$.
\end{proof}
\begin{cor}\label{cor:prem}
Let $0<s<1$ and $p\geq 2$ with $ps>1$. Then
\begin{align*}
& \int_{-1}^1 \int_{-1}^1 \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps} } \,dx\,dy - \mathcal D_{1,p,s} \int_{-1}^1 \frac{|f(x)|^p}{(1-|x|)^{ps}}\,dx \\
& \qquad \geq c_p \left( \int_{-1}^0 \int_{-1}^0 + \int_0^1\int_0^1\right)
\frac{|v(x)-v(y)|^p}{|x-y|^{1+ps} } \omega(x)^{p/2}\omega(y)^{p/2} \,dx\,dy\\
&\qquad\quad+ c_{p,s} \int_{-1}^1 |v(x)|^p \omega(x)\,dx
\end{align*}
for all $f$ with $f(-1)=f(1)=0$. Here $\omega(x)=(1-|x|)^{(ps-1)/p}$ and $f=\omega v$.
\end{cor}
\begin{proof}
The corollary follows by applying Lemma~\ref{lem:prem} to the functions $f_1(x)=f(x-1)$ and $f_2(x)=f(1-x)$, where $x\in [0,1]$,
and adding the resulting inequalities.
\end{proof}
The second ingredient besides Lemma \ref{lem:prem} in our proof of Lemma \ref{lem:dim1} is the following bound due to Garsia, Rodemich and Rumsey \cite{GRR}.
\begin{lem}
Let $p,s>0$ with $ps>1$. Then for any continuous function $f$ on $[a,b]$
\begin{equation}\label{eq:key}
\int_a^b \int_a^b \frac{|f(x)-f(y)|^p}{|x-y|^{1+ps}}\,dy\,dx \geq c\ \frac{|f(b)-f(a)|^p}{(b-a)^{ps-1}}
\end{equation}
with $c=(ps-1)^p (8(ps+1))^{-p}/4$.
\end{lem}
\begin{proof}
This follows by taking $\Psi(x)=|x|^p$ and $p(x)=|x|^{s+1/p}$ in \cite[Lemma 1.1]{GRR}.
\end{proof}
After these preliminaries we can now turn to the
\begin{proof}[Proof of Lemma \ref{lem:dim1}]
Let $\omega(x)=(1-|x|)^{(ps-1)/p}$. Substituting $v=f/\omega$ and applying Corollary~\ref{cor:prem}, we see that it suffices to prove
\begin{align}\label{eq:aim2}
& \|v\omega\|_\infty^{p+q(ps-1)} \leq
c\bigg( \int_{-1}^1 |v(x)|^p \omega(x)\,dx \\
&\quad+
\left( \int_{-1}^0 \int_{-1}^0 + \int_0^1\int_0^1\right)
\frac{|v(x)-v(y)|^p}{|x-y|^{1+ps} } \omega(x)^{p/2}\omega(y)^{p/2} \,dx\,dy
\bigg)
\|v\omega\|_q^{q(ps-1)}.\nonumber
\end{align}
Without loss of generality, we may assume that $v$ is non-negative and that for some $x_0\in [0,1)$ we have $v(x_0)\omega(x_0)=\|v\omega\|_\infty > 0$. Let $c_1=\omega(\frac{1}{2})/(2\omega(0)) \in (0,1)$.
We distinguish three cases.
\emph{Case 1:} $x_0\in [0, \frac{1}{2}]$
and $v\omega \geq c_1 v(x_0)\omega(x_0)$ on $[0, \frac{1}{2}]$.
Then $\int_{-1}^1 |v|^p\omega \geq
\int_{0}^{1/2} |v|^p\omega^p \geq \frac{c_1^p}{2} |v(x_0)\omega(x_0)|^p$ and
$\int_{-1}^1 |v\omega|^q \geq \frac{c_1^q}{2} |v(x_0)\omega(x_0)|^q$,
hence (\ref{eq:aim2}) follows.
\emph{Case 2:} $x_0\in [0, \frac{1}{2}]$ and there is a $z \in [0, \frac{1}{2}]$ such that $v(z)\omega(z) \leq c_1 v(x_0)\omega(x_0)$.
Let $z$ be closest possible to $x_0$, so that $v(z)\omega(z)= c_1 v(x_0)\omega(x_0)$ and
$v\omega \geq c_1 v(x_0)\omega(x_0)$ on the interval $I$ with endpoints $x_0$ and $z$.
We observe that
\[
v(z) = c_1 v(x_0) \frac{\omega(x_0)}{\omega(z)} = \frac{v(x_0)}{2} \frac{\omega(x_0)}{\omega(0)} \frac{\omega(\frac{1}{2})}{\omega(z)} \leq \frac{v(x_0)}{2}.
\]
We have by~(\ref{eq:key})
\begin{align*}
\int_0^1 \int_0^1 & \frac{|v(x)-v(y)|^p}{|x-y|^{1+ps}}\,\omega(x)^{p/2}\omega(y)^{p/2}\,dy\,dx \\
& \geq \omega({\textstyle \frac{1}{2}})^p \int_I \int_I \frac{|v(x)-v(y)|^p}{|x-y|^{1+ps}}\,dy\,dx \\
& \geq c |v(x_0)-v(z)|^p |z-x_0|^{1-ps} \geq c' |v(x_0)\omega(x_0)|^p |z-x_0|^{1-ps}.
\end{align*}
On the other hand,
\begin{align*}
\int_{-1}^1 |v\omega|^q &\geq \int_I |v\omega|^q \geq c_1^q |v(x_0)\omega(x_0)|^q |z-x_0| \,.
\end{align*}
Hence (\ref{eq:aim2}) follows.
\emph{Case 3:} $x_0 \in (\frac{1}{2},1)$.
Since the function $x\mapsto \omega(x)/\omega(\frac{x}{2})$ is decreasing on $[0,1)$, we have that
\[
\frac{\omega(x_0)}{\omega(x_0/2)} \leq \frac{\omega(1/2)}{\omega(1/4)} =: c_2.
\]
Since $v(\frac{x_0}{2})\omega(\frac{x_0}{2}) \leq v(x_0)\omega(x_0)$, we get that $v(\frac{x_0}{2}) \leq c_2 v(x_0)$. Hence there exists
$z\in [\frac{x_0}{2}, x_0)$ such that $v(z)=c_2v(x_0)$ and $v \geq c_2v(x_0)$ on $[z, x_0]$.
We have by~(\ref{eq:key})
\begin{align*}
\int_0^1 \int_0^1 &\frac{|v(x)-v(y)|^p}{|x-y|^{1+ps}}\,\omega(x)^{p/2}\omega(y)^{p/2}\,dy\,dx \\
& \geq \omega(x_0)^p \int_{z}^{x_0} \int_{z}^{x_0} \frac{|v(x)-v(y)|^p}{|x-y|^{1+ps}}\,dy\,dx \\
& \geq c \omega(x_0)^p |v(x_0)-v(z)|^p |z-x_0|^{1-ps} \geq c' |v(x_0)\omega(x_0)|^p |z-x_0|^{1-ps}.
\end{align*}
Also,
\[
\int_{-1}^1 |v\omega|^q \geq \omega(x_0)^q \int_z^{x_0} |v|^q \geq c_2^q |v(x_0)\omega(x_0)|^q |z-x_0| \,.
\]
and again (\ref{eq:aim2}) follows. This completes the proof of Lemma \ref{lem:dim1}.
\end{proof}
Consider the search for a pair of eyeglasses. Perhaps we look in the two or three most likely places without success, and resign ourselves to checking the top shelf of the refrigerator. However, particularly if the missing eyeglasses are ours, we know that the target may well have been missed in any of the previous locations. We might find our target more quickly if, from time to time, we go back and start the search again.
While the mental framework of using restart to shorten the mean time of search is a useful analogy, this principle can be used to alter the dynamics of many kinds of First Passage (FP) processes across physics \cite{gupta2014fluctuating}, chemistry \cite{reuveni2014role}, biology \cite{roldan2016stochastic} and computer science \cite{huang2007effect}.
To solidify this concept, suppose we have some stochastic process $\mathcal{W}_n$ with state space $\mathcal{S}$ that starts in some initial set $A\subset\mathcal{S}$, by which we mean $\mathcal{W}_0\in A$. Then suppose the FP characteristics of $\mathcal{W}_n$ into some target set $B\subset\mathcal{S}$ are known, with the hitting time of this underlying process denoted by $\mathcal{U}$. Our main interest is whether restarting this process at random intervals might change the mean FP time on some external clock. That is, we define a process $\mathcal{W}_n^*$ that starts at $A$ and returns to $A$ at randomly determined times we call restarts, and which has the same dynamics as $\mathcal{W}_n$ between those restart events.
For this paper, we assume that the process evolves in discrete time, meaning $n\in\mathbb{N}\coloneqq\{0,1,2,\ldots\}$, and that $A\cap B=\emptyset$. For the first assumption, we note that other authors have already spent much time investigating the continuous time case \cite{evans2020stochastic}. For the second, it is helpful for some of our results to assume that the FP time of the underlying process is almost surely not 0 ($\mathbb{P}(\mathcal{U}=0)=0$). Up through Section \ref{sec:hitting_times}, it would be relatively simple to relax this condition, but it is critical for part of Section \ref{sec:ET<EU} (as we note there).
\section{Defining the FPUR} \label{sec:definitions}
In this paper, we largely work within the framework established by Pal and Reuveni in \cite{pal2017first}, and parallel the recent work of Bonomo and Pal in \cite{bonomo2021first} by giving a recursive definition for the hitting time of the First Passage Under Restart (FPUR) process and then deriving its generating function. We denote by $\mathcal{U}$ the FP time of the underlying process, by $\mathcal{R}$ the time until the next restart, and by $\mathcal{T}$ the FP time of the process with restart. Using $\mathcal{T}^*$ to represent an independent and identically distributed copy of $\mathcal{T}$, we have
\begin{equation} \label{eq:recursive_definition}
\mathcal{T} =
\begin{cases}
\mathcal{U} & \text{if } \mathcal{R} > \mathcal{U} \\
\mathcal{R} + \mathcal{T}^* & \text{if } \mathcal{R} \le \mathcal{U}
\end{cases}.
\end{equation}
In other words, if the underlying process reaches the target set before the restart occurs, then the FPUR also concludes at that time. If, however, the restart occurs before or simultaneously with the underlying process finishing, the time of the restart is noted, and $\mathcal{U}$ and $\mathcal{R}$ are drawn anew from their respective distributions. It's worth emphasizing that, in the case that the first passage of the underlying process and the restart occur at the same time, the restart ``wins'' the tie, and the process is reset to its initial position. This is an arbitrary decision with some consequences that will be discussed when they come up. In \cite{bonomo2021first}, Bonomo and Pal give a similar derivation in which the weak and strict inequalities are reversed in the recursive definition.
Even before deriving the generating function for $\mathcal{T}$, much can be seen directly from (\ref{eq:recursive_definition}), including a simple expression for $\mathbb{E}[\mathcal{T}]$, with $p_r \coloneqq \mathbb{P}(\mathcal{R}\le\mathcal{U})$.
\begin{align*}
\mathbb{E}[\mathcal{T}] & = \mathbb{E}[\mathcal{U} \mid \mathcal{R}>\mathcal{U}]\left(1-p_r\right) + \mathbb{E}[\mathcal{R} + \mathcal{T}^* \mid \mathcal{R} \le \mathcal{U}]p_r \\
& = \mathbb{E}[\mathcal{U} \mid \mathcal{R}>\mathcal{U}]\left(1-p_r\right) + \left(\mathbb{E}[\mathcal{R} \mid \mathcal{R} \le \mathcal{U}] + \mathbb{E}[\mathcal{T}]\right)p_r \\
& = \frac{1}{1-p_r}\left( \mathbb{E}[\mathcal{U} \mid \mathcal{R}>\mathcal{U}]\left(1-p_r\right) + \mathbb{E}[\mathcal{R} \mid \mathcal{R} \le \mathcal{U}]p_r \right),
\end{align*}
where the second equality uses that $\mathcal{T}^*$ is independent of $(\mathcal{R},\mathcal{U})$ and identically distributed as $\mathcal{T}$, and the third follows by solving for $\mathbb{E}[\mathcal{T}]$.
This admits two quick and useful interpretations:
\begin{align}
\mathbb{E}[\mathcal{T}] & = \frac{\mathbb{E}[\mathcal{U} \wedge \mathcal{R}]}{1 - p_r},\text{ and} \label{eq:min_formula}\\
\mathbb{E}[\mathcal{T}] & = \mathbb{E}[\mathcal{U} \mid \mathcal{R} > \mathcal{U}] + \frac{p_r}{1-p_r}\mathbb{E}[\mathcal{R} \mid \mathcal{R} \le \mathcal{U}]\label{eq:num_restart_formula}.
\end{align}
The first is very concise, and the second gives a clearer picture of the restart's effect, where $\frac{p_r}{1-p_r}$ is the expected number of restarts before the FPUR process reaches the target set. We will return to these formulas in later sections.
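Both formulas are straightforward to check by simulation. The sketch below (in Python; the choice of distributions is an arbitrary illustration, not part of any result) samples $\mathcal{T}$ directly from the recursive definition (\ref{eq:recursive_definition}) and compares the empirical mean against (\ref{eq:min_formula}).
\begin{verbatim}
import math, random, statistics

def sample_T(sample_U, sample_R):
    # Sample T via the recursive definition; a restart (R <= U)
    # accumulates the elapsed time R and redraws both variables.
    t = 0
    while True:
        u, r = sample_U(), sample_R()
        if r > u:
            return t + u
        t += r

# Illustration: U uniform on {1,...,10}, geometric restart with rate 0.2
rho = 0.2
sample_U = lambda: random.randint(1, 10)
sample_R = lambda: 1 + int(math.log(1 - random.random())
                           / math.log(1 - rho))

n = 200_000
est = statistics.mean(sample_T(sample_U, sample_R) for _ in range(n))
pairs = [(sample_U(), sample_R()) for _ in range(n)]
p_r = statistics.mean(r <= u for (u, r) in pairs)
e_min = statistics.mean(min(u, r) for (u, r) in pairs)
print(est, e_min / (1 - p_r))  # the two estimates agree closely
\end{verbatim}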
Before progressing, we must also offer an important definition. We call a restart \underline{preemptive} if $\mathcal{R} \le \mathcal{U}$ almost surely, i.e., $p_r = 1$. This is a property not of the restart distribution alone (in general), but of the interplay between the restart and the underlying FP time distributions. In particular, we mostly consider FPUR processes with \underline{non-preemptive} restart, since the alternative is often uninteresting. Since our choice in (\ref{eq:recursive_definition}) means that $\mathcal{R}\le\mathcal{U}$ prevents the process from terminating, any process with preemptive restart will almost surely never terminate and thus have an infinite mean hitting time. One particularly pathological case is when the underlying process cannot finish in finite time. If $\mathcal{U}=\infty$ almost surely, then any restart distribution is preemptive. To avoid this issue, we assume that $\mathbb{P}(\mathcal{U}=\infty)<1$ in all that follows, unless otherwise noted.
\section{Obtaining the Generating Function for $\mathcal{T}$}\label{sec:PGF_for_T}
When characterizing a discrete random variable, $X$, the usefulness of its Probability Generating Function (PGF) can hardly be overstated. Denoting the probability mass function of $X$ by $x(n)$ allows us to define its PGF as follows.
\begin{equation*}
\tilde x(z) = \sum_{n\ge0} x(n)z^n
\end{equation*}
This expression as a $z$-transform of $x(n)$ allows us to employ numerous techniques for power series with positive coefficients. Additionally, one can easily determine the $k$-th factorial moment of $X$ (that is, $\mathbb{E}[X(X-1)\cdots(X-k+1)]$) by taking the corresponding derivative and evaluating the result at $z=1$. Perhaps most usefully, we can evaluate $\tilde x(z)$ and its first derivative to obtain:
\begin{itemize}
\item $\tilde x(1) = \sum_{n\ge0}x(n) = \mathbb{P}(X<\infty)$, and
\item $\tilde x'(1) = \sum_{n\ge0}nx(n) = \mathbb{E}[X]$, so long as $\tilde x(1)=1$.
\end{itemize}
It's worth noting that we might generally expect that $\tilde x(1) = 1$, or that the sum of the probability mass equals 1. It is not, however, necessary that this is the case. In particular, if there is some nonzero probability that $X$ does not occur in finite time (such as when our underlying process might escape to infinity), then $\tilde x(1)$ will be the complement of that probability, denoted $\mathcal{E}_X \coloneqq \mathbb{P}(X\text{ is finite})$, often called the hitting probability.
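As a simple worked example (using the geometric distribution that reappears below as a restart mechanism), if $X$ has mass function $x(n)=\rho(1-\rho)^{n-1}$ for $n\in\mathbb{Z}^+$, then
\begin{equation*}
\tilde x(z) = \sum_{n\ge1}\rho(1-\rho)^{n-1}z^n = \frac{\rho z}{1-(1-\rho)z},
\end{equation*}
so that $\tilde x(1)=1$ (the distribution is proper, $\mathcal{E}_X=1$) and $\tilde x'(1)=\rho/\left(1-(1-\rho)\right)^{2}=1/\rho=\mathbb{E}[X]$, as expected.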
When we address the topic of hitting times for a stochastic process, the probability generating function gives us a powerful tool for characterizing the first passage behavior. Thus, the ability to write the PGF for $\mathcal{T}$, denoted $\tilde t(z)$, using only the PGFs of $\mathcal{U}$ and $\mathcal{R}$ is very valuable. The following lemma provides a formula for $\tilde t(z)$ given $\tilde u(z)$ and $\tilde r(z)$.
\begin{lemma}\label{lemma-pdf_FPUR_gf}
Given the PGF for $\mathcal{U}$ and $\mathcal{R}$, we can write the probability generating function for $\mathcal{T}$ as
\begin{equation*}
\tilde t(z) = \frac{\tilde u(z) - \sum_{n=0}^\infty z^nu(n)\sum_{i=0}^n r(i)}{1 - \tilde r(z)(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)\sum_{i=0}^n z^ir(i)}.
\end{equation*}
\end{lemma}
\begin{proof}
Directly from (\ref{eq:recursive_definition}), we can write
\begin{align*}
\mathbb{P}(\mathcal{T}=n) & = \mathbb{P}(\mathcal{T}=n \mid \mathcal{R}>\mathcal{U})\cdot\mathbb{P}(\mathcal{R}>\mathcal{U}) \\
& \quad + \mathbb{P}(\mathcal{T} = n \mid \mathcal{R}\le\mathcal{U})\cdot\mathbb{P}(\mathcal{R}\le\mathcal{U}) \\
& = \mathbb{P}(\mathcal{U}=n \text{ and } \mathcal{U}<\mathcal{R}) \\
& \quad + \mathbb{P}(\mathcal{R}+\mathcal{T}^*=n \text{ and } \mathcal{R}\le \mathcal{U}).
\end{align*}
Recalling that $\mathcal{U}$, $\mathcal{R}$ and $\mathcal{T}^*$ are independent allows us to simplify to
\begin{align*}
t(n) & \coloneqq \mathbb{P}(\mathcal{T}=n) \\
& = u(n)\left(1-\sum_{i=0}^n r(i)\right) \\
& \quad + \sum_{i=0}^n \left(r(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right)t^*(n-i).
\end{align*}
We continue with a straightforward method for producing the generating function: taking the $z$-transform. Multiplying both sides of the preceding equation by $z^n$ and summing over $n\in\mathbb{N}$ gives
\begin{align*}
\sum_{n=0}^\infty z^n t(n) & = \sum_{n=0}^\infty z^nu(n)\left[1 - \sum_{i=0}^n r(i)\right] \\
& + \sum_{n=0}^\infty z^n\sum_{i=0}^n \left(r(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right)t^*(n-i).
\end{align*}
On the left, we have $\tilde t(z)$ by definition. On the right-hand side, we can focus our attention on the second summand. We split the powers of $z$ and change the order of the first two sums to obtain
\begin{align*}
& \sum_{n=0}^\infty z^n\sum_{i=0}^n \left(r(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right)t^*(n-i) \\
& = \sum_{n=0}^\infty \sum_{i=0}^n \left(z^ir(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right)z^{n-i}t^*(n-i) \\
& = \sum_{i=0}^\infty \left(z^ir(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right) \sum_{n=i}^\infty z^{n-i}t^*(n-i) \\
& = \sum_{i=0}^\infty \left(z^ir(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right) \tilde t^*(z) \\
& = \tilde t(z) \sum_{i=0}^\infty \left(z^ir(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right),
\end{align*}
where the last step is possible because $\mathcal{T}^*$ is an identically distributed copy of $\mathcal{T}$. Replacing the second summand in the earlier expression and solving for $\tilde t(z)$ gives us the formula,
\begin{equation*}
\tilde t(z) = \frac{\sum_{n=0}^\infty z^nu(n)\left[1 - \sum_{i=0}^n r(i)\right]}{1 - \sum_{i=0}^\infty \left(z^ir(i)\left[1 - \sum_{j=0}^{i-1} u(j)\right]\right)}.
\end{equation*}
This expression is actually sufficient for many useful calculations, but it can be convenient to rewrite it. From here, some simple algebra and another exchange of summation order gives us
\begin{align*}
\tilde t(z) & = \frac{\sum_{n=0}^\infty z^nu(n) - \sum_{n=0}^\infty z^nu(n)\sum_{i=0}^n r(i)}{1 - \sum_{i=0}^\infty z^ir(i) + \sum_{i=0}^\infty z^ir(i)\sum_{j=0}^{i-1} u(j)} \\
& = \frac{\tilde u(z) - \sum_{n=0}^\infty z^nu(n)\sum_{i=0}^n r(i)}{1 - \tilde r(z) + \sum_{j=0}^\infty u(j) \left[\tilde r(z) - \sum_{i=0}^j z^ir(i)\right]} \\
& = \frac{\tilde u(z) - \sum_{n=0}^\infty z^nu(n)\sum_{i=0}^n r(i)}{1 - \tilde r(z)(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)\sum_{i=0}^n z^ir(i)}.
\end{align*}
\end{proof}
\subsection{Two common restart mechanisms}
With the formula for $\tilde t(z)$ in hand, it behooves us to cover two common distributions for restart: a geometric distribution with constant rate $\rho$, and a deterministic or sharp distribution at some constant time $N$. These two distributions are of particular interest for several reasons, not least that we can actually compute some useful results explicitly.
\subsubsection{The geometric restart}
In this text, we define a geometric distribution by its cumulative mass function as $R(n) = 1-(1-\rho)^n$ for $n\in\mathbb{N}$ with constant rate parameter $\rho\in(0,1)$. We consider the limiting cases $\rho\to0$ and $\rho\to1$ to be no restart and restart every step, respectively. From the cumulative distribution, we can easily write down the probability mass function, $r(n) = \rho(1-\rho)^{n-1}$ for $n\in\mathbb{Z}^+\coloneqq\{1,2,3,\ldots\}$, and the $z$-transform, $\tilde r(z) = \frac{\rho z}{1-(1-\rho)z}$. An observation worth making for the geometric restart is that $\mathbb{P}(\mathcal{R}\le\mathcal{U})<1$ and restart is non-preemptive for $\rho\in(0,1)$. In the following sections, many results depend on non-preemptive restart, so the geometric distribution is often a good candidate for experimentation. We also want to draw attention to the fact that the support of $\mathcal{R}$ is $\mathbb{Z}^+$ in this definition. This is in contrast to some other works (e.g. \cite{bonomo2021first}) that use an alternative parameterization of the distribution so that $\mathcal{R}\in\mathbb{N}$.
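As a numerical sanity check of Lemma \ref{lemma-pdf_FPUR_gf} with geometric restart (a sketch only; the truncation length and the choice of $\mathcal{U}$ are arbitrary illustrations), one can evaluate a truncated version of the intermediate formula for $\tilde t(z)$ from the proof and then recover $\mathbb{E}[\mathcal{T}]$ by a finite difference at $z=1$.
\begin{verbatim}
import numpy as np

def t_tilde(z, u, r):
    # Truncated PGF of T; u[n] = P(U = n), r[n] = P(R = n).
    R_cum = np.cumsum(r)                                  # P(R <= n)
    U_below = np.concatenate(([0.0], np.cumsum(u)[:-1]))  # P(U < n)
    zn = z ** np.arange(len(u))
    num = np.sum(zn * u * (1.0 - R_cum))
    den = 1.0 - np.sum(zn * r * (1.0 - U_below))
    if den == 0.0:
        return 0.0    # occurs only for preemptive restart at z = 1
    return num / den

M = 2000
u = np.zeros(M); u[1:11] = 0.1          # U uniform on {1,...,10}
rho = 0.2
k = np.arange(M)
r = np.zeros(M); r[1:] = rho * (1 - rho) ** (k[1:] - 1)

h = 1e-6
print((t_tilde(1.0, u, r) - t_tilde(1.0 - h, u, r)) / h)  # ~ E[T]
\end{verbatim}
The printed value matches the Monte Carlo estimate of $\mathbb{E}[\mathcal{T}]$ from Section \ref{sec:definitions}.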
\subsubsection{The sharp restart}
An even simpler distribution can be defined with cumulative mass function $R(n) = \mathbbm{1}_{[N,\infty)}(n) = \begin{cases}1 & n \ge N \\ 0 & n < N\end{cases}$ with $n\in\mathbb{N}$ for some parameter $N\in\mathbb{Z}^+$. This admits probability mass function $r(n) = \delta_{n,N}$ and PGF $\tilde r(z) = z^N$. In contrast to the geometric restart, this sharp restart can be preemptive. To illustrate: consider the case where the underlying stochastic process can reach its target set no earlier than time $m$. If $N\le m$, then $\mathbb{P}(\mathcal{R}\le\mathcal{U})=1$, the FPUR cannot ever reach the target set, and $\mathcal{T}$ is almost surely infinite.
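Reusing the truncated \texttt{t\_tilde} helper and the illustrative $\mathcal{U}$ (uniform on $\{1,\ldots,10\}$, so $m=1$) from the sketch above, the preemptive threshold is easy to see numerically:
\begin{verbatim}
def sharp(N, M=2000):
    r = np.zeros(M); r[N] = 1.0   # P(R = N) = 1
    return r

print(t_tilde(1.0, u, sharp(1)))  # 0.0: N <= m, restart is preemptive
print(t_tilde(1.0, u, sharp(5)))  # 1.0: non-preemptive, T finite a.s.
\end{verbatim}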
\section{Hitting Probabilities and Recurrence}\label{sec:hiting_probabilities}
With the generating function for $\mathcal{T}$ given by Lemma \ref{lemma-pdf_FPUR_gf}, we can look at how the hitting probability of the FP process is changed by adding an arbitrary restart mechanism. We simply evaluate the generating function at $z=1$, which recovers exactly $\mathcal{E}_\mathcal{T} = \sum_{n\in\mathbb{N}}t(n)$.
\begin{lemma}\label{lemma-hit_t}
The hitting probability of the FPUR process is given by
\begin{equation*}
\mathcal{E}_\mathcal{T} = \frac{\mathcal{E}_\mathcal{U} - \sum_{n=0}^\infty u(n)R(n)}{1 - \mathcal{E}_\mathcal{R}(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)R(n)},
\end{equation*}
when $d \coloneqq 1 - \mathcal{E}_\mathcal{R}(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)R(n)\neq0$. Otherwise $\mathcal{E}_\mathcal{T}=0$.
\end{lemma}
\begin{proof}
The formula itself is an immediate result of Lemma \ref{lemma-pdf_FPUR_gf}, so we merely address the circumstances under which $d$ equals $0$ and show that $\mathcal{E}_\mathcal{T}=0$ in that case. Setting the denominator to zero gives us
\begin{align*}
0 & = 1 - \mathcal{E}_\mathcal{R}(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)R(n) \\
1-\mathcal{E}_\mathcal{R} & = \sum_{n=0}^\infty u(n)R(n) - \mathcal{E}_\mathcal{U}\mathcal{E}_\mathcal{R} \\
1-\mathcal{E}_\mathcal{R} & = \sum_{n=0}^\infty u(n)[R(n) - \mathcal{E}_\mathcal{R}].
\end{align*}
The left-hand side is clearly nonnegative, and the right-hand side is clearly nonpositive, which indicates that $d\ge0$, with equality only when both of the following conditions are met:
\begin{itemize}
\item $\mathcal{E}_\mathcal{R} = 1$, and
\item $R(n)=\mathcal{E}_\mathcal{R}$ for all $n$ in the support of $u(n)$.
\end{itemize}
Put plainly, this is the case in which the restart is assured and must occur with probability 1 before the underlying process has any chance to reach the target set, which is the definition of preemptive restart. Notice that $d>0$ whenever $\mathcal{E}_\mathcal{R}<1$.
\end{proof}
This lemma indicates that, when restart is preemptive, the hitting probability of the FPUR process becomes 0, exactly as one might expect. In the case of non-preemptive restart, however, we can say a bit more.
\begin{theorem}\label{theo-T_iff_U_or_R}
For a FPUR process with non-preemptive restart, $\mathcal{E}_\mathcal{T}=1$ iff at least one of $\mathcal{E}_\mathcal{U}$ and $\mathcal{E}_\mathcal{R}$ is $1$.
\end{theorem}
\begin{proof}
First, suppose that $\mathcal{E}_\mathcal{T}=1$. Since the restart is non-preemptive, the denominator $d$ from Lemma \ref{lemma-hit_t} is nonzero, so we may multiply both sides by it to obtain the following sequence of equalities.
\begin{align*}
1 - \mathcal{E}_\mathcal{R}(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)R(n) & = \mathcal{E}_\mathcal{U} - \sum_{n=0}^\infty u(n)R(n) \\
1 - \mathcal{E}_\mathcal{R}(1-\mathcal{E}_\mathcal{U}) & = \mathcal{E}_\mathcal{U} \\
1 - \mathcal{E}_\mathcal{R} - \mathcal{E}_\mathcal{U} + \mathcal{E}_\mathcal{R}\mathcal{E}_\mathcal{U} & = 0 \\
(1 - \mathcal{E}_\mathcal{R})(1 - \mathcal{E}_\mathcal{U}) & = 0
\end{align*}
Thus we must have at least one of $\mathcal{E}_\mathcal{R}$ and $\mathcal{E}_\mathcal{U}$ equal to 1. \\
Next, we have two cases:
\begin{itemize}
\item Suppose $\mathcal{E}_\mathcal{R} = 1$. \\ Then we have $\mathcal{E}_\mathcal{T} = \frac{\mathcal{E}_\mathcal{U} - \sum_{n=0}^\infty u(n)R(n)}{1 - 1\cdot(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n)R(n)} = \frac{\mathcal{E}_\mathcal{U} - \sum_{n=0}^\infty u(n)R(n)}{\mathcal{E}_\mathcal{U} - \sum_{n=0}^\infty u(n)R(n)}=1$.
\item Suppose $\mathcal{E}_\mathcal{U} = 1$. \\Then we have $\mathcal{E}_\mathcal{T} = \frac{1 - \sum_{n=0}^\infty u(n)R(n)}{1 - \mathcal{E}_\mathcal{R}(1-1) - \sum_{n=0}^\infty u(n)R(n)} = \frac{1 - \sum_{n=0}^\infty u(n)R(n)}{1 - \sum_{n=0}^\infty u(n)R(n)}=1$.
\end{itemize}
\end{proof}
There are many consequences to this theorem, but we'll take a moment to note an important one.
\begin{proposition}\label{prop:rec}
Given an underlying discrete stochastic process, $\mathcal{W}_n$, and a restart mechanism with $\mathcal{E}_\mathcal{R}=1$, any terminal point that can be reached in finite time by $\mathcal{W}_n$ becomes recurrent for $\mathcal{W}_n^*$, provided the restart is non-preemptive.
\end{proposition}
This proposition highlights the value of our geometric restart mechanism. Since the PGF for $\mathcal{R}$ is $\tilde r(z) = \frac{\rho z}{1-(1-\rho)z}$, we can immediately check that $\mathcal{E}_\mathcal{R} = \tilde r(1) = 1$. Since geometric restart is furthermore non-preemptive as discussed at the end of Section \ref{sec:PGF_for_T}, Proposition \ref{prop:rec} tells us that the FPUR is recurrent for every state it can reach in finite time, even when the underlying FP process is not!
\section{Hitting Times}\label{sec:hitting_times}
While understanding the hitting probability is a critical step in analyzing the first passage statistics of a stochastic process, our goal is often to compute the expected hitting time. One option is to differentiate the expression from Lemma \ref{lemma-pdf_FPUR_gf} and then evaluate at $z=1$, which gives expressions for $\mathbb{E}[\mathcal{T}]$ that are equivalent to those given in Section \ref{sec:definitions}. Depending on the complexity of $\tilde r(z)$, $\tilde u(z)$, $r(n)$ and $u(n)$, however, evaluating the derivative might be the easier approach. Any of these formulations will permit an extension to Proposition \ref{prop:rec}.
\begin{proposition}\label{prop:rec2}
Given an underlying discrete stochastic process, $\mathcal{W}_n$, and a restart mechanism with $\mathbb{E}[\mathcal{R}]<\infty$, any terminal point that can be reached in finite time by $\mathcal{W}_n$ becomes positive recurrent for $\mathcal{W}_n^*$, provided the restart is non-preemptive.
\end{proposition}
\begin{proof}
If we suppose that $\mathbb{E}[\mathcal{R}]<\infty$, then clearly $\mathcal{E}_\mathcal{R} = 1$ and we have recurrence by Proposition \ref{prop:rec}. To show positive recurrence, take equation (\ref{eq:min_formula}): $\mathbb{E}[\mathcal{T}] = \frac{\mathbb{E}[\mathcal{U} \wedge \mathcal{R}]}{1-p_r}$. So long as $\mathbb{E}[\mathcal{R}]$ is finite and $p_r<1$, we have $\mathbb{E}[\mathcal{U}\wedge\mathcal{R}]\le\mathbb{E}[\mathcal{R}]<\infty$ and thus $\mathbb{E}[\mathcal{T}]<\infty$.
\end{proof}
In general, it is not easy to compute expressions for the hitting time. Our two restarts from Section \ref{sec:PGF_for_T}, however, do allow for relatively simple formulations.
\subsection{Geometric restart} \label{sec-geom_formula}
Substituting $r(n) = \rho(1-\rho)^{n-1}$ for $n\ge1$ and $\tilde r(z) = \frac{\rho z}{1 - (1-\rho)z}$ into the formula from Lemma \ref{lemma-pdf_FPUR_gf} gives us the following PGF for $\mathcal{T}$:
\begin{equation*}
\tilde t(z) = \frac{\tilde u((1-\rho)z)}{1-\frac{\rho z}{1-(1-\rho)z}\left(1 - \tilde u((1-\rho)z)\right)}.
\end{equation*}
Taking the derivative with respect to $z$ and then evaluating at $z=1$ gives us the formula for the mean hitting time below.
\begin{equation}\label{eq:hitting_time-geom}
\mathbb{E}[\mathcal{T}] = \frac{1 - \tilde u(1-\rho)}{\rho \tilde u(1-\rho)}
\end{equation}
It's worth taking a moment to comment on how wonderful this expression is. It's not only concise, it also allows us to compute $\mathbb{E}[\mathcal{T}]$ with only the PGF for $\mathcal{U}$ \textemdash no need for taking further derivatives or computing any partial sums of the probability mass function.
We can also examine both of the limiting cases for $\rho$. Using L'H\^{o}pital's Rule to take $\rho\to0$, we find that $\mathbb{E}[\mathcal{T}]\to\mathbb{E}[\mathcal{U}]$. This matches nicely with our earlier interpretation: the $\rho\to0$ case should correspond to having no restart at all. In the other direction, taking $\rho\to1$ gives $\mathbb{E}[\mathcal{T}]\to\frac{1-u(0)}{u(0)}$, except that we assumed $u(0)=\mathbb{P}(\mathcal{U}=0)=0$ as discussed in Section \ref{sec:introduction}. Thus, we have $\mathbb{E}[\mathcal{T}]\to\infty$ and restart is preemptive. We note that in \cite{bonomo2021first} Bonomo and Pal derive a similar result. It differs from ours slightly, but only as a result of the choice of strict inequality in (\ref{eq:recursive_definition}) and our subsequent parameterization of the geometric distribution starting at 1 instead of 0.
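To spell out the $\rho\to0$ limit above: when $\tilde u(1)=\mathcal{E}_\mathcal{U}=1$, both the numerator and the denominator of (\ref{eq:hitting_time-geom}) vanish as $\rho\to0$, and L'H\^{o}pital's Rule gives
\begin{equation*}
\lim_{\rho\to0}\frac{1-\tilde u(1-\rho)}{\rho\,\tilde u(1-\rho)} = \lim_{\rho\to0}\frac{\tilde u'(1-\rho)}{\tilde u(1-\rho)-\rho\,\tilde u'(1-\rho)} = \frac{\tilde u'(1)}{\tilde u(1)} = \mathbb{E}[\mathcal{U}].
\end{equation*}
When $\mathcal{E}_\mathcal{U}<1$, both sides are infinite, so the interpretation persists.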
Before moving to the sharp restart, we recall that geometric restart has $\mathbb{E}[\mathcal{R}]=\frac{1}{\rho}<\infty$, and thus a FPUR equipped with geometrically distributed restart is in fact positive recurrent for any state the underlying process can reach in finite time by Proposition \ref{prop:rec2}. The sharp restart has the same property, but as previously noted can be preemptive, which we see in the next section.
\subsection{Sharp restart} \label{sec-sharp_formula}
Just as for the geometric restart, we insert our probability mass function and PGF into the formula from Lemma \ref{lemma-pdf_FPUR_gf}. With $r(n)=\delta_{n,N}$ and $\tilde r(z) = z^N$, we have
\begin{align*}
\tilde t(z) & = \frac{\tilde u(z) - \sum_{n=0}^\infty z^nu(n)\mathbbm{1}_{[N,\infty)}(n)}{1-z^N(1-\mathcal{E}_\mathcal{U}) - \sum_{n=0}^\infty u(n) z^N \mathbbm{1}_{[N,\infty)}(n)} \\
& = \frac{\tilde u(z) - \sum_{n=N}^\infty z^nu(n)}{1-z^N(1-\mathcal{E}_\mathcal{U}) - z^N\sum_{n=N}^\infty u(n)} \\
& = \frac{\sum_{n=0}^{N-1}z^nu(n)}{1 - z^N\left( 1 - \sum_{n=0}^{N-1}u(n) \right)}
\end{align*}
Differentiating and evaluating at $z=1$ gives us the mean first passage time (which we can recognize as (\ref{eq:min_formula}) or (\ref{eq:num_restart_formula}) with $p_r = 1-U(N-1)$).
\begin{equation}\label{eq:hitting_time-sharp}
\mathbb{E}[\mathcal{T}] = \frac{\sum_{n=0}^{N-1}nu(n) + N(1 - U(N-1))}{U(N-1)}
\end{equation}
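This differentiation is a short computation. Writing $\tilde t(z) = P(z)/Q(z)$ with $P(z)=\sum_{n=0}^{N-1}z^nu(n)$ and $Q(z) = 1-z^N\left(1-U(N-1)\right)$, we have $P(1)=Q(1)=U(N-1)$, $P'(1)=\sum_{n=0}^{N-1}nu(n)$ and $Q'(1)=-N\left(1-U(N-1)\right)$, so that, in the non-preemptive case (where $\tilde t(1)=1$ since $\mathcal{E}_\mathcal{R}=1$),
\begin{equation*}
\mathbb{E}[\mathcal{T}] = \tilde t'(1) = \frac{P'(1)Q(1)-P(1)Q'(1)}{Q(1)^2} = \frac{\sum_{n=0}^{N-1}nu(n) + N\left(1-U(N-1)\right)}{U(N-1)}.
\end{equation*}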
Just as with the geometric restart, we're interested in the cases corresponding to instantaneous restart and no restart, in this case $N=1$ and $N\to\infty$, respectively. For the no restart case, we can see $\lim_{N\to\infty}\mathbb{E}[\mathcal{T}](N)=\mathbb{E}[\mathcal{U}]$, as expected. On the other hand $\mathbb{E}[\mathcal{T}]$ is undefined when $N=1$ since $U(0)$ is 0 by assumption. In that case, we have $\mathbb{P}(\mathcal{R}\le\mathcal{U})=1$, and restart is preemptive.
Also, since $\mathbb{E}[\mathcal{R}] = N < \infty$, the sharp restart also satisfies Proposition \ref{prop:rec2} and guarantees positive recurrence when restart is non-preemptive, i.e. $N$ must be strictly larger than the smallest value in the support of $u(n)$, or equivalently $U(N-1)>0$.
\section{When is $\mathbb{E}[\mathcal{T}] < \mathbb{E}[\mathcal{U}]$?}\label{sec:ET<EU}
In the context of applications, we are often concerned with whether adding a restart mechanism will speed up or slow down the underlying process, i.e., reduce or increase the expected arrival time in the target set. We say that a restart is \underline{beneficial} if the FPUR equipped with this mechanism has a smaller mean first passage time than the underlying FP process. Our goal then is to characterize the circumstances under which a restart is beneficial.
One simple way of determining whether a restart is beneficial in some parameter range is to examine the sign of $\mathbb{E}[\mathcal{T}]-\mathbb{E}[\mathcal{U}]$. Clearly, when this difference is negative, the restart is beneficial. Unfortunately, this expression doesn't admit any concise or easily computable form to check for arbitrary underlying and restart processes. Below, however, we do consider a different kind of criterion for the geometric restart and a useful special case for the sharp.
\subsection{Geometric restart and the derivative condition}
Let $\mathcal{R}\sim\text{Geom}(\rho)$. Then taking limits as $\rho$ tends to 0 and 1 of $\mathbb{E}[\mathcal{T}]=\frac{1-\tilde u(1-\rho)}{\rho\tilde u(1-\rho)}$ yields
\begin{align*}
\lim_{\rho\to0}\mathbb{E}[\mathcal{T}] & = \mathbb{E}[\mathcal{U}] \\
\lim_{\rho\to1}\mathbb{E}[\mathcal{T}] & = \infty,
\end{align*}
which we already noted in Section \ref{sec-geom_formula}. Thus, we can say that $\mathbb{E}[\mathcal{T}]$ starts at $\mathbb{E}[\mathcal{U}]$ with $\rho = 0$ and tends to infinity as $\rho\to1$, perhaps increasing non-monotonically. Differentiating $\mathbb{E}[\mathcal{T}](\rho)$ with respect to $\rho$ and taking the limit as $\rho\to0$ produces the expression $D\coloneqq\frac{2\tilde u'(1)^2-\tilde u''(1)}{2}$. If $D<0$, then there clearly exists some interval of $\rho$ values (specifically $(0,\hat\rho)$ for some $\hat\rho\in(0,1)$) with $\mathbb{E}[\mathcal{T}]<\mathbb{E}[\mathcal{U}]$. Note that $\mathbb{E}[\mathcal{T}]$ may not be convex in $\rho$, so the restart could also be beneficial on some of $(\hat\rho,1)$, and $D\ge0$ does not guarantee that restart is not beneficial for some interval. That is, $D<0$ is sufficient, but not necessary, to ensure an interval where $\mathbb{E}[\mathcal{T}]<\mathbb{E}[\mathcal{U}]$. In the event that we can demonstrate that $\mathbb{E}[\mathcal{T}](\rho)$ is convex, we know that $D<0$ iff there exists some $\hat\rho\in(0,1)$ such that restart is beneficial for $\rho\in(0,\hat\rho)$.
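To see where $D$ comes from, assume $\tilde u(1)=1$ and expand $\tilde u(1-\rho) = 1 - \tilde u'(1)\rho + \frac{\tilde u''(1)}{2}\rho^2 + O(\rho^3)$ in (\ref{eq:hitting_time-geom}); this gives
\begin{equation*}
\mathbb{E}[\mathcal{T}](\rho) = \tilde u'(1) + \left(\tilde u'(1)^2 - \frac{\tilde u''(1)}{2}\right)\rho + O(\rho^2),
\end{equation*}
so $D$ is precisely the initial slope of $\mathbb{E}[\mathcal{T}](\rho)$ at $\rho=0$.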
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/counter2.png}
\caption{\label{fig:counter_Dneg} This plot shows $\mathbb{E}[\mathcal{T}](\rho)$ for selected values of $\rho$ between .01 and .841. $D<0$ is sufficient to guarantee an interval where $\mathbb{E}[\mathcal{T}]<\mathbb{E}[\mathcal{U}]$. Simulated values were averaged over 2000 trials, and we include 99\% confidence intervals.}
\end{figure}\\
\textbf{Example with $D<0$ (Figure \ref{fig:counter_Dneg})}\\
Consider the case of an underlying process that finishes at either time $1$ with probability $\frac{3}{4}$ or $20$ with probability $\frac{1}{4}$. The PGF for $\mathcal{U}$ is $\tilde u(z) = \frac{1}{4}\left(3z+z^{20}\right)$, so that $\mathbb{E}[\mathcal{U}]=5\frac{3}{4}$. We then compute $D=-\frac{231}{16}<0$, so there must be some interval starting at 0 for which $\mathbb{E}[\mathcal{T}]<\mathbb{E}[\mathcal{U}].$
\textbf{Example with $D>0$ (Figure \ref{fig:counter_Dpos})}\\
Now consider the preceding case with the probabilities reversed. The PGF for $\mathcal{U}$ is $\tilde u(z) = \frac{1}{4}\left(z+3z^{20}\right)$, and clearly $\mathbb{E}[\mathcal{U}]=15\frac{1}{4}$. We then compute $D=\frac{2\left(\frac{61}{4}\right)^2 - \frac{1140}{4}}{2}=\frac{1441}{16}>0$. However, since $\mathbb{E}[\mathcal{T}](\rho)$ is not convex in $\rho$, $D>0$ does not rule out beneficial restart: indeed, we see in Figure \ref{fig:counter_Dpos} that there is a region of beneficial restart (albeit not starting at 0).\\
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/counter.png}
\caption{\label{fig:counter_Dpos} This plot shows $\mathbb{E}[\mathcal{T}](\rho)$ for selected values of $\rho$ between .01 and .841. Despite $D>0$, we can clearly see an interval where $\mathbb{E}[\mathcal{T}]<\mathbb{E}[\mathcal{U}]$. Simulated values were averaged over 2000 trials, and we include 99\% confidence intervals.}
\end{figure}
\subsection{Sharp restart and piecewise linear behavior}
We have no analogous trick for the sharp restart, but there is a special case of the underlying process worth mentioning. If we reconsider (\ref{eq:hitting_time-sharp}), we see that $N$ appears both as a limit in the sums of $u(n)$ and multiplied by $\frac{1-U(N-1)}{U(N-1)}$, which is the expected number of restarts.
\begin{equation*}
\mathbb{E}[\mathcal{T}](N) = \frac{1 - U(N-1)}{U(N-1)}N + \frac{\sum_{n=0}^{N-1}nu(n)}{U(N-1)}
\end{equation*}
Importantly, if there are gaps in the support of $u(n)$, the cumulative mass function and the index-weighted sum are both constant across that gap. This implies that $\mathbb{E}[\mathcal{T}](N)$ is linear in $N$ across gaps in the support of the probability mass function, and it has a strictly positive slope, which decreases monotonically as $N\to\infty$. This may seem to be very specific, but it's not uncommon for gaps to exist in the support of $u(n)$. Even the simple example of the FP time to 0 of a nearest-neighbor random walk on the integer lattice has this property. If the walk begins at 1, then $\mathcal{U}$ is almost surely odd. The implication above indicates that $\mathbb{E}[\mathcal{T}](N)$ would always be larger for $N=2k+1$ than for $N=2k$, $k\in\mathbb{Z}^+.$
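Concretely, since $u(2k)=0$, the quantities $U(N-1)$ and $\sum_{n=0}^{N-1}nu(n)$ in (\ref{eq:hitting_time-sharp}) agree for $N=2k$ and $N=2k+1$, so
\begin{equation*}
\mathbb{E}[\mathcal{T}](2k+1) - \mathbb{E}[\mathcal{T}](2k) = \frac{1-U(2k-1)}{U(2k-1)} > 0
\end{equation*}
whenever $0<U(2k-1)<1$: restarting at the even time $2k$ always beats waiting one extra, necessarily idle, step.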
\section{The ``Cycle Trap"}\label{sec:cycle_trap}
Now we introduce an example that serves quite well to explore the concepts of the previous sections. The process we name the ``cycle trap" has a finite state space labeled by the integers from $-L$ to $M$, where $L,M\in\mathbb{Z}^+$. The process begins at the vertex labeled 0 and terminates at vertex $-L$, with movement between the vertices largely deterministic. At 0, the process can move to vertex $-1$ with probability $p$ or to 1 with probability $q \coloneqq 1-p$. If it moves to $-1$, then it continues moving to $-L$ one vertex at a time with probability 1 at each step. If it instead moves to 1, then it continues similarly to vertex $M$ before cycling back to 0, as seen in Figure \ref{fig:cycle_trap_diagram}.
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/diagram-20210811.png}
\caption{\label{fig:cycle_trap_diagram} The Cycle Trap}
\end{figure}
The above formulation admits a PGF for the first passage time to $-L$ of $\tilde u(z) = \frac{pz^L}{1-qz^{M+1}}$. Provided that $q\neq1$, evaluating $\tilde u$ and its derivative at $z=1$ yields a hitting probability of $\mathcal{E}_\mathcal{U} = 1$ and a mean first passage time of $\mathbb{E}[\mathcal{U}] = L + \frac{q}{p}(M+1)$. After deriving a few formulas, adjusting these parameters ($p$, $L$ and $M$) will allow us to explore some nuances of beneficial restart. Just as in Section \ref{sec:ET<EU}, we shall focus on the geometric and sharp distributions.
\subsection{Geometric Restart}
Returning to equation (\ref{eq:hitting_time-geom}), we can plug in $\tilde u(z) = \frac{pz^L}{1 - qz^{M+1}}$ to find
\begin{equation*}
\mathbb{E}[\mathcal{T}] = \frac{1 - p(1-\rho)^L - q(1-\rho)^{M+1}}{p\rho(1-\rho)^L},
\end{equation*}
with $\rho$ as our rate parameter for the restart. We first note that L'H\^{o}pital's Rule confirms $\lim_{\rho\to0}\mathbb{E}[\mathcal{T}]=L+\frac{q}{p}(M+1)=\mathbb{E}[\mathcal{U}]$. Since $u(0)=0$ and $\lim_{\rho\to1}\mathbb{E}[\mathcal{T}]=\infty$, we can use the derivative criterion to check for beneficial restart. We also show that $\mathbb{E}[\mathcal{T}]$ is convex in $\rho$ by rewriting
\small
\begin{align*}
\mathbb{E}[\mathcal{T}] & = \frac{p(1-(1-\rho)^L) + q(1-(1-\rho)^{M+1})}{p \rho (1-\rho)^L} \\
& = (1-\rho)^{-L} + (1-\rho)^{-L+1} + \ldots + (1-\rho)^{-1} \\
& \quad + \frac{q}{p}\left((1-\rho)^{-L} + (1-\rho)^{-L+1} + \ldots + (1-\rho)^{M-L}\right).
\end{align*}
\normalsize
Here we see that $\mathbb{E}[\mathcal{T}]$ is a positive linear combination of integer powers of $(1-\rho)$ and is thus convex in $\rho$. This observation indicates that the derivative criterion is biconditional: the restart will have some beneficial interval of $\rho$ values if and only if $D<0$.
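Each summand is indeed convex: for any integer $j$ and $\rho\in(0,1)$,
\begin{equation*}
\frac{d^2}{d\rho^2}(1-\rho)^{j} = j(j-1)(1-\rho)^{j-2} \ge 0,
\end{equation*}
since $j(j-1)\ge0$ for every integer $j$, with strict positivity for the negative powers appearing above.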
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/cycletrap_p_bound.png}
\caption{\label{fig:cycletrap_p_bound}For selected values of $L$, the upper bounds for the bias as a function of $M$ are given.}
\end{figure}
We compute that the condition $D<0$ is equivalent to $(L+1)L+\frac{q}{p}(M+1)(2L-M)<0$, which is true exactly when $p<p^*\coloneqq\frac{(M+1)(M-2L)}{(M+1)(M-2L)+L(L+1)}$, as shown in Figure \ref{fig:cycletrap_p_bound}, where we recall that $p$ is the probability that the process moves to exit the trap. This expression is clearly equal to 0 when $M=2L$ and increases monotonically towards 1 as $M/L$ gets large. When $\frac{M}{L}>2$, this upper bound is strictly greater than 0, and there exists a range of values for $p$ such that there can be beneficial restart. Examples of both the $D<0$ and $D>0$ cases can be seen in Figure \ref{fig:cycletrap}.
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/cycletrap_double.png}
\caption{\label{fig:cycletrap} Two examples of the cycletrap with geometric restart. The $D<0$ case has parameter values $(p,L,M)=(\frac{3}{4},2,14)$, and the $D>0$ case has parameter values selected to produce the same mean hitting time for the underlying process, with $(p,L,M) = (\frac{1}{2},2,4)$. Both blow up as $\rho\to1$. The simulated values were averaged over 500 trials, and we include 99\% confidence intervals.}
\end{figure}
Speaking broadly, this condition means that $M$ must be more than twice $L$ for beneficial restart to be possible, but as the bias toward the exit increases (i.e. for larger values of $p$), greater ratios of $M$ to $L$ are required. On the other hand, for any positive value of $p$, the ``cost" of falling into the trap denoted by $\frac{M}{L}$ can be increased sufficiently to allow for beneficial restart.
The sharp restart is less restrictive than the geometric, requiring only that $M>L$ with no dependence on $p$, as we see in the next section.
\subsection{Sharp Restart}
In order to calculate some of the terms in equation (\ref{eq:hitting_time-sharp}), we must first obtain the probability mass function by extracting the coefficients of $\tilde u(z)$. Then we can compute the cumulative mass function, $U(n)$, by summing these coefficients. Thankfully, in the case of the cycle trap, this is straightforward, as we can write $\tilde u(z) = pz^L\sum_{k=0}^\infty \left(qz^{M+1}\right)^k$, which gives $u(n)=pq^{\frac{n-L}{M+1}}$ for $n \equiv L \Mod{M+1}$ and 0 otherwise. Computing the partial and index-weighted partial sums results in
\begin{equation*}
\mathbb{E}[\mathcal{T}] = L + \frac{q^{K+1}N + \frac{q}{p}(M+1)\left(Kq^{K+1}-(K+1)q^K + 1\right)}{1-q^{K+1}},
\end{equation*}
where $K = \left\lfloor \frac{N-1-L}{M+1} \right\rfloor$. We immediately notice when looking at the plots of $\mathbb{E}[\mathcal{T}](N)$ in Figures \ref{fig:cyc_sharp_1} and \ref{fig:cyc_sharp_2} the piecewise linear behavior described in Section \ref{sec:ET<EU}, with $\mathbb{E}[\mathcal{T}]$ increasing linearly over gaps in the support of $u(n)$.
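As a quick sanity check of this expression, take $(p,L,M)=(\frac12,1,1)$ and $N=3$, so that $K=\left\lfloor\frac{3-1-1}{2}\right\rfloor=0$ and
\begin{equation*}
\mathbb{E}[\mathcal{T}] = 1 + \frac{\frac12\cdot3 + \frac{q}{p}\cdot2\cdot\left(0\cdot\tfrac12-1+1\right)}{1-\frac12} = 4,
\end{equation*}
which matches (\ref{eq:hitting_time-sharp}) evaluated directly: here $u(1)=U(2)=\frac12$, so $\mathbb{E}[\mathcal{T}]=\left(\frac12+3\cdot\frac12\right)/\frac12=4$.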
We also give a criterion that determines the relationship between $\mathbb{E}[\mathcal{T}]$ and $\mathbb{E}[\mathcal{U}]$:
\begin{itemize}
\item When $L \ge M$, we have $\mathbb{E}[\mathcal{T}] \ge \mathbb{E}[\mathcal{U}]$ for all values of $N$. In particular, $L>M$ implies $\mathbb{E}[\mathcal{T}] > \mathbb{E}[\mathcal{U}]$.
\item When $L < M$, beginning at $N=L+1$, every $M+1$ values of $N$ will have first $M-L$ values with $\mathbb{E}[\mathcal{T}] < \mathbb{E}[\mathcal{U}]$, then one value of $N$ with $\mathbb{E}[\mathcal{T}] = \mathbb{E}[\mathcal{U}]$ followed by $L$ values of $N$ with $\mathbb{E}[\mathcal{T}] > \mathbb{E}[\mathcal{U}]$.
\end{itemize}
This can be verified directly by subtracting $\mathbb{E}[\mathcal{U}]$ from $\mathbb{E}[\mathcal{T}]$.
\scriptsize
\begin{align*}
\mathbb{E}[\mathcal{T}] - \mathbb{E}[\mathcal{U}] & = L + \frac{q^{K+1}N + \frac{q}{p}(M+1)\left(Kq^{K+1}-(K+1)q^K + 1\right)}{1-q^{K+1}} \\
& \quad - \left(L + \frac{q}{p}(M+1)\right) \\
& = \frac{q^{K+1}}{1-q^{K+1}}\left(N-(M+1)(K+1)\right) \\
& = \frac{q^{K+1}(M+1)}{1-q^{K+1}}\left(\frac{N-1-M}{M+1}-\left\lfloor{\frac{N-1-L}{M+1}}\right\rfloor\right)
\end{align*}
\normalsize
The last line establishes the previous dichotomy. When $L \ge M$, we have $\mathbb{E}[\mathcal{T}]\ge\mathbb{E}[\mathcal{U}]$ with equality only when $L=M$ and $N\equiv 0 \Mod{M+1}$. Beneficial restart occurs only when $L < M$, for values of $N$ satisfying $a(M+1) + (L+1) \le N < (a+1)(M+1)$ for $a\in\mathbb{N}$. It's worth mentioning, further, that there are no cases where $\mathbb{E}[\mathcal{T}]\le\mathbb{E}[\mathcal{U}]$ for all values of $N$: it is always true that $\mathbb{E}[\mathcal{T}](N)>\mathbb{E}[\mathcal{U}]$ for $N \equiv L \Mod{M+1}$.\\
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/cycletrap_sharp1.png}
\caption{\label{fig:cyc_sharp_1} The cycle trap with $(p,L,M) = (.25,7,5)$: Since $L> M$, we observe the behavior in which the hitting time decreases towards $\mathbb{E}[\mathcal{U}]$, never going below. The simulated values were averaged over 50000 trials.}
\end{figure}\\
\textbf{Example with no beneficial restart (Figure \ref{fig:cyc_sharp_1})}\\
To demonstrate the first kind of behavior, we pick parameter values $(p,L,M)=(.25,7,5)$, so that $L\ge M$. Looking at Figure \ref{fig:cyc_sharp_1}, we see a case where $\mathbb{E}[\mathcal{T}]>\mathbb{E}[\mathcal{U}]$ for all $N$.\\
\textbf{Example with beneficial restart (Figure \ref{fig:cyc_sharp_2})}\\
To demonstrate the second kind, we pick parameter values $(p,L,M)=(.25,5,10)$, so that $L<M$. The restart is clearly beneficial for certain values of $N$ and not for others.
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/cycletrap_sharp2.png}
\caption{\label{fig:cyc_sharp_2} The cycle trap with $(p,L,M) = (.25,5,10)$: Since $L< M$, we observe the behavior in which the hitting time moves back and forth across $\mathbb{E}[\mathcal{U}]$, with the difference decaying to 0. The simulated values were averaged over 50000 trials.}
\end{figure}\\
An interesting observation one might make after seeing these figures is that $\mathbb{E}[\mathcal{T}]$ increases across gaps in the support of $u(n)$, as we demonstrated in Section \ref{sec:ET<EU}, but appears to decrease at each value of $N$ that increases $K$, which are precisely those values of $N$ such that $u(N-1)>0$. To see that this is true, we can pick, for some $a\in\mathbb{Z}^+$,
\begin{align*}
N_1 = L+a(M+1), & \quad K_1 = a-1 \\
N_2 = L+1+a(M+1), & \quad K_2 = a.
\end{align*}
Notice that $N_1$ has been picked such that $u(N_1)>0$ and we may compute $\mathbb{E}[\mathcal{T}](N_1)-\mathbb{E}[\mathcal{T}](N_2)$.
After some algebra, we see that this difference is $\frac{pq^aL}{\left(1-q^a\right)\left(1-q^{a+1}\right)}+\frac{q^{a+1}}{1-q^{a+1}}M$, which is positive, meaning that the graph of $\mathbb{E}[\mathcal{T}](N)$ will always demonstrate this saw-tooth behavior, decreasing just after a value in the support and increasing otherwise. This is, however, not true for all underlying processes, as we will show in the next example.
\section{Biased random walk on $\mathbb{N}$}
We now turn our attention to a more interesting and classical example. One of the early major investigations into stochastic restart was \cite{evans2011diffusion}, in which Evans and Majumdar considered symmetric diffusion on the half-line with 0 as an absorbing boundary. This is a well-known problem in the study of first passage processes, and has an infinite mean hitting time. Evans and Majumdar show that introducing a restart mechanism can make this mean hitting time finite. More recently, Christophorov showed in \cite{christophorov2020peculiarities} that the discrete analogue of Evans and Majumdar's model has some interesting differences in behavior. In this section, we hope to build on the preceding works to discuss the case of the discrete biased random walk on $\mathbb{N}$.
\subsection{The PGF for the underlying process}
Just as we outlined in Section \ref{sec:PGF_for_T}, we want to first obtain the generating function for the FP process, in this case the biased random walk on $\mathbb{N}$ with 0 as its terminal state. The derivation of this equation is too long to include here, though the result has been known at least since \cite{feller1957introduction}. For a starting point $m\in\mathbb{Z}^+$, with $p\in(0,1)$ the probability of moving towards 0 and $q\coloneqq1-p$ its complement, we have the PGF for the first passage time to 0 below.
\begin{equation*}
\tilde u(z) = \left(\frac{1-\sqrt{1-4pqz^2}}{2qz}\right)^m
\end{equation*}
Using this expression, we can immediately check the hitting probability and mean hitting time of the FP process.
\begin{align*}
\mathcal{E}_\mathcal{U} & = \min\left(1,\frac{p}{q}\right)^m \\
\mathbb{E}[\mathcal{U}] & =
\begin{cases}
\frac{m}{p-q} & p>q \\
\infty & p \le q
\end{cases}
\end{align*}
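The hitting probability follows from evaluating $\tilde u$ at $z=1$: since $1-4pq=(p-q)^2$, we have $\sqrt{1-4pq}=|p-q|$, and
\begin{equation*}
\tilde u(1) = \left(\frac{1-|p-q|}{2q}\right)^m =
\begin{cases}
1 & p \ge q \\
\left(\frac{p}{q}\right)^m & p < q,
\end{cases}
\end{equation*}
using $1-(p-q)=2q$ in the first case and $1-(q-p)=2p$ in the second.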
Clearly, the case with $p=q$ has a hitting probability of 1, meaning that it will almost surely reach 0 in finite time, but a mean hitting time of infinity, even starting a single vertex away. This is the case studied with restart by Evans and Majumdar (in the continuous time paradigm), and by Christophorov (in the discrete).
By Sections \ref{sec:hiting_probabilities} and \ref{sec:hitting_times}, we know that the $q > p$ case is also dramatically changed by adding a restart mechanism. Provided that the restart is non-preemptive and that $\mathcal{E}_\mathcal{R} = 1$, we know that the hitting probability of the resultant FPUR is 1, that is, $\mathcal{E}_\mathcal{T} = 1$. Furthermore, so long as $\mathbb{E}[\mathcal{R}]<\infty$ (such as with the geometric restart) the mean hitting time also becomes finite. Thus, even for the ``misbehaved" case where the underlying process is biased away, adding a suitable restart mechanism means that 0 becomes positive recurrent. It then remains, just as before, to determine the conditions under which restart may be beneficial.
\subsection{When is $\mathbb{E}[\mathcal{T}] < \mathbb{E}[\mathcal{U}]?$}
As discussed in Section \ref{sec:ET<EU}, we often frame our questions about FPUR around whether the restart can reduce the mean hitting time. For $q\ge p$, equivalently $p \le \frac{1}{2}$, we have established that $\mathbb{E}[\mathcal{T}] < \mathbb{E}[\mathcal{U}]$ for any non-preemptive restart by the simple fact that $\mathbb{E}[\mathcal{T}]$ is finite and $\mathbb{E}[\mathcal{U}]$ is not. When $p > q$, however, things get more interesting.
\subsubsection{Geometric Restart}
Recall from Section \ref{sec:hitting_times} that for a geometrically distributed restart time with parameter $\rho\in(0,1)$, we can express the mean hitting time by
$\mathbb{E}[\mathcal{T}] = \frac{1 - \tilde u(1-\rho)}{\rho \tilde u(1-\rho)}$. This formula, combined with our expression for $\tilde u(z)$ allows us to offer an explicit expression for the mean hitting time of the FPUR.
\begin{equation*}
\mathbb{E}[\mathcal{T}] = \frac{\left( 2q(1-\rho)\right)^m - \left( 1 - \sqrt{1 - 4pq(1-\rho)^2} \right)^m }{\rho \left( 1 - \sqrt{1 - 4pq(1-\rho)^2} \right)^m}
\end{equation*}
Just as with the cycle trap, we can utilize the derivative criterion, but demonstrating convexity is not so simple. However, we can instead argue for a biconditional result another way. Defining $\xi = 2(1-\rho)$, we can write, for $p>q$,
\small
\begin{align*}
F(\xi) & = \rho(\mathbb{E}[\mathcal{T}]-\mathbb{E}[\mathcal{U}]) = \left(\frac{q\xi}{1-\sqrt{1-pq\xi^2}}\right)^m - \frac{(2-\xi)m}{2(p-q)} - 1 \\
& = \left(\frac{1+\sqrt{1-pq\xi^2}}{p\xi}\right)^m - \frac{(2-\xi)m}{2(p-q)} - 1.
\end{align*}
\normalsize
As $0 < \xi < 2$, we examine the boundary cases, finding $F(2) = \left(\frac{1+\sqrt{1-4pq}}{2p}\right)^m - 1 = 0$ (since $\sqrt{1-4pq}=p-q$ when $p>q$), and $\lim_{\xi\to0}F(\xi)=+\infty$. Computing the first and second derivative of $F$ with respect to $\xi$, we find $F'(2)=0$ and that $\text{sgn}(F''(\xi))$ can be reduced to $\text{sgn}(\psi(\xi))$, where $\psi(\xi)\coloneqq m\sqrt{1-pq\xi^2}+1-2pq\xi^2$. Since $\psi(\xi)$ is decreasing with $\psi(0) = m+1>0$, we have only two possibilities:
\begin{itemize}
\item $\psi(2)\ge0$, in which case $F''(\xi)>0$ for all $\xi\in(0,2)$, implying $F'(\xi)<0$ and $F(\xi)>0$. In this case, $\mathbb{E}[\mathcal{T}]>\mathbb{E}[\mathcal{U}]$ for all $\rho$ and restart is never beneficial.
\item $\psi(2)<0$, in which case there exists some inflection point $\xi_0\in(0,2)$ such that $F''(\xi)>0$ for all $\xi<\xi_0$ and $F''(\xi)<0$ for all $\xi>\xi_0$. Consequently, $F(\xi)$ attains a negative minimum at some $\xi_1<\xi_0$, and there exists a $\xi^*\in(0,\xi_1)$ such that $F(\xi)>0$ for $\xi\in(0,\xi^*)$ and $F(\xi)<0$ for $\xi\in(\xi^*,2)$. This is precisely the case where there exists some $\rho^*$ such that restart is beneficial for $\rho\in(0,\rho^*)$.
\end{itemize}
We can examine the inequality $\psi(2)<0$ to find that there is a region of beneficial restart when $m\sqrt{1-4pq}+1-8pq<0$, which is equivalent to $m<m^*\coloneqq\frac{8p(1-p)-1}{2p-1}$. In particular, if $p\ge\frac{3}{4}$, then $m^*\le1$ and geometric restart is not beneficial for any value of $\rho$. Some algebra allows us to rewrite this criterion as $p<p^*\coloneqq\frac{4-m+\sqrt{m^2+8}}{8}$, which is equivalent to $D<0$ from the derivative test. Note that this expression is decreasing in $m$ and asymptotically approaches $\frac{1}{2}$, as we can see in Figure \ref{fig:brwn_p_bound}. In other words, if the process begins further from 0, the bias must be smaller for restart to be beneficial. Framed differently, for even a very small bias towards 0, starting sufficiently far away means that restart cannot be beneficial.
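The algebra in question: using $\sqrt{1-4pq}=2p-1$ for $p>\frac12$, the condition $\psi(2)<0$ becomes $m(2p-1)+1-8p(1-p)<0$, i.e.
\begin{equation*}
8p^2+(2m-8)p+(1-m)<0,
\end{equation*}
and the larger root of this quadratic in $p$ is $\frac{(4-m)+\sqrt{(m-4)^2-8(1-m)}}{8}=\frac{4-m+\sqrt{m^2+8}}{8}=p^*$.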
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/BRWN_p_bound.png}
\caption{\label{fig:brwn_p_bound} For a given starting point $m$, only values of $p$ below the bound $p^*$ admit a range of $\rho$ values for which restart is beneficial.}
\end{figure}\\
\subsubsection{Sharp Restart}
Unfortunately, the sharp restart case is not so easily tractable here as it was for the cycle trap, though we can make a couple simple observations. First, restart is preemptive for $N \le m$. Second, $u(n)$ will only be nonzero for values of $n$ with the same parity as $m$. Thus, $\mathbb{E}[\mathcal{T}]$ will be increasing at least every other value of $N$.
From equation (\ref{eq:hitting_time-sharp}), we need to compute $U(n) = \sum_{i=0}^nu(i)$ and $U_w(n) = \sum_{i=0}^niu(i)$, the forms of which can be seen below.
\begin{align*}
U(n) & = mp^m\sum_{k=0}^{\lfloor\frac{n-m}{2}\rfloor} \frac{(pq)^k}{m+2k}{m+2k \choose k} \\
U_w(n) & = mp^m\sum_{k=0}^{\lfloor\frac{n-m}{2}\rfloor} (pq)^k{m+2k \choose k}
\end{align*}
We have not found a way to write these sums explicitly, but we can compute them numerically to plot the mean FP time of the FPUR as a function of $N$. In so doing, we observe some apparently distinct behaviors as shown in Figures \ref{fig:brwn_sharp_1}, \ref{fig:brwn_sharp_2} and \ref{fig:brwn_sharp_3}.
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/halfline_sharp1.png}
\caption{\label{fig:brwn_sharp_1}For parameter values $(p,m)=(.8,3)$, we see that $\mathbb{E}[\mathcal{T}]$ starts high and then decreases towards $\mathbb{E}[\mathcal{U}]$. In this case, numerical solutions suggest that $\mathbb{E}[\mathcal{T}]>\mathbb{E}[\mathcal{U}]$ for all $N$, but we weren't able to verify that. This is clearly reminiscent of the cycle trap with $L>M$.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/halfline_sharp2.png}
\caption{\label{fig:brwn_sharp_2}By decreasing $p$ to $.65$, we see that $\mathbb{E}[\mathcal{T}]$ still begins above $\mathbb{E}[\mathcal{U}]$, but passes below before increasing towards the limiting value. Changing the value of $m$ seems only to change the steepness of this initial drop. The sawtooth behavior appears to continue indefinitely, but this has not been confirmed.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=8.6cm]{images/halfline_sharp_inset_3.png}
\caption{\label{fig:brwn_sharp_3}As $p$ gets closer to .5, in this case $(p,m)=(.54,3)$, we can observe an interesting deviation from the behavior of the cycle trap: as $N$ increases, the sawtooth behavior eventually disappears, and $\mathbb{E}[\mathcal{T}]$ increases monotonically. Whether this behavior also exists in the previous example for some larger value of $N$ is unknown, but it is a distinct departure from what we saw with the cycle trap.}
\end{figure}
\section{Conclusion}
In this paper, we have presented some analysis of stochastic restart on FP processes in the case of discrete time. In particular, we have examined two restart distributions that permit some explicit formulas: the sharp and the geometric. We note that whenever the restart is non-preemptive and occurs almost surely in finite time, the process with restart reaches its target set almost surely in finite time, even if that isn't true for the underlying process. If we further assume that the restart distribution has finite mean, then the mean FP time of the process with restart is finite as well. This provides us with a method of forcing any process to be positive recurrent (with the possible expense of extending its mean hitting time): just add a non-preemptive restart mechanism with finite mean.
We then explored some of these mechanisms in the context of two examples, introducing the cycle trap stochastic process and examining the well-studied process of the biased random walk on the half-line. In particular, we showed that the addition of a restart mechanism can have a profound and surprising effect on the mean hitting time of a stochastic process.
\nocite{*}
\section*{References}
\section{Introduction}\label{Intro}
We study some topological aspects of symplectomorphism groups along the lines of \cite{Abr98, AM99, McDacs, LP04, ALP, AGK09, Buse11, LL16, LLW16, ALLP}, etc. We will address the topological behavior of symplectomorphism groups as the form $\omega$ varies within the symplectic cone. This is a follow-up note to \cite{BL1}, which formulates the symplectic stability and symplectic isotopy conjectures for arbitrary non-minimal ruled surfaces, and establishes them for the one-point blowups of minimal irrational ruled surfaces. Recall that the stability conjecture informally states that the reduced symplectic cone has a chamber partition such that the homotopy type of the symplectomorphism group is invariant within each chamber.
As will be explained in Section \ref{stabproof}, it is not straightforward to extend the result of \cite{BL1} to the multi-point setting. The key is to control the degeneration of embedded $J$-holomorphic curves in certain homology classes, so that the classes of their simple representatives form a basis for the symplectic cone. This is the reason that we only partially establish this conjecture for multi-point blowups.
Let $M_g=\Sigma_g \times S^2$. We will focus on the stability of $\pi_0, \pi_1$ of $Symp(M_g\# \overline{n \mathbb{C} P^2}, \omega_t)$ while the symplectic form varies in a family such that $\omega_t(\Sigma_g) \to \infty$ and the remaining symplectic areas stay fixed.
By McDuff's classification results \cite{McD94}, any symplectic form on $M_g$ is diffeomorphic to $\mu \sigma_{\Sigma_g} \oplus \sigma_{S^2}$ for some $ \mu >0$, up to diffeomorphism and normalization. This classification result also holds in the blowups $M_g\# \overline{n \mathbb{C} P^2}$ \cite{LLiu2}: if one picks an $\omega$ on $M_g\# \overline{n \mathbb{C} P^2}$, then we normalize $\omega$ to have areas $(\mu, 1, c_1, \cdots, c_n)$ on the homology classes $B, F, E_1, \cdots, E_n$. The vectors $u=(\mu, 1, c_1, \cdots, c_n)$ determine all possible symplectic cohomology classes and belong to a convex region $\Delta^{n+1}$ in $\mathbb{R}^{n+1}$, whose boundary walls are $n$-dimensional convex regions given by linear equations. We will be concerned with symplectic deformations inside such $\Delta^{n+1}$ for the $n$-point blowups.
The topology of symplectomorphism groups is tied to the space of almost complex structures. The core of our paper is to understand the space of almost complex structures.
Throughout the paper, we use the notation $G^g_{u,n}$ for $Symp (M_g\# \overline{n \mathbb{C} P^2} ,\omega)\cap
{\rm Diff}_0( M_g\# \overline{n \mathbb{C} P^2})$, where $[\omega]=u$ is a reduced cohomology class (see Section \ref{cone} for the definition)\footnote{Even though $u$ is a just reduced cohomology class rather than an isotopy class, there is no ambiguity as explained in Section \ref{tcinf}.} and ${\rm Diff}_0( M_g\# \overline{n \mathbb{C} P^2})$ is the identity component of the group of diffeomorphisms.
In particular, Lemma \ref{incup} and Proposition \ref{inflation} allow us to prove the following:
\begin{thm} \label{stab01intro}
Suppose $\mu, \mu'>\max\{g, n\}$ for $u= (\mu, 1, c_1,\cdots, c_n)$, $u'= (\mu', 1, c_1,\cdots, c_n)$. Then the groups $\pi_0$ and $\pi_1$ of $G^g_{u,n} $ and $ G^g_{u',n} $ are the same.
\end{thm}
\begin{rmk}
The main theorem can be stated in greater generality if we fully introduce the simplicial structure of the reduced symplectic cone. Basically, the stated stability holds true for any family of $u= (\mu, 1, c_1,\cdots, c_n)$ as long as these vectors do not escape to the simplicial walls.
This is because on the boundaries of the symplectic cone chambers (see Figure \ref{rule2} for an illustration), some symplectic curves disappear as their area shrinks to zero, and this may change the connectedness and the codimension 1 cycles in $G^g_{u,n}.$
Nevertheless, the way we state Theorem \ref{stab01intro} is a weaker but convenient version, and it is sufficient for Proposition \ref{tlimitintro}.
\end{rmk}
We remark that Proposition \ref{inflation} grants us that there is a topological colimit on horizontal lines inside the reduced symplectic cone as $\mu \to \infty.$
The absence of certain badly-behaved strata of almost complex structures allows us to establish the full stability conjecture in the particular case when the blow-up sizes all equal half of the area of the fiber $[S^2]$:
\begin{thm} \label{stabintro}
The homotopy type of $G^g_{\mu,n}$ is constant for $\frac{k}{2} < \mu \le \frac{k+1}{2},$ for
any integer $k \ge 2g$. Moreover as $\mu$ passes the half integer $\frac{k+1}{2}$, all the groups
$\pi_i, i = 0,\cdots, 2k + 2g - 1$ do not change.
\end{thm}
In Section \ref{s:out}, we proceed to describe topologically the colimit $G^g_{\infty,n}$ of the symplectomorphism groups in the equal size $\frac12$ blow-ups.
We establish the following proposition on the disconnectedness of this topological model, denoted $\mathcal{D}^g_n$. For a more detailed description of $\mathcal{D}^g_n$ and $G^g_{\infty,n}$, see Definition \ref{fibergp}.
\begin{prp}\label{tlimitintro}
Take $M_g\# \overline{n \mathbb{C} P^2}$ with forms in classes $(\mu,1, \frac12,\cdots,\frac12)$. Then there is a smooth topological model group ${\mathcal D}^g_n$ so that:
\begin{enumerate}
\item ${\mathcal D}^g_n$ is weakly homotopic to the topological colimit $G^g_{\infty,n}$ obtained as $\mu$ goes to $\infty$.
\item The group ${\mathcal D}^g_n$ is disconnected when $g\ge 2$.
\item When $\mu\to \infty$, for $i=0,1$, $\pi_i(G^g_{u,n})= \pi_i(G^g_{\infty,n})$ and hence the groups $G^g_{u,n}$ are disconnected for $g \geq 2$.
\end{enumerate}
\end{prp}
Notice that the above theorem extends the results of \cite{BL1} from the one-point blowup of a minimal ruled surface to arbitrary points blowups. It remains unknown what the full group $\pi_0 \mathcal{D}^g_n$ is, and in particular whether the disconnectivity in Proposition \ref{tlimitintro} holds for other symplectic forms in horizontal lines in the cone. We hope to explore these questions in future works.
On the line of equal size $1/2$, the rank of $\pi_1(G^g_{\mu,n})$ is positive (indeed at least $n$) for an $n$-fold blowup of $M_g$. On the other hand, by \cite[Corollary 3.6]{HK15}, if the number of blowups is large ($n \ge \mu/2$), there are points on this line that do not admit Hamiltonian circle actions. Note that this never happens in the minimal cases or their one-point blowups, since those cases all admit Hamiltonian circle actions.
As a corollary, we find that
\begin{cor}
There are new elements in $\pi_1(G^g_{u,n})$ for a symplectic form without any Hamiltonian circle actions.
\end{cor}
Notice that elements of this kind were previously discussed in \cite{Kedra} for general type surfaces and in \cite{ABP19} for certain forms on rational 4-manifolds.
\noindent\textbf{Acknowledgements.} We are grateful to Richard Hind, T-J Li, Dusa McDuff, Weiwei Wu, Michael Usher and Weiyi Zhang for helpful conversations.
\section{Preliminaries}\label{cone}
This section covers the preliminary results needed to develop the proof strategies. These results have been largely described in \cite{BL1}. We include them here to make the current manuscript self-contained.
\subsection{The symplectic cone}
By \cite{LL01}, if $(M,\omega)$ is a closed symplectic 4-manifold with $b^+= 1$, the symplectic canonical class is unique once we fix $u=[\omega]$, and we denote it by $K_u$.
Let $\mathcal{E}$ be the set of exceptional sphere classes, and let $\mathcal{C}_M \subset H^2(M,\mathbb{R})$ denote the symplectic cone, i.e. the collection of classes of symplectic forms.
In Theorem 4 of \cite{LL01}, Li-Liu showed that if $M$ is a closed, oriented 4-manifold with $b^+= 1$
and if the symplectic cone $\mathcal{C}_M$ is nonempty, then
$$\mathcal{C}_M= \{u\in P |0 < u \cdot E \quad \text{for all} \quad E \in \mathcal{E} \},$$
where $P\subset H^2(M,\mathbb{R}) $ is the cone of positive form classes.
\begin{dfn}
For a cohomology class $\omega$ on $M_g\# \overline{n \mathbb{C} P^2}$, we normalize $\omega$ to have areas $(\mu, 1, c_1, \cdots, c_n)$ on the homology classes of the standard basis $B, F, E_1, \cdots, E_n$.
Then the symplectic forms in this cohomology class are diffeomorphic; cf. \cite{McD96, LL01}.
Furthermore we define {\it reduced} vectors $u=(\mu, 1, c_1, \cdots, c_n)$ to be those with $\mu>0, c_1+c_2\ne 1, 0<c_i<1,$ $ c_1\geq c_2\geq \cdots \geq c_n.$
\end{dfn}
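For example, the constraint $0<c_i<1$ in the reduced condition is consistent with Li-Liu's description of the cone above: pairing $u$ with the exceptional classes $E_i$ and $F-E_i$ (both of which lie in $\mathcal{E}$) gives
\begin{equation*}
u\cdot E_i = c_i > 0, \qquad u\cdot(F-E_i) = 1-c_i > 0.
\end{equation*}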
We remark that the reduced symplectic classes give a set of representatives for the orbits of the action of the diffeomorphism group. Since diffeomorphic forms have homeomorphic symplectomorphism groups, it is sufficient to study the stability conjecture for reduced classes.
We will be concerned with symplectic deformations inside a convex region $\Delta^{n+1}$ in $\mathbb{R}^{n+1}$, the right side of the reduced symplectic cone, given as a linear truncation with $\mu \geq 1$. We refer to this region as the symplectic cone for brevity. Strictly speaking, $\Delta^{n+1}$ is a slice of $\mathcal{C}_M$ where the fiber class has area 1.
Note that the way we partition this symplectic cone is by looking at the homology classes of potential symplectic curves. To that end, we will now fix some notation:
\begin{dfn}\label{sw}
Let $\mathcal S_{\omega}$ denote the set of homology classes of embedded $\omega$-symplectic curves and $K_{\omega}$ the symplectic canonical class. For any $A\in \mathcal S_{\omega}$, by the adjunction formula,
\begin{equation}\label{AF}
K_{\omega}\cdot A=-A\cdot A -2+2g(A).
\end{equation}
For each $A\in \mathcal S_{\omega} $ we associate the integer
$$ {\mbox{cod}_A}=2(-A\cdot A-1+g).$$
\end{dfn}
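For instance, an exceptional sphere class $E\in\mathcal{E}$ has $g(E)=0$ and $E\cdot E=-1$, while the square $-2$ rational classes $F-E_1-E_2$ and $E_i-E_j$ appearing in the reduced condition have $g=0$, so
\begin{equation*}
\mbox{cod}_{E} = 2(1-1+0) = 0, \qquad \mbox{cod}_{F-E_1-E_2} = \mbox{cod}_{E_i-E_j} = 2(2-1+0) = 2.
\end{equation*}
In particular, exceptional classes contribute no walls, while the square $-2$ classes cut out codimension 2 strata.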
We can define $\mathcal{S}_u$, where $u=[\omega]$, accordingly, using only the cohomology data of $\omega$. We denote by $\mathcal{S}^{<0}_u$ the subset of $\mathcal{S}_u$ consisting of the classes of positive codimension; when represented by pseudo-holomorphic curves, these correspond to curves of negative index.
\begin{rmk}
\begin{itemize}
\item By Lemma 2.4 in \cite{ALLP}, $\mathcal{S}_u=\mathcal{S}_w$.
\item By the main theorem in \cite{LU06}, the negative self-intersection classes that admit embedded representatives are exactly those that pair positively with the class $\omega.$ Namely, we can find some integral class $u'$ for which the classes of $\mathcal{S}^{<0}_u$ admit embedded curves, and the symplectic inflation of \cite{LU06} allows us to change the class $u'$ into $u.$
\end{itemize}
\end{rmk}
Let us explain the simplicial structure of $\Delta^{n+1}$ that guides the stability conjecture of the symplectomorphism group.
\begin{dfn}
An {\bf extremal exterior wall} (not contained in the cone) of $\Delta^{n+1}$, is given by $\{u| u\cdot E=0\}$ where $E\in \mathcal{E}$ is any exceptional class.
A {\bf reduction exterior wall} (contained in the cone) of $\Delta^{n+1}$ is given by $\{u| u\cdot D=0\}$, where $D$ is one of the square $-2$ homology classes given by the reduced condition, $D\in\mathcal{R}=\{ F-E_1-E_2,\; E_i-E_j,\; i>j>0\}$.
A partition of the symplectic cone $\Delta^{n+1}$ into {\bf chambers} is given by separations with {\bf interior walls} given by $u\cdot A = 0,$ where $A\in \mathcal{S}^{< 0}_u \setminus (\mathcal{E}\cup \mathcal{R}).$
\end{dfn}
It is important to point out that for the purpose of proving the current results of stability in the first two homotopy groups, we will only need to partition with interior walls where the classes $A$ are of {\it section type } ${B+kF-\sum_i c_iE_i}$.
Here we draw the picture for $\Delta^3$ (with chambers when $\mu \ge 1$) of a two-point blowup of $\Sigma_g\times S^2:$
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[height=2.5in]{rule2.png}
\label{rule2}
\end{minipage}%
\begin{minipage}{.5\textwidth}
Notice that in this figure, the $Y$-axis is the area ratio $\mu$, while $X$ and $Z$ are the blowup sizes of $E_1$ and $E_2$ respectively. Clearly, $Y$ goes to $\infty.$ The red triangles are the walls defined by the curve classes, and the tetrahedron is the stable chamber of Theorem \ref{stab01intro}.
\end{minipage}%
\subsection{The homotopy fibration and the inflation strategy}
The overwhelming majority of papers discussing the topology of symplectomorphism groups in the ruled surface setting base their results on the following fibration, first introduced by Kronheimer \cite{Kro99} and then adapted by McDuff \cite{McDacs}.
McDuff's approach in \cite{McDacs} was to consider Kronheimer's fibration in \cite{Kro99}:
\begin{equation}\label{fib}
Symp(M, \omega)\cap {\rm Diff}_0(M) \to {\rm Diff}_0(M) \to \mathcal{T}_{\omega},
\end{equation}
where $ \mathcal{T}_{\omega}$ represents the space of symplectic forms in the class $[\omega]$ and isotopic to a given form, and ${\rm Diff}_0(M)$ is the identity component of the group of diffeomorphisms. Moser type results provide a transitive action of ${\rm Diff}_0(M)$ on $ \mathcal{T}_{\omega}$ and give the fibration (\ref{fib}).
To facilitate maps between such fibrations as $\omega$ is deformed, McDuff \cite{McDacs}\footnote{ McDuff's original results were written in terms of a homotopy fibration where the larger space of taming almost complex structures was used. Using the fact that the space of taming almost complex structures is homotopy equivalent to the one we use of compatible structures, for reasons explained in Section \ref{tcinf} we will use the compatible structure spaces.} uses the (weak) homotopy equivalence between $ \mathcal{T}_{\omega}
$ and the space $\mathcal{A}_{\omega}$ (which is the space of $\omega'$-compatible almost complex structures, where $\omega'$ is any symplectic form isotopic to $\omega$) to construct a homotopy fibration
\begin{equation}\label{fibacs}
Symp(M, \omega)\cap {\rm Diff}_0(M) \to{\rm Diff}_0(M) \to \mathcal{A}_{\omega}.
\end{equation}
This allows one to use inflation techniques described in Section \ref{tcinf} to move by inclusions the (stratified) spaces $\mathcal{A}_{\omega}$ as the symplectic form varies.
Thus, to obtain stability results one needs to establish (a) the existence of such strata and (b) sufficiently many embedded or nodal $J$-holomorphic curves for {\it every} $J$, in order to move these strata using inflation.
Inflation of symplectic forms that tame a given almost complex structure was first introduced by McDuff \cite{McDacs} and later extended by Buse \cite{Buse11}.
As discussed in \cite{ALLP} Section 4.4, their proofs made the unwarranted assumption that for
every $\omega$-tame $J$ and every $J$-holomorphic curve $C$ one can find a family of normal planes
that is both $J$ invariant and $\omega$-orthogonal to $TC$. This is true only if $\omega$ is compatible with
$J$ at every point of $C.$
A correct version of their statements, which works with the proof provided in \cite{McDacs} for positive self-intersection curves and in Theorem 1.1 in \cite{Buse11} for negative self-intersection curves, combines as:
\begin{thm} For a 4-manifold $M$, given a compatible pair $(J,\omega),$ one can inflate along a $J$-holomorphic curve $Z$, so that there exists a symplectic form $\omega'$ taming $J$ with $[\omega']= [\omega]+ t PD(Z), t\in [0,\mu)$, where $\mu= \infty$ if $Z\cdot Z\ge0$ and $\mu= \frac{\omega(Z)}{(-Z\cdot Z)}$ if $Z\cdot Z<0$.
\end{thm}
As further explained in \cite{ALLP}, for four-dimensional manifolds $M$ with $b^+=1$ (such as the blow-up ruled surfaces), the strategy is to perform the above inflation to obtain a $J$-tame $\omega'$ in the correct cohomology class. Then, using the result of Li and Zhang \cite{LZ11} comparing the tame and compatible cones, one has an $\omega''$ compatible with the given $J$. Anjos et al. call this {\bf $b^+=1$ $J$-compatible inflation}.
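As a simple illustration of how this inflation produces the horizontal deformations relevant to Theorem \ref{stab01intro}: if $J$ admits an embedded representative of the fiber class $F$ (which has $F\cdot F=0$), then inflating along it yields $J$-tame forms with
\begin{equation*}
[\omega'] = [\omega] + t\, PD(F), \qquad t\in[0,\infty).
\end{equation*}
Since $F\cdot B=1$, $F\cdot F=0$ and $F\cdot E_i=0$, the class $u=(\mu, 1, c_1,\cdots, c_n)$ moves to $(\mu+t, 1, c_1,\cdots, c_n)$: only the area ratio $\mu$ grows.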
Note that in the non-minimal case, cohomologous forms are not known to be isotopic, in contrast to the minimal case.
We'll consider a larger space ${\mathcal{A}_{[\omega]}}$, which is the space of almost complex structures taming cohomologous forms. The space $\mathcal{A}_{[\omega]}$ is the union of its mutually homeomorphic components given by the isotopy classes of symplectic forms in the cohomology class $[\omega]$. We showed in \cite{BL1} that one can keep track of isotopy classes (hence the components) during the $b^+=1$ $J-$compatible inflation with $J$ in the larger space $\mathcal{A}_{[\omega]}$.
Namely, the inflation process does not change connected components.
Moreover, as explained in \cite{BL1} there is a natural correspondence between the connected components of the spaces $\mathcal{A}_{[\omega]}$ and the spaces of all cohomologous symplectic forms $\mathcal{T}_{[\omega]}$ and they are mutually homotopy equivalent.
If we pick one such pair of components $(\mathcal{T}_{\omega}, \mathcal{A}_{\omega})$, we have the homotopy fibration $G_{\omega}\to {\rm Diff}_0(M) \to \mathcal{A}_{ \omega}$.
Additionally, as soon as one successfully inflates to establish an inclusion of components of almost complex structure spaces for different cohomology classes $[\omega]$ and $[\omega']$ the following diagram can be used to establish homotopy maps between the spaces of symplectomorphisms groups.
\begin{equation}
\begin{CD}
G_{\omega} @>>> {\rm Diff}_0(M)@>>> \mathcal{A}_{ \omega}\\
@VVV @VVV @VVV \\
G_{\omega'} @>>> {\rm Diff}_{0}(M) @>>> \mathcal{A}_{ \omega'}.\\
\end{CD}
\end{equation}
\subsection{Picking a canonical component of the spaces of almost complex structures in the case $\mu> \max\{g,n\} $}
Denote by $J_{std}$ the integrable structure obtained by consecutive complex blow-ups of the split complex structure on $\Sigma_g \times S^2$.
First note that $J_{std}$ tames (is compatible with) some symplectic form when $\mu$ is very large. This is because for $J_{std}$ we are able to choose the blowup points on a section in class $B$. The divisors in the blowup are all strict transforms from the minimal model, and by Nakai-Moishezon we are able to find an ample class in $u=(\mu, 1, c_1,\cdots, c_n)$.
We will show that by performing $J$-tame (or compatible) inflation, any $J$ in the open stratum $\mathcal{A}^{top}_u$ as in Definition \ref{subsetA} is compatible with some symplectic form in $u'=(\mu', 1, c_1,\cdots, c_n)$, where $\mu'>\max \{g,n\}$. This means that $J_{std}$ tames (is compatible with) some symplectic form in any cohomology class with $\mu>\max \{g,n\}$. Hence, using the connected component that contains $J_{std},$ we have a canonical choice of connected component in each $\mathcal{A}_u$, and by abuse of notation we will from now on write $\mathcal{A}_u$ for this canonical component.
\subsection{Decomposition of $\mathcal{A}_u$ via pseudo-holomorphic curves}
In what follows we recall the general strategy to define subsets of the space of almost complex structures $\mathcal{A}_{u}$. We will adapt this general stratification to our needs for the current results in Section \ref{strA}.
We first define a driven subset of $\mathcal{A}_{u}$, labeled by a set $\mathcal{C}\subset \mathcal S^{< 0}_u$, for a given isotopy class of $\omega$ as follows:
\begin{dfn} \label{fine decomposition}
A collection $\mathcal{C}\subset \mathcal S^{< 0}_u$ is called admissible if
$$\mathcal{C}=\{A_1, \cdots, A_i,\cdots ,A_q |\quad A_i\cdot A_j \geq 0, \quad \forall i\neq j\}.$$ Given an admissible collection $\mathcal{C}$, we define the real codimension
of the labelled collection ${\mathcal{C}}$ as $$\mbox{cod}({\mathcal{C}})= \sum_{A_i\in \mathcal{C}} \mbox{cod}_{A_i}=\sum_{A_i\in \mathcal{C}} 2(-A_i\cdot A_i-1+g_i).$$
Define the $\mathcal{C}$-driven {\bf subset}
$$\mathcal{A}_{u, \mathcal{C}}:=\{ J\in \mathcal{A}_{u}\;|\; \hbox{every } A\in \mathcal{C} \hbox{ has an embedded $J$-holomorphic representative}\}.$$
If $\mathcal{C}=\{A\}$ contains only one class $A$, we will use $\mathcal{A}_{u,A}$ for $\mathcal{A}_{u,\{A\}}$.
\end{dfn}
The following Proposition \ref{stratum} and its Corollary are proved in \cite{BL1}.
\begin{prp} \label{stratum}
Let $(X,\omega)$ be a symplectic 4-manifold.
Suppose $U_{\mathcal{C}}\subset\mathcal{A}_{u}$ is a subset
characterized by the existence of a configuration of embedded
$J$-holomorphic curves $C_{1}\cup C_{2}\cup\cdots\cup C_{N}$ of positive codimension as in Definition \ref{sw} with $\{ [C_{1}], [C_{2}],\cdots , [C_{N}]\}=\mathcal{C}$.
Then $U_{\mathcal{C}}$ is a co-oriented Fr\'echet suborbifold
of $\mathcal{A}_{u}$ of (real) codimension $2N-2\sum g_i-2c_{1}([C_{1}]+\cdots+ [C_{N}]).$
Note that, since $c_1(\omega)=-K_\omega$ and by adjunction $2g_i-2=K\cdot C_i+C_i\cdot C_i$, we have two equivalent ways to write the codimension: $Cod=\sum_i \left(K\cdot [C_i]-[C_i]^2\right)$ or $Cod= \sum_i 2(-C_i\cdot C_i-1+g_i)$, as in Definition \ref{fine decomposition}.
In particular, this statement covers the case where $(X,\omega)$ is a ruled surface and the $C_i$'s all have negative squares.
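For instance, the equivalence of the two expressions is immediate from adjunction: since $K\cdot C_i=2g_i-2-C_i\cdot C_i$,
$$2(-C_i\cdot C_i-1+g_i)=2g_i-2-2\,C_i\cdot C_i=(2g_i-2-C_i\cdot C_i)-C_i\cdot C_i=K\cdot C_i-C_i\cdot C_i.$$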
\end{prp}
\begin{cor}
The subsets $\mathcal{A}_{u, \mathcal{C}}$ are suborbifolds of $\mathcal{A}_{u}$ of codimension $\mbox{cod}({\mathcal{C}})$, as in Definition \ref{fine decomposition}.
\end{cor}
\subsection{Identifying strata of various spaces of almost complex structures using inflation}\label{tcinf}
\section{Decompositions of $\mathcal{A}_u$ and stability under inflation}\label{stabproof}
In this section, we prove Theorems \ref{stab01intro} and \ref{stabintro} by inflating along embedded or nodal $J$-holomorphic curves.
\subsection{Curves}
Let us first recall several results from \cite{Zhang16}, stated using our basis $B, F, E_1, \cdots, E_n$ of $H_2(X,\mathbb{Z})$.
For a fixed $J$, let $\mathcal M_D$ denote the moduli space of subvarieties ($J$-holomorphic images) in class $D$.
\begin{thm}[Theorem 1.1 or 3.4 in \cite{Zhang16}]\label{intro1}
Let $(M, \omega)$ be an irrational symplectic ruled surface with a base of genus $g$ and $E$ an exceptional class. Then for any $J$ tamed by $\omega$ and any subvariety in class $E$, each one of its irreducible components is a rational curve of negative self-intersection. Moreover, the moduli space $\mathcal M_E$ is a single point.
\end{thm}
\begin{thm}[Theorem 1.2 in \cite{Zhang16}]\label{curvesexist}
Let $(M, \omega)$ be an irrational ruled surface of base genus $g\ge 1$. Then for any tamed $J$ on $M$,
\begin{enumerate}
\item Through any given point there is a unique subvariety in the positive fiber class $F$.
\item The moduli space $\mathcal M_F$ is homeomorphic to $\Sigma_g$, and there are only finitely many reducible subvarieties appearing as images of singular J-holomorphic curves in class $F$.
\end{enumerate}
\end{thm}
Here a rational curve means that the domain genus is 0; by the adjunction formula each irreducible image component has to be smooth, hence one can pick a simple $J$-holomorphic map onto it that is embedded:
\begin{lma}\label{gj=0}
The irreducible components of the stable curves of a Gromov limit for an exceptional curve all have $g_J=0$.
\end{lma}
\begin{proof}
First, by Corollary 3.8 of \cite{Zhang16}, every irreducible rational curve $C_i$ belongs to a fiber in homology class $F$. This means an irreducible component of the exceptional stable curve of class $E$ is also a component of some curve in class $F$.
Note that the fiber class $F$ pairs non-negatively with any $J$-holomorphic
subvariety. Theorem 1.4 of \cite{LZ15} says that
$0= g_J(F) \ge \sum_i g_J(C_i) $, and each $g_J(C_i)\ge 0.$ Hence all $g_J(C_i)=0.$
\end{proof}
On the other hand, for the high genus section type curves, Zhang provides the following existence result:
\begin{prp}[Proposition 3.6 in \cite{Zhang16}]\label{smoothsection} For any symplectic irrational ruled surface $(M,\omega)$ and any $J$ on it tamed by the form $\omega$, there is an embedded $J$-holomorphic curve $C$ of genus $g$ such that $[C]\cdot F=1$.
\end{prp}
This guarantees that if $J$ tames a symplectic form in class $u= (\mu,1,c_1, \ldots, c_n)$, then there is an embedded curve in a class of the form ${B+kF-\sum_i m_iE_i}$, with $k \leq g$ and integers $m_i\ge 0$, as long as the curve has positive area.
\subsection{Stratification of $\mathcal{A}_{u}$}\label{strA}
In the current paper, we are not able to guarantee the existence of enough pseudo-holomorphic curves for every $J$ to establish full stability. As we will see below, the deficiency stems from the lack of embedded exceptional curves for {\it every} $J$, which would be needed to guarantee inflation within the chambers of the symplectic cone. However, embedded or mildly degenerated exceptional $J$-curves are provided for each $J$ in the codimension 2 strata and in the open stratum.
Nevertheless, for equal size $\frac12$ blowups (see section \ref{1/2}), we are able to show that the above-mentioned types of strata along with codimension higher than $4$ strata corresponding to negative index embedded curves in classes $B+kF, k<g$, completely cover $\mathcal{A}_u$. This allows us to conclude that our current inflation results are sufficient to prove full stability in such deformations.
For any almost complex structure $J$ and any exceptional homology class $E$, we know there exists a $J$-holomorphic curve in the class $E$. Based on this we make the following definition:
\begin{dfn}
For any $J$ and an exceptional class $E$ in a ruled surface, we call the J-holomorphic exceptional curve in class $E$
\begin{itemize}
\item {\bf mildly degenerated} if its J-representative has exactly two rational embedded components in classes $\{C, D\}$, where $D$ has square $-2$ and $C$ is an exceptional class with $C^2=-1$, and $C \cdot E=0.$
\item {\bf badly degenerated} if its $J$-representative either has at least two components with a simple rational representative of square at most $-2$, or has at least one component with a simple rational representative of square less than $-2$.
\end{itemize}
\end{dfn}
For the mildly degenerated curves, we extend the definition of driven sets:
$$\mathcal{U}_{u, C,D}:=\{ J\in \mathcal{A}_{u}\;|\; \hbox{$J$ admits a mildly degenerated exceptional curve of type $\{C, D\}$} \}.$$
This is a codimension 2 suborbifold by Proposition \ref{stratum}.
The following arguments imply that exceptional class curves of type $E_i$ or $F-E_j$, as well as their specific degenerations $\{C, D \}$, are determined by homological data only. For the rest of the almost complex structures, we will show that they are included in Fr\'echet orbifolds with codimension 4 or higher.
\begin{lma}\label{estable2}
Consider any exceptional homology class $E$ and any $J \in \mathcal{A}_u$ on an irrational ruled surface.
One of the following three situations takes place:
\begin{itemize}
\item The class $E$ has an embedded rational $J$-curve representative.
\item The stable $J$-holomorphic curve in class $E$ is mildly degenerated.
\item The stable $J$-holomorphic curve is badly degenerated.
\end{itemize}
\end{lma}
\begin{proof}
Consider the stable curve in an exceptional class, $E=\sum_i r_iC_i$, where each $C_i$ is possibly multiply covered. Let $g_J(A)$ be the virtual genus of class $A$, given by $\frac{A\cdot A +K\cdot A}{2}+1$. By Lemma \ref{gj=0}, $0=g_J(E) \ge \sum_i r_i g_J(C_i) \ge 0$. Hence $ g_J(C_i)=0$ for each $C_i$. Letting $g(\Sigma_i)$ denote the domain genus of the corresponding map onto $C_i$, we always have $ g_J(C_i)\ge g(\Sigma_i)$, where equality holds iff the image $J$-holomorphic curve $C_i$ is smooth.
This implies that we can choose a simple embedded $J$-holomorphic map with image $C_i$ for each $i$, since indeed $g_J(C_i)=0=g(\Sigma_i)$.
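As a quick illustration of this virtual genus count: an exceptional class ($A\cdot A=-1$, $K\cdot A=-1$) has $g_J(A)=\frac{-1-1}{2}+1=0$, and a class of square $-2$ with $K\cdot A=0$ also has $g_J(A)=\frac{-2+0}{2}+1=0$; these are exactly the components appearing in the mildly degenerated case.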
First, let us consider the set $\mathcal{A}_E$ consisting of all $J_0$ admitting embedded exceptional curves. This is an open subset since the curve is stable and for $J$ sufficiently close to $J_0$ we must also have embeddedness of the exceptional curve.
Let us now look at the rest of $\mathcal{A}_u$, consisting of those $J$ with degenerated exceptional curves.
Then by the connectedness of the stable image curve, there must be at least one embedded rational component with sufficiently negative self-intersection $-k$, $k\geq 2$.
We will be using the virtual dimension computation (see Theorem 1.6.2 of \cite{IS99} by Ivashkovich-Shevchishin) for underlying simple representatives.
Namely, for a simple class $A$ admitting an embedded $J$-holomorphic representative, and for $K$ the canonical class, the index of the curve representing $A$ is given by
\begin{equation}\label{indA}
index(A)= 2g-2-2K\cdot A.
\end{equation}
If there is exactly one component with square strictly less than $-1$, and it is a simple class with self-intersection exactly $-2$, then the degeneration has to be of $\{C, D\}$ type, as one sees by computing squares and pairings with the canonical class $K$.
More precisely, assume $E=D+\sum C_i+\sum P_j,$ so that $D^2=-2$, $C_i^2=-1$, $P^2_j\ge 0.$ By $g_J(D)=g_J(C_i)=g_J(P_j)=0,$ we have $K\cdot D=0$, $K\cdot C_i=-1$ and $K\cdot P_j=-2-P_j^2\le -2$. Also, $ K \cdot (D+\sum C_i+\sum P_j) =K\cdot E=-1.$ Hence there can be no $P_j$ and precisely one $C_i$, i.e. $E=C+D$. Furthermore, since the component $D$ is embedded, all $J$ admitting a curve with such a component belong to the $Cod=2$ stratum, by the results in Section 2 and \eqref{indA}.
To see that $C\cdot E=0$ when $E=C+D$, one only needs to note the following: $-1=E^2= (C+D)\cdot(C+D)= C\cdot (C+D) +D\cdot (C+D) =C\cdot E + 1-2.$ Hence $C\cdot E$ has to be $0.$
On the other hand, if there is either (a) at least one component of negative self-intersection less than or equal to $-3$, or (b) more than one component of self-intersection $-2$, we argue by Ivashkovich-Shevchishin's index formula \eqref{indA} and transversality of strata that the corresponding $J$ belongs to higher codimension strata detailed in Section \ref{cone}.
In the first case (a), since there is an irreducible component with square less than or equal to $-3$, we write
$E=C_1 +\sum_{i>1} C_i,$ with $C_1^2<-2.$ If $C_1$ has a simple representative, then we are done. Now let us deal with the case where $C_1$ is multiply covered. Let $C_1=m C'_1$, $m>1$, where $C'_1$ has a simple representative; notice that we immediately know that $ (C'_1)^2\le -1.$
Then we have $0= 2 g_J (C_1) = 2+ m^2 (C'_1)^2 +m K_J \cdot C'_1.$ This means that $$K_J \cdot C'_1 =\frac{-2-m^2 (C'_1)^2}{m}\ge \frac{m^2-2}{m}>0.$$
Hence $C'_1$ has index less than $-2$. This means that $J$ belongs to a stratum of codimension greater than 2 in $\mathcal{A}_{u}.$
If there is more than one component with square $-2$, let us write $E=C_1 +C_2 +\sum_{i>2} C_i,$ such that $C_1^2=C^2_2=-2.$ If both of them have simple representatives, then we are done. Now let us assume some of them are multiply covered, i.e. $C_1=p C'_1$, $C_2= q C'_2$, $p, q \ge 1.$ We still have $0= 2 g_J (C_1) = 2+ p^2 (C'_1)^2 +p K_J \cdot C'_1,$ which gives $$K_J \cdot C'_1 =\frac{-2-p^2 (C'_1)^2}{p}\ge 0.$$ Hence the index of $C'_1$ is at most $-2$, and similarly for $C'_2$. Since both simple representatives have negative indices, by the transversality of the simple representatives $J$ belongs to a stratum of codimension greater than 2 in $\mathcal{A}_{u}.$
\end{proof}
The previous lemma yields a three-part partition of the space $\mathcal{A}_u$ for each exceptional class $E$. We will use refinements of these partitions, intersecting with strata given by section type classes as well, and as a consequence obtain a stratification of $\mathcal{A}_{u}$ as follows:
\begin{dfn}\label{subsetA}
The following is a stratification of $\mathcal{A}_{u}$, validated by Lemma \ref{strsep} below.
\begin{itemize}
\item We define $\mathcal{A}^{top}_{u}$ to be the collection of $J\in \mathcal{A}_{u}$ characterized by
\begin{itemize}
\item [(a)] there are no $J$-holomorphic curves in classes $B+kF-\sum E_i$ of index less than or equal to $-2$, and
\item [(b)] an embedded exceptional curve in each class of type $E_i$ or $F-E_i$.
\end{itemize}
$$ \mathcal{A}^{top}_u = (\underset{E' \in \mathcal{E}}{\cap} \mathcal{A}_{E'}) \setminus ( \underset{ind (B+mF-\sum E_i) \le -2}{\cup}\mathcal{A}_{ B+mF-\sum E_i}), $$
where $\mathcal{E}$ denotes the set of exceptional classes and the union runs over the classes $B+mF-\sum E_i$ of index at most $-2$.
\item $\mathcal{A}^{2}_{u}$ is the disjoint union of codimension two strata, which can be of one of the following types:
\begin{itemize}
\item Degeneration type, forming the collection $\mathcal{A}_{u}^{mild}$. For
a given degeneration $\{C, D\}$ of a given exceptional class $E$, we introduce
$\mathcal{A}^{2}_{u,C,D}:=(\mathcal{U}_{u, C, D} \cap (\underset{E' \ne E}{\cap} \mathcal{A}_{E'}) )\setminus ( \underset{ind (B+mF-\sum E_i) \le -2}{\cup}\mathcal{A}_{ B+mF-\sum E_i}),$ where $E'$ runs over the exceptional classes. In words, each $J\in\mathcal{A}^{2}_{u,C,D}$ is characterized by
\begin{itemize}
\item [(a)] there are no $J$-holomorphic curves in classes $B+kF-\sum E_i$ of index less than or equal to $-2$, and
\item [(b)] the existence of exactly one exceptional class having a stable $J$-representative of homology type $\{C, D\}$, s.t. $C^2=-1, D^2=-2$, as in Lemma \ref{estable2}, and
\item[(c)] all other exceptional classes of type $E_i$ or $F-E_i$ are represented by embedded J-holomorphic curves.
\end{itemize}
\item The second (possible) collection $\mathcal{A}^{2}_{u, B}$ is a union of sets obtained as follows. Pick a class $B'=B+kF-\sum E_i$ with index $4k+2-2g-2l=-2$, where $l$ is the number of $E_i$'s. The typical stratum given by this $B'$ inside the larger collection is
$\mathcal{A}^{2}_{u, B'}:= \mathcal{A}_{u, B'}\cap (\underset{E' \in \mathcal{E}}{\cap} \mathcal{A}_{E'}) \setminus ( \underset{ B+mF-\sum E_i \ne B'}{\cup} \mathcal{A}_{ u, B+mF-\sum E_i, ind \leq-2}).$
In words, this set is characterized by $J$ for which there exist
\begin{itemize}
\item [(a)] one embedded section in class $B'$,
\item[(b)] no other embedded sections of index less than or equal to $-2$, and
\item[(c)] an embedded exceptional curve in each class of type $E_i$ or $F-E_i$.
\end{itemize}
\end{itemize}
\item Finally, $ \mathcal{A}^{high}_{u} := \mathcal{A}_{u}\setminus (\mathcal{A}^{top}_u \cup \mathcal{A}^{2}_{u}) $
is the complement of the above two subsets.
\end{itemize}
\end{dfn}
Notice that Lemma \ref{type2} will guarantee the existence of the strata $\mathcal{A}^{2}_{u, B}$.
The following table summarizes this definition and is explained in Lemma \ref{strsep}.
\begin{table}[ht]
\begin{center}
\resizebox{\textwidth}{!}{
\renewcommand{\arraystretch}{2}
\begin{tabular}{||c c c c ||}
\hline\hline
section class & embedded exceptional &
mildly deg& deg\\[0.5ex]
\hline\hline
No $B+mF-\sum E_i, ind \le -2$ & $\mathcal{A}^{top}_{u}$ &
$\mathcal{A}_{u}^{mild}$ &
$\mathcal{A}^{high}_{u}$ \\[0.5ex]
\hline
$B+kF-\sum E_i, ind =-2$ & $\mathcal{A}^{2}_{u, B}$ &
$\mathcal{A}^{high}_{u}$ & $\mathcal{A}^{high}_{u}$ \\[0.5ex]
\hline
$B+kF-\sum E_i, ind <-2$ & $\mathcal{A}^{high}_{u}$ &
$\mathcal{A}^{high}_{u}$ & $\mathcal{A}^{high}_{u}$ \\[0.5ex]
\hline\hline
\end{tabular} }
\caption{Partition}\label{partable}
\end{center}
\end{table}
\vspace{10mm}
\smallskip
The following Lemma shows that Definition \ref{subsetA} indeed gives a partition, and it further clarifies what type of almost complex structures fall into the third set $\mathcal{A}^{high}_u$.
\begin{lma}\label{strsep}
$\mathcal{A}_u$ is the disjoint union of the above 3 parts:
\begin{enumerate}
\item $\mathcal{A}^{top}_{u}$ is an open subset of $\mathcal{A}_u$ ($Cod=0$),
\item $\mathcal{A}^{2}_{u}$ has $Cod=2$ in $\mathcal{A}_u$,
\item $\mathcal{A}^{high}_u$ is the union (not necessarily disjoint) of (a) strata of $Cod>2$ corresponding either to homology classes of type $B+kF-\sum E_i,$ or to homology classes, of square strictly less than $-2$, of rational curves that appear as a degeneration of the exceptional class, or (b) transverse intersections of certain strata of codimension $2$.
\end{enumerate}
\end{lma}
\begin{proof}
A simple area argument shows that only finitely many open sets $\mathcal{A}_{u,E}$ appear in Definition \ref{subsetA}, and for a given $u$ only finitely many closed sets appear in the unions $\cup_i\mathcal{A}_{u, B+mF-\sum E_i, ind \leq-2}$. This yields part (1) immediately.
By Proposition \ref{smoothsection}, there is always an embedded curve in some section class.
The proof of part $(3)$ relies on the way we defined the set $\mathcal{A}^{high}_u$ (see Table \ref{partable}).
Indeed, for a $J$ in any one of the sets in the bottom row of the table, there is an embedded $J$-holomorphic curve in a class $B+kF-\sum E_i$ with index $4k+2-2g-2l$ at most $-4$. Then $J$ belongs to some stratum $\mathcal{A}_{u,B+kF-\sum E_i}$ of codimension at least $4$ generated by a section class.
Secondly, if a $J$ is part of $\mathcal{A}^{high}_u$ by lying in one of the two sets in the second row of the table, then $J$ belongs to a transversal intersection of strata of codimension $2$, given either by two different section type classes of index $-2$ each, or by one section type class and one mildly degenerated type $\{C, D\}$ exceptional curve.
If $J$ admits at least one badly degenerated exceptional curve or at least two mildly degenerated ones, these provide strata (or transverse intersections of such) of codimension at least $4$. Note that here, by Lemma \ref{estable2}, we have at least one embedded component of index at most $-4$ or at least two components of index $-2$.
Lastly, any stratum in Part $(2)$ has correct codimension in $\mathcal{A}_u$, by the index computation for an embedded $(-2)$ sphere or by the existence of a section class in the correct codimension.
Indeed, if $J$ is in $\mathcal{A}_u^2$, then either there is exactly one exceptional class $E$ whose curve breaks into $C+D$, where $D$ is the unique $-2$ curve and $C$ is an exceptional class (and there are no embedded section classes of large codimension), or there is an embedded curve in a class $B+kF-\sum E_i$ of index $-2$.
\end{proof}
\begin{rmk}
\begin{itemize}
\item Lemma \ref{strsep} states that $\mathcal{A}^{high}_u$ is a union of strata of Fr\'echet suborbifolds with codimension 4 or higher.
\item The changes in $\mathcal{A}^{high}_u$ as $\omega$ varies are not well understood for general deformations, as one does not have enough curves to perform inflation to reposition {\bf within} a given chamber. This is the main impediment to establishing the full stability conjecture in the general case.
\item However, $\mathcal{A}^{high}_u$ is well understood in the $(\mu, 1, \frac12, \cdots, \frac12)$ case, since in this case the exceptional curves cannot degenerate for homological reasons.
\end{itemize}
\end{rmk}
\subsection{Stability of $Symp(M,\omega)$ and inflation}
Firstly, note that one can always inflate along a curve in the class $F$; this yields the following Lemma, which allows us to find an inclusion between the strata for different $[\omega]$.
\begin{lma}\label{incup}
For any stratum, including the open stratum, we have $\mathcal{A}_{u, \mathcal{C}}\subset \mathcal{A}_{u', \mathcal{C}}$ for $u=(\mu,1,c)$, $u'=(\mu+{\epsilon},1,c)$, and all $\mu >1, {\epsilon} > 0$.
\end{lma}
\begin{proof}
By \cite{Zhang16} Theorem 1.6, we know that for each $J\in
\mathcal{A}_{u, \mathcal{C}}$, through each point of $M$ there is a stable $J$-holomorphic sphere representing the fiber class $F = [{\rm pt} \times S^2]$.
Then we can inflate along the curve in class $F$. Let us start with $u=(\mu, 1, c).$
Inflating, we obtain a form in $t\, P.D.[F] +(\mu, 1, c) = (\mu+t, 1, c)$, for all $t \in [0,\infty).$
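Here $P.D.[F]$ contributes exactly $(t, 0, \cdots, 0)$ in our basis conventions, since $F\cdot B=1$, $F\cdot F=0$ and $F\cdot E_i=0$; this is why only the first entry changes above.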
\end{proof}
\begin{prp}\label{inflation}
In the following cases, for $\mu>g$ the strata have inclusion relations:
\begin{enumerate}
\item $\mathcal{A}^{top}_{u}\supset \mathcal{A}^{top}_{u'}$, where $u=(\mu,1, c_i), u'=(\mu+{\epsilon},1,c_i)$, for all $\mu >\max\{n, g\}, {\epsilon} > 0$.
\item $ \mathcal{A}^{2}_{u, C, D} \supset \mathcal{A}^{2}_{u', C, D},$ $u=(\mu,1,c_i), u'=(\mu+{\epsilon},1,c_i)$, for all $\mu >1, {\epsilon} > 0$.
\item For any stratum $ \mathcal{A}^{2}_{u, B+kF-\sum E_i} \subset \mathcal{A}^{2}_{u, B}$, where $B+kF-\sum E_i$ has index $-2$, we have $ \mathcal{A}^{2}_{u, B+kF-\sum E_i}\supset \mathcal{A}^{2}_{u', B+kF-\sum E_i}$ for $u=(\mu,1,c_i), u'=(\mu+{\epsilon},1,c_i)$, and all $\mu >1, {\epsilon} > 0$.
\end{enumerate}
\end{prp}
\begin{table}[ht]
\begin{center}
\resizebox{\textwidth}{!}{\renewcommand{\arraystretch}{2}\begin{tabular}{||c c c c ||}
\hline\hline
Direction & Strata &
Class to inflate & Proof \\[0.5ex]
\hline\hline
In the same chamber & $\mathcal{A}_{u, \mathcal{C}} $ & $B+xF-\sum E_i$,
$E_i$, $F-E_i$ & Prop. \ref{inflation} \\[0.5ex]
\hline
Within a chamber & $\mathcal{A}^2_{u, C, D} $ & $B+xF-\sum E_i$, $\{C, D\}$,
$E_i$, $F-E_i$ & Lemma \ref{cod2inf} \\[0.5ex]
\hline
Within a chamber & $\mathcal{A}^{top}_{u} $ &
$B+xF-\sum E_i$, $E_i, F-E_i$ & Prop. \ref{inflation} \\[0.5ex]
\hline
Across to chambers with large $\mu$ & Any strata &
$F$ & Lemma \ref{incup} \\[0.5ex]
\hline
Across to chambers with small $\mu$ & $\mathcal{A}_{u, \mathcal{C}} $ and $\mathcal{A}^{top}_{u} $ &
$B+xF-\sum E_i$ & Prop. \ref{inflation} \\[0.5ex]
\hline\hline
\end{tabular} }
\caption{Inflation process}\label{inftable}
\end{center}
\end{table}
\begin{proof}
First, by Lemma \ref{strsep} (together with Proposition \ref{smoothsection}), for any of the three types of strata considered in the current Proposition there is an embedded curve in a class $A=B+xF-\sum E_i$, where $x<g$.
Note that the process of inflating along this curve will increase the areas of the fiber class $F$ and of those $E_j$ with $E_j \cdot A>0$. That means that while we land in the desired chamber with a smaller section area $\mu$, we will not be positioned back on the horizontal line considered in the statements. To reposition within this smaller $\mu$ chamber we will use the curves in the exceptional classes.
For the sets in points (1) and (3), since all $E_j$'s and $F-E_j$'s are embedded, we can perform simultaneous inflation along collections of such exceptional curves, decreasing proportionally the areas of the curves involved so as to approach the exterior wall simplices. On the other hand, for the sets in point (2), we have to explain one additional inflation that sends us towards the reduction simplicial wall (Lemma \ref{cod2inf}). Because we are in a codimension 2 stratum, there is at most one $E_j$ that has a type $\{C, D\}$ degeneration; by Lemma \ref{cod2inf} we can still inflate along the pair $C, D$ in such a manner that the homological effect on $u$ is the same as if the exceptional curves were embedded.
More concretely, here is how we inflate along a curve in class $B+xF -\sum E_i$. Let us start with $u=(\mu, 1, c_i).$ By inflating the classes $F-E_i$ and $B+xF-\sum E_i$, we obtain a form in
$$t\, P.D.[B+xF-\sum E_i] +(\mu, 1, c_i) +\sum_i t_i\, P.D.[F-E_i] = (\mu +tx+ \sum t_i,\; 1+t ,\; t_i+c_i),$$
which normalizes to $$ \biggl(\frac{tx+\mu +\sum t_i}{1+t}, 1, \frac{c_i+t_i}{1+t}\biggr),$$
$\forall t \in [0,\infty).$
Note that we will choose $t_i/t$ proportional to $c_i$, that is, we always take $t_i=c_i t$. Then in the resulting symplectic class, the area of the exceptional curves is always stable after normalization.
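Indeed, with this choice the normalized areas of the exceptional curves are literally constant in $t$:
$$\frac{c_i+t_i}{1+t}=\frac{c_i(1+t)}{1+t}=c_i.$$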
When $B+xF-\sum E_i$ is a curve of negative self-intersection (note that for the current Proposition this can only happen in codimension 2, with $g=1$ and $B+xF-\sum E_i$ of square $-1$), the computation is almost the same except for the range of $t$:
$$t\, P.D.[B+xF-\sum E_i] +(\mu, 1, c_i) +\sum_i t_i\, P.D.[F-E_i] = (\mu +tx+ \sum t_i,\; 1+t ,\; t_i+c_i),$$
which normalizes to $$ \biggl(\frac{tx+\mu +\sum t_i}{1+t}, 1, \frac{c_i+t_i}{1+t}\biggr),$$
$\forall t \in [0, \omega(B+xF-\sum E_i)).$
This also grants that we move to any chamber with pseudo-holomorphic $B+xF-\sum E_i.$
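As a quick consistency check (holding the $t_i$ fixed), the normalized section area is strictly decreasing in $t$:
$$\frac{d}{dt}\left(\frac{tx+\mu+\sum t_i}{1+t}\right)=\frac{x-\mu-\sum t_i}{(1+t)^2}<0 \quad \hbox{whenever } \mu+\sum t_i>x,$$
so every normalized value in the interval $(x,\,\mu+\sum t_i\,]$ is attained for some $t$.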
Then we just need to make sure that, as $t\to \infty$, the resulting symplectic classes cover the cases $\mu' > g$. Note that, keeping the $t_i$ fixed, $\displaystyle{\lim_{t\to \infty }\frac{tx+\mu+\sum t_i }{1+t} }=x\leq g.$ Hence we have proved the statement of the Proposition.
\end{proof}
As explained in the above proof, repositioning on the lower strata inside a chamber is done by inflating along the embedded exceptional curves; the following lemma deals with the case in which $J$ admits a $\{C, D\}$ type degeneration.
\begin{lma}\label{cod2inf}
For any $J$ admitting a $\{C, D\}$ type degeneration of an exceptional class $E$, and for a symplectic form $\omega$ compatible with $J$, we can inflate along the two irreducible components repeatedly, in an alternating fashion, so that we obtain a symplectic form $\omega'$ tamed by $J$ with $[\omega'] = [\omega] +t\, P.D.(E)$, $0\le t < \omega(E)-\omega(C).$ In other words, there exists a symplectic form $\omega_{\epsilon}$ such that the symplectic area of the exceptional class $E$ is $\epsilon$ close to the area of $C$, for arbitrarily small $\epsilon$.
\end{lma}
\begin{proof}
Assume that, for a given $J$ compatible with $\omega$, the exceptional class $E$ has a stable curve representative degenerating as in the second case of Lemma \ref{estable2}; i.e. the stable curve has two irreducible component classes $C$ and $D$. Each has an embedded representative, and they intersect each other transversely, with $C \cdot D=1$.
We will perform alternating inflation on the curves $D, C$; every step starts with a compatible form and ends with a tamed form; however, between two steps we use the cone theorem of \cite{LZ09}, introduced in Section \ref{tcinf}, to reposition on a compatible form in the same cohomology class.
Notice that each inflation step will affect all areas of $ C, D, E$.
\begin{itemize}
\item Step 1: inflate along $D = (E-C)$, on an arbitrarily large interval inside $0 \leq t < \frac{\omega(E)- \omega(C)}{2} $\footnote{A convenient way of avoiding the ${\epsilon}$ in each step of the $J$-tame inflation process is to use the formal inflation as defined in \cite{Zha17}.}. We obtain an $\omega_1$ (compatible with $J$ due to the above discussion) s.t. the areas $ ( \omega_1(E), \omega_1(C) )$ are arbitrarily close to $( \frac{\omega(E)+ \omega(C)}{2}, \frac{\omega(E)+ \omega(C)}{2} ).$
\item Step 1': inflate along $C$, on an arbitrarily large interval inside $0 \leq t < \frac{\omega(E)- \omega(C)}{2} $. As above, we pick $\omega^{'}_{1}$ compatible with $J$ s.t. the areas $ ( \omega^{'}_{1}(E), \omega^{'}_{1}(C) )$ are arbitrarily close to $( \frac{\omega(E)+ \omega(C)}{2} =\omega(C) +\frac12 (\omega(E)-\omega(C)), \omega(C) ).$
\item Repeating the above two steps, one gets a symplectic form compatible with $J$ with the areas of the classes $E, C$ arbitrarily close to $( \frac{1}{4}( {\omega(E)- \omega(C)})+\omega(C), \omega(C) ).$
\item $\cdots$
\item Get a symplectic form $\omega^{'}_{N+1}$ compatible with $J$ with area of the curves $E, C$ arbitrarily close to $((\frac{1}{2})^N( {\omega(E)- \omega(C)})+\omega(C), \omega(C) ).$
\item $\cdots$
\end{itemize}
This finishes the proof.
\end{proof}
\begin{rmk}
Let us use the following example to make the above process more explicit: $E=E_1$, $C=E_2,$ and $D=E_1-E_2.$
\begin{itemize}
\item Step 1: inflate along $D = (E-C)$; one has $\omega_1 \sim ( \mu, 1, (c_1+c_2)/2, (c_1+c_2)/2, \cdots ).$
\item Step 1': inflate along $C$; one has $\omega^{'}_{1}\sim( \mu, 1, (c_1+c_2)/2, c_2, \cdots ).$
\item Repeating the above two steps, one has $\omega^{'}_{2}\sim( \mu, 1, c_1/4+3c_2/4, c_2, \cdots ).$
\item $\cdots$
\item $\omega^{'}_{K}\sim ( \mu, 1, c_1/(2^K)+c_2 (2^K-1)/(2^K), c_2, \cdots ).$
\item $\cdots$
\end{itemize}
\end{rmk}
Up to now, we have not used the condition $\mu >n$. However, in the event that there is some embedded section type curve of index $-2$, the following Lemma guarantees optimal bounds for the corresponding negative inflation along this curve. Essentially, it says that any nonempty stratum of $\mathcal{A}^{2}_{u, B}$ stays stable under inflation until reaching the optimal chamber.
\begin{lma} \label{type2}
Any curve in class $B+kF-\sum E_i$ with index $-2$ has a positive area when $\mu >n$.
\end{lma}
\begin{proof}
The class $B+kF-\sum E_i$ has codimension $-4k +2g-2 +2l =2$, which means index $4k +2 -(2g +2l) =-2$, where $l$ is the number of $E_i$'s appearing in $B+kF-\sum E_i$. Since $l \ge 0$ and $g \ge 1$, we get $4k \ge -2$, and hence $k\ge 0$ because $k$ is an integer. The curve class $B+kF-\sum E_i$ then has symplectic area $\mu+k-\sum c_i>0$ whenever $\mu>n$, since $k\ge 0$, each $c_i<1$, and $l\le n.$
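For instance, when $g=1$ and $l=1$ the only solution is $k=0$: the class $B-E_1$ has index $-2$, and its area $\mu-c_1$ is positive whenever $\mu>n\ge 1$.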
\end{proof}
\begin{cor}\label{strconst}
For any $u$ with $\mu >\max\{g, n\}$,
\begin{enumerate}
\item $\mathcal{A}_{u}^{top}$ is constant for any such $u$, within and across chambers.
\item $\mathcal{A}_{u}^2 $ is constant for any such $u$, within and across chambers.
\item $\mathcal{A}_{u}^{high}\subset \mathcal{A}_{u'}^{high}$, where $u=(\mu,1,c), u'=(\mu+{\epsilon},1,c)$, for all $\mu >1, {\epsilon} > 0$.
\end{enumerate}
\end{cor}
\begin{proof}
This is a straightforward combination of Proposition \ref{inflation}, Lemma \ref{cod2inf} and Lemma \ref{type2}. Note that the condition on $\mu$ in the current statement comes from $\mu>g$ in Proposition \ref{inflation} and $\mu >n$ in Lemma \ref{type2}.
\end{proof}
\begin{prp} [=Theorem \ref{stab01intro} ]\label{stab01}
Suppose $\mu, \mu'>\max\{g, n\}$ for $u= (\mu, 1, c_1,\cdots, c_n)$, $u'= (\mu', 1, c'_1,\cdots, c'_n)$; then the groups $\pi_0$ and $\pi_1$ of $G^g_{u,n} $ and $ G^g_{u',n} $ are the same.
\end{prp}
\begin{proof}
First let us assume that $u$ and $u'$ are on a horizontal line: $u=(\mu,1,c)$, $u'=(\mu+{\epsilon},1,c)$, with $\mu >1, {\epsilon} > 0$.
Proposition \ref{inflation} allows us to write the right-hand side inclusions in the following diagrams:
\begin{equation}\label{hcomm}
\begin{array}{ccccccc}
{(a)} & & G^g_{u,n} &\to & {\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2}) & \to & \mathcal{A}_{u}\\
& & \downarrow & & \;\downarrow = & & \downarrow\\
& & G^g_{u',n} &\to & {\rm Diff}_0(M_g \# n\overline{ \mathbb{C} P^2}) & \to & \mathcal{A}_{u'},\\
& & & & & & \\
{ (b)} & & & G^g_{u,n} \rightarrow G^g_{u',n} & & \\
& & & \searrow \downarrow & & \\
& & & G_{u''}& &
\end{array}
\end{equation}
Along with Corollary \ref{strconst} this is sufficient to get the results along horizontal lines. Incidentally, these diagrams also allow us to state that the homotopy colimit exists on horizontal lines as $\epsilon \rightarrow \infty$.
The same corollary implies that the open strata and the codimension $2 $ strata are constant {\it within the chambers}, thus the result can be established in full generality for all $u$ within the conditions of the theorem.
\end{proof}
Now let's recall the following result from \cite[Proposition 5.1]{McD08}:
\begin{thm}\label{csympk}
Let $(M,\omega)$ be a symplectic manifold and denote by $M_n$ its $n$-fold blow up. Then:\smallskip
{\rm (i)} There is a homomorphism $f_*^n:
\bigl(\mathbb{Z} \oplus \pi_2M\bigr)^n \to \pi_2\bigl(B{\rm Diff}(M_n)\bigr)$ whose kernel is
contained in the torsion subgroup of $\bigl(\pi_2M\bigr)^n$.\smallskip
{\rm (ii)} there is ${\epsilon}_0>0$ such that
the elements in $f_*^n\bigl((\pi_2M)^n\bigr)$ can all be realised in
$BSymp(M_n, \omega_{\epsilon})$ whenever the blow up parameter ${\epsilon} = ({\epsilon}_1,\dots,{\epsilon}_n)$ satisfies ${\epsilon}_i\le {\epsilon}_0$ for all $i$.
\end{thm}
Applying Theorem \ref{csympk} to our case, where $M_g \# n\overline{ \mathbb{C} P^2}$ is the $n$-fold blow up of the ruled surface $M_g$:
the torsion-free part of $\pi_2(M_g)$ is $\mathbb{Z}$, and hence by (i), $f_*^n\bigl((\pi_2 M_g)^n\bigr)$ has torsion-free part at least $\mathbb{Z}^n$. By (ii), those elements can all be realized in $\pi_1G^g_{u,n}$ whenever the vector $u$ comes from small enough blow up sizes $c_i$.
\subsection{Stability of equal size 1/2}\label{1/2}
We can prove the stronger version of the stability result for equal size 1/2. Here we first describe the chamber structure and then provide a proof.
Notice that the set of such classes $(\mu, 1, \frac12,\cdots, \frac12)$, $\mu>g$, is a line. The positive codimension strata are given exclusively by embedded curves in classes of the type $B-kF$ or $B-kF-\sum E_i.$ This is because each $E_i$ and $F-E_j$ has area $\frac12,$ so there is no degeneration of exceptional curves. This means that the entries $\mathcal{A}^2_{u, C, D}$ and $\mathcal{A}_u^{high}$ in the first row of Table \ref{partable} are actually empty. Hence on the line $(\mu, 1, \frac12,\cdots, \frac12)$, the chambers are bounded by those values of $\mu$, integer or half-integer points, at which the curves $B-kF$ or $B-kF-\sum E_i$ change.
Hence the precise statement is the following:
\begin{thm}[=Theorem \ref{stabintro}]
The homotopy type of $G^g_{\mu,n}$ is constant for $\frac{k}{2} < \mu \le \frac{k+1}{2},$ for
any integer $k \ge \max\{2g, n\}$. Moreover, as $\mu$ passes the half integer $\frac{k+1}{2}$, the groups
$\pi_i, i = 0,\cdots,2k + 2g - 1,$ do not change.
\end{thm}
\begin{proof}
The proof is basically the same as that of Proposition \ref{inflation}. The only thing added here is the inflation along the embedded curves in classes $B-kF-\sum E_i,$ which provides inclusions of the higher codimensional strata $\mathcal{A}_{u', B-kF-\sum E_i} \subset \mathcal{A}_{u, B-kF-\sum E_i} $ for $u=(\mu,1,c), u'=(\mu+{\epsilon},1,c)$. Along with Corollary \ref{strconst}, which provides the opposite inclusions, this gives all the conditions needed to implement the argument of Proposition \ref{stab01}, yielding the result.
\end{proof}
\begin{cor}
There are new elements in $\pi_1(G^g_{u,n})$ for a symplectic form without any Hamiltonian circle actions.
\end{cor}
\begin{proof}
Notice that by Theorem \ref{csympk}, on this line of equal size $1/2$, the rank of $\pi_1(Symp(M,\omega))$ is positive, indeed at least $n$ for $n$-fold blowups. On the other hand, by \cite[Corollary 3.6]{HK15}, if the number of blowups is large ($n \ge \mu/2$), there are points on this line that do not admit Hamiltonian circle actions.
\end{proof}
\section{Singular foliations and topological colimit for equal size blowups}\label{s:out}
As mentioned in the proof of the stability Theorem \ref{stab01} the homotopy colimit
exists along each horizontal line fixing the blowup size.
In what follows we are going to investigate that homotopy colimit in the special case when the horizontal line is along equal size blow-ups of increasing base area.
\subsection{The equal size $\frac12$ blowup}\label{ss:bi}
In this special case, we are going to use the relationship between the space of singular foliations and the space of almost complex structures to establish a smooth diffeomorphism model for this homotopy colimit denoted in this case by $G^g_{\infty,n}$. We will show that this smooth diffeomorphism model is disconnected and hence conclude that $G^g_{\infty,n}$ is disconnected.
Lemma \ref{incup} and the homotopy commutative diagrams \eqref{hcomm} show that the homotopy colimit exists. Additionally, from Theorem \ref{stabintro},
we have that as $\mu\to \infty$, $\pi_iG^g_{\mu,n}= \pi_iG^g_{\infty,n}$ for $i \leq 2k + 2g - 1.$ We also introduce the space $\mathcal{A}_{\infty}$ of almost complex structures as a colimit of the $\mathcal{A}_{u}$'s as $\mu \to \infty,$ namely $\mathcal{A}_{\infty} := \underset{\mu>\max\{g,n\}}{\cup}{\mathcal{A}_u}.$
In what follows, we will introduce a space of foliations that can be used to correctly identify a smooth model for our colimit.
We are going to use the following singular foliation in the smooth (topological) category:
\begin{dfn}\label{singfol}
An {\bf $n$-singular foliation} by $S^2$ of $M_g \# n\overline{ \mathbb{C} P^2}$ is defined as a foliation with smooth embedded spherical leaves in the class $F=[pt \times S^2]$
and $n$ nodal leaves, each with two embedded spherical components in the classes
$E_i$ and $F-E_i$, $1\le i \le n$, where the $E_i$'s are the exceptional classes. Also, we require that the complement of the singular leaves is a smooth foliation over an $n$-punctured genus $g$ surface $Y$. We denote by ${\rm Fol}$ the space of such $n$-singular foliations.
\end{dfn}
As argued in Section \ref{1/2}, in the equal size $\frac12$ blowup case any exceptional curve has the minimal area, and the exceptional curves can never degenerate. For each $J$ taming a symplectic form in a class $u=(\mu, 1, \frac 12, \cdots, \frac 12)$ there exists a singular foliation as in Definition \ref{singfol} whose leaves are actually $J$-holomorphic.
Let ${\mathcal F}_{std}$ be the standard blow-up foliation by $J_{std}$-spherical leaves.
Note that if we blow down the complex structure, we obtain the split complex structure on $M_g $ and the induced foliation is the split foliation by the spheres.
Following verbatim the argument in \cite{BL1}, we have the following Lemma on the space of foliations and the transitivity of the action, in the presence of only finitely many nodal fibers.
\begin{lma}
Let ${\rm Fol}_0$ be the connected component of ${\rm Fol}$ that contains $\mathcal{F}_{std}$. Then the colimit $\mathcal{A}_\infty $ is weakly homotopic to ${\rm Fol}_0$.
\end{lma}
\begin{proof}
Observe that there is a map $\mathcal{A}_\infty \to {\rm Fol}_0$ given by taking
$J$ to the singular foliation of $M_g \# n \overline{ \mathbb{C} P^2}$ by $J$-spheres in the class $F$ or in the nodal classes $E_i$, $F-E_i$. Standard arguments
in \cite{MS17} Ch 2.5 show that this map is a
fibration with contractible fibers. Hence it is a homotopy equivalence.
\end{proof}
\begin{lma}\label{tranfol}
There is a transitive action of ${\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2})$ on ${\rm Fol}_0$.
\end{lma}
\begin{proof}
Since $S^2$ is compact and
simply connected, each generic leaf of this foliation has trivial holonomy and
hence has a neighborhood that is diffeomorphic to the product $D^2\times
S^2$ equipped with the trivial foliation with leaves $pt\times S^2$.
Since our foliation has smoothly embedded leaves and only finitely many nodal leaves, we can find a 2-form transverse to each leaf. Moreover, the Poincar\'e dual of such a 2-form is a smooth section, not passing through the singular points $p_i$.
Now let us take an arbitrary singular foliation $\mathcal{F}^{'} \in {\rm Fol}_0$ and denote the corresponding smooth section by $\Sigma^{'}.$ We will prove that some element of ${\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2})$ takes the pair $(\mathcal{F}^{'}, \Sigma^{'})$ to $(\mathcal{F}_{std}, \Sigma_{std})$, where $\Sigma_{std}$ is the smooth section (which is indeed $J_{std}$-holomorphic).
Since $\mathcal{F}'$ and $\mathcal{F}_{std}$ are in the same path connected component, there is a $\phi \in {\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2})$ sending $\Sigma^{'}$ to $ \Sigma_{std},$ such that the singular leaves of $\mathcal{F}^{'}$ go to the singular leaves of $\mathcal{F}_{std}$ with the singular points identified. Now let us fix a finite covering $\{D_i, 1\leq i \leq n\}$ of $\Sigma^{'},$ such that the local foliations over the $D_i$'s cover the manifold $M_g \# n\overline{ \mathbb{C} P^2}.$
Then we use a partition of unity for the covering $\{D_i, 1\leq i \leq n\}$ of $\Sigma^{'}$: for each local foliation we apply a $\phi_i$ such that the foliation $\mathcal{F}^{'}$, under $\phi\circ \phi_1 \circ \cdots \circ \phi_n$, agrees with the foliation $\mathcal{F}_{std}.$
Now we have the transitive action of ${\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2})$ on ${\rm Fol}_0$. Notice that this action does not necessarily preserve the regular leaves. However, it must leave each of the singular leaves invariant and preserve the singular points and the intersections of the singular fibers with the base, since any diffeomorphism isotopic to the identity acts trivially on homology.
\end{proof}
Hence there is an orbit-stabilizer fibration from the transitive action, where the isotropy group is described in the following definition:
\begin{dfn}\label{fibergp}
${\mathcal D}^g_n$ consists of all elements in the identity component of the diffeomorphisms group which fit into the commutative diagram $$
\begin{array}{ccc} M_g \# n \overline{ \mathbb{C} P^2}& \stackrel \phi{\to} &M_g\# n \overline{ \mathbb{C} P^2}\\
\downarrow & &\downarrow\\
M_g, \{p_1,\cdots, p_n\}, F_{p_i}& \stackrel {\phi'} {\to} & M_g, \{p_1,\cdots, p_n\}, F_{p_i}\\
\downarrow & &\downarrow\\
(\Sigma_g, x_1, \cdots, x_n) & \stackrel {\phi''}{\to} & (\Sigma_g,x_1, \cdots, x_n).
\end{array}
$$
\end{dfn}
Here $p_i$ is the intersection point $E_i\cap (F-E_i)$ of the $i$-th singular fiber, and the first downward arrow means that we contract the $E_i$ components. We abuse notation and still write $p_i$ for the point in $M_g$ after contracting the curve $E_i$.
On the second level, $\phi'$ is a diffeomorphism of $M_g$ keeping the points $p_i$ fixed, fixing each fiber $F_{p_i}$ through $p_i$ as a set, and preserving the other leaves of the standard foliation.
The base $\Sigma_g$ is the holomorphic curve $B_{std}$ w.r.t.\ the standard complex structure, and the downward map is obtained by first blowing down the exceptional spheres and then projecting down to the base curve.
From the above definition, it is clear that the elements of $
{\mathcal D}^g_n \subset {\rm Diff}_0(M_g\# n\overline{ \mathbb{C} P^2})$ preserve the leaves setwise in ${\rm Fol}_0$, and hence the orbit-stabilizer fibration associated to Lemma \ref{tranfol} is
\begin{equation}\label{folpre}
{\mathcal D}^g_n \to
{\rm Diff}_0(M_g \# n\overline{ \mathbb{C} P^2}) \to {\rm Fol}_0.
\end{equation}
\begin{prp}\label{tlimit}
Take $M_g \# n \overline{ \mathbb{C} P^2}$ with a form in the class $(\mu,1, \frac12,\cdots,\frac12)$. Then let $\mu$ go to $\infty$.
\begin{enumerate}
\item ${\mathcal D}^g_n$ is weakly homotopic to $G^g_{\infty,n}$.
\item The group ${\mathcal D}^g_n$ is disconnected when $g\ge 2$.
\item For $\mu\to \infty$ we have $\pi_i(G^g_{u,n})= \pi_i(G^g _{\infty,n})$ for $i \leq 2k+2g-1$; hence the groups $G^g_{u,n}$ are disconnected for $g \geq 2$.
\end{enumerate}
\end{prp}
\begin{proof}
For statement (1), note that the fibration \eqref{folpre} fits into the
commutative diagram:
$$
\begin{array}{ccc}
{\rm Diff}_0(M_g \#n \overline{ \mathbb{C} P^2}) & \to & \mathcal{A}_\infty\\
\downarrow & & \downarrow\\
{\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2}) & \to & {\rm Fol}_0,
\end{array}
$$
where the upper map is given as before by the action $\phi\mapsto \phi_*(J_{{\rm std}})$. Hence there is an induced homotopy equivalence from the homotopy fiber $G^g_{\infty,n}$ of the top row to the fiber
${\mathcal D}^g_n$ of the second.
To prove statement (2), first note that we have the following fibration
\[ {\rm Diff}(\Sigma_g, x_1,\cdots,x_n) \longrightarrow {\rm Diff}(\Sigma_g) \longrightarrow Conf(\Sigma_g,n),\]
where $Conf(\Sigma_g,n)$ is the configuration space of ordered $n$-tuples of points on $\Sigma_g$.
Taking the right portion of the long exact sequence, we have:
\[ 1 \longrightarrow \pi_1(Conf(\Sigma_g,n)) \longrightarrow \pi_0[{\rm Diff}(\Sigma_g, x_1,\cdots,x_n)] \longrightarrow \pi_0[{\rm Diff}( \Sigma_g)]\longrightarrow 1.\]
Then, restricting to ${\rm Diff}_0( \Sigma_g)$, we obtain an element of ${\rm Diff}(\Sigma_g, x_1,\cdots,x_n)$ that lies in the identity component of ${\rm Diff}(\Sigma_g)$ but not in the identity component of ${\rm Diff}(\Sigma_g, x_1,\cdots,x_n)$, where $x_1,\cdots,x_n$ are the points we will blow up.
It can be constructed explicitly in the following way: choose a path $\alpha(t)\subset {\rm Diff}(\Sigma_g)$, $t \in [0, 2\pi]$, pushing $x_1,\cdots,x_n \in \Sigma_g$ along a homologically non-trivial loop in $Conf(\Sigma_g,n).$
Now $\alpha(0)=id$ and $\alpha(2\pi) \in {\rm Diff}(\Sigma_g, x_1,\cdots,x_n) \cap {\rm Diff}_0( \Sigma_g)$; note that $\alpha(2\pi)$ is the desired element.
Next, we lift the path $\alpha(t)$ to dimension 4. To do that, first fix $M_g$, $\Sigma_g$ and choose $J_{split}$. There is a natural family $\alpha(t)\times id \subset {\rm Diff}_0(M_g),$ which takes leaves to leaves, acting trivially in the $S^2$ direction. For each $t$, we have a product complex structure on $M_g$ by pulling back $J_{split}$ by $\alpha(t)\times id$. Recall that the $p_i$'s lie in the fibers over the $x_i$'s under the projection $M_g\to \Sigma_g$. We are going to obtain a family of complex structures by blowing up at the points $(\alpha(t) \times id )(p_i) \in M_g$, $1\le i \le n.$ This gives us a loop of complex structures $J_t$ on $M_g \# n \overline{ \mathbb{C} P^2}$ with $J_0=J_{{\rm std}}$. Note that by \cite{Zhang16}, each $J_t$ gives rise to a singular foliation $\mathcal{F}_t$ as in Definition \ref{singfol}. Geometrically, $\mathcal{F}_t$ is a loop in ${\rm Fol}_0$ starting with the standard singular foliation $\mathcal{F}_{std},$
pushing $x_1,\cdots,x_n \in \Sigma_g$ along a homologically non-trivial loop in $Conf(\Sigma_g,n)$ as $t$ ranges over $[0,2\pi]$.
By the transitivity Lemma \ref{tranfol}, we can use a path $\phi_t$ in ${\rm Diff}_0(M_g \# n\overline{ \mathbb{C} P^2})$ to push $\mathcal{F}_0,$ so that $\phi_t\circ \mathcal{F}_0= \mathcal{F}_t.$ Note that $\phi_t$ in ${\rm Diff}_0(M_g \# n \overline{ \mathbb{C} P^2})$ pushes the standard foliation along this loop.
Now we focus on the diffeomorphism $\phi_{2\pi}$. First note that $\phi_{2\pi}$ preserves the singular foliation $\mathcal{F}_{std},$ since the foliation $\mathcal{F}_{2\pi} =\mathcal{F}_0=\mathcal{F}_{std}.$ Hence $\phi_{2\pi} \in \mathcal{D}^g_n$. Also, the above paragraph gives an explicit isotopy of $\phi_{2\pi}$ to the identity map in ${\rm Diff}_0(M_g \# n\overline{ \mathbb{C} P^2})$, through the path $\phi_t.$
Finally, we show that $\phi_{2\pi}$ is not isotopic to the identity in $\mathcal{D}^g_n$. Suppose there were such an isotopy; then by path lifting for the fibration \eqref{folpre}, we would have a leaf-preserving element in ${\rm Diff}_0(M_g \# n\overline{ \mathbb{C} P^2})$ isotopic to the identity through a path in $\mathcal{D}^g_n$. Furthermore, this path pushes the given foliation along the lifting of the loop $\mathcal{F}_t, t\in[0,2\pi]$. Now apply the diagram of Definition \ref{fibergp}: we would obtain an isotopy of $(\Sigma_g, x_1,\ldots,x_n)$ connecting the time-$2\pi$ diffeomorphism to the identity. This contradicts the Birman exact sequence. Hence statement (2) holds.
Statement (3) follows from the stability Theorem \ref{stab01}.
\end{proof}
\begin{rmk}
When $g=0$, one can blow up $S^2\times S^2$ at $n$ points with equal sizes. It is shown in \cite{LLW15} that when $n\le 3$, $ G^0_{u, n}$ is connected for all $u$. When $n>3$, $\pi_0 G^0_{u, n}$ (with the blowups equal and $1/2$ of the size of the fiber) is a braid group of $n$ strands on the sphere (cf. \cite{LLW3}). This follows the same pattern as ${\rm Diff}(S^2,n)$, the diffeomorphism group of $S^2$ fixing $n$ points. Their elements of $\pi_0 G^0_{u,n} $ are produced constructively using ball swapping techniques. As pointed out in Example 2.3 of \cite{LWnote}, there is a way to construct ball swappings of a ball along a non-trivial loop in $\Sigma_g$. It is an interesting question to explore whether the construction here is indeed a ball swapping map. An initial question in this direction is whether (using either construction) one can see if $\pi_0 \mathcal{D}^g_n$ is a braid group of $n$ strands on $\Sigma_g$.
\end{rmk}
\printbibliography
\end{document}
Type Ia supernova (SN~Ia) explosions play a critical role in
regulating chemical evolution through the cycling of matter in
galaxies. As supernovae (SNe) are the primary contributors of heavy
elements in the universe, observed variations in their rates with
redshift provide a diagnostic of metal enrichment over a cosmological
timeline. The frequency of these events and the processes involved
provide important constraints on theories of stellar evolution.
SNe~Ia are thought to originate from the thermonuclear explosion of
carbon-oxygen white dwarfs that approach the Chandrasekhar mass via
accretion of material from a binary companion \citep[for reviews,
see][]{hn00,how11}. This process can result in a significant ``delay
time'' between star formation and SN explosion, depending on the
nature of the progenitor system \citep{mad98,gre05}. The SN~Ia
volumetric rate (\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}) evolution therefore represents a convolution of
the cosmic star-formation history with a delay-time distribution
(DTD). As such, measuring the global rate of SN~Ia events as a
function of redshift may be useful for constraining possible DTDs and,
ultimately, progenitor models -- the detailed physics of SNe Ia
remains poorly understood, with several possible evolutionary paths
\citep[e.g.][]{bra95,Liv00}.
One complication for rates studies is that many SN surveys at low
redshifts are galaxy-targeted, counting discoveries in a select sample
of galaxies and converting to a volumetric rate by assuming a galaxy
luminosity function. This method can be susceptible to systematic
errors if it preferentially samples the bright end of the galaxy
luminosity function, biasing toward SNe in more massive, or brighter,
galaxies \citep[see, e.g.,][]{sul10}. Since many SN Ia properties are
correlated with their hosts, the recovered rates may then not be
representative of all types of SNe~Ia. A second type of SN survey
involves making repeat observations of pre-defined fields in a
``rolling search'', to find and follow SNe in specific volumes of sky
over a period of time. Such surveys minimize the influence of host
bias, but still suffer from Malmquist bias and other selection
effects. It is reasonably straightforward --- although often
computationally expensive --- to compensate for the observational
biases within rolling searches.
The advent of these wide-field rolling surveys has significantly
enhanced SN statistics at cosmological distances. The Supernova
Legacy Survey (SNLS) in particular has contributed a large sample of
Type Ia SNe out to redshifts of $z\sim1.05$ \citep{guy10}. Although
its primary goal is to assemble a sample of SNe~Ia to constrain the
cosmological parameters \citep[e.g.][]{ast06,sul11}, the SNLS is also
ideal for studies of SN rates \citep[][]{nei06,baz09}. The SNLS is a
rolling high-redshift search, with repeat multi-color imaging in four
target fields over five years and as such has consistent and
well-defined survey characteristics, along with significant follow-up
spectroscopy. However, due to the selection effects (including
incomplete spectroscopic follow-up) and other systematic errors, such
as contamination and photometric redshift errors, present in any SN
survey, a detailed understanding of internal biases is necessary for
accurate rate calculations.
\begin{figure}
\plotone{f01.eps}
\caption{Volumetric SN~Ia rates as a function of redshift from various
previous studies, taken from
\citet{li11b,dil10,rod10,dah08,gra11,nei06}. Additional individual
rates ($+$) include, in order of increasing redshift:
\citet{bla04,bot08,kuz08}. Values are plotted as published, with the
exception of a correction to the cosmology used in this paper. As a
comparison, the lines shows the evolution of various model cosmic
star-formation histories from \citet[][piece-wise fit is the
short-dashed line, the \citet{col01} form is the long-dashed
line]{li08} and \citet[][dot-dashed line]{yuk08}.}
\label{fig:rates_lit}
\end{figure}
In the past decade, volumetric SN~Ia rates have been measured to
varying degrees of accuracy out to redshifts of $z\sim 1.6$
(Fig.~\ref{fig:rates_lit}). \citet{cap99} compute the SN~Ia rate in
the local universe ($z\sim 0.01$) from a combined visual and
photographic sample of $\sim 10^4$ galaxies, yet their ability to
distinguish core-collapse SNe from Type Ia SNe was severely limited.
More recent work by \citet{li11b} using $\sim270$ SNe Ia from the Lick
Observatory Supernova Search \citep[LOSS;][]{lea11} has made
significant improvements in the statistics over previous studies on
local SNe~Ia. The rates published by \citet{dil10} include data from
516 SNe~Ia at redshifts $z<0.3$ from the SDSS-II Supernova Survey
(SDSS-SN), with roughly half of these confirmed through spectroscopy.
At intermediate redshifts, rate measurements are provided by
\citet[][38 SNe from the Supernova Cosmology Project in the range
$0.25\leq z \leq 0.85$]{pai02}, \citet[][8 SNe within
$0.3<z<1.2$]{ton03}, and \citet[][$>100$ SNe from the IfA Deep Survey,
23 of which have spectra]{rod10}. \citet{nei06} used a spectroscopic
sample of 58 SNe Ia from the first two years of SNLS to measure a
cumulative volumetric rate in the redshift range $0.2<z<0.6$.
SN~Ia rates out to $z\sim1.6$ are presented by \citet{dah04} using 25
SNe Ia (19 with spectra) from \textit{Hubble Space Telescope (HST)}
observations of the Great Observatories Origins Deep Survey (GOODS)
fields. These data were reanalyzed by \citet{kuz08} using a Bayesian
identification algorithm, and the \textit{HST} sample updated by
\citet{dah08} extending the 2004 sample to 56 SNe. Ground-based
measurements from the Subaru Deep Field have also been made by
\citet{poz07} using 22 SNe~Ia, updated by \citet{gra11} with 150
events.
The general trend of Fig.~\ref{fig:rates_lit} reveals that the rates
typically increase from $z=0$ to $z=1$. There is a wide spread in the
existing rate measurements, particularly in the range $0.4 < z < 0.8$.
At higher redshifts, data from the GOODS collaboration provide some
apparent evidence for a turnover in the SN~Ia rates. In particular,
\citet{dah04,dah08} report a decline in SN~Ia rates beyond $z\sim
0.8$. If present, this decline might point to a larger characteristic
delay time between star formation and SN explosion \citep[see
also][]{str04}. However, another independent analysis of the
\textit{HST} GOODS data finds rates that are offset, with measurements
by \citet{kuz08} consistently lower than those of \citet{dah04,dah08}.
\citet{kuz08} argue that their results do not distinguish between a
flat or peaked rate evolution. Ground-based data in this range
\citep{gra11}, while consistent with the \textit{HST}-based results,
show no obvious evidence for a decline above $z\sim1$.
In this paper we use four years of data from the SNLS sample to
investigate the evolution of SN~Ia rates with redshift out to $z\sim
1.1$. The sample presented comprises $\sim 700$
photometrically identified SNe~Ia from SNLS detected with the
real-time analysis pipeline \citep{per10}. One third of these have
been typed spectroscopically, and one half of the $\sim700$ have a
spectroscopic redshift (sometimes from ancillary redshift surveys in
the SNLS fields). No other data set currently provides such a
well-observed and homogeneous sample over this range in redshift.
Additionally, rigorous computation of the survey detection
efficiencies and enhancements in photometric classification techniques
are incorporated into the new SNLS rate measurements. Monte Carlo
simulations of artificial SNe~Ia with a range of intrinsic parameters
are performed on all of the detection images used in the SNLS
real-time discovery \citep{per10}; these provide an exhaustive
collection of recovery statistics, thereby helping to minimize the
effects of systematic errors in the rate measurements.
The SNLS SNe~Ia can be used to examine the relationship between the
\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\ and redshift, given some model of the SN Ia DTD. The size of the
SNLS sample also permits a division of the SNe Ia by light-curve width
\citep[in particular the ``stretch''; see][]{per97}, allowing a search
for differences in the volumetric rate evolution expected by any
changing demographic in the SN Ia population. Brighter, more
slowly-declining (i.e., higher stretch) SNe~Ia are more frequently
found in star-forming spirals, whereas fainter, faster-declining
SNe~Ia tend to occur in older stellar populations with little or no
star formation \citep{ham95,sul06b}. If the delay time for the
formation of the lowest-stretch SNe~Ia is sufficiently long (i.e.,
their progenitors are low-mass stars $\sim 10$ Gyr old), these SNe~Ia
will not occur at high redshifts \citep{how01}. The behavior of the
high-$z$ rates can reveal the properties of the progenitor systems.
The organization of this paper is as follows: An overview of the rate
calculation is provided in \S\ref{sec:ratecalc}. The SNLS data set,
along with the light-curve fitting and selection cuts used to define
the photometric sample, is introduced in \S\ref{sec:SNLS}. SN~Ia
detection efficiencies and the rate measurements are presented in
\S\ref{sec:effs} and \S\ref{sec:SNLSrates}, respectively. Several
models of the SN Ia DTD are then fit to the rate evolution in
\S\ref{sec:dtds}, and the results discussed. Finally, the stretch
dependence of the rate evolution is investigated in
\S\ref{sec:stretchdep}. We adopt a flat cosmology with
($\Omega_M$,$\Omega_\Lambda$)=(0.27,0.73) and a Hubble constant of
$H_0=70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$.
\section{The rate calculation}
\label{sec:ratecalc}
The volumetric SN Ia rate in a redshift ($z$) bin $z_1 < z < z_2$ is
calculated by summing the inverse of the detection efficiencies,
$\varepsilon_i$, for each of the $N$ SNe~Ia in that bin, and dividing
by the co-moving volume ($V$) appropriate for that bin
\begin{equation}
\label{eq:rate}
r_{\mathrm{v}}(z)=\frac{1}{V}\sum_{i=1}^N
\frac{(1+z_i)}{\varepsilon_i(z_i,s_i,c_i)\,\Delta T_i}.
\end{equation}
The factor $(1+z_i)$ corrects for time dilation (i.e., it converts to a
rest-frame rate), $\Delta T_i$ is the effective search duration in
years, and the volume $V$ is given by
\begin{equation}
\label{eq:volume}
V=\frac{4\pi}{3} \frac{\Theta}{41253}\left[
D_C^3(z_2)-D_C^3(z_1)\right] \mathrm{Mpc}^3
\end{equation}
where
\begin{displaymath}
D_C(z)=\frac{c}{H_0}\int_{0}^{z}\frac{dz^\prime}{\sqrt{\Omega_M(1+z^\prime)^3+\Omega_\Lambda}}
\end{displaymath}
is the co-moving distance to redshift $z$, $\Theta$ is the area of a
search field in deg$^2$, $c$ is the speed of light, and $H_0$,
$\Omega_M$, and $\Omega_\Lambda$ are the cosmological parameters of the
assumed flat universe.
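For concreteness, this volume calculation can be sketched in a few
lines of Python. The following is a minimal illustration of
eqn.~(\ref{eq:volume}), not the survey code itself; the cosmological
constants are those adopted above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.99792458e5              # speed of light [km/s]
H0, OM, OL = 70.0, 0.27, 0.73      # adopted flat cosmology

def comoving_distance(z):
    """D_C(z) in Mpc for the adopted flat cosmology."""
    f = lambda zp: 1.0 / np.sqrt(OM * (1.0 + zp)**3 + OL)
    integral, _ = quad(f, 0.0, z)
    return (C_KM_S / H0) * integral

def bin_volume(z1, z2, area_deg2):
    """Co-moving volume [Mpc^3] in z1 < z < z2 for a field
    covering area_deg2 square degrees."""
    shell = comoving_distance(z2)**3 - comoving_distance(z1)**3
    return (4.0 * np.pi / 3.0) * (area_deg2 / 41253.0) * shell

# e.g., all four deep fields (3.56 deg^2) in the first bin:
print(bin_volume(0.1, 0.2, 3.56))  # ~17.3e4 Mpc^3, matching
                                   # the tabulated survey volume
\end{verbatim}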
$\varepsilon_i$ is a recovery statistic which describes how each SN Ia
event should be weighted relative to the whole population;
$1-\varepsilon_i$ gives the fraction of similar SNe Ia that exploded
during the search interval but that were not detected, for example due
to sampling or search inefficiencies. $\varepsilon_i$ is a function of
the SN stretch $s$ (a dimensionless number that expands or contracts a
template light curve, defined such that $s=1$ for the template, to
match a given SN event), the SN color $c$ (defined as the rest-frame
$B-V$ color at the time of maximum light in the $B$-band), and the SN
redshift $z$.
The $\varepsilon_i$ are evaluated separately for each year and field
of the survey, and are further multiplied by the sampling time
available for finding each object ($\Delta T_i$) to convert to a ``per
year'' rate. Typically $\Delta T_i$ is around 5 months for the SNLS,
though this depends on the field and year of the survey. Thus, in practice,
eqn.~(\ref{eq:rate}) is evaluated for each search field and year that
the survey operates.
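In code form, the corresponding per-bin rate sum of
eqn.~(\ref{eq:rate}) is then straightforward; a minimal sketch with
illustrative data structures:
\begin{verbatim}
def volumetric_rate(events, volume_mpc3):
    """The rate sum above: each event carries its redshift z,
    detection efficiency eff (epsilon_i), and effective search
    duration dT_yr in years; returns SNe yr^-1 Mpc^-3."""
    total = sum((1.0 + ev['z']) / (ev['eff'] * ev['dT_yr'])
                for ev in events)
    return total / volume_mpc3
\end{verbatim}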
This ``efficiency'' method is particularly suited for use with
Monte-Carlo simulations of a large, well-controlled survey such as
SNLS. Its disadvantage is that it is not straightforward to correct
for the likely presence of SNe that are not represented (in $z/s/c$
parameter space) among the $N$ in eqn.~(\ref{eq:rate}) (for example,
very faint or very red SNe Ia) without resorting to assuming a
luminosity function to give the relative fractions of SNe with
different properties. In particular, we are neither sensitive to, nor
do we correct for, spectroscopically peculiar SNe Ia in the SN2002cx
class \citep[e.g.,][]{li03}, and similar events such as SN2008ha
\citep[e.g.,][]{fol08}, super-Chandrasekhar events
\citep[e.g.,][]{how06}, and other extremely rare oddballs
\citep[e.g.,][]{kri11}. We also exclude sub-luminous SNe Ia (here
defined as $s<0.7$, a definition that would include SN1991bg-like
events) but note that these are studied in considerable detail for the
SNLS sample in our companion paper, \citet{gon11}. Thus, we are
presenting a measurement of the rates of ``normal'', low to moderate
extinction SNe Ia (explicitly, $c<0.6$), restricting ourselves to the
bulk of the SN Ia population that we can accurately model. We allow
for these incompletenesses when comparing to other measurements of the
SN Ia rate in \S\ref{sec:dtds}, which do include some of these classes
of SNe Ia.
The photometric sample begins with the set of all possible detections,
to which we apply a series of conservative cuts to remove interlopers.
The SNLS sample and the culling process are described next in
\S\ref{sec:SNLS}. To each resulting SN~Ia must then be applied the
corresponding $\varepsilon_i$; these are calculated using a detailed
set of Monte Carlo simulations on the SNLS images, a procedure
described in \S\ref{sec:effs}. The rate results and the measurement
of their associated errors are presented afterwards in
\S\ref{sec:SNLSrates}.
\section{Defining the SNLS sample}
\label{sec:SNLS}
In this section, we describe the SNLS search and the SN Ia sample that
we will subsequently use for our rate analysis. The SNLS is a rolling
SN search that repeatedly targeted four $1\degr\times 1\degr$ fields
(named D1--4) in four filters ($g_Mr_Mi_Mz_M$) using the MegaCam
camera \citep{bou03} on the 3.6~m Canada--France--Hawaii Telescope
(CFHT). The SNLS benefited from a multi-year commitment of observing
time as part of the CFHT Legacy Survey. Queued-service observations
were typically spaced $3-4$ days apart during dark/grey time, yielding
$\sim 5$ epochs on the sky per lunation. Key elements of the SNLS are
its consistent and well-defined survey characteristics, and the
high-quality follow-up spectroscopy from 8m-class telescopes such as
Gemini \citep{how05,bro08,wal11}, the ESO Very Large Telescope
\citep[VLT;][]{bal09}, and Keck \citep{ell08}. Due to the finite
amount of follow-up observing time available, not all of the SN Ia
candidates found by SNLS were allocated for spectroscopic follow-up
\citep[for a description of follow-up prioritization,
see][]{sul06a,per10}. The availability of well-sampled light curves
and color information from the SNLS nonetheless allow us to perform
photometric identification and redshift measurements, even in the
absence of spectroscopic data.
\begin{deluxetable}{crrcc}
\tablewidth{0pt}
\tablecaption{SNLS fields and survey parameters.\label{tab:deepfields}}
\tablehead{
\colhead{Field} & \colhead{RA (J2000)} & \colhead{DEC (J2000)} &
\colhead{Area (sq.\ deg)} & \colhead{N$_{\mathrm{seasons}}$}}
\startdata
D1 & 02:26:00.00 & -04:30:00.0 & 0.8822 & 4\\
D2 & 10:00:28.60 & +02:12:21.0 & 0.9005 & 4\\
D3 & 14:19:29.01 & +52:40:41.0 & 0.8946 & 4\\
D4 & 22:15:31.67 & -17:44:05.7 & 0.8802 & 4\\
\enddata
\end{deluxetable}
To identify the photometric SN~Ia sample, we begin with all variable
object detections in the SNLS real-time
pipeline\footnote{http://legacy.astro.utoronto.ca} \citep{per10}.
Other articles will describe a complementary effort to measure the
rates with a re-analysis of all of the SNLS imaging data
\citep[e.g.,][]{baz11}. We use SNLS data up to and including the
fifth year of D3 observing in June, 2007\footnote{In June 2007, the
$i_M$ filter on MegaCam was damaged during a malfunction of the
filter jukebox. Candidates discovered after this period were
observed with a new $i_M$ filter, requiring new calibrations for
subsequent images, and were thus not included in the present
study.}. The first (2003) season of D3 is omitted in this analysis;
this was a pre-survey phase when the completeness of the SN data
differed significantly from the rest of the survey. The remaining
detections made during four observing seasons for each of the four
deep fields are considered in this analysis. Each period of
observation on a given field is called a ``field-season'', with 16
field-seasons in total (4 fields observed for 4 seasons). The
coordinates of the field centers and other information are provided in
Table~\ref{tab:deepfields}.
We remove all candidates falling within masked regions in the deep
stacks. These regions include areas in and around saturated bright
stars or galaxies, as well as in the lower signal-to-noise edge
regions of the dithered mosaic. The remaining unmasked areas in each
field are listed in Table~\ref{tab:deepfields}, and add up to a total
of $3.56$ square degrees. Galaxy catalogs from these image stacks are
used to determine the placement of test objects in the simulations
described later in \S\ref{sec:effs}. This cut therefore ensures that
the areas being considered in the rates calculation match those used
in the detection efficiency measurements.
We next fit each event with a light curve fitter to determine its
redshift (where no spectroscopic redshift is available) and
photometric parameters ($\S$\ref{sec:light-curve-fitting}). We then
remove SN Ia candidates with insufficient light curve coverage
(\S\ref{sec:culls}). Finally, we use the light curve fits to identify
and remove core-collapse SNe as well as other transients, such as AGN
and variable stars (\S\ref{sec:removing-non-sne}). Each of the
remaining SNe~Ia will then correspond to some fraction of the true
number of events having similar photometric properties but which were
undetected by our survey. This detection efficiency will be determined
from the Monte Carlo simulations presented in \S\ref{sec:effs}.
\subsection{Light-curve fitting}
\label{sec:light-curve-fitting}
We fit template light curves to the SN Ia candidates to identify those
that don't match typical SNe~Ia. Flux measurements are made on all of
the final ``Elixir-preprocessed'' images\footnote{CFHT-LS images
processed with the Elixir pipeline are available from the Canadian
Astronomy Data Centre (CADC):
http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/cadc/} \citep{mag04}.
The Canadian SNLS photometric pipeline \citep{per10} was used to
measure the SN fluxes, using images processed with the accumulated
flat-fields and fringe-maps from each queue run, and aligning
photometrically to the tertiary standard stars of \citet{reg09}.
Two light-curve fitting tools were used to help identify the SNe Ia:
{\tt estimate\_sn} \citep{sul06a} for preliminary identification and
for measuring SN Ia photometric redshifts, and SiFTO \citep{con08} for
final light-curve fitting to measure the stretch and color of each
candidate. The {\tt estimate\_sn} routine is not designed for exact
measurement of SN Ia parameters, and SiFTO does not fit for redshift, so
we require this two-step process to fully characterize the photometric
sample of events.
In {\tt estimate\_sn}, the measured fluxes in $g_Mr_Mi_Mz_M$ are fit
using SN~Ia spectral templates from \citet{hsi07}. The current
version of the code includes the addition of priors in stretch, color,
and $\Delta$mag. These are determined from the distributions measured
for the spectroscopic sample. The photometric redshifts
($z_{\mathrm{SNphot}}$) output from this routine are used for
candidates with no available spectroscopic redshifts from either the
SN or its host. SiFTO is an empirical light-curve analysis method
that works by manipulating a spectral energy distribution (SED) model
rather than operating in integrated filter space \citep{con08}. SiFTO
does not impose a color model to relate the observed filters during
the light-curve fits. The implication of this is that SiFTO cannot
easily be modified to fit for redshift, and thus requires a known
input $z$. Output SN Ia fits are parameterized by stretch, date of
maximum light, and peak flux in each filter.
The stretch measurement provided by SiFTO is largely invariant to
changes in input redshift, as demonstrated in Fig.~\ref{fig:sinvar}.
Even when the input redshift is off by $\Delta z = \pm 0.3$, the
output stretch remains within $\pm 5\%$ of its actual value. Opacity
effects in the SN ejecta are more pronounced in the bluer bands,
causing a more rapid decline \citep{kas07}; light curves measured at
shorter wavelengths are therefore intrinsically narrower. As a
result, if SiFTO is (for example) given an incorrectly small input
redshift, it measures the ``wrong'' time dilation but simultaneously
samples further towards the blue end of the spectrum where the
template is intrinsically narrower. The latter effect partially
negates the first, resulting in the same stretch measurement
regardless of marginal deviations in input redshift. While this is
not similarly true of the derived color or measured fit quality, this
stretch invariance is extremely useful for establishing an initial
constraint in fitting photometric redshifts with {\tt estimate\_sn}.
\begin{figure}
\includegraphics[width=0.8\textwidth]{f02.eps}
\caption{The effects of deliberately shifting the input redshift to
SiFTO. The top plot shows the change in output stretch for confirmed
SNe~Ia from the SNLS sample as the input redshift for the SiFTO fit
is offset from $z_{\mathrm{spec}}-0.3$ to $z_{\mathrm{spec}}+0.3$.
The gray shaded area represents the standard deviation of the
measured points about the median $\Delta s$. The bottom plot shows
the mean output stretch for each SN~Ia as a function of its known
stretch (at zero redshift offset). The error bars for each SN~Ia in
the lower plot represent the full range in stretch values output
from SiFTO as the input redshift is changed.}
\label{fig:sinvar}
\end{figure}
Spectroscopic redshifts are available for 525 (43\%) of the detections
remaining after the observational cuts: 420 from SNLS spectroscopy and
the rest from host-galaxy measurements (including data from DEEP/DEEP2
\citep{dav03}, VVDS \citep{lef05}, zCOSMOS \citep{lil07}, and
additional SNLS VLT MOS observations). The external redshifts are
assigned based on a simple RA/DEC matching between the SNLS and the
redshift catalogues, with a maximum allowed separation of 1.5\arcsec.
For the SNLS MOS work, the host was identified following the
techniques of \citet{sul06b}. The known redshifts are then held fixed
in the light-curve fits. We also considered the use of galaxy
photometric redshifts for the SNLS fields \citep[e.g.][]{Ilb06}.
However, though these catalogs have an impressive precision, they tend
to be incomplete and untested below a certain galaxy magnitude. SN Ia
photometric redshifts do not suffer these problems.
SN photometric redshifts (photo-$z$'s) are calculated for the
remaining objects using a multi-step procedure. Preliminary redshift
estimates are obtained using a first round of {\tt estimate\_sn} fits
without any constraints on the input parameters. The resulting fit
redshifts are then used as input to SiFTO to measure the stretch for
each object. These stretch values are then fixed in a subsequent
round of {\tt estimate\_sn} fits to obtain a more robust measurement
of the SN redshift -- constraining at least one input parameter to
{\tt estimate\_sn} improves the quality of the light-curve fits.
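Schematically, the multi-step procedure can be written as below, where
the fitter calls are hypothetical wrappers (the real {\tt
estimate\_sn} and SiFTO interfaces differ):
\begin{verbatim}
def sn_photoz(light_curve):
    # Hypothetical wrappers standing in for the real fitters.
    # Step 1: unconstrained estimate_sn fit -> preliminary z
    z0 = fit_estimate_sn(light_curve)['z']
    # Step 2: SiFTO at that redshift -> stretch (nearly
    # invariant to errors in the input redshift; see the
    # stretch-invariance discussion above)
    s = fit_sifto(light_curve, z=z0)['stretch']
    # Step 3: refit estimate_sn with the stretch held fixed
    return fit_estimate_sn(light_curve, fix_stretch=s)['z']
\end{verbatim}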
Fig.~\ref{fig:zcomp} shows that the $z_{\mathrm{SNphot}}$ are in good
agreement with the spectroscopic redshifts ($z_{\mathrm{spec}}$) out
to $z\ga 0.7$, with a small systematic offset above that. The median
precision in $z_{\mathrm{SNphot}}$ for the confirmed SNe~Ia is
\begin{displaymath}
\mathrm{MEDIAN}\left(\frac{|\Delta z|}{(1+z_{\mathrm{spec}})}\right)=0.019
\end{displaymath}
with $\sigma_{|\Delta z|/(1+z_{\mathrm{spec}})} = 0.031$. For
comparison, \citet{sul06a} find $|\Delta
z|/(1+z_{\mathrm{spec}})=0.031$ with a smaller sample and real-time
data (and a previous version of the {\tt estimate\_sn} code). In
\S\ref{sec:errsims}, we describe how these $z_{\mathrm{SNphot}}$
errors and the systematic offset are incorporated into the rates
analysis.
\begin{figure}
\plotone{f03.eps}
\caption{The comparison of SN photo-$z$ measurements to spectroscopic
redshifts for candidates in the SNLS sample. Filled (open) circles
represent confirmed (unconfirmed) SNe~Ia with spectroscopic
redshifts from the SNLS, and open squares are candidates with host
spectroscopic redshifts from the literature.}
\label{fig:zcomp}
\end{figure}
\subsection{Light curve coverage cuts}
\label{sec:culls}
Each candidate must pass a set of light curve quality checks to be
included in the photometric sample of SNe~Ia for the rate calculation.
Requiring that the SN light curves are well measured ensures that the
photometric typing technique is more reliable, and that it is
straightforward to correct for the effects of the selection cuts on the rates
themselves. Therefore, candidates with insufficient light-curve
coverage to measure accurate redshift, stretch, and color values from
template fits are removed from the detected sample. We define
observational criteria in terms of the phase, $t$, of the SN in
effective days (d) relative to maximum light in the rest-frame
$B$-band, where
\begin{equation}
t_{\mathrm{eff}} = \frac{t_{\mathrm{obs}}}{s(1+z)},
\end{equation}
and $t_{\mathrm{obs}}$ is the observer-frame phase of the SN. The time of
maximum light is determined using the light curve fitter SiFTO
\citep{con08}, described in the previous section.
Each object is required to have a minimum of each of the following:
\begin{itemize}
\item One observation in each of $i_M$ and $r_M$ between $-15$d and
$+2.5$d for early light-curve coverage and color information;
\item One observation in $g_M$ between $-15$d and $+5$d for additional
color information;
\item One observation in each of $i_M$ and $r_M$ between $-9$d and
$+7$d for coverage near peak;
\item One observation in either $i_M$ or $r_M$ between
$+5$d and $+20$d to constrain the later stages of the light curve.
\end{itemize}
These conditions differ slightly from those used by \citet{nei06} in
their analysis of the first year of SNLS data. Note that no cuts are
made on the signal-to-noise (S/N) on a particular epoch; that is, a
detection of a candidate on each of the observation epochs is not a
requirement. We also neglect the redshift offset seen in
Fig.~\ref{fig:zcomp} in calculating the above rest-frame epochs. We
estimate that this would shift the effective epochs by only one day in
the worst case ($+20$d; a $z=1$ SN), and in most cases would be far
smaller than this.
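These epoch requirements are simple to encode; a minimal sketch,
assuming per-epoch records of (MJD, filter) and the fitted date of
maximum light:
\begin{verbatim}
def passes_coverage_cuts(epochs, s, z, mjd_max):
    """epochs: list of (mjd, band) tuples; phases are converted
    to effective rest-frame days via t_eff = t_obs/[s(1+z)]."""
    t_eff = lambda mjd: (mjd - mjd_max) / (s * (1.0 + z))
    def has(bands, lo, hi):
        return any(b in bands and lo <= t_eff(m) <= hi
                   for m, b in epochs)
    return (has({'i'}, -15.0, 2.5) and has({'r'}, -15.0, 2.5) and
            has({'g'}, -15.0, 5.0) and
            has({'i'}, -9.0, 7.0) and has({'r'}, -9.0, 7.0) and
            has({'i', 'r'}, 5.0, 20.0))
\end{verbatim}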
Table~\ref{tab:culls} provides the numbers of candidates that survive
each of the applied cuts. In total, $1210$ SNLS detections pass the
light curve coverage cuts, $305$ of which are spectroscopically
confirmed SNe~Ia. (For consistency, these same objective requirements
are also applied to the artificial SNe~Ia used in the Monte Carlo
simulations (\S\ref{sec:effs}), thereby directly incorporating the
effects of this cut into the detection efficiencies.) With these
objective criteria satisfied, we can then use light-curve fitting to
define a photometric SN~Ia sample.
\begin{deluxetable}{lcc}
\tablewidth{0pt}
\tablecaption{Number of candidates after selection cuts\label{tab:culls}}
\tablehead{
\colhead{Cut} & \colhead{All candidates} & \colhead{Confirmed SNe~Ia}
}
\startdata
Masking cut & 1538 & 325 \\
Observational cuts & 1210 & 305 \\
Fit quality and $s$ cuts & ~691 & ~286
\enddata
\end{deluxetable}
\subsection{Removing non SNe Ia}
\label{sec:removing-non-sne}
A set of $\chi^2_{\nu}$ goodness-of-fit cuts (here, $\chi^2_{\nu}$ is
the $\chi^2$ per degree of freedom, $\nu$) are applied to all of the
SN~Ia light-curve fits from {\tt estimate\_sn} to help eliminate
non-Ias from the current sample \citep[see also][]{sul06a}. An
overall $\chi^2_{\nu}$ cut along with individual $i_M$ and $r_M$
filter $\chi^2_{\nu}$ constraints are applied separately for cases
both with and without known redshifts. Light-curve fit quality limits
for the sample with known input redshifts are set to $\chi^2_{\nu}<9$,
$\chi_i^2<9$, and $\chi_r^2<18$ (here the $\nu$ is omitted for
clarity). Those with fit redshifts are given stricter limits of
$\chi^2_{\nu}<6$, $\chi_i^2<6$, and $\chi_r^2<12$. The tighter
$\chi^2_{\nu}$ limits for candidates without known redshifts are necessary
since core-collapse SNe --- specifically SNe Ib/c --- can sometimes
achieve better fits to SN~Ia templates when $z$ is permitted to float
from the true value. The limits are determined empirically by
maximizing the fraction of SNe~Ia remaining in the sample, while also
maximizing the number of known non-Ias that are removed. Note that
{\tt estimate\_sn}, unlike SiFTO, enforces a color relation between
the fluxes in different filters, which leads to larger $\chi^2_{\nu}$ than
in SiFTO fits.
In the case of fixed [floating] redshifts input to the fit, $>95\%$
[$>96\%$] of spectroscopically identified SNe~Ia survive the
$\chi^2_{\nu}$ cuts (we correct for this slight inefficiency when
calculating our final rate numbers), while $0\%$ [$13\%$] of confirmed
non-Ias remain in the sample. A final round of SiFTO fits is then
performed to determine the output values of stretch and color. The
input redshifts to SiFTO are set to $z_{\mathrm{SNphot}}$ wherever no
$z_{\mathrm{spec}}$ values are available.
\begin{figure*}
\includegraphics[width=0.49\textwidth]{f04a.eps}
\includegraphics[width=0.49\textwidth]{f04b.eps}
\includegraphics[width=0.49\textwidth]{f04c.eps}
\caption{Distributions in redshift (upper left), stretch (upper
right), and color (lower left) for the SNLS SNe~Ia. The gray
histogram represents the final photometric SN Ia sample and the blue
histogram shows the fraction of the sample with known redshifts. The
spectroscopically confirmed SNe~Ia are shown as the red distribution
in each plot. Sample incompleteness causes the decline in the
observed population at $z>1.0$.}
\label{fig:objpars}
\end{figure*}
One final light-curve fitting cut is then applied on the sample,
requiring that the output SiFTO template fits have
$\chi^2_{\mathrm{SiFTO}}<4$. This step removes all but one of the
remaining confirmed non-Ias\footnote{The identification of
SNLS-06D4cb is inconclusive, although it has a spectrum that is a
poor match to a SN Ia. The SN photometric redshift for this object
is $z_{\mathrm{SNphot}}=0.64$ but the host has a spectroscopic redshift of
$z=0.4397$.} when all redshifts are allowed to float, while at the
same time maximizing the number of confirmed SNe~Ia passing the cut.
No known contaminants remain when all available $z_{\mathrm{spec}}$
values are fixed in the fits.
\subsection{The photometric SN I\lowercase{a} sample}
The final photometric SN~Ia (phot-Ia) sample is restricted to $0.1\leq
z\leq 1.1$. Above this redshift, the rates are found to be too
uncertain to include in subsequent analyses. This is a result of low
S/N, poor detection efficiency, 100\% spectroscopic incompleteness,
and the potential for increased contamination from non-Ias. Only
candidates having stretch values within $0.7\leq s\leq 1.3$ are
considered in the present study. This range is characteristic of the
SNLS spectroscopic sample --- shown by the red histogram in the
stretch panel (upper right) of Fig.~\ref{fig:objpars} --- but excludes sub-luminous
events such as SN1991bg. These sub-luminous, low-stretch SNe~Ia in the
SNLS sample have been studied in detail by \citet{gon11} -- our
stretch limit removes 22 such objects from our sample. Extremely red
($c>0.6$), and presumably highly extincted, candidates are also
removed. This cut eliminates only one event: SNLS-04D2fm, a faint SN
of unknown type at $z_\mathrm{spec}=0.424$.
The final redshift, stretch, and color distributions resulting from
the various cuts are shown in Fig.~\ref{fig:objpars}. The phot-Ia
sample consists of $691$ objects, $371$ of which have known redshifts.
A total of $286$ objects in this sample have been spectroscopically
confirmed as Type Ia SNe (Table~\ref{tab:culls}). The redshift
histogram reveals that the incompleteness of the spectroscopic sample
(in red) begins to increase beyond $z\sim 0.5$, where the rise in
the required exposure time makes taking spectra of every candidate too
expensive. The effects of incompleteness in the observed SNLS sample
become severe at $z>1.0$. The full phot-Ia sample has median
stretch and color values of $s=1.00$ and $c=-0.04$, respectively. The
color distribution peaks at a slightly redder value than the estimated
typical SN~Ia color of $c_f \sim -0.06$, based on the
distribution observed for the spectroscopic SNLS sample.
\section{Detection efficiencies}
\label{sec:effs}
With the final SN Ia sample in hand, we now need to estimate the
weight that each of these events contributes in our final rates
calculation. These ``detection efficiencies'' depend on many
observational factors and will obviously vary with SN Ia
characteristics. For example, at higher redshift, the higher-stretch
SNe Ia are more likely to be recovered not only because they are
brighter, but also because they spend a longer amount of time near
maximum light, and are therefore more likely to pass the culls of
$\S$\ref{sec:culls}. In a rolling search like SNLS, such effects can
be directly accounted for by measuring recovery statistics for a range
of simulated input SN Ia properties using the actual images (and their
epochs) observed. This is a brute-force approach, but is a practical
way to accurately model a survey such as SNLS, helping to control
potential systematic errors by avoiding assumptions about image
quality limitations and data coverage that may bias the rate
calculation. Uncertainties on search time and detection area are
avoided since the actual values are well defined.
\subsection{Monte Carlo simulations}
\label{sec:MC}
An exhaustive set of Monte Carlo simulations were performed for each
field-season to determine the recovery fraction as a function of
redshift, stretch, and color. Full details about these simulations are
presented in \citet{per10}.
A total of $2.5\times 10^6$ artificial SNe~Ia with a flat redshift
distribution were added to galaxies present in the SNLS fields. Each
host galaxy was chosen to have a photometric redshift within 0.02 of
the artificial SN redshift, with the probability of selecting a
particular galaxy weighted by the ``A+B'' SN rate model with
coefficients from \citet[][hereafter \citetalias{sul06b}]{sul06b}.
Within their host galaxies, the artificial SNe were assigned
galactocentric positions drawn from the two-dimensional Gaussian
distribution about the host centroid returned by SExtractor, i.e., the
artificial SNe are placed with a probability that follows the light of
the host galaxy.
The simulated objects were assigned random values of stretch from a
uniform distribution in the range $0.5\leq s \leq 1.3$, with colors
calculated from the stretch--color relationships presented in
\citet{gon11} (the use of a uniform distribution in stretch ensures
that the parameter space of SN Ia events is equally sampled). Peak
apparent rest-frame $B$ magnitudes ($m_B$) at each selected redshift
were calculated for our cosmology and a SN Ia absolute magnitude, and
adjusted for the color--luminosity and stretch--luminosity relations.
We use an empirical piece-wise stretch-luminosity relationship with
different slopes above and below $s=0.8$ \citep[e.g.][]{gar04,gon11},
and SN Ia photometric parameters from the SNLS3 analysis
\citep{con11,sul11}. These peak apparent magnitudes were then further
adjusted by an amount $\Delta$mag according to the observed intrinsic
dispersion ($\sigma_\mathrm{int}$) in SN~Ia magnitudes following $s$
and $c$ corrections. Here, $\sigma_\mathrm{int}$ parameterizes a
Gaussian distribution from which a $\Delta$mag can be assigned for
each artificial event.
The SN color--luminosity relation includes both effects intrinsic to
the SN, and extrinsic effects such as dust. We use coefficients
consistent with the SNLS3 analysis, which favor a slope between $m_B$
and $c$ of $<4.1$, the value expected based on Milky Way dust. As
there is no evidence that this slope evolves with redshift
\citep{con11}, we keep it fixed for all the artificial SNe. For the
detection efficiency grids, our $c$ values range up to 0.6,
corresponding to a SN that is $\sim1.8$\,mag fainter in $B$-band than
a normal SN Ia.
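In outline, and with placeholder luminosity-relation coefficients (the
actual simulations use the SNLS3 values and a piece-wise
stretch--luminosity relation), each artificial SN~Ia is generated as:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

# Placeholder coefficients for illustration only:
M_B, ALPHA, BETA, SIGMA_INT = -19.2, 1.4, 3.1, 0.13

def draw_artificial_sn(z, mu, color_from_stretch):
    """One artificial SN Ia at redshift z, distance modulus mu;
    color_from_stretch encodes the stretch--color relation."""
    s = rng.uniform(0.5, 1.3)          # flat stretch distribution
    c = color_from_stretch(s)          # stretch--color relation
    dmag = rng.normal(0.0, SIGMA_INT)  # intrinsic dispersion
    # peak apparent B mag with (single-slope) stretch- and
    # color-luminosity corrections applied
    m_B = M_B + mu - ALPHA * (s - 1.0) + BETA * c + dmag
    return dict(z=z, s=s, c=c, m_B=m_B)
\end{verbatim}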
\begin{figure}
\plottwo{f05a.eps}{f05b.eps}
\plottwo{f05c.eps}{f05d.eps}
\caption{Recovery fraction as a function of $i_M$ (AB) magnitude
(upper left), redshift (upper right), stretch (lower left), and
color (lower right) for all field-seasons combined. The solid lines
represent the fraction of objects found, and the dashed lines
include the additional observational constraints as described in
\S\ref{sec:culls}. These plots include only the artificial SNe~Ia
from the simulations that lie within the parameter space typical of
the observed SNLS sample.}
\label{fig:compl}
\end{figure}
Each artificial SN was assigned a random date of peak magnitude. For
the field-season under study, this ranged from 20 observer-frame days
before the first observation, to 10 days after the last observation.
This ensures that the artificial events sample the entire phase range
allowed by the culls in $\S$~\ref{sec:culls} at all redshifts. The
light curve of each event in $i_M$ was then calculated using the
$k$-correction appropriate for each epoch of observation, and each
artificial object was added at the appropriate magnitude into every
$i_M$ image. The real-time search pipeline \citep{per10}, the same
one that was used to discover the real SNe, was run on each epoch of
data to determine the overall recovery fraction as a function of the
various SN~Ia parameters. The variation in candidate recovery over
magnitude, redshift, stretch, and color are shown by the solid lines
in Fig.~\ref{fig:compl}. The 50\% detection completeness limit lies
at $i_M=24.3$ mag in the AB system.
As expected, SNe Ia that are high stretch, blue, or at lower redshift
are all generally easier to recover. Note that at lower redshifts, the
faster (less time-dilated) nature of the SN~Ia light curves means that
the observational criteria of \S\ref{sec:culls} are slightly more
likely to remove events (as there are fewer opportunities to observe a
faster SN), hence the observed decrease in the recovered fraction
towards lower redshifts. That is, a low-$z$ SN that peaks during
bright time is less likely to be recovered than a higher-$z$ SN
peaking at the same epoch, even if they had the same observed peak
magnitude. This is also partially reflected in the fraction recovered
as a function of magnitude, with a curvature in the recovered fraction
towards brighter magnitudes. The recovery results are discussed in
detail in \citet{per10}.
A grid of detection efficiencies was constructed independently for
each field-season using the recovery statistics in bins of measured
redshift ($\Delta z=0.1$), stretch ($\Delta s=0.1$), and color
($\Delta c=0.2$). These bin sizes were found to provide adequate
resolution in each parameter. We investigated the use of a higher
resolution in stretch and color, and found no significant impact on
our results. Every observed phot-Ia in the SNLS sample is thereby
assigned a detection efficiency by linearly interpolating in $z/s/c$
space within the grid corresponding to the field-season during which
it was detected, using its measured parameters:
$\varepsilon(\mathrm{field},z,s,c)$. These detection efficiencies are
plotted in Fig.~\ref{fig:objeffs_raw} prior to any adjustments for
sampling time and the availability of observations. Redder,
lower-stretch SNe~Ia tend to have smaller detection efficiencies, as
shown by the open circles in Fig.~\ref{fig:objeffs_raw}. For clarity,
detection efficiency errors are not shown in
Fig.~\ref{fig:objeffs_raw}.
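A sketch of this look-up, assuming each field-season's efficiencies
are stored as an array over the $(z,s,c)$ bin centers:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def efficiency_lookup(z_centers, s_centers, c_centers, grid):
    """grid has shape (n_z, n_s, n_c); returns a callable
    epsilon(z, s, c) interpolating linearly between bins."""
    interp = RegularGridInterpolator(
        (z_centers, s_centers, c_centers), grid,
        bounds_error=False, fill_value=None)  # extrapolate edges
    return lambda z, s, c: float(interp([z, s, c]))
\end{verbatim}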
Statistical uncertainties on $\varepsilon(\mathrm{field},z,s,c)$ for
well-sampled data are governed by the number of Monte Carlo
simulations performed, and are small in comparison to the systematic
error resulting from assumptions made about the underlying intrinsic
SN~Ia magnitude dispersion (the $\Delta$mag distribution, parameterized
by $\sigma_\mathrm{int}$). To estimate these latter errors, the
detection efficiencies are recalculated using a range of
$\sigma_\mathrm{int}$ values from $0.12$ to $0.15$. Bins with
$\varepsilon=1$ will effectively have zero uncertainty, since the
likelihood of recovery will not depend on the details of the
population distribution; by contrast, ``low-efficiency'' bins are more
seriously affected. These detection efficiency ``errors'' are
included into the overall rate uncertainties in \S\ref{sec:errsims}.
\subsection{Sampling time}
To remain consistent in the selection criteria used for both the
observed SNLS sample and the fake objects, we also apply the same
observational cuts described in \S\ref{sec:culls} to the artificial
SNe~Ia. Using the peak date of each simulated light curve, we
determine whether the minimum observing requirements are met in each
filter by comparing with the SNLS image logs. This directly
incorporates the observational cuts into the detection efficiency
calculations, while factoring in losses due to adverse weather and the
gaps between epochs. The recovery fractions that include these
observational requirements are shown by the lower dashed lines in
Fig.~\ref{fig:compl}.
Each candidate's detection efficiency is multiplied by a factor to
account for its corresponding sampling time window for detection,
yielding a ``time corrected'' rest-frame efficiency $\varepsilon_T$:
\begin{equation}
\varepsilon_T = \varepsilon\,\frac{1}{(1+z)}\frac{\Delta T}{\mathrm{yr}}
\end{equation}
The sampling period $\Delta T$ (in years) for a given field-season is
\begin{equation}
\Delta T = \frac{1}{365.24} \left[\mathrm{max(MJD) - min(MJD)} + 30
\right],
\label{eq:searchtime}
\end{equation}
where MJD is the modified Julian date of the available detection
images. The extra 30 days account for the range in peak dates allowed
for the artificial SN~Ia light curves, from 20 days prior to the first
observation in a given field-season to 10 days past the final epoch.
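Explicitly, the time correction applied to each candidate is:
\begin{verbatim}
def time_corrected_efficiency(eff, z, mjd_first, mjd_last):
    """Fold the field-season sampling window (with the 30-day
    peak-date padding) and time dilation into the raw
    detection efficiency, giving epsilon_T."""
    dT_yr = (mjd_last - mjd_first + 30.0) / 365.24
    return eff * dT_yr / (1.0 + z)
\end{verbatim}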
The resulting time-corrected rest-frame detection efficiencies for the
phot-Ia sample are plotted as a function of redshift in
Fig.~\ref{fig:objeffs}. Since each field is observable for at most
$4-6$ months of the year, $\varepsilon_T$ peaks at $\sim 0.4$ even for
bright, nearby objects.
\begin{figure}
\plotone{f06.eps}
\caption{Detection efficiencies ($\varepsilon_i$ in
eqn.~\ref{eq:rate}) measured for each candidate in the photometric
SN~Ia sample and plotted against redshift. SNe~Ia that are redder
than the adopted fiducial color of $c_f \sim -0.06$ are shown as red
circles, while bluer objects are shown as blue squares. Open
symbols represent SNe~Ia with stretches smaller than the median
value of the sample ($s<1$). These efficiencies have not been
corrected for changes in the sampling time between the different
fields observed.}
\label{fig:objeffs_raw}
\end{figure}
\begin{figure}
\plotone{f07.eps}
\caption{Time-corrected rest-frame efficiencies for the SNLS phot-Ia
sample plotted against redshift. The efficiencies shown here are
$<1$ even at low-$z$ since they have been adjusted for field
observability. The dashed line shows that a $(1+z)^{-1}$ slope
matches the general trend of the data out to $z\sim 1$, where the
detection efficiencies begin to drop off more quickly. SNe~Ia with
$c>c_f$ are shown as red circles and bluer ones as blue squares.
Lower-stretch ($s<1$) events are displayed as open symbols. There
are no significant differences in the median values of
$\varepsilon_T$ as a function of redshift for high- and low-stretch
objects.}
\label{fig:objeffs}
\end{figure}
Figs.~\ref{fig:objeffs_raw} and \ref{fig:objeffs} show that there is a
drop-off in the efficiencies above $z=0.9$, in particular for the
redder $c$ bins, making it more difficult to calculate accurate rates
at these redshifts due to color--stretch bins which are not sampled.
At $z>1.1$, it is not possible to measure SN~Ia rates using this
method due to poor survey sensitivity and inadequate statistical
sampling of spectral templates. Therefore, we restrict our volumetric
rate calculations to the range $0.1\leq z < 1.1$.
\section{SN I\lowercase{a} rates}
\label{sec:SNLSrates}
Volumetric SN~Ia rates are calculated from eqn.~\ref{eq:rate} by
summing the observed SNe~Ia weighted by the inverse of their
time-corrected rest-frame efficiencies. The total sampling volumes for
the deep fields in redshift bins of $\Delta z=0.1$
(eqn.~\ref{eq:volume}) are provided in Column 2 of
Table~\ref{tab:rates}. Columns 3 and 4 show the numbers of observed
candidates in each bin for the entire sample ($N_\mathrm{obs}$) and
for the spectroscopically confirmed SNe~Ia ($N_\mathrm{spec-Ia}$) in
each redshift bin. The ``raw'' measured rates ($r_\mathrm{meas}$)
with their weighted statistical errors are given in Column 5, in units
of $\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$; see later sections for the meaning of the remaining
columns.
\begin{deluxetable}{lccccccc}
\tabletypesize{\footnotesize}
\tablewidth{0pt}
\tablecaption{Volumetric rates from the SNLS sample\label{tab:rates}}
\tablehead{
\colhead{$z$ bin} &
\colhead{Survey Volume $V$} &
\colhead{$N_\mathrm{obs}$} &
\colhead{$N_\mathrm{spec-Ia}$} &
\colhead{$r_\mathrm{meas}$} &
\colhead{$r^{\prime}_\mathrm{meas}$} &
\colhead{$\langle z \rangle$} &
\colhead{$r_\mathrm{V}$\tablenotemark{a}} \\
\colhead{} &
\colhead{\footnotesize($10^4$ Mpc$^3$)} &
\colhead{} &
\colhead{} &
\multicolumn{2}{c}{\footnotesize($\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$)} &
\colhead{} &
\colhead{\footnotesize($\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$)}
}
\startdata
$0.10-0.20$ & 17.3 & 4 & 3 & $0.21 \pm 0.11$ &\nodata& 0.16 & $0.14 ^{+0.09}_{-0.09}$$^{+0.06}_{-0.12}$ \\
$0.20-0.30$ & 42.8 & 16 & 16 & $0.30 \pm 0.08$ &\nodata& 0.26 & $0.28 ^{+0.07}_{-0.07}$$^{+0.06}_{-0.07}$ \\
$0.30-0.40$ & 75.7 & 31 & 24 & $0.35 \pm 0.07$ &\nodata& 0.35 & $0.36 ^{+0.06}_{-0.06}$$^{+0.05}_{-0.06}$ \\
$0.40-0.50$ & 112.7 & 42 & 29 & $0.36 \pm 0.06$ &\nodata& 0.45 & $0.36 ^{+0.06}_{-0.06}$$^{+0.04}_{-0.05}$ \\
$0.50-0.60$ & 151.5 & 72 & 47 & $0.48 \pm 0.06$ &\nodata& 0.55 & $0.48 ^{+0.06}_{-0.06}$$^{+0.04}_{-0.05}$ \\
$0.60-0.70$ & 190.1 & 91 & 36 & $0.55 \pm 0.06$ & $0.57\pm0.06$ & 0.65 & $0.48 ^{+0.05}_{-0.05}$$^{+0.04}_{-0.06}$ \\
$0.70-0.80$ & 227.2 & 110 & 56 & $0.59 \pm 0.06$ & $0.57\pm0.06$ & 0.75 & $0.58 ^{+0.06}_{-0.06}$$^{+0.05}_{-0.07}$ \\
$0.80-0.90$ & 262.1 & 128 & 44 & $0.64 \pm 0.06$ & $0.65\pm0.06$ & 0.85 & $0.57 ^{+0.05}_{-0.05}$$^{+0.06}_{-0.07}$ \\
$0.90-1.00$ & 294.1 & 141 & 25 & $1.20 \pm 0.17$ & $0.99\pm0.29$ & 0.95 & $0.77 ^{+0.08}_{-0.08}$$^{+0.10}_{-0.12}$ \\
$1.00-1.10$\tablenotemark{b} & 323.0 & 50 & 6 & $0.93 \pm 0.25$ & $0.51\pm0.26$ & 1.05 & $0.74 ^{+0.12}_{-0.12}$$^{+0.10}_{-0.13}$ \\
\enddata
\tablenotetext{a}{The first error listed is statistical, and the
second systematic.}
\tablenotetext{b}{Bins at $z>1.0$ are not
included in the rates analysis; see \S\ref{sec:distcorr}.}
\end{deluxetable}
Contamination by non-Ias that survive the culling criteria is
estimated to contribute under $2\%$ to the total measured rates
integrated to $z\sim 1$. The contribution is negligible up to $z\sim
0.5$, increasing to around $4\%$ in the highest redshift bins near
$z\sim 1$. This
is determined by summing $1/\varepsilon_T$ for the known non-Ias, and
dividing by the corresponding value for objects in each redshift bin
with available spectroscopy. The $\varepsilon_T$ values used here are
based on the results obtained when allowing redshift to vary in the
fits, not when holding $z$ fixed at the spectroscopic redshifts.
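This estimate amounts to a ratio of efficiency-weighted counts; as a
sketch:
\begin{verbatim}
def contamination_fraction(eps_T_non_ia, eps_T_spec_sample):
    """Fractional contribution of known non-Ias to the rate in
    a redshift bin, from time-corrected efficiencies eps_T."""
    w_non_ia = sum(1.0 / e for e in eps_T_non_ia)
    w_total = sum(1.0 / e for e in eps_T_spec_sample)
    return w_non_ia / w_total
\end{verbatim}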
We now correct the raw measured rates for potential systematic offsets
in the photometric redshifts and other parameters. This is done using
the technique described next in \S\ref{sec:errsims}, which also
computes a combined statistical and systematic uncertainty on the
final rates. The SN~Ia rates are potentially also sensitive to the
inclusion of very low detection efficiency candidates at $z\ga 0.9$,
and we must consider the effects of undetected SNe~Ia in $z/s/c$ bins
with very poor detection recovery rates. These low-efficiency issues
are discussed later in \S\ref{sec:distcorr}.
\subsection{Error analysis}
\label{sec:errsims}
In addition to the simple ``root-$N$'' statistical errors, a number of
additional uncertainties also affect our measured rates. These can
include errors in the measured SN (photometric) redshift, stretch, and
color, which together determine the detection efficiency (and hence
weight) assigned to each event.
As shown in Fig.~\ref{fig:zcomp}, there is a redshift uncertainty in
each measure of $z_{\mathrm{SNphot}}$. The discrepancy between
$z_{\mathrm{spec}}$ and $z_{\mathrm{SNphot}}$ for the confirmed SN Ia
sample is presented in Fig.~\ref{fig:zerrs} for two bins in stretch.
There is a small offset above $z=0.7$, increasing to $\Delta z =
z_{\mathrm{spec}}-z_{\mathrm{SNphot}}\approx 0.05$ ($\sigma=0.08$) at
$z>1$ in the $0.7\leq s < 1.0$ sample, and $\approx 0.07$
($\sigma=0.06$) in the $1.0\leq s < 1.3$ sample. On average, the
$z_{\mathrm{SNphot}}$ measurements are underestimated, with an
increasing offset to higher redshift.
An offset is expected based on a Malmquist bias, such that brighter
objects are more likely to have a spectroscopic type at a fixed
redshift \citep{per10}. However, we estimate this effect to be
smaller: the solid lines in Fig.~\ref{fig:zerrs} show this predicted
offset as a function of redshift. This is calculated by estimating
the rest-frame $B$-band apparent magnitude with $z$ for the adopted
cosmology, and applying the $\Delta$mag offsets contributed by
spectroscopic selection as measured in \citet{per10}.
\begin{figure}
\plottwo{f08a.eps}{f08b.eps}
\caption{The redshift offset, $\Delta z =
z_{\mathrm{spec}}-z_{\mathrm{SNphot}}$, as a function of SN or host
spectroscopic redshift for the phot-Ia sample. The offsets are
calculated separately in two stretch bins: $0.7\leq s < 1.0$ (upper
panel) and $1.0\leq s < 1.3$ (lower panel). The median $z$ offset
in sliding bins of width $\Delta z=0.2$ are shown by the solid
points, with error bars representing the standard deviation in each
bin. The offsets increase from approximately zero at $z=0.7$ to
$\Delta z\sim 0.05-0.07$ at $z>1$. The solid lines represent the
expected offset due purely to sample selection bias in each stretch
range.}
\label{fig:zerrs}
\end{figure}
To study these various uncertainties, and to handle this redshift
migration effectively, we perform a set of Monte Carlo simulations on
the measured rates. We begin with the basic rate measurements,
$r_{\mathrm{meas}}(z)$, from Table~\ref{tab:rates}, and calculate how
many measured SNe~Ia that rate represents in each redshift bin by
multiplying by the volume in that bin: $N_{\mathrm{meas}}(z)$. Many
realizations (5000) are performed by drawing $N_{\mathrm{meas}}$
objects from typical SNLS-like distributions of artificial SNe~Ia.
These are the same artificial objects as used in the detection
efficiency calculations (\S\ref{sec:effs}), although the stretch,
color, and $\Delta$mag distributions are matched to those of the
spectroscopically confirmed SN~Ia sample \citep[see][]{per10}. The
distributions for input to the error calculations are shown in
Fig.~\ref{fig:simdist}. Only a fraction of the objects in each
redshift bin have a spectroscopic redshift, with the remainder having
a SN Ia photometric redshift and accompanying uncertainty
(Fig.~\ref{fig:zerrs}). This ``spectroscopic fraction'',
$F_{\mathrm{spec}}(z)$, is calculated in each redshift bin from
Fig.~\ref{fig:objpars}.
\begin{figure}
\plotone{f09.eps}
\caption{Resampled distributions showing the properties of the
artificial SNe~Ia used as input to the rate error simulations.
$\Delta$mag refers to the scatter in SN~Ia rest-frame $B$-band peak
magnitudes, and has a dispersion of $\sigma_\mathrm{int}=0.14$.}
\label{fig:simdist}
\end{figure}
\begin{figure*}
\includegraphics[width=0.49\textwidth]{f10a.eps}
\includegraphics[width=0.49\textwidth]{f10b.eps}
\includegraphics[width=0.49\textwidth]{f10c.eps}
\caption{Histograms showing the input (black line) and output (gray
filled) parameter distributions from the rate error simulations as a
function of redshift (left), stretch (center), and color (right).}
\label{fig:parshift}
\end{figure*}
The procedure for each realization is then as follows (a schematic
code sketch is given after the list):
\begin{enumerate}
\item $N_{\mathrm{meas}}(z)$ is randomized according to the Poisson
distribution, using the Poisson error based on $N_{\mathrm{obs}}(z)$
but scaled to $N_{\mathrm{meas}}(z)$, to give $N_{\mathrm{rand}}(z)$
simulated objects.
\item Each simulated object $i$ in each redshift bin is assigned a
random redshift appropriate for that bin ($z_i$). Within each bin,
the probability follows a scaled number-density profile according to
the expected increase in volume with redshift.
\item The $z_i$ are then randomly matched to a SN Ia from the
artificial distribution with the same redshift
(Fig.~\ref{fig:simdist}), and that event's stretch ($s_i$) and color
($c_i$) assigned to the simulated event.
\item Using the fraction of spectroscopic redshifts in each bin
$F_{\mathrm{spec}}(z)$ (Fig.~\ref{fig:objpars}), we assign this
fraction of the $N_{\mathrm{rand}}$ objects to correspond to a
spectroscopic redshift measurement. The remaining redshifts are
assumed to come from a photometric fit, and are shifted and
randomized using the median offsets and standard deviations shown in
Fig.~\ref{fig:zerrs} to give $z^{\prime}_i$.
Any event with a $z_{\mathrm{spec}}$ is not adjusted.
\item Correlated stretch and color errors are then incorporated for
all objects, using typical covariances produced by SiFTO for the
SNLS sample, and the $s_i$ and $c_i$ randomized to $s^{\prime}_i$ and
$c^{\prime}_i$.
\item $z^{\prime}_i$, $s^{\prime}_i$, and $c^{\prime}_i$ are used to
match each simulated object to a detection efficiency. Efficiency
errors are included by applying a random shift drawn from a
two-sided Gaussian representing the asymmetric uncertainties on each
value (\S\ref{sec:MC}).
\item Random numbers between zero and one are generated to evaluate
whether each simulated object gets ``found'': if the selected number
is lower than the detection efficiency associated with the simulated
object, that event is added to the rate calculated for that
iteration.
\item The rate for that iteration is then calculated using the
appropriate detection efficiencies from step 6.
\end{enumerate}
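A schematic sketch of one realization, with illustrative stand-ins for
the inputs described above, is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def one_realization(n_meas, n_obs, draw_artificial, f_spec,
                    dz_med, dz_sig, sc_cov, efficiency):
    """One rate-error realization for a single redshift bin;
    the arguments are illustrative stand-ins for the inputs
    described in steps 1-8 above."""
    # 1: Poisson scatter based on N_obs, scaled to N_meas
    n = int(round(n_meas * rng.poisson(n_obs) / max(n_obs, 1)))
    weights = []
    for _ in range(n):
        # 2-3: redshift drawn within the bin, matched to an
        # artificial SN to obtain a stretch and color
        z, s, c = draw_artificial()
        # 4: shift/randomize photometric redshifts only
        if rng.random() > f_spec:
            z += dz_med + rng.normal(0.0, dz_sig)
        # 5: correlated stretch and color errors
        ds, dc = rng.multivariate_normal([0.0, 0.0], sc_cov)
        s, c = s + ds, c + dc
        # 6: efficiency look-up (error sampling omitted here)
        eps = efficiency(z, s, c)
        # 7: is this simulated object "found"?
        if eps > 0 and rng.random() < eps:
            weights.append(1.0 / eps)
    # 8: the rate for this iteration is the weighted sum
    # (division by the bin volume omitted in this sketch)
    return sum(weights)
\end{verbatim}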
The final volumetric rates, $r_\mathrm{V}$, are presented in Column 8 of
Table~\ref{tab:rates}. These are calculated as the mean of the 5000
simulated rates in each redshift bin. We also calculate the standard
deviation in each bin and subtract from it in quadrature the
statistical Poisson uncertainty based on $N_{\mathrm{obs}}(z)$; the
remainder is our estimate of the systematic uncertainty in each
redshift bin.
The effects of the simulations described above on the input redshift,
stretch, and color distributions are shown in Fig.~\ref{fig:parshift}.
The redshift histogram shows that the offsets applied in step~4 of the
simulations produce a net increase in redshift to compensate for the
small bias in the photometric fitting, with the effect increasing
towards higher $z$. This causes a flattening of the output rates
calculated by the simulations as compared with the measured (and
uncorrected) rates (Table~\ref{tab:rates}). In addition to the offset
towards higher redshifts at $z>0.7$, there is also a very small
spread in the stretch and color distributions
(Fig.~\ref{fig:parshift}).
\subsection{Low-efficiency candidates}
\label{sec:distcorr}
The detection efficiencies in some of the reddest $c$ bins begin to
rapidly decrease at $z\ga 0.9$ (Fig.~\ref{fig:objeffs}). As the
contribution to the rate from each observed SN~Ia goes as
$1/\varepsilon_T$, the measured volumetric rates are particularly
sensitive to any objects with very low detection efficiencies
($\varepsilon_T$). There is also the potential for a complete omission
of SNe Ia in some $z/s/c$ bins. For example, in $z/s/c$ bins with
less than $10\%$ detection efficiency, on average at least 10 SNe~Ia
must be observable in a given field-season for just one to be
detected. If that one SN is not detected by the real-time pipeline,
the 10 SNe~Ia that it truly represents in that bin will never be
counted in the final rates tally (and the rate measurement will be
biased).
To examine the sensitivity of our rates to the ``low-$\varepsilon_T$
regions'' of $z/s/c$ parameter space, we use the $z<0.6$
detection-efficiency-corrected SNLS sample as a model for the true
($s$, $c$) SN~Ia distribution at $z>0.6$. This population is assumed
observationally complete (Fig.~\ref{fig:compl}) and is taken to be
representative of the underlying sample of SNe~Ia in the
universe\footnote{Of course, this relies on the (possibly incorrect)
assumption of no evolution in intrinsic stretch or color as a
function of redshift \citep[e.g.,][]{how07}.}. The two-dimensional
($s$, $c$) distribution at $z<0.6$ is fit to the five $z>0.6$ bins,
and the best-fit scaling determined. The total rates
$r^{\prime}_{\mathrm{meas}}(z)$ are then calculated from those scaled
numbers (tabulated in Table~\ref{tab:rates}).
These tests indicate that, while the results remain consistent within
their errors up to $z=0.95$ ($r^{\prime}_\mathrm{meas}=0.99 \pm 0.29 \ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$ compared with
$r_\mathrm{meas}=1.20 \pm 0.17 \ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$), there is a significant amount of
uncertainty in the SNLS rates at higher redshifts due to sample
incompleteness. At $z=1.05$, the scaled rate is $r^{\prime}_\mathrm{meas}=0.51 \pm 0.26
\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$, whereas our measured value is $r_\mathrm{meas}= 0.93 \pm
0.25 \ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$. This finding is consistent with the results shown in
Figs.~\ref{fig:objpars} and \ref{fig:objeffs_raw}: the phot-Ia sample
numbers drop significantly at $z>1.0$, and those that are found in the
sample can have very low $\varepsilon_T$. For these reasons, we limit
the formal analysis of the SNLS rates to $r_\mathrm{V}(z < 1.0)$.
\section{Delay-time distributions}
\label{sec:dtds}
Having measured volumetric SN Ia rates and associated errors, we now
compare our measurements with those of other studies. We also examine
the SN Ia rate evolution as a function of redshift, and compare with
predictions based on various simple delay-time distribution models
from the literature.
For comparison of the SNLS rate measurements to various SN Ia models,
additional data in redshift ranges not sampled by SNLS is required. In
the rest of this section, we will make use of an extended SN Ia rate
sample comprising the \citet{li11b} LOSS measurement at $z\sim0$, the
\citet{dil10} sample from SDSS-SN at $z\sim0.2$, and the recent
\citet{gra11} Subaru Deep Field (SDF) sample at higher redshifts,
together with our SNLS results. Clearly other samples could have been
chosen -- however, these three are the largest SN Ia samples in their
respective redshift ranges, and have the greatest statistical power.
In the case of the SDSS and SDF samples, they are also built on
rolling SN searches similar to SNLS.
We make some small corrections to these published rates in order to
ensure a fair comparison across samples. The \citet{li11b} sample
includes all sub-classes of SNe Ia, including the peculiar events in
the SN2002cx-like class and sub-luminous events in the SN1991bg-like
class. SN2002cx-like events make up 5\% of the LOSS volume-limited SN
Ia sample \citep{li11a}. These are not present (or accounted for) in
the SNLS sample, and are excluded from the SDSS analysis
\citep{dil08}, so we therefore exclude these from the \citet{li11b}
sample, reducing their published rate value by 5\%.
Both the \citet{li11b} and \citet{dil10} samples include SNe Ia in the
sub-luminous SN1991bg category \citep[see also][]{dil08}, which we
exclude here in the SNLS analysis \citep[these are studied
in][]{gon11}. While we could correct our own rates for this population
using the \citet{gon11} results, it is unclear how to treat the SDF
sample in the same way (do SN1991bg-like events even occur at $z>1$?)
and the SNLS sub-luminous measurement is quite noisy. Instead we use
the very well-measured fraction of SN1991bg-like SNe in the
volume-limited LOSS sample (15\%), and reduce both the LOSS and SDSS
published rates by this amount. This 15\% is based on the
classifications given in \citet{li11b}. We confirm that this is
appropriate for our stretch selection (i.e., we require $s>0.7$) by
fitting the available \citet{li11b} light curves with SiFTO. 16\% of
the available \citet{li11b} sample has a fitted stretch $<0.7$,
consistent with the 15\% reported as SN1991bg-like by \citet{li11b}.
\subsection{Comparison with published rates}
\begin{figure}
\plotone{f11.eps}
\caption{The SNLS volumetric SN~Ia rates in the context of the data in
Fig.~\ref{fig:rates_lit}. The filled circles represent the SNLS
rates from the current analysis. The rate at $z=1.05$ (with the
dashed error bar) represents the redshift bin in which
incompleteness and poor spectroscopic sampling make measurements
untrustworthy. SNLS rates above $z=1.0$ are not included in
subsequent fits. The samples of \citet{li11b} and \citet{dil10} have
been scaled downwards to reflect the exclusion of sub-luminous and
SN2002cx-like SNe Ia from the SNLS sample (see $\S$~\ref{sec:dtds}).
Over-plotted are the various SFHs we use in our analysis in
$\S$~\ref{sec:dtds} as fit to the SNLS data only. The short-dashed
line shows the piece-wise SFH from \citet{li08}, the long-dashed line
the Cole et al. form from \citet{li08}, the dot-dashed line the SFH
from \citet{yuk08}, and the dotted line the SFH of \citet{wil08}.}
\label{fig:rates_comp}
\end{figure}
Fig.~\ref{fig:rates_comp} shows the SNLS volumetric SN~Ia rates for
comparison with recent published results. The SNLS volumetric rate at
$z\sim 0.5$ published by \citet{nei06} is $\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}(\langle
z\rangle=0.47)=[0.42^{+0.13}_{-0.09}\mathrm{(syst)}\pm
0.06\mathrm{(stat)}]$\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}, consistent with our binned rates in the
same redshift range. Note that the effects of non-Ia contamination
(\S\ref{sec:SNLSrates}) have not been incorporated into the SNLS
errors shown in Fig.~\ref{fig:rates_comp}. Our results are also
consistent with \citet{dil10} at $z<0.3$, and with \citet{rod10} at
higher redshifts, although those latter measurements have
significantly greater uncertainties.
\begin{figure}
\plotone{f12.eps}
\caption{SNLS rates as a function of redshift, showing a power-law fit
to the data (solid line): $\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}(z)=r_0(1+z)^\alpha$, where
$\alpha=2.11\pm 0.28$ and $r_0=(0.17 \pm 0.03)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$. The reduced
$\chi^2$ goodness-of-fit statistic is $\chi^2_{\nu}=0.64$. For
comparison, the dashed line shows the \citet{col01} form of the
\citet{li08} star formation history profile, which has $\alpha=3.3$
out to $z\sim1$.}
\label{fig:rates_powerlaw}
\end{figure}
The SNLS rates show a rise out to $z\sim 1$, with no evidence of a
rollover at $z\sim 0.5$. We can parameterize the SNLS rate evolution
as a simple power-law:
\begin{equation}
\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}(z)=r_0(1+z)^\alpha,
\end{equation}
with the best-fit shown as the solid line in
Fig.~\ref{fig:rates_powerlaw}. We find $\alpha=2.11\pm 0.28$ and
$r_0=(0.17 \pm 0.03)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$ (all errors in this section are
statistical only). This evolution is shallower than a typical fit to
the cosmic star-formation history, with $\alpha\simeq3.3$
\citep[e.g.][]{li08}, though the constraining power of the SNLS data
alone at $z<0.3$ is not great. By comparison, \citet{dil10} find
$r_0=(0.23\pm0.01)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$ and $\alpha=2.04^{+0.90}_{-0.89}$ using the
lower-redshift SDSS-SN data, completely consistent with our results.
Including all the external data gives $r_0=(0.21 \pm 0.01)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,\mathrm{Mpc}^{-3}}$ and
$\alpha=1.70\pm0.12$.
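The power-law fit itself is a standard weighted least-squares problem;
e.g., using the binned $r_\mathrm{V}$ values and statistical errors
from Table~\ref{tab:rates}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

zc  = np.array([0.16, 0.26, 0.35, 0.45, 0.55,
                0.65, 0.75, 0.85, 0.95])
rV  = np.array([0.14, 0.28, 0.36, 0.36, 0.48,
                0.48, 0.58, 0.57, 0.77])  # 1e-4 SNe/yr/Mpc^3
err = np.array([0.09, 0.07, 0.06, 0.06, 0.06,
                0.05, 0.06, 0.05, 0.08])  # statistical only

power_law = lambda z, r0, alpha: r0 * (1.0 + z)**alpha
(r0, alpha), cov = curve_fit(power_law, zc, rV, sigma=err,
                             absolute_sigma=True, p0=[0.2, 2.0])
# recovers r0 ~ 0.17, alpha ~ 2.1
\end{verbatim}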
\subsection{Comparison with delay-time distribution models}
\label{sec:comp-with-models}
\begin{deluxetable}{lccccccccccc}
\tabletypesize{\scriptsize}
\setlength{\tabcolsep}{0.025in}
\tablecolumns{12}
\tablecaption{Various DTD model fits to the volumetric SN Ia rate, \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}.\label{tab:ABchi2}}
\tablehead{
&&&\multicolumn{3}{c}{\citet{li08} piece-wise SFH}&\multicolumn{3}{c}{\citet{li08} ``Cole et al.'' SFH}&\multicolumn{3}{c}{\citet{yuk08} SFH}\\
\colhead{Data} &
\colhead{Model} &\colhead{$\nu$} &
\multicolumn{2}{c}{Fit parameters}&
\colhead{$\chi^2_{\nu}$}&
\multicolumn{2}{c}{Fit parameters}&
\colhead{$\chi^2_{\nu}$}&
\multicolumn{2}{c}{Fit parameters}&
\colhead{$\chi^2_{\nu}$}}
\startdata
& & & $\tau$ (Gyr) & & & $\tau$ (Gyr) & & & $\tau$ (Gyr) & & \\[2pt]
\tableline
Extended\tablenotemark{a} & Gaussian & 18 & 3.4 & \nodata & 2.62 & 3.4 & \nodata & 1.60 & 3.4 & \nodata & 3.29 \\
Extended & Gaussian & 17 & $3.1\pm0.3$ & \nodata & 2.64 & $2.5\pm0.7$ & \nodata & 1.60& $2.9\pm0.3$ & \nodata & 3.00 \\
Ext.+D08 & Gaussian & 19 & 3.4 & \nodata & 2.72 & 3.4 & \nodata & 1.68 & 3.4 & \nodata & 3.43 \\
\tableline\\[-5pt]
& & & $\beta$ & & & $\beta$ & & & $\beta$ & & \\[2pt]
\tableline
SNLS & Power law & 8 & 1 &\nodata & 0.76 & 1 & \nodata &1.36 & 1 & \nodata &0.81 \\
Extended & Power law & 18 & 1 &\nodata & 0.72 & 1 & \nodata &1.06 & 1 & \nodata &0.78 \\
Extended & Power law & 17 & $0.98\pm0.05$ &\nodata & 0.76 & $1.15\pm0.08$ & \nodata & 0.92 & $0.98\pm0.05$ & \nodata & 0.81 \\
\tableline\\[-5pt]
& & & $\tau$ (Gyr) & & & $\tau$ (Gyr) & & & $\tau$ (Gyr) & & \\[2pt]
\tableline
SNLS & Exponential & 7 & $1.5\pm0.4$ &\nodata & 0.85 & $0.2\pm2.9$ &\nodata & 0.58 & $1.4\pm0.4$ &\nodata & 0.91 \\
Extended & Exponential & 17 & $2.6\pm0.3$ &\nodata & 1.33 & $2.1\pm0.3$ &\nodata & 1.15 & $2.5\pm0.3$ &\nodata & 1.44 \\
\tableline\\[-5pt]
& & & $\beta$ & & & $\beta$ & & & $\beta$ & & \\[2pt]
\tableline
SNLS & \citetalias{pri08} & 8 & 0.5 &\nodata & 4.14 &0.5 &\nodata & 4.81 & 0.5 &\nodata & 4.31 \\
Extended & \citetalias{pri08} & 18 & 0.5 &\nodata & 4.08 &0.5 &\nodata & 4.77 & 0.5 &\nodata & 4.19 \\
\tableline\\[-5pt]
& & & $A$\tablenotemark{b} & $B$\tablenotemark{c} & & $A$ & $B$ & & $A$ & $B$ & \\[2pt]
\tableline
SNLS & $A+B$ & 7 & $1.6\pm0.5$ & $3.4\pm0.3$ & 0.74 & $0.3\pm0.6$ & $5.2\pm0.5$ & 0.59 & $1.7\pm0.5$ & $3.2\pm0.3$ & 0.78 \\
Extended & $A+B$ & 17 & $1.9\pm0.1$ & $3.3\pm0.2$ & 0.60 & $1.5\pm0.2$ & $4.3\pm0.3$ & 0.77 & $2.0\pm0.1$ & $3.1\pm0.2$ & 0.63 \\
SNLS & $A=0$ & 8 & 0 & $4.4\pm0.3$ & 1.81 & 0 & $5.4\pm0.2$ & 0.54 & 0 & $4.2\pm0.3$ & 1.92 \\
Extended & $A=0$ & 18 & 0 & $5.3\pm0.4$ & 6.22 & 0 & $6.1\pm0.3$ & 2.95 & 0 & $5.1\pm0.4$ & 6.70 \\
SNLS & $B=0$ & 8 & $5.6\pm0.8$ & 0 & 9.72 & $6.1\pm0.9$ & 0 & 9.74 & $5.8\pm0.8$ & 0 & 10.09 \\
Extended & $B=0$ & 18 & $3.8\pm0.4$ & 0 & 9.42 & $4.0\pm0.4$ & 0 & 9.56 & $3.8\pm0.4$ & 0 & 9.66 \\
\tableline\\[-5pt]
& & & $\Psi_1$\tablenotemark{d} & $\Psi_2$ & & $\Psi_1$ & $\Psi_2$ & & $\Psi_1$ & $\Psi_2$& \\[2pt]
\tableline
Extended & 2-bin\tablenotemark{e} & 17 & $90\pm5.2$ & $1.2\pm0.10$ & 0.64 & $120\pm7.8$ & $0.88\pm0.14$ & 0.79 & $86\pm5.0$ & $1.3\pm0.10$ & 0.67 \\
\enddata
\tablenotetext{a}{The extended sample refers to the SNLS sample plus the external data described in $\S$~\ref{sec:dtds}.}
\tablenotetext{b}{Units of \ensuremath{\times10^{-14}\,\mathrm{SNe\,yr}^{-1}\,M_\odot^{-1}}, where \ensuremath{\mathrm{M}_{\mathrm{\odot}}}\ refers to the current stellar mass, \ensuremath{M_{\mathrm{stellar}}}.}
\tablenotetext{c}{Units of \ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}.}
\tablenotetext{d}{Units of \ensuremath{\times10^{-14}\,\mathrm{SNe\,yr}^{-1}\,M_\odot^{-1}}, where \ensuremath{\mathrm{M}_{\mathrm{\odot}}}\ refers to the total formed stellar mass, \ensuremath{M_{\ast}}.}
\tablenotetext{e}{A discrete DTD, equal to $\Psi_1$ at $t<420$\,Myr, and $\Psi_2$ otherwise.}
\end{deluxetable}
We now compare our rate evolution with simple parameterizations of the
delay-time distribution (DTD) from the literature relating the cosmic
star-formation history (SFH) to SN Ia rates. The DTD, $\Psi(t)$, gives
the SN Ia rate as a function of time for a simple stellar population
(SSP), i.e., following a $\delta$-function burst of star formation.
The SN Ia rate (\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}) at time $t$ is then
\begin{equation}
\label{eq:convol}
\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}(t)=\int^t_0\mathrm{SFR}(t-\tau)\Psi(\tau)d\tau ,
\end{equation}
where SFR($t$) is the star-formation rate as a function of time. Thus,
different functional forms of the DTD can be tested against
observations of volumetric rates if the SFR($t$), or the cosmic SFH,
is known \citep[e.g.,][]{mad98,str04,oda08,hb10,gra11}. An implicit
assumption in this test is that the DTD is invariant with redshift and
environment.
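In practice, eqn.~(\ref{eq:convol}) can be evaluated by direct
numerical convolution on a time grid. The sketch below is illustrative
only: the SFR$(t)$ is a toy stand-in for the \citet{li08} fit, and the
DTD is a $\beta=1$ power law with the 40\,Myr cut-off adopted below:
\begin{verbatim}
# Sketch of eqn. (convol): SNR_Ia(t) = int_0^t SFR(t - tau) Psi(tau) dtau.
import numpy as np

dt = 0.01                          # Gyr, grid spacing
t = np.arange(0.0, 13.5, dt)       # time since the onset of star formation

def sfr(tt):
    # toy rising-then-falling SFH (Msun/yr/Mpc^3), NOT the Li (2008) fit
    return 0.1 * (tt / 3.5) * np.exp(1.0 - tt / 3.5)

def psi_power_law(tau, beta=1.0, t_min=0.04):
    # power-law DTD, zero before the 40 Myr cut-off; normalization free
    return np.where(tau >= t_min, np.maximum(tau, t_min) ** (-beta), 0.0)

psi = psi_power_law(t)
snr = np.array([np.sum(sfr(ti - t[:i + 1]) * psi[:i + 1]) * dt
                for i, ti in enumerate(t)])   # Riemann sum of the integral
\end{verbatim}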
As our default SFH model, we choose the \citet{li08} update to the
\citet{hb06} fit to a compilation of recent star-formation density
measures. For simplicity we use a \citet{sal55} initial mass function
(IMF) with mass cut-offs at 0.1\ensuremath{\mathrm{M}_{\mathrm{\odot}}}\ and 100\ensuremath{\mathrm{M}_{\mathrm{\odot}}}, and assume that
stars began to form at $z=10$. The default SFH is parameterized in a
piece-wise fashion, and is over-plotted on the data in
Fig.~\ref{fig:rates_comp}. As shown by \citet{for06} and
\citet{gra11}, the choice of SFH can add an additional significant
systematic uncertainty in any comparisons of DTDs to SN Ia volumetric
data. We therefore compare with results obtained using an alternative
parameterization of the SFH by \citet{li08} following \citet{col01}, as
well as a SFH fit to slightly different data by \citet{yuk08}
\cite[see also][]{hb10}. In $\S$~\ref{sec:comparison-other-dtd}, we
also investigate a SFH derived in a very different manner
\citep{wil08}.
The integral of the DTD gives the total number of SNe Ia per formed
stellar mass, $N_{\mathrm{Ia}}/\ensuremath{M_{\ast}}$. This can be converted into the
fraction of intermediate-mass stars that explode as SNe Ia, $\eta$, by
multiplying by a factor of 47.2 (for the Salpeter IMF). This assumes
that the progenitor mass range for a SN Ia is 3--8\ensuremath{\mathrm{M}_{\mathrm{\odot}}}\
\citep[see][for a discussion]{mao08}. For all our model DTDs, we set
the DTD to zero at epochs earlier than 40\,Myr, the approximate lifetime
of an $8\ensuremath{\mathrm{M}_{\mathrm{\odot}}}$ star.
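The factor of 47.2 follows directly from the IMF integrals; a quick
numerical check (our own arithmetic, reproducing the quoted value):
\begin{verbatim}
# Number of 3-8 Msun stars per unit formed stellar mass for a Salpeter
# IMF, xi(m) ~ m^(-2.35), with cut-offs at 0.1 and 100 Msun.
from scipy.integrate import quad

xi = lambda m: m ** (-2.35)
n_38, _ = quad(xi, 3.0, 8.0)                       # stars in 3-8 Msun
m_tot, _ = quad(lambda m: m * xi(m), 0.1, 100.0)   # total formed mass

print(m_tot / n_38)   # ~47.2, so eta = 47.2 * (N_Ia / M_*)
\end{verbatim}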
\subsubsection{Gaussian DTDs}
\label{sec:gaussian-dtds}
\begin{figure*}
\plotone{f13.eps}
\caption{SN~Ia rates as a function of redshift with various delay-time
distribution (DTD) model predictions fit to the data for different
cosmic SFHs. Lower left: ``$A+B$'' model ($\S$~\ref{sec:aplusb}),
lower right: Gaussian DTDs ($\S$~\ref{sec:gaussian-dtds}), upper
left: the model of \citet{pri08} ($\S$~\ref{sec:power-law-dtds}),
and upper right: a generic power law DTD
($\S$~\ref{sec:power-law-dtds}). In all cases, solid lines
represent the piece-wise cosmic SFH of \citet{li08}, dashed lines
the \citet{col01} form of the \citet{li08} SFH, and dot-dash lines
the \citet{wil08} SFH. See text for more details of the models, and
Table~\ref{tab:ABchi2} for the numerical values.}
\label{fig:rates_dtd_compare}
\end{figure*}
We begin by fitting a Gaussian DTD, with $\Psi(t)\propto
e^{-(t-\tau)^2/(2\sigma^2)}$, to the volumetric \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\ data, following
\citet{str04} \citep[see also][]{str10}. We fit a DTD with parameters
fixed at $\tau=3.4$\,Gyr and $\sigma=0.2\tau$ (i.e., just adjusting
the normalization in the fits), as well as a DTD fit with $\tau$
allowed to vary. The results are listed in Table~\ref{tab:ABchi2} and
compared to other DTD fits in Fig.~\ref{fig:rates_dtd_compare}.
This model has $\chi^2_{\nu}=2.62$ ($\chi^2_{\nu}$ is the reduced
$\chi^2$, the $\chi^2$ per degree of freedom, $\nu$). Allowing $\tau$
as a free parameter in the fits gives $\tau=3.1\pm0.3$\,Gyr with a
similar $\chi^2_{\nu}$ -- the fit quality is slightly better when
using the Cole et al. form of the SFH ($\chi^2_{\nu}=1.60$).
The Gaussian DTDs therefore provide poor fits to the \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\ data, and
are not capable of matching the SNLS, SDF and SDSS/LOSS data
simultaneously. In particular, these Gaussian DTDs predict a decrease
in the number of SNe at $z>1$ not seen in the combined data set.
However, they were originally favored following fits to data including
$z>1$ points from \textit{HST} searches \citep{dah04,dah08} which are
not included in our analysis due to their lower statistical precision
compared to the SDF study. As a consistency check we also replace the
SDF data with the \citet{dah08} data (adjusted downwards by 15\% to
account for SN1991bg-like events) in our fits -- we find that the
$\chi^2_{\nu}$ does not improve (Table~\ref{tab:ABchi2}).
\subsubsection{Power law and exponential DTDs}
\label{sec:power-law-dtds}
Theoretically, if SNe Ia are dominated by a single channel, the DTD
will likely decline with age. In the single degenerate channel, SNe
Ia at 10\,Gyr should be rare, since 1\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}\ secondaries have small
envelopes to donate and must rely on only the most massive primaries
\citep{gre05}. A power law DTD (i.e., $\Psi(t)\propto t^{-\beta}$ with
$\beta\sim1$) with a low time-delay cut-off is expected in the double
degenerate scenario \citep[e.g.][]{gre05,for06,mao10}, and has been
explained post-hoc in the single degenerate channel using a mixture of
contributions \citep{hac08}. Furthermore, models with $\beta\sim1$
seem to provide a good match to a variety of recent observational data
\citep{tot08,mao10,gra11}.
We fit both $\beta=1$ and free $\beta$ DTDs to the SNLS+external \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\
data. The results can be found in Table~\ref{tab:ABchi2} and
Fig.~\ref{fig:rates_dtd_compare}. Our best-fit value is
$\beta=0.98\pm0.05$ ($\chi^2_{\nu}=0.76$), consistent with $1$. This
broad agreement with $1$ holds when considering the other SFH
parameterizations ($\beta=1.15\pm0.08$ for the Cole et al. form).
\citet*[][hereafter PHS]{pri08} present a simple model in which the SN
Ia rate tracks the white dwarf formation rate; this decreases with time
following an instantaneous burst of star formation as $\sim t^{-0.5}$, resulting in
a DTD with $\beta\sim0.5$. By fitting the SN Ia host galaxy data of
\citetalias{sul06b}, \citetalias{pri08} demonstrate that $\Psi(t)\sim
t^{-0.5\pm0.2}$, irrespective of the assumed SFH or the detailed
mixture of stellar populations. \citetalias{pri08} argue that the
single-degenerate formation scenario alone is not sufficient to
account for all of the observed SNe~Ia \citep[see also][]{gre05}. The
\citetalias{pri08} model makes an explicit prediction for the
evolution of the SN Ia rate with redshift, given an input SFH. We fit
the \citetalias{pri08} model -- essentially $\beta=0.5$ -- to the data
and show the resultant fit in Fig.~\ref{fig:rates_dtd_compare} (also
Table~\ref{tab:ABchi2}). We find a $\chi^2_{\nu}=4.08$, obviously a
substantially poorer fit than the generic power-law fit, or power-law
DTDs with $\beta=1$.
For completeness we also test an exponential DTD, i.e.,
$\Psi(t)\propto e^{-t/\tau}$. When $\tau$ is small, this
approximates a simple star-formation dependence, and when large, it
approximates a constant DTD. The results are in
Table~\ref{tab:ABchi2}; generally, single exponential DTDs provide
poorer fits to the data than the power-law DTDs, though the $\chi^2$
values remain formally acceptable.
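For reference, the Gaussian and exponential DTD shapes fit in this and
the preceding subsection are simply (a sketch, up to the overall
normalization, which is always left free in the fits; the power-law
form was sketched earlier):
\begin{verbatim}
# Unnormalized single-parameter DTD shapes; t and tau in Gyr, and the
# 40 Myr cut-off of the text applies to all forms.
import numpy as np

def psi_gaussian(t, tau=3.4, sigma_frac=0.2):
    # sigma = 0.2 * tau, as in the fixed-parameter fits above
    return np.exp(-(t - tau) ** 2 / (2.0 * (sigma_frac * tau) ** 2))

def psi_exponential(t, tau=2.6):
    # tau = 2.6 Gyr: the extended-sample fit for the piece-wise SFH
    return np.exp(-t / tau)
\end{verbatim}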
\subsubsection{``Two-component'' models}
\label{sec:aplusb}
Finally, we examine various two-component DTD models. The first is the
popular ``$A+B$'' model,
a simple, two-component model of SN~Ia production composed of
a ``prompt'' component that tracks the instantaneous SFR, and a
``delayed'' (or ``tardy'') component that is proportional to
\ensuremath{M_{\mathrm{stellar}}}\ \citep{man05,sb05}:
\begin{equation}
\ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}(z) = A\times\ensuremath{M_{\mathrm{stellar}}} (z) + B\times\mathrm{SFR}(z).
\label{eq:aplusb}
\end{equation}
Here, the $A$ and $B$ coefficients scale the \ensuremath{M_{\mathrm{stellar}}}\ and SFR
components, respectively. The prompt component consists of very young
SNe~Ia that explode relatively soon (in the model, immediately) after
the formation of their progenitors, whereas the delayed component
(scaled by $A$) corresponds to longer delay times and an underlying
old stellar population. This model is empirically attractive due to
the ease of comparison with readily observable galaxy quantities, such
as \ensuremath{M_{\mathrm{stellar}}}\ and SFR. Note that this $A+B$ model does not exactly
correspond to a DTD -- however, it can be easily converted to a DTD by
using the variation of \ensuremath{M_{\mathrm{stellar}}}\ with time in a SSP. This leads to a
DTD with some fraction of SNe Ia formed immediately (the $B$
component), followed by a slightly decreasing fraction to large times
(the $A$ component). This decrease is $\sim25$\% from 0.1 to 5\,Gyr,
and $\sim20$\% from 1\,Gyr to 10\,Gyr -- clearly significantly
shallower than a $\beta=1$ power law.
Some confusion exists over the exact definition of \ensuremath{M_{\mathrm{stellar}}}\ in
eqn.~(\ref{eq:aplusb}), and hence the definition of $A$. Some authors
\citep[e.g.][]{nei06} simply treat \ensuremath{M_{\mathrm{stellar}}}\ as the integral of the
SFH, equating it to the total formed mass, \ensuremath{M_{\ast}}. Others make
corrections for stars that have died, particularly in studies which
perform analyses on a galaxy-by-galaxy basis as this quantity is more
straightforward to link to observational data
\citep[e.g.,][]{sul06b}. This latter definition leads to larger $A$
values, as $\ensuremath{M_{\mathrm{stellar}}}(t)$ will be less than $\ensuremath{M_{\ast}}(t)$
\citepalias[see Fig.~7 of][for the size of this difference]{sul06b}.
Here, our $A$ values refer to \ensuremath{M_{\mathrm{stellar}}}, and we pass the cosmic SFH
through the P\'EGASE.2 routine \citep{leb04}, convolving the chosen
SFH with a single stellar population, and generating a galaxy SED$(t)$
from which mass $\ensuremath{M_{\mathrm{stellar}}}(t)$ can be estimated. The evolving
\ensuremath{M_{\mathrm{stellar}}}\ and SFR are used to perform a fit of
eqn.~(\ref{eq:aplusb}) to the volumetric rate evolution, with
results listed in Table~\ref{tab:ABchi2}.
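In outline, once \ensuremath{M_{\mathrm{stellar}}}$(z)$ and SFR$(z)$ are in
hand, the fit of eqn.~(\ref{eq:aplusb}) reduces to weighted linear
least squares in the two coefficients. The sketch below uses toy
placeholder tracks rather than the P\'EGASE.2 output:
\begin{verbatim}
# Sketch of eqn. (aplusb): SNR_Ia(z) = A * Mstellar(z) + B * SFR(z).
import numpy as np

z = np.linspace(0.1, 1.1, 11)
mstellar = 4e8 - 1.5e8 * z         # toy M_stellar(z)  [Msun/Mpc^3]
sfr = 0.02 * (1.0 + z) ** 3.3      # toy cosmic SFR(z) [Msun/yr/Mpc^3]

A, B = 1.9e-14, 3.3e-4             # extended-sample values from the table
rate = A * mstellar + B * sfr      # predicted rate in SNe/yr/Mpc^3
# Given measured rates and errors, A and B follow from ordinary weighted
# linear least squares, since the model is linear in both coefficients.
\end{verbatim}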
Fitting the SNLS rates alone gives coefficients of $A=(1.6\pm
0.5)\ensuremath{\times10^{-14}\,\mathrm{SNe\,yr}^{-1}\,M_\odot^{-1}}$ and $B=(3.4\pm 0.3)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}$ for the \citet{li08} piece-wise
SFH, with $\chi^2_{\nu}=0.74$ for $\nu=7$. Incorporating the external
SN~Ia rate from LOSS, SDSS and SDF yields values of $A=(1.9\pm
0.1)\ensuremath{\times10^{-14}\,\mathrm{SNe\,yr}^{-1}\,M_\odot^{-1}}$ and $B=(3.3\pm 0.2)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}$, with $\chi^2_{\nu}=0.60$.
This fit is compared to other DTDs in
Fig.~\ref{fig:rates_dtd_compare}.
Next, we set $A=0$ to investigate the possibility of a pure
star-formation dependence, fitting only the prompt ($B$) component to
the SNLS data. This results in an upper limit of $B=(4.4\pm
0.3)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}$ ($\chi^2_{\nu}=1.81$), equivalent to the normalized SFH
curve plotted in Fig.~\ref{fig:rates_comp}. Adding the external data
rates to the SNLS values gives an upper limit of $B=(5.3\pm
0.4)\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}$, but again with a very poor fit quality
($\chi^2_{\nu}=6.22$). While the SNLS results themselves are
marginally consistent with a pure prompt component
(Table~\ref{tab:ABchi2}), adding the additional constraints supplied
by the low-$z$ data yield a very poor $A=0$ fit. The related test of
setting $B=0$ and testing for only the delayed component similarly
gives very poor fit results (Table~\ref{tab:ABchi2}).
As discussed above, these $A$ and $B$ values depend on the adopted
SFH. Even ignoring the systematics in the individual SFR measurements
to which the SFH model is fit, the type of fit used also introduces
considerable uncertainty. Using the Cole et al. form of the SFH, for
example, changes the best-fit values to $A=1.5\pm0.2\ensuremath{\times10^{-14}\,\mathrm{SNe\,yr}^{-1}\,M_\odot^{-1}}$ and
$B=4.3\pm0.3\ensuremath{\times10^{-4}\,\mathrm{SNe\,yr}^{-1}\,(M_\odot\,\mathrm{yr}^{-1})^{-1}}$, a significant variation with a fit of similar quality to
that for the piece-wise form. Thus care must be taken in comparing
any particular $A+B$ values, or any particular prediction of SN Ia
rate evolution, without ensuring the consistent use of a SFH and
derivation of \ensuremath{M_{\mathrm{stellar}}}.
Clearly, and as well documented in the literature, the $A+B$ model
must be a significant approximation to the physical reality in SN Ia
progenitor systems. In particular, there must be \textit{some} delay
time for the prompt component, and it is unlikely to act as a delta
function in the DTD. We test this by approximating $\Psi(t)$ as two
discrete bins in time, i.e., a step function with a value $\Psi_1$ at
times $t<t_{\mathrm{split}}$, and $\Psi_2$ at $t\geq
t_{\mathrm{split}}$. We choose $t_{\mathrm{split}}=420$\,Myr
(following, e.g., \citealt{bra10}), and use a sigmoid function to
ensure the DTD is continuous across $t_{\mathrm{split}}$.
This DTD also provides a good fit to the data
(Table~\ref{tab:ABchi2}), with a significant detection of the two
components -- $\Psi_1$ and $\Psi_2$ are $>0$ at 5$\sigma$ in all three
SFHs considered (and typically $\sim10\sigma$). We experimented with
making this function more general, by allowing $t_{\mathrm{split}}$ to
vary. However, these fits were not constraining, although they prefer
$t_{\mathrm{split}}\lesssim2$\,Gyr, and the fit parameters are highly
correlated (i.e., as $t_{\mathrm{split}}$ decreases, $\Psi_1$
increases to ensure a similar fraction of SNe Ia are generated from
the ``prompt'' component).
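For concreteness, a smoothed two-bin DTD of this kind can be written as
follows (the sigmoid width $w$ is our own illustrative choice, as the
text does not specify one):
\begin{verbatim}
# Two-bin step DTD smoothed with a sigmoid so Psi is continuous at
# t_split; psi1 and psi2 are the piece-wise-SFH fit values from the
# table, in 1e-14 SNe/yr per Msun formed. Times in Gyr.
import numpy as np

def psi_two_bin(t, psi1=90.0, psi2=1.2, t_split=0.42, w=0.02):
    step = psi2 + (psi1 - psi2) / (1.0 + np.exp((t - t_split) / w))
    return np.where(t < 0.04, 0.0, step)   # 40 Myr cut-off as before
\end{verbatim}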
One direct outcome of this simple two-bin DTD model is that, for our
default cosmic SFH, while $\sim70$\% of SNe Ia originate from the
prompt component integrated over cosmic time, at $z=0$ the prompt
component accounts for only $\sim25$\% of SNe Ia. These fractions
remain fairly constant out to $t_{\mathrm{split}}\sim2$\,Gyr.
We also explored more bins in $\Psi$ -- e.g., the three bin DTD of
\citet{bra10} and \citet{mao11} -- but, while these were consistent
with our data, they did not provide improved fits over the two-bin
DTD, and again, the parameters themselves were not well constrained.
\subsection{Discussion}
\label{sec:comparison-other-dtd}
We now compare our DTDs inferred from the volumetric \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\ data with
other independent estimates from the literature, comparing both the
parametric form of our best-fit DTDs, as well as the normalization. In
Fig.~\ref{fig:dtd}, we plot our inferred DTDs, with normalizations
from the best-fits to the volumetric SN Ia rates, and compare to other
empirical determinations of the DTD from the literature, using a variety
of methods \citep{bra10,mao10,mao11}. The data points come from the
analysis of SN Ia rates in galaxy clusters \citep{mao10}, the
reconstruction of the DTD from an analysis of the LOSS SN Ia host
galaxy spectra \citep{mao11}, and a similar analysis of the host
galaxies of SDSS SNe Ia \citep{bra10}.
Where necessary we convert these external measurements to a Salpeter
IMF -- from a diet-Salpeter IMF \citep{bel03} for \citet{mao10,mao11}
and a \citet{kro07} IMF for \citet{bra10}. We also adjust the
\citet{mao10} and \citet{mao11} results downwards as their
measurements presumably include SN1991bg-like events which are not
included in our SNLS analysis (the Brandt et al. analysis does not
include these SNe, and their stretch range is well matched to our
analysis -- see their figure 2). Furthermore, we correct the
\citet{bra10} DTD points upwards by 0.26\,dex to account for stellar
masses in Brandt et al. that are 0.26\,dex too high due to a
normalization issue in the VESPA code used to derive them (D. Maoz,
private communication; R. Tojeiro, private communication).
\begin{figure}
\plotone{f14.eps}
\caption{The SN Ia delay-time distribution (DTD) inferred from fits to
our volumetric rate data, compared to other determinations from the
literature. The solid line shows the best-fitting power-law (PL) DTD
and the long-dashed line the best-fitting ``$A+B$'' (AB) model from
Table~\ref{tab:ABchi2}, each drawn for the two SFHs that give the
most different results. The short-dashed line is the best-fit
power-law DTD from \citet{mao10}, which has $\beta=1.3$, but with
their normalization. The $A+B$ model has been adjusted, for plotting
purposes, so that the instantaneous component is spread over the
first 400\,Myr. The DTD data points come from \citet[][red
circles]{bra10}, \citet[][open circles]{mao10}, and \citet[][black
circles]{mao11}. The horizontal error-bars indicate the bin widths
on these points.}
\label{fig:dtd}
\end{figure}
Although the generic shape of the power-law DTD inferred from the
volumetric rate data matches the external DTD data well
(Fig.~\ref{fig:dtd}), it is clear that the best-fit normalization
required to reproduce the volumetric rate data differs from that
required to fit some of the external samples. The DTDs inferred from
volumetric rate data are generally consistent with the \citet{bra10}
analysis, and the first and third bins of the \citet{mao11} data.
However, the normalization of the best-fit DTD to the \citet{mao10}
cluster data lies significantly above our best-fit DTD. Integrating
our best-fit power-law DTDs gives
$N_{\mathrm{Ia}}/\ensuremath{M_{\ast}}\sim(4.4\pm0.2)$--$(5.2\pm0.2)\times10^{-4}\,\mathrm{SNe}\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}^{-1}$
($\eta=2.0-2.5$\%) depending on the SFH, in good agreement with
similar analyses \citep{hb10}. However, this is significantly below
the value of $\sim40\times10^{-4}\,\mathrm{SNe}\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}^{-1}$ obtained
by integrating the \citet{mao10} ``optimal iron-constraint'' power-law
DTD (for our IMF), or $\sim24\times10^{-4}\,\mathrm{SNe}\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}^{-1}$
from the ``minimal iron constraint'' DTD. We can sanity check our
normalizations by predicting the SN Ia rate in the Milky Way, given
our DTD values. Assuming a Milky Way stellar mass of
$\sim5\times10^{10}\ensuremath{\mathrm{M}_{\mathrm{\odot}}}$ and a SFR of $\sim4\ensuremath{\mathrm{M}_{\mathrm{\odot}}}\mathrm{yr}^{-1}$,
our $A+B$ DTDs give a predicted Milky Way normal SN Ia rate of
0.22-0.25 events per century. This is in good agreement with
independent estimates of the actual rate \citep[$\sim0.35$ to $0.40$
events per century; e.g.,][]{tam94,rol06,li11a} given the
uncertainties involved.
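The arithmetic behind this sanity check is straightforward (a
back-of-envelope reproduction using the extended-sample,
piece-wise-SFH coefficients):
\begin{verbatim}
# Milky Way SN Ia rate from the A+B fit: A*Mstellar + B*SFR.
A = 1.9e-14       # SNe/yr/Msun
B = 3.3e-4        # SNe/yr/(Msun/yr)
mstellar = 5e10   # Msun, assumed Milky Way stellar mass
sfr = 4.0         # Msun/yr, assumed Milky Way SFR

rate = A * mstellar + B * sfr
print(100.0 * rate)   # ~0.23 events per century
\end{verbatim}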
So why is the normalization in the \citet{mao10} cluster SN Ia DTD
(which is not arbitrary) different from those derived from volumetric
rate data by such a large factor? Several possibilities exist that may
explain the discrepancy. The first is that the SNLS, and by extension
all other \ensuremath{\mathrm{SNR}_{\mathrm{Ia}}}\ studies, are missing a significant number of SNe Ia, a
factor of at least four. However, it is difficult to understand how
this might occur, given some of the cluster rates used in
\citet{mao10} are drawn from very similar surveys (including SNLS and
SDSS-SN).
A second possibility is that the cosmic SFH models used in our
analysis over-predict the actual SFR -- a lower SFH normalization
would require a higher DTD normalization to match the volumetric rate
data. To test this possibility, we repeat our rates fitting analysis
using the SFH of \citet{wil08}, a SFH derived from a requirement to
match the redshift evolution of \ensuremath{M_{\mathrm{stellar}}}. Although this agrees with
other SFH estimates at $z<0.7$, it suggests that high-redshift SFRs
could be $\sim0.6$\,dex lower compared to the models used in this
paper. However, even using this SFH, the integrated power-law DTD
only gives $6.6\pm0.6\times10^{-4}\,\mathrm{SNe}\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}^{-1}$
($\eta=3.1\pm0.3$\%), still some distance short of that apparently
required by the clusters analysis.
Other options to adjust the SFH normalization downward, such as
reducing the dust extinction corrections applied to the various
star-formation indicators which make up the SFH compilations, are
probably not viable given the long-established and significant
evidence for obscuration in star-forming galaxies, and the agreement
between the different diagnostics \citep[see discussion
in][]{hb06,li08}.
A third possibility is that the cluster rates used to derive the DTD
of \citet{mao10} have some contamination from ``younger'' SNe Ia, thus
increasing their rates above that appropriate for their age, assuming
a redshift of formation of 3. This, and other similar potential
systematics from the clusters analysis, are discussed in detail in
\citet{mao10}.
Finally, it may well be the case that the assumption of a single DTD
is not adequate, given the various indications that there may be more
than one progenitor channel. This would suggest that there is not one
universal DTD that is independent of redshift, or other variables,
such as metallicity.
\section{Stretch dependence}
\label{sec:stretchdep}
There is an observed correlation between the photometric properties of
SNe~Ia and their host environments \citep{ham95,ham00,sul06b}.
Brighter SNe~Ia with slower light-curves tend to originate in
late-type spiral galaxies, such that the rates of higher-stretch
objects are proportional to star-formation on short ($\sim 0.5$ Gyr)
timescales \citepalias{sul06b}. Meanwhile, fainter, faster-declining
SNe~Ia are more likely to be associated with older stellar
populations. This split seems to extend to the recovered DTDs --
\citet{bra10} show that the recovered DTD for low and high stretch SNe
Ia are very different, consistent with the above picture of young SNe
Ia being high stretch and old SNe Ia low stretch. A larger fraction of
high-stretch SNe Ia might therefore be expected at high redshift,
tracking the increase in the cosmic SFH and hence the preponderance of
younger stars. Indeed, \citet{how07} find a modest increase in the
average light-curve width out to redshifts of $z\sim 1.25$. The rate
evolution for low-stretch SNe~Ia should therefore demonstrate a
correspondingly shallower increase with redshift than that of
higher-stretch objects.
\begin{figure}
\plotone{f15.eps}
\caption{SN~Ia rates split at the median SNLS stretch value of
$s_0=1.0$. The lower-stretch rates are shown as open circles, and
high-stretch rates by the solid circles. We additionally show the
$s<0.8$ rate measurement from \citet{gon11}. The measured rates have
simple weighted errors and are uncorrected for systematic errors in
redshift. The first two redshift bins ($z=0.15,0.25$) are combined
here due to the small number of low-$z$ objects in the split samples.
Power-law fits to the rates yield similar slopes over the redshift
range shown: $\alpha_{s<0.8}=1.63\pm0.28$ (dotted line),
$\alpha_{0.8\leq s<1}=2.48 \pm 0.97$ (dashed line) and
$\alpha_{s\geq1}=2.11 \pm 0.60$ (solid line).}
\label{fig:scomp}
\end{figure}
We investigate this trend in Fig.~\ref{fig:scomp}, splitting the SNLS
sample at the median stretch value of $s_0=1.0$. The measured rates
with simple weighted errors are plotted separately for objects with
$0.8 \leq s < 1.0$ (open circles) and those with $1.0\leq s < 1.3$
(filled circles). We also show the $s<0.8$ rate measurement from
\citet{gon11}. The samples from the analysis in this paper exhibit a
comparable rise in their rates with redshift, with power-law slopes of
$\alpha_{0.8\leq s<1}=2.48 \pm 0.97$ and $\alpha_{s\geq1}=2.11 \pm
0.60$. Extrapolating the fits to each of the samples back to $z=0$
gives the following fractions of SNe Ia at $z=0$ in each group: 17\%
($s<0.8$), 39\% ($0.8 \leq s < 1.0$), and 44\% ($1.0\leq s < 1.3$).
The $s<0.8$ fraction is consistent with the LOSS SN1991bg-like
fraction of 15\% \citep{li11a}, given that true 1991bg-like events
have fitted stretches $s\lesssim0.7$ \citep{gon11}. The fraction of
very low stretch ($s<0.8$) SNe Ia shows only a small
increase with increasing redshift \citep[see also][]{gon11}, although
the uncertainties are very large.
The stretch-split rates are only considered out to $z=0.8$, beyond
which redshift the stretch errors become large ($>0.1$), and the lower
efficiencies of the redder, lower-stretch SNe~Ia can bias the observed
stretch evolution by driving up the $s<1$ rates
(Fig.~\ref{fig:objeffs}).
\begin{figure}
\plotone{f16.eps}
\caption{The ratio of low-stretch ($0.8\leq s<1.0$) to high-stretch
  ($1.0\leq s<1.3$) SNe Ia volumetric rates as a function of redshift.
The horizontal line shows the weighted mean low-to-high stretch
  ratio for the full sample, $1.01$. Various predicted trends,
following the analysis of \citet{how07} and using the $A+B$ and
power-law DTDs, are overlaid. These trends are not fit to the
data-points plotted. The solid lines represent the piece-wise SFH,
and the dashed lines the Cole et al. form of the SFH, from
\citet{li08}. The shaded areas show, for each predicted trend, the
uncertainty expected by shifting the stretch distributions of
\citet{how07} by 0.025. The reduced $\chi^2$ of the model fits to
the data range from $\chi^2_{\nu}=1.8$ to $\chi^2_{\nu}=2.6$ over
the redshift range shown, compared with $\chi^2_{\nu}=1.35$ for a
flat line at the weighted mean ratio.}
\label{fig:sratio}
\end{figure}
Fig.~\ref{fig:sratio} shows the ratio of the rates split by stretch
for $s>0.8$. To compare these observed data with any expected
evolution, we need model stretch distributions for ``old'' and
``young'' SNe, together with a mechanism for predicting the relative
evolution of these two components with redshift. For the former, we
take the two stretch distributions for young and old SNe Ia from
\citet{how07}: the old component is represented by a Gaussian with
$\langle s_{\mathrm{old}} \rangle=0.945$ and
$\sigma_{s_{\mathrm{old}}}=0.077$, while the young component has
$\langle s_{\mathrm{young}} \rangle=1.071$ and
$\sigma_{s_{\mathrm{young}}}=0.063$. To estimate the relative
redshift evolution of the old and young components, we use the
best-fitting A+B values from Table~\ref{tab:ABchi2} (assigning $A$ to
the old SNe and $B$ to the young SNe), as well as the power-law DTD.
For this latter DTD, we assign SNe born at $t<2$\,Gyr in the DTD to
the young component, and SNe born at $t\geq2$\,Gyr to the old component.
Together with a SFH, these models then predict the relative fraction
of low and high stretch SNe as a function of redshift, over-plotted in
Fig.~\ref{fig:sratio}. We also vary the \citet{how07} stretch
distributions by adjusting $\langle s \rangle$ by $\pm0.025$ for the
two components; these are shown as the hashed gray areas in the
figure.
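A sketch of this construction for the $A+B$ case (with toy
\ensuremath{M_{\mathrm{stellar}}}$(z)$ and SFR$(z)$ tracks; only the
mixing logic is intended to be faithful to the procedure described
above):
\begin{verbatim}
# Predicted low/high-stretch rate ratio: mix the "old" and "young"
# stretch Gaussians of Howell et al. (2007) with z-dependent weights
# from the A+B model. Mstellar(z) and SFR(z) are toy placeholders.
import numpy as np
from scipy.stats import norm

z = np.linspace(0.1, 0.8, 8)
mstellar = 4e8 - 1.5e8 * z            # toy M_stellar(z)
sfr = 0.02 * (1.0 + z) ** 3.3         # toy SFR(z)
w_old, w_young = 1.9e-14 * mstellar, 3.3e-4 * sfr   # A and B components

old = norm(0.945, 0.077)              # old-population stretch Gaussian
young = norm(1.071, 0.063)            # young-population stretch Gaussian

def frac(dist, lo, hi):               # fraction of SNe in a stretch bin
    return dist.cdf(hi) - dist.cdf(lo)

low = w_old * frac(old, 0.8, 1.0) + w_young * frac(young, 0.8, 1.0)
high = w_old * frac(old, 1.0, 1.3) + w_young * frac(young, 1.0, 1.3)
print(low / high)                     # declines with increasing z
\end{verbatim}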
As expected, the predicted ratios show a smooth decline from a large
fraction of low-stretch SNe at $z=0.1$. As the relative contribution
from delayed SNe~Ia to the rates decreases with increasing redshift,
so too should the dominance of lower-stretch objects \citep[see also
Fig.~1 in][]{how07}. The prediction based on the power-law DTD shows a
shallower evolution with redshift, reflecting the extended age of the
young component relative to the simplistic $A+B$ model.
However, Fig.~\ref{fig:scomp} and Fig.~\ref{fig:sratio} are
surprising. If broad-lightcurve SNe Ia favor a young environment and
narrow-lightcurve events favor an old environment, as has been well
established, then the ratio of narrow to broad SNe ought to be
changing as star formation increases with redshift. But all of the
predictions are a relatively poor match to the SNLS sample in the
higher redshift bins. We vary $s_0$ by $\pm5\%$ to assess the
sensitivity of our results to the stretch split value (default of
1.0), but find no significant improvement in the agreement with the
predicted model as compared with the straight-line fit at the weighted
mean ratio.
The lack of an observed evolution may be due to several factors. The
first is that the age-split between low- and high-$s$ SNe Ia may be more
subtle than previously appreciated. Alternatively, the main effect may
be dominated by $s<0.8$ SNe Ia, which do show a different evolutionary
trend with redshift (Fig.~\ref{fig:scomp}). The limited time baseline
of only $\sim4.5$\,Gyr from $z=0.2-0.8$ may also be a factor, and of
course limitations of the method (e.g., the arbitrary cutoff for
SNe to be ``young'' or ``old'' in the power-law DTD) could mask any
real effect.
Some other aspects of Fig.~\ref{fig:sratio} are better understood.
The $A+B$ model over-predicts low-$s$ SNe at $z=0$ because it has only
20\% fewer SNe in the DTD at 12\,Gyr compared to 1\,Gyr, in apparent
contrast to the data and to the $t^{-1}$ model, where the rate falls
by an order of magnitude over this baseline. These excess old SNe Ia,
formed at $z=1.5$, show up after a 10\,Gyr delay at $z=0$. However, the
difference in the predictions between the $A+B$ and power-law models
is not large, so this unphysical assumption cannot provide the only
solution for the lack of observed evolution in Fig.~\ref{fig:sratio}.
\section{Summary}
In this paper, we have probed the volumetric rate evolution of
``normal'' $0.8<s<1.3$ SNe Ia using a sample of $691$ events from the
Supernova Legacy Survey (SNLS) in the range $0.1<z<1.1$, $286$ of
which have been confirmed spectroscopically. The SNLS rates increase
with redshift as $(1+z)^{\alpha}$ with $\alpha=2.11\pm 0.28$, and
show no evidence of flattening beyond $z\sim 0.5$. Due to
spectroscopic incompleteness and the decrease in detection efficiency
for the SNLS sample, a rollover in the slope cannot be ruled out
beyond $z\sim 1$ based on the SNLS data alone.
As a significant component of the SN~Ia rate is linked with young
stellar populations, an increasing fraction of SN~Ia events may suffer
the effects of host extinction at higher redshifts. In our rate
calculation method, the effect of SN color is factored directly into
the detection efficiency determinations: detection recovery is
evaluated empirically according to the observed SN color regardless of
its cause. Redder objects at a given redshift have lower detection
efficiencies, and are correspondingly more heavily weighted in the
rates determination.
Combining the SNLS data with that from other SN Ia surveys, we fit
various simple delay-time distributions (DTDs) to the volumetric SN Ia
rate data. DTDs with a single Gaussian are not favored by the data. We
find that simple power-law DTDs ($\Psi(t)\propto t^{-\beta}$) with
$\beta\sim1$ ($\beta=0.98\pm0.05$ to $\beta=1.15\pm0.08$ depending on
the parameterization of the cosmic SFH) can adequately explain all the
SN Ia volumetric rate data, as can two-component models with a
prompt and delayed channel. These models cannot be separated with
the current volumetric rate data. Integrating these different DTDs
gives the total number of SNe Ia per solar mass formed (excluding
sub-luminous $s<0.8$ events) of
$N_{\mathrm{Ia}}/\ensuremath{M_{\ast}}\sim(4.4$--$5.7)\times10^{-4}\,\mathrm{SNe}\,\ensuremath{\mathrm{M}_{\mathrm{\odot}}}^{-1}$
(assuming a Salpeter IMF), depending on the star formation history and
DTD model. This is in good agreement with other similar analyses, but
lies significantly below the number expected from DTDs derived from
cluster SN Ia rates.
Other techniques, such as fitting the SFH of individual
galaxies \citep{sul06b,bra10,mao11}, or observing a simplified subset
of galaxies \citep{tot08,mao10}, use more information, and in
principle ought to be more reliable. However, each technique has
significant drawbacks, such as contamination \citep{tot08,mao11},
limitations of SED fitting codes \citep{sul06b,bra10,mao11}, and the
assumption that all cluster galaxies formed at $z=3$ in a
delta-function of star-formation \citep{mao10}. Therefore, our
results are an important complementary constraint. By presenting an
evolution in the SN Ia rate over a large redshift baseline done
self-consistently by a single survey we have for the first time
mitigated the primary drawback of this method -- having to combine
myriad rate determinations from multiple surveys, all done with
different assumptions and biases, sometimes disparate by large factors
\citep{nei06}.
We also find no clear evidence for a difference in the rate evolution
for SNLS samples with $0.8\leq s < 1.0$ and $1.0\leq s< 1.3$ out to
$z=0.8$, although the stretch evolution model from \citet{how07}
cannot be ruled out conclusively. Stretch evolution plays a more
significant role in the sub-luminous population \citep{gon11}, which
shows a much flatter evolution than the $s>0.8$ sample.
Next-generation surveys such as the Dark Energy Survey (DES), Pan-STARRS,
the Palomar Transient Factory (PTF), and SkyMapper, many of which are
already underway, are finding thousands of SNe Ia (in comparison to
the $\sim 700$ in this study). Statistical rate determinations ought
to improve, but systematic difficulties will remain, as not all SNe
can be spectroscopically confirmed. However, large number statistics
will allow the construction of sub-samples larger than the three (split
by stretch) analyzed here. Comparison of the relative rates of SNe
with different properties and in different environments may ultimately
improve deduced DTDs, and allow for the construction of different DTDs
for subsets of SNe Ia.
\acknowledgments
We are sincerely grateful to the entire Queued Service Observations
team and staff at CFHT for their patience and assistance throughout
the SNLS real-time observing period. We are particularly indebted to
Pierre Martin, Jean-Charles Cuillandre, Kanoa Withington, and Herb
Woodruff. Canadian collaboration members acknowledge support from
NSERC and CIAR; French collaboration members from CNRS/IN2P3,
CNRS/INSU and CEA. MS acknowledges support from the Royal Society.
This work is based on observations obtained with MegaPrime/MegaCam, a
joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii
Telescope (CFHT) which is operated by the National Research Council
(NRC) of Canada, the Institut National des Sciences de l'Univers of
the Centre National de la Recherche Scientifique (CNRS) of France, and
the University of Hawaii. This work is based in part on data products
produced at the Canadian Astronomy Data Centre as part of the
Canada-France-Hawaii Telescope Legacy Survey, a collaborative project
of NRC and CNRS.
This work is based in part on observations obtained at the Gemini
Observatory, which is operated by the Association of Universities for
Research in Astronomy, Inc., under a cooperative agreement with the
NSF on behalf of the Gemini partnership: the National Science
Foundation (United States), the Science and Technology Facilities
Council (United Kingdom), the National Research Council (Canada),
CONICYT (Chile), the Australian Research Council (Australia), CNPq
(Brazil) and CONICET (Argentina). Gemini program IDs: GS-2003B-Q-8,
GN-2003B-Q-9, GS-2004A-Q-11, GN-2004A-Q-19, GS-2004B-Q-31,
GN-2004B-Q-16, GS-2005A-Q-11, GN-2005A-Q-11, GS-2005B-Q-6,
GN-2005B-Q-7, GN-2006A-Q-7, GN-2006B-Q-10, and GN-2007A-Q-8.
Observations made with ESO Telescopes at the Paranal Observatory under
program IDs 171.A-0486 and 176.A-0589. Some of the data presented
herein were obtained at the W.M. Keck Observatory, which is operated
as a scientific partnership among the California Institute of
Technology, the University of California and the National Aeronautics
and Space Administration. The Observatory was made possible by the
generous financial support of the W.M. Keck Foundation.
\section{Introduction}
Graph-structured data are ubiquitous in real-world applications, ranging from social networks, recommender systems, and knowledge graphs to chemical molecules. Despite the effectiveness of Euclidean space for graph-related learning tasks, its ability to encode complex patterns is intrinsically limited by its polynomially expanding capacity. Although nonlinear techniques~\cite{non_linear_embedding} help to mitigate this issue, complex graph patterns may still need an embedding dimensionality that is computationally intractable. Recent research~\cite{bronstein2017geometric} reveals that much complex data exhibits a non-Euclidean underlying anatomy; for example, datasets with tree-like structure (e.g., hierarchies, power-law distributions) exist extensively in many real-world networks, such as the hypernym structure in natural languages, the subordinate structure of entities in knowledge graphs, organizational structures in financial fraud, and item-user interactions in recommender systems.
In these situations, Euclidean space fails to produce the most powerful or adequate geometrical representations.
Recently, hyperbolic space has gained increasing popularity in the representation learning community~\cite{hgcn2019,liu2019HGNN,peng2021hyperbolic,liu2022enhancing,yang2021hyper,chen2021modeling} and various applications~\cite{sun2021hgcf,yang2021discrete,yang2022hicf,yang2022hrcf}.
A characteristic geometric property of hyperbolic space is that its volume increases exponentially with its radius, whereas the volume of Euclidean space grows only polynomially.
Such a geometric trait brings two benefits for dealing with complex real-world scenarios. The \textit{first} is that hyperbolic space embeds hierarchies with minimal distortion, since it closely matches the growth rate of tree-like data while Euclidean space cannot. The \textit{second} is that even with a low-dimensional embedding space, hyperbolic models are able to produce surprisingly high-quality representations, which makes them especially favorable in low-memory and low-storage scenarios.
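This exponential-versus-polynomial contrast is easy to see numerically: in the hyperbolic plane of curvature $-1$, the circumference of a circle of radius $r$ is $2\pi\sinh(r)$, versus $2\pi r$ in the Euclidean plane (a small illustrative check of our own):
\begin{verbatim}
# Circle circumference vs. radius: exponential growth in the hyperbolic
# plane (curvature -1) against linear growth in the Euclidean plane.
import numpy as np

r = np.arange(1, 6)
print(2 * np.pi * np.sinh(r))   # ~7.4, 22.8, 63.0, 171.5, 466.3
print(2 * np.pi * r)            # ~6.3, 12.6, 18.8, 25.1, 31.4
\end{verbatim}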
We believe this nascent topic closely fits the interests of the machine learning and data mining community. However, as far as we know, there are few related tutorials or seminars at present. The most closely related one is a workshop held at NeurIPS 2020\footnote{https://sites.google.com/view/diffgeo4dl/home}, which gives a general overview of applications of differential geometry in deep learning without an emphasis on graph-structured settings or hyperbolic geometry. Hence, it is necessary to have this tutorial, which provides a more comprehensive discussion of the methods, applications, and challenges of this fast-growing area.
\section{Target Audience and Requirements}
The tutorial targets machine learning researchers of any background. Young researchers who are interested in graph representation learning, non-Euclidean representation learning, network embedding, and how hyperbolic geometry can be applied in multiple scenarios are especially welcome. No special data mining background is required of the participants, but it is helpful if the audience already has some knowledge of concepts from mathematics, such as vector spaces and manifolds. If not, we also provide intuitive descriptions of all of them during the tutorial. The expected audience size is 200. No special technical equipment is needed, and some way of taking notes is suggested.
\section{Outline}
The tutorial is mainly organized around the recently released survey papers \cite{peng2021hyperbolic} and \cite{yang2022hyperbolic}, but with more emphasis on the challenges, presenting potential or newly developed solutions to address them. The outline is sketched as follows:
\begin{outline}
\1 Introduction (Min, \textbf{30 min}) \cite{gcn2017,2010hyperbolic,zhou2020graph}
\2 An overview of graph representation learning
\2 Brief introduction of Riemannian geometry
\2 The motivation for hyperbolic graph representation learning
\1 Hyperbolic graph representation learning (HGRL) (Menglin, \textbf{30 min}) \cite{yang2022hyperbolic,peng2021hyperbolic}
\2 Hyperbolic shallow models
\2 Hyperbolic neural networks
\3 Hyperbolic MLR
\3 Hyperbolic RNN
\2 Hyperbolic Graph Neural Networks
\3 Hyperbolic feature transformation
\3 Hyperbolic neighborhood aggregation
\3 Hyperbolic non-linear activation
\1 Applications (Menglin, \textbf{45 min})
\2 HGRL for recommender systems \cite{HyperML2020,sun2021hgcf,yang2022hrcf,chen2021modeling,wang2021fully}
\2 HGRL for knowledge graph \cite{wu2021hyperbolic,bose2020latent}
\2 HGRL for other applications \cite{yang2021discrete,sawhney2021exploring}
\1 Advanced Topics (Min, \textbf{60 min})
\2 Complex structures~\cite{zhu2020graph,bachmann2020constant,wang2021mixed,fu2021ace,li2022curvature}
\2 Evolving interactions~\cite{sun2021hyperbolic,yang2021discrete}
\2 Geometry-aware learning~\cite{yang2022hrcf}
\2 Trustworthiness and scalability~\cite{zhang2021we,suzuki2021GraphEmbedding,chen2021fully,sun2021hgcf}
\end{outline}
Video recordings and slides can now be accessed via the tutorial homepage (\url{https://hyperbolicgraphlearning.github.io/}) for readers' convenience.
\section{Contributors}
\begin{itemize}
\item \textbf{Min Zhou} is currently a Principal Research Engineer at Huawei Noah’s Ark Lab, Shenzhen, China. She received the B.S. degree in Automation from the University of Science and Technology of China in 2012, and the Ph.D. degree from the Industrial Systems Engineering and Management Department, National University of Singapore, in 2016. Her interests include pattern mining and machine learning, and their applications to sequence and graph data. She has published several works related to graph learning and mining in top conferences and journals.
\item \textbf{Menglin Yang} is currently a final-year PhD student in the Department of Computer Science and Engineering, The Chinese University of Hong Kong (CUHK). His research interests include hyperbolic graph learning and machine learning. Several of his works related to hyperbolic graph representation learning were accepted at recent top conferences, including KDD 2021, WSDM 2022, WWW 2022, KDD 2022, and SIGIR 2022.
\item \textbf{Lujia Pan} is an expert at Huawei Noah’s Ark Lab, Shenzhen, China. She currently heads the intelligent operation and maintenance team and is working closely with a group of researchers and engineers on projects such as fault diagnosis, anomaly detection, and prediction in ICT (information and communications technology) networks. Her research interests cover various issues related to improving the performance and reliability of intelligent operation and maintenance, including representation learning, time series analysis, label denoising, and active learning. In these research areas, she has published more than 20 technical papers in journals and conferences and is the inventor of more than 40 patents. She is currently a part-time PhD student in the Department of Control Systems and Engineering, Xi’an Jiaotong University. Before that, she received her B.S. and M.S. degrees in information engineering from Chongqing University of Posts and Telecommunications.
\item \textbf{Irwin King} is the Chair and Professor of Computer Science and Engineering at The Chinese University of Hong Kong. His research interests include machine learning, social computing, AI, web intelligence, data mining, and multimedia information processing. In these research areas, he has over 300 technical publications in journals and conferences. He is an Associate Editor of the Journal of Neural Networks (NN). He is an IEEE Fellow, an ACM Distinguished Member, and a Fellow of Hong Kong Institute of Engineers (HKIE). He has served as the President of the International Neural Network Society (INNS), General Co-chair of The WebConf 2020, ICONIP 2020, WSDM 2011, RecSys 2013, ACML 2015, and in various capacities in a number of top conferences and societies such as WWW, NIPS, ICML, IJCAI, AAAI, APNNS, etc. He is the recipient of the ACM CIKM 2019 Test of Time Award, the ACM SIGIR 2020 Test of Time Award, and 2020 APNNS Outstanding Achievement Award for his contributions made in social computing with machine learning. In early 2010 while on leave with AT\&T Labs Research, San Francisco, he taught classes as a Visiting Professor at UC Berkeley. He received his B.Sc. degree in Engineering and Applied Science from California Institute of Technology (Caltech), Pasadena and his M.Sc. and Ph.D. degree in Computer Science from the University of Southern California (USC), Los Angeles.
\end{itemize}
\section{Societal Impact}
In this tutorial, we focus on the emerging field of hyperbolic graph representation learning. We identify several challenges and present potential solutions to address them, including some attempts of our own. Our tutorial shares the main ideas among researchers, with the aim of pushing the boundary of graph representation research. Meanwhile, we cover applications in various domains.
For instance, hyperbolic graph representation techniques have been applied to understand cell developmental processes~\cite{klimovskaia2020poincare}, helping to discover hierarchies in scRNAseq data. They have also been applied to disease-spreading analysis~\cite{hgcn2019} and to drug discovery, such as molecular property prediction~\cite{yu2020semi}. These lines of research no doubt help chemists to discover new drugs more efficiently, with potentially large positive social impact. We believe that the better we understand hyperbolic-space-powered graph representation techniques, the better we can solve these challenging problems and further benefit society.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction and preliminaries}
\subsection{Background}
This article is devoted to the study of two related notions that appear
to be important in the analysis of dependent first order theories
(theories without the independence property): strict independence
and strict non-forking. We explore basic properties of these concepts
and establish connections between the two.
A central concept in our investigation, and a major technical tool,
is that of a \emph{witness}: an infinite sequence that always witnesses
dividing. We characterize witnesses in dependent theories in terms
of forking, and investigate their properties.
\medskip{}
Strict independence is a new concept. This work is the first paper
where it is explicitly defined and systematically studied. However,
it has been used implicitly in several articles dealing to some extent
with different candidates for notions of weight in dependent theories,
such as \cite{Sh783,AlfAlex-dp,Us}. It was then isolated by the second
author in \cite{Us1} as a possible notion giving rise to a good concept
of orthogonality in dependent theories.
Strict non-forking was defined by Shelah in \cite{Sh783}. The main
result that Shelah proves in regard to this notion is essentially
that strict non-forking implies strict independence (in this paper,
we refer to this result as ``Shelah's Lemma''). Implicit in Shelah's
paper is the following important fact: a strictly non-forking sequence
is a witness (actually, strict independence is enough). This fact
was later isolated and explicitly stated in \cite{kachernikov} and
\cite{Us1}. We refer to this fact here
as ``Kim's Lemma for dependent theories''. The existence of such
sequences was established in \cite{kachernikov}.
Existence of manageable witnesses is an important property of a theory.
For example, Kim showed that in a simple theory, all Morley sequences
are witnesses. This was a key technical result that allowed the development
of simple theories: e.g., it lies at heart of Kim's proof of symmetry
of non-forking. In \cite{kachernikov}, the existence of witnesses
was the main step in the proof of forking = dividing. So strict non-forking
and witnesses have already proved very useful in the study of dependent
theories.
Nevertheless, very little was known in general about these concepts.
For example, it was not known whether or not strict non-forking is
symmetric. No natural characterization of the class of witnesses was
known either. The situation is remedied to a large extent by the current
investigation.
If the theory $T$ is stable, then strict independence and strict
non-forking coincide, and both are equivalent to non-forking. It would
be very interesting to establish a strong connection between either
one of these concepts and non-forking in an arbitrary dependent theory.
We have attempted to address this question, but have not yet succeeded
in finding a satisfactory answer.
\subsection{Overview}
Informally, we call a set of elements (tuples) \emph{strictly independent
}if every collection of indiscernible sequences starting with these
elements may be assumed mutually indiscernible. More precisely, a
set $\left\{ a_{i}\right\} $ is called strictly independent if for
any collection of indiscernible sequences $I_{i}$ where $I_{i}$
starts with $a_{i}$, there exist $I'_{i}$ of the same type as $I_{i}$
starting with $a_{i}$ such that $I'_{i}$ is indiscernible over $I'_{\neq i}$.
See Definition \ref{def:strictind}.
Strict non-forking is the forking notion (that is, the extensible
notion) that corresponds to the symmetrization of usual forking. That
is, $\operatorname{tp}\left(a/Ab\right)$ is a strictly non-forking extension of
$\operatorname{tp}\left(a/A\right)$ if for every $B$ containing $Ab$ there is
$a'\equiv_{Ab}a$ such that $a'\mathop{\mathpalette\Ind{}}_{A}B$ and $B\mathop{\mathpalette\Ind{}}_{A}^{d}a'$.
See Definition \ref{def:strictnonfork}. One can easily define strict
non-forking and strict Morley sequences as usual based on this notion,
see Definition \ref{def:strictMorley}.
As mentioned above, one central notion in our paper is that of a witness.
An indiscernible (or just 1-indiscernible) sequence
$I$ is called a witness for $a\in I$ if for any formula $\varphi(x,a)$,
whenever $\varphi(x,a)$ divides, $I$ witnesses it (see Definition
\ref{def:witness}). Recall that ``Kim's Lemma'' in our context
states that a strict Morley sequence is a witness. That is, one can
think of strict non-forking as a technical tool that gives rise to
witnesses. The first result in this paper is Lemma \ref{lem:main lemma},
a simple but incredibly useful observation that in a sense ``reverses
the roles'' and allows to use witnesses in order to ``test'' for
strict non-forking. This result confirms the tight connection between
the concepts of strict non-forking and witnesses: one can use either
one in order to draw conclusions on the other.
Shelah's Lemma states that a strictly non-forking sequence is also
strictly independent. Since one of these notions depends on the order,
and the other does not, we can not expect the converse to hold. Nevertheless,
we prove (Theorem \ref{thm:Main}) that tuples $a,b$ are strictly
independent\textcolor{black}{{} (a symmetric notion)} over an extension
base $A$ if and only if $\operatorname{tp}\left(a/Ab\right)$ is a strictly non-forking
extension of $\operatorname{tp}\left(a/A\right)$ if and only if $\operatorname{tp}\left(b/Aa\right)$
is a strictly non-forking extension of $\operatorname{tp}\left(b/A\right)$. In
particular, we conclude that strict non-forking is symmetric over
extension bases.
``Kim's Lemma'' suggests the following natural questions: Are there
convenient characterizations of sequences that are witnesses? Which
Morley sequences are witnesses? In this paper we answer these questions
completely. For example, we show that \textcolor{black}{(over an extension
base)} a sequence $I=\left\langle a_{i}\right\rangle $ is a witness
if and only if $\operatorname{tp}\left(a_{\neq i}/a_{i}\right)$ does not fork;
that is, $a_{\neq i}\mathop{\mathpalette\Ind{}} a_{i}$, Theorem \ref{thm:characterize_witness}.
In addition, we show that a Morley sequence is strict Morley if and
only if it is a witness, Corollary \ref{cor:strictmorley_char}. This
confirms that ``Kim's Lemma'' is in a sense ``as good as it gets''.
Under the further assumption that the theory is of bounded alternation
rank (e.g. dp-minimal), we present ``quantitative'' versions of
the results described above. For example, we calculate an explicit
bound on the length of a finite witness which is necessary to test
strict non-forking, Proposition \ref{prop:bounded version of main lemma}
(a quantitative version of Lemma \ref{lem:main lemma}). Similarly,
Corollary \ref{cor:dp-minimal witness} provides a bound on ``how
much non-forking'' one needs to produce a witness (a quantitative
version of Theorem \ref{thm:characterize_witness}).
Before this work, it was not clear whether a strict Morley sequence
is necessarily ``totally strict'', that is, the sequences of pairs,
triplets, etc, are strict Morley sequences as well. Here we prove
a surprisingly simple and straightforward characterization of totally
strict Morley sequences as two-way Morley sequences (that is, the
sequence remains Morley when one reverses the order), Corollary \ref{cor:2way}.
We also present an example (due to Pierre Simon) of a strict Morley
sequence which is not totally strict (witnessed by the sequence of
pairs). This shows in particular that strict non-forking can be sensitive
to the ordering on the set: in Simon's example one obtains a strict
Morley sequence, which is not even a Morley sequence once the order
is reversed.
We proceed to drawing conclusions on types co-dominated by generically
stable types. In particular, we prove that any Morley sequence in
such a type is strict (hence is a witness), Theorem \ref{thm:dombystable-strict}.
As a corollary, one sees that non-forking is symmetric on realizations
of such types, Corollary \ref{cor:dombystable-symm}.
Section \ref{sec:My-weight-problem} is devoted to notions of weight
arising from the concepts of non-forking and independence studied
here. In particular, we characterize strong dependence (resp., dependence)
in terms of the appropriate weight being almost finite (resp., bounded).
Finally, section \ref{sec:Non-NIP-theories} contains generalizations
of some of our results to certain independent theories. Section \ref{sec:Problems}
lists a few problems.
We would like to thank Pierre Simon and Artem Chernikov for several
comments and suggestions.
\section{Preliminaries and basic notions.}
\subsection{Notations}
Unless stated otherwise, we are working in a monster model $\mathfrak{C}$ of
a \emph{dependent }theory $T$. For a general introduction to dependent
theories see e.g. \cite{Ad,Sh:c}. We do not generally distinguish
tuples and elements.
By $a\mathop{\mathpalette\Ind{}}_{A}^{f}B$ we mean that $\operatorname{tp}\left(a/BA\right)$ does not
fork over $A$, and $a\mathop{\mathpalette\Ind{}}_{A}^{d}B$ means $\operatorname{tp}\left(a/BA\right)$
does not divide over $A$. With the exception of Section \ref{sec:My-weight-problem},
we shall omit the $f$ from $\mathop{\mathpalette\Ind{}}^{f}$, so $\mathop{\mathpalette\Ind{}}$ will just mean
non-forking.
If $I$ is a sequence (usually indiscernible), then $\varphi\left(x,I\right)$
denotes the set $\left\{ \varphi(x,a)\left|\, a\in I\right.\right\} $.
\subsection{Preliminaries}
We remind the reader of a few basic properties of non-forking in dependent
theories. For the definitions and more, the reader is referred to
\cite{kachernikov}.
Recall that a type $\operatorname{tp}\left(a/B\right)$ \emph{splits strongly} over
a set $A\subseteq B$ if there are tuples $b_{1},b_{2}\in B$ such
that for some formula $\varphi\left(x,y\right)$ over $A$ we have
$\varphi\left(x,b_{1}\right)\land\neg\varphi\left(x,b_{2}\right)\in\operatorname{tp}\left(a/B\right)$
and there is an $A$-indiscernible sequence containing both $b_{1}$
and $b_{2}$.
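For illustration, consider the standard example $T=Th\left(\mathbb{Q},<\right)$
with $A=\emptyset$ and $B=\left\{ b_{1},b_{2}\right\} $ where $b_{1}<b_{2}$:
for any $a$ with $b_{1}<a<b_{2}$ we have $x>b_{1}\land\neg\left(x>b_{2}\right)\in\operatorname{tp}\left(a/B\right)$,
and $b_{1},b_{2}$ lie on an increasing (hence $\emptyset$-indiscernible)
sequence, so $\operatorname{tp}\left(a/B\right)$ splits strongly over $\emptyset$.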
\begin{fact}
\label{fac:strongsplit}\cite{Sh783} If $\operatorname{tp}(a/B)$ splits strongly
over a set $A\subseteq B$, then $\operatorname{tp}(a/B)$ divides over $A$. More
generally, if $c,d\in B$ have the same Lascar strong type over $A$,
and $ac\not\equiv_{A}ad$ then $\operatorname{tp}\left(a/B\right)$ forks over $A$.
In particular this is true when $A$ is a model and $c\equiv_{A}d$.
This means that if $p$ is a global type that does not fork over a
model $M$, then it is invariant over $M$.
\end{fact}
From this we easily get:
\begin{claim}
(Preservation of indiscernibility) Let $I$ be an indiscernible sequence
over a set $A$, and assume $b\mathop{\mathpalette\Ind{}}_{A}I$. Then $I$ is indiscernible
over $Ab$.
\end{claim}
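\begin{proof}
(Sketch.) Otherwise there are increasing tuples $\bar{c},\bar{d}$
from $I$ with $b\bar{c}\not\equiv_{A}b\bar{d}$. Increasing tuples
from an $A$-indiscernible sequence have the same Lascar strong type
over $A$, so by Fact \ref{fac:strongsplit}, $\operatorname{tp}\left(b/AI\right)$
forks over $A$ --- contradicting $b\mathop{\mathpalette\Ind{}}_{A}I$.
\end{proof}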
We recall some basic facts on invariant types:
\begin{defn}
Suppose $p$ is a global type which is invariant over a set $A$.
\begin{enumerate}
\item We say that a sequence $\left\langle a_{i}\left|\, i<\alpha\right.\right\rangle $
is generated by $p$ over $B\supseteq A$ if $a_{0}\models p|_{B}$
and for all $i<\alpha$, $a_{i}\models p|_{Ba_{<i}}$. Note that this
sequence is indiscernible over $B$.
\item We let the type $p^{\left(\alpha\right)}$ be the union of $\operatorname{tp}\left(\left\langle a_{i}\left|\, i<\alpha\right.\right\rangle /B\right)$
running over all $B\supseteq A$, where $\left\langle a_{i}\left|\, i<\alpha\right.\right\rangle $
is some (any) sequence generated by $p$ over $B$. Note that this
type is well defined and invariant over $A$.
\end{enumerate}
\end{defn}
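For example, in $Th\left(\mathbb{Q},<\right)$ the global type $p_{+\infty}\left(x\right)=\left\{ x>c\left|\, c\in\mathfrak{C}\right.\right\} $
is invariant over $\emptyset$; a sequence generated by $p_{+\infty}$
over $B$ is just an increasing sequence whose first element lies
above $B$, and $p_{+\infty}^{\left(\omega\right)}$ is the type of
such a sequence.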
\begin{fact}
\label{fac:dividing} \cite[1.4]{SheSimple} The following are equivalent
for any theory $T$:
\begin{enumerate}
\item $a\mathop{\mathpalette\Ind{}}_{A}^{d}b$.
\item For every indiscernible sequence $I$ over $A$ such that $b\in I$,
there is an indiscernible sequence $I'$ such that $I'\equiv_{Ab}I$
and $I'$ is indiscernible over $Aa$.
\item For every indiscernible sequence $I$ over $A$ such that $b\in I$,
there is $a'$ such that $a'\equiv_{Ab}a$ and $I$ is indiscernible
over $Aa'$.
\end{enumerate}
\end{fact}
\begin{claim}
\label{cla:preservationdividing} ($T$ any theory) If $a\mathop{\mathpalette\Ind{}}_{B}b$
and $\varphi\left(x,b\right)$ divides (respectively, forks) over $B$, then $\varphi\left(x,b\right)$
divides (respectively, forks) over $Ba$ as well.\end{claim}
\begin{proof}
For dividing this follows from Fact \ref{fac:dividing}. Assume $\varphi\left(x,b\right)$
forks over $B$, so there are $n<\omega$, $\varphi_{i}\left(x,y_{i}\right)$
and $b_{i}$ for $i<n$ such that $\varphi_{i}\left(x,b_{i}\right)$
divides over $B$ and $\varphi\left(x,b\right)\vdash\bigvee_{i<n}\varphi_{i}\left(x,b_{i}\right)$.
We may assume $a\mathop{\mathpalette\Ind{}}_{B}b\left\langle b_{i}\left|\, i<n\right.\right\rangle $.
So $\varphi_{i}\left(x,b_{i}\right)$ divides over $Ba$ and hence
$\varphi\left(x,b\right)$ forks over $Ba$.
\end{proof}
We shall also use the fact that forking independence satisfies transitivity
(on the left):
\begin{fact}
(see e.g. \cite{Ad3}) ($T$ any theory) if $a\mathop{\mathpalette\Ind{}}_{A}bc$ then $a\mathop{\mathpalette\Ind{}}_{Ab}c$.
If $b\mathop{\mathpalette\Ind{}}_{A}c$ and $a\mathop{\mathpalette\Ind{}}_{Ab}c$ then $ab\mathop{\mathpalette\Ind{}}_{A}c$. \end{fact}
\begin{defn}
$ $
\begin{enumerate}
\item A set $A$ is called an \emph{extension base }if no type over $A$
forks over $A$. Equivalently, every type over $A$ has a global extension
which does not fork over $A$.
\item A theory $T$ is called \emph{extensible} if every set is an extension
base.
\end{enumerate}
\end{defn}
Note that models are always extension bases, but there are dependent
theories which are not extensible. Most of our results in this paper
assume that the sets of parameters one works over are extension bases.
The reason this assumption is helpful is the following result (due
to Chernikov and the first author) which will be explicitly and implicitly
used throughout the article:
\begin{fact}
\cite{kachernikov} Let $A$ be an extension base. Then dividing over
$A$ equals forking over $A$. More precisely, if a formula $\varphi\left(x,a\right)$
forks over $A$, then it divides over $A$.
\end{fact}
Note that if $A$ is not an extension base, then the conclusion of
the fact above cannot possibly hold (since no type over $A$ divides
over $A$); so the theorem tells us that in dependent theories (in
fact, even $\operatorname{NTP}_{\operatorname{2}}$, see Section \ref{sec:My-weight-problem}) this
is the only obstacle.
We will refer to this fact as ``forking = dividing over $A$'' (or
omit $A$ when it is clear from the context).
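(The standard example showing that some assumption on $A$ is needed:
in the theory of a dense circular order, the formula $x=x$ forks
over $\emptyset$ --- it implies the disjunction of three ``arc''
formulas, each of which divides over $\emptyset$ --- while, as noted
above, no type over $\emptyset$ divides over $\emptyset$.)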
\begin{cor}
\label{cor:Left Extension}\cite{kachernikov} Let $A$ be an extension
base. Then forking has left extension over $A$, which means: if $a\mathop{\mathpalette\Ind{}}_{A}b$
then for all $c$ there is $c'\equiv_{Aa}c$ such that $ac'\mathop{\mathpalette\Ind{}}_{A}b$.
\end{cor}
\subsection{Basic notions}
We now move on to the definitions of the main notions studied in this
paper. The following definitions appear in \cite{Sh783}:
\begin{defn}
\label{def:strictnonfork}We say that $\operatorname{tp}\left(a/Ab\right)$ \emph{strictly
does not fork }over $A$ (we write $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$) if there is
a global extension $p$ of $\operatorname{tp}\left(a/Ab\right)$ which does not
fork over $A$ (so in particular $a\mathop{\mathpalette\Ind{}}_{A}b$), and for any $B\supseteq Ab$,
if $c\models p|_{B}$ then $\operatorname{tp}\left(B/Ac\right)$ does not divide%
\footnote{It is a matter of taste whether it should be ``divide'' or ``fork''.
For all the results here there is no difference except Lemma \ref{lem:stind-eqdef}
where minor changes are required. %
} over $A$ (i.e. $B\mathop{\mathpalette\Ind{}}_{A}^{d}c$). \end{defn}
\begin{rem}
If $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$ and $c,d\in A$ then $ac\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}bd$
($c,d$ may be empty). \end{rem}
\begin{lem}
\label{lem:stind-eqdef}The following are equivalent for a tuple $a$
and sets $A\subseteq B$:\end{lem}
\begin{enumerate}
\item $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}B$
\item For every $c$, there is some $c'\equiv_{B}c$ such that $Bc'\mathop{\mathpalette\Ind{}}_{A}^{d}a$
and $a\mathop{\mathpalette\Ind{}}_{A}Bc'$.
\item For every $\left(\left|B\right|+\left|T\right|\right)^{+}$-saturated
model $M$ containing $B$, there is some $M'\equiv_{B}M$ such that
$M'\mathop{\mathpalette\Ind{}}_{A}^{d}a$ and $a\mathop{\mathpalette\Ind{}}_{A}M'$.
\item There is some $\left(\left|B\right|+\left|T\right|\right)^{+}$-saturated
model $M$ containing $B$, such that $M\mathop{\mathpalette\Ind{}}_{A}^{d}a$ and $a\mathop{\mathpalette\Ind{}}_{A}M$.\end{enumerate}
\begin{proof}
Clearly (1)$\Rightarrow$(2) (by applying an automorphism over $A$)
and (2)$\Rightarrow$(3) $\Rightarrow$(4).
(4)$\Rightarrow$(1): We must show that there is a global type $p\left(x\right)$
extending $r\left(x\right):=\operatorname{tp}\left(a/B\right)$ such that $p\left(x\right)$
does not fork over $A$ and $\neg\psi\left(d,x\right)\in p\left(x\right)$
whenever $\psi\left(y,a\right)$ divides over $A$. In other words,
the following set is consistent:
\begin{eqnarray*}
r\left(x\right) & \cup & \left\{ \neg\varphi\left(x,c\right)\left|\varphi\in L\left(A\right),\,\varphi\left(x,c\right)\mbox{ forks over }A\right.\right\} \\
& \cup & \left\{ \neg\psi\left(d,x\right)\left|\psi\in L\left(A\right),\,\psi\left(y,a\right)\mbox{ divides over }A\right.\right\} .
\end{eqnarray*}
If not, $r\left(x\right)\models\bigvee_{i<n}\varphi_{i}\left(x,c_{i}\right)\vee\bigvee_{j<m}\psi_{j}\left(d_{j},x\right)$
for finitely many such formulas. But then by saturation of $M$ we may assume $c_{i},d_{j}\in M$,
contradicting (4).
\end{proof}
Recall that a sequence $I=\langle a_{i}\rangle$ is called \emph{non-forking}
over a set $A$ if $a_{i}\mathop{\mathpalette\Ind{}}_{A}a_{<i}$ for all $i$. It is called
\emph{Morley }over $A$ if it is both indiscernible over $A$ and
non-forking over $A$. By analogy, we define:
\begin{defn}
\label{def:strictMorley}A sequence $I=\langle a_{i}\rangle$ is called
\emph{strictly non-forking} over a set $A$ if $a_{i}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{<i}$
for all $i$. It is called \emph{a strict Morley sequence }over $A$
if it is both indiscernible over $A$ and strictly non-forking over
$A$. It is called \emph{totally strict }if in addition for all $n<\omega$,
the sequence of increasing $n$-tuples from it is also strict Morley
(i.e. if the order type of $I$ is $\omega$, then
\[
\left\langle \left(a_{j}\left|\, ni\leq j<n\left(i+1\right)\right.\right)\left|\, i<\omega\right.\right\rangle
\]
is strict Morley).
\end{defn}
Of course, if $I$ is a strict Morley sequence over $A$, then it
is Morley over $A$.
\begin{example}
\label{exa:rationals}We will see later (after Corollary \ref{cor:2way})
that for $T=Th\left(\mathbb{Q},<\right)$, every non-constant $A$-indiscernible
sequence of singletons is totally strict over $A$.
\end{example}
Finally, we give the main new definition of this paper:
\begin{defn}
\label{def:strictind}Let $B$ be a set, and $\mathcal{J}=\left\{ a_{i}\left|\, i<\lambda\right.\right\} $
a set of tuples. We say that $\mathcal{\mathcal{J}}$ is \emph{strictly
independent }over $B$ if the following holds: for every set $\mathcal{I}=\left\{ I_{i}\left|\, i<\lambda\right.\right\} $
of $B$-indiscernible sequences such that $I_{i}$ starts with $a_{i}$,
there is $\mathcal{I}'=\left\{ I'_{i}\left|\, i<\lambda\right.\right\} $
satisfying:
\begin{itemize}
\item $I_{i}\equiv_{Ba_{i}}I'_{i}$ for all $i<\lambda$.
\item $I'_{i}$ is indiscernible over $BI'_{\neq i}$.
\end{itemize}
\end{defn}
We will refer to the second item in the definition above as ``$I'_{i}$
are mutually indiscernible over $B$'', or ``$I'_{i}$ are mutually
$B$-indiscernible''.
\begin{rem}
It is clear from the definition that if $\left\{ a_{i}\left|\, i<\lambda\right.\right\} $
is strictly independent over $B$, and $b_{i}\in B$ for $i<\lambda$,
then $\left\{ a_{i}b_{i}\left|\, i<\lambda\right.\right\} $ is also
strictly independent over $B$. \end{rem}
\begin{defn}
\label{def:half strictind-1}Let $B$ be a set, and $\mathcal{J}=\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $
a sequence of tuples. We say that $\mathcal{J}$ is \emph{$\frac{1}{2}$-strictly
independent }over $B$ if the following holds: for every set $\mathcal{I}=\left\{ I_{i}\left|\, i<\lambda\right.\right\} $
of $B$-indiscernible sequences such that $I_{i}$ starts with $a_{i}$,
there is $\mathcal{I}'=\left\{ I'_{i}\left|\, i<\lambda\right.\right\} $
satisfying:
\begin{itemize}
\item $I_{i}\equiv_{Ba_{i}}I'_{i}$ for all $i<\lambda$.
\item $I'_{i}$ is indiscernible over $BI'_{<i}a_{>i}$.
\end{itemize}
\end{defn}
One of the motivations for defining strict independence, and for investigating
its connection to strict non-forking is the following lemma due to
Shelah:
\begin{lem}
\label{lem:(Shelah's-Lemma)}(Shelah's Lemma, \cite[Claim 5.13]{Sh783})
Let $\mathcal{J}=\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $
be a strictly non-forking sequence over a set $A$. Then $\mathcal{J}$
is strictly independent. \end{lem}
\begin{proof}
\renewcommand{\qedsymbol}{} The proof consists of two claims:
\begin{claim*}
$\mathcal{J}$ is $\frac{1}{2}$-strictly independent. \end{claim*}
\begin{proof}
\renewcommand{\qedsymbol}{$\square$} Suppose $\mathcal{I}=\left\{ I_{i}\left|\, i<\lambda\right.\right\} $
is a set of $A$-indiscernible sequences. By compactness we may assume
$\lambda=n<\omega$. The proof is by induction on $n$. Suppose we
have $I_{i}'$ for $i<n$ and consider $n+1$. Perhaps changing $I_{i}'$,
we may assume that $a_{n}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}I_{<n}'$. Hence also $I_{<n}'\mathop{\mathpalette\Ind{}}_{A}^{d}a_{n}$.
By Fact \ref{fac:dividing}, there is an $A$-indiscernible sequence
$I'_{n}$ such that $I_{n}\equiv_{Aa_{n}}I_{n}'$ and $I_{n}'$ is
indiscernible over $AI_{<n}'$. Now, $I_{n}'$ is indiscernible over
$AI_{<n}'$ by construction. For every $i<n$, $I_{i}'$ is indiscernible
over $AI_{<i}'a_{>i}$ by the induction hypothesis (where $a_{>i}=a_{i+1}\ldots a_{n-1}$).
As $a_{n}\mathop{\mathpalette\Ind{}}_{A}I'_{<n}$, it follows that $a_{n}\mathop{\mathpalette\Ind{}}_{AI_{<i}'a_{>i}}I_{i}'$,
so by preservation of indiscernibility, $I_{i}'$ is indiscernible
over $AI_{<i}'a_{>i}a_{n}$. \end{proof}
\begin{claim*}
In general, $\mathcal{J}$ is strictly independent iff it is $\frac{1}{2}$-strictly
independent. \end{claim*}
\begin{proof}
\renewcommand{\qedsymbol}{$\square$} For simplicity of notation,
assume without loss of generality that $A=\emptyset$. Of course,
if $\mathcal{J}$ is strictly independent then $\mathcal{J}$ is $\frac{1}{2}$-strictly
independent. For the other direction, suppose $\mathcal{I}=\left\{ I_{i}\left|\, i<\lambda\right.\right\} $
is a set of indiscernible sequences as in Definition \ref{def:strictind}.
Find some $\mathcal{I}'=\left\{ I_{i}'\left|\, i<\lambda\right.\right\} $
as in Definition \ref{def:half strictind-1}. It is easy to check
that all ``vertical paths'' have the same type, i.e. for any choice
of $c_{i}\in I_{i}'$, $\operatorname{tp}\left(c_{i}\left|\, i<\lambda\right.\right)=\operatorname{tp}\left(a_{i}\left|\, i<\lambda\right.\right)$.
By Ramsey and compactness (see e.g. \cite[Lemma 5.1.3]{TentZiegler})
we can extract mutually indiscernible sequences $\left\{ I_{i}''\left|\, i<\lambda\right.\right\} $
such that the type of all vertical paths is $\operatorname{tp}\left(a_{i}\left|\, i<\lambda\right.\right)$.
Now apply an automorphism taking the first column to $\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $.
\end{proof}
\end{proof}
Strict non-forking, and especially strict Morley sequences have already
proved very useful due to the following observation, shown independently
in \cite{Us1} and \cite{kachernikov}%
\footnote{There it is stated for strict non-forking sequences, but the proof
there really uses Shelah's lemma and Lemma \ref{lem:(Kim's-Lemma)}
as stated here.%
} (in fact it is implicit in \cite[Claim 5.8]{Sh783}):
\begin{lem}
\label{lem:(Kim's-Lemma)}(``Kim's Lemma'') If $\left\langle a_{i}\left|\, i<\left|T\right|^{+}\right.\right\rangle $
is a strictly independent sequence over $B$, and $\varphi_{i}\left(x,a_{i}\right)$
divides over $B$, then $\left\{ \varphi_{i}\left(x,a_{i}\right)\left|\, i<\left|T\right|^{+}\right.\right\} $
is inconsistent. In particular, this is true if $\left\langle a_{i}\left|\, i<\left|T\right|^{+}\right.\right\rangle $
is strictly non-forking.
\end{lem}
It is therefore interesting to understand strict Morley sequences
and strictly non-forking extensions in general. From the definitions
it is not even clear that such extensions exist. However, a few existence
results have been shown:
\begin{fact}
\label{fac:strict_exist}$ $
\begin{itemize}
\item \cite{kachernikov} Let $A$ be an extension base. Then $A$ is a strict
extension base. That is, every type over $A$ has a global strictly
non-forking extension.
\item \cite{Us1} ($T$ any theory) Let $p$ be a global type which is both
an heir and non-forking over $A$. Then $p$ is a global strictly
non-forking extension of $p|_{A}$.
\end{itemize}
\end{fact}
This allows taking strictly non-forking extensions over extension
bases, and we will do this repeatedly. Note that by Fact \ref{fac:strongsplit},
if $M\supseteq A$ is a model, and $p$ is a global type that does
not fork over $A$, then $p$ is invariant over $M$. Together with
Fact \ref{fac:strict_exist}, we can conclude that for every extension
base $A$, and a tuple $a$, there is a strict Morley sequence over
$A$ starting with $a$ (if $p$ is a global strictly non-forking
type containing $\operatorname{tp}\left(a/A\right)$, then find some model $M\supseteq A$
and a conjugate (over $A$) $q$ of $p$ such that $a\models q|_{M}$
and define $a_{i}\models q|_{Ma_{<i}}$ to get a strict Morley sequence).
\section{\label{sec:Basic-properties}Basic properties}
\begin{defn}
\label{def:witness}Call an infinite sequence $I$ a \emph{witness
for $a$ over $A$}, if:
\begin{enumerate}
\item For all $b\in I$ we have $a\equiv_{A}b$.
\item For any formula $\varphi\left(x,y\right)$ over $A$, if $\varphi\left(x,a\right)$
divides over $A$, then $\varphi(x,I_{0})$ is inconsistent for every
countably infinite $I_{0}\subseteq I$.
\end{enumerate}
Say that $I$ is an \emph{indiscernible witness} for $a\in I$ \emph{over
$A$} if $I$ is a witness over $A$, and it is indiscernible over
$A$. \end{defn}
\begin{rem}
Note that if $I$ is a witness for some $a$ over $A$, then it is
a witness for every $a\in I$, and, in fact, for any $a$ realizing
the type of some (any) element of $I$ over $A$. So we will often
simply say ``$I$ is a witness over $A$'', omitting $a$. \end{rem}
\begin{example}
\label{exa:strict Morley is a witness}By Lemma \ref{lem:(Kim's-Lemma)},
strictly independent (and in particular strict Morley) sequences of
tuples of the same type over $A$ are witnesses over $A$. \end{example}
\begin{obs}
\label{obs:direct consequence of Kim} Let $A$ be an extension base.
If there exists a witness for $b$ over $A$, which is indiscernible
over $Aa$, then $a\mathop{\mathpalette\Ind{}}_{A}b$.\end{obs}
\begin{proof}
Let $I=\left\langle b_{i}\left|\, i<\omega\right.\right\rangle $
be such a witness. If $a\mathop{\mathpalette\Notind{}}_{A}b$, then (since forking = dividing
over $A$) there is some formula $\varphi\left(x,b\right)$ over $A$
such that $\varphi\left(a,b\right)$ holds and $\varphi\left(x,b\right)$
divides over $A$. So, by definition, $I$ witnesses it. But this
is a contradiction --- by indiscernibility over $Aa$, $\varphi\left(a,b_{i}\right)$
holds for all $i<\omega$.
\end{proof}
The following lemma is quite easy, but incredibly useful. In a sense
it is an analogue of Observation \ref{obs:direct consequence of Kim}
for $\mathop{\mathpalette\Ind{}}^{\operatorname{st}}$.
\begin{lem}
\label{lem:main lemma}Let $A$ be an extension base. Suppose $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is a witness for $a=a_{0}$ over $A$. Then, if $I\mathop{\mathpalette\Ind{}}_{A}b$ and
$I$ is indiscernible over $Ab$, then $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$.\end{lem}
\begin{proof}
It is enough to prove (recalling Lemma \ref{lem:stind-eqdef}) that
for any $c$, there is some $c'\equiv_{Ab}c$ such that $bc'\mathop{\mathpalette\Ind{}}_{A}a$
and $a\mathop{\mathpalette\Ind{}}_{A}bc'$. Let $p\left(x\right)=\operatorname{tp}\left(c/Ab\right)$.
So it is enough to show that the following set is consistent:
\begin{eqnarray*}
p\left(x\right) & \cup & \left\{ \neg\varphi\left(x,b,a\right)\left|\,\varphi\in L\left(A\right),\,\varphi\left(x,y,a\right)\mbox{ forks over }A\right.\right\} \\
& \cup & \left\{ \neg\psi\left(a,b,x\right)\left|\,\psi\in L\left(A\right),\,\psi\left(y,b,c\right)\mbox{ forks over }A\right.\right\}
\end{eqnarray*}
Suppose not; then $p\left(x\right)\vdash\varphi\left(x,b,a\right)\vee\psi\left(a,b,x\right)$
where $\varphi\left(x,y,a\right)$ forks over $A$, and $\psi\left(y,b,c\right)$
forks over $A$ (recall that forking formulas form an ideal, that
is, forking is preserved under finite disjunctions). Since forking
= dividing we have that $\varphi\left(x,y,a\right)$ and $\psi\left(y,b,c\right)$
actually divide over $A$.
Now, as $I$ is a witness for $a$, it witnesses that $\varphi\left(x,y,a\right)$
divides over $A$. Clearly, $\left\{ \varphi\left(x,b,a_{i}\right)\left|\, i<\omega\right.\right\} $
is inconsistent. As $I$ is indiscernible over $Ab$, it follows that
$p\left(x\right)\vdash\bigvee_{i<n}\psi\left(a_{i},b,x\right)$ for
some $n<\omega$. Take some $c'\models p\left(x\right)$ such that
$I\mathop{\mathpalette\Ind{}}_{A}bc'$. So for some $i$, the formula $\psi\left(a_{i},b,c'\right)$
holds, so $\psi\left(y,b,c'\right)$ does not divide over $A$, therefore
neither does $\psi\left(y,b,c\right)$ --- a contradiction.\end{proof}
\begin{rem}
\label{rem:weakining}Both Observation \ref{obs:direct consequence of Kim}
and Lemma \ref{lem:main lemma} hold even when we weaken the assumption
of indiscernibility to ``having the same type''. So, if $A$ is
an extension base, $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is a witness for $a=a_{0}$ over $A$, $I\mathop{\mathpalette\Ind{}}_{A}b$, and all tuples
in $I$ have the same type over $Ab$, then $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$.
\end{rem}
As a corollary we answer the first question that motivated this paper.
The following yields equivalence between strict non-forking and strict
independence (for a 2-element set). It follows in particular that
strict non-forking is \emph{symmetric.}
\begin{thm}
\label{thm:Main}The following are equivalent for tuples $a,b$ and
an extension base $A$:
\begin{enumerate}
\item $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$
\item $\left\{ a,b\right\} $ is a strictly independent set.
\item $\left\langle b,a\right\rangle $ is a $\frac{1}{2}$-strictly independent
sequence.
\item $b\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$
\end{enumerate}
\end{thm}
\begin{proof}
By Lemma \ref{lem:(Shelah's-Lemma)} (1)$\Rightarrow$(2),
and we already observed that (2) is equivalent to (3). So assume $\left\langle b,a\right\rangle $
is $\frac{1}{2}$-strictly independent. By Fact \ref{fac:strict_exist}
there are strict Morley sequences $I,J$ over $A$ starting with
$a$ and $b$, respectively. By the assumption, without loss of generality,
$I$ is indiscernible over $AJ$, whereas $J$ is indiscernible over
$Aa$.
By Observation \ref{obs:direct consequence of Kim} (applied to the
witness $I$, which is indiscernible over $AJ$), $J\mathop{\mathpalette\Ind{}}_{A}a$
holds, so we can apply Lemma \ref{lem:main lemma} and conclude $b\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$,
i.e. (4). Finally, (4)$\Rightarrow$(1) follows by exchanging the roles of $a$ and $b$.\end{proof}
\begin{cor}
\emph{\label{cor:Symmetry-of-strict}(Symmetry of strict non-forking)}
Let $A$ be an extension base. If $b\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$ then $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$.
\end{cor}
\begin{cor}
\label{cor:approximation to main problem}$ $
\begin{enumerate}
\item Suppose $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is an indiscernible witness for $a_{0}=a$ over an extension base
$A$. If $I\mathop{\mathpalette\Ind{}}_{A}b$ and $b\mathop{\mathpalette\Ind{}}_{A}I$, then $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$.
\item Suppose $C$ is an extension base and $A$ is a set that has the property
that for any finite $a\subseteq A$ there is some indiscernible witness
over $C$ starting with $a$ in $A$. Then, for any $B$, if $A\mathop{\mathpalette\Ind{}}_{C}B$
and $B\mathop{\mathpalette\Ind{}}_{C}A$ then $A\mathop{\mathpalette\Ind{}}_{C}^{\operatorname{st}}B$.
\end{enumerate}
\end{cor}
\begin{proof}
(1) follows from Lemma \ref{lem:main lemma} together with preservation of
indiscernibility. (2) follows from (1). \end{proof}
\begin{claim}
\label{cla:kind of local character}Suppose $I$ is a sequence of
length $\left(\left|T\right|+\left|A\right|\right)^{+}$ which is
a witness over an extension base $A$. Then%
\footnote{Here and in the following corollary, for simplicity of the presentation
we assume that $a$ and $b$ are finite tuples, but one can easily
modify the assumptions to suit infinite tuples.%
} for any $a$ there is some $b\in I$ such that $a\mathop{\mathpalette\Ind{}}_{A}b$.\end{claim}
\begin{proof}
If not, then for every $b\in I$ there is some formula $\varphi_{b}\left(x,y\right)$
over $A$ such that $\varphi_{b}\left(a,b\right)$ holds and $\varphi_{b}\left(x,b\right)$
divides over $A$. Since there are at most $\left|T\right|+\left|A\right|$
formulas over $A$, we can find an infinite subset $I_{0}$ of $I$
such that $\varphi_{b}$ is constant for all $b\in I_{0}$. Thus we
have a contradiction to the fact that $I$ is a witness. \end{proof}
\begin{cor}
\label{cor:witness on the left}Assume that $I$ is a sequence of
length $\left(\left|T\right|+\left|A\right|\right)^{+}$ which is
a witness over an extension base $A$ and that $I\mathop{\mathpalette\Ind{}}_{A}a$. Then
for some $b\in I$, $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$. \end{cor}
\begin{proof}
Let $J$ be a countable strict Morley sequence over $A$ starting
with $a$, such that $I\mathop{\mathpalette\Ind{}}_{A}J$. By Claim \ref{cla:kind of local character}
there is some $b\in I$ such that $J\mathop{\mathpalette\Ind{}}_{A}b$. But in addition,
$b\mathop{\mathpalette\Ind{}}_{A}J$. So by Corollary \ref{cor:approximation to main problem}
(1) we are done. \end{proof}
\begin{rem}
If in Corollary \ref{cor:witness on the left} we assume that $I$
is an indiscernible witness over $A$, then it is enough to assume
it has length $\left|T\right|^{+}$, because then by ``shrinking
of indiscernibles'' (see \cite{Sh715}), some end-segment of $I$
is indiscernible over $aA$ and we can use Lemma \ref{lem:main lemma}
and symmetry.
\end{rem}
We cannot expect full transitivity of $\mathop{\mathpalette\Ind{}}^{\operatorname{st}}$ in general, due
to the following proposition, for which we recall:
\begin{defn}
\label{def:gen.stable}A type $p\in S(A)$ in a dependent theory $T$
is called \emph{generically stable }if some (equivalently, any) Morley
sequence in $p$ is an indiscernible set.\end{defn}
\begin{prop}
\label{prop:transitivity implies gen.stable}Suppose $p$ is a type
over an extension base $A$ such that strict non-forking is transitive
on realizations of $p$, namely if $B\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}CD$ and $C\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}D$
then $BC\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}D$ where $B$, $C$ and $D$ are sets of realizations
of $p$. Then $p$ is generically stable and $\mathop{\mathpalette\Ind{}}^{\operatorname{st}}=\mathop{\mathpalette\Ind{}}$ on
realizations of $p$.\end{prop}
\begin{proof}
By Fact \ref{fac:strict_exist}, there is some global strictly non-forking
(over $A$) type $q$ which extends $p$. Let $M\supseteq A$ be some
model. To show that $p$ is generically stable, it is enough to show
that for every $n$ and every permutation $\sigma:n\to n$, if $a=\left\langle a_{i}\left|\, i<n\right.\right\rangle \models q^{\left(n\right)}|_{M}$
then $a_{\sigma}=\left\langle a_{\sigma\left(i\right)}\left|\, i<n\right.\right\rangle \models q^{\left(n\right)}|_{M}$.
It is enough to show it for $\sigma=\left(i\;\, i+1\right)$ where
$i+1<n$. By induction, we may assume $i=n-2$.
Let $\left\langle a_{i}\left|\, i<n+1\right.\right\rangle \models q^{\left(n+1\right)}|_{M}$.
Since $a_{n}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{<n-2}a_{n-2}a_{n-1}M$, by transitivity
and symmetry, we have $a_{<n-2}a_{n}a_{n-2}M\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{n-1}$.
By symmetry again, we have $a_{n-1}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{<n-2}a_{n}a_{n-2}M$;
in particular $a_{n-1}\mathop{\mathpalette\Ind{}}_{M}a_{<n-2}a_{n}a_{n-2}$.
By Fact \ref{fac:strongsplit}, since $a_{n}a_{<n-2}\equiv_{M}a_{n-2}a_{<n-2}$
(they both realize $q^{\left(n-1\right)}|_{M}$), $a_{n}a_{n-1}a_{<n-2}\equiv_{M}a_{n-2}a_{n-1}a_{<n-2}$.
Since the left hand side realizes $q^{\left(n-1\right)}|_{M}$, we
are done.
Now, to show that $\mathop{\mathpalette\Ind{}}=\mathop{\mathpalette\Ind{}}^{\operatorname{st}}$ on realizations of $p$, recall
that by \cite[Lemma 8.5]{Us}, non-forking is symmetric on realizations
of $p$. Now we can conclude by Theorem \ref{thm:Main} and \cite[Lemma 8.10]{Us}
(or just by preservation of indiscernibility). \end{proof}
\begin{cor}
(poor man's transitivity) Suppose $A$ is an extension base. If $b\mathop{\mathpalette\Ind{}}_{A}a_{1}$,
$b\mathop{\mathpalette\Ind{}}_{A}a_{2}$ and $a_{1}\mathop{\mathpalette\Ind{}}_{Ab}^{\operatorname{st}}a_{2}$ then $a_{1}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{2}$.\end{cor}
\begin{proof}
Apply Theorem \ref{thm:Main}. Let $I,J$ be indiscernible sequences
over $A$ starting with $a_{1},a_{2}$. As $b\mathop{\mathpalette\Ind{}}_{A}a_{1}$, there
is some $I'\equiv_{A}I$ starting with $a_{1}$ that is indiscernible
over $Ab$, and in the same way, there is some $J'\equiv_{A}J$ starting
with $a_{2}$ and indiscernible over $Ab$. Since $a_{1}\mathop{\mathpalette\Ind{}}_{Ab}^{\operatorname{st}}a_{2}$
we may assume that $I',J'$ are mutually indiscernible over $Ab$
(this does not require $Ab$ to be an extension base, by Lemma \ref{lem:(Shelah's-Lemma)})
and in particular over $A$.
\end{proof}
The next proposition is a strengthening of Lemma \ref{lem:main lemma}
under the stronger assumption of bounded alternation rank.
\begin{defn}
\label{def:bounded alt-rank}A theory has \emph{bounded alternation
rank} if for every $n<\omega$ there is some $m<\omega$ such that
whenever $\varphi\left(x,y\right)$ is a formula with $\lg\left(x\right)=n$,
the alternation rank of $\varphi\left(x,y\right)$ is less than $m$,
i.e. there is no indiscernible sequence $\left\langle b_{i}\left|\, i<\omega\right.\right\rangle $
such that $\left\{ \varphi\left(x,b_{i}\right)^{i\pmod2}\left|\, i<m+1\right.\right\} $
is consistent. \end{defn}
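For example, in $Th\left(\mathbb{Q},<\right)$ the formula $x<y$
has alternation rank less than $2$: any indiscernible sequence $\left\langle b_{i}\left|\, i<\omega\right.\right\rangle $
of singletons is monotone or constant, and in each case the set $\left\{ \neg\left(x<b_{0}\right),x<b_{1},\neg\left(x<b_{2}\right)\right\} $
is inconsistent.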
\begin{prop}
\label{prop:bounded version of main lemma}Assume $T$ has bounded
alternation rank, $A$ an extension base. Assume that $a$ is a tuple
of length $n$ and let $m$ be the corresponding number from Definition
\ref{def:bounded alt-rank}. Then, if $b=\left\langle b_{i}\left|\, i<\left\lfloor \left(m+1\right)/2\right\rfloor \right.\right\rangle $
starts an indiscernible witness over $A$, $b_{i}\equiv_{Aa}b_{0}$
for all $i<\left\lfloor \left(m+1\right)/2\right\rfloor $ and $b\mathop{\mathpalette\Ind{}}_{A}a$ then $b_{0}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$.
\end{prop}
Note that we cannot use the proof of Lemma \ref{lem:main lemma} directly.
\begin{proof}
By Lemma \ref{lem:main lemma} (and Remark \ref{rem:weakining}),
it is enough to show that there is a witness $I$ over $A$ (not necessarily
indiscernible), starting with $b$, with all tuples in it having the
same type over $Aa$, such that $I\mathop{\mathpalette\Ind{}}_{A}a$. So let $I=\left\langle b_{i}\left|\, i<\omega\right.\right\rangle $
be an indiscernible witness starting with $b$ over $A$. Let $p\left(\bar{x}\right)=\operatorname{tp}\left(I/A\right)$,
and $r\left(x\right)=\operatorname{tp}\left(b_{0}/Aa\right)$. What we need to show
consistent is
\[
p\left(\bar{x}\right)\cup\left\{ \neg\psi\left(\bar{x},a\right)\left|\,\psi\left(\bar{x},y\right)\in L\left(A\right),\,\psi\left(\bar{x},a\right)\mbox{ divides over }A\right.\right\} \cup\bigcup_{i<\omega}r\left(x_{i}\right).
\]
Suppose it is not consistent. Then there is some formula $\varphi\left(x,y\right)$
over $A$ and $k<\omega$ such that $\varphi\left(x,a\right)\in r\left(x\right)$
and
\[
p\left(\bar{x}\right)\cup\left\{ \neg\psi\left(\bar{x},a\right)\left|\,\psi\left(\bar{x},y\right)\in L\left(A\right),\,\psi\left(\bar{x},a\right)\mbox{ divides over }A\right.\right\} \vdash\bigvee_{i<k}\neg\varphi\left(x_{i},a\right).
\]
Let $I'=\left\langle b_{i}\left|\, i\in\mathbb{Q}\right.\right\rangle $
be an $A$-indiscernible sequence containing $I$. As $b\mathop{\mathpalette\Ind{}}_{A}a$,
we can, by left extension, assume that $I'\mathop{\mathpalette\Ind{}}_{A}a$. Note that $\left\langle b_{-k+i}\left|\, i<\omega\right.\right\rangle \models p$
and is independent from $a$ over $A$, so for some $i<k$, $\neg\varphi\left(b_{-i},a\right)$
holds. Similarly, for every $i<\left\lfloor \left(m+1\right)/2\right\rfloor $ there is some $j_{i}\in\left(i,i+1\right)$
such that $\neg\varphi\left(b_{j_{i}},a\right)$ holds, and there
is such $b_{j_{\infty}}$ for $j_{\infty}>\left\lfloor \left(m+1\right)/2\right\rfloor $. Since $\varphi\left(b_{i},a\right)$
holds for every $i<\left\lfloor \left(m+1\right)/2\right\rfloor $, this gives a contradiction to the choice
of $m$.\end{proof}
\begin{rem}
If $T$ is dp-minimal (see e.g. \cite{Simon-Dp-min,AlfAlex-dp}), then
by \cite{AlexAlfTemp}, it has bounded alternation rank (in fact,
$m=2n+1$ in Definition \ref{def:bounded alt-rank}). Also note that
in this case, if $n=1$ and $A$ is a model $M$, we can replace
``$\left\langle b_{i}\left|\, i<2\right.\right\rangle $ starts an indiscernible
witness'' by ``$b_{1}\mathop{\mathpalette\Ind{}}_{M}^{\operatorname{st}}b_{0}$'' (because then we can
generate a strict Morley sequence starting with $b_{0}b_{1}$).
\end{rem}
\section{On witnesses and strict Morley sequences.}
We begin with quite a nice characterization of indiscernible sequences
which are witnesses over a given extension base $C$. First we need
the following technical result:
\begin{prop}
\label{prop:witness alternation rank} Let $A$ be any set. Suppose
$\varphi\left(x,y\right)\in L\left(A\right)$ has alternation rank
less than $m$, and let $n=\left\lfloor \left(m+1\right)/2\right\rfloor $.
\begin{enumerate}
\item If $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $ is
a Morley sequence over $A$ such that $a_{<n}\mathop{\mathpalette\Ind{}}_{A}a_{n}$ and $\varphi\left(x,a_{0}\right)$
divides over $A$ then $\left\{ \varphi\left(x,a_{i}\right)\left|\, i<n\right.\right\} $
is inconsistent.
\item If $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $ is
an indiscernible sequence over $A$ such that $a_{\neq i}\mathop{\mathpalette\Ind{}}_{A}a_{i}$
for every $i$, and $\varphi\left(x,a_{0}\right)$ divides over $A$ then $\left\{ \varphi\left(x,a_{i}\right)\left|\, i<\omega\right.\right\} $
is inconsistent.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Suppose not. It is enough to show that
\[
\left\{ \neg\varphi\left(x,a_{i}\right)\left|\, i\in E\right.\right\} \cup\left\{ \varphi\left(x,a_{i}\right)\left|\, i\in O\right.\right\}
\]
is consistent where $O$ ($E$) is the set of odd (even) numbers
smaller than $m+1$.
If not, then, letting $\Sigma_{1}=\left\{ \varphi\left(x,a_{i}\right)\left|\, i\in E\right.\right\} $
and $\Sigma_{2}=\left\{ \varphi\left(x,a_{i}\right)\left|\, i\in O\right.\right\} $,
we get that $\Sigma_{2}\models\bigvee\Sigma_{1}$. Let $\Sigma_{0}\subseteq\Sigma_{1}$
be minimal such that $\Sigma_{2}\models\bigvee\Sigma_{0}$. Assume
that $\Sigma_{0}$ is not empty, and let $i_{0}\in E$ be minimal
such that $\varphi\left(x,a_{i_{0}}\right)\in\Sigma_{0}$. Let $J=O\cup E_{>i_{0}}$.
By assumption, $\left\{ a_{i}\left|\, i\in O_{<i_{0}}\right.\right\} \mathop{\mathpalette\Ind{}}_{A}a_{i_{0}}$,
and since $I$ is a Morley sequence, by transitivity we have $\left\{ a_{i}\left|\, i\in J\right.\right\} \mathop{\mathpalette\Ind{}}_{A}a_{i_{0}}$.
Since $\varphi\left(x,a_{i_{0}}\right)$ divides over $A$, it also
divides over $A\cup\left\{ a_{i}\left|\, i\in J\right.\right\} $
(Claim \ref{cla:preservationdividing}), so there is an indiscernible
sequence $\left\langle b_{j}\left|\, j<\omega\right.\right\rangle $
with $b_{0}=a_{i_{0}}$ that witnesses this. By indiscernibility,
for all $j<\omega$ we have
\[
\Sigma_{2}\vdash\varphi(x,b_{j})\lor\bigvee\left(\Sigma_{0}\backslash\left\{ \varphi\left(x,a_{i_{0}}\right)\right\} \right).
\]
But then, since $\left\{ \varphi\left(x,b_{j}\right)\left|\, j<\omega\right.\right\} $
is inconsistent, $\Sigma_{2}\vdash\bigvee\left(\Sigma_{0}\backslash\left\{ \varphi\left(x,a_{i_{0}}\right)\right\} \right)$,
contradicting the minimality of $\Sigma_{0}$.
So $\Sigma_{0}$ is empty, i.e. $\Sigma_{2}\vdash\bot$, contradicting
our assumption.
(2) is similar (and easier). \end{proof}
\begin{rem}
One can extract another argument for this proposition from \cite[proof of Claim 5.19]{Sh783},
assuming that $T$ is extensible:
Suppose $c\vDash\left\{ \varphi\left(x,a_{i}\right)\left|\, i<n\right.\right\} $,
and let $c'$ be such that $c'\equiv_{Aa_{O}}c$ and $c'\mathop{\mathpalette\Ind{}}_{Aa_{O}}I$,
where $a_{O}=\left\{ a_{i}\left|\, i\in O\right.\right\} $. In particular
$\varphi\left(c',a_{i}\right)$ holds for $i\in O$. If $i\in E$,
then by transitivity and the assumption, $c'a_{O}\mathop{\mathpalette\Ind{}}_{A}a_{i}$,
so $\neg\varphi\left(c',a_{i}\right)$ holds.
\end{rem}
Now it is easy to deduce:
\begin{thm}
\label{thm:characterize_witness}(Characterization of indiscernible
witnesses). Let $C$ be an extension base, $I=\left\langle a_{i}\right\rangle $
an indiscernible sequence over $C$. Then $I$ is a witness over $C$
if and only if for every $i$ we have $a_{\neq i}\mathop{\mathpalette\Ind{}}_{C}a_{i}$.\end{thm}
\begin{proof}
First, assume $I$ is a witness, but for some $i$ we have $a_{\neq i}\mathop{\mathpalette\Notind{}}_{C}a_{i}$.
We may assume that $I=\left\langle a_{q}\left|\, q\in\mathbb{Q}\right.\right\rangle $.
Since forking = dividing over $C$, there is a formula $\varphi\left(x,y\right)\in L\left(C\right)$
such that $\varphi\left(x,a_{i}\right)$ divides over $C$ and for
some $a=a_{j_{1}}\ldots a_{j_{k}}$ from $I$, $\mathfrak{C}\models\varphi\left(a,a_{i}\right)$.
Enumerate $a$ such that $j_{1}<j_{2}<\ldots<j_{\ell}<i<j_{\ell+1}<\ldots<j_{k}$
(where we set $j_{\ell}=-\infty$ if $\ell=0$, and $j_{\ell+1}=+\infty$ if $\ell=k$).
Then since $I$ is a witness, by indiscernibility we have that $\left\langle a_{q}\left|\, q\in\left(j_{\ell},j_{\ell+1}\right)\right.\right\rangle $
witnesses dividing of $\varphi\left(x,a_{i}\right)$; however, by
indiscernibility again, $\mathfrak{C}\models\varphi\left(a,a_{q}\right)$ for
all $q\in\left(j_{\ell},j_{\ell+1}\right)$, a contradiction.
The converse follows from Proposition \ref{prop:witness alternation rank}. \end{proof}
\begin{rem}
\label{rem:strict morley are witnesses}By Example \ref{exa:strict Morley is a witness},
strict Morley sequences are witnesses over any set. The proof uses
Kim's and Shelah's Lemmas. But over an extension base, using Theorem
\ref{thm:characterize_witness}, the proof is much easier. Indeed,
if $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $ is
a strict Morley sequence over an extension base $A$, then by transitivity
and symmetry it is easy to see that $a_{\neq i}\mathop{\mathpalette\Ind{}}_{A}a_{i}$. In
fact, one does not need Corollary \ref{cor:Symmetry-of-strict}, just
that $a_{<i}\mathop{\mathpalette\Ind{}}_{A}a_{i}$, which follows from the definition of
strict non-forking.
\end{rem}
Another conclusion of Lemma \ref{lem:main lemma} is the following
characterization of strict Morley sequences among Morley sequences:
\begin{cor}
\textbf{\label{cor:strictmorley_char} }Let $C$ be an extension base.
A Morley sequence $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is a strict Morley sequence over $C$ iff it is a witness over $C$. \end{cor}
\begin{proof}
We already know that strict Morley sequences are witnesses. Conversely,
by transitivity, $a_{\geq i}\mathop{\mathpalette\Ind{}}_{C}a_{<i}$. This means that $\left\langle a_{j}\left|\, j\geq i\right.\right\rangle $
is a witness which is indiscernible over $\left\langle a_{j}\left|\, j<i\right.\right\rangle $.
By Lemma \ref{lem:main lemma} we are done.
\end{proof}
As a corollary of the proof, we obtain a surprising fact: a two-way
Morley sequence is strict Morley (even totally strict):
\begin{cor}
\label{cor:2way}For an extension base $A$ and an $A$-indiscernible
sequence $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
the following are equivalent:
\begin{enumerate}
\item $I$ is a two-way independent sequence over $A$ (i.e. $a_{\geq i}\mathop{\mathpalette\Ind{}}_{A}a_{<i}$
and $a_{<i}\mathop{\mathpalette\Ind{}}_{A}a_{\geq i}$).
\item $I$ is totally strict.
\end{enumerate}
\end{cor}
Going back to Example \ref{exa:rationals}, it is easy to see that
every $A$-indiscernible increasing sequence of singletons $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is Morley (because every set is an extension base, and some increasing
Morley sequence over $A$ exists and all such sequences have the same
type over $A$). Analogously, this is true for decreasing sequences
as well, so $I$ is two-way independent over $A$.
Here we present an example (due to Pierre Simon) of a strict Morley
sequence which is not totally strict. The condition above is, therefore,
not a characterization of strict Morley sequences.
\begin{example}
\label{exa:pierre}Let $T$ be the theory of dense trees in the language
$<,\wedge$ where $\wedge$ is the meet function. This theory is the
model completion of the theory of trees. It is $\omega$-categorical
and has elimination of quantifiers.
Note that there are only five types of non-constant indiscernible
sequences in this theory: increasing and decreasing sequences ($a_{n}<a_{n+1}$
or $a_{n}>a_{n+1}$), increasing and decreasing combs ($a_{n+2}\wedge a_{n+1}>a_{n+1}\wedge a_{n}$
or $a_{n+2}\wedge a_{n+1}<a_{n+1}\wedge a_{n}$), and flowers ($a_{n+1}\wedge a_{n}=a_{n+1}\wedge a_{n+2}$).
Let $M$ be a countable model containing the tree $2^{<\omega}$.
One of the branches, say $\eta\in2^{\omega}$, does not have a
point above it in $M$. Let $p\left(x\right)$ be the type $\left\{ x>\eta\upharpoonright i\left|\, i<\omega\right.\right\} $.
Note that this already determines a complete type over $M$. Let $q$
be a global strictly non-forking extension, then $q^{\left(2\right)}$
is not strictly non-forking:
Assume not, and let $\left\langle a_{i}\right\rangle \models q^{\left(\omega\right)}|_{M}$.
Then it is an indiscernible sequence. Now, since the formulas $x>a_{0}$
and $x>a_{0}\wedge a_{1}$ divide over $M$, it follows that the sequence
of pairs cannot be strictly non-forking (otherwise, by Lemma \ref{lem:(Kim's-Lemma)},
it would be a witness over $M$).
\end{example}
Let us take another look at theories with bounded alternation rank.
\begin{cor}
\label{cor:dp-minimal witness}Suppose $T$ has bounded alternation
rank, and $A$ is an extension base. Suppose that $\varphi\left(x,y\right)\in L\left(A\right)$,
$x$ is a tuple of length $n$, and let $m$ be as in Definition \ref{def:bounded alt-rank}.
Suppose $\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
is a Morley sequence over $A$ such that $a_{<k}\mathop{\mathpalette\Ind{}}_{A}a_{k}$ where $k=\left\lfloor \left(m+1\right)/2\right\rfloor $.
If $\varphi\left(x,a_{0}\right)$ forks (equivalently, divides) over $A$,
then $\left\{ \varphi\left(x,a_{i}\right)\left|\, i<k\right.\right\} $
is inconsistent.
In particular, by \cite{AlexAlfTemp}, in dp-minimal theories, $m=2n+1$,
so $k=n+1$. \end{cor}
\begin{rem}
In a general dependent theory, it is known by \cite{HP} that if $a,b\models p\in S\left(\mathfrak{C}\right)$
and $p$ does not fork over $A$ then $a$ and $b$ have the same
Lascar strong type iff there is an indiscernible sequence $\bar{c}$
over $A$ such that $a\bar{c}$ and $b\bar{c}$ are both indiscernible
over $A$. In fact, if $a\mathop{\mathpalette\Ind{}}_{A}b$ and $a$ and $b$ have the same
Lascar strong type over $A$, then $\left\langle b,a\right\rangle $
starts a Morley sequence over $A$. Indeed, let $M$ be some model
containing $A$ such that $a\mathop{\mathpalette\Ind{}}_{A}bM$, and let $p\in S\left(\mathfrak{C}\right)$
be a global non-forking extension of $\operatorname{tp}\left(a/Mb\right)$. So $p$
is invariant over $M$, and both $\bar{a}=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle \models p^{\left(\omega\right)}|_{Mab}$
and $a\frown\bar{a}$ are indiscernible over $M$. Since $p^{\left(\omega\right)}$
does not fork over $A$, it follows from Fact \ref{fac:strongsplit}
that $a\bar{a}\equiv_{A}b\bar{a}$, so both are indiscernible over
$A$. But $a_{0}a_{1}\equiv_{M}aa_{1}\equiv_{A}ba_{1}\equiv_{M}ba$
and we are done.
\end{rem}
From this we get:
\begin{cor}
In the context of Corollary \ref{cor:dp-minimal witness}, if $T$
is dp-minimal, $n=1$, $a_{0}\mathop{\mathpalette\Ind{}}_{A}a_{1}$ and $a_{1}\mathop{\mathpalette\Ind{}}_{A}a_{0}$,
$\varphi\left(x,a_{0}\right)$ forks over $A$, and $a_{0}$ and $a_{1}$
have the same Lascar strong type over $A$, then $\left\{ \varphi\left(x,a_{0}\right),\varphi\left(x,a_{1}\right)\right\} $
is inconsistent.
\end{cor}
\section{On types related to generically stable types.}
The following notion arises when studying extensions of ``special''
(e.g., generically stable) types:
\begin{defn}
We say that a type $p\in S\left(A\right)$ is \emph{co-dominated}
(over $A$) by a type $q\in S\left(A\right)$ if there are $a,b$
realizing $p,q$ respectively such that whenever $c\mathop{\mathpalette\Ind{}}_{A}b$, we
also have $c\mathop{\mathpalette\Ind{}}_{A}a$. We denote this by $p\triangleleft_{A}^{*}q$
or simply by $p\triangleleft^{*}q$ , when $A$ is clear from the
context. To specify $a$ and $b$ we write $a\triangleleft^{*}b$.
\end{defn}
Recall that a type $p\in S(A)$ is \emph{dominated} by $q\in S(A)$
($p\triangleleft q$) if there are $a,b$ realizing $p,q$ respectively
such that whenever $b\mathop{\mathpalette\Ind{}}_{A}c$, we also have $a\mathop{\mathpalette\Ind{}}_{A}c$. It
was shown in \cite{OnUs-stable} that if $q$ is generically stable
and $p\triangleleft q$, then $p$ is also generically stable. In
this section we investigate (using the techniques developed earlier
in the paper) types co-dominated by generically stable types.
\begin{thm}
\label{thm:dombystable-strict}Let $A$ be an extension base. Suppose
$p\in S(A)$ is generically stable and $q\triangleleft_{A}^{*}p$. Then
any Morley sequence in $q$ over $A$ is in fact a strict Morley sequence.\end{thm}
\begin{proof}
To simplify the notation, assume $A=\emptyset$. Suppose $\left\langle b_{i}\left|i<\omega\right.\right\rangle $
is a Morley sequence in $q$. By Corollary \ref{cor:strictmorley_char},
it is enough to show that $b_{\neq i}\mathop{\mathpalette\Ind{}} b_{i}$ for all $i$.
\begin{claim*}
There exists $\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $
such that\end{claim*}
\begin{itemize}
\item $a_{i}\models p$
\item $\left\langle a_{i}b_{i}\left|\, i<\omega\right.\right\rangle $ is
a non-forking sequence,
\item $a_{i}b_{i}\equiv a_{j}b_{j}$ for all $i,j$, and, most importantly,
\item $b_{i}\triangleleft^{*}a_{i}$.\end{itemize}
\begin{proof}
For $i<\omega$, we define $I_{i}=\left\langle a_{i}^{j}\left|\, j<i\right.\right\rangle $
satisfying the conditions of the claim for $b_{<i}$, and such that if $i<i'$
then $b_{<i}I_{i}\equiv b_{<i}I_{i'}\upharpoonright i$.
For $i=0$, there is nothing to define.
For $i=1$, let $a_{1}^{0}\models p$ be such that $b_{0}\triangleleft^{*}a_{1}^{0}$.
For the step from $i\geq1$ to $i+1$, let $I_{i}'\equiv_{b_{<i}}I_{i}$ be such that $b_{i}\mathop{\mathpalette\Ind{}} b_{<i}I_{i}'$.
Now, by left extension (Corollary \ref{cor:Left Extension}), find
some $a_{i+1}^{i}$ such that $a_{1}^{0}b_{0}\equiv a_{i+1}^{i}b_{i}$
and $a_{i+1}^{i}b_{i}\mathop{\mathpalette\Ind{}} I_{i}'b_{<i}$. So letting $I_{i+1}=I_{i}'{}^{\frown}a_{i+1}^{i}$
(i.e. $a_{i+1}^{j}$ is the $j$-th element of $I_{i}'$ for $j<i$),
we are done.
Now use compactness to finish: find $\left\langle a_{i}\left|i<\omega\right.\right\rangle $
such that $a_{<i}b_{<i}\equiv I_{i}b_{<i}$ for all $i<\omega$.
\end{proof}
Since we can find a sequence $\langle a_{i}b_{i}\rangle$ satisfying
the requirements of the claim above of any length, we can use Erd\H{o}s-Rado
to find one which is also indiscernible (of course, the sequence $\langle b_{i}\rangle$
will now change, but we are keeping its type, which is all that is
important).
Now assume that the order type of the sequence $\langle a_{i}b_{i}\rangle$
is $\left(\mathbb{Q},<\right)$.
Note that $I=\left\langle a_{i}\left|\, i\in\mathbb{Q}\right.\right\rangle $
is a Morley sequence in $p$, so (since $p$ is generically stable)
it is an indiscernible set.
Recall that we are trying to show that $b_{\neq i}\mathop{\mathpalette\Ind{}} b_{i}$ for
all $i$. Assume that $b_{\neq i}\mathop{\mathpalette\Notind{}} b_{i}$; then by co-dominance,
$b_{\neq i}\mathop{\mathpalette\Notind{}} a_{i}$. By symmetry for generically stable types
\cite[Lemma 8.5]{Us}, $a_{i}\mathop{\mathpalette\Notind{}} b_{\neq i}$. Hence there is a
formula $\varphi\left(a_{i},b_{\neq i}\right)$ that shows it (i.e.
$\varphi\left(x,b_{\neq i}\right)$ divides). Write this formula as
$\varphi\left(a_{i},b_{<i-\varepsilon},b_{>i+\varepsilon}\right)$
for some $\varepsilon\in\mathbb{Q}$ small enough. By indiscernibility,
$\varphi\left(a_{j},b_{<i-\varepsilon},b_{>i+\varepsilon}\right)$
holds for all $j\in\left(i-\varepsilon,i+\varepsilon\right)$. Since
$I$ is an indiscernible set, and $T$ is dependent, $\varphi\left(a_{j},b_{<i-\varepsilon},b_{>i+\varepsilon}\right)$ holds
for almost all (i.e. all but finitely many) $j\in\mathbb{Q}$. But
$b_{>i+\varepsilon}$ is bounded, i.e. it is in fact contained in
$b_{<i'}$ for some $i'\in\mathbb{Q}$. So for some $j>i'$, $a_{j}\mathop{\mathpalette\Notind{}} b_{<i'}$
--- a contradiction. \end{proof}
\begin{cor}
\label{cor:dombystable-symm}Suppose $A$ is an extension base, $p,q\in S\left(A\right)$,
$q\triangleleft^{*}p$ and $p$ is generically stable.
Then: for any $b$, if $a\mathop{\mathpalette\Ind{}}_{A}b$ and $a\models q$ then $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$.
In particular, forking is symmetric for realizations of $q$ {[}caution!
not necessarily for \emph{tuples} of realizations{]}.\end{cor}
\begin{proof}
Suppose $a\mathop{\mathpalette\Ind{}}_{A}b$. Let $r$ be a global type extending $\operatorname{tp}\left(a/Ab\right)$
which does not fork over $A$. Generate a Morley sequence $I$
starting with $a$ using $r$, so $I$ is indiscernible over $Ab$,
but also $I\mathop{\mathpalette\Ind{}}_{A}b$. As $I$ is a strict Morley sequence by the previous
theorem, it follows that $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$ by Lemma \ref{lem:main lemma}. \end{proof}
\begin{rem}
If for every $a\models q$ and every $b$, $b\mathop{\mathpalette\Ind{}}_{A}a$ implies
$a\mathop{\mathpalette\Ind{}}_{A}b$, then $q$ is generically stable:
Let $\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $ be
a Morley sequence in $q$. Then by transitivity and symmetry, $a_{1}\mathop{\mathpalette\Ind{}}_{A}a_{0}a_{2}$,
so $a_{1}a_{0}\equiv_{A}a_{1}a_{2}\equiv_{A}a_{0}a_{1}$. A similar
argument shows that this is an indiscernible set. \end{rem}
\begin{example}
Let $T$ be the theory of a dense linear order, and let $q$ be the
type of a singleton over the empty set. It is easy to see that $q$
satisfies the conclusions of both Theorem \ref{thm:dombystable-strict}
and Corollary \ref{cor:dombystable-symm}. So none of these conditions
imply generic stability.
\end{example}
\begin{example}
(due to Pierre Simon) Let $L=\left\{ <,E\right\} $ and $T$ be the
theory of a dense linear order with an equivalence relation with dense
classes. Let $p$ be the type in $T^{\operatorname{eq}}$ over $\emptyset$ of any
$E$-class. Then $p$ is generically stable. Let $q$ be the type
of any element in the home sort. Then $q\triangleleft^{*}p$ as witnessed
by taking $a$ and $a/E$ for some $a$. But $q$ is not generically
stable.
\end{example}
\begin{example}
It is possible to modify this example to have that $q$ is itself
an extension of a generically stable type. The idea is to add another
equivalence relation $E'$ which is a coarsening of $E$, and the
order relation only applies to $E'$-equivalent elements. Then, the
type of any element $a$ in the home sort is generically stable, but
if $a'E'a$ then $\operatorname{tp}\left(a'/a\right)$ is no longer generically
stable; it is, however, still co-dominated by the type of its $E$-class
in $T^{\operatorname{eq}}$ over $a$.
\end{example}
\begin{example}
One can even construct an example where $q$ is a forking (not generically
stable) extension of a generically stable type $p'$, such that $q$
is co-dominated by the non-forking (hence generically stable) extension
of $p'$. Let $L=\left\{ <,E_{1},E_{2}\right\} $ and let $T$ be
the model completion of the following universal theory:
\begin{itemize}
\item $E_{1}$ and $E_{2}$ are equivalence relations.
\item $<$ is a partial order.
\item If $x<y$ then $xE_{1}y$, and $<$ is a linear order on each $E_{1}$-class.
\end{itemize}
This theory has the amalgamation property and the joint embedding
property, hence the model completion exists and has elimination of
quantifiers (see \cite[Theorem 7.4.1]{Hod}). Note that every set
is an extension base. Let $M\models T$ and $a\in M$. Then it is
easy to see that $\operatorname{tp}\left(a/\emptyset\right)$ is generically stable.
Let $b$ realize the unique non-forking (hence generically stable)
extension of $\operatorname{tp}\left(a/\emptyset\right)$ over $a$, and let $p=\operatorname{tp}\left(b/a\right)$.
Let $a'$ be such that $a'E_{1}a$ and $a'E_{2}b$ (so $\neg\left(a'E_{1}b\right)$
and $\neg\left(a'E_{2}a\right)$). Let $q=\operatorname{tp}\left(a'/a\right)$.
Then $q$ is not generically stable, but $a'\triangleleft^{*}b$:
Suppose $c\mathop{\mathpalette\Ind{}}_{a}b$, and we need to show that $c\mathop{\mathpalette\Ind{}}_{a}a'$. By
quantifier elimination we may assume that $c$ is a singleton. There
is only one possible reason that $c\mathop{\mathpalette\Notind{}}_{a}a'$: $cE_{2}a'$. But
since $a'E_{2}b$ and $c\mathop{\mathpalette\Ind{}}_{a}b$ this does not hold.
\end{example}
\section{\label{sec:My-weight-problem}Strict notions of weight }
During a talk by the second author on an early version of this paper,
Anand Pillay asked whether there is a notion of weight, based on the
notions of forking discussed here, that characterizes strong dependence. In
this section we confirm that this is indeed the case. Furthermore,
we isolate notions of weight based on strict independence that characterize
dependence, strong dependence, and the tree property of the second
kind.
We recall the relevant definitions:
\begin{defn}
A theory has the\emph{ tree property of the second kind} if there
is a formula $\varphi\left(x,y\right)$, $k<\omega$ and an array
$\left\langle a_{i,j}\left|\, i,j<\omega\right.\right\rangle $ such
that:
\begin{itemize}
\item All rows are $k$-inconsistent: for all $i$, $\left\{ \varphi\left(x,a_{i,j}\right)\left|\, j<\omega\right.\right\} $
is $k$-inconsistent (every $k$-subset is inconsistent).
\item All vertical paths are consistent: for all $\eta:\omega\to\omega$,
$\left\{ \varphi\left(x,a_{i,\eta\left(i\right)}\right)\left|\, i<\omega\right.\right\} $
is consistent.
\end{itemize}
A theory is $\operatorname{NTP}_{\operatorname{2}}$ if it does not have the tree property of the
second kind.
\end{defn}
Note that dependent theories are $\operatorname{NTP}_{\operatorname{2}}$.
\begin{defn}
We say that $\operatorname{tp}\left(a/Ab\right)$ is strictly invariant over $A$
(we write $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{ist}}b$) if there is a global extension $p$
of $\operatorname{tp}\left(a/Ab\right)$ which is Lascar invariant over $A$ (i.e.
does not strongly split over $A$), and for any $B\supseteq Ab$,
if $c\models p|_{B}$ then $\operatorname{tp}\left(B/Ac\right)$ does not divide
over $A$.
\end{defn}
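\begin{rem}
Note that $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{ist}}b$ implies $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}b$: a global
type which is Lascar invariant over $A$ cannot contain a formula
dividing over $A$, and a global (complete) type forks over $A$ if
and only if it divides over $A$.
\end{rem}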
We can define a strictly invariant sequence and strictly invariant
Morley sequences using $\mathop{\mathpalette\Ind{}}^{\operatorname{ist}}$ instead of $\mathop{\mathpalette\Ind{}}^{\operatorname{st}}$. The
proof of Shelah's Lemma \ref{lem:(Shelah's-Lemma)} goes through and
so if $\mathcal{J}=\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $
is a strictly invariant sequence over a set $A$, then it is strictly
independent.
\begin{defn}
A theory is \emph{strongly dependent} if there is no sequence of formulas
$\left\langle \varphi_{i}\left(x,y\right)\left|\, i<\omega\right.\right\rangle $
and an array $\left\langle a_{i,j}\left|\, i,j<\omega\right.\right\rangle $
such that for every $\eta:\omega\to\omega$, the following set is
consistent
\[
\left\{ \varphi_{i}\left(x,a_{i,j}\right)^{\eta\left(i\right)=j}\left|\, i,j<\omega\right.\right\} ,
\]
where $\varphi^{1}=\varphi$ and $\varphi^{0}=\neg\varphi$ (the exponent $\eta\left(i\right)=j$ is read as a truth value). \end{defn}
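A standard example of a theory which is dependent but not strongly
dependent: the model completion of the theory of infinitely many
independent equivalence relations $\left\langle E_{i}\left|\, i<\omega\right.\right\rangle $
(this theory is stable, hence dependent). Taking $\varphi_{i}\left(x,y\right)=xE_{i}y$
and letting $\left\langle a_{i,j}\left|\, j<\omega\right.\right\rangle $
enumerate representatives of pairwise distinct $E_{i}$-classes, every
pattern as above is consistent by genericity.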
\begin{lem}
\label{lem:A invariant is M strictly invariant}If $p$ is a global
$A$-invariant type and $M\supseteq A$ is $\left|A\right|^{+}$-saturated,
then $p$ is an heir over $M$. In this case, $p$ is strictly invariant
over $M$. In particular, a Morley sequence that $p$ generates over
$M$ is totally strict. \end{lem}
\begin{proof}
Suppose $\varphi\left(x,c,m\right)\in p$ for $m\in M$. Let $c'\equiv_{Am}c$
be in $M$. Then $\varphi\left(x,c',m\right)\in p$ by invariance
over $A$, and we are done by Fact \ref{fac:strict_exist}.
\end{proof}
In the following definition, $\mathop{\mathpalette\Ind{}}$ is any notion of independence,
but we think of it as being $\mathop{\mathpalette\Ind{}}^{f}$, $\mathop{\mathpalette\Ind{}}^{d}$, $\mathop{\mathpalette\Ind{}}^{s}$ or
$\mathop{\mathpalette\Ind{}}^{i}$, where $a\mathop{\mathpalette\Ind{}}_{A}^{s}b$ means that $\operatorname{tp}\left(a/bA\right)$
does not strongly split over $A$ and $a\mathop{\mathpalette\Ind{}}_{A}^{i}b$ means that
$\operatorname{tp}\left(a/bA\right)$ has a global Lascar invariant extension over
$A$. By Fact \ref{fac:strongsplit}, when $T$ is dependent, $\mathop{\mathpalette\Ind{}}^{f}=\mathop{\mathpalette\Ind{}}^{i}$.
\begin{defn}
We say that $p\in S\left(B\right)$ has $\mathop{\mathpalette\Ind{}}$ pre-weight at least
$\alpha$ if there is a $B$-strictly independent set $\left\{ a_{i}\left|\, i<\alpha\right.\right\} $
and $a\models p$ such that $a\mathop{\mathpalette\Notind{}}_{B}a_{i}$ for all $i$. We
say that it has pre-weight $<\alpha$ if it does not have
pre-weight at least $\alpha$. \end{defn}
\begin{thm}
$ $
\begin{enumerate}
\item $T$ is $\operatorname{NTP}_{\operatorname{2}}$ iff all types have $\mathop{\mathpalette\Ind{}}^{d}$ pre-weight $<\left|T\right|^{+}$
iff all types over models have $\mathop{\mathpalette\Ind{}}^{f}$ / $\mathop{\mathpalette\Ind{}}^{d}$ pre-weight
$<\left|T\right|^{+}$.
\item $T$ is dependent iff all types have $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight $<\left|T\right|^{+}$
iff all types over models have $\mathop{\mathpalette\Ind{}}^{i}$/\textup{$\mathop{\mathpalette\Ind{}}^{s}$} pre-weight
$<\left|T\right|^{+}$.
\item $T$ is strongly dependent iff all types have $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight
$<\aleph_{0}$ iff all types over models have $\mathop{\mathpalette\Ind{}}^{i}$ / $\mathop{\mathpalette\Ind{}}^{s}$
pre-weight $<\aleph_{0}$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) Suppose $T$ is $\operatorname{NTP}_{\operatorname{2}}$ and let $p\in S\left(B\right)$. If it has $\mathop{\mathpalette\Ind{}}^{d}$
pre-weight $\geq\left|T\right|^{+}$, then there is a $B$-strictly
independent set $\left\{ a_{i}\left|\, i<\left|T\right|^{+}\right.\right\} $
and $a\models p$ such that $a\mathop{\mathpalette\Notind{}}_{B}^{d}a_{i}$ for all $i$. So
for each $i$ there is a formula $\varphi_{i}\left(x,y\right)$ over
$B$ such that $\varphi_{i}\left(x,a_{i}\right)$ divides over $B$.
As there are only $\left|T\right|$ many formulas (absorbing parameters from $B$ into the $a_{i}$),
we may assume that $\varphi_{i}=\varphi$ for all $i<\omega$, and this
is a contradiction to Lemma \ref{lem:(Kim's-Lemma)} which also applies
in $\operatorname{NTP}_{\operatorname{2}}$ theories. Since over models, forking equals dividing,
we are done.
On the other hand, suppose $T$ is not $\operatorname{NTP}_{\operatorname{2}}$, i.e. it has the tree
property of the second kind for some formula $\varphi\left(x,y\right)$.
So there is an array $\left\langle a_{i,j}\left|\, i,j<\omega\right.\right\rangle $,
such that each row is $k$-inconsistent and each vertical path is consistent.
We may assume by Ramsey and compactness that the rows are mutually
indiscernible and that the depth of the array is $\left|T\right|^{+}$.
For $i<\left|T\right|^{+}$, let $A_{i}=\left\{ a_{i,j}\left|\, j<\omega\right.\right\} $,
and $p_{i}$ a global co-heir over $A_{i}$ containing $\left\{ x\neq a_{i,j}\left|\, j<\omega\right.\right\} $.
Let $A=\bigcup_{i<\left|T\right|^{+}}A_{i}$ and $M\supseteq A$ some
$\left|A\right|^{+}$ saturated model.
Define $b_{i,j}$ recursively by $b_{i,j}\models p_{i}|_{Mb_{i,<j}b_{<i}}$.
So we have an array $\left\langle b_{i,j}\left|\, i<\left|T\right|^{+},j<\omega\right.\right\rangle $.
It is easy to see that $\left\langle b_{i,j}\left|\, i<\left|T\right|^{+},j<\omega\right.\right\rangle $
still witnesses the tree property of the second kind for $\varphi$.
The first column $\left\langle b_{i,0}\left|\, i<\left|T\right|^{+}\right.\right\rangle $
is a strictly invariant sequence over $M$, by choice of $p_{i}$
and Lemma \ref{lem:A invariant is M strictly invariant}. Let $a\models\left\{ \varphi\left(x,b_{i,0}\right)\left|\, i<\left|T\right|^{+}\right.\right\} $,
so $p=\operatorname{tp}\left(a/M\right)$ has $\mathop{\mathpalette\Ind{}}^{d}$ pre-weight at least $\left|T\right|^{+}$
and we are done.
Note also that in the proof, the sequence of infinite tuples
$\left\{ \left\langle b_{i,j}\left|\, j<\omega\right.\right\rangle \left|\, i<\left|T\right|^{+}\right.\right\} $
is itself strictly invariant, because $p_{i}^{\left(\omega\right)}$
is $A$-invariant. Since each such tuple is an indiscernible sequence
this means that $p$ has $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight at least $\left|T\right|^{+}$
as well.
(2) Suppose $T$ is dependent. So it is also $\operatorname{NTP}_{\operatorname{2}}$. By Fact \ref{fac:strongsplit},
and since forking = dividing over models, (1) implies left to right.
Conversely, assume all types over models have $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight
$<\left|T\right|^{+}$. If $T$ is not dependent then there is a formula
$\varphi\left(x,y\right)$ and an array of mutually indiscernible
sequences $\left\langle a_{i,j}\left|\, i<\left|T\right|^{+},\, j<\omega\right.\right\rangle $
such that the set $\left\{ \varphi\left(x,a_{i,0}\right)\land\neg\varphi\left(x,a_{i,1}\right)\left|\, i<\left|T\right|^{+}\right.\right\} $
is consistent. Now apply the same proof as (1), and note that since
$p_{i}^{\left(2\right)}$ is also $A$-invariant, the sequence of
pairs $\left\langle b_{i,0},b_{i,1}\right\rangle $ is also strictly
invariant.
(3) Suppose $T$ is strongly dependent. If $p\in S\left(B\right)$
has $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight at least $\aleph_{0}$, then we can construct
an array of mutually indiscernible sequences over $B$, $\left\langle a_{i,j}\left|\, i,j<\omega\right.\right\rangle $
and such that $\left\{ \varphi_{i}\left(x,a_{i,0}\right)\land\neg\varphi_{i}\left(x,a_{i,1}\right)\left|\, i<\omega\right.\right\} $
is consistent:
There is some $a\models p$ and a strictly independent set of pairs
$\left\{ \left(a_{i,0},a_{i,1}\right)\left|\, i<\omega\right.\right\} $
over $B$ such that each pair starts an indiscernible sequence over
$B$ and $\varphi_{i}\left(a,a_{i,0}\right)\land\neg\varphi_{i}\left(a,a_{i,1}\right)$
holds. By definition of strict independence, there are $B$-mutually
indiscernible sequences $I_{i}=\left\langle \left(b_{j}^{i,0},b_{j}^{i,1}\right)\left|\, j<\omega\right.\right\rangle $
(of pairs) starting with $\left(a_{i,0},a_{i,1}\right)$. By dependence,
$\left\langle \ldots b_{j}^{i,0},b_{j}^{i,1}\ldots\right\rangle $
are also mutually indiscernible over $B$ (if not, one can easily find
an infinite alternation). The resulting array contradicts strong dependence
(for $\varphi_{i}$ in the definition, take $\psi_{i}\left(x,y,z\right)=\varphi_{i}\left(x,y\right)\land\neg\varphi_{i}\left(x,z\right)$).
By Fact \ref{fac:strongsplit}, and since forking = dividing over
models, left to right holds.
Conversely, assume all types over models have $\mathop{\mathpalette\Ind{}}^{s}$ pre-weight
$<\aleph_{0}$, and proceed as in (2). \end{proof}
\begin{rem}
One can change the definition of weight so that instead of $\left\{ a_{i}\right\} $
being a strictly independent set, it would be a strictly invariant
sequence. The theorem would go through with essentially the same proof.
\end{rem}
\begin{rem}
Clause (1) of the theorem above and its proof are similar
to \cite[Theorem 36]{ChernikovNTP2}, but were done independently.
\end{rem}
\section{\label{sec:Non-NIP-theories}Independent theories}
Throughout this paper, except Section \ref{sec:My-weight-problem},
we assumed that $T$ was a dependent theory. Some of the results presented
do not actually require this assumption. Most importantly, Lemma \ref{lem:main lemma}
works in any theory in which forking = dividing over extension bases,
e.g. $\operatorname{NTP}_{\operatorname{2}}$ theories. We will now use this fact to show that strict
non-forking is symmetric for elements of a strict Morley sequence
in a subclass of $\operatorname{NTP}_{\operatorname{2}}$, namely the class of resilient theories.
Resilient theories were introduced recently, in \cite{cheBenya} (after
the present paper was essentially finished). After learning
of this definition, we realized that some of our results generalize
easily to this bigger class, and decided to include this here.
\begin{defn}
A theory $T$ is called \emph{resilient }if for any indiscernible
sequence, $I=\left\langle a_{i}\left|\, i\in\mathbb{Z}\right.\right\rangle $
and formula $\varphi\left(x,y\right)$, if $\varphi\left(x,a_{0}\right)$
divides over $a_{\neq0}$, then $\varphi\left(x,I\right)=\left\{ \varphi\left(x,a_{i}\right)\left|\, i\in\mathbb{Z}\right.\right\} $ is inconsistent. \end{defn}
\begin{fact}
\cite{cheBenya} Simple and dependent theories are resilient. Resilient
theories are $\operatorname{NTP}_{\operatorname{2}}$.
\end{fact}
In fact, implicit in our proof of Proposition \ref{prop:witness alternation rank}
is the proof that dependent theories are resilient.
The most important property of resilient theories for us is:
\begin{lem}
\label{lem:still the same witnesses}Theorem \ref{thm:characterize_witness}
holds for resilient theories over extension bases.\end{lem}
\begin{proof}
Suppose $A$ is an extension base and $I=\left\langle a_{i}\left|\, i\in\mathcal{I}\right.\right\rangle $ is
an $A$-indiscernible sequence such that for every $i$ we have $a_{\neq i}\mathop{\mathpalette\Ind{}}_{A}a_{i}$.
Then, if $\varphi\left(x,y\right)$ is over $A$ and $\varphi\left(x,a_{i}\right)$
divides over $A$ for some $i$, then it also divides over $Aa_{\neq i}$
by Fact \ref{fac:dividing}. By definition of resilience, this means
that $\varphi\left(x,I\right)$ is inconsistent. This proves that
$I$ is a witness.
The other direction is exactly the same as in the proof of Theorem
\ref{thm:characterize_witness}. \end{proof}
\begin{thm}
\label{thm:resilient}Assume that $T$ is resilient, and $A$ is an
extension base. Then:
\begin{enumerate}
\item Strict Morley sequences over $A$ are witnesses.
\item Morley sequences which are witnesses over $A$ are strict Morley.
\item If $I=\left\langle a_{i}\left|\, i<\omega\right.\right\rangle $ is
a strict Morley sequence then for any $i$ , $a_{i}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{i+1}$
(i.e. strict non-forking is symmetric for elements from $I$).
\end{enumerate}
\end{thm}
\begin{proof}
The proof of (1) is the same as Remark \ref{rem:strict morley are witnesses}
(using Lemma \ref{lem:still the same witnesses}).
(2) follows from Lemma \ref{lem:main lemma}, exactly as in Corollary
\ref{cor:strictmorley_char}.
(3) Expand $I$ to order type $\mathbb{Z}$. Then $a_{0}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{<0}$.
So $a_{<0}\mathop{\mathpalette\Ind{}}_{A}a_{0}$. But $a_{<0}$ is an $Aa_{0}$-indiscernible
witness over $A$ by (1). So by Lemma \ref{lem:main lemma}, $a_{-1}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{0}$.
\end{proof}
For general $\operatorname{NTP}_{\operatorname{2}}$ we get a weak form of symmetry (that implies
the one we have for dependent theories):
\begin{thm}
($\operatorname{NTP}_{\operatorname{2}}$) If $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{ist}}b$ for an extension base $A$, then
$b\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$.\end{thm}
\begin{proof}
Let $I$ be a strictly invariant sequence starting with $b$, indiscernible
over $A$. Then $I$ is a witness. We may assume that $a\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{ist}}I$.
Since strict invariance preserves indiscernibility, $I$ is indiscernible
over $a$. By definition, $I\mathop{\mathpalette\Ind{}}_{A}a$. Lemma \ref{lem:main lemma}
implies that $b\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a$.
\end{proof}
Let us conclude with a remark on simple theories:
\begin{thm}
\label{thm:simple}Strict non-forking equals forking iff $T$ is simple.\end{thm}
\begin{proof}
Lemma \ref{lem:stind-eqdef} holds in any theory. If $T$ is simple,
then non-forking is symmetric, and hence by that lemma, equals strict
non-forking. On the other hand, if $T$ is not simple, then the proof
of \cite[Proposition 2.3.8]{WagnerBook} gives some $a,b,A$ such
that $a\mathop{\mathpalette\Ind{}}_{A}b$ and $b\mathop{\mathpalette\Notind{}}_{A}^{d}a$, so strict non-forking does
not equal non-forking.
\end{proof}
Note that by Theorem \ref{thm:resilient} (1) and Theorem \ref{thm:simple},
Kim's lemma in the version that states that strict Morley sequences
are witnesses over extension bases is indeed a generalization of Kim's
lemma for simple theories. Also note that by \cite{kachernikov},
in $\operatorname{NTP}_{\operatorname{2}}$ theories every extension base is an $\mathop{\mathpalette\Ind{}}^{\operatorname{st}}$-extension
base (and in simple theories every set is an extension base).
\section{\label{sec:Problems}Problems}
\begin{problem}
Is it the case that given a model $M$ and tuples $a,b$, we have
$a\mathop{\mathpalette\Ind{}}_{M}^{\operatorname{st}}b$ if and only if $a\mathop{\mathpalette\Ind{}}_{M}b$ and $b\mathop{\mathpalette\Ind{}}_{M}a$?
\end{problem}
\begin{problem}
Are all witnesses strictly independent? This seems a bit too strong,
but perhaps with some more assumptions this becomes true. Note that
by Lemma \ref{lem:main lemma} and Theorem \ref{thm:characterize_witness},
if $\left\langle a_{i}\right\rangle $ is an indiscernible witness
over an extension base $A$, then for any $i\neq j$, $a_{i}\mathop{\mathpalette\Ind{}}_{A}^{\operatorname{st}}a_{j}$,
so any pair is strictly independent.
\end{problem}
\begin{problem}
Theorem \ref{thm:Main} means that over extension bases strict independence
for two elements (tuples) is the same as strict non-forking. It is
not so clear what could be the appropriate analogue of this result
for an arbitrary set of tuples.
\end{problem}
\begin{problem}
Is strict non-forking symmetric in $\operatorname{NTP}_{\operatorname{2}}$ theories? Resilient theories?
\end{problem}
\bibliographystyle{alpha}
\section{Introduction}
In the recent past, the critical behavior of frustrated spin systems has
been the subject of intensive theoretical, numerical and experimental
studies (see \cite{plumer}\cite{aza1} and references therein
and \cite{aza2}). However,
there is still no definite conclusion about the nature of the phase
transition that occurs in these systems. One of the most striking
features of frustrated models is their non trivial ground state which for
continuous spin models is in general a canted ground state. Well known examples
are incommensurate helimagnets and triangular antiferromagnets among
others. As a consequence of the non collinear ordering the $O(3)$ spin
rotation group is completely broken in the low temperature phase so that the
relevant order parameter is a rotation matrix instead of a vector as
in ferromagnetic-like models. One may thus wonder if canted spin models belong
to a new universality class. Up to now, no definite answer is known.
Experiments on rare earth helimagnets such as Ho, Tb, or Dy for
example, do not show any clear evidence for a universal critical behaviour.
{}From a theoretical point of view, early renormalization group $(RG)$
studies by Garel and Pfeuty\cite{garel}
found no stable fixed point in the neighborhood of $D=4$
by studying the Landau-Ginzburg theory of a commensurate helimagnet.
This is an example of a fluctuation-induced first order transition. However,
early Monte-Carlo studies
of several canted models \cite{diep}\cite{kawa} found evidence for a
continuous transition in three dimensions. If taken seriously these
numerical results are in contradiction with the $4-\epsilon $ prediction.
Of course, it is notoriously difficult to discriminate between first
and second order phase transitions in Monte-Carlo studies
(the two-dimensional five-state Potts model is a notoriously
difficult case, for example). Strictly speaking, one cannot
exclude the existence of a stable fixed point which manifests itself at
{\sl a finite distance from $D=4$}, unreachable in an $\epsilon$ expansion
around $D=4$. If this happens to be true, then standard perturbative methods
are of no use to study the critical behavior of canted spin models.
There is an alternative perturbative approach to this problem which is the
low temperature expansion of the non-linear sigma (NL$\sigma)$ model suited to
the symmetry breaking scheme of these canted models.
In this paper we focus on a simple commensurate helimagnet which is
the triangular antiferromagnet with N-component classical spins. By
stacking triangular planes, this magnet exists in all integer dimensions
($D\ge 2$).
In the case of the triangular antiferromagnet (AFT) with Heisenberg classical
spins i.e. $N=3$, the massless modes live in the homogeneous space $G/H=
O(3)\otimes O(2)/O(2)$. Some results from the
$D=2+\epsilon$ expansion of a NL$\sigma$ model based on this coset $G/H$
have been recently reported\cite{aza1}\cite{aza2}. It has been found
that, up to two loops, a stable fixed point, which is the $N=4$ Wilson-Fisher
fixed point, shows up in the vicinity of $D=2$. Thus {\sl no new
universality class is required in the case of canted spin models}.
One meets the general phenomenon of increased symmetry at a critical
point since at this point the model is $O(3)\otimes O(3)= O(4)$ instead of
$O(3)\otimes O(2)$ symmetric.
In this paper, we extend our previous analysis to the case with $N\ge 3$
components. We build up the relevant nonlinear sigma model
and analyze its RG properties by standard field-theoretic techniques.
If one believes that both the $\epsilon =4-D$ and $\epsilon =D-2$
perturbative results can be extended to non-zero $\epsilon$, in the
neighborhood of $D=2$ and $D=4$, as it is the case for the $O(N)$ models,
the simplest hypothesis which agrees with both
$\epsilon$ expansions is the following: there is a tricritical surface
separating the basin of attraction of the $O(4)$ fixed point found near
$D=2$ from a first
order runaway region found in the vicinity of $D=4$. The phase transition
of canted magnets is thus
either first order or second order with $O(4)$ or tricritical exponents
(i.e. mean-field in $D=3$). This hypothesis has been previously proposed in
ref.\cite{aza1}. Recent extensive Monte-Carlo studies \cite{bata} performed
directly in $D=3$ point towards a second-order transition with $O(4)$
exponents, a fact that was missed by previous lower-statistics studies.
This means that presumably the critical surface for $N=3$ lies between
$D=3$ and $D=4$.
The manifold $G/H$ is topologically equivalent
to $O(3)$ but as metric spaces they are different. The RG properties
of the corresponding non-linear sigma model
are {\it a priori} sensitive to the metric properties.
However the study of purely topological properties can be performed
directly on $O(3)$ as in ref.\cite{kawa2}. The study of defects reveals
the presence of $Z_2$ vortices that are probably liberated in the
high-temperature phase of the strictly two-dimensional AFT model.
In this work we will ignore global aspects and concentrate on
configurations with zero vorticity, leaving for the future the study
of the defects on the phase transition.
In this paper, we present the detailed renormalization group
study of canted spin systems in $D= 2+\epsilon$. In section I we
show how the effective continuum action is obtained from a lattice Hamiltonian
with Heisenberg spins. In section II the group theoretical
construction of the non-linear sigma (NL$\sigma)$ model is presented.
In section III the two loop recursion relations as well as the
Callan-Symanzik $\gamma$-function are given. The critical exponents $\nu$
and $\eta$ are calculated. Special attention is given to the nature
of the order parameter which is shown to belong to the tensor representation
of $O(4)$. In section IV known results from both
$4-\epsilon$ and $1/N$ expansions are recalled for convenience.
They are discussed and compared with the
$2+\epsilon$ results in section V. Our conclusions are contained in section
VI.
\section{ Continuum limit and effective action}
\subsection{General analysis}
The effective action that describes the long distance behavior of a
lattice model is obtained by taking the continuum limit of the microscopic
Hamiltonian:
\begin{equation}
H=-\sum_{ij}J_{ij}{\bf S}_i.{\bf S}_j .
\label{ham}
\end{equation}
In this equation the vectors ${\bf S}_i$ are classical Heisenberg spins
with fixed unit length.
In a ferromagnetic system, this continuum limit is achieved
by letting the spins ${\bf S}$ fluctuate around their common
expectation value. Relative fluctuations between neighboring spins are
assumed to be smooth
enough so that we may replace ${\bf S}_i.{\bf S}_j$ by $(\nabla
{\bf S}(x))^2$.
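For unit spins this replacement is just the gradient expansion of the identity ${\bf S}_i.{\bf S}_j=1-{1\over2}({\bf S}_i-{\bf S}_j)^2$: writing ${\bf S}_j-{\bf S}_i\simeq a\nabla{\bf S}(x)$ for slowly varying configurations, where $a$ denotes the lattice spacing (a notation we use only for this estimate), one gets
\begin{equation}
{\bf S}_i.{\bf S}_j\simeq 1-{a^2\over2}\left(\nabla{\bf S}(x)\right)^2 ,
\end{equation}
the constant term being an irrelevant shift of the energy.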
When the interaction distribution $\{J_{ij}\}$ leads to a canted ground state
the continuum limit is less obvious since neighboring spins do not
fluctuate around the same mean expectation value. To overcome this
difficulty, one has
to consider the magnetic cell with $n$ sublattices
(${\bf S}^1,\dots, {\bf S}^n$) as the basis of a new
superlattice where the continuum limit is taken. Practically, this procedure
depends on the detailed microscopic
model: lattice symmetry, ground state structure and
interaction parameters. We shall however give qualitative arguments valid for
many canted models.
\noindent Let us define in each elementary cell an orthonormal basis $\{{\bf
e}_a(x)\}$:
\begin{equation}
{\bf e}_a(x).{\bf e}_b(x) =\delta_{ab}\ \ ;\ \ a=1,2,3 \quad ,
\label{eaeb}
\end{equation}
where $x$ is a superlattice
index.
We may parametrize our $n$ sublattice spins ${\bf S}^\alpha(x),
\alpha=1,..,n$ as:
\begin{equation}
{\bf S}^\alpha(x)=\sum_a C^\alpha_a(x){\bf e}_a(x) .
\end{equation}
In the ground state, all the $ {\bf S}^\alpha$ are in general
{\it not}
independent. There is in fact a maximum of three of them which
are independent. Equivalently, there is a minimum of $n-3$ linear combinations
of the ${\bf S}^\alpha (x)$ which have zero expectation value in the ground
state. Such combinations cannot be part of an order parameter. They
correspond to relative motions of the spins within each unit cell. They are
massive modes with short range correlations and are
thus irrelevant to the critical behavior. We
ignore them by imposing the constraints that {\it locally}, i.e. within each
unit cell, the spins are in the
ground state configuration. We call this requirement ``local rigidity''.
Thus, up to an appropriate field redefinition, the order parameter of canted
magnets will be the orthonormal basis $\{{\bf e}_a (x)\}$ defined on each
site of the superlattice.
As a consequence canted magnets are equivalent in the critical
region to a system of interacting solid rigid bodies. The continuum
effective action ${\cal S}_1$ may be obtained through the standard gradient
expansion of the ${\bf e}_a(x)$ as in ferromagnets:
\begin{equation}
\begin{array}{l}
{\displaystyle{{\cal S}_1={1\over2}\int d^Dx\ \sum\limits_{a=1}^3
p_a\left(\nabla {\bf e}_a\right)^2 ,}}\\
\\
{\bf e}_a(x).{\bf e}_b(x) =\delta_{ab} ,
\end{array}
\label{actionea}
\end{equation}
where the ground state is given by the minimization equations:
\begin{equation}
\begin{array}{l}
\nabla{\bf e}_a(x)=0\nonumber\\
{\bf e}_a(x)= {\bf e}_a^0 .
\end{array}
\label{gs}
\end{equation}
The $p_a, a=1,2,3$ are coupling constants which depend on the
particular lattice model we started with. The partition function $Z$ is :
\begin{equation}
Z=\int D{\bf e}_{1,2,3}(x) \left(\prod_{ab}\delta({\bf e}_a(x).{\bf e}_b(x)
-\delta_{ab})\right)e^{-{\cal S}_1/T} .
\end{equation}
When two couplings $p_a$ vanish, one recovers, integrating over the
corresponding ${\bf e}_a$, the action of the standard non linear sigma model
$O(3)/O(2)$ corresponding to collinear ferro- or anti- ferromagnets.
In all the other cases, among the nine fields $e_a^i(x)$,
taking into account the constraints (\ref{eaeb}),
one sees that there are three independent fluctuating fields corresponding
to the three Goldstone modes, or spin waves, resulting from the breakdown
of the $O(3)$ group. Each one corresponds to infinitesimal rotations around
each of the ${\bf e}_a(x)$'s. The couplings $p_a$ are the associated stiffness
constants which depend on the detailed microscopic model. They are deeply
connected to the symmetry properties of the lattice Hamiltonian as we shall
see.
In addition to the usual $O(3)$ rotational invariance, the symmetry
group $G$ of the Hamiltonian contains, in general, a discrete group
${\cal T}$ of
transformations \{$T^s$\} mixing together the sublattices ${\bf S}^\alpha$, or
equivalently the
${\bf e}_a$. These transformations may belong to the space group of
the lattice, as in triangular antiferromagnets, but may be some more
complicated objects, such as ``gauge'' transformations as in the
Villain lattice\cite{yosefin}.
The order parameter $\{{\bf e}_a\}, a=1,2,3$ thus transforms under
$G$ as:
\begin{equation}
\begin{array}{l}
{\displaystyle{{ e}_a^i\to \sum_jU_{ij}\ { e}_a^j\ ;\ U\in O(3),}}\\
\\
{\displaystyle{{\bf e}_a\to \sum_b(T^s)_{ab}\ {\bf e}_b\ ;\ T^s\in {\cal T}}}.
\end{array}
\end{equation}
The requirement that the
action ${\cal S}_1$ should be
invariant under the group ${\cal T}$ implies several
relations between the $p_a$'s. In general, the ${\bf e}_a$'s span
reducible representations of the group
${\cal T}$. Depending on the number of these representations, some of the
coupling constants $p_a$ may be equal. If there are
three irreducible representations of dimension 1, all the $p_a$ are different.
If there is one representation of dimension 2 and one of dimension 1, as is
the case in the triangular lattice where ${\cal T}$ is $C_{3v}$, two
coupling constants are equal: $p_1=p_2$. In this case, since the
action is quadratic in the fields,
the invariance under the discrete group ${\cal T}$ is enlarged to a
continuous invariance group $O(2)$ generated by:
\begin{equation}
\left(
{\bf e}_1, {\bf e}_2, {\bf e}_3
\right)
\to
\left(
{\bf e}_1, {\bf e}_2, {\bf e}_3
\right)
\left(
\begin{array}{ccc}
\cos\theta &\sin\theta &0\\
-\sin\theta & \cos\theta & 0\\
0 & 0 & 1
\end{array}
\right),
\label{tr1}
\end{equation}
and
\begin{equation}
\left(
{\bf e}_1, {\bf e}_2, {\bf e}_3
\right)
\to
\left(
{\bf e}_1, {\bf e}_2, {\bf e}_3
\right)
\left(
\begin{array}{ccc}
0 &1 &0\\
1& 0 & 0\\
0 & 0 & 1
\end{array}
\right).
\label{tr2}
\end{equation}
The action ${\cal S}_1$ is thus $O(3)\otimes O(2)$ invariant in this case.
Finally, if there is only one representation
of dimension 3, as is the case in lattices with tetragonal symmetry,
$p_1=p_2=p_3$ and ${\cal S}_1$ is invariant under $G=O(3)\otimes O(3)$.
To summarize, depending on the symmetry
group of the lattice, ${\cal S}_1$ can be symmetric under $O(3)\otimes O(p)$
with $p=1,2$ or 3.
Since any rotation matrix $R\in$ SO(3) is given by an orthonormal set
of three vectors, we can gather the ${\bf e}_a$'s into a rotation matrix $R$:
\begin{equation}
R(x)=\big({\bf e}_1(x){\bf e}_2(x){\bf e}_3(x)\big),
\end{equation}
and the action ${\cal S}_1$ can be written into a different, but
{\it equivalent}, form:
\begin{equation}
{\cal S}_2={1\over2}\int d^Dx\ Tr\left(P(\partial R^{-1})(\partial R)\right),
\label{actionp}
\end{equation}
where $P$ is the diagonal matrix: $P=diag (p_1,p_2,p_3)$ and $R\in$ SO(3).
Using $R$, the symmetry operations on the ${\bf e}_a$ can be written in a
compact form.
The action ${\cal S}_2$ is invariant under left $O(3)$
transformations $R\to
UR\ ,\ U\in$ O(3) and right transformations: $R\to RV$ where $V$ belongs to
the $O(p)$ group which commutes with matrix $P$. We
find again that depending on
the value of $p$, ${\cal S}_2$ is invariant under the
group $G=O(3)\otimes O(p); p=1,2,3$. The
right $O(p)$ invariance reflects the original discrete symmetry of the
microscopic Hamiltonian since it mixes the ${\bf e}_a$
while the left $O(3)$ is the usual rotational
symmetry.
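These invariances can be checked in one line: for constant $U$ and $V$ one has $\partial(URV)^{-1}\partial(URV)=V^{-1}(\partial R^{-1})(\partial R)V$, so that by cyclicity of the trace
\begin{equation}
Tr\left(P\,\partial(URV)^{-1}\,\partial(URV)\right)=Tr\left(VPV^{-1}(\partial R^{-1})(\partial R)\right),
\end{equation}
which reduces to ${\cal S}_2$ precisely when $[V,P]=0$.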
The discrete symmetry group of the Hamiltonian acts, in the continuum
limit, as the $O(p)$ group. This is an artefact of the
continuum limit and has no dynamical consequences since the number of
Goldstone modes is given by the breaking of the $O(3)$ spin rotation
group only. Indeed the number $n$ of Goldstone modes resulting
from the symmetry breaking $G\to H$, where $H$ is the subgroup of $G$ which
leaves the ground state invariant, is equal to the number of broken
generators of G:
$n=dim\ Lie (G) -dim\ Lie (H)$. In our case, the isotropy subgroup $H$
consists of the transformations of $G$ leaving the orthonormal
basis ${\bf e}_a^0$ invariant or equivalently using Eq.(\ref{gs}) and the
above $U$ and $V$ transformations:
\begin{equation}
H\ :\ R_0\to {\hat{V}}R_0V=R_0,
\end{equation}
where ${\hat{V}}\in O(p)\subset O(3)$ and is determined once $V$ is chosen (see
Eq.(\ref{vchapeau}) for an explicit expression of ${\hat{V}}$ in a
particular example).
This particular subgroup $H$ is called the diagonal group
$O(p)_{diag}$ of
a subgroup $O(p)$ in $O(3)$ times the $O(p)$ of $G$ acting on
the right. The symmetry breaking patterns described by action
(\ref{actionea})
and equivalently by action (\ref{actionp}) are therefore
$G/H = O(3)\otimes O(p)/O(p)_{diag}$ with $p=1,2,3$ depending
on matrix
$P$. These are all the possible symmetry breaking schemes that can undergo a
frustrated Heisenberg spin model. In any case, there are three Goldstone
modes. Before ending this section, let us emphasize that in the non linear
sigma model, the dynamical properties depend only on the {\it geometry} of
the coset space $G/H$ and not on the field parametrization. Contrary
to the Landau-Ginzburg model, the vanishing dimension of the order parameter
in two dimensions allows infinitely many different parametrizations of the
theory, differing in (possibly non linear) field redefinitions. The choice of
one particular parametrization may be useful in discussing symmetry
properties or renormalization group equations but does not change the
physics. In particular we have seen
that the actions ${\cal S}_1$ and ${\cal S}_2$ are equivalent. There is
another equivalent form of ${\cal S}_1$ and ${\cal S}_2$ that we shall
use for the discussion of the $O(3)\otimes
O(2)/O(2)$ case:
\begin{equation}
{\cal S}_3 ={1\over2}\int d^Dx \left(g_1\left(\left(\nabla
{\bf e}_1\right)^2 + \left(\nabla
{\bf e}_2\right)^2\right)
+ g_2\left({\bf e}_1.\nabla{\bf e}_2 - {\bf e}_2.\nabla{\bf e}_1\right)^2\right),
\label{actioncourant}
\end{equation}
with:
\begin{equation}
{\bf e}_a.{\bf e}_b =\delta_{ab}\ ,\ g_2=-{1\over2}p_2\ ,\
g_1=p_1+p_2 .
\end{equation}
The latter expression of the action is obtained from (\ref{actionea}) by
integrating out ${\bf e}_3$ using the
constraint ${\bf e}_3={\bf e}_1\wedge {\bf e}_2$; the relations
among the coupling constants are derived in Appendix B.
\subsection{The particular case of the antiferromagnetic triangular lattice}
We shall consider the antiferromagnetic triangular
lattice with $N$-component spins, $N\ge 3$, as an example. Let us start by the
case $N =3$. The symmetry group of the system is the product of
the rotation group $O(3)$ acting on the spin
components times the space group of the triangular lattice. The Hamiltonian
density must then be $G=O(3)\otimes C_{3v}$ invariant. The three spins
${\bf S}_i$, $i=1,2,3$ of the elementary plaquette are co-planar in the
ground state with the well-known 120-degree structure. Then, we expect that
only two vectors of the triad $({\bf e}_1, {\bf e}_2,{\bf e}_3)$ are necessary
for the decomposition of the ${\bf S}_i$.
i) $\Sigma = {\bf S}_1 + {\bf S}_2 + {\bf S}_3$ spans the
trivial representation of $C_{3v}$. This linear combination of the spins
has a vanishing expectation value at $T=0$. It cannot be an order
parameter and corresponds to massive modes.
ii) The two vectors:
\begin{equation}
\left(
\begin{array}{l}
{\bf e}_1\\
\\
{\bf e}_2
\end{array}
\right)
\propto
\left(
\begin{array}{l}
-{{\sqrt3} + 1\over 2}{\bf S}_1 +{{\sqrt3} - 1\over 2}{\bf S}_2 + {\bf S}_3\\
\\
{{\sqrt3} - 1\over 2}{\bf S}_1 -{{\sqrt3} + 1\over 2}{\bf S}_2+ {\bf S}_3
\end{array}
\right)
\end{equation}
span the two dimensional representation of $C_{3v}$. They have a non vanishing
expectation value in the low temperature phase and can thus be taken as
an appropriate order parameter.
The ``local rigidity'' constraint of section (2.1) means here:
\begin{equation}
\Sigma (x) = <\Sigma (x)>= 0.
\end{equation}
This allows fluctuations of the spins between cells but not within the
cells. This constraint is consistent with
the symmetry since $\Sigma$ is a scalar for $C_{3v}$. Once the constraint is
imposed, it is straightforward to show
that ${\bf e}_1(x)$ and ${\bf e}_2(x)$ are orthonormal by use of ${\bf S}_i^2
=1$.
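Indeed, $\Sigma=0$ together with ${\bf S}_i^2=1$ forces ${\bf S}_i.{\bf S}_j=-1/2$ for $i\ne j$, since e.g. $1={\bf S}_3^2=({\bf S}_1+{\bf S}_2)^2=2+2\,{\bf S}_1.{\bf S}_2$. Writing $a=({\sqrt3}+1)/2$ and $b=({\sqrt3}-1)/2$ for the coefficients above, so that $ab=1/2$, $a^2+b^2=2$ and $a-b=1$, a short computation gives for the unnormalized combinations
\begin{equation}
{\bf e}_1.{\bf e}_2=-2ab+1-{1\over2}(a^2+b^2)+(a-b)=0\ ,\qquad
{\bf e}_1^2={\bf e}_2^2={9\over2}\ ,
\end{equation}
so that the proportionality constant ${\sqrt2}/3$ makes $({\bf e}_1,{\bf e}_2)$ orthonormal.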
The action for the triangular lattice is then:
\begin{equation}
{\cal S}_1 = {1\over 2}\int d^Dx \left( p_1\left( (\nabla {\bf e}_1(x))^2
+ (\nabla {\bf e}_2(x))^2\right) \right).
\label{jenaimarre}
\end{equation}
Dombre and Read have obtained, from a direct microscopic derivation
\cite{dombre}:
\begin{equation}
p_1 ={{\sqrt{3}}\over4} {J\over T} .
\end{equation}
Note that the original $C_{3v}$ invariance has been enlarged in
Eq.(\ref{jenaimarre}) to a $O(2)$ group given by
(\ref{tr1},\ref{tr2}): $G=O(3)\otimes O(2)$.
We will see that this action is not stable under renormalization. The
most general renormalizable action which is compatible with the symmetry
$O(3)\otimes O(2)$ is given by equation (\ref{actioncourant}). Its general
form is stable under renormalization so that we shall work only with
it in the following.
It is easy to generalize the above action to the case where the fields
${\bf e}_a(x)$ have $N> 3$ components. The symmetry group $G$ is in this case
$O(N)\otimes O(2)$. The ground states are given by eq.(\ref{gs}).
Let us choose one of them, for example:
\begin{equation}
({\bf e}_1^{(0)}, {\bf e}_2^{(0)}) =
\left(
\begin{array}{ll}
0&0\\
.&.\\
1&0\\
0&1
\end{array}
\right).
\label{GS}
\end{equation}
The unbroken symmetry group $H$ of the low-temperature phase is then
the set of matrices leaving this configuration invariant:
\begin{equation}
\left(
\begin{array}{cc}
O(N-2)&0\\
0&\begin{array}{cc}
\cos\theta&-\sin\theta\\
\sin\theta&\cos\theta
\end{array}
\end{array}\right)
\left(
\begin{array}{ll}
0&0\\
.&.\\
1&0\\
0&1
\end{array}
\right)
\left(\begin{array}{cc}
\cos \theta&\sin\theta\\
-\sin\theta&\cos\theta
\end{array}
\right)
=\left(
\begin{array}{ll}
0&0\\
.&.\\
1&0\\
0&1
\end{array}
\right).
\label{vchapeau}
\end{equation}
This group $H$ consists of two subgroups: an $O(N-2)$ and a
diagonal $O(2)$. Therefore the action ${\cal S}_1$ describes the symmetry
breaking pattern $G\to H= O(N)\otimes O(2)\to O(N-2)\otimes O(2)_{diag}$.
\section{Group theoretical construction of the non linear sigma model}
In the last section, we have derived three equivalent forms of the relevant
action for Heisenberg frustrated magnets ${\cal S}_1$, ${\cal S}_2$,
${\cal S}_3$ (eq.(\ref{actionea},\ref{actionp},\ref{actioncourant})). Each
of them has its own interests and shortcomings. Action ${\cal S}_1$ is closely
related to the microscopic Hamiltonian while action ${\cal S}_2$ is
suited to the discussion of the symmetry properties. Finally
action ${\cal S}_3$
offers a possible large $N$, $N\ge3$, generalization which we shall
discuss in detail. The particular form of the action is irrelevant
near $D=2$ since the RG properties depend only on the geometry of the
manifold $G/H$ \cite{friedan}. These intrinsic
properties will be formulated in the language of group theory which provides
an abstract but powerful framework\cite{friedan}.
We shall first deal with the standard $O(N)/O(N-1)$ model.
The construction of the $ O(N)\otimes O(2)/O(N-2)\otimes O(2)$ model
is presented after this warm-up example.
\subsection{The $O(N)/O(N-1)$ partition function}
The partition function of the $O(N)/O(N-1)$ model is \cite{polyakov,brezin}:
\begin{equation}
Z= \int D{\bf S}\ \delta\left({\bf S}^2(x)-1\right)\
\exp \left({-{1\over 2T}\int d^Dx (\partial {\bf S})^2}\right).
\label{fonctpart}
\end{equation}
The functional delta selects the configurations of ${\bf S}(x)$ with
unit length. We can take advantage of this delta to integrate out one degree of
freedom in ${\bf S}(x)$. Let us choose ${\bf u}$, ${\bf u}^2=1$, collinear to
the magnetization and write ${\bf S}(x)$ as:
\begin{equation}
{\bf S}(x) = \sigma (x){\bf u} + {\bf\pi}(x) \hspace{0.6cm};
\hspace{0.6cm} {\bf\pi}(x) \bot {\bf u} \hspace{0.6cm};
\hspace{0.6cm} \sigma^2 + \pi^2 =1 .
\label{sigma}
\end{equation}
After integrating out $\sigma(x)$, $Z$ can be rewritten as:
\begin{equation}
Z=
\int_{\vert \pi\vert\le 1} D{\bf\pi} \ \exp \left({-{1\over 2T}
\int d^Dx\left((\partial{\bf\pi})^2 +
(\partial\sqrt{1-{\bf\pi}^2})^2\right)}\right).
\label{ZON}
\end{equation}
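Explicitly, the second term reads
\begin{equation}
\left(\partial\sqrt{1-{\bf\pi}^2}\right)^2={({\bf\pi}.\partial{\bf\pi})^2\over1-{\bf\pi}^2}\ ,
\end{equation}
so that the expansion in powers of ${\bf\pi}$ produces, besides the free quadratic term, an infinite series of derivative vertices.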
The low temperature $T$ perturbative calculation of
(\ref{ZON}) starts from small
fluctuations around the ground state: $<{\bf S}>= {\bf u}$. They correspond
to the excitations of the ${\bf\pi}$-field and are the usual spin waves.
The ${\bf\pi}$'s consist of the $N-1$ Goldstone modes coming from the
breaking of $O(N)$
down to the rotation group $O(N-1)$ that leaves the ground state
invariant, i.e. the $O(N-1)$ around the ${\bf u}$-direction. Once the symmetry
breaking pattern $O(N) \to O(N-1)$
is given the NL$\sigma$ model is entirely determined up to the coupling
constant which in this case is the temperature.
We present now a matrix formulation of the $O(N)/O(N-1)$ model. Let us
choose a ground state ${\bf S}^0={\bf u}$. We can write the
${\bf S}(x)$ field as:
\begin{equation}
{\bf S}(x) = R(x){\bf S}^0 ,
\label{R(x)}
\end{equation}
where $R(x)$ is the $O(N)$ matrix sending ${\bf S}^0$ onto ${\bf S}(x)$.
The partition function $Z$ can be rewritten as:
\begin{equation}
Z= \int D{R}\ \exp \left({-{1\over 2T}\int d^Dx Tr(K(\partial R^{-1}
\partial R))}\right),
\label{cestlafin}
\end{equation}
where
\begin{equation}
K_{\alpha\beta}= S_{\alpha}^{0}S_{\beta}^{0} .
\label{k}
\end{equation}
In this last equation the indices are those of vectors
$(\alpha,\beta)=1,..,N$ and $R\in O(N)$.
The relationship between ${\bf S}(x)$ and $R(x)$ is not one-to-one
since for any rotation matrix $h(x)$ leaving ${\bf S}^0$ invariant:
\begin{equation}
h{\bf S}^0={\bf S}^0 ,
\end{equation}
one has:
\begin{equation}
R(x)h(x){\bf S}^0=R(x){\bf S}^0 ={\bf S}(x) .
\end{equation}
As a consequence, the action is locally (i.e. gauge) right invariant
under the transformation:
\begin{equation}
R^h(x) = R(x)h(x)\ ,\ h\in H=O(N-1) .
\label{transfojauge}
\end{equation}
Some degrees of freedom in $R\in O(N)$ are thus unphysical.
To obtain a one-to-one representation in terms of matrices we have
to choose one unique element in each equivalence class $R^h$, that
is to fix the gauge.
The set of these equivalence classes is the set of $O(N)$ rotations up to a
$O(N-1)$ rotation: it is $O(N)/O(N-1)$. We can easily find one element
per equivalence class in terms of the physical ${\bf\pi}$-field. Let us
write $R(x)$ and $h(x)$ as:
$$
R(x)=\left(
\begin{array}{cc}
A & {\bf V}\\
^t{\bf V}' & B
\end{array}
\right)
\qquad
h(x)=\left(
\begin{array}{cc}
h' & 0\\
0 & 1
\end{array}
\right)
\quad
h'\ \in\ O(N-1) .
$$
The matrix ${A}$ is $(N-1)\times (N-1)$, ${\bf V}$ and ${\bf V}'$ are
$(N-1)$-component vectors and $B$ is a scalar.
We use relation (\ref{transfojauge}) to eliminate as many degrees of
freedom in $R(x)$ as possible. It is convenient to choose:
\begin{equation}
h'(x)=A^{-1}\sqrt{A\ ^tA}.
\label{hash}
\end{equation}
This leads to exactly one element per class given by:
\begin{equation}
L= \left(\begin{array}{cc}
\sqrt{1_{N-1}-{\bf V} \ ^t{\bf V}} & {\bf V}\\
-^t{\bf V} & \sqrt{1-{\bf V}^2}
\end{array}
\right).
\end{equation}
We identify ${\bf V}$ by applying $L$ to ${\bf S}^0$, eq.(\ref{R(x)}):
\begin{equation}
\left(\begin{array}{c}
{\bf\pi}\\
\sigma
\end{array}
\right)=
L\left(\begin{array}{c}
0\\
1
\end{array}
\right).
\label{defL}
\end{equation}
The element $L$ can thus be written entirely in terms of the $\pi$ fields:
\begin{equation}
L(\pi^i)= \left(\begin{array}{cc}
\sqrt{1_{N-1}-{\bf \pi} \ ^t{\bf\pi}} & {\bf \pi}\\
-^t{\bf \pi} & \sigma
\end{array}
\right).
\end{equation}
The set of $L$-matrices is such that to any
$\pi$ corresponds a unique $L(\pi)\in O(N)/O(N-1) $. The quantity
$(L^{-1}\partial L)$ belongs to the Lie algebra $Lie(G)$ of $G=O(N)$
and we have:
\begin{equation}
(L^{-1}\partial L)=(L^{-1}\partial L)_{G-H}+(L^{-1}\partial L)_{H}
\label{ldl}
\end{equation}
where $(L^{-1} \partial L)_{H}$ is in $Lie(H)$.
The partition function $Z$ can be finally written as:
\begin{equation}
Z= \int D{\pi}\ e^{-{1\over T}S},
\label{zl}
\end{equation}
\begin{equation}
S=-{1\over 2}\int d^Dx\ Tr([( L^{-1} \partial L)_{G-H}]^2).
\label{actiona}
\end{equation}
We have used the fact that $K$ is a projector: $K( L^{-1} \partial L)_{H}=0$.
The partition function in Eq.(\ref{cestlafin}) is globally $G$-invariant
and locally (i.e. gauge) $H$-invariant. Once
a gauge choice is made (as in (\ref{hash},\ref{defL})) no $H$-transformations
are allowed in (\ref{zl},\ref{actiona})
and the
$G$-transformations are in general not compatible with the gauge choice,
i.e. they do not preserve the form of matrices $L$. This means that a
$G$-transformation must be accompanied by a $H$-gauge-restoring-transformation:
\begin{equation}
L(\pi') = gL(\pi) h(g,\pi).
\label{ltransfo}
\end{equation}
Thus, $G$ is non linearly realized on the $\pi$-fields. This is
completely different from
the Landau-Ginzburg model where $G$ is linearly realized on the $\pi_i$
fields.
Equation (\ref{zl}) is the general expression for the partition function of a
NL$\sigma$ model defined on a coset space $G/H$. This coset space can be
viewed as a metric manifold so that it is convenient to formulate the theory
in the language of differential geometry.
Since $L^{-1}\partial L$ belongs to $Lie(G)$, eq.(\ref{ldl}) can be
rewritten as:
\begin{equation}
L^{-1}\partial_\mu L = e_\mu^I T_I +\omega_\mu^a T_a ,
\label{viel}
\end{equation}
where the $T_a$'s are the generators of $Lie(H)$ while the $T_I$'s are
generators in $Lie(G)-Lie(H)$. $e_\mu^I$ and $\omega_\mu^a$ are respectively
the vielbein and the connection in the tangent space of $G/H$. Under
(\ref{ltransfo}) they transform as:
\begin{equation}
\left\{
\begin{array}{ll}
{e'}_\mu^IT_I &= h^{-1}(x)(e_\mu^IT_I)h(x),\\
{\omega'}_\mu^aT_a &= h^{-1}(x)(\omega_\mu^a T_a) h(x) +h^{-1}(x)
\partial_\mu h(x).
\end{array}
\right.
\end{equation}
The $T_I$'s span a representation of $H$ since:
\begin{equation}
[T_a,T_I] = {f_{aI}}^JT_J .
\label{transt}
\end{equation}
As a consequence, the $e_\mu^I$'s span a linear representation of $H$.
Using (\ref{viel}), action $S$ in eq.(\ref{actiona}) can be written as:
\begin{equation}
S={1\over2}\int d^D x\ e_\mu^I e_\mu^J\eta_{IJ},
\label{toto}
\end{equation}
where $\eta_{IJ}$ is the tangent space metric given by:
\begin{equation}
\eta_{IJ}= -Tr(KT_IT_J),
\end{equation}
with $K$ the projector on $G-H$. In the $O(N)/O(N-1)$ case it is given by
eq.(\ref{k}). In this case, the $N-1$ generators $T_I$'s of
$Lie(O(N))-Lie(O(N-1))$ span the vector
representation of $O(N-1)$ so that there is only one coupling constant:
$\eta_{IJ}= \eta\delta_{IJ}$. However, in general, $\eta_{IJ}$ is a diagonal
matrix with several different couplings. The number of these couplings is
the number of quadratic invariants under
transformation (\ref{transt}) constructed with the $e_\mu^I$'s.
For a symmetric space there is only one such invariant. This is the case
of $O(N)/O(N-1)$ for example. For a non-symmetric homogeneous
space such as $O(N)\otimes O(2)/O(N-2)\otimes O(2)$ this number is
larger than one.
The formula $e_\mu^I = e_i^I\partial_\mu\pi^i$ leads to the more conventional
form eq.(\ref{ZON}) of the action of the NL$\sigma$ model defined on a coset
space viewed
as a metric space equipped with the metric $g_{ij}(\pi)=e_i^Ie_j^J\eta_{IJ}$
\cite{friedan}:
\begin{equation}
Z=\int_{\vert \pi\vert\le 1} D{\bf\pi} \
\exp \left( -{{1\over 2T}\int d^Dx\ g_{ij}(\pi)
\partial\pi^i\partial\pi^j} \right).
\label{toto2}
\end{equation}
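For instance, in the $O(N)/O(N-1)$ case, rewriting eq.(\ref{ZON}) in this form yields the familiar metric of the sphere in the $\pi$ coordinates,
\begin{equation}
g_{ij}(\pi)=\delta_{ij}+{\pi_{i}\pi_{j}\over1-{\bf\pi}^2}\ ,\qquad
g^{ij}(\pi)=\delta_{ij}-\pi_{i}\pi_{j}\ .
\end{equation}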
Eq.(\ref{toto}) and eq.(\ref{toto2}) provide alternative descriptions of
NL$\sigma$ models defined on a coset space $G/H$ in terms of purely local
geometrical quantities of the manifold $G/H$ such as for example the
metric, Riemann and Ricci tensors. It is equivalent
to work either on the manifold itself (\ref{toto2}) or in the tangent space
(\ref{toto}). For practical calculations, it is extremely convenient to use
the tangent space formulation we have discussed above. It can be shown that
the geometrical quantities such as the Riemann tensor depend, in
tangent space, only on the Lie algebras $Lie(G)$ and
$Lie(H)$. More precisely, they depend on the structure constants defined by the
following commutation rules:
\begin{equation}
\begin{array}{l}
[T_a,T_I] = {f_{aI}}^JT_J ,\\
{[T_a,T_b] = {f_{ab}}^cT_c},\\
{[T_I,T_J] = {f_{IJ}}^KT_K + {f_{IJ}}^aT_a},
\end{array}
\label{fstructure}
\end{equation}
where $T_a\in Lie(H)$ and $T_I\in Lie(G)-Lie(H)$.
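For the orthogonal groups all these structure constants follow from the standard basis of antisymmetric generators $(T_{AB})_{ij}=\delta_{Ai}\delta_{Bj}-\delta_{Aj}\delta_{Bi}$, $A<B$, which obey
\begin{equation}
[T_{AB},T_{CD}]=\delta_{BC}T_{AD}-\delta_{AC}T_{BD}-\delta_{BD}T_{AC}+\delta_{AD}T_{BC}\ ;
\end{equation}
the sets $\{T_a\}$, $\{T_\alpha\}$ and $\{T_I\}$ are obtained by letting the index pair $(A,B)$ run over the appropriate blocks.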
\subsection{The ${O(N)\otimes O(2)/ O(N-2)\otimes O(2)_{diag}}$ partition
function}
In the case of the ${O(N)\otimes O(2)/ O(N-2)\otimes O(2)_{diag}}$ model,
the order parameter is the set of $N$-component vectors
$({\bf e}_1,{\bf e}_2)$ and the action is given by action ${\cal S}_3$
eq.(\ref{actioncourant}). Let us define the order parameter
as the rectangular matrix:
\begin{equation}
\Phi =({\bf e}_1,{\bf e}_2).
\end{equation}
The $O(N)\otimes O(2)$ transformations can be written:
\begin{equation}
^t\Phi ' = ^tr(x)\ ^t\Phi\ ^tR(x),
\label{transphi}
\end{equation}
where $R\in O(N-2)$ and $r\in O(2)$. The ground state Eq.(\ref{GS}) is
invariant under the transformations:
\begin{equation}
^t\Phi^0 = h_1(x) ^t\Phi^0 H(x),
\end{equation}
with $h_1\in O(2)$ and
\begin{equation}
H(x)=
\left(
\begin{array}{cc}
h_2(x) & 0\\
0 & h_1^{-1}(x)
\end{array}
\right),
\qquad
h_2\ \in\ O(N-2).
\end{equation}
Thus, the matrices $r(x)$ and $R(x)$ are defined up to the following local
transformations:
\begin{equation}
\left\{
\begin{array}{lll}
r(x) & \to &r(x)\ h_1(x)\\
R(x) & \to & R(x)\ H(x)
\end{array}
\right.
\label{transformations}
\end{equation}
In the low temperature phase we can rewrite $\Phi$ in terms of the $2N-3$
Goldstone modes:
\begin{equation}
\Phi(x)=
\left(
\begin{array}{c}
\pi(x)\\
\omega(x)\sqrt{1_2 - {^t\pi\pi}}
\end{array}
\right),
\end{equation}
where $\pi$ is a $(N-2)\times 2$ matrix and $ \omega(x)\in O(2)$. The
$\pi_i^\alpha$, $i=1,\dots ,N-2,\ \alpha=1,2$ transform as two independent
vectors under $O(N-2)$ and as a vector under $O(2)$. $\omega(x)\sqrt{1_2
- {^t\pi\pi}}$ represents one extra degree of freedom which is scalar under
both $O(N-2)$ and $O(2)$.
One can use the gauge freedom (\ref{transformations}) to go from
a general element $R(x)\otimes r(x)$ of $O(N)\otimes O(2)$ to the
unique element in the same gauge orbit $L\otimes {\bf 1}_2$:
\begin{equation}
L(\pi(x), \omega(x)) =
\left(\begin{array}{cc}
\sqrt{1-\pi\ ^t\pi} & \pi\\
- \omega(x) ^t\pi & \omega(x)\sqrt{1 - {^t\pi\pi}}
\end{array}
\right).
\label{LpiO}
\end{equation}
The matrix $L$ thus parametrizes the coset space $O(N)\otimes
O(2)/O(N-2)\otimes O(2)_{diag}$. Note that this matrix is the same as for the
coset space $O(N)/O(N-2)$.
In fact this is not accidental and we will use the following property
to simplify our study:
very generally the coset
spaces $G\otimes X/H\otimes X_{diag}$ (where $X$ is the maximal subgroup
of $G$ commuting with $H$) and $G/H$ are topologically equivalent.
We can thus work directly with the coset $G/H$
keeping in mind that we search for an action which has $G\otimes X$
as symmetry (isometry) group.
The vielbein of $G/H$
defined as in eq.(\ref{viel}) decomposes into two irreducible representations
under the action of $H\otimes X$. The $Lie(X)$ part spans the adjoint
representation of $X$ and is a scalar under $H$. $G-H-X$ is irreducible
because $H\otimes X$ is maximal in $G$, stated otherwise $G/H\otimes X$
is a symmetric space. Thus, the two
projected matrices $(L^{-1}\partial L)_{\vert G-H-X}$ and
$(L^{-1}\partial L)_{\vert X}$ transform independently under the right
action of the $H\otimes X$ group so that there are two independent
couplings $\eta_1$ and $\eta_2$. We are thus led to the action:
\begin{equation}
S=-{1\over 2}\int d^Dx\left( \eta_1tr(L^{-1}\partial L)_{\vert G-H-X}^2
+\eta_2tr(L^{-1}\partial L)_{\vert X}^2 \right).
\label{genac}\end{equation}
Denoting by $I$ the indices of $Lie(G)-Lie(H)$, and among them by $\alpha$
the indices of $Lie(X)$, this action may be rewritten as:
\begin{equation}
S={1\over 2}\int d^Dx\ e^I_\mu e^J_\mu \eta_{IJ},
\end{equation}
where the tangent space metric $\eta_{IJ}$ is given by:
\begin{equation}
\eta_{IJ}=-\eta_1tr(T^IT^J)-(\eta_2-\eta_1)\delta_{I\alpha}
\delta_{J\beta}tr(T^\alpha T^\beta).
\label{eta}\end{equation}
We recall that in our case $T_I\in Lie(O(N))-Lie(O(N-2))$, $T_\alpha\in
Lie(O(2))$ and
$T_a\in Lie(O(N-2))$ and that the corresponding algebra is given
in (\ref{fstructure}).
Action (\ref{genac}) is completely equivalent to the action ${\cal S}_3$
we have obtained from the continuum limit (\ref{actioncourant}). We prove
it in appendix B and derive the
relations between the couplings $g_1, g_2$ entering in ${\cal S}_3$
and $\eta_1,\eta_2$: $\eta_1=g_1/2\ ;\ \eta_2= g_1+2g_2$.
\section{Renormalization of the $NL\sigma$ model in
$D=2+\epsilon$}
\subsection{General case}
The renormalizability in $D=2+\epsilon$ of
NL$\sigma$ models defined on coset spaces $G/H$ was studied by D.H.
Friedan\cite{friedan}. The $\beta$ function gives the
evolution of the metric $g_{ij}(\pi)$
with the scale:
\begin{equation}
{\partial g_{ij}\over \partial l}= \beta_{ij}.
\label{evolution}
\end{equation}
At two loop order it is given by the following expression:
\begin{equation}
\beta _{ij}(g)= -\epsilon g_{ij} + R_{ij} + {1\over2} T R_{ipqr}R_{jpqr} +
O(T^2).
\label{beta}
\end{equation}
where $R_{ij}$ and $R_{ipqr}$ are the Ricci and Riemann tensors of the manifold
$G/H$ equipped with the metric $g_{ij}$.
In principle, it is enough to
compute $R_{ij}$ and $R_{ijkl}$ from the metric $g_{ij}$ to obtain these
recursion relations. In practice, these calculations are tedious
and some formal algebraic work has to be done first. The trick is to get rid
in the calculation of any dependence on the coordinates ${\pi ^i}$ by going
from the manifold itself to its tangent space, eq.(\ref{toto}). The crucial
advantage is
that in tangent space, the Riemann and Ricci tensors are functions only of
the structure constants ${f_{ij}^k}$ of $ Lie(G)$ and that the tangent
space metric $\eta _{IJ}$ is constant, see eq.(\ref{eta}), and involves only
the coupling constants. In the vielbein basis,
eq.(\ref{evolution},\ref{beta}) becomes:
\begin{equation}
{\partial \eta_{IJ}\over \partial l}= \beta_{IJ} ,
\end{equation}
\begin{equation}
\beta _{IJ}(\eta)= -\epsilon \eta_{IJ} + R_{IJ} +
{1\over2} T\ R_{IPQR\ }R_{JPQR} +
O(T^2).
\label{betaij}
\end{equation}
The matrix $\eta_{IJ}$ is given in eq.(\ref{eta}) and the Riemann tensor
in tangent space can be expressed as:
\begin{eqnarray}
R_{IJKL} \hspace{-0.3cm}&=&\hspace{-0.3cm}f{_{IJ}}^af_{aKL} +
{1\over 2}{f_{IJ}}^M\left( f_{MKL}+f_{LMK}-f_{KLM}\right)\nonumber\\
\hspace{-0.3cm}& &\hspace{-0.3cm}+{1\over4}\left(f_{IKM} +f_{MIK}-f_{KMI}
\right)\left(
{{f_J}^M}_L + {f_{LJ}}^M -{f^M}_{LJ}\right)\nonumber\\
\hspace{-0.3cm}& &\hspace{-0.3cm}-{1\over4}\left(f_{JKM} +f_{MJK}-f_{KMJ}
\right)\left(
{{f_I}^M}_L + {f_{LI}}^M -{f^M}_{LI}\right).
\label{riemann}
\end{eqnarray}
The indices $a$ and $\{I,J\dots\}$ refer to $H$ and $G-H$ respectively.
$G-H$ indices are raised and lowered by means of $\eta ^{IJ}$ and
$\eta _{IJ}$ and repeated indices are summed over.
In NL$\sigma$ models the $\beta$ function and its derivatives allow
one to compute the fixed points and the critical exponent $\nu$. Note that
since the $\beta$ function
is a tensor, it does not depend on a particular choice of coordinates.
As a consequence, the mere existence of a fixed point as well as the value
of the exponent $\nu$ {\sl do not depend on the representation spanned
by the order parameter}.
The other renormalization group function which is needed to give a
complete description
of the critical behavior is the Callan-Symanzik $\gamma$-function. This
function is determined by the field renormalization $Z$:
\begin{equation}
\gamma = -{\partial\log Z\over\partial l}.
\label{defgamma}
\end{equation}
From this function follows the anomalous dimension $\eta$:
\begin{equation}
\eta=\gamma(\eta_1^{*},\eta_2^{*}) -\epsilon
\end{equation}
where $\eta_1^{*},\eta_2^{*}$ are the fixed point values of the
coupling constants.
\noindent The factor $Z$ is given at one loop order by the Laplace-Beltrami
operator acting on the coordinate $\pi^i$. It can be shown that this
is nothing but $g^{ij}\Gamma_{ij}^k$ where $\Gamma_{ij}^k$ is the Christoffel
connection on the metric manifold $G/H$:
\begin{equation}
Z\pi^k= \pi^k + {1\over \epsilon}g^{ij}\Gamma_{ij}^k .
\label{zz}
\end{equation}
Once again, it is simpler to compute $g^{ij}\Gamma _{ij}^k$ by working in
tangent space. We find for any coset $G\otimes X/H\otimes X$ where $X$
is the subgroup of $G$ that commutes with $H$:
\begin{equation}
g^{ij}\Gamma_{ij}^k
=-{1\over\eta_1}\left(\sum_AT_AT_A\right)\pi^k+{\eta_2-\eta_1\over
\eta_1\eta_2}\left(\sum_\alpha T_\alpha T_\alpha\right)\pi^k ,
\label{ggamma}
\end{equation}
where $\{T_A\}$ and $\{T_\alpha \}$ are generators of $G$ and $X$ and
where $\eta_1, \eta_2$ are defined in equation (\ref{genac}). $\sum_AT_AT_A$
and $\sum_\alpha T_\alpha T_\alpha$ are Casimir operators of $G$ and $X$.
In general, a choice of coordinates is not stable under
renormalization. Equations (\ref{zz}) and (\ref{ggamma}) show that a good
coordinate system which renormalizes
multiplicatively consists of the $\pi$ fields together with the
massive $\sigma$ modes. They build
up a linear representation of $G\otimes X$ such that the $\pi$'s are an
eigenvector of the
Casimir operators with an eigenvalue that depends on the representation.
{\sl Therefore, the $\gamma$-function and thus the critical exponent $\eta$
depend on the representation $r$ of $G\otimes X$ spanned by the order
parameter}. In our case, the Casimir operators have to be taken in the
vector representation of
both the $O(N)$ and the $O(2)$ groups. Their values are therefore
respectively $N-1$ and 1.
To summarize, in NL$\sigma$ models the existence of a fixed
point depends only on the symmetry breaking pattern
$G/H$ and not on the representation spanned by the order parameter. However,
the universality class is completely determined once the representation
$r$ of $G$ spanned by the observable is known. This scheme is completely
different from what happens in the $4-\epsilon$ expansion where even the
$\beta$ function, and thus the mere existence of a fixed point,
{\it does depend} on the representation of $G$ spanned by the order parameter.
In the following, we apply these results to the $O(N)\otimes O(2)/O(N-2)
\otimes O(2)$ models. For reasons that will soon become clear, we shall
distinguish between the $N=3$ and $N>3$ cases.
\subsection{Results for $N>3$}
Using Eq.(\ref{betaij}) we obtain the following two loop recursion
relations valid for any $N\ge3$:
\begin{equation}
\left\{
\begin{array}{lll}
{\displaystyle{
{\partial\eta_1\over\partial l}}}& \hspace{-0.3cm}=-&\hspace{-0.3cm}\epsilon
\eta_1 + N-2 -{\displaystyle{
\frac{1}{2}
\frac{\eta_2}{\eta_1} +
\frac{3N-4}{8}\frac{\eta_2^2}{\eta_1^3}+3(1-\frac{N}{2})\frac{\eta_2}
{\eta_1^2}}}\\
&&\\
& & +{\displaystyle{(3N-8)\frac{1}{\eta_1}}}\\
&&\\
{\displaystyle{\frac{\partial\eta_2}{\partial l}}} & \hspace{-0.3cm}=
-&\hspace{-0.3cm}\epsilon
\eta_2 + {\displaystyle{
\frac{N-2}{2}\left(\frac{\eta_2}{\eta_1}\right)^2 +\frac{N-2}{8}
\frac{\eta_2^3}{\eta_1^4}}}
\end{array}
\right.
\label{eqrecursion}
\end{equation}
Defining
$T_{1,2}=1/\eta_{1,2}$, we find that, apart from the trivial zero temperature
line of fixed points $T_1=T_2=0$ (with $T_1/T_2$ arbitrary),
there is one non-trivial fixed point $C_{NL}$ with coordinates:
\begin{equation}
\left\{
\begin{array}{ll}
T_1^{*}\hspace{-0.3cm}&={\displaystyle{\frac{N-1}{ (N-2)^2}\left(\epsilon -
\frac{1}{ 2}\frac{3N^2 -10N +4}{(N-2)^3}\epsilon^2\right) + O(\epsilon^3)}}\\
&\\
T_2^{*}\hspace{-0.3cm}&={\displaystyle{\frac{1}{2}\frac{(N-1)^2}{ (N-2)^3}
\left(\epsilon - \frac{1}{2}\frac{5N^2 -16N +4}{(N-2)^3}\epsilon^2\right) +
O(\epsilon^3)}}
\label{pointfixe}
\end{array}
\right.
\end{equation}
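At one loop these values are recovered by an elementary computation: keeping in eq.(\ref{eqrecursion}) only the terms $-\epsilon\eta_1+N-2-{1\over2}\eta_2/\eta_1$ and $-\epsilon\eta_2+{N-2\over2}\left(\eta_2/\eta_1\right)^2$, and setting $r=\eta_2/\eta_1$, the fixed point conditions give
\begin{equation}
r^{*}={2(N-2)\over N-1}\ ,\qquad
\eta_1^{*}={(N-2)^2\over(N-1)\epsilon}\ ,\qquad
\eta_2^{*}={2(N-2)^3\over(N-1)^2\epsilon}\ ,
\end{equation}
which reproduces the leading terms of eq.(\ref{pointfixe}) upon inverting $T_{1,2}=1/\eta_{1,2}$.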
This fixed point has one direction of instability so that our model
undergoes an ordinary
second order phase transition with critical exponent $\nu$:
\begin{equation}
\nu^{-1}=\epsilon + {1\over2} {6N^3 - 27N^2 +32N-12\over (N-2)^3(2N-3)}\epsilon
^2+ O(\epsilon^3)
\label{nu}
\end{equation}
In order to complete our discussion, we have to specify the representation $r$
of $O(N)\otimes
O(2)$ spanned by the observable of the physical system under study. We are
interested in the
AFT model with $N$-component spins. In this case, the order parameter
transforms under the vector
representation of both $O(N)$ and $O(2)$, see Eq.(\ref{transphi}). At one loop,
it follows
from Eq.(\ref{defgamma}) and Eq.(\ref{ggamma}) that the anomalous dimension
$\eta$ is:
\begin{equation}
\eta ={3N^2-10N+9\over 2(N-2)^3}\epsilon +O(\epsilon^2)
\label{gamma}
\end{equation}
\subsection{Results for $N=3$}
Although both the recursion relations and the values of the exponents given in
the preceding
section remain valid in the $N=3$ case, the symmetry properties are less
obvious. In this case,
we can take advantage of the different equivalent parametrizations of the
action we have derived
in section 2. The convenient parametrization is given by Eq.(\ref{actionp}):
\begin{equation}
{\cal S}_2={1\over2}\int d^Dx\ Tr\left(P(\partial R^{-1})(\partial R)\right)
,
\end{equation}
where $P$ is the diagonal matrix $P=\mathrm{diag}(p_1,p_2,p_3)$ and $R\in SO(3)$. In
the $O(3)\otimes O(2)/ O(2)$ case we have $p_1=p_2\ne p_3$. The relationship
between the $p_i$'s and the tangent space couplings $\eta_{IJ}$ is given in
Appendix B. Using Eq.(\ref{eqrecursion}) we can deduce
the two loop recursion relations for the couplings $p_i$.
At the fixed point we find $p_1^{*}=p_2^{*}= p_3^{*}$ and thus $P^{*}\propto
1$. It follows from the discussion given in section 2 that the
action ${\cal S}_2$ is
$O(3)\otimes O(3)$ symmetric at the fixed point: the symmetry has been
dynamically enlarged at
the fixed point. Since $O(3)\otimes O(3)/ O(3)\sim O(4)/O(3)$ the critical
behavior of the
$O(3)\otimes O(2)/ O(2)$ NL$\sigma$ model is given by that of the $O(4)/O(3)$
NL$\sigma$ model. It is a new result to find such an $O(4)$ symmetry for a
Heisenberg system. We
stress that it is not trivial to identify such a symmetry using a different
parametrization such as the one given in action
${\cal S}_3$ (see Eq.(\ref{actioncourant})). In
this case, the $O(4)$ symmetry is non-linearly realized on the fields ${\bf
e}_1,{\bf e}_2$.
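The symmetry enlargement can also be read off directly from
Eq.(\ref{pointfixe}): setting $N=3$ gives
\begin{equation}
T_1^{*}=T_2^{*}=2\epsilon-\epsilon^2+O(\epsilon^3)\,,
\end{equation}
i.e. $\eta_1^{*}=\eta_2^{*}$ which, through the relations between the
$\eta$'s and the $p_i$'s given in Appendix B, is precisely the condition
$P^{*}\propto 1$.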
The critical exponents $\nu$ and $\eta$ are given by
Eqs.(\ref{nu},\ref{gamma}) with $N=3$.
Although the critical exponent $\nu$ is identical
to that of the $N=4$ vector model, it is not so simple to get $\eta$.
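Explicitly, setting $N=3$ in Eq.(\ref{nu}) gives
\begin{equation}
\nu^{-1}=\epsilon+{1\over2}\epsilon^2 +O(\epsilon^3)\,,
\end{equation}
which is the two-loop result $\nu^{-1}=\epsilon+\epsilon^2/(n-2)$ of the
$O(n)/O(n-1)$ NL$\sigma$ model evaluated at $n=4$.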
The order parameter is a matrix
$R(x)=({\bf e}_1(x),{\bf e}_2(x),{\bf e}_3(x))$ (Eq.(\ref{actionp})) and
spans the {\it tensor} representation of $O(4)$.
This point was previously missed in ref.\cite{aza1}.
As a consequence, the exponent
$\eta$
of the Heisenberg AFT model is the anomalous
dimension of a {\it composite} operator of the $N=4$ vector model. To see this,
we need the relationship between the $O(3)$ matrix $R$ and a
$O(4)$ unit vector. It can be shown that to any unit 4-component vector:
\begin{equation}
\Psi =(\Psi_0, \Psi_i)\ \ ;\ \ \Psi_0^2+\sum_i \Psi_i^2=1
\end{equation}
there exists a matrix $R$ of $O(3)$ with components:
\begin{equation}
R_{ij}= 2(\Psi_i \Psi_j -{1\over4}\delta_{ij}) + 2\epsilon_{ijk}\Psi_0\Psi_k +
2(\Psi_0^2-{1\over4})\delta_{ij}
\end{equation}
Therefore, the expectation values of the vectors $<{\bf e}_i(x)>$, $i=1,2,3$,
are obtained from those of the {\it bilinear} forms $<(\Psi_i \Psi_j
-{1\over4}\delta_{ij})>$.
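As an illustration, this correspondence is easily checked numerically. The
following {\tt numpy} sketch (ours) draws a random unit 4-vector, builds $R$
from the formula above and verifies that $R$ is indeed a rotation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)               # (Psi_0, Psi_1, Psi_2, Psi_3)
psi0, vec = psi[0], psi[1:]

eps = np.zeros((3, 3, 3))                # Levi-Civita tensor
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

delta = np.eye(3)
R = (2*(np.outer(vec, vec) - delta/4)
     + 2*psi0*np.einsum('ijk,k->ij', eps, vec)
     + 2*(psi0**2 - 1/4)*delta)

assert np.allclose(R.T @ R, delta)           # orthogonality
assert np.isclose(np.linalg.det(R), 1.0)     # proper rotation, R in SO(3)
\end{verbatim}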
We thus find no new universality class for Heisenberg canted models but
instead the
general phenomenon of increased symmetry at the fixed point. These models
belong to the standard $N=4$ Wilson-Fisher universality class.
In dimension $D=3$, the exponent $\nu$ is very accurately known
\cite{leguillou}: $\nu=0.74$. However, the anomalous dimension
of the composite
operator $(\Psi_i \Psi_j -{1\over4}\delta_{ij})$ is only known at the two
loop order in $\epsilon=4-D$.
Let us finally emphasize that the phenomenon of increased symmetry at the
fixed point is particular to the $N=3$ case in the
$O(N)\otimes O(2)/O(N-2)\otimes O(2)$ NL$\sigma$ models. For any $N>3$,
the phase transition belongs indeed
to a universality class different from $O(N)$
but as one reaches the physical $N=3$ case one falls into the
well-known $O(4)$ one. This concludes our analysis of the NL$\sigma$
models associated with canted magnets.
The well-known $\epsilon = 4-D$ expansion
starting from the upper critical dimension
of the appropriate Landau-Ginzburg-Wilson (LGW) action
was applied to helimagnets more than ten years
ago by Garel and Pfeuty\cite{garel} and Bailin et al.\cite{bailin}. More
recently, renewed interest
in this subject has been drawn by Kawamura\cite{kawamura}.
We shall, in the next section, present the results obtained from this
expansion.
\section{The linear theory and the $\epsilon = 4-D$ expansion}
The LGW action can be obtained in the same spirit as the NL$\sigma$. Once the
symmetry breaking pattern is known, in our case $O(N)\otimes O(2)/O(N-2)\otimes
O(2)$,
all we have to do is to find the most general action which is $O(N)\otimes
O(2)$
symmetric and whose ground state is $O(N-2)\otimes O(2)$ invariant. Among all
possible actions, one has to select those which are
{\it renormalizable} in $D=4$, i.e. to keep only terms up to order 4
in the fields and to order 2 in their derivatives.
The LGW action does not possess the invariance under
reparametrization of the NL$\sigma$ model. Moreover, only linear
transformations of $O(N)\otimes O(2)$ are allowed since non-linear
transformations involve higher powers of the fields and their derivatives
than allowed by
renormalizability. For this reason the whole LGW or Linear
theory depends explicitly on the representation of $O(N)\otimes O(2)$ spanned
by the
physical order parameter. This fact has dramatic consequences on the
renormalizability of LGW theories as compared to their corresponding NL$\sigma$
models.
In order to build the LGW action, we shall start from the
NL$\sigma$ model. In the particular case of canted models,
we have to choose the parametrization which spans a
linear representation of $O(N)\otimes O(2)$. In this case the partition
function Z
is given by:
\begin{equation}
Z=\int D{\bf e }_1 D{\bf e }_2\ \delta( {\bf e }_1.{\bf e}_2)\ \delta(
{{\bf e }_1}^2 -1)\
\delta( {{\bf e }_2}^2 -1)e^{-{\cal S}_3},
\label{actioncourantbis}
\end{equation}
\begin{equation}
{\cal S}_3 ={1\over2}\int d^Dx \left(g_1\left((\nabla
{\bf e}_1)^2 + (\nabla
{\bf e}_2)^2\right)
+ g_2\left({\bf e}_1\nabla{\bf e}_2 - {\bf e}_2\nabla{\bf e}_1\right)^2\right).
\end{equation}
The ${\bf e}_i$ are $N$-component vectors. The LGW action is now obtained in
a standard way by relaxing the constraints in Eq.(\ref{actioncourantbis})
and introducing a potential:
\begin{equation}
V({\bf e }_1,{\bf e }_2)=
{1\over2} m^2 ({\bf e }_1^2+{\bf e }_2^2) +
u_1 ({\bf e }_1^2+{\bf e }_2^2)^2 + u_2 ({\bf e }_1\wedge{\bf e }_2)^2
\end{equation}
The LGW action for canted magnets now reads:
\begin{equation}
{\cal S}_{LGW} ={1\over2}\int d^Dx \left({1\over2}\left((\nabla
{\bf e}_1)^2 + (\nabla
{\bf e}_2)^2\right)
+ V({\bf e }_1,{\bf e }_2)\right)
\label{actionlin}
\end{equation}
We have rescaled the fields in order to obtain the standard normalization for
the gradient
term and have omitted the current term
$({\bf e}_1\nabla{\bf e}_2 - {\bf e}_2\nabla{\bf e}_1)^2$
since it is {\it not}
renormalizable. Note that it is precisely this term which allowed the
NL$\sigma$
action ${\cal S}_3$ to be $O(3)\otimes O(3)/O(3)$ symmetric at the fixed point
when
$N=3$.
The two loop recursion relations for the couplings $u_1$ and $u_2$ were first
obtained by Bailin et al.\cite{bailin} and Garel and Pfeuty\cite{garel}. We
shall here only summarize their results.
Let us comment on the RG flow:
i) there is a critical value of $N$ depending on the dimension:
$N_c(D)=21.8-23.4\epsilon+ O(\epsilon^2)$, below which there is no fixed point.
In this
case the transition is expected to be first order. Let us emphasize that for
$\epsilon =1$ the second term in the $\epsilon$-expansion of
$N_c(\epsilon)$ is {\it not} a small perturbation of the first one since it is
$-23.4$; taken at face value, the truncated series would even give a negative
$N_c$ at $\epsilon=1$. This signals that a precise determination of $N_c(D)$
requires some control of the $\epsilon$-expansion which, as it stands, cannot
be used directly for $\epsilon$ of order 1.
ii) for $N>N_c(D)$ there are still two different regions in the portion of
the $(u_1 ,u_2)$
parameter space where the potential is stable: see fig. 1.
One is
the second order region. It lies above the line $L$ joining the origin to
an unstable fixed point (called
$C_-$) and is the basin of attraction of the stable fixed
point, called
$C_L$. The Heisenberg fixed point $H$ is unstable towards $C_L$.
The other region lies between the line
$L$ and the stability line $S$ of the potential: $u_2=-2u_1$. It is a region
of runaway behaviour
and is expected to correspond to a first order region. We note
that, as $N$ tends to infinity, $L$ tends to $S$, as expected.
\section{Interpolating between $D=2+\epsilon$ and $D=4-\epsilon$.}
There is clearly a mismatch between RG results obtained in
$D=4-\epsilon$ dimensions from the LGW model and in $D=2+\epsilon$
dimensions from the $O(N)\otimes O(2)/O(N-2)\otimes O(2)$ sigma model.
Even though these NL$\sigma$ models predict for any $N$ a continuous
transition in the neighborhood of dimension 2, there are models with
either $N<N_c$ or which do not belong to the basin of attraction of $C_L$
for which the transition is expected to be of first order at least near
$D=4$. This is very different from the ferromagnetic case where both the
$O(N)/O(N-1)$ NL$\sigma$ model and the LGW model predict the same
critical behaviour. However, when $N>N_c(D)$, there exists a domain in the
coupling constants space $M$ where a second order transition is predicted
in both models. These domains are respectively the basins of
attraction of $C_{NL}$ in $D=2+\epsilon$ and of $C_L$ in $D=4-\epsilon$.
The natural question is whether or not these two fixed points are the same
in a given dimension $D$ between $2$ and $4$. The $1/N$ expansion allows
us to answer this question, at least for $N$ large enough, since this
expansion is
non-perturbative in the dimension \cite{aza2}. The critical exponent $\nu$ has been
calculated to the lowest non trivial order in a $1/N$ expansion of the
Landau-Ginzburg
action (\ref{actionlin})~\cite{bailin,kawamura}:
\begin{equation}
\nu_{1/N}(D) = {1\over D-2}\left(1-{1\over ND}12(D-1)S_D\right),
\label{nu1/N}
\end{equation}
\begin{equation}
S_D={\sin \left({\pi\over 2}(D-2)\right)\Gamma(D-1)\over 2\pi\left(
\Gamma(D/2)\right)^2}.
\end{equation}
By expanding eq.(\ref{nu1/N}) to lowest non trivial order in $\epsilon$,
$\epsilon =4-D$ or $\epsilon =D-2$, we find that $\nu _{1/N}(D)$ coincides
with $\nu_{4-\epsilon}(N)$ and $\nu_{2+\epsilon}(N)$ to lowest order in
$1/N$. The same type of expansion can be done on the other exponents with the
same results.
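The $D\to2$ side of this statement can be verified with a short {\tt sympy}
sketch (ours):
\begin{verbatim}
import sympy as sp

D, N, eps = sp.symbols('D N epsilon', positive=True)
S_D = sp.sin(sp.pi*(D - 2)/2)*sp.gamma(D - 1)/(2*sp.pi*sp.gamma(D/2)**2)
nu_largeN = (1 - 12*(D - 1)*S_D/(N*D))/(D - 2)     # Eq. (nu1/N)

# Laurent expansion around D = 2 + eps: 1/eps - 3/(2*N) + O(eps)
print(sp.series(nu_largeN.subs(D, 2 + eps), eps, 0, 1))

# Large-N limit of the two-loop coefficient of Eq. (nu): N*coeff -> 3/2,
# i.e. nu = 1/eps - 3/(2*N) + ..., in agreement with the expansion above.
coeff = sp.Rational(1, 2)*(6*N**3 - 27*N**2 + 32*N - 12)/((N - 2)**3*(2*N - 3))
print(sp.limit(N*coeff, N, sp.oo))
\end{verbatim}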
We may thus conclude as in the ferromagnetic case that, when the fixed point
exists near $D=2$ and
near $D=4$, we can follow it smoothly from $D=4-\epsilon$ down to
$D=2+\epsilon$. Therefore, in the
whole space $E=\{(M) = \mbox{coupling constants}, D, N\}$ there should exist a
domain $Z$ where the
transition is of second order and which is governed by a unique fixed
point $C_L(N,D)=C_{NL}(N,D)$.
In the complement of $Z$ the transition is expected to be of
first order. On the boundary
$\Gamma$ of these two domains, the transition should be tricritical
in the simplest hypothesis.
The situation can be summarized in the plane (N, D) of number
of components of the model and dimension.
The $4-\epsilon$ findings have shown that there is a universal
curve $N_c (D)$ separating a first-order region and a second-order
region. If one believes that the $2+\epsilon$ results survive
perturbation theory then the neighborhood of $D=2$ belongs
to the second-order region for all $N\ge 3$. As a consequence,
the line $N_c (D)$ intersects the $N=3$ axis somewhere
between $D=2$ and $D=4$. This thus defines a critical dimension $D_c$
that we do not expect to be a simple number.
Under the hypothesis that the RG calculations have captured all the
relevant fixed points,
there are two possibilities:
\noindent
i) The critical value is between $D=3$ and $D=4$. This implies
that the physical case $N=3$, $D=3$ undergoes an O(4) transition
as shown in (4.3). This case is favored by present numerical studies
\cite{bata}.
\noindent
ii) The critical value is between $D=2$ and $D=3$. Then the physical case
is governed by a fluctuation-induced first-order transition.
It cannot be excluded that $D_c =2$ in which case the perturbative
analysis of the nonlinear sigma model is always irrelevant.
We note that there is a third possibility, namely
$D_c =3$. We do not see any reason why this would be realized
since this looks like an artificial fine-tuning. In this case
one may speculate that a tricritical mean-field like behaviour
is seen for $N=3$, $D=3$.
These alternatives are consistent with all the RG results and
do not require additional fixed points not seen in perturbation theory.
In this picture the stable fixed point seen in $4-\epsilon$
for $N\ge N_c$ can be followed smoothly by the large-N limit
till $D=2$ and then identified with the conventional O(4) fixed point
via the $2+\epsilon$ calculation of (4.2-3) for $N=3$.
\section{Conclusion}
We have shown in this article that the nonlinear sigma model provides a new
approach to the analysis of the critical behavior of frustrated systems.
The double expansion in $T$ and in $\epsilon=D-2$ of the $O(N)\otimes
O(2) /O(N-2)\otimes O(2)$ NL$\sigma$ model
has been performed and a fixed point has been
found in $D=2+\epsilon$ for any $N$, which turns out to have a remarkable
$O(4)$ symmetry for three component spins. Since for $N\le 21$ the transition
is expected to be first order near $D=4$, we conjecture that for any $N\in
[3,21]$, the nature of the phase transition changes between 2 and 4 dimensions
and is tricritical at the border of the second and first order region.
In the simplest hypothesis the (N, D)-plane is divided
into two regions: a first-order region containing $N=3$ and $D=4$
and a second-order region containing the $D=2$ line (for any N),
the whole large-N line (for all D) and also in the neighborhood
of D=4 the $N\ge 21.8$ points. In between lies a universal line
$N_c (D)$ whose $4-\epsilon$ expression was already known.
This universal line intersects the N=3 axis for some unknown
critical dimension $D_c$.
If $3<D_c <4$ then the physical point N=3, D=3 is second order
in the O(4) universality class and its exponent $\eta$ is that
of a {\it tensor} representation.
If $2 < D_c < 3$ the physical point is first-order.
It may happen that $D_c =3$ (although we see no reason why)
in which case one could see tricritical behaviour.
This phase diagram is in agreement with all known RG results.
To decide the fate of the physical point requires clearly
additional techniques. Present Monte-Carlo results\cite{bata}
favor $2 < D_c < 3$ although more work is needed.
Direct RG calculations in D=3 may also help to confirm the phase
diagram\cite{aza3}.
We note that in the generic case
$O(N)\otimes O(2) /O(N-2)\otimes O(2)$, $N>3$, the symmetry
is {\it not} enlarged at the fixed point.
In this respect $N=3$ is exceptional: for other values of N,
the fixed point does not belong to the O(N) Wilson-Fisher family.
It would be interesting of course to investigate the fate of the
XY N=2 case since known helimagnets have significant anisotropies
that lead to a non-Heisenberg behaviour.
\bigskip
\bigskip
\noindent
{\bf Acknowledgements:}
\bigskip
Two of us (P. A. and B. D.) would like to thank A. Caill\'e,
H. T. Diep and M. L. Plumer for numerous discussions.
One of us (Th. J.) would like to thank Th. Garel, J. C. Le Guillou and A. P.
Young for several discussions and also D. P. Belanger
for informing us about the present experimental status of
phase transitions in helimagnets.
\newpage
{\bf APPENDIX A}
We have seen in section 6 that for $N=3$ the LGW and NL$\sigma$ models
do not predict the
same critical behaviour. No fixed point is found in $D=4-\epsilon$
dimensions from the LGW action
while the NL$\sigma$ model predicts a fixed point in $D=2+\epsilon$ with a
$O(3)\otimes O(3)\sim O(4)$ symmetry at this point. Actually, though
these results are
perturbative, it is easy to see that a $O(4)$-symmetric fixed
point {\sl can not be obtained}
with the LGW action (\ref{actionlin}) since no value of $(u_1,u_2)$ makes
this action
$O(4)$-symmetric. The reason is that the rectangular
matrix $(\phi_1,\phi_2)$ represents 6
degrees of freedom on which $O(3)$ acts on the right and $O(2)$ on the
left, and this
$O(2)$ cannot be enlarged to $O(3)$ with only these 2 fields. This $O(3)$
symmetry can be
realized only with at least 3 fields: $(\phi_1,\phi_2,\phi_3)$.
This would correspond to the 9-dimensional representation of $O(4)$. To
check whether the
discrepancy between the two models can be eliminated by allowing the
LGW model to reach the
$O(3)\otimes O(3)$ symmetry, we have built and studied the most general
action invariant under
$O(3)\otimes O(2)$ and involving 3 fields:
\begin{equation}
\begin{array}{ll}
H= &\hspace{-0.3cm}{1\over2}(\partial\phi_1)^2 + {1\over2}(\partial\phi_2)^2 +
{1\over2}(\partial\phi_3)^2 + {1\over4}u_1\left( \phi_1^2 + \phi_2^2\right)^2+
{1\over2}u_2\left( \phi_1^2 \phi_2^2 - (\phi_1.\phi_2)^2\right)\\
&\\ &\hspace {-0.3cm}+ {1\over4}u'(\phi_3^2)^2+ {1\over2}u_4\phi_3^2
\left( \phi_1^2 +
\phi_2^2\right) - {1\over2}u_3\left( (\phi_1.\phi_3)^2 +
(\phi_2.\phi_3)^2\right)
\end{array}
\label{action3champs}
\end{equation}
$(\phi_1,\phi_2)$ is a doublet of $O(2)$ and $\phi_3$ a singlet.
$H$ is $O(3)\otimes O(3) $ invariant when :
\begin{equation}
u_2=u_3\ \ ;\ \ u_1+u_2=u_4\ \ ;\ \ u_1=u'
\label{conditions}
\end{equation}
A one-loop RG calculation shows that no attractive fixed point exists in
$D=4-\epsilon$ in this model. This is the proof that the root of the problem is
not only a question of symmetry but also a problem of dimension, field
content and renormalizability. More precisely, if we set $u_2,u_4$ and $u'$ to
the values eq.(\ref{conditions}) which make hamiltonian
eq.(\ref{action3champs})
$O(3)\otimes O(3)$-symmetric, we find that the remaining symmetry in the broken
phase is $O(3)_{diag}$. Then, the NL$\sigma$ model associated with this
symmetry
breaking scheme is $O(4)/ O(3)$. This NL$\sigma$ model is unique since it
depends only on the Goldstone modes, and then only on the Lie algebras of
$O(4)$ and $O(3)$ and {\it not} on the representations of these groups. On the
other hand, there are as many associated LGW models as there are
representations
of $O(4)$ (or at least as there are actions built with representations
of $O(4)$
that can be broken down to $O(3)$). Surprisingly, the LGW action built with the
4-component vector representation of $O(4)$ admits a fixed point in
$4-\epsilon$
dimensions (the usual Heisenberg fixed point) and that built with the
9-dimensional tensor representation $(\phi_1,\phi_2,\phi_3)$ admits no such
fixed point, as we have seen
above. Since in these two LGW models the symmetry breaking scheme, and hence
the set of Goldstone modes, is the same, the difference between
these models lies in the massive modes. It is not clear up to now whether these
modes can indeed be physically relevant for the critical behaviour. At least
perturbatively and near 2 dimensions, the NL$\sigma$ model does not account
for these modes. In our case, we have to deal with the vector and tensor
representations of $O(4)$, which both allow one to replace the constraints of the
NL$\sigma$ model by potentials and which lead to two different results. The
relevance of the massive modes in the critical behaviour is then directly
related to the way one chooses to go from the microscopic Hamiltonian to the
different continuous actions, linear or non linear.
\noindent
{\bf APPENDIX B}
We give in this appendix the relationship between different
parametrizations of the sigma model.
\noindent
{\bf From tangent space to the constraints (for any $N$):}
We parametrize the matrix $L$ in Eq.(\ref{LpiO}) as:
\begin{equation}
L = (A\ \phi_1\ \phi_2)
\end{equation}
where $A$ is a rectangular $N\times (N-2)$ matrix, $\phi_1$
and $\phi_2$ are
$N$-component vectors. Since $L$ is in $O(N)$, they must satisfy:
\begin{equation}
^tA.A=1_{N-2}\ \ ,\ \ ^tA\phi_1=^tA\phi_2=0\ \ ,\ \ \phi_1^2=
\phi_2^2=1\ \ ,\ \ \phi_1 .\phi_2=0
\end{equation}
\begin{equation}
A.^tA + \phi_1.^t\phi_1 + \phi_2.^t\phi_2 = 1_N
\end{equation}
We are interested in the action of the NL$\sigma$ model so that we have
to compute $ L^{-1}\partial L$:
\begin{equation}
L^{-1}\partial L=
\left(
\begin{array}{ccc}
0& ^tA\partial\phi_1&^tA\partial\phi_2\\
-^tA\partial\phi_1& 0&\phi_1.\partial\phi_2\\
^tA\partial\phi_2&-\phi_1.\partial\phi_2&0
\end{array}
\right)
\end{equation}
The projection of $L^{-1}\partial L$ onto $Lie(G')-Lie(H')$
leads to two
sets of
vielbein that are not mixed under the H-transformations:
\begin{equation}
(L^{-1}\partial L)_{\vert_{G'-H'-X}}=
\left(
\begin{array}{ccc}
0& ^tA\partial\phi_1&^tA\partial\phi_2\\
-^tA\partial\phi_1& 0&0\\
^tA\partial\phi_2&0&0
\end{array}
\right)
\end{equation}
and
\begin{equation}
(L^{-1}\partial L)_{\vert_{X}}=
\left(
\begin{array}{ccc}
0& 0&0\\
0& 0&\phi_1.\partial\phi_2\\
0&-\phi_1.\partial\phi_2&0
\end{array}
\right)
\end{equation}
The total action is the sum of the traces of the squares of these
two matrices weighted by $\eta_1$ and $\eta_2$. These
coefficients are by definition those coming from the tangent
space metric $\eta_{IJ}$.
\begin{equation}
\begin{array}{l}
\eta_1 Tr\left[\left((L^{-1}\partial L)_{\vert_{G'-H'-X}}\right)^2\right]
=-2\eta_1\left((\partial\phi_1)^2 +(\partial\phi_2)^2\right) +
4\eta_1(\phi_1.\partial\phi_2)^2\\
\eta_2 Tr\left[\left((L^{-1}\partial L)_{\vert_{X}}\right)^2\right]
=-2\eta_2(\phi_1.\partial\phi_2)^2
\end{array}
\end{equation}
The coupling constants $\eta_1$, $\eta_2$ are now easily related to
$g_1$, $g_2$ defined in (\ref{actioncourant}):
\begin{equation}
\left\{
\begin{array}{ll}
\eta_1 &={g_1/2}\\
\eta_2&= g_1 +2g_2
\end{array}
\right.
\end{equation}
\noindent
{\bf From the P-matrix to the constraints ($N=3$)}
We now compute directly $Tr (P(R^{-1}\partial R)^2)$ to obtain
explicitly the action (\ref{actionp}). Let us start with $R\ \in\ O(3)$,
parametrized as follows:
\begin{equation}R = (e_1\ e_2\ e_1\wedge e_2)
\label{r}
\end{equation}
where $e_1$ and $e_2$ are two 3-component vectors such that
$e_1^2=e_2^2=1$ and $e_1 .e_2=0$. The diagonal part of $(R^{-1}\partial
R)^2$ is:
\begin{equation}
(R^{-1}\partial R)^2_{\vert_{diag}}=\left(
\begin{array}{lll}
-(\partial e_1)^2& & \\
&-(\partial e_2)^2& \\
& &-(\partial e_1)^2-(\partial e_2)^2 +2(\partial e_1.e_2)^2
\end{array}
\right)
\end{equation}
so that:
\begin{equation}
Tr\left(P(R^{-1}\partial R)^2\right) = -(p_1+p_2)\left((\partial e_1)^2
+ (\partial e_2)^2\right)
+{p_2\over2}(\partial e_1.e_2- \partial e_2.e_1)^2
\end{equation}
We obtain the relations between $(p_1,p_2)$ and $(g_1,g_2)$ (denoting here by
$p_1$ the degenerate pair of diagonal entries of $P$ and by $p_2$ the third
one):
\begin{equation}
\left\{
\begin{array}{ll}
p_1 &= g_1 +2g_2 ,\\
p_2 &= -2g_2 .
\end{array}
\right.
\end{equation}
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
Photoelectron spectroscopy is a powerful tool for studying
photoionization dynamics. Intense short laser pulses, which easily
drive multi-photon transitions, allow one to
observe effects in table-top experiments that otherwise would require
synchrotron radiation.
A recent example is the photoelectron circular dichroism (PECD) of chiral
molecules~\cite{LuxACIE12,LehmannJCP13,Janssen2014,LuxCPC15,FanoodPCCP15}.
It refers to the forward/backward asymmetry with respect to the light
propagation axis in the photoelectron angular distribution (PAD) obtained
after excitation with circularly polarized
light~\cite{Ritchie1976,PowisAdvCP08,Nahon2010}.
When the PAD is expanded in Legendre polynomials, a PECD is characterized
by the expansion coefficients of the odd-order polynomials with the highest
order polynomial being determined by the order of the process, i.e.,
the number of absorbed photons~\cite{Ritchie1976,DixitPRA1983}.
A theoretical description of such experiments with intense femtosecond laser
pulses requires proper account of the multi-photon excitation pathways. In
the pioneering work of McClain and
co-workers~\cite{pmw1970,Monson1970,McClain1972,wm1974}, a model for the
simultaneous absorption of two photons including the corresponding modified
molecular selection rules was formulated. Two-photon circular dichroism was
developed in Ref.~\cite{Itjr1974}, attributing the effect to a difference in
the absorption coefficient for the two left and two right polarized photons.
These approaches are based on a perturbation expansion of the light-matter
interaction. The strong-field approximation provides an alternative
description which is particularly suited for very intense
fields~\cite{Keldysh1965,Faisal1973}.
Multi-photon transitions driven by strong femtosecond laser pulses may or may
not involve intermediate states.
In recent experiments with bicyclic
ketones~\cite{LuxACIE12,LehmannJCP13,Janssen2014,LuxCPC15,FanoodPCCP15}, a
2+1-REMPI process was employed. The nature of the intermediate state has
yet to be clarified. A first theoretical study used the strong-field
approximation~\cite{LeinPRA2014}. While the standard strong-field
approximation using a plane wave basis for the photoelectron was found to
fail in describing PECD, accounting for the Coulomb interaction between
photoelectron and photoion in the Born approximation allowed for observation
of PECD. However, the PADs did not agree with the experimental ones. This may
be explained by the role of the intermediate state in the REMPI process, which
is necessarily ignored in the strong-field approximation~\cite{LeinPRA2014}.
Here, we take the opposite approach, starting with a
perturbation theory treatment of the multi-photon process.
Thus, ionization is viewed as a (weak) one-photon
transition into the continuum, the 'initial' state of which is
prepared by non-resonant two-photon absorption.
Such an approach is motivated by the moderate intensities,
of the order of $10^{12}\,$W/cm$^2$, used in the
experiments~\cite{LuxACIE12,LehmannJCP13,Janssen2014,LuxCPC15,FanoodPCCP15}.
Although clearly in the multi-photon regime, such
intensities can be described comparatively well by low order
perturbation theory~\cite{AmitayPRL08,RybakPRL11,LevinPRL15}.
The non-resonant two-photon preparation step yields an important difference
compared to pure one-photon excitation~\cite{RitchiePRA1976}. In the latter
case, the first order Legendre polynomial alone accounts for the
PECD~\cite{Reid2003,cooper,ChandraJPhysB87}. This results from the random
orientation of the molecules, or, in more technical terms, from integrating
the differential cross section over the Euler angles. In contrast,
non-resonant two-photon excitation may lead to an orientation-dependent
probability distribution of the molecules in the resonant intermediate
state~\cite{LehmannJCP13}. In this case, the maximum order of Legendre
polynomials contributing to the PAD is not limited to 2 but reaches 6 for a 2+1
process. Whether the two-photon absorption is orientation-dependent is
determined by the two-photon transition matrix elements. Here, we calculate
the two-photon transition matrix elements using state of the art \textit{ab
initio} methods.
However, for molecules as complex as camphor and fenchone, it is extremely
challenging to model the complete photoionization process from first
principles, even when using the most advanced \textit{ab initio}
methods. We therefore split the theoretical description into two
parts.
As long as all electrons remain bound, state of the art quantum
chemical approaches, for example the coupled cluster methods,
can be used to accurately determine the electronic
wave functions. However, once an electron starts to leave the ionic
core, the standard basis sets of electronic structure theory are not
well adapted. An alternative is offered by a single-center expansion
into eigenfunctions of a hydrogen-like atom for which both
bound and
continuum functions are known analytically. The hydrogenic
continuum functions properly account for the
long-range Coulomb interaction between ionic core and ejected electron but
neglect the effect of short-range correlations in the ionization
step.
The basis functions for the single center expansion are chosen such as
to yield the simplest possible model that is able to reproduce the
laboratory-frame photoelectron angular
distributions (LF-PADs) resulting from a 2+1-REMPI process in
randomly oriented chiral molecules. The two descriptions are
matched at the resonant, electronically excited intermediate state
by projecting the numerically calculated wavefunction onto the basis
functions of the single center expansion.
Our approach of calculating the PAD as a one-photon absorption cross section
for an effective ``initial'' state in a single center expansion, while
neglecting dynamical effects, allows us to generalize our findings to chiral
molecules other than fenchone or camphore. In particular, we analyze the role
of the laser polarization for each step in the 2+1 ionization process and
determine the conditions on the two-photon absorption matrix elements for
yielding PECD.
The remainder of the paper is organized as follows: Our theoretical framework
is introduced in Sec.~\ref{sec:model}. In detail, Sec.~\ref{subsec:ansatz}
defines the PAD as one-photon photoionization cross section and summarizes
the single center expansion. To make connection with experiment, the cross
sections need to be transformed from the molecule-fixed frame into the
laboratory frame and averaged over the random orientations of the molecules.
The corresponding expressions for a 2+1 REMPI process are presented in
Sec.~\ref{subsec:PAD} with the details of the derivation given in the
appendix. The symmetry properties required for observing PECD are analyzed in
Sec.~\ref{subsec:parity}. Section~\ref{sec:abinitio} is dedicated to
\textit{ab initio} calculations for the intermediate, electronically
excited states and the two-photon absorption matrix elements.
Section~\ref{subsec:compdetails} presents the computational details and
Sec.~\ref{subsec:abinitioresults} the results. The one-center reexpansion
required for matching the numerical results to the single-center
description derived in Sec.~\ref{sec:model} is described in
Sec.~\ref{subsec:reexpansion}. Our numerical results for the PAD of camphor
and fenchone and the corresponding PECD are presented in
Sec.~\ref{sec:pad_results} with Sec.~\ref{subsec:fenchone} dedicated to
fenchone and Sec.~\ref{subsec:camphor} to camphor. Our findings are
summarized and discussed in Sec.~\ref{subsec:discussion}.
Section~\ref{sec:conclusions} concludes.
\section{Model}
\label{sec:model}
We model the resonantly enhanced multi-photon photoionization as a 2+1
process, assuming the last photon to constitute a weak probe of the molecular
state that is prepared by non-resonant two-photon absorption.
For simplicity, we
employ the strict electric dipole approximation. That is, contributions from
magnetic dipole terms, which are important for circular polarization
dependent differences in absorption cross sections, and higher order electric
and magnetic multipole terms are neglected.
Defining two
coordinate systems, the molecular frame of reference $\mathcal{R}$ and the
laboratory frame $\mathcal{R}^\prime$, $\epsilon_{\varrho_2}^\prime$ denotes
the polarization of the laser field with respect to the laboratory frame
(where we distinguish the polarization of the ionizing photon,
$\epsilon_{\varrho_2}^\prime$ from that of the first two photons,
$\epsilon_{\varrho_1}^\prime$).
For convenience,
we work in the spherical basis. Thus, $\epsilon_{\varrho_2}^\prime$ and $\epsilon_{\varrho_1}^\prime$ correspond to the spherical unit vectors in the laboratory frame, with $\varrho_{1,2}=\pm 1,0$ denoting left/right circular and linear polarization of the laser
beam which propagates in the positive $z^\prime$ direction (the relation between the spherical and Cartesian unit vectors is found in
Eq.~\eqref{eq:cartesian_spherical}).
Primed (unprimed) coordinates refer to the laboratory
(molecular) frame of reference throughout.
Both frames, $\mathcal{R}^\prime$ and $\mathcal{R}$, are related by
an arbitrary coordinate rotation $D(\alpha\beta\gamma)$,
where $\omega=(\alpha,\beta,\gamma)$ denote the Euler angles
defining the orientation of $\mathcal{R}$ with respect to
$\mathcal{R}^\prime$.
Consider a one-photon ($1\mathrm{P}$) transition in a molecule whose
orientation
with respect to $\mathcal{R}^\prime$ is given by the Euler angles
$\omega$. The corresponding differential photoionization cross
section, when measured in the molecular frame $\mathcal{R}$,
reads, within perturbation theory and the electric dipole
approximation and in SI units~\cite{Hans1957},
\begin{eqnarray}
\label{eq:diffX}
\frac{d^2\sigma_{1\mathrm{P}}}{d\omega\; d\Omega_{\mathbf{k}}} &=& c_0\,
\left|\langle\Psi_{\mathbf{k}}|
\mathbf{\epsilon}^\prime_{\varrho_2}\cdot\mathbf{r}
|\Psi_o\rangle\right|^2\,,
\end{eqnarray}
where $c_0 = 4\pi^2\alpha\hbar\omega_{{\mathrm{ph}}}$ with $\alpha$ being
the fine-structure constant, $\hbar\omega_{\mathrm{ph}}$ the energy of the
ionizing photon, $\hbar$ the reduced Planck constant and $\mathbf{r}$ the
position operator of the electron (or a sum of the various position operators
in the multi-electron case).
The
polarization of the electric field in the laboratory frame of reference is
specified by $\epsilon^\prime_{\varrho_2}$, where $\varrho_2$ takes the
value 0 for linear and $+1 (-1)$ for left (right) circular polarization,
respectively. $|\Psi_{\mathbf{k}}\rangle$ denotes an energy
normalized molecular state with one electron transferred to the ionization
continuum with asymptotic electron linear momentum $\mathbf{k}$.
$|\Psi_o\rangle$ is the (bound, unity normalized) molecular state prepared by
the non-resonant two-photon absorption, which is defined in the molecular
frame of reference. In Eq.~\eqref{eq:diffX}, we employ the standard notation
for doubly differential cross sections in the molecular frame of
reference~\cite{ChandraJPhysB87,ChengPRA2010,LuchessePRA1982} that depend not
only on the solid angle $\Omega_{\mathbf{k}}$ but also on the orientation of
the molecule via the Euler angles $\omega$.
We utilize a single-center approximation~\cite{Bishop67} which
allows us to calculate the matrix elements in
Eq.~\eqref{eq:diffX} explicitly. That is, we project the
multi-electron wave function obtained from \textit{ab
initio} calculations, $|\Psi_o\rangle$, on one-electron basis
functions and neglect
electron correlations in the continuum description.
We first discuss in Sec.~\ref{subsec:ansatz}
our choice of $|\Psi_o\rangle$ and then
explain below in Sec.~\ref{subsec:PAD} how to
connect the differential ionization cross section to the
experimentally measured photoelectron angular distributions.
\subsection{Single center expansion}
\label{subsec:ansatz}
The ``initial'' state for the one-photon ionization is a
multi-electron wavefunction which is usually expanded in specially
adapted basis functions developed in quantum chemistry. In contrast,
the single center expansion
is based on the fact that any molecular wavefunction can be written as
a linear combination of functions about a single arbitrary
point~\cite{Bishop67}. Of course, such an ansatz will converge very
slowly, if the multi-center character of the wavefunction is
important. Writing the wavefunction of the
electronically excited state of the neutral molecule, that is prepared
by the two-photon absorption process, as
$\langle\mathbf{r}| \Psi_o\rangle=\Psi_o(\mathbf{r})$, we expand it
into eigenfunctions of a hydrogen-like atom,
\begin{eqnarray}
\label{eq:exited_state}
\Psi_o(\mathbf{r}) = \sum_{n_o=1}^\infty\sum_{\ell_o=0}^{n_o-1}
\sum_{m_o=-\ell_o}^{\ell_o} a^{\ell_o}_{m_o}(n_o)\,
R^{n_o}_{\ell_o}(r)\, Y^{\ell_o}_{m_o}(\Omega_{\mathbf{r}})\,.
\end{eqnarray}
Here, $a^{\ell_o}_{m_o}(n_o)$
stands for the unknown expansion coefficients,
$R^{n_o}_{\ell_o}(r)$ denotes the radial eigenfunctions of the
hydrogen-like atom, and $Y^{\ell_o}_{m_o}(\Omega_{\mathbf{r}})$ are the spherical
harmonics. $\Omega_{\mathbf{r}}=(\vartheta_{\mathbf{r}},\phi_{\mathbf{r}})$
refers to the polar and azimuthal angles of the position
vector $\mathbf{r}$ in the molecular frame of reference.
Note that all information about the geometry and the symmetry
properties of the ``initial'' electronically excited state is
contained in the expansion coefficients $a^{\ell_o}_{m_o}(n_o)$.
The number of basis
functions must be truncated in any actual calculation, i.e.,
\begin{eqnarray}
\label{eq:exc_state}
\Psi_o(\mathbf{r}) \approx \sum_{n_o=n_o^{\mathrm{min}}}^{n_o^{\mathrm{max}}}\sum_{\ell_o=0}^{n_o-1}
\sum_{m_o=-\ell_o}^{\ell_o} a^{\ell_o}_{m_o}(n_o)\,
R^{n_o}_{\ell_o}(r)\, Y^{\ell_o}_{m_o}(\Omega_{\mathbf{r}})\,.\quad
\end{eqnarray}
Strictly speaking, all molecular orbitals that are involved
in Slater determinants describing the excited state should be subject
to the single center expansion. In the present model, we
employ an effective one-electron picture by expanding only one
representative virtual orbital around the single center, namely the
one that is additionally occupied in the supposedly leading configuration for the respective excited state.
We will also ask what the simplest possible model is that gives rise
to PECD. In this case, we assume a single quantum
number $n$, $n=n_o$, to contribute to Eq.~\eqref{eq:exited_state},
i.e.,
\begin{eqnarray}
\label{eq:exited_state2}
\Psi^{s}_o(\mathbf{r}) \approx \sum_{\ell_o=0}^{L_{o,\rm max}}
\sum_{m_o=-\ell_o}^{\ell_o} a^{\ell_o}_{m_o}\,
R^{n_o}_{\ell_o}(r)\, Y^{\ell_o}_{m_o}(\Omega_{\mathbf{r}})\,,
\end{eqnarray}
where $L_{o,\mathrm {max}}$ refers to the highest angular momentum state
appearing in the ``initial'' wavefunction. It follows from basic symmetry
arguments that the minimal value of $L_{o,\rm max}$ for which a PECD
can be expected is $L_{o,\mathrm{max}}=2$, that is, at least $d$-orbitals are
required.
We model the photoionization as a one-electron process arising from a
hydrogenic-like system exclusively, which allows for neglecting
the bound molecular part (the remaining molecular parent ion) in $|\Psi_{\mathbf{k}}\rangle$. Thus,
the resulting continuum wave functions, $\Psi_{\mathbf{k}}(\mathbf{r})$, are
expanded into partial waves in a way that allows for an explicit
expression of the photoionization cross section in terms of the
scattering solid angle
$\Omega_{\mathbf{k}}$~\cite{ChengPRA10,LuchessePRA1982,DillChemphys87,cooper},
\begin{eqnarray}
\label{eq:solution}
\Psi_{\mathbf{k}}(\mathbf{r}) = 4\pi\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}
\mathrm{i}^{\ell}
\phi_{k,\ell,m}(r)\, Y^{*\,\ell}_{m}(\Omega_{\mathbf{k}})\,
Y^{\ell}_{m}(\Omega_{\mathbf{r}})\,.
\end{eqnarray}
Here, $Y^{\ell}_{m}(\Omega_{\mathbf{r}})$ and $Y^{\ell}_{m}(\Omega_{\mathbf{k}})$
correspond to the spherical harmonics describing the orientation of
the photoelectron position and momentum,
respectively, and $\phi_{k,\ell,m}(r)$ is the radial part of the
photoelectron wavefunction. For simplicity,
we use here and in the following $Y^{*\, \ell}_m(\Omega_{\mathbf{k}})$
as an abbreviation for $(Y^{\ell}_m(\Omega_{\mathbf{k}}))^*$.
Modeling photoionization as a one-electron process, we can approximate
\begin{eqnarray}
\label{eq:approximation}
\phi_{k,\ell,m}(r)\approx e^{-\mathrm{i}\delta_{\ell}} G_{k,\ell}(r)\,,
\end{eqnarray}
where $G_{k,\ell}(r)$ are the well-known radial continuum wavefunctions
of the hydrogen atom, recalled in Appendix~\ref{subsec:H-cont},
and $\delta_{\ell}$ stands for the Coulomb phase
shift of the $\ell-$th scattered partial wave,
with $\delta_{\ell} =
\arg\Gamma(\ell+1-\mathrm{i}/k)$~\cite{LuchessePRA1982,ChandraJPhysB87,DillChemphys87}.
Note that for molecules we expect the phase shift to depend
on $m$ as well as on $\ell$ since the molecular potential of chiral
molecules is not spherically symmetric.
Neglecting the
$m$-dependence of the phase shift involves no approximation when
using Eq.~\eqref{eq:exited_state} since
the hydrogen eigenfunctions form a complete
orthonormal basis.
However, this is not true anymore when truncating
the basis, cf. Eq.~\eqref{eq:exc_state}. Our ansatz thus involves an
additional approximation, namely Eq.~\eqref{eq:approximation}.
By construction, Eq.~\eqref{eq:approximation} yields
orthogonality between bound and unbound wavefunctions which is
required to avoid spurious singularities~\cite{ChengPRA10} and
reproduce the correct threshold behavior of the
photoionization cross sections~\cite{OanaJCP09}.
With the approximation of Eq.~\eqref{eq:approximation}, we
account for the long-range Coulomb interaction between photoelectron
and a point charge representing the ionic
core but neglect the short-range static exchange. Also,
dynamic changes in the electron distribution, such as
adjustments of the electronic cloud due to nuclear
motion, as well as the interaction of the outgoing photoelectron with
the driving electric field upon photoionization are neglected.
Inserting Eq.~\eqref{eq:approximation} into Eq.~\eqref{eq:solution}
yields
\begin{equation}
\label{eq:photoelectron}
\Psi_{\mathbf{k}}(\mathbf{r}) = 4\pi\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}
\mathrm{i}^{\ell}
e^{-\mathrm{i}\delta_{\ell}} G_{k,\ell}(r)\,
Y^{*\, \ell}_{m}(\Omega_{\mathbf{k}})\,
Y^{\ell}_{m}(\Omega_{\mathbf{r}})\,,
\end{equation}
and we can evaluate the matrix element in
Eq.~\eqref{eq:diffX}. Because
the wavefunctions are given in the molecular frame of reference, we
need to rotate the spherical unit vector
$\mathbf{\epsilon}^\prime_{\varrho_2}$ in
Eq.~\eqref{eq:diffX} into that frame~\cite{ChandraJPhysB87}.
Expanding the rotation operator $D(\alpha\beta\gamma)$
connecting $\mathbf{r}$ and $\mathbf{r}^\prime$ into
irreducible rank 1 tensor representations, cf. Appendix~\ref{subsec:rotmat},
Eq.~\eqref{eq:diffX} becomes
\begin{eqnarray}\label{eq:diffXa}
\frac{d^2\sigma_{1\mathrm{P}}}{d\omega d\Omega_{\mathbf{k}}} &=& c_0
\sum^1_{q=-1}\sum^1_{q^\prime=-1}
\mathcal{D}^{(1)}_{q, \varrho_2}(\omega)
\mathcal{D}^{(1)}_{-q^\prime,-\varrho_2}(\omega)\\\nonumber
&& \times (-1)^{q^\prime-\varrho_2}
\langle\Psi_{\mathbf{k}}|\mathbf{r}_{q}|\Psi_o\rangle
\langle\Psi_{\mathbf{k}}|\mathbf{r}_{q^\prime}|\Psi_o\rangle^{*}\,.
\end{eqnarray}
Inserting Eqs.~\eqref{eq:exited_state2} and~\eqref{eq:photoelectron}
to evaluate the overlap integrals yields
\begin{eqnarray}
\label{eq:diffX2}
\frac{d^2\sigma_{1\mathrm{P}}}{d\omega\; d\Omega_{\mathbf{k}}} &=&c_0\,
\sum_{\substack{\ell,m \\ n_o\ell_o,m_o}}\, \sum_{\substack{\ell^\prime,m^\prime\\
n^\prime_o,\ell^\prime_o,m^\prime_o}}\, \sum_{q=-1}^{1}\sum_{q^\prime=-1}^{1}
(-\mathrm{i})^{\ell-\ell^\prime}e^{\mathrm{i}(\delta_{\ell}-\delta_{\ell^\prime})}\nonumber\\
&&\times a^{\ell_o}_{m_o}(n_o)\,a^{*\ell^\prime_o}_{m^\prime_o}(n^\prime_o) I^{n_o}_{_{k}}(\ell,\ell_o)
I^{n^\prime_o}_{_{k}}(\ell^\prime,\ell^\prime_o) \nonumber\\
&&\times Y^{\ell}_{m}(\Omega_{\mathbf{k}})Y^{*\ell^\prime}_{m^\prime}(\Omega_{\mathbf{k}})\,
\mathcal{D}^{(1)}_{q,\varrho_2}(\omega)\mathcal{D}^{*{(1)}}_{q^\prime,\varrho_2}({\omega})\nonumber\\
&&
\times\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\mathcal{S}^{*\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)\,.
\end{eqnarray}
In Eq.~\eqref{eq:diffX2}, we have introduced radial and angular
integrals $I^{n_o}_{k}(\ell,\ell_o)$ and
$\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)$, given by
\begin{subequations}\label{eq:Ik_and_S}
\begin{eqnarray}
\label{eq:Ik}
I^{n_o}_{k}(\ell,\ell_o) = I_o\,
\int^{+\infty}_{0} r^3\,G_{k,\ell}(r)\,R^{n_o}_{\ell_o}(r)\,dr
\end{eqnarray}
for a fixed $n_o$ in Eq.~\eqref{eq:exited_state}
with $I_o = 4\pi/3$, and
\begin{eqnarray}
\label{eq:S}
\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q) &=& \int Y^{\ell\,*}_{m}(\Omega_r)\,
Y^{1}_{q}(\Omega_r)Y^{\ell_o}_{m_o}(\Omega_r)\,d\Omega_r \\\nonumber
&=& (-1)^{-m}\, b_{\ell,\rm \ell_o}
\begin{pmatrix}
\ell & 1 & \ell_o \vspace*{0.33cm} \\ 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
\ell & 1 & \ell_o \vspace*{0.33cm} \\ -m & q & m_o
\end{pmatrix}
\end{eqnarray}
\end{subequations}
with
\[
b_{\ell,\rm \ell_o}=\sqrt{3\, (2\ell+1)(2\ell_o+1)/4\pi}
\]
and using Wigner $3j$
symbols~\cite{edmonds,irreducible,Rose1967,Varshalovich1988}.
The angular integral $\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)$
determines, for each spherical unit vector $q=0,\pm 1$,
the selection rules between the angular components of the bound exited
electronic state with quantum numbers $\ell_o$, $m_o$
and the partial wave components of the
continuum wavefunction with quantum numbers $\ell$, $m$.
Equation~\eqref{eq:S} implies that transitions are
allowed if and only if $\ell+1+\ell_o$ is
even and $m_o+q-m=0$, with $|\ell_o-1|\le\ell\le\ell_o+1$.
This is a special case of the more general rule for multipole
transitions derived in Ref.~\cite{DixitPRA1983}.
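These selection rules can be made concrete with a few lines of {\tt sympy}
(a sketch of ours), using the fact that, up to the phase $(-1)^m$,
$\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)$ is a Gaunt coefficient,
$\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)=(-1)^m\int
Y^{\ell}_{-m}\,Y^{1}_{q}\,Y^{\ell_o}_{m_o}\,d\Omega$:
\begin{verbatim}
from sympy.physics.wigner import gaunt

# Nonzero angular integrals S for l_o = 2, m_o = 1 and q = -1:
# only l = 1, 3 (parity) with m = m_o + q = 0 survive.
l_o, m_o, q = 2, 1, -1
for l in range(5):
    for m in range(-l, l + 1):
        S = (-1)**m * gaunt(l, 1, l_o, -m, q, m_o)
        if S != 0:
            print(l, m, S)
\end{verbatim}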
The angular integrals can be evaluated analytically using the standard
angular momentum algebra, whereas the radial integrals in
Eq.~\eqref{eq:Ik} are computed numerically.
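A minimal sketch of such a numerical evaluation could look as follows (ours;
atomic units with $Z=1$, and one common energy-normalization convention for
the Coulomb continuum functions):
\begin{verbatim}
import numpy as np
from math import factorial
from mpmath import coulombf
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def R_bound(n, l, r):
    # hydrogenic bound radial function R_{n,l}(r)
    rho = 2.0*r/n
    norm = np.sqrt((2.0/n)**3 * factorial(n - l - 1)
                   / (2.0*n*factorial(n + l)))
    return norm*np.exp(-rho/2)*rho**l*eval_genlaguerre(n - l - 1, 2*l + 1, rho)

def G_cont(k, l, r):
    # energy-normalized continuum function G_{k,l}(r), built from the
    # regular Coulomb function F_l(eta, kr), eta = -1/k (attractive case)
    return np.sqrt(2.0/(np.pi*k))*float(coulombf(l, -1.0/k, k*r))/r

def I_radial(k, l, n_o, l_o, rmax=80.0):
    # radial integral of Eq. (Ik), including the prefactor I_o = 4 pi/3
    val, _ = quad(lambda r: r**3*G_cont(k, l, r)*R_bound(n_o, l_o, r),
                  1e-8, rmax, limit=400)
    return 4.0*np.pi/3.0*val

print(I_radial(k=0.5, l=2, n_o=3, l_o=1))   # e.g. a d <- p channel
\end{verbatim}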
The choice of basis to describe the radial part of the continuum
wavefunction determines the weight with which each
excited state expansion coefficient $a^{\ell_o}_{m_o}(n_o)$
contributes to the PAD, cf. Eqs.~\eqref{eq:diffX2}
and~\eqref{eq:Ik}. Thus, choosing for
example plane waves, i.e., the eigenfunctions of the ``free''
photoelectron,
which are described in terms of Bessel
functions~\cite{edmonds,irreducible,Varshalovich1988} and do not take into
account the Coulomb interaction between the outgoing photoelectron and the
remaining ion,
would translate
into a PAD different from the one obtained with the
hydrogenic continuum wavefunctions of Eq.~\eqref{eq:photoelectron}.
Whether or not the model is able to reproduce the measured Legendre
coefficients will to some extent
depend on the choice of basis for the radial part in
Eq.~\eqref{eq:solution}.
The missing ingredient to determine the differential
photoionization cross section, Eq.~\eqref{eq:diffX},
are the expansion coefficients, $a^{\ell_o}_{m_o}(n_o)$,
of the intermediate excited state wavefunction.
They can either be used as fitting parameters or
determined from \textit{ab initio} calculations, see
Sec.~\ref{sec:abinitio}.
Two more steps are then required to connect the differential ionization
cross section to the experimentally measured
PAD. First, the PAD is measured in the laboratory frame and the
differential ionization cross section thus needs to be rotated
from the molecular into the laboratory frame. Second, the orientation
of the molecule with respect to the laboratory frame, defined by the
polarization axis of the laser electric field, is arbitrary. We
therefore need
to average over all possible orientations, i.e., integrate over the
Euler angles $\omega = (\alpha$, $\beta$, $\gamma)$, as we consider
a randomly oriented initial ensemble of molecules.
\subsection{Photoelectron Angular Distributions}
\label{subsec:PAD}
Rotating the differential cross section from the molecular into the
laboratory frame requires rotation of the continuum state
$|\Psi_{\mathbf{k}}\rangle$ into $|\Psi_{\mathbf{k}^\prime}\rangle$
using the inverse of Eq.~\eqref{eq:eq1}. This leads to
\begin{widetext}
\begin{eqnarray}
\label{eq:differential2}
\frac{d^2\sigma_{1\mathrm{P}}}{d\omega \;d\Omega_{{\mathbf{k}}^\prime}} &=&
c_0\, \sum_{\substack{\ell,m \\ n_o,\ell_o,m_o}}\,
\sum_{\substack{\ell^\prime,m^\prime \\ n^\prime_o,\ell^\prime_o,m^\prime_o}}\, \sum_{q,q^\prime}
(-\mathrm{i})^{\ell-\ell^\prime}e^{\mathrm{i}(\delta_{\ell}-\delta_{\ell^\prime})}\,
a^{\ell_o}_{m_o}(n_o)\,a^{*\ell^\prime_o}_{m^\prime_o}(n^\prime_o)
I^{n_o}_{_{k}}(\ell,\ell_o)\, I^{n^\prime_o}_{_{k}}(\ell^\prime,\ell^\prime_o) \mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\mathcal{S}^{*\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)\nonumber\\
&& \times\sum^{\ell+\ell^\prime}_{\mathcal{L}=\vert\ell-\ell^\prime\vert}
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}
\vspace*{0.33cm} \\ m & -m^\prime & -(m-m^\prime) \end{pmatrix}
\sum_{\mu=-\mathcal{L}}^{\mathcal{L}}
\mathcal{D}^{(1)}_{q,\varrho_2}(\omega)\mathcal{D}^{(1)}_{-q^\prime,-\varrho_2}(\omega)
\mathcal{D}^{(\mathcal{L})}_{m^\prime-m,-\mu}(\omega)
P^{\mu}_{\mathcal{L}}(\cos\vartheta^\prime_{k})\,e^{\mathrm{i}\mu\varphi^\prime_k}\nonumber\\
&&\times (2\mathcal{L}+1)\,\varsigma^\mu_{\mathcal{L}}(\ell,\ell^\prime)\, (-1)^{m^\prime + q^\prime- \varrho_2}\,,
\end{eqnarray}
\end{widetext}
where $\varsigma^\mu_{\mathcal{L}}(\ell,\ell^\prime)$ is defined in
Eq.~\eqref{eq:varsigma:def} in Appendix~\ref{subsec:X1P}.
$P^{\mu}_{\mathcal{L}}(\cos\vartheta^\prime_{k})$ denotes the associated
Legendre polynomials. A detailed derivation of
Eq.~\eqref{eq:differential2} is found in Appendix~\ref{subsec:X1P}.
When averaging over all orientations in the second step, we need to
account for the fact that the probability for non-resonant two-photon
absorption from the ground state to the intermediate electronically
excited state is,
depending on the properties of the two-photon absorption tensor,
not isotropic. The differential ionization cross section in the
laboratory frame therefore needs to be weighted by the probability of
the electronically excited state to be occupied after absorption
of the first two (identical) photons. Thus,
the cross section for photoemission into a solid angle
$d\Omega_{\mathbf{k}^\prime}$ around the axis $\mathbf{k}^\prime$ in
the laboratory frame, after one-photon transition from
the electronically excited intermediate state, is given by
\begin{eqnarray}
\label{eq:weighted0}
\frac{d^2\sigma_{2+1}}{d\omega \;d\Omega_{\mathbf{k}^\prime}}
= \rho_{2\mathrm{P}}(\omega)
~\frac{d^2\sigma_{1\mathrm{P}}}{d\omega \;d\Omega_{\mathbf{k}^\prime}},
\end{eqnarray}
where $\rho_{2\mathrm{P}}(\omega)$ denotes the orientation-dependent
probability to reach the intermediate excited state
by absorption of two identical photons from the ground state.
Equation~\eqref{eq:weighted0} assumes a molecule to have, in its
electronic ground state, an initial orientation of
$\omega=(\alpha,\beta,\gamma)$ with respect to the laboratory
frame of reference. Note that Eq.~\eqref{eq:weighted0} makes an
additional assumption, namely that the relative phase between the
two-photon and one-photon steps is irrelevant for the photoelectron
spectrum and angular distribution. For a discussion of similar approximations
in related multiphoton transitions between bound states, see for instance
Refs.~\cite{McClain1972,Monson1970}.
The experimentally measured PAD contains
contributions from all molecules in the sample, each of them with a specific
orientation $\omega$. The total photoelectron signal
is therefore obtained by an incoherent summation
over the contributions from all molecules. This is equivalent to
integrating Eq.~\eqref{eq:weighted0} over the Euler angles weighted by the
probability of two-photon absorption. The
``averaged'' photoionization cross section in the laboratory frame
therefore reads,
\begin{eqnarray}
\label{eq:weighted}
\frac{d\sigma_{2+1}}{d\Omega_{\mathbf{k}^\prime}}
= \int\rho_{2\mathrm{P}}(\omega)
~\frac{d^2\sigma_{1\mathrm{P}}}{d\omega\; d\Omega_{\mathbf{k}^\prime}}
d\omega\,,
\end{eqnarray}
where the integration is carried out over the Euler angles $\alpha,\beta,\gamma$.
The orientation-dependent probability to reach the intermediate excited state, $\rho_{2P}(\omega)$, is obtained from
the transition probability for two-photon absorption from the ground state $|\Psi_g\rangle$ to the intermediate electronically excited state $|\Psi_o\rangle$. The latter in general is defined as~\cite{Peticolas1967}
\begin{subequations}
\label{eq:peticolas1b}
\begin{equation}
\label{eq:probatlitypeticolas}
A^{(2)}_{o,g} = \tilde{\mathcal{N}}_0(\omega_{\mathrm{ph}})\, |\mathcal{M}|^2\,,
\end{equation}
where $\mathcal{M}$,
in the strict electric dipole approximation,
$\exp(i\mathbf{k}\cdot\mathbf{r})\approx1$, reads
\begin{eqnarray}
\label{eq:peticolas2b}
\mathcal{M} &=& \sum_{n} \left\{\dfrac{(\mathbf{e}_1\cdot \langle\Psi_o|\mathbf{r}|\Psi_n\rangle)(\langle\Psi_n|\mathbf{r}|\Psi_g\rangle\cdot \mathbf{e}_2)}{\hbar\omega_g-\hbar\omega_n +\hbar\omega_{\mathrm{ph},2}}\right.\nonumber\\
&&\quad\quad\left.+\dfrac{(\mathbf{e}_2\cdot \langle\Psi_o|\mathbf{r}|\Psi_n\rangle)(\langle\Psi_n|\mathbf{r}|\Psi_g\rangle\cdot \mathbf{e}_1)}{\hbar\omega_g-\hbar\omega_n +\hbar\omega_{\mathrm{ph},1}}\right\}.\quad\quad
In Eq.~\eqref{eq:peticolas2b},
$\mathbf{e}_j$ denotes the polarization direction (without specifying a certain frame of reference) of photon $j$ ($j=1,2$) with energy $\hbar\omega_{\mathrm{ph},j}$.
\end{subequations}
To shorten notation, the polarization independent quantity $\tilde{\mathcal{N}}_0(\omega_{\mathrm{ph}})$
in Eq.~\eqref{eq:probatlitypeticolas} contains all prefactors,
\[
\tilde{\mathcal{N}}_0(\omega_{\mathrm{ph}}) = \dfrac{2\pi e^4_0}{\hbar^3 c^2} (F_1\, \hbar\omega_{\mathrm{ph},1})\,I(\omega_{\mathrm{ph},2})\,,
\]
with $e_0$ being the elementary charge, and where $F_1$ and $I(\omega_{\mathrm{ph},2})$ refer to the incident laser photon flux (of type $1$) and the energy flux per unit frequency (of type $2$), respectively~\cite{Peticolas1967}.
Evaluation of Eq.~\eqref{eq:peticolas2b} requires a frame transformation, since
the wavefunctions involved in the two-photon transition matrices are known in the molecular frame whereas the polarization directions of the photons are given in the laboratory frame of reference.
As before, transformation of the polarization directions from the laboratory frame to the molecular frame is
carried out by means of the Wigner rotation matrices parametrized by the Euler angles $\omega = (\alpha,\beta,\gamma)$. Consequently, the orientation-dependent two-photon absorption probability is obtained as
\begin{subequations}
\label{eq:newtensor}
\begin{eqnarray}
\rho_{2\mathrm{P}}(\omega)
&=&
\left(\dfrac{8\pi^2\hbar}{3}\right)^2\tilde{\mathcal{N}}_0(\omega_{\mathrm{ph}})\left| \sum_{q_1,q_2}\mathcal{D}^{(1)}_{q_1,\rm\varrho_1}(\omega)
\mathcal{D}^{(1)}_{q_2,\rm\varrho_1}(\omega)\,
T_{q_1,q_2}\right|^2\,,\nonumber\\
\label{eq:density2}
\end{eqnarray}
where we have applied the properties of the rotation matrices between both
frames, detailed in Appendix~\ref{subsec:rotmat}, to Eq.~\eqref{eq:peticolas2b}. In Eq.~\eqref{eq:density2}, $T_{q_1,q_2}$ denotes
the two-photon absorption tensor in the
molecular frame of reference, whose elements read
\begin{eqnarray}
T_{q_1,q_2} &=& \sum_{n}\dfrac{\langle\Psi_o| r_{q_1}|n\rangle\langle n| r_{q_2}|\Psi_g\rangle}{\hbar\omega_g - \hbar\omega_n + \hbar\omega_{\mathrm{ph},2}}
+ \dfrac{\langle\Psi_o| r_{q_2}|n\rangle\langle n| r_{q_1}|\Psi_g\rangle}{\hbar\omega_g - \hbar\omega_n + \hbar\omega_{\mathrm{ph},1}}\,.\nonumber\\
\label{eq:tensordef}
\end{eqnarray}
\end{subequations}
In Eq.~\eqref{eq:density2}, $\varrho_1$ denotes
the polarization direction, in the laboratory frame of reference, of the light driving the two-photon absorption process, i.e. $\varrho_1 = \pm 1,0$, both photons
having the same polarization direction.
Additionally, the indices $q_1$ and $q_2$ take the values $\pm 1,0$. Finally, $r_{q_k}$ denotes the spherical component of the
position operator $\hat{\mathbf{r}}$, with $q_k = \pm 1,0$.
The correspondence between the spherical and Cartesian components of $r_{q_k}$ is
detailed in Eq.~\eqref{eq:cartesian_spherical}. Hence, it is straightforward
to write $T_{q_1,q_2}$ in terms of the tensor elements written in the
Cartesian basis, $T_{og}^{\alpha\beta}(\omega_{\mathrm{ph}})$, for $\alpha,\beta=x,y,z$, cf. Eq.~\eqref{tanmoments}.
The correspondences are detailed in Eq.~\eqref{eq:tensor_def2}, in
Appendix~\ref{subsec:rotmat}.
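For illustration, a simple sketch (ours) of this change of basis for a
symmetric Cartesian two-photon tensor reads as follows, using the standard
spherical components $r_{+1}=-(x+\mathrm{i}y)/\sqrt{2}$, $r_0=z$,
$r_{-1}=(x-\mathrm{i}y)/\sqrt{2}$ (the phase convention should be matched
against Eq.~\eqref{eq:cartesian_spherical}):
\begin{verbatim}
import numpy as np

# rows: q = +1, 0, -1; columns: x, y, z
U = np.array([[-1/np.sqrt(2), -1j/np.sqrt(2), 0],
              [0,              0,             1],
              [1/np.sqrt(2),  -1j/np.sqrt(2), 0]])

T_cart = np.random.rand(3, 3)
T_cart = 0.5*(T_cart + T_cart.T)    # two identical photons -> symmetric

T_sph = U @ T_cart @ U.T            # T_{q1 q2} = sum_ab U_{q1 a} U_{q2 b} T^{ab}
assert np.allclose(T_sph, T_sph.T)  # symmetry survives the change of basis
\end{verbatim}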
A further step consists of normalizing the
probability density, such that the normalization condition,
\begin{eqnarray}
\label{eq:normalization_condition}
\int \rho_{2P}(\omega)\,d\omega = 1
\end{eqnarray}
is fulfilled. Using the properties of addition of angular momenta, it is
straightforward to find that the normalization factor reads, upon integration
of Eq.~\eqref{eq:density2} over the Euler angles,
\begin{subequations}
\begin{eqnarray}
\tilde{\mathcal{N}}_0(\varrho_1) &=&\tilde{\gamma}(\omega_{\mathrm{ph}})\mathcal{B}(\varrho_1)
\end{eqnarray}
where we have defined,
\begin{widetext}
\begin{eqnarray}
\label{eq:normalization_factor}
\mathcal{B}(\varrho_1) &=&\sum_{\substack{q_1,q_2\\q^\prime_1,q^\prime_2}} T_{q_1,q_2}\,T^{*}_{q^\prime_1,q^\prime_2}\sum_{\substack{\mathcal{Q}=0}}^2
\left(2\mathcal{Q}+1 \right)
\begin{pmatrix} 1 & 1 & \mathcal{Q}\vspace*{0.31cm}\\ q^\prime_1 & q^\prime_2 & -q^\prime_1-q^\prime_2 \end{pmatrix}
\begin{pmatrix} 1 & 1 & \mathcal{Q}\vspace*{0.31cm}\\ \varrho_1 & \varrho_1 & -2\varrho_1 \end{pmatrix}
\begin{pmatrix} 1 & 1 & \mathcal{Q}\vspace*{0.31cm}\\ q_1 & q_2 & -q^\prime_1-q^\prime_2 \end{pmatrix}
\begin{pmatrix} 1 & 1 & \mathcal{Q}\vspace*{0.31cm}\\ \varrho_1 & \varrho_1 & -2\varrho_1 \end{pmatrix}\,,\nonumber\\
\end{eqnarray}
\end{widetext}
\label{eq:prob_density1}
\end{subequations}
with $\tilde{\gamma}(\omega_{\mathrm{ph}})\equiv (8\pi^2\hbar/3)^2\,\tilde{\mathcal{N}}_0(\omega_{\mathrm{ph}})$.
To retrieve Eqs.~\eqref{eq:normalization_factor}, we have made use of the properties
involving the product of two Wigner rotation matrices, as well as the
integration involving a product of three Wigner rotation matrices, and applied
them to Eq.~\eqref{eq:density2}. These properties are outlined in
Eq.~\eqref{subeq:properties1:2} and Eq.~\eqref{eq:three_integral}, in
Appendix~\ref{subsec:rotmat} and Appendix~\ref{subsec:X2+1}, respectively.
Finally, the orientation dependent probability density reads,
\begin{eqnarray}
\label{eq:final_proba2p}
\rho_{2P}(\omega) &=& \mathcal{N}_0(\varrho_1)
\left| \sum_{q_1,q_2}\mathcal{D}^{(1)}_{q_1,\rm\varrho_1}(\omega)
\mathcal{D}^{(1)}_{q_2,\rm\varrho_1}(\omega)\,
T_{q_1,q_2}\right|^2\,,\quad\quad
\end{eqnarray}
with $\mathcal{N}_0(\varrho_1) = \mathcal{B}^{-1}(\varrho_1)$.
To simplify the notation, and unless otherwise stated, we write $\mathcal{N}_0=\mathcal{N}_0(\varrho_1)$.
It is important to note, however, that in practice, computation of $\mathcal{N}_0$ is
not required, since this factor is common to all Legendre coefficients, and all
of them are given, as in the experiment~\cite{LuxACIE12,LuxCPC15}, normalized with respect to $c_0$.
Each component of the second-rank tensor $T_{q_1, q_2}$ determines
a property of the system, namely, the average transition rate. As a result,
the tensor $T_{q_1, q_2}$ has two types of symmetry properties. The first one is
an intrinsic symmetry originating from the property itself:
$T_{q_1, q_2}$ defines the probability of absorption of two identical
photons, and since two photons of the same energy and polarization are
indistinguishable, $T_{q_1, q_2}$ has to be symmetric. Indeed, for
$\omega_{\mathrm{ph},1}=\omega_{\mathrm{ph},2}$, Eq.~\eqref{eq:tensordef} is
manifestly invariant under exchange of $q_1$ and $q_2$. The second type of
symmetry comes from the geometric symmetry of the molecule, which specifies
which of the tensor components have to be
zero~\cite{McCLAIN19771,Marco1983}.
In the isotropic case, $\rho_{2\mathrm{P}}(\alpha,\beta,\gamma)=1$, and
evaluation of
Eq.~\eqref{eq:weighted} is analogous to integrating over
Eq.~\eqref{eq:differential2}, resulting in the standard
expressions for the differential photoionization cross
section~\cite{Reid2003,ChandraJPhysB87,ChengPRA10,cooper,Yang1948,Janssen2014}.
If the weak probe photon is linearly polarized
($\epsilon^\prime_{\varrho_2}=\epsilon^\prime_0$),
only $P_0$ and $P_2$ can become non-zero, whereas for circularly polarized
light, $P_0$, $P_1$ and $P_2$ can have non-vanishing
values. Moreover, the laboratory frame PAD preserves the cylindrical symmetry with
respect to the propagation direction of the light $z^\prime$, i.e.,
$\mu=\varrho_2-\varrho_2=0$ in Eq.~\eqref{eq:differential2}.
The situation changes if the probability to populate the
intermediate electronically excited state becomes anisotropic.
If this probability depends
on the initial orientation of the molecule, given in terms of the
Euler angles $\omega$ with respect to the laboratory frame
$\mathcal{R}^\prime$,
the Wigner rotation matrices in Eq.~\eqref{eq:density2}
couple to those in Eq.~\eqref{eq:differential2}. Upon
integration over the Euler angles in Eq.~\eqref{eq:weighted}, this
gives rise to higher order Legendre polynomials in the PAD, as we show
now.
To evaluate the angular momentum coupling in Eq.~\eqref{eq:weighted},
we expand the norm squared in
Eq.~\eqref{eq:density2}. Making use of the product rule for
Wigner rotation matrices, Eq.~\eqref{eq:density2} then becomes
\begin{subequations}\label{eq:rho}
\begin{widetext}
\begin{eqnarray}
\label{eq:density3}
\rho_{2\mathrm{P}}(\omega) &=&\mathcal{N}_0\sum_{\substack{q_1,q_2 \\ q_3,q_4}}
(-1)^{q_3+q_4}\,T_{q_1,q_2}T^{*}_{q_3,q_4}
\sum^4_{K=0}\,g^{(K)}_{q_1,q_2,q_3,q_4}
\mathcal{D}^{(K)}_{s,0}(\omega)\,,
\end{eqnarray}
with $s=q_1+q_2-q_3-q_4$, and where we have defined
\begin{eqnarray}
\label{eq:gK}
g^{(K)}_{q_1,q_2,q_3,q_4}(\varrho_1) &=&
\sum^2_{Q=0}
\sum^2_{Q^\prime=0}
\gamma^{(K)}_{Q,\rm Q^\prime}
\begin{pmatrix} 1 & 1 & Q \vspace*{0.33cm} \\ q_1 & q_2 & -q_1-q_2\end{pmatrix}
\begin{pmatrix} 1 & 1 & Q \vspace*{0.33cm} \\ \varrho_1 & \varrho_1 & -2\varrho_1 \end{pmatrix}\\\nonumber
&&\times
\begin{pmatrix} 1 & 1 & Q^\prime \vspace*{0.33cm} \\ q_3 & q_4 & -q_3-q_4\end{pmatrix}
\begin{pmatrix} 1 & 1 & Q^\prime \vspace*{0.33cm} \\ \varrho_1 & \varrho_1 & -2\varrho_1 \end{pmatrix}
\begin{pmatrix} Q & Q^\prime & K \vspace*{0.33cm} \\ q_1+q_2 & -q_3-q_4 & -s\end{pmatrix}
\begin{pmatrix} Q & Q^\prime & K \vspace*{0.33cm} \\ 2\varrho_1 & -2\varrho_1 & 0\end{pmatrix}
\end{eqnarray}
\end{widetext}
\end{subequations}
with $\gamma^{(K)}_{Q,\rm Q^\prime}=(2Q+1)(2Q^\prime+1)(2K+1)$.
In Eq.~\eqref{eq:density3}, the orientation dependence is contained
in $\mathcal{D}$, the polarization dependence in $g$ and the dependence on
molecular parameters in $T$.
The derivation of Eqs.~\eqref{eq:rho},
employing the standard angular momentum
algebra, is presented in
Appendix~\ref{subsec:2Ptensor}. We make once more use of the product
rule for two rotation matrices, namely those
involving the laser polarization in Eq.~\eqref{eq:differential2},
cf. Eq.~\eqref{subeq:property1}
in Appendix~\ref{subsec:X2+1}.
Thus, a product of three rotation matrices is obtained
when inserting Eqs.~\eqref{eq:rho} and~\eqref{eq:differential3_appendix}
into Eq.~\eqref{eq:weighted0}.
Evaluating the products of the Wigner $3j$ symbols, the
differential cross section, Eq.~\eqref{eq:weighted0},
for a specific orientation $\omega$ of the molecule becomes
\begin{subequations}\label{eq:diffXLF}
\begin{eqnarray}
\label{eq:diff1}
\frac{d^2\sigma_{2+1}}{d\omega d\Omega_{{\mathbf{k}}^\prime}} &=&
c_o\,
\sum^{\infty}_{\mathcal{L}=0}
\sum^{+\mathcal{L}}_{\mu=-\mathcal{L}}
b^{\mu}_\mathcal{L}(\omega)
P^{\mu}_{\mathcal{L}}(\cos{\vartheta^\prime_{k}})\,e^{i\mu\phi^{\prime}_{k}}\,,\quad\quad
\end{eqnarray}
where the only orientation-dependent quantity,
$b^{\mu}_\mathcal{L}(\omega)$, is given by
\begin{eqnarray}
\label{eq:a_coef}
b^{\mu}_\mathcal{L}(\omega) &=&
\sum_{\lambda}
\kappa^{\mu}_{\mathcal{L}}(\lambda)\,\,
\mathcal{D}^{K}_{s,\rm 0}(\omega)
\mathcal{D}^{\nu}_{q-q^\prime,\rm 0}(\omega)
\mathcal{D}^{\mathcal{L}}_{m^\prime-m,\rm -\mu}(\omega)\,.\quad\quad\quad
\end{eqnarray}
\end{subequations}
Note that the summation in Eq.~\eqref{eq:a_coef}
runs over all indices, except $\mathcal{L}$ and $\mu$, i.e.,
$\lambda=\{K,\nu,Q,Q^\prime,q,q^\prime,q_k,n_o,n^\prime_o,\ell,\ell^\prime,\ell_o,\ell^\prime_o\}$,
with $K=0,\ldots,4$ and $\nu=0,1,2$ appearing from the coupling of the
first and second
Wigner rotation matrices in Eq.~\eqref{eq:differential2},
cf. Eq.~\eqref{eq:condense_1}. The specific form of
$\kappa^\mu_{\mathcal{L}}(\lambda)$ is detailed in
Eq.~\eqref{eq:kappa_def}, in Appendix~\ref{subsec:X2+1}.
We can now use the integral properties of a product
of three Wigner rotation
matrices~\cite{Varshalovich1988,irreducible,edmonds},
cf. Eq.~\eqref{eq:three_integral} in
Appendix~\ref{subsec:X2+1}. Integration of $b^{\mu}_{\mathcal{L}}(\omega)$
over the Euler angles then yields
\begin{eqnarray}
\label{a1_integrated}
c^{\mu}_{\mathcal{L}}&=&
\int b^{\mu}_{\mathcal{L}}(\omega)\; d^3\omega\\
&=&\sum_{\lambda}\kappa^\mu_{\mathcal{L}}(\lambda)
\begin{pmatrix} K & \nu & \mathcal{L}\\s & q-q^\prime & m^\prime-m\end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\\0 & 0 & -\mu\end{pmatrix}\nonumber\\
&=& \nonumber
\sum_{\lambda}\kappa^\mu_{\mathcal{L}}(\lambda)
\begin{pmatrix} K & \nu & \mathcal{L}\\s & q-q^\prime & m^\prime-m\end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\\0 & 0 & 0\end{pmatrix}\,\delta_{\mu,0}\,.
\end{eqnarray}
Note that the second Wigner symbol in the right-hand side of
Eq.~\eqref{a1_integrated} is non-zero only if $\mu=0$ and
$K+\nu+\mathcal{L}$ is even with $|K-\nu|\le\mathcal{L}\le K+\nu$.
Because $\mu=0$, the terms depending on the azimuthal angle
in Eq.~\eqref{eq:differential2} do not contribute and
we retrieve cylindrical symmetry for the PAD of
Eq.~\eqref{eq:weighted} which can thus be expressed in terms of
Legendre polynomials. Furthermore, according to the fifth and sixth
Wigner symbols in
Eq.~\eqref{eq:gK}, $K=0,\ldots,4$, because $|Q-Q^\prime|\le K\le Q+Q^\prime$, and
$0\le Q\le2$ according to the first and second Wigner symbols in
Eq.~\eqref{eq:gK}. The same applies to $Q^\prime$, reflecting the
addition of angular momentum in a two-photon absorption process.
Making use, in Eq.~\eqref{a1_integrated}, of the fact that
the non-zero contributions for $\nu$ are given by $\nu=0,1,2$,
cf. Eq.~\eqref{eq:condense_1}, one obtains that $\mathcal{L}$
runs from $0$ to $6$, and higher orders give only vanishing
contributions. Therefore, the highest order
Legendre polynomial that contributes to the PAD is
$\mathcal{L}_{\mathrm{max}}=6$, as expected for a 2+1 process from the
$2(m+n)-1$ rule~\cite{Reid2003}.
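The selection rules just outlined can be verified mechanically. The following
Python sketch (an illustration added here; it uses the Wigner $3j$ symbols as
implemented in \texttt{sympy}) enumerates the orders $\mathcal{L}$ permitted
by the second $3j$ symbol in Eq.~\eqref{a1_integrated} for $K=0,\ldots,4$ and
$\nu=0,1,2$:
\begin{verbatim}
from sympy.physics.wigner import wigner_3j

# Orders L allowed by (K nu L; 0 0 0) with K = 0..4 (two-photon
# step) and nu = 0..2 (one-photon ionization step).
allowed = sorted({
    L
    for K in range(5)
    for nu in range(3)
    for L in range(K + nu + 1)
    if wigner_3j(K, nu, L, 0, 0, 0) != 0
})
print(allowed)       # [0, 1, 2, 3, 4, 5, 6]
print(max(allowed))  # 6, i.e. L_max = 6
\end{verbatim}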
Finally, evaluating Eq.~\eqref{eq:weighted} with the help of
Eq.~\eqref{a1_integrated} yields the experimentally measured PAD that
is obtained for an initial ensemble of randomly oriented
molecules,
\begin{subequations}\label{eq:PADfinal}
\begin{eqnarray}
\label{eq:final_LFPAD}
\frac{d\sigma_{2+1}}{d{\Omega_{\mathbf{k}^\prime}}} &=&
\sum^{6}_{\mathcal{L}=0}\, c_{\mathcal{L}}\,
P_{\mathcal{L}}\left(\cos{\vartheta^\prime_{\mathbf{k}}}\right)\,,
\end{eqnarray}
with coefficients
\begin{widetext}
\begin{eqnarray}
\label{eq:final_coeff}
c_{\mathcal{L}}({\varrho_1},{\varrho_2}) &=& \tilde{c}_o\,\mathcal{N}_0 \sum_{\substack{\ell,m \\ n_o,\ell_o,m_o}}\,
\sum_{\substack{\ell^\prime,m^\prime \\ n^\prime_o\ell^\prime_o,m^\prime_o}}\, \sum_{q,q^\prime}
\sum_{\substack{q_1,q_2 \\ q_3,q_4}}
\sum^2_{\nu=0} \sum^4_{K=0}
(-1)^{q_3+q_4}\,(2\nu+1)(2\mathcal{L}+1)
a^{\ell_o}_{m_o}(n_o)\, a^{*\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\, T_{q_1,q_2} T^{*}_{q_3,q_4}
\\\nonumber
&&\times (-\mathrm{i})^{\ell-\ell^\prime}\,
(-1)^{m^\prime-q-\varrho_2}\,e^{\mathrm{i}(\delta_{\ell}-\delta_{\ell^\prime})}\,\,g^{(K)}_{q_1,q_2,q_3,q_4}(\varrho_1)
\,\,I^{n_o}_{_{k}}(\ell,\ell_o)\,\, I^{n^\prime_o}_{_{k}}(\ell^\prime,\ell^\prime_o)\,\,\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)\,\hat{\varsigma}(\ell,\ell^\prime)\,\\\nonumber
&&
\times
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ m & -m^\prime & m^\prime-m\end{pmatrix}
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ q & -q^\prime & q^\prime-q \end{pmatrix}
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ \varrho_2 & -\varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\s & q-q^\prime & m^\prime-m\end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\0 & 0 & 0\end{pmatrix}\,.
\end{eqnarray}
\end{widetext}
\end{subequations}
with $\tilde{c}_o=4\pi c_o$, and $\hat{\varsigma}(\ell,\ell^\prime)= \sqrt{(2\ell+1)(2\ell^\prime+1)}$. Derivation
of Eq.~\eqref{eq:PADfinal} is explicitly detailed in Appendix~\ref{subsec:X2+1}. Note that the coefficients
$c_{\mathcal{L}}(\varrho_1,\varrho_2)$ depend on
the expansion coefficients $a_{m_o}^{\ell_o}(n_o)$ describing the
intermediate electronically exited state, the two-photon absorption
tensor elements, $T_{q_1,q_2}$, and the laser polarization directions of the
two-photon absorption step, $\varrho_1$, and of the one-photon
ionization, $\varrho_2$.
We would like to emphasize
that the contribution of Legendre polynomials with order higher
than 2 in Eq.~\eqref{eq:PADfinal} is due to the
orientation dependence of populating the intermediate electronically
excited state by two-photon absorption from the electronic ground
state.
That is, the density $\rho_{2\mathrm{P}}(\omega)$ expresses the fact that molecules
with a certain orientation $\omega=\omega_1$ have a larger probability to
undergo non-resonant two-photon absorption than molecules with
some other orientation $\omega=\omega_2$. So although the
molecules are assumed to be completely randomly oriented with respect
to the laser beam axis when they are in their electronic ground state,
an effective alignment
results for those molecules that absorb two photons. This effective
alignment results from selection of certain orientations rather than
rotational dynamics which would occur on a much slower timescale.
The contribution of higher order Legendre polynomials to the PAD is
then entirely determined by the properties of the two-photon absorption
tensor and the electronically excited state. In order to interpret the
experimentally observed PADs for fenchone and camphor
in terms of their expansion in Legendre
polynomials, at least qualitatively, we
estimate $a_{m_o}^{\ell_o}(n_o)$ and $T_{q_1,q_2}$ using \textit{ab initio}
calculations or via fitting.
Before presenting the corresponding details in Sec.~\ref{sec:abinitio}, we
discuss below the basic symmetry properties of these parameters of our
model as well as the dependence on the laser
polarization directions $\varrho_1$, $\varrho_2$.
\subsection{PECD and symmetry}
\label{subsec:parity}
By definition, PECD is obtained if the sign of the odd Legendre
coefficients changes when the helicity of the electric field changes.
Analogously, for fixed electric field helicity, the odd Legendre
coefficients change sign when enantiomers are interchanged.
We therefore first inspect sign changes in the Legendre
coefficients for molecules of opposite handedness within our
one-center expansion framework.
The relation between a given enantiomer and its mirror image is given
by the parity operator, which changes the
coordinates $\mathbf{r}$ to $-\mathbf{r}$.
We therefore check, in the following, that our model transforms properly
under parity.
Moreover, we determine the role that the excited state
coefficients $a^{\ell_o}_{m_o}(n_o)$ and two-photon absorption tensor
elements play for each Legendre
coefficient that contributes to the PAD. To this end, we rewrite
Eq.~\eqref{eq:final_coeff}, expressing each
$c_{\mathcal{L}}({\varrho_1},{\varrho_2})$
explicitly in terms of the $a^{\ell_o}_{m_o}(n_o)$ and $T_{q,q^\prime}$,
\begin{eqnarray}
\label{eq:final_cj}
c_{\mathcal{L}}({\varrho_1},{\varrho_2})&=&
\sum_{\substack{n_o,\ell_o,m_o\\ n^\prime_o,\ell^\prime_o,
m^\prime_o}}\sum_{\substack{q_1,q_2 \\ q_3,q_4}}\,
\gamma^{n_o,\ell_o,m_o,n^\prime_o,\ell^\prime_o,m^\prime_o}_{q_1,q_2,q_3,q_4}(\mathcal{L},\epsilon^\prime_{\varrho_1},\epsilon^\prime_{\varrho_2})\nonumber\\
&&\times
a^{\ell_o}_{m_o}(n_o)\,a^{*\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\,
T_{q_1,q_2}\, T^{*}_{q_3,q_4}\nonumber\\
\end{eqnarray}
Equation~\eqref{eq:final_cj}
allows for determining each Legendre coefficient
as a function of the
intermediate electronically excited state via $a^{\ell_o}_{m_o}(n_o)$
and $T_{q,q^\prime}$, i.e., it connects the measured Legendre
coefficients to the electronic structure properties.
We can thus compare the contribution of different
$a^{\ell_o}_{m_o}(n_o)$ to different Legendre coefficients $c_{\mathcal{L}}$, and explain
differences, observed e.g. for different molecules, in
terms of the electronic structure. This is important because
investigation of camphor and fenchone revealed, for example,
the same order of magnitude for the first and third Legendre
coefficient in camphor, in contrast to fenchone where
$c_3$ is about one order of magnitude smaller than
$c_1$~\cite{LuxACIE12,LuxCPC15}. This
observation suggests a significantly different electronic structure
despite the fact that the two bicyclic
monoketones are constitutional isomers
which differ only in the position of the geminal methyl
groups~\cite{PollmannSpectro97}.
In the following,
we discuss the behavior under parity and the contribution of the
$a^{\ell_o}_{m_o}(n_o)$ and $T_{q,q^\prime}$ to the
$c_{\mathcal{L}}({\varrho_1},{\varrho_2})$ separately for the excited state coefficients, the
two-photon absorption tensor and the laser polarization.
\subsubsection{Role of the excited state expansion coefficients}
\label{subsec:role_expansion_coefficients}
In this section, we explicitly show that our single-center expansion for the
$(2+1)$ REMPI process properly transforms under parity. Note that
the two-photon absorption process conserves parity, which implies
that exchanging enantiomers results in a parity change of the
expansion coefficients of the intermediate electronically excited state,
from $a^{\ell_o}_{m_o}(n_o)$ to
$(-1)^{\ell_o}\,a^{\ell_o}_{m_o}(n_o)$. For practical convenience, we
define the following quantity present in
Eq.~\eqref{eq:final_coeff} depending on $\ell_o$ and $m_o$,
\begin{eqnarray}
\label{eq:S_parity1}
\mathcal{P}_{\mathcal{L}}&=& a^{\ell_o}_{m_o}(n_o) a^{\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\
0 & 0 & 0\end{pmatrix}\,.\nonumber\\
\end{eqnarray}
Upon application of the parity operator,
Eq.~\eqref{eq:S_parity1} becomes
\begin{eqnarray}
\label{eq:S_parity11}
\tilde{\mathcal{P}}_{\mathcal{L}}&=& (-1)^{\ell_o+\ell^\prime_o}\,a^{\ell_o}_{m_o}(n_o) a^{\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\nonumber\\
&&
\times\,
\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\
0 & 0 & 0\end{pmatrix}\,.\nonumber\\
\end{eqnarray}
Furthermore, we make use of the following property of the Wigner $3j$
symbols~\cite{irreducible,edmonds,Hans1957,Varshalovich1988},
\begin{eqnarray}
\label{eq:wigner_prop1}
\begin{pmatrix} j & j^\prime & J \vspace*{0.33cm} \\ m & m^\prime & M \end{pmatrix}
=(-1)^{j+j^\prime+J}
\begin{pmatrix} j & j^\prime & J \vspace*{0.33cm} \\ -m & -m^\prime &
-M \end{pmatrix}\,,\quad
\end{eqnarray}
and apply it to the first Wigner $3j$ symbol in the expressions for
$\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)$ and $\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)$, i.e.
Eq.~\eqref{eq:S}, containing triple zeros in the second row.
The parity-transformed $\mathcal P_{\mathcal{L}}$ thus becomes
\begin{eqnarray}
\label{eq:S_tilde2}
\tilde{\mathcal{P}}_{\mathcal{L}}&=&
(-1)^{\ell_o+\ell^\prime_o}\, (-1)^{\ell+\ell_o+\ell^\prime+\ell^\prime_o}\,
a^{\ell_o}_{m_o}(n_o) a^{\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\nonumber\\
&&\times\,\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0\end{pmatrix}\,.
\end{eqnarray}
Applying Eq.~\eqref{eq:wigner_prop1} once more to the Wigner $3j$
symbol in Eq.~\eqref{eq:S_tilde2} allows for eliminating the
explicit dependence of $\tilde{\mathcal{P}}_{\mathcal{L}}$
on the partial waves $\ell$ and $\ell^\prime$,
\begin{eqnarray}
\label{eq:parity_coeff}
\tilde{\mathcal{P}}_{\mathcal{L}}&=&
(-1)^{\ell_o+\ell^\prime_o}\,(-1)^{\ell+\ell_o+\ell^\prime+\ell^\prime_o}\,
a^{\ell_o}_{m_o}(n_o) a^{\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\,
\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)\nonumber\\
&&\times
(-1)^{\ell+\ell^\prime+\mathcal{L}}
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0\end{pmatrix}\nonumber\\
&=&
(-1)^{\mathcal{L}}\,
a^{\ell_o}_{m_o}(n_o) a^{\ell^\prime_o}_{m^\prime_o}(n^\prime_o)\,
\mathcal{S}^{\ell,m}_{\ell_o,m_o}(q)
\,\mathcal{S}^{\ell^\prime,m^\prime}_{\ell^\prime_o,m^\prime_o}(q^\prime)
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0\end{pmatrix}\nonumber\\
&=&
(-1)^{\mathcal{L}}\mathcal{P}_{\mathcal{L}}\,.
\end{eqnarray}
Because $\mathcal P_{\mathcal{L}}$ and $\tilde{\mathcal{P}}_{\mathcal{L}}$
refer, by construction, to enantiomers of opposite
handedness, Eq.~\eqref{eq:parity_coeff} implies a change of sign for
$\mathcal{L}$ odd, cf. Eq.~\eqref{eq:PADfinal}, when interchanging
enantiomers, and no sign change for $\mathcal{L}$ even.
Our model properly reproduces this basic symmetry behavior.
The corresponding behavior under change of the light helicity, keeping
the same enantiomer, is checked below in
Sec.~\ref{subsec:role_polarization}.
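The key property underlying this argument, namely that a Wigner $3j$ symbol
with a vanishing lower row is non-zero only if the sum of its upper entries is
even, cf. Eq.~\eqref{eq:wigner_prop1}, is easily confirmed numerically. A
small sketch (added for illustration, again relying on \texttt{sympy}):
\begin{verbatim}
from sympy.physics.wigner import wigner_3j

# (l l' L; 0 0 0) vanishes whenever l + l' + L is odd.
for l in range(4):
    for lp in range(4):
        for L in range(abs(l - lp), l + lp + 1):
            if (l + lp + L) % 2 == 1:
                assert wigner_3j(l, lp, L, 0, 0, 0) == 0
print("all odd l + l' + L combinations vanish")
\end{verbatim}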
Next we check the dependence of the non-zero Legendre coefficients contributing
to the PAD on the maximum order $L_{o,\mathrm{max}}$ of the excited
state coefficients, $a^{\ell_o}_{m_o}(n_o)$, cf. Eq.~\eqref{eq:exited_state2}.
According to Equation~\eqref{eq:final_coeff}, a non-zero projection of
the electronically excited state onto
$d$-orbitals ($\ell_o=2$) is required
to ensure that higher orders $c_{\mathcal{L}}$ are non-zero.
In fact, an additional requirement to reach
$\mathcal{L}_{\mathrm{max}}=6$
is that $L_{o,\mathrm{max}}\ge 2$. This is straightforward to see by
inspecting the term
\begin{eqnarray*}
\begin{pmatrix} \ell & \ell^\prime & \mathcal{L}\vspace*{0.33cm}\\ 0 & 0 & 0\end{pmatrix}
\end{eqnarray*}
in Eq.~\eqref{eq:final_coeff},
defining the PAD for a $(2+1)$ REMPI process. This term vanishes
unless $\ell+\ell^\prime+\mathcal{L}$ is even and
$|\ell-\ell^\prime|\le\mathcal{L}\le\ell+\ell^\prime$. In order to
reach $\mathcal{L}_{\mathrm{max}}=6$, the minimal requirement in terms
of the angular momentum for the continuum
wavepacket is $\ell_{\mathrm{max}}=3$. Together with the selection rule
$\ell_{\mathrm{max}}=L_{o,\mathrm{max}}+1$, cf. Eq.~\eqref{eq:S}, this implies
$L_{o,\mathrm{max}}=2$, i.e., presence of $d$-waves in the
resonantly excited state. Note that a contribution from
higher partial waves only modifies the algebraic value of the Legendre
coefficients, but does not lead to higher orders because, as we have
already pointed out, the maximal order of the Legendre coefficients
is also limited by the term
\begin{eqnarray*}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\
0 & 0 & 0\end{pmatrix}
\end{eqnarray*}
in Eq.~\eqref{eq:final_coeff}.
Perhaps even more interestingly, for circular polarization direction
($\varrho_1=\varrho_2=\pm 1$), $c_5$ vanishes if the
projection of the electronically excited state onto $\ell_o=3$
is zero. In other words, expansion of the electronically
excited state in terms of $s$, $p$ and $d$ orbitals results in
non-zero Legendre coefficients $c_\mathcal{L}$ for $\mathcal{L}$ up to
6, except for $c_5$. In fact, we found $c_5$ to appear only in the
presence of a non-vanishing
contribution of $f$ orbitals. This does not result from selection
rules as discussed before, but rather from an accidental compensation
of terms in the summations in Eq.~\eqref{eq:final_coeff} which arises
from the central symmetry of our single center basis functions.
Given the experimental observations of Refs.~\cite{LuxACIE12,LuxCPC15},
we expect the electronically excited state for fenchone and
camphor to have non-vanishing projections onto $s$-, $p$-, $d$- and
possibly $f$-orbitals. Also, the expansion coefficients of the
electronically excited state will most likely be different for
fenchone and camphor, accounting for the different ratios of $c_3$ and
$c_1$ observed for the two molecules~\cite{LuxACIE12,LuxCPC15}.
\subsubsection{Role of Polarizations $\varrho_1$ and $\varrho_2$}
\label{subsec:role_polarization}
Having shown sign inversion for the odd Legendre coefficients for
enantiomers of opposite handedness and a fixed circular polarization
direction, we outline, in the following, an analogous symmetry
property that is relevant when considering the same enantiomer but
inverting the
polarization direction. By definition, PECD requires all odd
Legendre expansion coefficients for a given enantiomer to change sign
when changing circular
polarization from left to right, and vice versa.
In order to show that our approach also properly reproduces this
behavior, we employ again the symmetry
properties of the Wigner $3j$ symbols in
Eq.~\eqref{eq:final_coeff}, similarly to
Sec.~\ref{subsec:role_expansion_coefficients}.
For the sake of completeness, we consider the general case of
independent polarizations for the two-photon absorption and the
one-photon ionization processes.
First, we consider all terms in Eq.~\eqref{eq:final_coeff}
depending on $\epsilon^\prime_{\varrho_2}$. We apply Eq.~\eqref{eq:wigner_prop1}
to the fourth and sixth Wigner $3j$ symbol in Eq.~\eqref{eq:final_coeff} for
$c_{\mathcal{L}}(-\varrho_1,-\varrho_2)$. This yields
\begin{subequations}\label{eq:factors}
\begin{eqnarray}
\label{eq:factor2}
\begin{pmatrix} 1 & 1 & \nu \vspace*{0.33cm} \\ -\varrho_2 &
+\varrho_2 & 0 \end{pmatrix}
= (-1)^{2+\nu}
\begin{pmatrix} 1 & 1 & \nu \vspace*{0.33cm} \\ \varrho_2 & -\varrho_2
& 0 \end{pmatrix}
\end{eqnarray}
for the fourth Wigner $3j$ symbol, and
\begin{eqnarray}
\label{eq:factor3}
\begin{pmatrix} K & \nu & \mathcal{L} \vspace*{0.33cm} \\ 0 & 0 &
0 \end{pmatrix}
= (-1)^{K+\nu+\mathcal{L}}
\begin{pmatrix} K & \nu & \mathcal{L} \vspace*{0.33cm} \\ 0 & 0 &
0 \end{pmatrix}
\end{eqnarray}
for the sixth Wigner $3j$ symbol in Eq.~\eqref{eq:final_coeff} when the
polarization direction driving the ionization process is
$-\varrho_2$. Next, we evaluate the expression containing the
information about the polarization direction
driving the two-photon absorption process. For
$\epsilon_{-\varrho_1}$, the term $g^K_{\varrho_1}(q_1,q_2,q_3,q_4)$,
defined in Eq.~\eqref{eq:gK}, reads
\begin{eqnarray}
\label{eq:gK_tilde}
g^K_{-\varrho_1}(q_1,q_2,q_3,q_4) = (-1)^{K}\,g^K_{+\varrho_1}(q_1,q_2,q_3,q_4)\,,\quad
\end{eqnarray}
\end{subequations}
when changing $\varrho_1$ to $-\varrho_1$. In Eq.~\eqref{eq:gK_tilde},
we have applied Eq.~\eqref{eq:wigner_prop1} to the second,
fourth and sixth Wigner $3j$ symbols in Eq.~\eqref{eq:gK}.
The Legendre coefficient $c_{\mathcal{L}}(-\varrho_1,-\varrho_2)$
involves, according to Eq.~\eqref{eq:final_coeff},
the triple product of Eqs.~\eqref{eq:factors}, that is,
\begin{widetext}
\begin{eqnarray}
\label{eq:gK_sym}
g^K_{-\varrho_1}(q_1,q_2,q_3,q_4)
\begin{pmatrix} 1 & 1 & \nu \vspace*{0.33cm} \\ -\varrho_2 & +\varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L} \vspace*{0.33cm} \\ 0 & 0 & 0 \end{pmatrix}
&=&
(-1)^{\mathcal{L}}
g^K_{+\varrho_1}(q_1,q_2,q_3,q_4)
\begin{pmatrix} 1 & 1 & \nu \vspace*{0.33cm} \\ +\varrho_2 & -\varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L} \vspace*{0.33cm} \\ 0 & 0 & 0 \end{pmatrix} \,.
\end{eqnarray}
\end{widetext}
This implies, according to Eq.~\eqref{eq:final_coeff},
\begin{eqnarray}
\label{eq:c_sym}
c_{\mathcal{L}}(-\varrho_1,-\varrho_2) = (-1)^{\mathcal{L}}\, c_{\mathcal{L}}(+\varrho_1,+\varrho_2)\,,
\end{eqnarray}
i.e., indeed, only odd Legendre coefficients change
sign when changing simultaneously the
polarization directions $\varrho_1$ and $\varrho_2$, whereas all even
coefficients remain unchanged.
Next, we evaluate all non-vanishing Legendre coefficients as a function
of the polarization directions $\varrho_1$ and $\varrho_2$ without
making any assumptions on the two-photon absorption tensor $T$.
To this end, we first consider the case where the two-photon
absorption process is
driven by linearly polarized light, $\varrho_1=0$.
The second Wigner $3j$ symbol in Eq.~\eqref{eq:gK} then becomes
\begin{eqnarray*}
\label{eq:sigma0}
\begin{pmatrix} 1 & 1 & Q \vspace*{0.33cm} \\ \varrho_1 & \varrho_1 & -2\varrho_1 \end{pmatrix}
&=&
\begin{pmatrix} 1 & 1 & Q \vspace*{0.33cm} \\ 0 & 0 & 0 \end{pmatrix}\,.
\end{eqnarray*}
It does not vanish if and only if $Q=0,2$;
and analogously for the fourth Wigner symbol in Eq.~\eqref{eq:gK}
involving $Q^\prime$. Furthermore, the sixth Wigner $3j$
symbol in Eq.~\eqref{eq:gK} becomes
\begin{eqnarray*}
\begin{pmatrix} Q & Q^\prime & K \vspace*{0.33cm} \\ 0 & 0 & 0 \end{pmatrix}\,,
\end{eqnarray*}
which is non-zero only if $K$ is even, because $Q$ and $Q^\prime$ are
even, and $|Q-Q^\prime| \le K\le Q+Q^\prime$. As a consequence,
because both $Q$ and $Q^\prime$ are
restricted to $0$ and $2$, $K$ must be equal to
0, 2 or 4.
Now, we consider the fourth Wigner $3j$ symbol in
Eq.~\eqref{eq:final_coeff}, namely
\begin{eqnarray}
\label{eq:sigma02}
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ \varrho_2 & -\varrho_2 & 0 \end{pmatrix}\,,
\end{eqnarray}
which contains the information about the photoionization transition.
If the photoionization process is driven by linearly polarized light
($\varrho_2=0$),
the allowed values for $\nu$ in Eq.~\eqref{eq:sigma02} are
$\nu=0,2$. Therefore, the last Wigner symbol in Eq.~\eqref{eq:final_coeff},
\begin{eqnarray}
\label{eq:sigma03}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\0 & 0 &
0\end{pmatrix}\,,
\end{eqnarray}
has non-vanishing values only for $|K-\nu|\le\mathcal{L}\le K+\nu$ and
$K+\nu+\mathcal{L}$ must be even due to the triple zeros in the second
row. Because $K=0,2,4$ for $\varrho_1=0$ and $\nu=0,2$ for
$\varrho_2=0$, the maximal order of Legendre coefficients is
$\mathcal{L}_{\mathrm{max}}=6$ and the non-vanishing Legendre
coefficients are those
for $\mathcal{L}=0,2,4,6$, i.e., there are no odd Legendre polynomials
in the PAD for $\varrho_1=\varrho_2=0$.
On the other hand, if we keep $\varrho_1=0$ but the photoionization
transition is driven by circularly polarized light
($\varrho_2=\pm 1$), the non-vanishing
values in Eq.~\eqref{eq:sigma02} are no longer restricted to even
$\nu$, but instead to $\nu=0,1,2$. Using these values for $\nu$
together with the requirement
$|K-\nu|\le\mathcal{L}\le K+\nu$ in Eq.~\eqref{eq:sigma03}, we obtain,
for $K=0,2,4$ (due to $\varrho_1=0$), even as well as odd Legendre
polynomials in the PAD, i.e., $\mathcal{L}=0,1,\ldots,6$.
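These case distinctions are summarized compactly by the following Python
sketch (an illustration added here; it presupposes an anisotropic two-photon
tensor, so that all values of $K$ allowed by the angular momentum coupling
actually contribute), which reproduces the sets of non-vanishing orders
$\mathcal{L}$ obtained from Eqs.~\eqref{eq:sigma02} and \eqref{eq:sigma03}:
\begin{verbatim}
from sympy.physics.wigner import wigner_3j

def allowed_orders(rho1, rho2):
    # K even (0, 2, 4) for linear rho1 = 0, else K = 0..4;
    # nu even (0, 2) for linear rho2 = 0, else nu = 0, 1, 2.
    K_vals = (0, 2, 4) if rho1 == 0 else range(5)
    nu_vals = (0, 2) if rho2 == 0 else (0, 1, 2)
    return sorted({
        L
        for K in K_vals
        for nu in nu_vals
        for L in range(K + nu + 1)
        if wigner_3j(K, nu, L, 0, 0, 0) != 0
    })

print(allowed_orders(0, 0))  # [0, 2, 4, 6]: even orders only
print(allowed_orders(0, 1))  # [0, 1, ..., 6]: odd orders appear
\end{verbatim}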
Next we check whether PECD can arise, i.e., whether the non-zero odd
coefficients change sign under changing the light helicity, for
$\varrho_1=0$ and $\varrho_2=\pm 1$. To
this end, we explicitly write out the dependence of
Eq.~\eqref{eq:final_coeff} on the polarization direction $\varrho_2$
driving the ionization step and define
\begin{subequations}
\begin{eqnarray}
\label{eq:final_coeff_reduced1}
\zeta^{K,\nu}_{\mathcal{L}}(\varrho_2) &=&
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ \varrho_2 & -\varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\0 & 0 &
0\end{pmatrix}\,,
\end{eqnarray}
corresponding to the fourth and sixth Wigner $3j$ symbol in
Eq.~\eqref{eq:final_coeff}. For the opposite polarization direction
$-\varrho_2$, this quantity becomes
\begin{eqnarray}
\label{eq:final_coeff_reduced2}
\zeta^{K,\nu}_{\mathcal{L}}(-\varrho_2) &=&
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ -\varrho_2 & \varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\0
& 0 & 0\end{pmatrix}\nonumber\\
&=&
(-1)^{2\nu+K+\mathcal{L}}
\begin{pmatrix} 1 & 1 & \nu\vspace*{0.33cm}\\ \varrho_2 & -\varrho_2 & 0 \end{pmatrix}
\begin{pmatrix} K & \nu & \mathcal{L}\vspace*{0.33cm}\\0
& 0 & 0\end{pmatrix}\nonumber\\
&=&
(-1)^{\mathcal{L}}\, \zeta^{K,\nu}_{\mathcal{L}}(\varrho_2)\,,
\label{eq:c_linear_circular}
\end{eqnarray}
\end{subequations}
where we have applied Eq.~\eqref{eq:wigner_prop1} to both Wigner $3j$ symbols
in Eq.~\eqref{eq:final_coeff_reduced2},
together with the fact that $K$ is even for $\varrho_1=0$, as previously
discussed. Finally, inserting Eq.~\eqref{eq:c_linear_circular}
into Eq.~\eqref{eq:final_coeff} yields
\begin{eqnarray}
c_{\mathcal{L}}(\varrho_1=0,-\varrho_2) = (-1)^{\mathcal{L}}
c_{\mathcal{L}}(\varrho_1=0,+\varrho_2)\,.
\end{eqnarray}
As a consequence, also for linearly polarized light driving the two-photon
absorption process, odd Legendre
coefficients change sign when the polarization direction of the
ionizing field is changed from right to left, and vice versa.
Whereas $K$ must be even for $\varrho_1=0$,
$\nu$ is $\nu=0,1,2$ for $\varrho_2=\pm1$,
allowing $\mathcal{L}$ to take odd and even values in
Eq.~\eqref{eq:c_linear_circular}. This implies that there is no need for
circular polarization to drive the two-photon absorption
process: Two-photon absorption driven by linearly polarized light
followed by photoionization with circularly polarized light is
sufficient for observing PECD in chiral molecules.
In Section~\ref{subsec:role_tensor} we investigate the specific role of
the two-photon absorption tensor for all the cases discussed above.
Conversely, the two-photon transition may be driven by circularly
polarized light followed by photoionization with linearly polarized
light, i.e., $\varrho_1=\pm 1$ and $\varrho_2=0$. As shown in
Eq.~\eqref{eq:demo_finished:0} in
Appendix~\ref{subsec:ci:0}, such a configuration leads to a PAD consisting
exclusively of even Legendre contributions.
In Eq.~\eqref{eq:c_sym} we have shown that only odd Legendre
coefficients change sign when changing simultaneously the polarization
direction driving the two-photon absorption and the one-photon
ionization. In
Appendix~\ref{subsec:ci}, we show that
\begin{eqnarray}
\label{eq:ci_rho2}
c_{\mathcal{L}}(\varrho_1,\varrho_2)
= (-1)^{\mathcal{L}}c_{\mathcal{L}}(\varrho_1,-\varrho_2)\,,
\end{eqnarray}
i.e., odd Legendre coefficients change sign when the polarization
direction of the photoionization transition is changed, whereas
the polarization of the field driving the two-photon absorption is
kept fixed. This suggests the polarization direction of the ionizing
field alone to impose the sign for all odd Legendre
coefficients; the polarization direction in the two-photon
absorption process plays no role. To verify this statement, we calculate
$c_{\mathcal{L}}(-\varrho_1,\varrho_2)$ in Appendix~\ref{subsec:ci2}
and find indeed
\begin{eqnarray}
\label{eq:main_sym_result2}
c_{\mathcal{L}}(-\varrho_1,\varrho_2) = c_{\mathcal{L}}(+\varrho_1,\varrho_2)\,.
\end{eqnarray}
That is, the two-photon process determines only the degree of
anisotropy prior to ionization.
To summarize, using linearly polarized light for both two-photon
absorption and one-photon ionization results in a PAD consisting only
of even Legendre polynomials, i.e., vanishing PECD. In contrast, when
the $(2+1)$ REMPI process is driven by circularly polarized light,
higher order odd Legendre polynomials may contribute, depending on the
geometric properties of the resonantly excited state. The occurrence
of non-zero Legendre coefficients for all polarization combinations
is summarized in Table~\ref{table:all_in_one} below.
\subsubsection{Role of the two-photon absorption tensor}
\label{subsec:role_tensor}
The number of Legendre coefficients that contribute to PECD in our
model of the 2+1 REMPI process is determined by how anisotropic the
ensemble of electronically excited molecules is. This, in turn, follows
from the properties of the two-photon absorption tensor. Here, we
check the conditions that $T_{q_1,q_2}$ must fulfill
in order to give rise to this anisotropy. To this end,
we introduce the two-photon absorption amplitude
$\mathcal{A}_{2\mathrm{P}}(\omega)$,
where for convenience the multiplying factor in Eq.~\eqref{eq:final_proba2p} has been dropped,
\begin{eqnarray}
\mathcal{A}_{2\mathrm{P}}(\omega) &=& \sum_{q_1}\sum_{q_2}
\mathcal{D}^{(1)}_{q_1,\varrho_1}(\omega)\,
\mathcal{D}^{(1)}_{q_2,\varrho_1}(\omega)\,
T_{q_1,q_2} \,,
\end{eqnarray}
i.e.,
$\rho_{2\mathrm{P}}(\omega)\propto|\mathcal{A}_{2\mathrm{P}}(\omega)|^2$,
cf. Eq.~\eqref{eq:final_proba2p}. For simplicity, we define
$\tilde{\mathcal{A}}_{2\mathrm{P}}(\omega)$
such that $\mathcal{A}_{2\mathrm{P}}(\omega) =
\frac{4\pi}{3} \tilde{\mathcal{A}}_{2\mathrm{P}}(\omega)$.
We first check the ``trivial'' case of an isotropic two-photon
absorption tensor, i.e., a two-photon tensor that is diagonal in the
Cartesian basis with equal elements.
In this case, $\tilde{\mathcal{A}}_{2\mathrm{P}}(\omega)$ becomes
\begin{eqnarray*}
\tilde{\mathcal{A}}_{2\mathrm{P}}(\omega) &=&
+\mathcal{D}^{(1)}_{0,\varrho_1}(\omega)\,\mathcal{D}^{(1)}_{0,\varrho_1}(\omega)\,
T_{zz}\\\nonumber
&&
-\frac{1}{2}\mathcal{D}^{(1)}_{-1,\varrho_1}(\omega)\,\mathcal{D}^{(1)}_{+1,\varrho_1}(\omega)\,
\left(T_{xx}+T_{yy}\right)\\\nonumber
&&
-\frac{1}{2}\mathcal{D}^{(1)}_{+1,\varrho_1}(\omega)\,\mathcal{D}^{(1)}_{-1,\varrho_1}(\omega)\,
\left(T_{xx}+T_{yy}\right)\,,
\end{eqnarray*}
where we have employed the transformation between
spherical and Cartesian basis,
cf. Eq.~\eqref{eq:cartesian_spherical}. Taking the elements to be
equal, $T_{xx}=T_{yy}=T_{zz}=1$ without loss
of generality, $\tilde{\mathcal{A}}_{2\mathrm{P}}(\omega)$ can be written as
\begin{eqnarray}
\tilde{\mathcal{A}}_{2\mathrm{P}}(\omega) &=&
\mathcal{D}^{(1)}_{0,\varrho_1}(\omega)\,
\mathcal{D}^{(1)}_{0,\varrho_1}(\omega)\,
-
2\mathcal{D}^{(1)}_{-1,\varrho_1}(\omega)\,
\mathcal{D}^{(1)}_{+1,\varrho_1}(\omega)\nonumber\\
&=&
\sum_{\mu=0,\pm 1}\,(-1)^{\mu}
\mathcal{D}^{(1)}_{\mu,\varrho_1}(\omega)\,
\mathcal{D}^{(1)}_{-\mu,\varrho_1}(\omega)\nonumber\\
&=&
\sum_{\mu=0,\pm 1}\,(-1)^{-\varrho_1}
\mathcal{D}^{(1)}_{\mu,\varrho_1}(\omega)\,
\mathcal{D}^{{*}(1)}_{\mu,-\varrho_1}(\omega)\nonumber\\
&=&(-1)^{-\varrho_1}\,
\delta_{\varrho_1,-\varrho_1}\,,
\label{eq:first_law}
\end{eqnarray}
where we have used Eq.~\eqref{eq:wigner_unitary}.
That is, for an isotropic two-photon tensor,
it is not possible to reach an anisotropic distribution by
absorption of two identical photons.
The PAD for the $(2+1)$ REMPI process then reduces to the well-known one
for one-photon ionization of randomly oriented molecules, i.e., only $P_0$
and $P_2$ contribute if $\varrho_2=0$, and $P_0$, $P_1$ and $P_2$
are non-zero for $\varrho_2=\pm 1$.
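Equation~\eqref{eq:first_law} can also be checked numerically for arbitrary
Euler angles. A minimal sketch (added for illustration; it uses the Wigner $D$
functions as implemented in \texttt{sympy}, and the result is independent of
the chosen angles):
\begin{verbatim}
from sympy import N
from sympy.physics.quantum.spin import Rotation

alpha, beta, gamma = 0.3, 1.1, 2.0  # arbitrary Euler angles

def amplitude_isotropic(rho1):
    """sum_mu (-1)^mu D1_{mu,rho1} D1_{-mu,rho1}, cf. Eq. (first_law)."""
    return sum(
        (-1) ** mu
        * Rotation.D(1, mu, rho1, alpha, beta, gamma).doit()
        * Rotation.D(1, -mu, rho1, alpha, beta, gamma).doit()
        for mu in (-1, 0, 1)
    )

for rho1 in (0, 1, -1):
    print(rho1, N(amplitude_isotropic(rho1), 8))
# expected: 1 for rho1 = 0 and 0 for rho1 = +/-1
\end{verbatim}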
\begin{table*}[tb]
\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}}
\centering
\ra{1.0}
\caption{\label{table:all_in_one} Contribution of Legendre
coefficients to the PAD as a function of the partial wave cut-off
in Eq.~\eqref{eq:exited_state2} and
the polarizations $\epsilon_{\varrho_1}'$ and $\epsilon_{\varrho_2}'$
of two-photon absorption and photoionization, respectively, for an
isotropic and anisotropic
two-photon absorption tensor $\mathrm{T}$
within the strict electric
dipole approximation. }
\begin{tabular}{cccccccccccccccccccccccccc}
\toprule[1.0pt]
\addlinespace[0.1cm]
& &\multicolumn{4}{c}{$\epsilon_{0}'/\epsilon_{\pm 1}'$} & \phantom{abcd}& \multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{0}'$} & \phantom{abcd} & \multicolumn{4}{c}{$\epsilon_{0}'/\epsilon_{0}'$} & \phantom{abcd} & \multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{\pm 1}'$}&
\phantom{abcd}&\multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{\mp 1}'$}\\
\addlinespace[0.2cm]
isotropic& &$s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ & $d$ & $f$ &\phantom{abc}& $s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ &$d$ &$f$\\
\hline
\addlinespace[0.1cm]
$c_0$&& $\bullet$ &$\bullet$& $\bullet$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_1$&& $- $ &$- $& $\bullet$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_2$&& $\bullet$ &$\bullet$& $\bullet$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_3$&& $-$ &$-$ & $- $ & $- $ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_4$&& $-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_5$&& $-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
$c_6$&& $-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ \\
\addlinespace[0.05cm]
\toprule[1.0pt]
\addlinespace[0.05cm]
& &\multicolumn{4}{c}{$\epsilon_{0}'/\epsilon_{\pm 1}'$} & \phantom{abcd}& \multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{0}'$} & \phantom{abcd} & \multicolumn{4}{c}{$\epsilon_{0}'/\epsilon_{0}'$} & \phantom{abcd} & \multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{\pm 1}'$}&
\phantom{abcd}&\multicolumn{4}{c}{$\epsilon_{\pm 1}'/\epsilon_{\mp 1}'$}\\
anisotropic& &$s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ & $d$ & $f$ &\phantom{abc}& $s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ & $d$ &$f$ &\phantom{abc}& $s$ & $p$ &$d$ &$f$\\
\hline
$c_0$&&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$& $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$\\
$c_1$&&$- $ &$- $ & $\bullet$ & $\bullet$ &&$-$ &$- $ & $-$ & $-$ &&$-$ &$- $ & $-$ & $-$ &&$-$ &$- $ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $\bullet$ & $\bullet$\\
$c_2$&&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet $ &$\bullet$ & $\bullet$& $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$ &&$\bullet$ &$\bullet$ & $\bullet$ & $\bullet$\\
$c_3$&&$- $ &$- $ & $\bullet$ & $\bullet$ &&$-$ &$- $ & $-$ & $-$ &&$-$ &$- $ & $-$ & $-$ &&$-$ &$- $ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $\bullet$ & $\bullet$\\
$c_4$&&$- $ &$\bullet$ & $\bullet$ & $\bullet$ &&$-$ &$\bullet$ & $\bullet$& $\bullet$ &&$-$ &$\bullet$ & $\bullet$ & $\bullet$ &&$-$ &$\bullet$ & $\bullet$ & $\bullet$ &&$-$ &$\bullet$ & $\bullet$ & $\bullet$\\
$c_5$&&$-$ &$-$ & $-$ & $\bullet$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $-$ &&$-$ &$-$ & $-$ & $\bullet$ &&$-$ &$-$ & $-$ & $\bullet$\\
$c_6$&&$-$ &$-$ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $\bullet$& $\bullet$ &&$-$ &$-$ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $\bullet$ & $\bullet$ &&$-$ &$-$ & $\bullet$ & $\bullet$\\
\hline
\bottomrule[0.8pt]
\addlinespace[0.1cm]
\multicolumn{16}{l}{$\bullet$\, contributing to the PAD}\\
\multicolumn{16}{l}{$-$\, not contributing to the PAD}\\
\end{tabular}
\end{table*}
In what follows, we discuss a general two-photon absorption tensor,
decomposing it as
\begin{eqnarray}
\label{eq:tensor_gral}
\mathrm{T} &=& \alpha_o\openone_{3\times 3}+
\begin{pmatrix} \beta_{xx} & 0 & 0 \\
0 & \beta_{yy} & 0 \\
0 & 0 & \beta_{zz} \end{pmatrix}
+
\begin{pmatrix} 0 & \mathrm{T}_{xy} & \mathrm{T}_{xz} \\
\mathrm{T}_{xy} & 0 & \mathrm{T}_{yz} \\
\mathrm{T}_{xz} & \mathrm{T}_{yz} & 0 \end{pmatrix}\nonumber\\
&\equiv& \mathrm{T}_{\mathrm{Id}} +
\mathrm{T}_{\mathrm{d}} +\mathrm{T}_{\mathrm{nd}}\,,
\end{eqnarray}
where we have split the diagonal elements into
$\mathrm{T}_{\mathrm{Id}}$
and $\mathrm{T}_{\mathrm{d}}$ in order to differentiate between isotropic and
anisotropic two-photon tensors.
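A minimal sketch of this decomposition (added for illustration; note that the
split of the diagonal into $\mathrm{T}_{\mathrm{Id}}$ and
$\mathrm{T}_{\mathrm{d}}$ is not unique, and the natural choice
$\alpha_o=\mathrm{Tr}\,\mathrm{T}/3$, which makes $\mathrm{T}_{\mathrm{d}}$
traceless, is adopted below):
\begin{verbatim}
import numpy as np

def decompose_two_photon_tensor(T):
    """Split a Cartesian two-photon tensor as in Eq. (tensor_gral):
    T = T_Id + T_d + T_nd, with alpha_o = Tr(T)/3."""
    alpha_o = np.trace(T) / 3.0
    T_Id = alpha_o * np.eye(3)
    T_d = np.diag(np.diag(T)) - T_Id
    T_nd = T - np.diag(np.diag(T))
    return T_Id, T_d, T_nd

def is_isotropic(T, tol=1e-12):
    """True iff both anisotropic parts vanish."""
    _, T_d, T_nd = decompose_two_photon_tensor(T)
    return np.linalg.norm(T_d) < tol and np.linalg.norm(T_nd) < tol

print(is_isotropic(np.eye(3)))                 # True
print(is_isotropic(np.diag([1.0, 1.0, 2.0])))  # False: anisotropic
\end{verbatim}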
The contributions of odd and even Legendre polynomials to the PAD
as a function of $L_{o,\rm max}$, the number of partial waves in the
electronically excited state, the polarizations
$\epsilon^\prime_{\varrho_1}$ and $\epsilon^\prime_{\varrho_2}$,
and the two-photon absorption tensor are
summarized in Table~\ref{table:all_in_one}.
If the complete $(2+1)$ REMPI process is driven by linearly polarized
light and only $\alpha_o\neq 0$, then $P_0$ and $P_2$ contribute to the
PAD as just discussed. If the two-photon absorption tensor is
anisotropic, even Legendre polynomials of higher order can appear.
For a molecule characterized by such a two-photon absorption tensor,
odd Legendre polynomials can contribute to the PAD if the
polarization of the ionization step is circular
($\epsilon^\prime_{\varrho_2}=\epsilon^\prime_{\pm 1}$).
Analogously,
both even and odd Legendre polynomials can appear
if $\epsilon^\prime_{\varrho_1}=
\epsilon^\prime_{\varrho_2}=\epsilon^\prime_{\pm 1}$.
Note that anisotropy of the two-photon tensor is sufficient, i.e., it
does not matter whether the anisotropy is due to diagonal or
non-diagonal elements of the Cartesian tensor.
The latter case is the one discussed in
Ref.~\cite{LehmannJCP13}, where a ``nearly'' diagonal two-photon
absorption tensor was used. In other words, an anisotropic tensor
with non-zero off-diagonal elements in the Cartesian basis also yields
the pattern in the lower part of Table~\ref{table:all_in_one}.
As indicated, the point group symmetry of the molecule determines which tensor
components of $T_{q_1,q_2}$ must be zero. This tensor pattern
is a property of the states involved in the transition and is determined by the
symmetry of the initial and final states. For instance, in molecular systems
with point groups T and O, the two-photon absorption tensor
becomes more selective. The 2+1 process between two states
that transform like the totally symmetric representation of these point groups
will only take place with linearly polarized laser light. In this case the
isotropic part $\mathrm{T}_{\mathrm{Id}}$ of Eq.~(\ref{eq:tensor_gral}) can remain nonzero.
If the 2+1 process involves initial and final states that transform like non-totally symmetric
representations of the point group, the tensor pattern changes and thus the tensor
might have isotropic or anisotropic parts. This determines whether the 2+1 process is
allowed or not. We refer the reader to Refs.~\cite{McCLAIN19771,Marco1983}
for a more detailed discussion of this issue.
\section{Ab initio calculations}
\label{sec:abinitio}
The theoretical framework to model PECD presented above involves a
number of molecular parameters. These can either be obtained by
fitting the theoretical PAD to the experimental results or from
\textit{ab initio} calculations.
Below we provide \textit{ab initio} results for the two-photon
absorption tensor for non-resonant transitions from the electronic
ground state to the lowest-lying electronically excited states of
fenchone and camphor. To assess the quality of these calculations, we
employ different basis sets and different levels of treating
electronic correlation.
\subsection{Computational details}
\label{subsec:compdetails}
The linear response coupled cluster method with single and double
(CC-SD) cluster amplitudes is used to calculate the intermediate
electronically excited state and the two-photon absorption tensor in
the electric dipole approximation. Moreover, time-dependent density
functional theory (TD-DFT) calculations with the {\sc b3lyp}
exchange-correlation functional are performed. The molecular structure
was energy minimized in all cases by performing DFT calculations with
the {\sc b3lyp} exchange-correlation functional and the def2-TZVP basis
set on all atoms, using the {\sc turbomole} program package \cite{turbomole6}.
In Fig.~\ref{fig:optimiz}, the energy-minimized molecular structures of fenchone
and camphor are shown, where the black vectors represent the Cartesian
coordinate system located at the center of mass of the molecular systems.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{structotal.pdf}
\caption{The oriented structures of fenchone (left) and camphor
(right). The black vectors represent the Cartesian coordinate system
located at the center of mass of the molecular systems. The blue and
red vectors refer to the eigenvectors of the right and left two-photon
tensors corresponding to the third excited state (for more information
see Appendix~\ref{appen_twophoton}).}
\label{fig:optimiz}
\end{figure}
These structures and orientations correspond to the ones used
subsequently for the calculation of the two-photon absorption tensors.
Cartesian coordinates of the oriented structures are reported in the
Supplemental Material~\cite{SuppMat}.
Calculations for the two-photon transition strength
tensor were performed using the {\sc dalton} program package
\cite{dalton:2005}.
Details of the implementation of the two-photon absorption
tensors within the linear response coupled cluster (CC) scheme
are found in Refs.~\cite{paterson:2006,hattig:1998}. The orbital
unrelaxed methodology was employed in the linear response calculations of
the two-photon absorption tensors on the coupled cluster level. Electrons
occupying the $11$ energetically lowest-lying molecular orbitals that
are dominated by $1$s orbitals of the various carbon atoms or the
oxygen were excluded from
the correlation treatment on the coupled cluster levels (so-called frozen
core approximation). The evaluation of the two-photon absorption tensor
was performed at the CC-SD/Rydberg-TZ level of theory. It is worth noting that
the two-photon transition strength tensor $T_{i,j}$,
($i$,$j$=$x,y,z$) is calculated in the coupled cluster framework as
a symmetric product of two-photon transition moments from initial to final state
and from final to initial state (the left and right two-photon transition moments).
As explained in more detail in Appendix~\ref{appen_twophoton},
in coupled cluster theory, the symmetrized biorthogonal structure
inhibits identification of the left and right two-photon absorption
tensors. Thus, using the results of coupled cluster theory directly
in the calculation of PAD might be problematic, because the model
constructed in Sec.~\ref{sec:model} depends on only one two-photon
absorption tensor. We present a solution to this problem in
Appendix~\ref{appen_twophoton}.
In Fig.~\ref{fig:optimiz}, the eigenvectors of the left and right two-photon
absorption tensors for the third excited state of fenchone and camphor
are shown (blue and red vectors).
To benchmark the quality of the electronic structure calculations,
electronic excitation energies for transitions to the energetically lowest
lying singlet states are computed on the CCSD and approximate second order
coupled cluster (CC2) level for the $n$-aug-cc-pV$N$Z hierarchy of
basis sets
(see below). The {\sc turbomole} program package \cite{turbomole6} was used
for calculations on the CC2 level within the resolution of the identity
(RI) approximation. Selected results were compared to conventional
CC2 calculations with the {\sc molpro} program package \cite{molpro},
confirming that the RI approximation has little impact on the computed
excitation energies (typically less than 10~meV). CCSD calculations for
excitation energies were performed with {\sc molpro}. Again, electrons
occupying
the $11$ energetically lowest-lying molecular orbitals were kept frozen in
all coupled cluster calculations.
The following basis sets were employed:
\begin{itemize}\setlength{\itemsep}{0pt}
\item Turbomole-TZVP with H:[3s,1p], C:[5s,3p,1d], O:[5s,3p,1d].
\item Rydberg-TZ with H: [2s], C: [5s,3p,1d], O:[5s,4p,3d,2f],
`q':[1s,1p,1d], where
`q' is a ``dummy'' center, positioned at the center of mass of the
molecule. Primitive diffuse $s$, $p$, $d$ Gaussian basis
functions with exponent coefficients equal to 0.015~$a_0^{-2}$ were placed on this center.
With this basis we can expect quite a reliable description of the
higher excited states (which, according to
Ref.~\cite{pulm:1997}, are diffuse Rydberg states) but most likely
not of the lowest lying excited state.
\item The ($n$-aug-)cc-pV$N$Z hierarchy of basis sets which are
correlation consistent polarized valence $N$-tuple zeta basis sets, with
$N$ = D, T, Q, referring to double-$\zeta$, triple-$\zeta$ and
quadruple-$\zeta$, respectively. On the oxygen nucleus, these basis
sets have been also augmented by further diffuse functions with $n$ =
s, d, t, q implying single, double, triple and quadruple
augmentation, respectively. We used the procedure described in
Ref.~\cite{DEW1994} for producing the aforementioned
augmented basis sets; a minimal sketch of this augmentation scheme is
given after this list.
\end{itemize}
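A minimal sketch of the even-tempered continuation of the diffuse tail used
for this augmentation (added for illustration; the exact recipe is that of
Ref.~\cite{DEW1994}, and the exponents below are hypothetical):
\begin{verbatim}
def augment_diffuse(exponents, n_extra):
    """Append n_extra diffuse exponents to one angular momentum
    channel: each new exponent equals the smallest one times the
    ratio of the two smallest, continuing the tail even-temperedly."""
    exps = sorted(exponents, reverse=True)
    ratio = exps[-1] / exps[-2]  # < 1 for a decaying tail
    for _ in range(n_extra):
        exps.append(exps[-1] * ratio)
    return exps

# Hypothetical s-channel tail, doubly augmented ('d-aug'):
print(augment_diffuse([10.0, 1.0, 0.1], 2))
# -> [10.0, 1.0, 0.1, 0.01, 0.001]
\end{verbatim}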
The single center reexpansion is performed in two steps.
First, the orbitals of the hydrogen atom are calculated
with a large uncontracted basis set [$13$s$11$p$9$d$8$f].
For manually adjusting the phases of the atomic orbitals, we have computed
numerically the radial part of the hydrogenic wavefunction
using the following procedure. The atomic wavefunction
\begin{equation}
\vert \psi_i\rangle=\sum_j \vert \chi_j \rangle C_{ji}
\label{wfabini}
\end{equation}
is considered, where $\vert \chi_j \rangle$ is a Gaussian basis function and reads
\begin{align}
\vert \chi_j \rangle=\frac{1}{\sqrt{2^{-3/2-l_j}\alpha_j^{-1/2-l_j}\Gamma[\frac{1}{2}+l_j]}} e^{-\alpha_j r^2} r^{l_j},
\end{align}
where $\Gamma$ refers to the gamma function.
The $C_{ji}$, the atomic orbital coefficients, are calculated
using the quantum chemical software {\sc turbomole}. The angular part
can be chosen as the so-called real-valued
spherical harmonics, and the integral over the angular part is
\begin{align}
\langle Y_{l_jm_j} (\theta,\phi) \vert Y_{l_km_k} (\theta,\phi) \rangle = \delta_{l_jl_k}\delta_{m_jm_k}.
\end{align}
In this way, one can calculate the radial part of Eq.~(\ref{wfabini}) and
compare it with Eq.~(\ref{radial:excitedstates}) (which was used originally
for the reexpansion of the electronically excited state of the neutral
molecules under investigation, see Eq.~(\ref{eq:exited_state})) and
thus adjust the phases of the atomic orbitals.
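A minimal numerical sketch of this phase check (added for illustration; the
radial grid, the coefficients and the reference radial function are
placeholders to be supplied by the quantum chemical and single-center
calculations, respectively):
\begin{verbatim}
import numpy as np
from math import gamma

def radial_from_gaussians(r, alphas, coeffs, l):
    """Radial part of the contraction in Eq. (wfabini) for one
    l-channel, with the primitive normalization of the chi_j above."""
    R = np.zeros_like(r)
    for a, c in zip(alphas, coeffs):
        norm = 1.0 / np.sqrt(2.0 ** (-1.5 - l) * a ** (-0.5 - l)
                             * gamma(0.5 + l))
        R += c * norm * r ** l * np.exp(-a * r ** 2)
    return R

def fix_phase(R_gauss, R_ref, r):
    """Flip the global sign if the overlap with the reference radial
    function of Eq. (radial:excitedstates) is negative."""
    overlap = np.trapz(R_gauss * R_ref * r ** 2, r)
    return R_gauss if overlap >= 0.0 else -R_gauss
\end{verbatim}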
In the second step, the relevant molecular orbitals were
calculated by projecting them onto hydrogen-like atomic orbitals placed
at the center of mass of camphor and fenchone, respectively, which is called
the blowup procedure in the {\sc turbomole} context~\cite{turbomole6}. This
calculation was carried out at the Hartree-Fock (HF)/TZVP level of theory.
\subsection{Results and discussion}
\label{subsec:abinitioresults}
\begin{table}[tb]
\caption{\label{tab:exci} Experimental
and calculated excitation energies
(in eV) for fenchone (top) and camphor (bottom) obtained by
TD-DFT and CC-SD/Rydberg-TZ used for subsequent calculation of
the two-photon transition tensor.}
\begin{tabular}{|c|c|c|c|}
\hline
state &
experiment~\cite{pulm:1997} &DFT-{\sc b3lyp} & CC-SD\\
\hline\hline
A / $\mathrm{n}\rightarrow \pi^*$ & 4.25 & 4.24 & 4.44 \\
B / $\mathrm{n}\rightarrow 3\mathrm{s}$ & 6.10 & 5.41 & 6.19 \\
$\text{C}_1$/$\mathrm{n}\rightarrow 3\mathrm{p}$& 6.58 & 5.75 & 6.53 \\
$\text{C}_2$ & & 5.82 & 6.60 \\
$\text{C}_3$ & & 5.86 & 6.62 \\
$\text{D}_1$/$\mathrm{n}\rightarrow 3\mathrm{d}$& 7.14 & & 7.04 \\
$\text{D}_2$ & & & 7.09 \\
$\text{D}_3$ & & & 7.10 \\
$\text{D}_4$ & & & 7.12 \\
$\text{D}_5$ & & & 7.14 \\
\hline\hline
A / $\mathrm{n}\rightarrow \pi^*$ & 4.21 & 4.15 & 4.37 \\
B / $\mathrm{n}\rightarrow 3\mathrm{s}$ & 6.26 & 5.53 & 6.33 \\
$\text{C}_1$/$\mathrm{n}\rightarrow 3\mathrm{p}$& 6.72 & 5.87 & 6.73 \\
$\text{C}_2$ & & 5.90 & 6.75 \\
$\text{C}_3$ & & 5.98 & 6.78 \\
$\text{D}_1$/$\mathrm{n}\rightarrow 3\mathrm{d}$& 7.28 & & 7.21 \\
$\text{D}_2$ & & & 7.27 \\
$\text{D}_3$ & & & 7.29 \\
$\text{D}_4$ & & & 7.31 \\
$\text{D}_5$ & & & 7.33 \\
\hline
\end{tabular}
\end{table}
The excitation energies of the lowest lying excited states for
fenchone and camphor are presented in Tables~\ref{tab:exci}-\ref{tab:camphortz}. The
labeling of the states follows the one for the absorption spectra of
Ref.~\cite{pulm:1997}. The states B, C and D are comparatively close in
energy. In principle, the order in which the states are obtained in
the calculations is unknown and the states may be interchanged due to
an insufficient level of the correlation treatment or a too small
basis set. Nevertheless, we suppose that if the difference between
the theoretical excitation energies and the experimental ones is smaller than the
energy difference between the two states, then the order of the states
is correctly reproduced. Table~\ref{tab:exci} shows that
quite accurate excitation energies for the states B, C and D
are obtained in the CC-SD calculations with the Rydberg-TZ basis set
for both camphor and fenchone. The A state
is less accurately described with this basis set, while the TD-DFT result
for state A is very close to the corresponding experimental value.
For Rydberg states, it is well documented that the TD-DFT method
has severe limitations~\cite{Cme1998}, and we observe, indeed,
relatively large deviations between the computed excitation energies into Rydberg
states and the corresponding experimental excitation energies, as shown in Table~\ref{tab:exci}.
We thus did not perform the calculation of excitation energies into even higher Rydberg states
for the present molecular systems.
Tables~\ref{tab:fenchonedz} and \ref{tab:fenchonetz} report
more detailed information on the electronic structure of fenchone,
obtained by employing both the CC$2$ and CCSD methods with
systematically improved basis sets. Enlarging the set of
augmenting diffuse functions on the \ce{O} atom improves
the excitation energies of the molecule under investigation.
The energy of state A changes only mildly with an increasing number of
diffuse functions and increasing multiple zeta quality.
Excitation energies for the state A evaluated at the CC$2$/d-aug-cc-pVQZ and
CC-SD/t-aug-cc-pVDZ levels of theory are in good agreement
with the experimental one reported in Ref.~\cite{pulm:1997}. For
state B, a similar dependence on enlarging the augmented basis sets on
the \ce{O} atom and increasing the multiple zeta quality can be
observed. Furthermore, we report a quite
clear description of all members of the $n \rightarrow 3p$ Rydberg
transitions, corresponding to the C band of the experimental spectrum
reported in Ref.~\cite{pulm:1997}, whose individual components are
not resolved experimentally. The theoretical spacing among
all components of the C band approaches the experimental one when
increasing the augmented basis sets on the \ce{O} atom and the multiple
zeta quality. Strictly speaking, the theoretical spacing among
all components of the C band is less than $0.1$~eV, which is in general
in line with the experimental findings.
The D state is composed of the $n \rightarrow 3d$ Rydberg transition.
Here, we again report all individual components, which were not resolved
experimentally. The theoretical spacing among all components of the D band,
which is less than $0.1$ eV on average, approaches the experimental finding
when increasing the augmented basis sets on the \ce{O} atom and the
multiple zeta quality. For the state A, the CC$2$ and CC-SD produce
the results close to each other, whereas for Rydberg states, deviation between
the results obtained by employing the CC$2$ and CC-SD methods is getting larger
as was seen previously for different molecular systems~\cite{heid2009}.
Based on results of excitation energies evaluated
at CC$2$/t-aug-cc-pVDZ, d-aug-cc-pVTZ, t-aug-cc-pVTZ and d-aug-cc-pVQZ as well
as CC-SD/t-aug-cc-pVDZ, we estimate the excitation energies for fenchone
at CC-SD/t-aug-cc-pVQZ as described in the following. We add $\Delta E_1$
(which is the energy difference calculated using the CC$2$ method
for basis sets d-aug-cc-pVQZ and d-aug-cc-pVTZ) as well as $\Delta E_2$
(which is the energy difference evaluated using CC$2$ for basis sets t-aug-cc-pVTZ
and t-aug-cc-pVDZ) to the excitation energies calculated at the CC-SD/t-aug-ccpVDZ
level of theory. This procedure allows to estimate only few excitation energies
of the fenchone molecule at the CCSD/t-aug-cc-pVQZ level of theory. This way of estimation
does not work for all Rydberg states because the CC$2$ method is not accurate enough
for calculating excitation energies of these states. We should mention that the direct
calculation at the CCSD/t-aug-cc-pVQZ
level of theory was beyond our computational facilities. The corresponding results are
shown in Table.~\ref{tab:estimate}. In order to justify this way of estimation, we employed
it for acetone, for which it is possible to calculate the excitation
energies at the CC-SD/t-aug-cc-pVQZ level of theory. This allows us to compare the excitation
energies at the CC-SD/t-aug-cc-pVQZ level of theory with the estimated ones.
The corresponding results were presented in Tables.~S$7$ and $8$ of the supporting
information. It can be seen that the estimate values are very close to the corresponding
ones calculated at the CC-SD/t-aug-cc-pVQZ level of theory.
As an important remark, the excitation energies produced in Table~\ref{tab:exci} using the CC-SD/Rydberg-TZ
level of theory are closer to the experimental values than those generated using the CC-SD/t-aug-cc-pVDZ level of theory or the estimated
values at the CC-SD/t-aug-cc-pVQZ level of theory (see Tables\ref{tab:fenchonedz} and \ref{tab:estimate}).
\begin{table*}[tb]
\caption{\label{tab:fenchonedz}
Lowest vertical electronic singlet excitation energies (in eV) for
fenchone as computed with the CC$2$ and CCSD method. The column heading
indicates the basis set, but augmented
basis functions were only used on \ce{O} and deleted from \ce{H} and
\ce{C}. Thus, for \ce{H} and \ce{C} the cc-pVDZ basis
set was used throughout.}
\begin{ruledtabular}
\begin{tabular}{lccccccccccc}
&Exp.~\cite{pulm:1997}&&\multicolumn{2}{c}{cc-pVDZ} &\multicolumn{2}{c}{aug-cc-pVDZ} &\multicolumn{2}{c}{d-aug-cc-pVDZ} &\multicolumn{2}{c}{t-aug-cc-pVDZ}\\
\cline{4-5} \cline{6-7} \cline{8-9} \cline{10-11}
State&&transition & CC2& CCSD &CC2&CCSD &CC2&CCSD &CC2&CCSD \\
\hline
A&4.25&$\mathrm{n}\rightarrow \pi^*$ &4.38 &4.35 & 4.36 &4.35 &4.35&4.35 &4.34 &4.34\\
B&6.10&$\mathrm{n}\rightarrow 3\mathrm{s}$ &7.32 &7.94 & 7.23 &7.77 &5.80&6.39 &5.56 &6.15\\
$\text{C}_1$ &6.58&$\mathrm{n}\rightarrow 3\mathrm{p}$ &7.92 &8.27 & 7.72 &8.07 &6.18&6.85 &5.99 &6.71\\
$\text{C}_2$ & & &8.07 &8.52 & 7.93 &8.31 &6.28&6.97 &6.01 &6.74\\
$\text{C}_3$ & & &8.11 &8.76 & 7.99 &8.66 &6.38&7.10 &6.03 &6.79\\
D&7.14& $\mathrm{n}\rightarrow 3\mathrm{d}$ &8.22 &8.83 & 8.20 &8.78 &7.71&8.00 &6.65 &7.39\\
& & &8.57 &8.95 & 8.28 &8.79 &7.92&8.31 &6.76 &7.57\\
& & &8.63 &9.02 & 8.36 &8.81 &8.15&8.59 &6.84 &7.63\\
& & &8.72 &9.25 & 8.53 &8.87 &8.25&8.74 &6.89 &7.68\\
& & &8.74 &9.31 & 8.59 &9.10 &8.29&8.76 &7.26 &7.95\\
&8.27& &9.02 &9.35 & 8.85 &9.20 &8.33&8.79 &7.36 &8.04\\
& & &9.19 &9.52 & 9.03 &9.35 &8.50&8.96 &7.46 &8.06 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[tb]
\caption{\label{tab:fenchonetz}
Lowest vertical electronic singlet excitation energies (in eV) for
fenchone as computed with the CC$2$ method. The column heading indicates the
basis set, but augmented basis functions were
only used on \ce{O} and deleted from \ce{H} and \ce{C}.}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
State&Exp.~\cite{pulm:1997} &cc-pVTZ & aug-cc-pVTZ & d-aug-cc-pVTZ& t-aug-cc-pVTZ&d-aug-cc-pVQZ\footnotemark[1]\\
\hline
A&4.25&4.32 & 4.29 & 4.29&4.27&4.28\\
B&6.10&6.83 & 6.15 & 6.01&5.68&5.96\\
$\text{C}_1$ &6.58&7.51 & 7.40 & 6.32&6.13&6.36\\
$\text{C}_2$ & &7.53 & 7.50 & 6.39&6.14&6.41\\
$\text{C}_3$ & &7.69 & 7.62 & 6.46&6.17&6.45\\
D&7.14&7.90 & 7.70 & 7.58&6.83&7.36\\
& &7.97 & 7.82 & 7.68&6.95&7.56\\
& &8.19 & 8.05 & 7.80&7.02&7.68\\
& &8.30 & 8.22 & 8.08&7.04&7.77\\
& &8.48 & 8.40 & 8.19&7.20&7.88\\
&8.27&8.63 & 8.47 & 8.20&7.28&8.04\\
& &8.77 & 8.66 & 8.22&7.32&8.06\\
\end{tabular}
\footnotetext[1]{In this calculation, the basis set cc-pVQZ on
\ce{C} and \ce{O} atoms is used.}
\end{ruledtabular}
\end{table*}
\begin{table}[tb]
\caption{\label{tab:estimate}
The estimated lowest vertical electronic singlet excitation energies (in eV) for
fenchone and camphor at CC-SD/t-aug-cc-pVQZ level of theory. }
\begin{ruledtabular}
\begin{tabular}{lcc}
State&fenchone&camphor\\
\hline
A &4.45& 4.17\\
B &6.22& 6.52\\
$\text{C}_1$ &6.89&7.00 \\
$\text{C}_2$ &6.90&7.02 \\
$\text{C}_3$ &6.92&7.06 \\
D &7.79&7.73 \\
& &7.81 \\
& &7.88 \\
\end{tabular}
\end{ruledtabular}
\end{table}
For camphor, the calculated excitation energies for state A, the lowest
excited state, are in reasonable agreement with experiment for all methods
and basis sets, cf. Tables~\ref{tab:exci}, \ref{tab:camphortz} and
\ref{tab:camphordz}.
Here, we again observe that enlarging the set of augment diffuse function
on the \ce{O} atom and the multiple zeta quality improves the results for
the excitation energies. Furthermore, increasing the augmented
basis sets on the \ce{O} atom and the multiple zeta quality leads to a
decrease (of less than $0.1$ eV) in the theoretical spacing among
all components of the C and D states, which again is in line with
the experimental finding~\cite{pulm:1997}. The estimated excitation
energies at CC-SD/t-aug-cc-pVQZ level of theory are calculated in
the same way as done for fenchone. These results are shown in
Table.~\ref{tab:estimate}. We should mention that the excitation energies
produced in Table~\ref{tab:exci} using the CC-SD/Rydberg-TZ
level of theory are better than those generated using the CC-SD/t-aug-cc-pVDZ level
of theory or the estimated values at the CC-SD/t-aug-cc-pVQZ level of theory
(see Tables\ref{tab:camphordz} and \ref{tab:estimate}).
\begin{table*}[tb]
\caption{\label{tab:camphordz}
Lowest vertical electronic singlet excitation energies (in eV) for
camphor as computed with the CC$2$ and CCSD method.
The column heading indicates the basis set, but augmented
basis functions were only used on \ce{O} and deleted from \ce{H} and
\ce{C}. Thus, for \ce{H} and \ce{C} the cc-pVDZ basis
set was use throughout.}
\begin{ruledtabular}
\begin{tabular}{lccccccccccc}
&Exp.~\cite{pulm:1997}&&\multicolumn{2}{c}{cc-pVDZ} &\multicolumn{2}{c}{aug-cc-pVDZ} &\multicolumn{2}{c}{d-aug-cc-pVDZ} &\multicolumn{2}{c}{t-aug-cc-pVDZ}\\
\cline{4-5} \cline{6-7} \cline{8-9} \cline{10-11}
State&& transition & CC2& CCSD &CC2&CCSD &CC2&CCSD &CC2&CCSD \\
\hline
A&4.21&$\mathrm{n}\rightarrow \pi^*$ &4.27 &4.25 & 4.23 &4.25 &4.22&4.24 &4.22 &4.24 \\
B&6.26&$\mathrm{n}\rightarrow 3\mathrm{s}$ &7.40 &8.05 & 7.32 &7.87 &5.83&6.44 &5.64 &6.34 \\
$\text{C}_1$ &6.72&$\mathrm{n}\rightarrow 3\mathrm{p}$ &7.69 &8.10 & 7.46 &7.90 &6.25&6.93 &6.07 &6.81 \\
$\text{C}_2$ & & &8.04 &8.35 & 7.81 &8.11 &6.30&7.00 &6.09 &6.84 \\
$\text{C}_3$ & & &8.23 &8.84 & 8.11 &8.63 &6.60&7.32 &6.15 &6.93 \\
D&7.28&$\mathrm{n}\rightarrow 3\mathrm{d}$ &8.38 &8.90 & 8.19 &8.69 &7.43&7.85 &6.75 &7.56 \\
& & &8.47 &8.98 & 8.24 &8.78 &7.79&8.10 &6.84 &7.67 \\
& & &8.56 &9.03 & 8.33 &8.28 &7.91&8.46 &6.90 &7.73 \\
& & &8.62 &9.22 & 8.36 &8.90 &8.14&8.62 &7.05 &7.82 \\
& & &8.79 &9.27 & 8.62 &9.02 &8.25&8.71 &7.26 &7.85 \\
&7.94& &8.91 &9.36 & 8.77 &9.16 &8.28&8.84 &7.35 &7.95 \\
& & &9.04 &9.51 & 8.83 &9.45 &8.33&8.85 &7.39 &8.05 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[tb]
\caption{\label{tab:camphortz}
Lowest vertical electronic singlet excitation energies (in eV) for
camphor as computed with the CC$2$ method. The column heading indicates the
basis set, but augmented basis functions were
only used on \ce{O} and deleted from \ce{H} and \ce{C}.}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
State&Exp.~\cite{pulm:1997} &cc-pVTZ & aug-cc-pVTZ & d-aug-cc-pVTZ& t-aug-cc-pVTZ&d-aug-cc-pVQZ\footnotemark[1]\\
\hline
A&4.21&4.20 & 4.17 & 4.17& 4.15&4.17 \\
B&6.26&6.94 & 6.85 & 5.98& 5.78&6.02 \\
$\text{C}_1$&6.72&7.41 & 7.32 & 6.39& 6.22&6.43\\
$\text{C}_2$ & &7.66 & 7.57 & 6.43& 6.23&6.47\\
$\text{C}_3$ & &7.75 & 7.63 & 6.67& 6.30&6.65\\
D&7.28&7.85 & 7.65 & 7.31& 6.95&7.28\\
& &7.97 & 7.82 & 7.66& 7.02&7.62 \\
& &8.13 & 8.04 & 7.93& 7.08&7.63\\
& &8.19 & 8.09 & 7.98& 7.19&7.72\\
& &8.28 & 8.19 & 8.02& 7.25&7.94\\
&7.94&8.62 & 8.53 & 8.08& 7.27&7.96\\
& &8.66 & 8.63 & 8.17& 7.34&7.99\\
\end{tabular}
\footnotetext[1]{In this calculation, the basis set cc-pVQZ on
\ce{C} and \ce{O} atoms is used.}
\end{ruledtabular}
\end{table*}
In the following, we report the two-photon absorption tensor elements
for fenchone and camphor calculated with the TD-DFT and CC-SD
methods. The computational details for the coupled cluster
calculations are presented in Appendix~\ref{appen_twophoton}.
The elements of the two-photon absorption tensor for fenchone and camphor
in the Cartesian basis are generally independent because the molecules
have the $C_1$ point group symmetry~\cite{McCLAIN19771}.
However, as we consider absorption of two photons with same the
frequency, the two-photon tensor must be symmetric~\cite{McCLAIN19771}.
Table~\ref{tab:twophotonA} presents the results for fenchone.
The A state in terms of the
excitation energy is of no real concern for our present purposes
because the wavelength and spectral width of the
laser pulses employed in the $2+1$ REMPI process~\cite{LuxACIE12,LuxCPC15}
practically rule out that A is the relevant intermediate
state. As inferred from Table~\ref{tab:twophotonA}, changing the method accounting
for the electron correlations {\it i.e} TD-DFT and CC-SD, alters considerably
the skeleton of the two-photon transition matrix and in particular there are
changes in the signs of matrix elements when employing different electron
correlation methods.
As the excitation energies for the B and C states, calculated
with the CC-SD/Rydberg-TZ level of theory, are in good agreement with experimental ones,
cf. Table~\ref{tab:exci}, we expect the corresponding
two-photon absorption tensor elements to be more reliable for the
evaluation of PECD than those obtained with TD-DFT.
We therefore use the two-photon absorption tensor elements
calculated at the CC-SD/Rydberg-TZ level of theory for calculating PAD in
Sec.~\ref{sec:pad_results}.
\begin{table}
\caption{\label{tab:twophotonA} Two-photon transition matrix elements
(in units of $a^2_0~E_{\mathrm{h}}^{-1}$ with $a_0$ being the Bohr radius and
$E_{\mathrm{h}}$ being the Hartree energy)
at the {\sc b3lyp}/Rydberg-TZ level of theory (top) and symmetric effective
two-photon transition matrix elements at the CC-SD/Rydberg-TZ level of
theory (bottom) for fenchone. The specific orientation used is shown in Fig.~\ref{fig:optimiz}. }
\begin{ruledtabular}
\begin{tabular}{@{}l*{7}{D{.}{.}{0}}@{}}
\toprule
States&T^{xx}_{go}&T^{xy}_{go}&T^{xz}_{go}&T^{yy}_{go}&T_{go}^{xz}&T^{zz}_{go}\\
\hline
\hline
\midrule
$\text{A}$&+$0.50$&+$0.50$&$+0.50$&+$0.20$&-$0.30$&-$0.30$\\
$\text{B}$&+$1.60$&-$0.70$&-$2.60$&+$20.80$&+$8.20$&-$0.70$\\
$\text{C}_1$&-$40.60$&-$11.50$&-$6.30$&+$1.60$&+$1.40$&-$1.60$\\
$\text{C}_2$&+$3.20$&+$1.30$&+$2.40$&+$5.30$&-$1.20$&-$1.40$\\
$\text{C}_3$&-$8.60$&-$3.00$&-$5.00$&-$1.90$&+$8.70$&+$0.10$\\
\bottomrule
\hline \hline \addlinespace[0.5ex]
state & \tilde{T}^{xx}_{go} &\tilde{T}^{xy}_{go}& \tilde{T}^{xz}_{go} & \tilde{T}^{yy}_{go}&\tilde{T}^{yz}_{go}&\tilde{T}^{zz}_{go}\\
\hline
$\text{A}$ &-$0.11$&-$0.03$&+$0.08$&-$0.27$&+$0.20$&-$0.27$\\
$\text{B}$ &+$1.58$&+$17.10$&+$7.50$&-$1.67$&-$0.24$&-$2.48$\\
$\text{C}_1$&-$0.21$&-$7.57$&-$4.10$&+$1.13$&+$1.02$&+$0.96$\\
$\text{C}_2$&-$21.24$&+$5.45$&-$1.32$&-$6.00$&-$1.87$&-$2.02$\\
$\text{C}_3$&-$28.67$&-$1.54$&+$4.10$&-$7.88$&+$0.04$&-$6.69$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:twophotonCam} The same as
Table~\ref{tab:twophotonA} but for camphor.}
\begin{ruledtabular}
\begin{tabular}{@{}l*{7}{D{.}{.}{0}}@{}}
\toprule
States&T^{xx}_{go}&T^{xy}_{go}&T^{xz}_{go}&T^{yy}_{go}&T_{go}^{xz}&T^{zz}_{go}\\
\hline
\hline
\midrule
$\text{A}$ &-$0.30$&+$0.50$&-$1.90$&-$0.40$&-$1.00$&-$0.10$\\
$\text{B}$ &+$10.90$&-$5.40$&-$8.30$&-$8.30$&-$13.40$&-$4.10$\\
$\text{C}_1$&-$3.50$&-$4.80$&-$0.70$&-$1.90$&+$1.40$&-$3.40$\\
$\text{C}_2$&-$4.20$&+$1.00$&+$2.20$&-$0.30$&$0.00$&+$1.10$\\
$\text{C}_3$&-$23.70$&-$5.50$&-$3.10$&-$3.20$&-$2.20$&-$2.90$\\
\bottomrule
\hline \hline \addlinespace[0.5ex]
state & \tilde{T}^{xx}_{go} &\tilde{T}^{xy}_{go}& \tilde{T}^{xz}_{go} & \tilde{T}^{yy}_{go}&\tilde{T}^{yz}_{go}&\tilde{T}^{zz}_{go}\\
\hline
$\text{A}$ &-$0.35$&-$0.27$&-$0.48$&+$0.41$&-$0.03$&-$1.17$\\
$\text{B}$ &+$1.29$&+$9.36$&+$12.63$&+$6.58$&+$4.67$&+$7.55$\\
$\text{C}_1$&+$7.48$&+$0.41$&+$0.82$&-$3.46$&-$3.54$&-$5.11$\\
$\text{C}_2$&+$3.07$&+$0.28$&-$4.10$&+$4.10$&+$1.92$&-$5.88$\\
$\text{C}_3$&-$21.48$&+$0.98$&+$2.83$&-$1.95$&-$1.13$&-$0.81$\\
\end{tabular}
\end{ruledtabular}
\end{table}
Table~\ref{tab:twophotonCam} presents the two-photon absorption tensor
elements for camphor.
Changing the method accounting of electron correlations, TD-DFT or CC-SD,
alters considerably the skeleton of the two-photon transition
matrix.
For camphor, similar observation as mentioned for fenchone can
be mentioned here; the A state is very unlikely to be the intermediate
state probed in the $2+1$ REMPI process. As inferred from Table~\ref{tab:twophotonCam},
changing the method accounting for the electron correlations {\it i.e} TD-DFT
and CC-SD, alters considerably the skeleton of the two-photon transition matrix
and in particular there are changes in the signs of matrix elements when employing
different electron correlation methods.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.05\linewidth]{axes.pdf} \quad \includegraphics[width=0.15\linewidth]{43fen.png} \quad \includegraphics[width=0.15\linewidth]{44fen.png} \quad \includegraphics[width=0.15\linewidth]{45fen.png}
\quad \includegraphics[width=0.15\linewidth]{46fen.png} \quad \includegraphics[width=0.15\linewidth]{47fen.png}
\caption{ The molecular orbitals $43$, $44$, $45$, $46$ and $47$ of
fenchone corresponding to the excited
states A, B, $\text{C}_1$, $\text{C}_2$ and $\text{C}_3$, respectively. These molecular orbitals are calculated at the HF/TZVP
level of theory.}
\label{fig:orbitalfenchon}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.15\linewidth]{43cam.png} \quad \includegraphics[width=0.15\linewidth]{44cam.png} \quad \includegraphics[width=0.15\linewidth]{45cam.png}
\quad \includegraphics[width=0.15\linewidth]{46cam.png} \quad \includegraphics[width=0.15\linewidth]{47cam.png}
\caption{The molecular orbitals $43$, $44$, $45$, $46$ and $47$ of camphor corresponding to the excited
states A, B, $\text{C}_1$, $\text{C}_2$ and $\text{C}_3$, respectively. These molecular orbitals are calculated at the HF/TZVP
level of theory.}
\label{fig:orbitalcamphor}
\end{figure*}
\subsection{Single center reexpansion of molecular wavefunctions}
\label{subsec:reexpansion}
In order to match the \textit{ab initio} results with our model for
the 2+1 REMPI process, we perform a single center reexpansion of the
relevant molecular orbitals (see Figs.~\ref{fig:orbitalfenchon}
and \ref{fig:orbitalcamphor}) obtained from a HF calculation with
the TZVP basis set, projecting them onto hydrogenic atomic orbitals
placed at the center-of-mass of the molecule.
The hydrogenic orbitals are chosen in the form
$\varphi=\sum_i\tilde{a}_i R_i(r)\Upsilon_i(\theta,\phi)$, where $i$ denotes a
complete set of quantum numbers, $i\equiv(n_o,\ell_o,m_o)$. $R_i(r)$
are the radial functions the hydrogen and
$\Upsilon_i(\theta,\phi)$ the {\it real} spherical harmonics.
The transformation between the expansion coefficients $\tilde{a}_i$
and $a_i$, defined in Eq.~\eqref{eq:exited_state} with the standard
complex spherical harmonics, is given in
Appendix~\ref{subsec:real_spherical}.
The projection quality of the orbitals $42$ (highest occupied
molecular orbital (HOMO) for the electronic ground state)
and $43$ (one of the two singly occupied molecular orbitals (SOMOs)
for state A) for both camphor and fenchone
is rather low. It amounts to 28\% and 45\% for fenchone and to 24\%
and 51\% for camphor. This is expected for the HOMO and SOMO
which are localized orbitals. In contrast,
for the orbitals representative of the Rydberg states B and C in all cases the projection quality is
higher than 90 \% for the corresponding SOMO. For these states, the results
of the reexpansion are presented in the Supplemental Material~\cite{SuppMat}.
We find the B state to be of $s$-type, that is, the $s$-wave
contributes more than all other waves together; whereas the C states
are of $p$-type. This is in agreement with the results of
Refs.~\cite{pulm:1997,Diedrich:2003}, where these states were also
found to be of $s$- and $p$-type, respectively.
The $d$ wave contributions for SOMOs orbitals corresponding to the B and
$\text{C}_1$, $\text{C}_2$ and $\text{C}_3$ states in fenchone and camphor
are 2\% , 3\% , 5\% and 6\%, respectively.
\section{Photoelectron angular distributions}
\label{sec:pad_results}
The experimental measurements indicate a PECD effect of 10\% for
fenchone and 6.6\% for camphor~\cite{LuxCPC15}.
We first check the range of PECD that our model allows for.
To this end, we optimize, as a preliminary test, PECD, allowing all molecular parameters, i.e.,
two-photon absorption tensor elements and excited state expansion coefficients, to vary freely.
We expand up to $d$ and $f$ waves for a single
quantum number $n_o$, taken to be $n_o=3$ and $4$,
respectively.
The optimization target is to maximize (or minimize,
depending on the sign) PECD in order to
determine the upper bounds. Following the definitions in
Refs.~\cite{LuxCPC15,LeinPRA2014}, we define an optimization
functional,
\begin{eqnarray}
\label{eq:opt_func}
J = \dfrac{1}{c_0}\left(2c_1 -\dfrac{1}{2}c_3
+ \dfrac{1}{4}c_5\right)\,,
\end{eqnarray}
where the Legendre coefficients
are calculated according to Eq.~\eqref{eq:final_coeff}.
All optimizations are carried out
using the genetic algorithm for constrained multivariate problems as
implemented in Ref.~\cite{simulinkR2014a}, using $500$
iterations. We find numerical
bounds of about 35\% for both expansion cut-offs. The experimentally observed PECD effects are well within these bounds.
We now present calculations of the PAD for fenchone and camphor,
using two different strategies to evaluate
Eq.~\eqref{eq:PADfinal}. First,
we aim at identifying the minimal requirement in terms of structure
and symmetry properties of the intermediate electronically excited
state for reproducing, at least qualitatively, the experimental
data. To this end, we minimize the difference between theoretically
and experimentally obtained Legendre coefficients,
$\delta_j=|(c_j-c^{\mathrm{exp}}_{j})/c^{\mathrm{exp}}_{j}|$,
taking the excited state expansion coefficients, $a^{\ell_o}_{m_o}$, cf.
Eq.~\eqref{eq:exited_state2}, as optimization parameters, with
$n_o=3$ fixed. This
allows for $L_{o,\text{max}}=2$, i.e., $s$, $p$ and $d$ waves.
Second, we test the agreement between theoretically
and experimentally obtained Legendre coefficients when utilizing the
expansion coefficients
and two-photon tensor elements obtained by \textit{ab initio}
calculations, cf. Section~\ref{sec:abinitio}.
Here, our aim is to explain the differences observed experimentally
in the PADs for fenchone and camphor
in terms of the intermediate electronically excited state.
In the first approach, treating the excited state coefficients as optimization
parameters, the optimization can be performed for the odd
Legendre moments only, focussing on reproducing PECD, or for
both odd and even Legendre
moments, in order to reproduce the complete PAD.
The different experimental
uncertainties for odd and even Legendre coefficients~\cite{LuxCPC15}
motivate such a two-step approach. Moreover,
optimizing for the odd Legendre coefficients alone allows to quantify
the minimal requirements on the intermediate electronically excited
state for reproducing PECD.
In the second approach, when using the \textit{ab initio}
two-photon absorption tensors and expansion coefficients,
we need to account for the unavoidable error bars of the \textit{ab
initio} results. To this
end, we also utilize optimization, allowing the two-photon tensor
matrix elements to vary, whereas the
excited state coefficients
are taken as is from the reexpansion of the \textit{ab initio}
wavefunctions.
\subsection{Fenchone}
\label{subsec:fenchone}
We start by addressing the question of how many partial waves are
required in the intermediate electronically
excited state to yield odd Legendre coefficients with $\mathcal{L}>1$, as
observed experimentally.
To this end, we consider the expansion of the intermediate electronically
excited state, cf. Eq.~\eqref{eq:exc_state}, with
$L_{o,\mathrm{max}}=2$ and $L_{o,\mathrm{max}}=3$,
i.e., up to $d$ and $f$ waves, for the
states B and C, and employ
the two-photon tensor elements from the CCSD/Rydberg-TZ calculations,
cf. Table~\ref{tab:twophotonA}.
\begin{table*}[tbp]
\caption{\label{tab:fenchone:opt1}
Legendre coefficients for the
PAD of fenchone (calculated at a photoelectron energy
of $0.56\,$eV and normalized with respect to $c_0$), obtained by
fitting to the experimental
values with the excited state coefficients
$a^{\ell_o}_{m_o}$ as free parameters. Only odd (top) and
both odd and even (bottom) contributions were accounted for in the fitting procedure.
The Rydberg states B, C1, C2 and C3 of fenchone are characterized by
their two-photon absorption tensor, cf. Tab.~\ref{tab:twophotonA}.
}
\begin{tabular}{c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
}
\toprule[1.0pt]
\addlinespace[0.05cm]
& & \phantom{abc} & & \multicolumn{3}{c}{state B} & &\multicolumn{3}{c}{state C1} & &\multicolumn{3}{c}{state C2} & &\multicolumn{3}{c}{state C3}\\
coeffs. & \phantom{abc} &exp.~\cite{LuxCPC15} & \phantom{abcdef} &$d$ waves &\phantom{ab}& $f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-1}
\cmidrule[0.1pt]{3-3}
\cmidrule[0.1pt]{5-7}
\cmidrule[0.1pt]{9-11}
\cmidrule[0.1pt]{13-15}
\cmidrule[0.1pt]{17-19}
\addlinespace[0.15cm]
$c_1$ &&$-0.067$& & $-0.067$& & $-0.067 $ && $-0.067 $& & $-0.067$ && $-0.067 $& & $-0.067$ && $-0.067 $& & $-0.067$ \\
$c_3$ &&$+0.008$& & $+0.080$& & $+0.080 $ && $+0.008 $& & $+0.008$ && $+0.008 $& & $+0.008$ && $+0.008 $& & $+0.008$ \\
$c_5$ &&$+0.004$& & $ -$& & $+0.0005$ && $- $& & $+0.004$ && $- $& & $+0.004$ && $- $& & $+0.004$ \\
\addlinespace[0.1cm]
\cmidrule[0.1pt]{1-19}
\addlinespace[0.15cm]
$c_1$ & &$-0.067 $& & $ -0.028 $& & $-0.041 $ && $-0.045 $& & $-0.036 $ & & $-0.040 $ & & $-0.048 $ & & $-0.045 $ & & $-0.046 $ \\
$c_2$ & &$-0.580 $& & $-0.076 $& & $-0.102 $ && $ -0.274 $& & $ -0.176 $ & & $ -0.146 $ & & $ -0.226 $ & & $ -0.224 $ & & $ -0.246 $ \\
$c_3$ & &$+0.008 $& & $ +0.006 $& & $+0.005 $ && $+0.006 $& & $ +0.008 $ & & $ +0.003 $ & & $ +0.004 $ & & $ +0.006 $ & & $ +0.005 $ \\
$c_4$ & &$-0.061 $& & $ -0.004 $& & $-0.004 $ && $-0.021 $& & $ -0.012 $ & & $ -0.012 $ & & $ -0.011 $ & & $ -0.012 $ & & $ -0.019 $ \\
$c_5$ & &$+0.004 $& & $ - $& & $+0.0001 $ && $ - $& & $+0.001 $ & & $- $ & & $+0.002 $ & & $ - $ & & $+0.001 $ \\
$c_6$ & &$-0.008 $& & $ +0.0002 $& & $+0.0003 $ && $+0.0007 $& & $+0.0001$ & & $+0.0006$ & & $+0.001 $ & & $-0.002 $ & & $-0.002 $ \\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{table*}
The results are presented
in Table~\ref{tab:fenchone:opt1}.
Presence of $f$-waves is required to obtain a non-zero coefficient $c_5$,
as expected from Table~\ref{table:all_in_one}.
Allowing for $f$ waves (with $n_0$=4) results in a perfect match for the odd coefficients
for states C1, C2 and C3,
cf. the upper part of Table~\ref{tab:fenchone:opt1}.
In contrast, for state B, $c_3$ and $c_5$, while having the correct sign, are off by an order of magnitude.
Modifying the optimization weights improves $c_5$ for state B, but
only at the expense of the agreement for $c_1$ and $c_3$. State B can therefore be ruled out as intermediate electronically excited state. This is further confirmed by the lower part
of Table~\ref{tab:fenchone:opt1}, showing the results for both odd and even Legendre coefficients in the optimization target.
For state B, the sign of $c_6$ does not match the experimental one.
Fitting both odd and even Legendre coefficients also allows to differentiate between the C states---only state C3 reproduces the correct sign of $c_6$. For all other Legendre moments, signs and order of magnitude of the coefficients match the experimental ones for all three C states.
Fitting to all and not just the odd Legendre coefficients decreases the agreement
between theoretical and experimental results for all C states. This
may indicate that the model, with a single $n_o$, is not capable of reproducing the full complexity of the process,
or it may be due to different experimental error bars for even and odd Legendre coefficients. In our fitting procedure, we have neglected the experimental error bars to keep
the calculations manageable. The experimental error bars for the even
Legendre coefficients are much larger than for the odd
ones~\cite{LuxCPC15}, and ignoring them may introduce a bias into the optimization procedure that could also explain the decreased agreement.
\begin{table*}[tbp]
\caption{\label{tab:fenchone_opt_vs_nonopt} Legendre
coefficients for the PAD of fenchone (calculated at a photoelectron energy of $0.58$ eV
and normalized with respect to $c_0$), obtained by employing the excited state
coefficients and two-photon tensors from the \textit{ab initio}
calculations. When including error bars, the tensor
elements are allowed to vary within
$\pm$20\%.}
\begin{tabular}{c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
}
\toprule[1.0pt]
\addlinespace[0.05cm]
& & \phantom{abc} & & \multicolumn{3}{c}{state B} & &\multicolumn{3}{c}{state C1} & &\multicolumn{3}{c}{state C2} & &\multicolumn{3}{c}{state C3}\\
coeffs. & \phantom{a} &exp.~\cite{LuxCPC15} & \phantom{abcde} &fixed &\phantom{ab}& error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-1}
\cmidrule[0.1pt]{3-3}
\cmidrule[0.1pt]{5-7}
\cmidrule[0.1pt]{9-11}
\cmidrule[0.1pt]{13-15}
\cmidrule[0.1pt]{17-19}
\cmidrule[0.1pt]{1-19}
\addlinespace[0.15cm]
$c_1$ & &$-0.067 $& & $+0.003 $& & $+0.003 $ && $-0.004 $& & $-0.003$ & & $-0.002$ & & $-0.001$ & & $-0.013$ & & $-0.015 $ \\
$c_2$ & &$-0.580 $& & $-0.238 $& & $-0.193 $ && $-0.272 $& & $-0.217$ & & $-0.409$ & & $-0.358$ & & $-0.250$ & & $-0.213 $ \\
$c_3$ & &$+0.008 $& & $-0.039 $& & $-0.029 $ && $+0.050 $& & $+0.038$ & & $+0.033$ & & $+0.025$ & & $+0.008$ & & $+0.010 $ \\
$c_4$ & &$-0.061 $& & $-0.095 $& & $-0.113 $ && $-0.084 $& & $-0.105$ & & $+0.010$ & & $-0.015$ & & $-0.023$ & & $-0.048 $ \\
$c_5$ & &$+0.004 $& & $-0.001 $& & $-0.001 $ && $+0.003 $& & $+0.002$ & & $-0.004$ & & $+0.003$ & & $-0.0004$ & & $-0.00004$ \\
$c_6$ & &$-0.008 $& & $-0.003 $& & $-0.005 $ && $+0.003 $& & $-0.001$ & & $-0.004$ & & $-0.017$ & & $-0.013$ & & $-0.007 $ \\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{table*}
While already Table~\ref{tab:fenchone:opt1} suggests that C3 is likely the intermediate electronically excited state state probed in the 2+1 photoexcitation process,
the ultimate test consists in using \textit{ab initio} results for all parameters in Eq.~\eqref{eq:PADfinal}, i.e., the excited state expansion coefficients and the two-photon tensor elements, and compare the resulting Legendre coefficients to the experimental data. The results are shown in Table~\ref{tab:fenchone_opt_vs_nonopt}
(``fixed tensor elements'').
Choosing a slightly larger photoelectron energy, specifically
$0.58\,$eV instead of $0.56\,$eV, with the shift of $0.02\,$eV well within the error bars of the calculated excitation energies,
considerably improves the agreement between
theoretical and experimental values, in particular for the $c_1$ coefficient.
Additionally, we allow the tensor elements to vary within a range
of $\pm 20\%$ to account for unavoidable errors in the electronic structure calculations. The best tensor elements within the error range are obtained by minimization. The corresponding functional is defined as
\begin{eqnarray}
\label{eq:minimization_functional}
\Gamma &=& \dfrac{1}{\Gamma^{(0)}}\, \sum^6_{j=1}\omega_j\left(\dfrac{{c}_j - {c}^{\text{exp}}_j}{{c}^{\text{exp}}_j}\right)^2,
\end{eqnarray}
where $\omega_j$ are optimization weights and
$\Gamma^{(0)}$ is the value of the functional using the fixed tensor elements.
Table~\ref{tab:fenchone_opt_vs_nonopt} confirms
state B to be ruled out, since it does not reproduce correctly even a single sign of the odd coefficients. For all states C, the correct signs are obtained for the lower order Legendre coefficients, up to $c_4$. State C1 yields the correct sign of $c_6$ only if the tensor elements are allowed to vary within $\pm 20\%$; the same holds for C2 and the sign of $c_5$. C3 does not reproduce the correct sign of $c_5$, but the value of $c_5$ is very small and close to zero when accounting for the error bars. In terms of PECD, the most important coefficient for fenchone is $c_1$, since its experimental value is an order of magnitude larger than that of the other odd coefficients. For $c_1$, the best agreement is obtained for state C3, differing from the experimental value by a factor of five. In contrast, the difference is by a factor of about twenty for state C1, and even larger for state C2. While $c_1$ is too small by more than an order of magnitude for states C1 and C2, $c_3$ is overestimated by a factor of five for C1 and a factor of three for C2. For states C1 and C2, the largest odd Legendre coefficient is thus $c_3$, unlike the experimental result where it is $c_1$. In contrast, the theoretical result for $c_3$ is in quantitative agreement for state C3 which therefore yields the correct ordering of the odd Legendre coefficients in terms of their magnitude. We thus conjecture that
for fenchone, state C3 is most likely the intermediate electronically state probed in the experiment, despite the fact that $c_5$ is very close to zero.
The reason for the discrepancy exclusively
for $c_5$, while all other coefficients match the experimental ones at least qualitatively, is not entirely clear. A necessary condition for non-vanishing $c_5$ is, according to Table~\ref{table:all_in_one},
that the $d$-wave contribution of the intermediate state to be
non-vanishing. The results shown in Table~\ref{tab:fenchone_opt_vs_nonopt} thus suggest that our calculations underestimate the $d$-wave character of C3.
This may be caused by an improper description of
long-range interaction between the photoelectron and the remaining ion, i.e., by the fact that the true potential felt by the photoelectron is neither central nor point-like, or by the interaction between the laser field and the photoelectron whose time dependence is neglected in our model.
Finally, the error bars of the two-photon tensor elements may be larger than 20\%. Indeed, allowing error bars
of $\pm$50\% in the two-photon absorption tensor elements removes the
disagreement for $c_5$ and state C3. At the same time, these error bars do not significantly improve the agreement for the other two states. For example,
the coefficient $c_1$ is $-0.0061$ for state C1 and $-0.0045$ for state C2, leaving the conclusion that state C3 is
the intermediate resonance unchanged.
\begin{table}[ht]
\caption{\label{table:fenchone_50_percent_for_c5} Legendre
coefficients for the PAD of fenchone (calculated at a photoelectron energy of $0.58\,$eV and normalized with
respect to $c_0$), obtained by employing the excited state
coefficients and two-photon tensor elements from the \textit{ab initio} for state C3 and increasing error bars of the two-photon tensor elements.
Minimization of the functional in Eq.~\eqref{eq:minimization_functional} carried out with equal (top) and unequal (bottom, $\omega_5 = 10\omega, \omega_{j=1,\ldots,4,6}=\omega$) optimization weights.
}
\begin{tabular}{c !{\vrule width 0pt}c !{\vrule width -4pt}c !{\vrule width -12pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt} c!{\vrule width -4pt}c!{\vrule width -4pt}c!{\vrule width -4pt} }
\bottomrule[0.8pt]
\addlinespace[0.05cm]
&exp.~\cite{LuxCPC15} & \phantom{ab} &fixed &\phantom{ab}& $\pm 20\%$ &\phantom{ab}& $\pm 30\%$ &\phantom{ab}&$\pm 50\%$ &\phantom{a} \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-11}
\addlinespace[0.15cm]
$c_1$ &$-0.067 $& & $-0.012 $ && $-0.015 $ && $-0.016 $ && $-0.016 $ &\\
$c_2$ &$-0.580 $& & $+0.250 $ && $-0.213 $ && $-0.210 $ && $-0.212 $ &\\
$c_3$ &$+0.008 $& & $+0.008 $ && $+0.010 $ && $+0.010 $ && $+0.010 $ &\\
$c_4$ &$-0.061 $& & $-0.023 $ && $-0.045 $ && $-0.048 $ && $-0.048 $ &\\
$c_5$ &$+0.004 $& & $-0.0004$ && $-0.00004$ && $-0.00001$ && $+0.00002$ &\\
$c_6$ &$-0.008 $& & $-0.013 $ && $-0.007 $ && $ -0.007 $ && $-0.007 $ &\\
\addlinespace[0.08cm]
\cmidrule[0.05pt]{1-11}
\addlinespace[0.08cm]
\multicolumn{2}{l}{$\Gamma$ (equal $\omega_j$)} & & $1.0$ && $0.714 $ && $0.711 $ && $0.705 $ &\\
\addlinespace[0.05cm]
\bottomrule[0.8pt]
\bottomrule[0.8pt]
\addlinespace[0.05cm]
&exp.~\cite{LuxCPC15} & \phantom{ab} &fixed &\phantom{ab}& $\pm 20\%$ &\phantom{ab}& $\pm 30\%$ &\phantom{ab}&$\pm 50\%$ &\phantom{a} \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-11}
\addlinespace[0.15cm]
$c_1$ &$-0.067 $& & $-0.012 $ && $-0.015 $ && $-0.018 $ && $-0.022 $ &\\
$c_2$ &$-0.580 $& & $+0.250 $ && $-0.223 $ && $-0.227 $ && $-0.268 $ &\\
$c_3$ &$+0.008 $& & $+0.008 $ && $+0.010 $ && $+0.011 $ && $+0.014 $ &\\
$c_4$ &$-0.061 $& & $-0.023 $ && $-0.045 $ && $-0.0504$ && $-0.033 $ &\\
$c_5$ &$+0.004 $& & $-0.0004$ && $+0.00004$ && $+0.0004$ && $+0.001 $ &\\
$c_6$ &$-0.008 $& & $-0.013 $ && $-0.006 $ && $-0.001 $ && $-0.001 $ &\\
\addlinespace[0.05cm]
\cmidrule[0.05pt]{1-11}
\addlinespace[0.08cm]
\multicolumn{2}{l}{$\Gamma$ (unequal $\omega_j$)} & & $1.0$ && $0.775$ && $0.710$ && $0.686$ &\\
\multicolumn{2}{l}{$\Gamma$ (equal $\omega_j$)} & & $1.0$ && $0.720$ && $0.917$ && $0.994$ &\\
\addlinespace[0.05cm]
\bottomrule[0.8pt]
\end{tabular}
\end{table}
A systematic increase of the two-photon tensor error bars for state C3 is presented in Table~\ref{table:fenchone_50_percent_for_c5}.
We compare minimization of the functional~\eqref{eq:minimization_functional} with equal weights for all Legendre coefficients (upper part of Table~\ref{table:fenchone_50_percent_for_c5}) to that with a ten times larger weight of $c_5$ (lower part of Table~\ref{table:fenchone_50_percent_for_c5}).
The movitation behind the second choice is to see whether the
correct sign can be obtained for $c_5$ without the need to increase the error bars to a very high value.
When increasing the error bars of the two-photon tensor elements, while using the same optimization weights in Eq.~\eqref{eq:minimization_functional}, the value of $c_5$ is increased until it changes sign. The overall value of the functional decreases monotonically, as expected. When the optimization weight of $c_5$ is taken 10 times larger than those of all other Legendre coefficients, assuming an error range of $\pm$20\% for the two-photon tensor elements of state C3 already yields the correct sign for all Legendre coefficients. Increasing the error range in this case further improves the magnitude of $c_5$, until it differs from the experimental one by a factor of four for error bars of $\pm$50\%. However, this comes at the expense of the agreement for all other Legendre coefficients except $c_1$. It is quantified by evaluating $\Gamma$ in Eq.~\eqref{eq:minimization_functional}
with equal weights, using the optimized two-photon tensor elements obtained with unequal weights.
Overall, already the two-photon tensor elements taken directly from the \textit{ab initio} calculations yield a satisfactory agreement for the PAD between theory and experiment for state C3. The agreement is further improved by allowing the two-photon tensor elements to vary within a range of $\pm 20\%$
to account for the error bars of the \textit{ab initio} calculations.
All Legendre coefficients except $c_3$ are sensitive to a variation
within this range. Except for $c_5$, i.e., underestimation of the excite state $f$-wave character,
a surprisingly good agreement between theoretical and experimental values is obtained,
with the numerical values differing from the experimental ones up to a factor of five.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{fenchone_plot2.png}
\caption{Comparison of experimentally obtained
and theoretically calculated Legendre coefficients in the PAD for
$S$-$(+)$-fenchone, using state C3 and
right circular polarization.
The calculations were carried out for a fixed photoelectron energy of
$0.56\,$eV, respectively
$0.58\,$eV, as well as integrating over a Gaussian distribution
of photoelectron energies (denoted by $\rho(E)$) centered at $0.56\,$eV with a FWHM
of 200$\,$meV.
}
\label{fig:fenchone:ci}
\end{figure}
The semi-quantitative agreement between theory and experiment
is further illustrated in Fig.~\ref{fig:fenchone:ci} where we compare
calculation results for two specific photoelectron energies,
$0.56\,$eV and $0.58\,$eV, to the experimentally obtained
Legendre coefficients.
The differences for the Legendre coefficients for $0.56\,$eV and
$0.58\,$eV indicates the dependence of our results on the error
bar of the calculated excitation energy of the intermediate
electronically excited state.
Additionally, Fig.~\ref{fig:fenchone:ci} also shows the result
of integrating over a normal distribution of photoelectron energies
centered at $0.56\,$eV with a full width at half maximum (FWHM) of
200$\,$meV. This accounts for the experimental averaging over
photoelectron energies~\cite{LuxCPC15}. The disagreement
between theoretical and experimental results amounts to a factor
of about two which translates into a ``mean'' PECD
of 3\% and 4\% for the fixed and $\pm 20\%$ adjustable tensor elements, respectively,
compared to the experimental value of
10.1\%~\cite{LuxCPC15}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{fenchone_plot1.png}
\caption{Dependence of the calculated Legendre coefficients on photoelectron energy for the PAD of state C3 for $S$-$(+)$-fenchone , using right
circular polarization.
}
\label{fig:fenchone:civsE}
\end{figure}
The dependence of the calculated Legendre coefficients on the
photoelectron energy is investigated in more detail in
Fig.~\ref{fig:fenchone:civsE}. A non-monotonic behavior is observed
for all orders. Such a non-monotonic behavior of the Legendre
coefficients as a function of the photoelectron energy has
already been reported for $c_1$ in the one-photon ionization of randomly oriented
molecules~\cite{HardingChemPhys2005}. It reflects the dependence of
the Legendre coefficients on the radial part of the photoelectron
wavefunction.
\begin{table*}[tb]
\caption{\label{table:kummer_vs_plane_waves}
Legendre coefficients in the PAD of fenchone for state C3 and different
photoelectron energies, obtained with
hydrogenic continuum functions which include the Coulomb interaction
between photoelectron and photoion and plane waves where this
interaction is neglected. $\rho(E)$ stands for integration over a
Gaussian distribution of photoelectron energies centered at
0.56$\,$eV with a FWHM of 200$\,$meV.
}
\centering
\begin{minipage}{1\linewidth}
\begin{tabular}{c !{\vrule width 0pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c}\toprule[1.0pt]
& & & \multicolumn{5}{c}
hydrogenic continuum
functions} &\phantom{abc}& & & \multicolumn{5}{c}{plane waves}\\
& &\phantom{abc}&\multicolumn{5}{c}{photoelectron energy~(eV)}&\phantom{abc}&&\phantom{abc}&\multicolumn{5}{c}{photoelectron energy~(eV)}\\
coeffs.\phantom{a } &exp.~\cite{LuxCPC15} &\phantom{abcd} &\multicolumn{1}{c}{$\hspace{0.7cm}0.36\hspace{0.5cm}$}&\multicolumn{1}{c}{$0.58$}& &\multicolumn{1}{c}{$\hspace{0.7cm}0.75\hspace{0.7cm}$}&\multicolumn{1}{c}{$\rho(E)$} &&\phantom{a} & &\multicolumn{1}{c}{$\hspace{0.7cm}0.36\hspace{0.5cm}$}& \multicolumn{1}{c}{$0.58$}&&\multicolumn{1}{c}{$\hspace{0.7cm}0.75\hspace{0.5cm}$}& \multicolumn{1}{c}{$\rho(E)$}\\
\hline
$c_1$ & $-0.061$ & & $-0.002$ &$-0.012$ && $-0.058$ &$-0.037$ & &
&& $+0.002$ &$+0.006$ && $+0.002$&$-0.017$\\
$c_2$ & $-0.580$ & &$-0.341$ &$-0.250$ && $-0.385$ &$-0.411$ & &
&& $+0.034$ &$+0.012$ && $-0.029$&$-0.126$\\
$c_3$ &$+0.008$ & &$-0.008$ &$+0.008 $ && $+0.170$ &$+0.005$ &&
&& $-0.006$ &$-0.061$ && $-0.012$&$+0.009$\\
$c_4$ &$-0.061$ & &$+0.002$ &$-0.023$ && $-0.008$ &$-0.030$ &
&& $+0.114$ &$-0.178$ && $-0.001$&$-0.051$\\
$c_5$ &$+0.004$ & &$-0.001$ &$-0.0004$ && $+0.192$ &$-0.00003$ &
&& $ +0.0001$ &$-0.004$ && $-0.001$&$+0.00001$\\
$c_6$ &$-0.008$ & &$-0.004$ &$-0.007$ && $+0.001$&$-0.004$ & & && $+0.001$ &$-0.013$ && $+0.006$&$-0.004$\\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{minipage}
\end{table*}
This dependence
is studied further in Table~\ref{table:kummer_vs_plane_waves}, where we
compare the Legendre coefficients obtained with the Kummer confluent
functions, i.e., the hydrogenic continuum wavefunctions defined Appendix~\ref{subsec:H-cont}, to those obtained with plane waves.
The latter completely neglect the Coulomb
interaction between photoelectron and photoion. The plane waves clearly fail to reproduce the experimentally observed PECD, see in particular the values for $0.58\,$eV.
Moreover, their values vary drastically with photoelectron energy. This difference is most likely explained by the highly oscillatory nature of plane waves even at short distances, in contrast to the hydrogenic scattering functions.
Our finding is in line with the observation of Ref.~\cite{LeinPRA2014} for the strong field approximation where plane waves fail completely to produce any PECD. In our model, non-zero odd Legendre coefficients are obtained, but a description of the photoelectron continuum that accounts for the Coulomb interaction between photoelectron and photoion provides clearly better results.
\subsection{Camphor}
\label{subsec:camphor}
\begin{table*}[tbp]
\caption{\label{tab:camphor:opt1}
Legendre coefficients for the
PAD of camphor (calculated at a photoelectron energy
of $0.52\,$eV and normalized with respect to $c_0$), obtained by
fitting to the experimental
values~\cite{LuxCPC15} with the excited state coefficients
$a^{\ell_o}_{m_o}$ as free parameters. Only odd (top) and
both odd and even (bottom) contributions were accounted for in the fitting procedure.
The Rydberg states B, C1, C2 and C3 of camphor are characterized by
their two-photon absorption tensor, cf. Tab.~\ref{tab:twophotonA}.
}
\begin{tabular}{c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
}
\toprule[1.0pt]
\addlinespace[0.05cm]
& & \phantom{abc} & & \multicolumn{3}{c}{state B} & &\multicolumn{3}{c}{state C1} & &\multicolumn{3}{c}{state C2} & &\multicolumn{3}{c}{state C3}\\
coeffs. & \phantom{abc} &exp.~\cite{LuxCPC15} & \phantom{abcdef} &$d$ waves &\phantom{ab}& $f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves &\phantom{abcd}&$d$ waves &\phantom{ab}&$f$ waves \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-1}
\cmidrule[0.1pt]{3-3}
\cmidrule[0.1pt]{5-7}
\cmidrule[0.1pt]{9-11}
\cmidrule[0.1pt]{13-15}
\cmidrule[0.1pt]{17-19}
\addlinespace[0.15cm]
$c_1$ &&$+0.026$& & $+0.026 $& & $+0.024 $ && $+0.028 $& & $+0.026$ && $+0.020 $& & $+0.027$ && $+0.025 $& & $+0.026$ \\
$c_3$ &&$-0.053$& & $+0.038 $& & $-0.025 $ && $-0.038 $& & $-0.040$ && $-0.032 $& & $-0.042$ && $-0.042 $& & $-0.047$ \\
$c_5$ &&$+0.008$& & $ -$& & $+0.004$ && $- $& & $+0.006$ && $- $& & $+0.006$ && $- $& & $+0.005$ \\
\addlinespace[0.1cm]
\cmidrule[0.1pt]{1-19}
\addlinespace[0.15cm]
$c_1$ & &$+0.026 $& & $+0.099 $& & $+0.096 $ && $+0.051 $& & $+0.054$ & & $+0.054$ & & $+0.041 $ & & $+0.040 $ & & $+0.048 $ \\
$c_2$ & &$-0.670 $& & $-0.198 $& & $-0.248 $ && $-0.130 $& & $-0.209$ & & $-0.135$ & & $-0.170 $ & & $-0.193 $ & & $-0.230 $ \\
$c_3$ & &$-0.053 $& & $-0.034 $& & $-0.022 $ && $-0.023 $& & $-0.020$ & & $+0.037$ & & $+0.043 $ & & $+0.028 $ & & $+0.013 $ \\
$c_4$ & &$+0.012 $& & $+0.013 $& & $+0.013 $ && $+0.014 $& & $+0.013$ & & $+0.017$ & & $+0.018 $ & & $+0.011 $ & & $+0.019 $ \\
$c_5$ & &$+0.008 $& & $ - $& & $+0.001 $ && $ - $& & $+0.001$ & & $ - $ & & $+0.002 $ & & $ - $ & & $ +0.002$ \\
$c_6$ & &$-0.001 $& & $-0.001 $& & $-0.001 $ && $-0.001 $& & $-0.001 $ & & $-0.003$ & & $-0.002 $ & & $-0.001 $ & & $-0.003 $ \\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{table*}
We now turn to camphor, for which the experimentally recorded
photoelectron spectrum peaks at $0.52\,$eV~\cite{LuxCPC15}. Analogously to our discussion for fenchone, we first investigate
possible candidates for the intermediate resonance by considering
the respective two-photon tensor alone and treating the excited state expansion coefficients as free optimization parameters. The results are displayed in Table~\ref{tab:camphor:opt1}, comparing the optimization that targets only the odd Legendre coefficients to that considering both odd and even $c_j$. For all states, a non-zero $c_5$ coefficient is only obtained by including $f$-waves in the electronically excited state (corresponding to $n_o=4$), as expected. When expanding up to $f$-waves, all four candidates allow for odd Legendre coefficients close to the experimental ones, unlike the case of fenchone, where state B could already be ruled out at this stage. However, states C2 and C3 do not allow for the correct sign of $c_3$, when the optimization targets both odd and even Legendre coefficients.
\begin{table*}[tbp]
\caption{\label{table:reexpansion_camphore_other_states_0.52}
Legendre coefficients for the PAD of camphor (calculated at a photoelectron
energy of $0.52\,$eV and normalized with respect
to $c_0$), obtained by employing the excited state
coefficients and two-photon tensor elements from the \textit{ab
initio} calculations. When including error bars, the tensor
elements are allowed to vary within
$\pm$20\%.}
\begin{tabular}{c !{\vrule width 0pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
}
\toprule[1.0pt]
\addlinespace[0.05cm]
& & \phantom{abc} & & \multicolumn{3}{c}{state B} & &\multicolumn{3}{c}{state C1} & &\multicolumn{3}{c}{state C2} & &\multicolumn{3}{c}{state C3}\\
coeffs. & \phantom{a} &exp.~\cite{LuxCPC15} & \phantom{abcde} &fixed &\phantom{ab}& error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-1}
\cmidrule[0.1pt]{3-3}
\cmidrule[0.1pt]{5-7}
\cmidrule[0.1pt]{9-11}
\cmidrule[0.1pt]{13-15}
\cmidrule[0.1pt]{17-19}
\cmidrule[0.1pt]{1-19}
\addlinespace[0.15cm]
$c_1$ & &$+0.026 $& & $+0.003 $& & $+0.002 $ && $+0.002 $& & $+0.001 $ & & $-0.002 $ & & $-0.002 $ & & $-0.001 $ & & $-0.001 $ \\
$c_2$ & &$-0.678 $& & $-0.384 $& & $-0.383 $ && $-0.389 $& & $-0.401 $ & & $-0.395 $ & & $-0.395 $ & & $-0.421 $ & & $-0.425 $ \\
$c_3$ & &$-0.053 $& & $-0.025 $& & $-0.022 $ && $-0.020 $& & $-0.017 $ & & $+0.005 $ & & $+0.008 $ & & $+0.004 $ & & $+0.003 $ \\
$c_4$ & &$+0.012 $& & $-0.066 $& & $-0.050 $ && $+0.020 $& & $+0.023 $ & & $+0.004 $ & & $-0.002 $ & & $-0.008 $ & & $+0.0001$ \\
$c_5$ & &$+0.008 $& & $-0.002 $& & $-0.001 $ && $+0.0001 $& & $+0.0001$ & & $+0.001 $ & & $+0.001 $ & & $+0.0003$ & & $+0.001 $ \\
$c_6$ & &$-0.001 $& & $+0.043 $& & $+0.035 $ && $-0.026 $& & $-0.023 $ & & $-0.008 $ & & $-0.001 $ & & $+0.005 $ & & $-0.0004$ \\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{table*}
\begin{table*}[tbp]
\caption{\label{table:reexpansion_camphore_other_states_0.58}
The same as Table~\ref{table:reexpansion_camphore_other_states_0.52} but for a photoelectron energy of 0.58$\,$eV.
}
\begin{tabular}{c !{\vrule width 0pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}
}
\toprule[1.0pt]
\addlinespace[0.05cm]
& & \phantom{abc} & & \multicolumn{3}{c}{state B} & &\multicolumn{3}{c}{state C1} & &\multicolumn{3}{c}{state C2} & &\multicolumn{3}{c}{state C3}\\
coeffs. & \phantom{a} &exp.~\cite{LuxCPC15} & \phantom{abcde} &fixed &\phantom{ab}& error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars&\phantom{abcd}&fixed&\phantom{ab}&error bars \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-1}
\cmidrule[0.1pt]{3-3}
\cmidrule[0.1pt]{5-7}
\cmidrule[0.1pt]{9-11}
\cmidrule[0.1pt]{13-15}
\cmidrule[0.1pt]{17-19}
\cmidrule[0.1pt]{1-19}
\addlinespace[0.15cm]
$c_1$ & &$+0.026 $& & $+0.033 $& & $+0.030 $ && $+0.026 $& & $+0.027 $ & & $-0.005 $ & & $-0.009 $ & & $-0.004 $ & & $-0.002 $ \\
$c_2$ & &$-0.678 $& & $-0.450 $& & $-0.498 $ && $-0.477 $& & $-0.502 $ & & $-0.431 $ & & $-0.427 $ & & $-0.432 $ & & $-0.437 $ \\
$c_3$ & &$-0.053 $& & $-0.029 $& & $-0.031 $ && $-0.024 $& & $-0.022 $ & & $-0.003 $ & & $-0.0002$ & & $+0.001 $ & & $-0.003 $ \\
$c_4$ & &$+0.012 $& & $-0.074 $& & $-0.034 $ && $+0.003 $& & $+0.009 $ & & $-0.022 $ & & $-0.036 $ & & $-0.026 $ & & $-0.018 $ \\
$c_5$ & &$+0.008 $& & $-0.001 $& & $-0.001 $ && $+0.0001 $& & $+0.0001$ & & $+0.0002$ & & $+0.001 $ & & $+0.0002$ & & $+0.0001$ \\
$c_6$ & &$-0.001 $& & $+0.030 $& & $+0.024 $ && $-0.015 $& & $-0.011 $ & & $-0.020 $ & & $-0.010 $ & & $+0.0001$ & & $+0.003 $ \\
\bottomrule[1.0pt]
\addlinespace[0.1cm]
\end{tabular}
\end{table*}
Once again, the ultimate test to rule out a given state consists in
using both two-photon tensor elements and excited state expansion coefficients obtained from the \textit{ab initio} calculations. The corresponding
results
are shown in Table~\ref{table:reexpansion_camphore_other_states_0.52}. First of all, Table~\ref{table:reexpansion_camphore_other_states_0.52} confirms that states C2 and C3 are not the intermediate resonance probed in the experiment, since both states yield the wrong sign for both $c_1$ and $c_3$. Comparing the remaining two candidates, states B and C1, a much better agreement is observed for C1 which yields the correct signs for all Legendre coefficients. In contrast, state B only yields correct signs for the lower orders, $c_1$, $c_2$, and $c_3$. When accounting for the error bars in the two-photon tensor, a correct sign is additionally obtained for $c_4$, but the signs for
$c_5$ and $c_6$ still cannot properly be reproduced with state B as intermediate resonance. As to the state C1, not only all signs but also
the correct order of magnitude for $c_2$, $c_3$ and $c_4$ is observed, whereas the values are too small by one order of magnitude for $c_1$ and by two orders for $c_5$ and too large by one order of magnitude for $c_6$. Allowing the
two-photon absorption tensor for state C1 to vary within an error range of $\pm 20\%$ does not yield any significant improvement. It therefore does not seem to be the unavoidable error in the two-photon tensor elements that is important.
A second source of error in the \textit{ab initio} calculations is found in the excitation energy of the intermediate electronically excited state. This is reflected in the photoelectron energy. We thus present results for a second photoelectron energy, 0.58$\,$eV in
Table~\ref{table:reexpansion_camphore_other_states_0.58}. For
state C1, all signs still match, and the correct order of magnitude is now obtained for $c_1$ to $c_4$. In particular, $c_1$ is now in quantitative agreement with the experimental value, and $c_2$ and $c_3$ differ by less than factor of 1.5, respectively 2.5.
Despite the disagreement in the numerical values for $c_5$ and $c_6$, C1 is clearly the state the best matches the experimental data---the results obtained for states
B, C2 and C3 show the same deficiencies as in Table~\ref{table:reexpansion_camphore_other_states_0.52}.
\begin{table}[ht]
\caption{\label{table:camphore_50_percent_for_c6} Legendre
coefficients for the PAD of camphor (calculated at a photoelectron energy of $0.58$ eV and normalized with
respect to $c_0$), obtained by employing the excited state
coefficients and two-photon tensor elements from the \textit{ab initio}
calculations for state C3 and increasing error bars of the two-photon tensor elements.
Minimization of the functional $\Gamma$ in Eq.~\eqref{eq:minimization_functional} is carried out with equal optimization weights. }
\begin{tabular}{c !{\vrule width 0pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c!{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt}c !{\vrule width -4pt} c!{\vrule width -4pt}c!{\vrule width -4pt}c!{\vrule width -4pt} }
\toprule[0.8pt]
\addlinespace[0.03cm]
coeffs. &exp.~\cite{LuxCPC15} & \phantom{ab} &fixed &\phantom{ab}& $\pm 20\%$ &\phantom{ab}& $\pm 30\%$ &\phantom{ab}&$\pm 50\%$ &\phantom{a} \\
\addlinespace[0.05cm]
\cmidrule[0.1pt]{1-11}
\addlinespace[0.15cm]
$c_1$ &$+0.026 $& & $+0.026 $ && $+0.027 $ && $+0.026 $ && $+0.022 $ &\\
$c_2$ &$-0.678 $& & $-0.477 $ && $-0.502 $ && $-0.515 $ && $-0.529 $ &\\
$c_3$ &$-0.053 $& & $-0.024 $ && $-0.022 $ && $-0.020 $ && $-0.014 $ &\\
$c_4$ &$+0.012 $& & $+0.003 $ && $+0.009 $ && $+0.012 $ && $+0.012 $ &\\
$c_5$ &$+0.008 $& & $+0.0001$ && $+0.0001$ && $+0.0001$ && $+0.0003 $ &\\
$c_6$ &$-0.001 $& & $-0.015 $ && $-0.011 $ && $-0.008 $ && $-0.001 $ &\\
\addlinespace[0.15cm]
\cmidrule[0.05pt]{1-11}
\addlinespace[0.08cm]
$\Gamma$ &$ $& & $1.0$ && $0.50$ && $0.26$ && $0.01$ &\\
\addlinespace[0.08cm]
\bottomrule[0.9pt]
\end{tabular}
\end{table}
The agreement with the experimental data obtained for state C1 can be further improved by allowing for larger error bars in the two-photon tensor elements. This is demonstrated in Table~\ref{table:camphore_50_percent_for_c6}. In fact, the agreement can be made fully quantitative, except for $c_5$, when increasing the error bars up to $\pm$50\%, as indicated by the small value of the optimization functional.
In comparison to fenchone, cf. Table~\ref{table:fenchone_50_percent_for_c5},
minimization results in significantly smaller values for $\Gamma$, as
the error range is increased. Also,
the higher order Legendre coefficients are found to be more sensitive to
modifications of the two-photon tensor elements than the lower
ones. This is not surprising since the higher order coefficients
depend more strongly on the anisotropy induced by the two-photon absorption.
Analogously to fenchone, $c_5$ has the correct sign but remains too small by one order of magnitude. This indicates once more that we significantly underestimate the $d$-wave contribution to the intermediate electronically excited state, which amounts to just 6\% for both
fenchone and camphor in our calculations.
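To make the tensor-variation procedure concrete, the following Python sketch shows one way such a minimization could be carried out. It assumes (since Eq.~\eqref{eq:minimization_functional} is not repeated here) that $\Gamma$ is a weighted sum of squared deviations between calculated and experimental Legendre coefficients; \texttt{calc\_legendre} and \texttt{ab\_initio\_tensor} are placeholders for the full PAD calculation and the computed two-photon tensor, respectively.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def gamma(tensor_flat, weights, c_exp):
    # Gamma assumed to be a weighted squared deviation; equal weights
    # correspond to the equal optimization weights mentioned above.
    c_calc = calc_legendre(tensor_flat.reshape(3, 3))  # placeholder
    return float(np.sum(weights * (c_calc - c_exp) ** 2))

T0 = ab_initio_tensor.flatten()        # ab initio two-photon tensor
bounds = [(t - 0.5 * abs(t), t + 0.5 * abs(t)) for t in T0]  # +/- 50%
result = minimize(gamma, T0, args=(np.ones(6), c_exp), bounds=bounds)
\end{verbatim}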
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{camphor_plot2.png}
\caption{Comparison of experimentally obtained
and theoretically
calculated Legendre coefficients in the PAD for $R$-$(+)$-camphor,
using state C1 and right circular polarization.
The calculations considered fixed photoelectron energies of
$0.52\,$eV and
$0.58\,$eV as well as an integration over a Gaussian distribution
of energies centered at $0.58\,$eV with a FWHM
of 200$\,$meV.
}
\label{fig:camphor:ci}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{camphor_plot1.png}
\caption{Dependence of the calculated Legendre coefficients in the PAD
of camphor, state C1, on the photoelectron energy within the range of
0.50$\,$eV to 0.58$\,$eV using right circularly polarized light.}
\label{fig:camphor:civsE}
\end{figure}
The discussion above is summarized and illustrated in Fig.~\ref{fig:camphor:ci}, which shows, besides the Legendre coefficients for photoelectron energies of 0.52$\,$eV and 0.58$\,$eV, those obtained when integrating over a normal distribution of photoelectron energies, centered at 0.58$\,$eV, with a FWHM of 200$\,$meV. The latter mimics the spectral bandwidth in the experiment.
Introducing a distribution of photoelectron energies slightly worsens the agreement between theory and experiment.
This can be attributed to the striking sensitivity of
the Legendre coefficients to the photoelectron energy, as shown in Fig.~\ref{fig:camphor:civsE}.
A further improvement of the theoretical model would thus require experimental data for more than one photoelectron energy and with better energy resolution.
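For reference, the spectral averaging used for Fig.~\ref{fig:camphor:ci} amounts to a Gaussian-weighted mean of the energy-dependent coefficients. A minimal numerical sketch, assuming the coefficients $c_j(E)$ have been precomputed on an energy grid, reads:

\begin{verbatim}
import numpy as np

def averaged_coefficient(energies, c_of_E, center=0.58, fwhm=0.200):
    # Convert the FWHM (in eV) to a standard deviation.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.exp(-0.5 * ((energies - center) / sigma) ** 2)
    # Gaussian-weighted mean of c_j(E) over the energy grid.
    return np.trapz(w * c_of_E, energies) / np.trapz(w, energies)
\end{verbatim}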
\subsection{Discussion and Summary}
\label{subsec:discussion}
Before concluding our paper, we briefly summarize our main findings.
Our model describes the one-photon photoionization of an ``initial'' state that is prepared by non-resonant, orientation-dependent two-photon absorption, using a single-center approximation of the photoelectron continuum and ideas from optimal control. Within this model, a PECD as defined in Eq.~\eqref{eq:opt_func} of up to 35\% is possible. This is the maximum PECD that could be expected, within
our model, for an ensemble of randomly
oriented chiral molecules. That the upper limit stays below 100\% is due to the random
orientation of the molecules and, possibly, to the underlying approximations
made within our model.
One might thus speculate whether a better treatment of e.g. static exchange or contributions from the magnetic dipole interaction would allow for raising this limit even higher. It is, at any rate, already significantly higher than the largest PECD observed experimentally so far~\cite{LuxACIE12,LehmannJCP13,Janssen2014,LuxCPC15,FanoodPCCP15}. This encourages studies of molecules beyond bicyclic ketones, both experimentally and theoretically.
Our model accounts for the electronic structure of the experimentally investigated examples of fenchone and camphor in terms of their two-photon absorption tensor and intermediate electronically excited state based on \textit{ab initio} calculations. In both cases, there are several candidate electronic states which could serve as the intermediate resonance. For fenchone, knowledge of the two-photon tensors of the candidate states alone already suggests state C3 to be the intermediate resonance. Calculations
employing both two-photon tensors and excited state wavefunctions confirm this conjecture, in particular if the calculations account for error bars in the two-photon tensor. State C3 has a much larger $d$-wave component than all other electronically excited states that could be accessed by the two-photon excitation. The largest disagreement is observed in the Legendre coefficient $c_5$, suggesting that our model underestimates the $f$-wave component of state C3. For the lower order Legendre coefficients, a semi-quantitative agreement between theoretical and experimental values is obtained.
We find proper account of the Coulomb interaction between photoelectron and photoion to be crucial. When replacing, in our expansion of the photoelectron continuum wavefunction, hydrogenic basis functions by plane waves, no agreement with the experimental values is obtained. This is in line with an earlier study of PECD using the strong-field approximation~\cite{LeinPRA2014}, where plane waves completely fail to produce any PECD.
In contrast to fenchone, knowledge of the two-photon tensors for camphor is not sufficient to point to a single state as the intermediate resonance. However, calculations accounting for the \textit{ab initio} two-photon absorption matrix elements and excited state wavefunctions strongly suggest state C1 to be the intermediate resonance, in particular when including error bars of the two-photon absorption tensor. The agreement is found to depend very strongly on the photoelectron energy, with semi-quantitative agreement found for a slightly larger value than the experimental one. Such an energy shift could be explained by the error bars of the calculated excitation energy or by the dynamic Stark shift, which is neglected in our model.
\section{Conclusions \& Outlook}
\label{sec:conclusions}
We have derived a theoretical model to study PECD after (2+1)
resonantly enhanced multi-photon ionization in randomly oriented
chiral molecules. The model is based on a perturbative treatment of
the light-matter interaction within the electric dipole
approximation and combines an \textit{ab initio}
description of the non-resonant two-photon absorption with a
single-center expansion of the
photoelectron wavefunction into hydrogenic continuum functions. This
allows us to account for the Coulomb interaction between photoelectron
and photoion as well as electronic correlations in the transition to
the intermediate
electronically excited state. It neglects static exchange and
dynamic correlations in the
interaction of the photoelectron with the parent ion as well as the
time-dependence of the laser pulse and the possible multi-center
character of the continuum wavefunction.
The model
correctly reproduces the basic symmetry behavior expected under exchange of
handedness and exchange of light helicity.
Making use of the fundamental selection rules for two-photon absorption
and one-photon ionization, we have shown which Legendre coefficients
may be expected in the photoelectron angular distributions, depending
on the basic geometric properties in the electronic structure of the
molecules as well as the possible combinations of polarization for
two-photon absorption and one-photon ionization.
We have identified the role of the two-photon absorption tensor and
intermediate
state wavefunction---it is the partial wave decomposition of the
latter which determines PECD whereas the two-photon absorption tensor
(in the electric dipole approximation)
merely introduces an anisotropic distribution of photoexcited
molecules. Notably, the anisotropy is achieved by selection and not by
rotational dynamics which would occur on a much slower timescale than
that of femtosecond laser excitation.
We have applied our theoretical framework to fenchone and camphor,
which have been studied extensively in recent
experiments~\cite{LuxACIE12,LehmannJCP13,Janssen2014,LuxCPC15,FanoodPCCP15}.
The \textit{ab initio} calculations employed the coupled cluster
method as well
as density functional theory. Due to the Rydberg-like character of the
intermediate electronically excited state, diffuse basis functions
needed to be added to the standard basis sets. This has allowed us to
reach a reasonable agreement with
experimental values for the excited state energies.
We have used the electronic structure data to calculate the
photoionization cross section.
Accounting for the basic structure of the two-photon absorption tensor
alone has already allowed us to qualitatively reproduce the
experimental results for fenchone and camphor. The minimal requirement
was identified to be a
contribution of $d$-waves in the intermediate electronically excited
state. Such a contribution can be expected if the two-photon
absorption tensor is anisotropic.
Employing the \textit{ab initio} data in the calculation of the
photoelectron angular distribution, we have obtained a
semi-quantitative agreement between theoretical and experimental
Legendre coefficients characterizing the photoelectron angular
distribution.
The satisfactory agreement of our model with the experimental data
encourages a number of follow-up studies. First of all,
a fully time-dependent description should be employed, following the
lines of Ref.~\cite{SeidemanPRA2001}, because the photoelectron angular
distributions depend on the polarization
as well as the dynamics~\cite{HardingChemPhys2005}. Based on the model
developed here, an extension to time-dependent studies is
straightforward, but will require substantial numerical effort. Such
an extension will allow us to investigate the dependence of the
photoelectron angular distribution on the laser parameters, including
intensity, central frequency, spectral bandwidth and
varying polarization. The latter would be a first step towards the coherent
control of PECD.
In parallel to accounting for time-dependent effects, the electronic
structure treatment may be improved. In particular, the
multi-center character of the continuum wavefunction can be
accounted for by employing Dyson orbitals
in the calculation of the photoionization cross
section~\cite{OanaJCP07,OanaJCP09,HumeniukJCP13}.
Moreover, a perturbative
treatment of the static exchange for the photoelectron and extension
to beyond the electric dipole approximation should be
straightforward. The former would allow for a detailed study of the
dependence of the angular distribution on the photoelectron energy,
including low photoelectron kinetic energies. It would thus open the
way toward investigating the role of the chiral ionic core in the dynamics
leading to the photoelectron angular distributions.
An extension to beyond the electric dipole approximation would allow for a unified theoretical treatment of further observables beyond PECD, such as
circular dichroism in laser mass spectrometry of photoions~\cite{BoeslCPC06,LiJCP06,BreunigCPC09}, as well as comparison with different levels of electronic structure theory~\cite{KroenerPCCP15}.
\begin{acknowledgments}
We would like to thank Christian Lux and Thomas Baumert for
discussions as well as Sebastian Marquardt and Hauke Westemeier for help
and discussions.
Financial support by the State Hessen Initiative for the
Development of Scientific and Economic Excellence (LOEWE) within the
focus project Electron Dynamics of Chiral Systems (ELCH) is
gratefully acknowledged.
\end{acknowledgments}
|
2,869,038,154,384 | arxiv | \section{Introduction}
Over the past several years, conversational systems have become of increasing interest to the research community. Previously, interest in human-machine spoken language interaction had focused on goal-oriented dialog systems. Over that time, spoken dialog technologies were able to achieve a level of maturity that enabled their use in commercial systems, even as the approaches shifted from knowledge-engineering to machine learning, based in part on the availability of increasingly larger corpora of human-machine interaction. For the latter, see the overview by \cite{YOUNG2010150}.
Goal-oriented interaction, however, represents only one aspect of natural human communication. Left out of sustained investigation were interactions that did not have concrete transactional goals and took place for other reasons, for example social bonding or entertainment. An early example of a socialbot was ELIZA \cite{weizenbaum1966eliza}, a rule-based system that simulated natural conversation.
Its successors include the more recent A.L.I.C.E. \cite{shawar2002comparison} system as well as various systems built for the Loebner Prize competitions \cite{bradevsko2012survey}. In the past few years, a renewed interest in such systems has emerged, making use of readily available data. The initial systems \cite{banchs2012iris} were based on information retrieval techniques and operated by taking immediately preceding inputs (such as the human's last turn, possibly in combination with the system's last turn) as a retrieval key for a database of conversational material, such as film scripts or online chats. While such systems could carry on conversations, they tended to have difficulty maintaining continuity and contextual awareness and would often reduce to question-answering. An alternative approach grew out of the machine learning community, applying deep learning techniques to the problem \cite{1506.05869,sordoni2015neural}. Such systems usually use sequence-to-sequence models, and a variety of approaches have been tried, such as using a hierarchical architecture \cite{serban2015building}. Results have been mixed; often the performance of such systems is greatly hindered by their lack of context and continuity, two very complex concepts not easily learned by a sequence-to-sequence model, possibly due to the inconsistency and relative paucity of data, especially when compared to other fields where machine learning has been successfully applied.
Socialbots in the 2017 Alexa Prize Competition made use of such techniques \cite{DBLP:journals/corr/abs-1801-03604}, but ultimately with limited success. What did work well were approaches based on finite-state machines (FSMs). These would lead the human participant through a more scripted interaction, reminiscent of earlier directed dialog techniques developed for goal-oriented systems. In the most extreme cases, the human would be exclusively answering questions posed by the agent, but more flexible and dynamic FSMs are also possible. That said, a well-scripted static FSM can be quite compelling and can produce a good experience for the user despite not always being characteristic of a natural human conversation, in which the roles are more evenly balanced and each participant is expected to take some initiative in developing the conversation.
In defining our initial approach to the challenge, we acknowledged the need to have FSMs manage parts of the interaction, but we were more interested in creating a framework that could support more balanced conversations. Conversational systems at this point in time need to incorporate scripted interaction grounded in some particular domain (say movies or sports), but they also need to detect and understand conversational signals from the human (e.g. expressions of confusion or praise) and be able to generate their own such signals in appropriate contexts. For example, acknowledgment is key to communicating a sense of engagement (``paying attention'') and allowing the listener to maintain a useful model of their counterpart's internal state. To that end, we took inspiration from early work on conversation, in particular, that of \cite{sacks1974simplest} and \cite{ventola1979structure}. The main observations in these works focused on how interlocutors manage the mechanics of conversation, and on how casual conversation reflects an understanding shared and accepted by both interlocutors. We need to point out that this earlier work is observational and cannot translate directly into a specific computational approach; but it does provide a framework for developing conversational systems and sheds light on the overall structure of human conversation, mimicking which we hold as our ultimate goal.
Tartan is focused around two main dialog strategies: it makes use of FSMs to provide for locally cohesive structure (such as an introduction and topic-specific episodes) and attempts to introduce less constrained responses through the use of a variety of generation and retrieval techniques. Our approach utilizes an array of FSMs which, when triggered, provide a robust but somewhat constrained conversation and can be swapped in and out based on input from the user. Tartan also makes use of an intent detection module that identifies various conversational markers expressed by the human and guides the conversation in a way which attempts to be reactive on a turn-by-turn basis. Simultaneously, less constrained, and therefore more novel, responses are available to the system via a set of response generators with access to various databases and resources, which can be triggered to provide a completely unscripted experience.
In this paper, we discuss Tartan's architecture, our approach to natural language processing and response selection, as well as our FSM architecture. We will present some analysis of conversations and summarize our lessons learned.
\section{Related Work}
This is the second year of the Alexa Prize competition. As such, we have had the fortune to study a previous cohort of socialbots and learn from their strengths and weaknesses. Sounding Board, the inaugural Alexa Prize competition champions, displayed the strength of a robust dialog manager and diverse information retrieval modules \cite{chatbot_wash}. Alquist, another 2017 Alexa Prize competitor, suggests utilizing information aggregation to store a database of static information that can easily be queried by a socialbot \cite{chatbot_alquist}. They do this by generating knowledge bases of topics, and by continuously expanding their bot's knowledge by introducing new information each day created from news outlets and social media. Many of the 2017 Alexa Prize socialbots were developed with similar architectures (see Figure \ref{figure:generic_archit}).
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth, height=0.2\textwidth]{generic_architecture.jpg}
\caption{A generic socialbot system architecture}
\label{figure:generic_archit}
\end{figure}
First, Automated Speech Recognition translates the speech into text. Next, the text is processed by a Natural Language Understanding module. The Natural Language Understanding module contains features such as intent recognition, topic and domain detection, anaphora resolution, and Named Entity Recognition \cite{DBLP:journals/corr/abs-1801-03604}. Some teams augmented this NLU with other features. Sounding Board augments its Natural Language Understanding with an error handling module, which controls for errors in Automated Speech Recognition and Natural Language Processing \cite{chatbot_wash}. Next, the Natural Language Unit hands off information to the Dialog Manager. The Dialog Manager tracks the context and history of the conversation. Slugbot's Dialog Manager modeled dialog flow as a state graph \cite{slugbot}. Magnus built multiple Finite State Machines to model dialog within a specific topic (e.g. movies) \cite{magnus}. We took inspiration from Magnus and built a dynamic Finite State Machine to model dialog flow. Previous Alexa Prize teams primarily used three techniques for response generation: neural text generation, information retrieval, and templates. The teams ensembled multiple response generators. The MILA Team, for example, utilized 22 different response models in their bot \cite{mila}. Ultimately, a response is chosen using a selection strategy and a response output manager. Most teams used a rule-based logic system to determine their response ranking. Meanwhile, the MILA team utilized reinforcement learning for its response ranking \cite{mila}. Some teams, such as Sounding Board, used their response output manager to control prosody \cite{chatbot_wash}.
\section{System Architecture}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth, height=0.6\textwidth]{tartan_arch}
\caption{System Architecture Diagram}
\label{figure:archit}
\end{figure}
\subsection{Overview}
Figure \ref{figure:archit} shows the design of the Tartan socialbot. It makes use of the infrastructure provided by the Amazon Alexa Prize team (\textsc{cobot}), which was based on their analysis of system architectures developed during the first competition. The bot is intended to be used via an Alexa device, such as an Echo or a Dot. Participating teams did not have any control over which device was used, or over the circumstances of use. For example, we observed a 460-turn conversation at one point; it was not clear to us what was going on. The front end returns a top-$n$ list of utterance hypotheses, as well as individual word timings, plus word-level confidence scores. We used the latter for analysis but not in the running system. Tartan was instrumented using the AWS CloudWatch \footnote{\url{https://aws.amazon.com/cloudwatch}} tool, to provide ongoing performance monitoring.
\subsection{Pre-processing and Natural Language Understanding (NLU)}
A well-functioning socialbot needs to understand what users are saying and react accordingly. The infrastructure provided several kinds of information that we made use of when generating and selecting responses. These included:
\textbf{Sentiment:} We use VADER (Valence Aware Dictionary and sEntiment Reasoner) \footnote{\url{https://github.com/cjhutto/vaderSentiment}} for sentiment analysis \cite{gilbert2014vader}. VADER has been designed specifically for social media text, which is closer to human dialogue as compared to the data used for training other open source sentiment classifiers such as NLTK \cite{NLTK}.
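A minimal usage sketch of the VADER step (the utterance is illustrative):

\begin{verbatim}
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("i really loved that movie")
# 'compound' is a normalized score in [-1, 1]; 'neg'/'neu'/'pos' sum to 1.
utterance_sentiment = scores["compound"]
\end{verbatim}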
\textbf{Topic:} Amazon's topic classification API is used to get a probability distribution over a set of topics for each user utterance.
\textbf{NER:} Recognizing the named entities (e.g. person, organization, etc.) in an utterance is an important step to understanding a user's intent. Tartan uses SpaCy \footnote{\url{https://spacy.io/usage/linguistic-features}} to perform named entity recognition.
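A minimal sketch of the SpaCy NER call (the model name and utterance are illustrative):

\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("i just watched a movie with matthew mcconaughey")
entities = [(ent.text, ent.label_) for ent in doc.ents]
# e.g. [('matthew mcconaughey', 'PERSON')]
\end{verbatim}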
\textbf{Anaphora resolution:} To understand the correct interpretation of what a user says, pronouns and other referring expressions must be connected to the right mentions. These mentions could be in the same turn or a previous turn. We use NeuralCoref \footnote{\url{https://github.com/huggingface/neuralcoref}}, which implements the deep learning based coreference resolution system from \cite{clark2016improving}.
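NeuralCoref plugs into the SpaCy pipeline; a minimal sketch of the resolution step:

\begin{verbatim}
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)       # registers the coref component
doc = nlp("My sister has a dog. She loves him.")
resolved = doc._.coref_resolved    # pronouns replaced by their mentions
\end{verbatim}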
\textbf{Intent Detection:} Initially, within the context of an FSM, expected responses were predicated on one of the Amazon intents\footnote{\url{https://developer.amazon.com/docs/custom-skills/built-in-intent-library.html}} and on the sentiment extracted from the user utterance. Under other conditions, keywords were used as query terms sent to our response generators. One of our goals was to move beyond this level of understanding.
In examining our conversation data we found that an overwhelming proportion of user intents were labeled as a \texttt{CatchAllIntent}, meaning that the intent classifier could not come up with a confident label. We concluded that the intents that were well recognized were ones that were based on data from goal-directed systems (e.g. utterances such as \textsc{yes, no, stop}). Similarly, topic labels were limited in scope (understandably, given the training data we assume was available).
Coverage for intents and topics is shown in Table \ref{table:intents}. Only items that accumulate to approximately 95\% of the total are shown. Note that, except for \textsc{yes}, \textsc{no}, and \textsc{stop}, few intent labels are informative. To improve coverage we created intents, such as ``yes\_intent'', that include items apparently not covered in the corresponding intents. Likewise, about 70\% of the topic labels are uninformative. (The prevalence of the Movie\_TV topic labels might be attributed to Tartan's reliance on that topic of conversation.) These data are based on Tartan's July 2018 epoch.
\begin{table}
\centering
\begin{tabular}[t]{|l|c|c|c||c|c|c|c|}
\hline
Intent & count & percent & cumul \%& Topic & count & percent & cumul \%\\
\hline
&&&&&&&\\
CatchAllIntent &110593 &33.6 &33.6 &Phatic &197826 &70.1 &70.1 \\
yes\_intent &69283 &21.1 &54.7 &Movies\_TV &22097 &7.8 &77.9 \\
no\_intent &42553 &12.9 &67.7 &Music &9969 &3.5 &81.5 \\
Launch\...Intent &36357 &11.1 &78.7 &Other &9901 &3.5 &85.0 \\
fsm\_request &29284 &8.9 &87.6 &SciTech &6221 &2.2 &87.2 \\
A\...StopIntent &17894 &5.4 &93.1 &Celebrities &4699 &1.7 &88.9 \\
conclude &5116 &1.6 &94.6 &Sports &4284 &1.5 &90.4 \\
&&& &News &3906 &1.4 &91.8 \\
&&& &Games &2975 &1.1 &92.8 \\
&&& &Pets\_Animals &2929 &1.0 &93.9 \\
&&& &Politics &2736 &1.0 &94.8 \\
&&&&&&&\\
\hline
\end{tabular}
\caption{Intents and topics observed over the July 2018 epoch}
\label{table:intents}
\end{table}
\begin{table}
\centering
\begin{tabular}[t]{rl|rl}
\hline
&Gambit&&Favorite\\
\hline\\
145 & SPORTS & 113& COLOR\\
134 & MUSIC& 53 & SONG\\
109 & VIDEO GAMES& 39& BOOK\\
78 & MOVIES& 31& FOOD\\
64 & BOOKS& 31& ANIMAL\\
50 & IT& 23 & SPORT \\
48 & THE& 18& MOVIES\\
46 & GAMES& 16& SPORTS\\
41 & SEX& 16&ME \\
40 & FOOD& 14& DOGS\\
40 & ANIMALS& 14& ACTOR \\
\\
\hline
\end{tabular}
\caption{Slot values, with observed frequency, found for the concepts \texttt{Gambit.gambit\_slot} and \texttt{Backstory.backstory\_favorite\_slot}.}
\label{table:slots}
\end{table}
As a consequence, we developed our own set of intents. Practically, we were happy to make use of the existing Amazon intents and our extensions; we therefore concentrated on those utterances that were labeled \texttt{CatchAllIntent}. Specifically, our goal was to identify intents that represented communication about the conversation, as opposed to specific topics of conversation. To accomplish this goal, we represented this information in the form of a semantic grammar. While such grammars have been shown to be useful for goal-directed systems \cite{Ward1994RecentII}, it was not immediately clear the approach would work in the current case. We conjectured that the ``language'' of conversation would, for practical purposes, be limited enough to make reasonable coverage possible. We made one extension to semantic parsing, to allow for dynamic slots. These are different from pre-specified slots; specifically, certain phrases in the grammar, for example, \textit{what is your favorite} will have immediately following material (words not already captured as a different concept) available along with the carrier concept. Doing this allows other parts of the system to deal with the material, ideally in a suitable context. We expect that in the long run it will be possible to create parsers that learn these concepts and identify slot values. At this time, however, we first need to develop an understanding of conversational semantics.
Grammar development was based on sessions collected in the last week of June. The data from intermediate-length (7--12 turns) sessions were examined and utterances of interest were entered into a Phoenix semantic grammar \cite{Ward1994RecentII}. Once the initial grammar was constructed, we used it to process the corpus iteratively and to identify utterances that were not covered, allowing the grammar to be augmented. The grammar was further modified during July, but primarily to make corrections. However, the set of concepts we initially identified remained the same.
Our final grammar captured intents for 62.7\% of \texttt{CatchAllIntent} utterances, computed over all the July 2018 data. Our intent inventory was focused on conversationally relevant concepts and did not make an effort to explicitly capture domain-specific ones (say for sports or news) or ones we believed should otherwise be ignored at this level of understanding (for example a rambling discourse covering several subjects). We expect that there might be topic-specific concepts that it would be useful to capture in the context of particular FSMs, but we chose not to pursue this. Examining the residuals of unparsed utterances, we found that much of this material appeared to be driven by specific exchanges (for example references to movies) or was uninterpretable. We expect that managing such utterances can be done more effectively in the context of a topic-specific FSM.
Users, on average, produced 1.45 concepts in an utterance. As an example, we observed \texttt{$[$Acknowledgment, Assent, Disclosure, Disclosure.disclosure\_slot$]$}, a reasonable succession of concepts for an utterance such as \textit{really || i agree || my favorite movie is \underline{star wars}} (concept bounds are marked by ``||'', slot value is underlined). One advantage of doing this is that it allows the system to separate conversational acts (the first two) from the core act (a disclosure). The bot might formulate a different response if, say, the \textsc{Assent} was a \textsc{Dissent}. As well, the slot value can be made use of as part of a follow-on response.
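The following simplified Python sketch mimics this concept-plus-slot tagging; the real system uses the Phoenix parser rather than regular expressions, and the patterns shown are illustrative fragments, not our actual grammar:

\begin{verbatim}
import re

# Toy grammar fragments; the actual Phoenix grammar is far richer.
CONCEPTS = [
    ("Acknowledgment", r"\breally\b"),
    ("Assent",         r"\bi agree\b"),
    ("Disclosure",     r"\bmy favorite \w+ is\b"),
]

def parse(utterance):
    found = []
    for name, pattern in CONCEPTS:
        match = re.search(pattern, utterance)
        if match:
            found.append(name)
            if name == "Disclosure":
                # Dynamic slot: attach trailing, otherwise-unparsed words.
                found.append(("Disclosure.disclosure_slot",
                              utterance[match.end():].strip()))
    return found

parse("really i agree my favorite movie is star wars")
# -> ['Acknowledgment', 'Assent', 'Disclosure',
#     ('Disclosure.disclosure_slot', 'star wars')]
\end{verbatim}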
We identified a total of 37 major concepts, with an additional 74 sub-concepts that provided specializations deemed useful for flow control and response generation (this includes slot phrases).
For example, \textsc{Backstory} includes 17 sub-concepts corresponding to things that users asked about and that could have specific answers (such as favorites). The concepts were organized into nine groupings useful for driving conversation understanding; the six most frequent ones are shown in Table \ref{table:concepts}. Together these six groups account for 97\% of observed concepts. One thing to note is the high frequency of Social concepts. In part, this was due to each session beginning with an exchange of greetings. On the other hand, about 18\% of detected concepts were an Address, i.e., the user calling Alexa by name.
Note that parsing allows for multiple intents to be identified per utterance. Note also that sub-concepts can be designated as ``\texttt{\_slot}'' or ``\texttt{\_focus}''; meaning that unparsed words following the concept phrase are attached to it as a slot value.
\begin{table}
\centering
\begin{tabular}[t]{cccccc}
\hline
Social & Topic Mgmt & Convo State & Interest & Rejection & Convo Control\\
28.1\% &24.5\% &22.8\% &11.9\% & 5.0\%& 4.7\% \\
\hline \\
{Address} & {Gambit} & {Acknowledgment} & {Backstory} & \texttt{Confusion} &{Conclude}\\
{Social} & {CoreTopic} & {Concur} & {Disclosure} & {Rebuttal} & {Elaboration}\\
{OtherBot} & {Question} & {Dissent} & {Preference} & \texttt{Nasty} & {ExplainResponse}\\
& {ChatGambit} & {Assent} & &{Rejection} & {Assertion}\\
& {TellMe} & {Approval} & &&{Continuation}\\
& {ActionGambit} & {Demur} && &{Repeat}\\
& {Curious} & &&&\\ [1ex]
\hline
\end{tabular}
\caption{Concepts observed over the July 2018 epoch; concepts in a column are listed in decreasing frequency observed.}
\label{table:concepts}
\end{table}
\subsection{Dialog Manager}
Tartan's unique design required a robust way to choose between response generators and FSMs on the fly and to be able to switch from one to the other while remembering the states of all currently active FSMs in order to potentially return to them in the future. For this purpose, we implemented a dialog manager consisting of two main parts, the FSM Manager and the Selecting and Ranking Strategy. The FSM Manager is in charge of all of Tartan's FSMs, while the Selecting Strategy jumps in if no applicable FSMs are found and chooses a list of appropriate response generators to run. The responses from these response generators are then filtered by the Ranking Strategy to output the best possible response. Note that we do not need to filter the FSM responses since they were created specifically to be valid responses.
\subsubsection{Selecting and Ranking Strategy}
\paragraph{Selecting Strategy} We used an intent map from the list of possible intents recognized by our bot to various response generators. The mapping is a many-to-many function where one or more intents can map to one or more response generators. We learned these mappings empirically from data collected during the competition. The intents can map both to single-turn response generators and to specific FSMs; additionally, FSMs have the ability to override the intent mappings dependent on conversational context. The selecting strategy outputs a list of FSMs or response generators from which we generate candidate responses.
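A simplified sketch of the intent map (the specific intents and generator names are illustrative):

\begin{verbatim}
# Many-to-many map from intents to FSMs and response generators.
INTENT_MAP = {
    "joke_request":   ["JokesFSM"],
    "fsm_request":    ["MovieFSM", "SongsFSM"],
    "news_request":   ["NewsRetrieval", "EVI"],
    "CatchAllIntent": ["EVI", "TwitterRetrieval", "NeuralGeneration"],
}

def select_candidates(intents):
    candidates = []
    for intent in intents:
        candidates.extend(INTENT_MAP.get(intent, []))
    return list(dict.fromkeys(candidates))  # deduplicate, keep order
\end{verbatim}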
\paragraph{Global Ranking Strategy} Tartan divides candidate response generators into two groups: FSMs and response generators. FSMs are queried first and if an FSM response is generated then the response generators are not run, which decreases latency. It is important to note that FSMs have the ability to query response generators; hence, response generators may be queried as part of an FSM response, which is considered an FSM response and not a response generator response.
\paragraph{FSM Ranking Strategy} The FSM Manager contains a ranking mechanism that chooses the correct FSM and FSM response given a list of potential active FSMs and FSM candidate responses. This is detailed in Section 3.4.1.
\paragraph{Response Generator Ranking Strategy} Our response ranker is a two-tiered ranking system. First, we perform a filtering, which removes undesirable responses. This includes responses that are inappropriate, ungrammatical, irrelevant, excessively negative, too long, etc. To perform this soft filtering we run our candidate responses through several NLU modules. First, we leverage the main NLU pipeline that processes incoming user utterances. We also implemented a rule-based grammar checking module, and a neural relevance module that scores the relevance of a candidate response conditioned on the utterance and the previous turn. Additionally, we remove inappropriate responses and/or certain controversial responses. Whether or not a controversial response is removed depends, in part, on the module that generated the response. For example, our controversiality module is much more sensitive to responses retrieved from social media than to responses retrieved from verified news sources. After filtering and scoring candidate responses, we feed our candidate responses into a rule-based ranking model that selects a candidate response conditioned on the response generator, the utterance/response pair, and the soft filtering module.\par
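A simplified sketch of the two-tier ranking; the filter functions, thresholds, and generator priorities shown are placeholders for the actual modules:

\begin{verbatim}
def rank(candidates, utterance, history):
    # Tier 1: soft filtering of undesirable responses.
    survivors = [c for c in candidates
                 if is_appropriate(c.text)                   # placeholder
                 and is_grammatical(c.text)                  # placeholder
                 and relevance(c.text, utterance, history) > 0.5
                 and len(c.text.split()) < 60]
    # Tier 2: rule-based preference over the surviving candidates.
    priority = {"NewsRetrieval": 0, "EVI": 1, "TwitterRetrieval": 2}
    survivors.sort(key=lambda c: (priority.get(c.source, 3),
                                  -relevance(c.text, utterance, history)))
    return survivors[0] if survivors else None
\end{verbatim}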
\subsection{Finite State Machines (FSMs)}
Finite state machines (FSMs) were an integral part of Tartan's conversational strategy. FSMs allowed Tartan to more easily maintain context throughout a conversational arc, and facilitated a more structured conversation that Tartan could logically analyze and continue. One challenge of utilizing FSMs was finding an appropriate conversational balance. At one extreme, the bot can completely control the conversation by asking the interlocutor a series of directed questions, while at the other end of the spectrum the bot can instead allow the interlocutor to dictate the conversational arc, while maintaining a set of predefined responses for conversing on specific topics and a logic that determines how the responses may be combined and structured. Both of these strategies have certain advantages and weaknesses. The more the socialbot directs the conversation, the easier it is to maintain context and to generate appropriate responses at each turn. This permits topical conversations that are more significant and penetrating; however, having the socialbot dictate the arc of the conversation can leave the interlocutor feeling overly constrained and often results in a worse user experience. Alternatively, allowing the interlocutor to dictate the conversation flow allows the user too much freedom, and usually results in the user posing questions that the bot is not capable of correctly parsing and responding to. The ideal socialbot strikes a perfect balance between driving the conversation when necessary and also knowing when to back off and allow the user to take control. This, however, assumes that the bot is perfectly capable of handling any utterance the user throws at it. In reality, any current socialbot must often err on the side of caution by providing a bit more structure to the conversation in order to be better able to anticipate the user's response, at the risk of reducing conversational engagement and novelty.
Tartan's FSM framework involves an FSM Manager which correctly activates and hands off between multiple FSMs. Tartan makes use of a Base FSM that manages the introductory exchange and provides a fall-back when other FSMs exit. The fall-back includes proposing a new topic, minor chitchat, as well as solicitation of topics that the user might want to talk about. Apart from the Base FSM, our design incorporates two categories of FSM: topics and interruptions. Topic FSMs include topics such as Movies, Jokes, and Backstory. Interruptions handle various kinds of interjections a user might voice.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{freeform2.jpg}
\caption{Sample diagram of FSM arc structure}
\label{figure:freeform}
\end{figure}
In Figure \ref{figure:freeform} we can see an example arc structure of the Freeform FSM, a successor to a more constrained base FSM (see the Experiments section). The conversation starts in the dummy START state and immediately enters state 0, which greets the user and then follows another null arc (an arc which does not cause a new conversation turn) into state 1, which asks how the user is doing. After the user replies, arc 1 is usually followed, which asks what the user would like to talk about, waits for a response, then enters the ``Empty State'' which allows different FSMs and response generators to grab the user's utterance and respond to it. Every once in a while, a personal question such as ``what is your favorite color?'' is attempted, although the user can opt out by declining said question. Not shown here is a state which specifically transitions into another FSM (although the empty state does just that when a user's utterance activates a new FSM). This sort of FSM structure allows for incredibly detailed yet flexible control of a conversation, and we hope that we will soon be able to learn these FSM structures automatically from conversational data gathered throughout this competition.
\subsubsection{FSM Manager}
At the core of Tartan's FSM framework lies the FSM Manager, which utilizes information from the NLU such as intent, sentiment, etc. to choose the correct FSM to respond with and retrieve its response. The FSM Manager first attempts to get a response from the currently active FSM, but failing that it looks for other FSMs which fit the user's utterance. Using the FSM Manager, an FSM can provide a partial response and hand off to a separate FSM to complete it. An FSM can also queue up another FSM or utilize an existing response generator for all or part of its response. Each FSM has a set of functions, called arcs, which coincide with transitions between FSM states. These functions return scores which indicate how well the given arc corresponds to the current context. The score functions take as input the NLU output, the conversation history, and the conversation context and return a real valued score. This allows one or more FSMs to simultaneously propose candidate responses. Finally, the FSM Manager selects a candidate response using a decision tree that reasons over the conversation history, conversation context, active FSM, and FSM candidate response scores.
\par
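The following sketch illustrates the arc abstraction; the real score functions reason over the full NLU output rather than the toy signature shown here:

\begin{verbatim}
class Arc:
    def __init__(self, target, score_fn, respond_fn):
        self.target = target        # state this arc transitions to
        self.score = score_fn       # (nlu, history, context) -> float
        self.respond = respond_fn   # generates the response text

class FSM:
    def __init__(self, arcs_by_state, state="START"):
        self.arcs, self.state = arcs_by_state, state

    def propose(self, nlu, history, context):
        # Score every outgoing arc of the current state.
        scored = [(arc.score(nlu, history, context), arc)
                  for arc in self.arcs.get(self.state, [])]
        return max(scored, default=(0.0, None))

    def take(self, arc, nlu):
        self.state = arc.target
        return arc.respond(nlu)
\end{verbatim}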
Another key part of the FSM Manager is the FSM stack. This is a data structure which keeps track of which FSM is currently active and, most importantly, of which FSMs led to the current one. A significant aspect of the FSM Manager is the ability to hand off from one FSM to another: e.g., the Base FSM can hand off to the Movie FSM if the user expresses a desire to talk about movies, and once the Movie FSM exits, it is popped off the FSM stack and the Base FSM picks up where it left off. This is important because it models the conversational structure of a human conversation. When people converse they generally instantiate and follow one conversational arc at a time, but they remember the other things that have been brought up in the conversation and are prepared to bring them up when the current arc is completed or interrupted. Of course, our FSM stack cannot fully recreate all the intricacies of a live human conversation, since people often merge arcs and reference world knowledge or perform deductions beyond the capability of any current socialbot, but the FSM stack can maintain conversational context quite well.
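A minimal sketch of the FSM stack logic:

\begin{verbatim}
class FSMStack:
    def __init__(self, base_fsm):
        self.stack = [base_fsm]

    def hand_off(self, fsm):
        self.stack.append(fsm)     # e.g. Base FSM -> Movie FSM

    def active(self):
        return self.stack[-1]

    def on_exit(self):
        if len(self.stack) > 1:
            self.stack.pop()       # resume the interrupted FSM
        return self.active()
\end{verbatim}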
\subsubsection{Topic FSMs}
\paragraph{MovieFSM}
The MovieFSM gets triggered when the user's intent is related to movies in general or any specific movie in particular. In order to provide the best possible responses, we compiled a movie database by querying the IMDB database to get the top 500 latest movies, rated by users from around the world and spanning multiple genres such as mystery, horror, comedy, and romance. For each movie we store the title, genre, cast, characters, plot summary, etc. This allows our movie FSM to ask the user questions regarding their favorite characters in the movie, the performance of various actors, and so forth, providing a rich conversational arc with various branches to keep the conversation from getting stale. We also pay attention to how interested the user seems in the current conversation by monitoring utterance-level sentiment and sometimes either proposing a new movie, a new genre, or exiting the movie FSM altogether if the utterances are overwhelmingly negative.
\paragraph{BackStoryFSM}
The Backstory FSM responds to various questions that people want to ask of Tartan (see Table \ref{table:slots} for some examples). Backstory elements were identified either from sessions or included just for good measure. Based on our examination of early sessions we decided that Tartan should maintain its agent/robot identity and not try to pretend to be too human. For example, in response to questions about family, Tartan answers ``I'm just a bot. I don't have relationships like humans do. Maybe someday...'', but Tartan does have favorite things, like colors and movies. Some responses can have an additional state, e.g. ``and what's your name'' that provides a follow-up.
\paragraph{JokesFSM}
The JokesFSM gets triggered when the NLU indicates the user's intent to hear a joke. To provide an output joke, we needed to first create a database of jokes. This database was created by querying the Internet for top jokes and puns. These queries were deliberately geared towards providing short jokes, since we believe that during a conversation a user would rather hear a short joke than a lengthy story. This decision was informed by reviewing conversation data, where oftentimes the user gets noticeably disinterested when the bot produces overly lengthy responses. We collected several dozen of the best jokes and puns that we could find, splitting them up into one-part and two-part jokes. Two-part jokes are jokes which require the user to provide some input (What do you call a lion who never tells the truth? \ldots ``What?'' \ldots The lying king!). The JokesFSM selects a joke at random and outputs it to the user.
\paragraph{SongsFSM}
The SongsFSM provides information about top songs from around the world. Furthermore, it is capable of finding a particular song based on a user request and providing song information to the user. We use the Musixmatch API \footnote{\url{https://www.musixmatch.com}} for song information.
\subsubsection{Interruption FSMs}
The term ``interruption'' is perhaps inaccurate from the perspective of a conversation (it may not act like one) but it does interrupt the currently executing FSM. Interruptions are meant to be short; the user asks a question or makes a comment that should be responded to but should not derail the conversation. The expectation is that under normal circumstances the interrupted FSM (e.g. a topic FSM) will be resumed.
The current set is shown in Table \ref{table:interrup}.
\begin{table}
\centering
\begin{tabular}{|r|l|}
\hline
\textbf{Concur} & ``i know what you mean'' \\
\textbf{Approval} & ``that would be great''\\
\textbf{Acknowledgment} & ``okay'' ``thanks'' ``i am not surprised'' \\
\textbf{Assent} &``yes of course'' \\
\textbf{Dissent} & ``no i do not like it'' \\
\textbf{Disclosure} & ``i like gardening'' ``today is my birthday'' \\
\textbf{Confusion} & ``you asked me that already'' ``are you there'' \\
\textbf{Praise} & ``you are very smart'' ``that is cute'' \\
\textbf{Boredom} & ``i am bored'' ``you are boring'' \\
\hline
\end{tabular}
\caption{Interruption FSMs in Tartan}
\label{table:interrup}
\end{table}
\subsection{Response Selection and Generation}
We made use of an array of generators to create responses. These include:
\paragraph{Templates}
We identified that many frequently asked questions were non sequiturs; they were usually not asked with the intention of changing or driving the subject of conversation, but out of curiosity, in order to test and explore the socialbot's abilities. Moreover, the interlocutor would often immediately return to discussing the previous subject. We implemented a generalized model utilizing utterance-level embeddings and templated responses. We store pairs of common questions and ideal answers and, given a query, we calculate a similarity metric across all the stored templates. To compute utterance similarity, we calculate the cosine distance of the user utterance with each template. If the similarity metric exceeds our threshold, we return the stored ``ideal'' answer. In order to build this system, we took advantage of Facebook's InferSent \cite{infersent} and Google's Universal Sentence Encoder \cite{google_use} to build our sentence-level representation.
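A simplified sketch of the matcher, where \texttt{embed()} stands in for InferSent or the Universal Sentence Encoder and the threshold and template are illustrative:

\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

TEMPLATES = [("are you a robot", "I am a socialbot, happy to chat!")]
TEMPLATE_VECS = [(embed(q), a) for q, a in TEMPLATES]  # embed() assumed

def template_response(utterance, threshold=0.85):
    u = embed(utterance)
    sim, answer = max((cosine(u, v), a) for v, a in TEMPLATE_VECS)
    return answer if sim > threshold else None
\end{verbatim}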
\paragraph{EVI}
EVI is a question answering service provided by Amazon. It provides strong and accurate responses for factual questions. For example, EVI is able to successfully answer questions such as ``Who is the president of the United States?'' In addition to answering factual questions, EVI was often able to provide superficially strong answers to phatic questions, such as ``How is your day going?'' While EVI serves its intended purpose quite well, it can struggle when it is deployed in a conversational setting. EVI has no way of setting its context, so it can only reply to the exact text that is passed to it. Additionally, EVI responses are often exact answers to questions posed to it, unlike in natural conversations, where humans often append follow-up questions or content after answering a question in order to facilitate the conversation.
\paragraph{Fact Retrieval}
Fact retrieval presents relevant facts conditioned on past utterances or on the current utterance. In a sense, the fact retrieval response generator can be thought of as a greedy information retrieval module. The motivation behind fact retrieval is to present the user with an interesting fact that minimizes the probability of a user ending the conversation on the following turn. This differs from the rest of our response generators, which are developed to maximize conversation quality across the duration of a conversation. Currently, we are using a domain-specific trivia fact base scraped from IMDB and have focused on movie facts. We periodically output these facts in a conversation when we identify that the user has expressed interest in an entity about which we have a stored fact, or if we identify that a user is particularly disengaged in the conversation. This allows the bot to present the kind of pseudo-random trivia which often comes up in human conversation (e.g. ``did you know that \ldots?''). We plan to expand our database of facts to other domains in order to provide interesting facts and tidbits to the user without repetition.
\paragraph{Neural Generation}
We used a dialog framework developed by FAIR (Facebook AI Research) called ParlAI \cite{parlai}. It utilizes a key-value memory network which essentially memorizes the training data and picks the best response to any given input. The motivation behind using such a network in lieu of a more traditional sequence-to-sequence model was that most sequence-to-sequence models fail to generate grammatically correct or sensible responses most of the time. The keys, in this case, can vary, ranging from entities or keywords to subject-predicate sets of knowledge triples. The network uses efficient memory storage based on hashing, which assures efficient and quick retrieval of relevant content via associative search over the memory. This model is, in essence, a general framework for storing and retrieving context in memory based on each scenario. We used a PyTorch implementation of ParlAI trained on PersonaChat, and as a result this model is mainly useful for chit-chat and phatic conversations. This is largely due to the nature of the PersonaChat dataset, which is mostly composed of themed small talk between people adhering to certain assigned personalities. Unfortunately, even this state-of-the-art response generation approach fails to match a basic FSM in conversational quality, due mostly to the lack of context. There has been work on networks which are capable of including context in their generation process, but they are as yet not robust enough to be feasible for this particular use case.
\paragraph{Twitter Retrieval}
Twitter is a rich source of information. Tweets are at most 280 characters long and provide brief replies on a diverse range of topics. We used RAKE \cite{RAKE} to extract keywords from the user's utterance and passed these keywords to the Twitter API to retrieve tweets. We then cleaned the tweets, as they often contain hashtags, emoticons, and abbreviations. Once ready, the tweets were passed through an extra layer of profanity filtering, since content from sites like Twitter requires extra care due to the large amount of profanity and political argument it contains.
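A sketch of the keyword extraction step using the \texttt{rake\_nltk} package; \texttt{search\_tweets()} is a hypothetical wrapper around the Twitter API, and the cleaning and profanity filtering steps are omitted:

\begin{verbatim}
from rake_nltk import Rake  # requires the NLTK stopwords corpus

rake = Rake()
rake.extract_keywords_from_text("did you hear about the new mars rover")
keywords = rake.get_ranked_phrases()[:3]    # top-ranked phrases
tweets = search_tweets(" ".join(keywords))  # hypothetical API wrapper
\end{verbatim}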
\paragraph{News Retrieval}
This module focuses on responding to users' questions about current affairs. We extract keywords and named entities from the user's utterance and send these to News API\footnote{\url{https://newsapi.org}}. We focus on headlines in order to respond with popular news for the day, sorting the articles by popularity. We also get the article description, so that we can elaborate on the headline if prompted.
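A sketch of such a query against the public News API; \texttt{API\_KEY} is a placeholder, the endpoint and parameters follow the public documentation, and error handling is omitted:

\begin{verbatim}
import requests

resp = requests.get(
    "https://newsapi.org/v2/everything",
    params={"q": "mars rover", "sortBy": "popularity",
            "apiKey": API_KEY},
)
for article in resp.json().get("articles", []):
    headline, blurb = article["title"], article["description"]
\end{verbatim}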
\paragraph{Conversation Retrieval}
This module aims to respond to phatic utterances such as ``How are you?'', ``What do you want to do today?'', etc. To do this, we used the Daily Dialogue dataset \cite{dailydialog}. This dataset contains everyday conversations between two people. We picked question-answer pairs as the training dataset and used StarSpace embeddings \cite{starspace} to encode them. When a user talks to Tartan, we encode the utterance, find the most similar utterance from the conversations, and output the response to that utterance. We analyzed last year's Alexa Prize conversations and found that the best cutoff threshold is 0.82. Initially, we tried encoding all sentences instead of just question-answer pairs. The responses generated were out of context but on-topic. For example, if the query is ``I was wondering if I should go hiking today'', the response could be ``Hiking is my favorite hobby''. The threshold to pick the best response for this was 0.98. The model performed reasonably well during live conversations; however, one possible improvement to this model is to incorporate the user's intent so that the model has more information about the given utterance to work with.
\section{Analysis and Evaluation}
In this section, we discuss interesting results and insights gained throughout our participation in the competition. A number of these findings confirm conventional wisdom about human conversation. Nevertheless, there has been limited empirical evidence to substantiate that such wisdom extends beyond human-human interactions into human-agent interactions.
One significant result is the impact that an interlocutor's mood has on conversation quality. In the opening turns of a conversation, our socialbot asks the interlocutor how they are doing. We store the interlocutor's response and classify their ``mood'' along a spectrum of possible moods. At the extreme ends of this spectrum are ``mood\_unhappy'' and ``mood\_great''. We find that users classified as ``mood\_great'' rate conversations, on average, more than 1.4 points higher than users classified as ``mood\_unhappy''. This is a striking result given that the rating system ranges from 1 to 5. This raises an interesting use case for conversational agents. How can conversational agents create positive interactions with interlocutors who are predisposed to be displeased with the conversation? It is likely that an ideal conversation with an unhappy user would be substantially different from an ideal conversation with a happy user. We additionally find that users report being in a great mood roughly 7x more frequently than they report being in an unhappy mood. Given the relative infrequency with which users report being unhappy, one must also consider whether this use case is significant enough to warrant major development.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{sentiment_ratings.png}
\caption{A regression analysis of sentiment score against scaled rating. The ratings have been scaled to have a mean value of 3.}
\label{figure:sent_ratings}
\end{figure}
Similarly, we find a number of other metrics suggesting that user contentedness correlates with higher evaluations of our agent's conversations. We find that the sentiment of a user utterance correlates with a user's conversation ratings, as shown in Figure \ref{figure:sent_ratings}. We rate the sentiment of each user utterance on a scale ranging from -1 to 1, where -1 is most negative, 1 is most positive, and 0 is neutral. We regressed these utterance sentiments against conversation ratings. We find that each unit increase in sentiment corresponds to a 0.33 increase in conversation rating. We similarly find that user assents correlate with higher conversation ratings. Conversations with at least one user assent have 9\% higher ratings than conversations with at least one user dissent.
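The regression itself is straightforward; a sketch, with the per-utterance sentiments and conversation ratings assumed to be loaded from our logs:

\begin{verbatim}
from scipy.stats import linregress

# sentiments, ratings: parallel arrays extracted from conversation logs.
fit = linregress(sentiments, ratings)
print(fit.slope)   # ~0.33 rating points per unit of sentiment
\end{verbatim}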
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth, height=0.4\textwidth]{fraction_fsm_turns.png}
\caption{Percent of FSM turns in a conversation vs the average scaled conversation rating. The ratings have been scaled to have a mean value of 3.}
\label{figure:percent_fsm_turns}
\end{figure}
Our FSM Manager allows us to examine the average ratings of conversations with respect to the percentage of turns in each conversation that were provided by an FSM. Note that this does not necessarily mean that the output is entirely scripted; some of our FSMs queried databases and output different responses based on the input, although the responses were of course more controlled than those provided by one of the standalone response generators. Figure \ref{figure:percent_fsm_turns} shows the relationship between the percentage of a conversation that was FSM-driven and the average rating. For this graph we used conversations of lengths 3 to 15, since we found very short and very long conversations not to be representative of a user who is genuinely trying to talk to the bot. In the figure, we can see a largely monotonic increase in average conversation rating with the fraction of FSM turns contained in a conversation. This is a good sign and supports our theory that while standalone response generators may provide more novel responses than FSMs, overall users prefer the more contextually cohesive conversations provided by the FSMs via the FSM Manager.
We also found evidence that user perceptions of conversations are ephemeral; that is, they are heavily dependent on recent conversation history. Alexa Prize competition rules dictated that when an Amazon intent classification detected that a user desired to exit the conversation, the bot should immediately end the conversation without any further discourse. Empirically, we discovered that users would frequently have difficulty exiting a conversation with our bot. To remedy this, we implemented a goodbye module in our socialbot which, when prompted, would reply with ``It sounds like you don't want to talk anymore. Would you like to stop?'' After implementing this, conversations with our bot could be ended in two ways: a response could trigger the Amazon classifier and immediately end the conversation, or a response could trigger our socialbot's goodbye script, in which our bot asks the user whether they want to end the conversation and then expresses pleasantries before ending it. We found that conversations that ended with our bot's goodbye script were rated 11\% higher than conversations that terminated via Amazon's classifier. As these two cohorts of conversations were otherwise indistinguishable, it follows that our custom goodbye module is responsible for the increased ratings. Given the simplicity of our goodbye module, we theorize that the content it expresses does not itself improve conversation quality. Rather, we hypothesize that user opinions on conversation quality are mercurial and ephemeral and that, by ending the conversation on a positive tone, we are able to impress upon the interlocutor the appearance of a positive and more natural conversation.
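The confirmation logic of the goodbye module can be summarized by the following minimal sketch (function and phrase handling here are illustrative, not our exact implementation):
\begin{verbatim}
GOODBYE_PROMPT = ("It sounds like you don't want to talk anymore. "
                  "Would you like to stop?")

def goodbye_turn(state, user_text):
    # Two-step exit: confirm the stop intent, then end on pleasantries.
    if state.get("awaiting_exit_confirmation"):
        state["awaiting_exit_confirmation"] = False
        if user_text.strip().lower() in {"yes", "yeah", "sure", "stop"}:
            state["end_conversation"] = True
            return "It was lovely talking with you. Goodbye!"
        return "Great, let's keep chatting!"
    state["awaiting_exit_confirmation"] = True
    return GOODBYE_PROMPT
\end{verbatim}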
\section{Experiments}
We took the opportunity to explore various potential solutions to specific problems in socialbot operation. This section describes some of these.
\subsection{Controversiality Classifier}
\cite{zhang2018conversations} discusses important challenges in online social systems and shows that detecting early signs of conversational failure can reduce the prevalence of antisocial behavior, harassment, and personal attacks, which are sadly common on anonymized online forums. In the context of the Alexa Prize, it is important to flag subtly offensive content, both to guarantee that the output of our models does not contribute to conversational failure and to handle potentially concerning content coming from the user. Therefore, one of the challenges we tackled was the construction of a controversiality classifier. The main idea was to force our bot to take caution whenever the controversiality level of the conversation was higher than usual.
In order to build the classifier, we downloaded four years of Reddit data and took advantage of the ``controversiality'' score given by the API for each user message. This allowed a supervised approach to learning when to steer the conversation in a different direction, given how controversial the topic was. We started by building a bidirectional LSTM classifier with self-attention, mimicking similar recent successful approaches to sentiment classification. After tuning and improving our model, we were able to reach an accuracy of 80\% on our validation data; however, our results in live conversations were not consistent with this validation performance.
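For concreteness, a minimal PyTorch sketch of such a bidirectional LSTM with self-attention follows; layer sizes are illustrative assumptions rather than our tuned configuration:
\begin{verbatim}
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    # Bidirectional LSTM with additive self-attention for binary
    # controversiality classification.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # per-step attention score
        self.out = nn.Linear(2 * hidden_dim, 2)   # controversial vs. not

    def forward(self, token_ids):                    # (batch, seq_len)
        h, _ = self.lstm(self.embedding(token_ids))  # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1) # softmax over time steps
        context = (weights * h).sum(dim=1)           # weighted sentence vector
        return self.out(context)                     # class logits
\end{verbatim}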
The main reason for the failure of our model was that controversiality varies considerably across topics. For example, when talking about Nintendo video games on a particular forum, topics about Playstation might be controversial. When initially inspecting our data manually, we were not able to identify this phenomenon; only after a few attempts did the pattern become clear. To mitigate the problem, we decided to filter out a large number of channels, keeping only the more general ones where topic biases would not play an important part. Unfortunately, we were still not able to train an effective model. For future work, an interesting direction would be to build a topic-sensitive controversiality classifier, which could be even more useful in a socialbot setting.
\subsection{User Embeddings}
To help track context, we experimented with creating personalized user embeddings. These embeddings would track users across their various sessions speaking with Tartan. We hypothesized that we could leverage these user embeddings to make helpful topic suggestions to users. To model this problem, we treated users and user utterances as in a traditional sentence-matching problem: given a sentence and a set of articles, one wishes to find the article that is most likely to contain the sentence. We model our user embeddings such that each user is treated as an article and each user utterance is treated as a sentence. We trained this model using Facebook AI Research's StarSpace embeddings \cite{starspace} and the PersonaChat dataset \cite{personachat}, which consists of 164,356 crowdsourced utterances in brief human-human conversations. We found that our model was unable to generalize to live human conversation. On our validation dataset, our model had less than 10\% accuracy when assigning an utterance to a user. Qualitatively, the suggestions provided by leveraging our user embeddings were unsatisfactory and frequently unrelated to the conversation. We theorize that these poor results were due to the relatively few utterances spoken by each user in a given conversation. Ultimately, we removed this module and replaced our user embeddings with a bag-of-words model for tracking users. While the bag-of-words model lacks the context of user embeddings, it empirically performed better in the real world.
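As an illustration of the replacement, the following sketch tracks a user with a bag-of-words profile and scores hypothetical topic keyword profiles by cosine similarity (the topic lists are assumptions for illustration, not our curated ones):
\begin{verbatim}
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical topic profiles; real keyword lists would be curated.
TOPICS = {"movies": Counter(["movie", "film", "actor"]),
          "sports": Counter(["game", "team", "score"])}

def update_profile(profile, utterance):
    profile.update(utterance.lower().split())

def suggest_topic(profile):
    return max(TOPICS, key=lambda t: cosine(profile, TOPICS[t]))
\end{verbatim}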
\subsection{Utterance Embeddings}
We experimented with multiple utterance encodings, testing Google's Universal Sentence Encoder \cite{google_use} and Allen NLP's ELMo \cite{Peters:2018} on the PersonaChat dataset. Given an utterance, we trained a model to predict the response chosen from the set of all utterances in the dataset. We observed that the PersonaChat dataset is qualitatively different from Alexa conversations, because its conversations are artificially generated and biased by each individual's assigned persona. Hence, we chose to create our own dataset using more natural conversation corpora. Given an utterance, we generated a candidate list comprising the true response and 12 negative samples from the Daily Dialogue dataset. While generating the dataset, we also made sure not to use candidates that were exact matches of, or excessively close to, the true response. We observed much better results using the embeddings on the custom dataset. The results are displayed in Table \ref{table:embeddings}.
\begin{table}
\centering
\begin{tabular}[t]{|c|c|c|c|c|}
\hline
Dataset & Model & Hits @ 1 & Hits @ 3 & Hits @ 5\\
\hline
PersonaChat & Universal Sentence Encoder & 0.10 & 0.24 & 0.32\\
Custom Dataset & Universal Sentence Encoder & 0.085 & 0.23 & 0.38\\
PersonaChat & ELMo & 0.14 & 0.29 & 0.41\\
Custom Dataset & ELMo & 0.19 & 0.36 & 0.50\\
\hline
\end{tabular}
\caption{Validation results for pretrained utterance embeddings}
\label{table:embeddings}
\end{table}
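The Hits@$k$ metric in Table \ref{table:embeddings} can be computed as in the sketch below, assuming a score matrix whose column 0 holds the model's score for the true response and whose remaining columns hold the negative candidates:
\begin{verbatim}
import numpy as np

def hits_at_k(scores, k):
    # scores: (n_queries, n_candidates); the true response is candidate 0.
    # A query is a "hit" if the true response ranks within the top k.
    beaten_by = (scores > scores[:, :1]).sum(axis=1)
    return float(np.mean(beaten_by < k))
\end{verbatim}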
\subsection{Constrained Base FSM}
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{ratings_graph.png}
\caption{Average scaled conversation rating vs date. The ratings have been scaled to have a mean value of 3.}
\label{figure:ratings}
\end{figure}
During the competition we also experimented with a more constrained Base FSM, which always attempted to steer the user towards one of our Topic FSMs and would loop back to a different Topic FSM when the user finished the previous one. As shown in Figure \ref{figure:ratings}, our ratings immediately took a severe hit. This result was surprising, as our hypothesis had been that the more we stayed inside an FSM and avoided venturing into ``unknown'' response generator territory, the better our conversations would be. Unfortunately, this was not the case. We observed that users were not looking for a limited set of perfect FSMs to explore; rather, they preferred a higher level of novelty and spontaneity. The next, and current, iteration of our Base FSM, the Freeform FSM, avoided this mistake by balancing steering the user towards an FSM, asking exploratory questions (e.g.~``what is your favorite color?''), attempting small talk, and allowing the user to propose a topic of conversation and then attempting to respond coherently. In Figure \ref{figure:ratings}, the introduction of our Freeform FSM, which weakened the FSM Manager's bias towards Topic FSMs, is labeled, and one can see an immediate uptick in the average conversation rating. This analysis suggests that, despite the clear overall benefit of adding FSMs to our bot, how changes are implemented is often just as important as what changes are made. Although FSMs are, in our opinion, essential to any real-world conversational bot, the FSM implementation must be done with care and must utilize insights gained from analyzing human-human and human-agent conversations in order to conform to the conversational rules and standards of the human mind.
\subsection{Conversation Ratings}
To evaluate conversation quality, we developed a model to predict conversation ratings using actual conversations from the Alexa Prize competition. We train on 15,576 conversations, each rated by the interlocutor. We use an LSTM with an embedding layer, the Adam optimizer, and a mean squared error loss for this task, and evaluate with 4-fold cross-validation. We tokenize utterances and replace low-frequency words. Our best model gives an RMSE of 1.04 averaged across all folds.
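A minimal sketch of such a rating regressor is given below (sizes and details are illustrative assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class RatingRegressor(nn.Module):
    # LSTM over token ids, trained with mean squared error on ratings.
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):              # (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embedding(token_ids))
        return self.head(h_n[-1]).squeeze(-1)  # predicted rating

# Training would minimize nn.MSELoss() with torch.optim.Adam and report
# the square root of the mean squared error (RMSE) per fold.
\end{verbatim}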
Using this model, we conducted an experiment to evaluate whether the first or the last half of a conversation has a higher impact on the predicted rating. When we evaluated using only the first and second halves of the conversations, the RMSE was 1.44 and 1.24 respectively. This suggests that the last impression with Alexa has more impact than the first, but the whole conversation is still needed for the best rating model. We note that this analysis is a first step at exploring the impact of early- vs.~late-stage conversation quality on the interlocutor's evaluation. There are a number of possible confounding factors that make it difficult to directly compare the halves of a conversation, such as the first half of the conversation being more structured, and the fact that, by definition, conversation breakdowns that cause the user to end a session must occur at the end of the conversation.
\section{Scientific Contributions and Future Work}
Our goal for this year's Alexa Prize competition was to make progress on what we believe to be key challenges for socialbots. Our previous experience has shown us that coherent conversation is difficult to create using corpus-based techniques that confine themselves to a limited context. At the same time, the use of handcrafted and scripted conversations, while capable of generating reasonable dialogs, does not appear to capture the essence of good discourse, which we believe requires equal participation by both interlocutors. Such constrained interactions tend to shift control to the socialbot, and while this can produce extended interaction, we do not believe it results in a satisfying user experience.
We consequently chose to focus on the development of a baseline conversational intelligence. We did this in two ways. The first was creating a semantic grammar that would allow the bot to understand the mechanics of an ongoing exchange. We believe that we were successful in achieving this goal, though much work remains to be done. About three-quarters of otherwise unlabeled concepts were identified and utilized to create the above-mentioned grammar. This conversational grammar is a beginning and would need to be rationalized and developed further.
The other challenge in creating conversations is maintaining cohesion and managing context. Functions such as co-reference resolution and topic management, perhaps subsumed under a well-organized history function, are essential. Our approach was to first develop a significantly flexible FSM capability and to cast most of what went on in the system in terms of FSMs, even very simple actions such as responding to \texttt{Backstory} questions. This, as well as the ability to easily manage sets of FSMs through mechanisms such as a stack and the use of null arcs, was designed to create a simple authoring environment. The further goal would be to develop an approach to induce FSMs from a collected conversation corpus of successful exchanges. In principle, this would allow a socialbot to improve its conversational skills, as well as identify topics that appear to engage users.
\section{Conclusion}
Engaging socialbots are difficult to build, in part because there is no clear scientific foundation for computational conversation. We believe that over the course of the Alexa Prize, we and others have begun to identify key elements that need to be created. Specifically, we identified context tracking and dialog management to be core weaknesses in conversational artificial intelligence. We have contributed a framework for dynamic Finite State Machines to control and maintain the flow of conversations, without constraining the responses or topics about which our socialbot can converse. Our dialog management, combined with a mixture of scripted, partially scripted, and retrieved responses allowed our bot to reach a peak rank of 4th place in the Alexa Prize competition.
\section{Acknowledgments}
We wish to thank Pravalika Avvaru and Poorva Rane for their help engineering the Tartan socialbot. We also thank Zarana Parekh, Sreyashi Nag, Soham Ghosh, Anirudha Rayasam, Aditya Siddhant, and Radhika Parik for their help developing Tartan as part of a course project. Lastly, we thank Alan Black, Jamie Callan, and Eduard Hovy for their helpful discussion and feedback.
\section{Introduction}
Most of the present data on Neutrino Physics are consistent with the
hypothesis of having only three active neutrinos. Nevertheless, there is a
small subset of experiments which seem to require the presence of New
Physics (NP). The first indication hinting at the presence of NP was
provided by an excess in the results of the LSND experiment,
where electron anti-neutrinos were observed in a
pure muon anti-neutrino beam \cite{Athanassopoulos:1996jb,Aguilar:2001ty}.
One of the simplest explanations of the LSND result
involves the existence of an anti-neutrino with a mass-squared difference $\Delta m^{2}$ of about 1
eV$^{2}$. Taking into account that $\Delta m^{2}_\text{atm}$ is of order
$10^{-3}$ eV$^{2}$ and $\Delta m^{2}_\text{solar}$ of order \mbox{$10^{-4}$ eV$^{2}$}
one concludes that the LSND result would require a fourth neutrino. On
the other hand, the invisible decay of the $Z$ gauge boson shows that there
are only three active neutrinos with a mass less than a half of the $Z$
mass~\cite{PDG2019}, implying that if a fourth light neutrino exists it must be sterile,
i.e., a singlet under the gauge symmetry of the Standard Model (SM).
The existence of extra (sterile) neutrinos
should then be reconciled with cosmological constraints, which call for a suppressed
thermalisation of these massive neutrinos in the early Universe,
given the effective neutrino number $N_\text{eff} = 2.99^{+0.34}_{-0.33}$
($95\%$~CL, from TT,TE,EE+lowE+lensing+BAO) measured by Planck~\cite{Aghanim:2018eyx}%
\footnote{
Although addressing this suppression falls beyond our scope, it has
been shown that it can be achieved via ``secret'' sterile neutrino
self-interactions~\cite{Hannestad:2013ana,Dasgupta:2013zpn}. Here, TT,TE and EE+lowE+lensing refer to particular likelihood combinations and BAO stands for baryon acoustic oscillation measurements.
}.
Meanwhile, new anomalies have appeared in Neutrino Physics supporting the
hypothesis of the existence of light sterile neutrinos. The indications for
the existence of a sterile neutrino of mass of order 1 eV come from
short-baseline (SBL) neutrino oscillation experiments.
They started with the LSND
result in the nineties. At that time, this result was not confirmed by
KARMEN \cite{Armbruster:2002mp}. However, KARMEN had a shorter baseline than
LSND and therefore could not exclude the whole parameter space available to
LSND. This was followed by the MiniBooNE experiment \cite{Aguilar-Arevalo:2018wea}
with inconclusive results%
\footnote{
The need to reconcile MiniBooNE and LSND data has currently revived
interest~\cite{Dentler:2019dhz,deGouvea:2019qre} in models attempting to explain
anomalies via sterile neutrino decay~\cite{PalomaresRuiz:2005vf}.}. Recently, new interest
in the LSND result was sparked by the ``reactor anti-neutrino anomaly'' due to
a deficit of the number of anti-neutrinos observed in several different
reactor neutrino experiments, when compared with the theoretical flux
calculations \cite{Mueller:2011nm,Mention:2011rk,Huber:2011wv}.
A crucial and independent development has been provided by the DANSS~\cite{Alekseev:2018efk}
and NEOS~\cite{Ko:2016owz} collaborations,
whose programmes include comparing spectra at different distances from the anti-neutrino source.
The preferred fit regions of these independent experiments interestingly overlap near $\Delta m^2 \sim 1.4$ eV$^2$
and $\sin^2 2 \vartheta_{14} \sim 0.05$, with $\vartheta_{14}$ being an effective mixing angle as interpreted in a 3+1 scheme.
Also of relevance is the so-called ``Gallium neutrino anomaly'',
discovered in 2005-2006~\cite{Abdurashitov:2005tb,Laveder:2007zz,Giunti:2006bj},
albeit of less significance.
For recent reviews on eV-scale sterile neutrinos and additional references, see~\cite{Giunti:2019aiy,Diaz:2019fwt}.
The purpose of this paper is to investigate the possibility of obtaining in a natural way
at least one sterile neutrino with a mass of order eV in the framework of the
general type-I seesaw mechanism~\cite{Minkowski:1977sc,Yanagida:1979as,Glashow:1979nm,GellMann:1980vs,Mohapatra:1979ia}.
The crucial point is that we
shall consider a special case of the seesaw framework. Instead of having
three heavy sterile neutrinos, as in the usual setup,
at least one of the sterile neutrinos should be light while, at the same time, its
mixing with the light active neutrinos should be small enough to comply with existing
experimental bounds, but large enough to be relevant to low energy
phenomenology. Two important challenges are to find solutions that are
stable under renormalisation, and to inquire if these spectra,
with at least one neutrino with a mass of order eV,
might indeed explain the SBL anomalies.
For definiteness, let us recall how the conventional seesaw mechanism
works. It consists of an extension of the SM where three right-handed
neutrinos are added to the standard spectrum. As a result, the neutrino mass
terms include a Dirac mass matrix, denoted $m$, generated by the breakdown
of the electroweak (EW) symmetry, and a Majorana mass term, denoted $M$, with the
scale of $M$ much larger than the scale of $m$. In general this leads to
three light neutrinos with masses of order $m^{2}/M$ and three heavy neutrinos
with masses of order $M$. The generic seesaw framework leads to an active-sterile mixing
of order $m/M$, too small to be of relevance to low
energy physics, while providing a framework for leptogenesis~\cite{Fukugita:1986hr}.
In the derivation of the standard seesaw formulae, one
performs a block diagonalisation of the $6 \times 6$ complex neutrino mass matrix,
obtaining approximate relations that are valid to an excellent
approximation. Some of the approximate formulae no longer hold in the
special cases which we are considering. However, there are important exact
relations which continue to be valid in our case. We find viable models with
at least one sterile neutrino with a mass of order eV by imposing a $U(1)$
symmetry (see e.g.~\cite{Branco:1988ex}) allowing for small breaking terms.
Before the breaking, for special assignments of leptonic charges, the lightest neutrinos are
naturally massless at tree level, acquiring calculable small masses after
the breaking and complying with the experimental $\Delta m^2$ values after
radiative corrections.
The paper is organised as follows. In the next section, we describe our setup,
settle the notation and present a useful parametrisation of the mixing matrix
as well as some exact results concerning the Dirac mass matrix, neutrino masses and deviations from unitarity.
In section~\ref{sec:devunit} we discuss the size of such deviations
from unitarity in the $3\times 3$ leptonic mixing matrix.
In section~\ref{sec:loopandsym} we describe how one-loop mass corrections can be controlled
within the considered framework. In section~\ref{sec:numeric} we present explicit numeric examples
and go through their phenomenology,
while section~\ref{sec:cp} is dedicated to the study of CP Violation within the type-I seesaw,
with emphasis on CP Violation measurements and CP-odd weak basis invariants.
Finally our conclusions are presented in section~\ref{sec:conclusions}.
\section{Framework}
\label{sec:framework}
We work under the type-I seesaw framework, in a model with three
right-handed neutrinos added to the SM.
The leptonic mass terms are given by:
\begin{equation}
\begin{aligned}
\mathcal{L}_{m} \,&=\,
-\left[\overline{{\nu }_{L}^{0}}\,m\,\nu _{R}^{0}
+\frac{1}{2}\nu_{R}^{0T} C^{*} M\,\nu _{R}^{0}
+\overline{l_{L}^{0}}\,m_{l}\,l_{R}^{0}\right]+\text{h.c.} \\
\,&=\,
-\left[\frac{1}{2}n_{L}^{0T} C^{*} \mathcal{M}^* n_{L}^{0}
+\overline{l_{L}^{0}} m_{l}l_{R}^{0}\right]+\text{h.c.}\,,
\end{aligned}
\end{equation}
where $n_L^0 = (\nu_L^0\,, \,\,C\, \overline{\nu_R^0}^T)^T$
and the zero superscript denotes a general flavour basis.
Without loss of generality, one may choose a weak basis where $m_{l}$ is
real and diagonal. The analysis that follows is performed in this
basis, meaning $\nu_L^0 = (\nu_{eL},\,\nu_{\mu L},\,\nu_{\tau L})$.
The neutrino mass matrix $\mathcal{M}$ is a $6\times 6$ complex symmetric matrix and
has the form:
\begin{align}
\mathcal{M}=\left(
\begin{array}{cc}
0 & m \\
m^{T} & M%
\end{array}%
\right)\,. \label{vm0}
\end{align}%
This matrix is diagonalised by the unitary transformation
\begin{align}
\mathcal{V}^{T}\mathcal{M}^*\mathcal{V}=\mathcal{D}\qquad
\Longleftrightarrow \qquad \mathcal{M}=\mathcal{V\ D\ V}^{T}, \label{vm}
\end{align}%
where $\mathcal{D}$ is diagonal real non-negative
and contains all neutrino masses,
\begin{align}
\mathcal{D}=\left(
\begin{array}{cc}
d & 0 \\
0 & D%
\end{array}%
\right)\,. \label{eq:d}
\end{align}%
Here, $d$ contains the masses of the three known light neutrinos, $d=\text{diag}(m_1,m_2,m_3)$,
and $D$ the masses of other
neutrinos, $D=\text{diag}(M_1,M_2,M_3)$. The $6\times 6$
unitary matrix $\mathcal{V}$ can be written as
\begin{align}
\mathcal{V}=\left(
\begin{array}{cc}
K & R \\
S & Z%
\end{array}%
\right) \,, \label{unit}
\end{align}%
where $K$, $R$, $S$ and $Z$ are $3\times 3$ matrices. Using
the unitarity of $\mathcal{V}$, namely $\mathcal{V}\,\mathcal{V}^\dagger=\mathcal{V}^\dagger\mathcal{V}=\mathds{1}_{(6\times 6)}$,
one can obtain~\cite{Agostinho:2017wfs} a series of exact relations relating
the matrices $K$, $R$, $S$, and $Z$,
examples of which are $KK^{\dagger }+RR^{\dagger }=\mathds{1}$ and $KS^{\dagger }+RZ^{\dagger }=0$.
We shall show that, in order to study deviations from unitarity,
it is useful to parametrise $\mathcal{V}$ in a different way.
\subsection{A Novel Parametrisation for the Leptonic Mixing Matrix}
\label{sec:param}
In Ref.~\cite{Agostinho:2017wfs} we introduced an especially useful parametrisation of the $6 \times 6$
leptonic mixing matrix that enables one to control all deviations from unitarity through a single $3 \times 3$
matrix, which connects the mixing of the active and sterile neutrinos in the context of the type-I seesaw.
It reads:
\begin{align}
\mathcal{V}=\left(
\begin{array}{cc}
K & 0 \\
0 & Z%
\end{array}%
\right) \left(
\begin{array}{cc}
\mathds{1}\, & Y \\
-X & \mathds{1}\,%
\end{array}%
\right) \,,\quad X=-Z^{-1}S\,,\quad Y=K^{-1}R\,,
\label{u2}
\end{align}
where it is assumed that $K$ and $Z$ are non-singular.
From the aforementioned unitarity relation $KS^{\dagger }+RZ^{\dagger }=0$
one promptly concludes that
\begin{align}
Y=X^{\dagger }\quad \Longrightarrow \quad \mathcal{V}=\left(
\begin{array}{cc}
K & KX^{\dagger } \\
-ZX & Z%
\end{array}%
\right)\,. \label{xy}
\end{align}
Thus, a generic $6\times 6$\ unitary matrix $\mathcal{V}$, in fact, only
contains three effective $3\times 3$ matrices $K$, $Z$ and $X$. Furthermore,
from the same unitarity of $\mathcal{V}$
and from the singular value decomposition $X = W\,d_X\,U^\dagger$,
one finds that $K$ and $Z$ can be written as:
\begin{align}
\begin{array}{l}
K = U_{K}\ \sqrt{\left( \mathds{1}\,+d_{X}^{2}\right) ^{-1}}\ U^{\dagger }
= U_{K}\ U^{\dagger }\sqrt{\left( \mathds{1}\,+X^{\dagger}X\right) ^{-1}}
= V \sqrt{\left( \mathds{1}\,+X^{\dagger}X\right) ^{-1}}
\,, \\[4mm]
Z = W_{Z}\ \sqrt{\left( \mathds{1}\,+d_{X}^{2}\right)^{-1}}\ W^{\dagger }
= W_{Z}\ W^{\dagger }\ \sqrt{\left( \mathds{1}\,+XX^{\dagger}\right) ^{-1}}
\,,%
\end{array}
\label{kz}
\end{align}%
where%
\footnote{Principal square roots of positive semi-definite matrices
are unique and their use is implied in Eq.~\eqref{kz}.}
$U_{K},W_{Z}$, $U$ and $W$ are all $3\times 3$ unitary matrices,
$d_X$ is a diagonal matrix with real non\discretionary{-}{-}{-}negative entries,
and we have defined an additional unitary matrix $V \equiv U_K U^\dagger$.
The matrices $U$ and $W$ diagonalise the Hermitian products
$X^{\dagger }X$ and $XX^{\dagger }$, respectively:
\begin{align}
U^{\dagger }\ X^{\dagger }X\ U=d_{X}^{2}\,,\qquad W^{\dagger
}XX^{\dagger }\ W=d_{X}^{2} \,.
\label{uwxx}
\end{align}
Any unitary matrix to the left of $Z$ --
like the product $W_Z\,W^\dagger$ in Eq.~\eqref{kz} --
is unphysical as it can be rotated away via a weak
basis transformation which does not affect the form of $m_l$.
Accordingly, one can choose to work in a weak basis for which $\Sigma = \mathds{1}$
in the general expression
\begin{align}
Z\,=\,\Sigma \,( \mathds{1}\,+XX^{\dagger})^{-1/2}\,,
\label{eq:Zgeneral}
\end{align}
with $\Sigma$ unitary.
Note, however, that $\Sigma \neq \mathds{1}$ in the numerical `symmetry' bases considered later on
in sections~\ref{sec:loopandsym} and~\ref{sec:numeric}.
The matrix $K$ plays the role of the PMNS mixing matrix,
as it connects the flavour eigenstates $\nu_{\alpha L}$
($\alpha = e$, $\mu$, $\tau$)
to the lightest mass eigenstates.
From Eq.~\eqref{kz}, it is clear that $K$ is unitary if and only if
$d_{X}^{2}=0$. Thus, the deviations from unitarity are manifestly expressed
in the diagonal matrix $d_{X}^{2}$ containing the (squared)
singular values of $X$.
In summary, a generic $6\times 6$ mixing unitary matrix $\mathcal{V}$ can
be simplified and
be written in terms of just one $3\times 3$ unitary matrix $V$
and of explicit deviations from unitarity, parametrised by a $3 \times 3$ matrix $X$:%
\begin{align}
\mathcal{V}=\left(
\begin{array}{cc}
K & R \\
S & Z%
\end{array}%
\right) \qquad ;\qquad
\begin{array}{l}
K=V\,
\sqrt{\left( \mathds{1} +X^{\dagger
}X\right) ^{-1}}\qquad ;\qquad R=K\ X^{\dagger }, \\[5mm]
Z=
\sqrt{\left( \mathds{1} +XX^{\dagger
}\right) ^{-1}}\qquad \,\;\;\;;\qquad S=-Z\ X\,,
\end{array}
\label{eq:epl}
\end{align}%
i.e.
\begin{align}
\mathcal{V}=\left(
\begin{array}{cc}
V\,\left( \mathds{1}+X^\dagger X\right)^{-1/2} &
V\,\left( \mathds{1}+X^\dagger X\right)^{-1/2}\,X^\dagger \\
-\left( \mathds{1}+XX^\dagger\right)^{-1/2}\, X &
\left( \mathds{1}+XX^\dagger\right)^{-1/2}
\end{array}%
\right)\,.
\end{align}
In general, there are no restrictions on the matrix $X$. However, in a type-I
seesaw model, the mixing matrix $\mathcal{V}$ must also obey the mass
relation stated in Eq.~\eqref{vm}, and the $6\times 6$ neutrino mass matrix $%
\mathcal{M}$ is not general: some entries are zero at tree level. This imposes
a restriction%
\footnote{This restriction generalises to $d + X^\dagger D X^*= K^{-1} m_L (K^{-1})^\dagger$
for an explicit, symmetric light neutrino Majorana mass matrix $m_L$ in place of the zero in Eq.~\eqref{vm0},
which may arise from radiative corrections or
be present due to e.g.~a type-II seesaw~\cite{Konetschny:1977bn, Schechter:1980gr, Cheng:1980qt,
Lazarides:1980nt, Mohapatra:1980yp} contribution.}
on $X$,
\begin{align}
d+X^{T}\ D\ X=0\,,
\label{dXDX}
\end{align}%
which implies that it is possible to write $X$ as:
\begin{align}
X\,=\,i\,\sqrt{D^{-1}}\,O_{c}\,\sqrt{d}\,,
\label{eq:xc}
\end{align}%
where $O_c$ is a complex orthogonal matrix, i.e., $O_c^T O_c=O_c\,O_c^T=\mathds{1}$.
Explicitly,
\begin{align}
|X_{ij}|=\left\vert (O_c)_{ij}\,\sqrt{\frac{m_j}{M_i}}\,\right\vert\,.
\end{align}%
Since $O_{c}$ is an orthogonal complex matrix, not all of its elements need
to be small. Furthermore, not all the $M_{i}$ need to be much larger than
the electroweak scale, in order for the seesaw mechanism to lead to
naturally suppressed neutrino masses. These observations about the size of
the elements of $X$ are especially relevant in view of the fact that some of
the important physical implications of the seesaw model depend crucially on $X$.
In particular, the deviations of $3\times 3$ unitarity are controlled by
$X$, as shown in Eq.~\eqref{kz}.
On the other hand, from Eq.~\eqref{dXDX} one can also see that $X$ must
not vanish, in order to account for the non-zero light neutrino
masses.
Several authors have adopted different types of parametrisations for the full mixing matrix, in the context of seesaw models, see for example \cite{Korner:1992zk,Casas:2006hf,Blennow:2011vn,Xing:2011ur,Donini:2012tt}.
Some of these are approximate and apply to specific limits or to models with fewer than three sterile neutrinos; others, like ours, are exact and do not depend on the number of sterile neutrinos.%
\footnote{
Although we have applied our parametrisation to a scenario with three sterile neutrinos, it is applicable to cases where the number $q$ of sterile neutrinos differs from 3. We are then in the presence of a rectangular $3 \times q$ Dirac mass matrix $m$ and of a $q \times 3$ rectangular $X$ matrix, with everything else remaining consistent.
}
Some of these parametrisations were derived to deal with special types of analyses and may become cumbersome when adopted for other purposes. We find our parametrisation very useful since it is particularly simple and parametrises, in a concise and exact form, all deviations from unitarity by a single matrix $X$.
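As a quick numerical illustration of Eqs.~\eqref{dXDX} and \eqref{eq:xc} (a sketch, not part of the derivation; the mass values and random seed are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.3 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A = A - A.T                 # complex antisymmetric
Oc = expm(A)                # complex orthogonal: Oc.T @ Oc = identity

d = np.diag([1e-3, 9e-3, 5e-2])    # light masses (eV), illustrative
D = np.diag([1.0, 1e3, 1e12])      # heavy masses (eV), illustrative
X = 1j * np.diag(np.diag(D)**-0.5) @ Oc @ np.diag(np.diag(d)**0.5)

# The exact seesaw constraint d + X^T D X = 0 holds to machine precision:
print(np.max(np.abs(d + X.T @ D @ X)))
\end{verbatim}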
From the above, one concludes that the set $\{m_l,d,D, V, O_c\}$
of matrices is sufficient to describe lepton masses and mixing
at tree level.
In the working weak basis,
there are 9 lepton masses in the first three matrices, while
mixing is parametrised by 6 angles and 6 CP-violating (CPV) phases,
contained in the unitary matrix $V$ and in the orthogonal deviation matrix $O_c$. Parameter counting is summarised in Table~\ref{tab:parameters}
and is in agreement with, e.g., Refs.~\cite{Branco:2001pq, Broncano:2002rw, Branco:2011zb}.
Coincidentally, these numbers of angles and CPV phases match
those of a general 3+1 scenario (see e.g.~\cite{Giunti:2007ry}),
even though three right-handed neutrinos have been added to the SM.
This is a consequence of having a type-I seesaw UV completion,
which requires the zero block in Eq.~\eqref{vm0}.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lcccccc}
\toprule
& $\,\,\,\, m_l \,\,\,\,$ & $\,\,\,\, d \,\,\,\,$ & $\,\,\,\, D \,\,\,\,$ & $\,\,\,\, O_c \,\,\,\,$ & $\,\,\,\, V \,\,\,\,$ & Total \\
\midrule
Moduli \quad& 3 & 3 & 3 & 3 & 3 & 15 = 9 + 6 \\[1mm]
Phases \quad& $-3$ & 0 & 0 & 3 & 6 & 6 \\
\bottomrule
\end{tabular}
\caption{
Physical parameter counting in type-I seesaw with three sterile neutrinos. The 15 moduli correspond to 9 lepton masses
(3 charged-lepton masses and 6 neutrino masses) and to 6 mixing angles. There are 6 physical phases,
as rephasing the charged leptons can remove 3 phases from $V$.
Recall that $m_l$ is real and diagonal in the considered weak basis.}
\label{tab:parameters}
\end{table}
In this paper, we consider the possibility of having at least one sterile
neutrino with a mass of order eV arising from the seesaw mechanism in a
model with three right-handed neutrinos added to the SM.
We analyse the different aspects and consequences of the phenomenology of such a model.
With this aim, relations between observables and parameters
which are independent of the seesaw limit are derived in the following subsection.
\subsection{Exact Relations at Tree Level}
From Eqs.~\eqref{vm} and \eqref{xy}, one can extract a general
and exact formula for the neutrino Dirac mass matrix $m$ in Eq.~\eqref{vm0},
valid for any weak basis and any scale of $M$:
\begin{align}
m \,=\, K\, X^\dagger D \left( Z^{-1}\right)^*
\,=\, - i \, K \, \sqrt{d} \, O_{c}^\dagger \sqrt{D}
\left( Z^{-1}\right)^*.
\label{eq:mdirac}
\end{align}
Recall that, in our working weak basis,
$m_l$ is diagonal and
$K$ is directly identified with the
non-unitary PMNS matrix.
Moreover, $K$ and $Z$ take the forms given in Eq.~\eqref{eq:epl}
and one has:
\begin{equation}
\begin{aligned}
m \,&=\,
V\,\sqrt{\left(\mathds{1}+X^\dagger X\right)^{-1}}\,X^\dagger D\, \sqrt{\mathds{1}+X^* X^T} \\
&=\, - i \,
V\,\sqrt{\left(\mathds{1}+X^\dagger X\right)^{-1}}
\, \sqrt{d} \, O_{c}^\dagger \sqrt{D}\,
\sqrt{\mathds{1}+X^* X^T}\,.
\end{aligned}
\label{eq:mdiracWB}
\end{equation}
This exact formula is to be contrasted with the known
parametrisation for the neutrino Dirac mass matrix developed by
Casas and Ibarra~\cite{Casas:2001sr}, which is
valid in the standard seesaw limit of $M \gg m$ and reads
\begin{align}
m\, \simeq \,-i\, U_\text{PMNS} \, \sqrt{d} \, O_c^\text{CI} \, \sqrt{D}\,,
\label{eq:casas}
\end{align}
in the weak basis where $m_l$ and $M = \mbox{diag}(\tilde M_1, \tilde M_2, \tilde M_3) \equiv \tilde D$ are diagonal.
Here, $O_c^\text{CI}$ is an orthogonal complex matrix
and $U_\text{PMNS}$ represents the approximately unitary lepton mixing matrix.
In this limit of $M \gg m$, the light neutrino mass matrix $m_\nu$ can be approximated by:
\begin{align}
m_\nu\,\simeq \, - m\, M^{-1}m^T.
\label{eq:eff}
\end{align}
It is clear from \eqref{eq:mdiracWB} that
one can obtain Eq.~\eqref{eq:casas} as a limiting
case of Eq.~\eqref{eq:mdirac}
through an expansion in powers of $X$.
Keeping only the leading term, unitarity is regained with $U_\text{PMNS} \simeq V$
and one can identify the complex orthogonal matrices: $O_c^\text{CI} = O_c^\dagger$.
As a side note, let us remark that it is possible to
obtain a parametrisation for $m$
which is exact and holds in a general weak basis
by following the Casas-Ibarra procedure. One finds:
\begin{align}
m\, = \,-i\, U_\nu \, \sqrt{\tilde d} \, \tilde O_c^\text{CI} \, \sqrt{\tilde D}\,\, \Sigma_M^T,
\label{eq:casas_exact}
\end{align}
where once again $\tilde O_c^\text{CI}$ is a complex orthogonal matrix.
However, $\tilde d$ and $\tilde D$ do not contain physical masses,
but are instead
diagonal matrices with non-negative entries
obtained from the Takagi decompositions
$-m \,M^{-1} m \,=\, U_\nu \,\tilde d \,U_\nu^T$ and $M \,=\, \Sigma_M \,\tilde D \, \Sigma_M^T$,
with $U_\nu$ and $\Sigma_M$ unitary.
The matrix $\Sigma_M$ is unphysical, as it can be rotated away by a weak basis transformation
diagonalising $M$.
Even though this parametrisation resembles that of Eq.~\eqref{eq:mdiracWB},
the latter may be preferable since it directly makes use of low-energy observables.
Only in the limit $M\gg m$, where Eq.~\eqref{eq:eff} and $\tilde d \simeq d$, $\tilde D \simeq D$ hold,
does Eq.~\eqref{eq:casas_exact} reduce to the approximate relation \eqref{eq:casas},
in a weak basis of diagonal charged leptons and diagonal sterile neutrinos.
At this stage, one may wonder whether there exists an exact relation, analogous
to Eq.~\eqref{eq:eff}, which is valid in any region of parameter space.
One can actually deduce such a relation for an arbitrary number of active and sterile neutrinos.
Consider the following decomposition of a block matrix:
\begin{align}
\left[
\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
\mathbf{C} & \mathbf{D}
\end{array}\right]
\,=\,
\left[
\begin{array}{cc}
\mathds{1}_{(p\times p)} & \mathbf{B} \\
0 & \mathbf{D}
\end{array}\right]
\left[
\begin{array}{cc}
\mathbf{A}-\mathbf{B}\,\mathbf{D}^{-1}\mathbf{C} & 0 \\
\mathbf{D}^{-1}\mathbf{C} & \mathds{1}_{(q\times q)}
\end{array}\right]
\,,
\end{align}
where $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ are complex $p\times p$, $p\times q$, $q\times p$, and $q\times q$ matrices, respectively,
and one has assumed that $\mathbf{D}$ is non-singular.
From this it follows that
\begin{align}
\det \left[
\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
\mathbf{C} & \mathbf{D}
\end{array}%
\right] \,=\,
\det \left(\mathbf{A}-\mathbf{B}\,\mathbf{D}^{-1}\mathbf{C}\right) \,\, \det \,\mathbf{D}\,.
\label{po}
\end{align}%
In a general type-I seesaw scenario, $\mathbf{A}=0$, $\mathbf{B}=\mathbf{C}^T=m$ and $\mathbf{D}=M$, and one obtains%
\begin{align}
\left\vert \det \left[
\begin{array}{cc}
0 & m \\
m^{T} & M%
\end{array}%
\right] \right\vert =\left\vert \det\, m \right\vert ^{2}\,,
\label{po1}
\end{align}%
which leads to%
\begin{align}
m_{1}\ldots m_p=\frac{\left\vert \det \,m \right\vert ^{2}}{%
M_{1}\ldots M_q}\,, \label{po21}
\end{align}%
with $m_{i}$ ($i=1,\ldots,p$) and $M_{j}$ ($j=1,\ldots,q$) denoting the neutrino masses.
For the case of interest, $p = q = 3$ and one has:
\begin{align}
m_1 \,m_2 \,m_3\,=\,\frac{\left\vert \det \,m \right\vert ^{2}}{%
M_1 \,M_2\, M_3}\,. \label{po22}
\end{align}%
We stress that these relations are {\bf exact} and that no assumptions have been made about the
relative sizes of the $m_i$ and $M_j$.
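They are also straightforward to verify numerically, as in the following sketch (random complex mass matrices, arbitrary units):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = (M + M.T) / 2                               # symmetric Majorana block

calM = np.block([[np.zeros((3, 3)), m], [m.T, M]])
masses = np.linalg.svd(calM, compute_uv=False)  # Takagi values = masses

print(np.prod(masses), abs(np.linalg.det(m))**2)  # agree to precision
\end{verbatim}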
It is clear from Eq.~\eqref{po22} that the smallness of neutrino masses in
this framework may
have its origin in the largeness of the $M_j$ (with respect to the EW scale),
or in the suppression of $|\det\, m|$ due to e.g.~an approximate symmetry.
\section{The Size of Deviations from Unitarity}
\label{sec:devunit}
Present neutrino experiments put stringent constraints on the deviations
from unitarity~\cite{Gluza:2002vs, Antusch:2006vwa,
FernandezMartinez:2007ms, Antusch:2014woa, Fernandez-Martinez:2016lgt,
Blennow:2016jkn}. In the framework of the type-I seesaw, it is the block $K$
of the matrix $\mathcal{V}$ that takes the role played by the $U_\text{PMNS}$
matrix at low energies,
typically taken as unitary and parametrised accordingly
(see e.g.~the standard parametrisation~\cite{PDG2019}).
Clearly, in this framework, $K$ is no longer a unitary matrix.
When considering the deviations from unitarity of $K$, one must comply with
experimental bounds while, at the same time, investigating whether it is
possible to obtain deviations that are
sizeable enough to be detected experimentally in the near future.
Using the above parametrisation,
this translates into making appropriate choices for the matrix $X$.
Deviations from unitarity of $K$ can be
parametrised as the product of an Hermitian matrix by a unitary matrix~\cite{Fernandez-Martinez:2016lgt}:
\begin{align}
K\,=\,(\mathds{1}-\eta )\,V \,,
\label{eq:khu}
\end{align}%
where $\eta$ is an Hermitian matrix.
In the previous section,
we have instead parametrised $K$ with an Hermitian matrix to the right
and the unitary matrix $V$ to the left, see Eq.~\eqref{eq:epl}.
These right- and left-polar decompositions are unique
since we are dealing with a non-singular $K$ by assumption.
Moreover, they can be connected explicitly:
\begin{align}
\eta \,=\,
V\left(\mathds{1} - \sqrt{\left(\mathds{1} + X^\dagger X \right)^{-1}}\right)V^\dagger
\,=\,\mathds{1}-U_K \left( \sqrt{\mathds{1} +d_{X}^{2}}\ \right)^{-1} U_K^\dagger\,.
\end{align}
Expanding in powers of $X$ (or equivalently of $d_X$), one obtains
\begin{align}
\eta
\,=\, \frac{1}{2} \,U_K\, d_X^2 \,U_K^\dagger + \mathcal{O}(d_X^4)
\,=\, \frac{1}{2} \,V \, X^\dagger X\, V^\dagger + \mathcal{O}(X^4)\,.
\end{align}
Constraints on the entries of $\eta$ depend on the mass scale of the new neutrinos.
Bounds on $\eta$ can be found in the literature
for the scenario in which all three heavier neutrinos have
masses above the EW scale~\cite{Antusch:2014woa, Fernandez-Martinez:2016lgt}.
As pointed out in \cite{Fernandez-Martinez:2016lgt}, in such a case
it is very useful to parametrise $K$ with the
unitary matrix on the right, due to the fact that, experimentally, it is not
possible to determine which physical light neutrino is produced.
Therefore, one must sum over the massive neutrino fields
and observables depend on $KK^\dagger$.
From the unitarity relation $KK^\dagger+RR^\dagger=\mathds{1}$
and Eq.~\eqref{eq:khu}, one has
\begin{align}
K K^\dagger \,=\, \mathds{1} - R R^\dagger \,=\, \mathds{1} - 2\eta + \eta^2
\quad \Rightarrow \quad
\eta \,=\, \frac{1}{2} \,R R^\dagger + \mathcal{O}(R^4)\,,
\label{eq:KReta}
\end{align}%
i.e.~there is a straightforward connection between $KK^{\dagger}$, $RR^{\dagger}$
and the deviations
from unitarity, expressed in $\eta$.
When one has one or more light sterile neutrinos,
the aforementioned bounds cannot be directly applied,
as some states are kinematically accessible and different sets of
experimental constraints need to be taken into account,
depending on the spectrum at hand.
In this case, observables can constrain directly the entries of $R$, and not just the product $RR^{\dagger}$.
For light sterile neutrinos with eV-scale masses, the most stringent bounds on deviations from unitarity
come from oscillation experiments~\cite{Blennow:2016jkn}, such as
BUGEY-3~\cite{Declais:1994su}, MINOS~\cite{MINOS:2016viw}, NOMAD~\cite{Astier:2001yj,Astier:2003gs}
and Super\discretionary{-}{-}{-}Kamiokande~\cite{Abe:2014gda}.
In our analysis, the relevant exclusion curves in the $\sin^{2}2\vartheta_{\alpha \beta}$ -- $\Delta m^{2}$
planes (see section~\ref{sec:numeric}) are considered and translated into constraints
on the elements of the mixing matrix block $R$.
If one is dealing instead with keV or GeV\,--\,TeV sterile neutrinos, it is
important to take into account the experimental bounds coming
from $\beta$-decay experiments (see e.g.~\cite{Adhikari:2016bei} and references within)
and from LHC searches for heavy Majorana neutrinos~\cite{Antusch:2015mia,
Deppisch:2015qwa, Das:2015toa, Das:2017nvm, Das:2017zjc,
Sirunyan:2018mtv, Sirunyan:2018xiv}.
Another crucial experimental input, also taken into account
in our analysis, is the limit on the $\mu \rightarrow e\gamma $ branching ratio
obtained by the MEG Collaboration,
$BR(\mu \rightarrow e\gamma) < 4.2\times 10^{-13}$ ($90\%$~CL)~\cite{TheMEG:2016wtm},
one of the most stringent bounds on lepton flavour violating processes.
This bound is expected to be relevant whenever the heavier neutrino masses are
around or above the EW scale, as a GIM cancellation arises for lighter states
(see for instance Eq.~(40) of Ref.~\cite{Fernandez-Martinez:2016lgt}).
\subsection{Restrictions on the Neutrino Mass Spectrum}
The type-I seesaw model that we consider here, with at least one sterile
neutrino with a mass around 1 eV, also leads to some restrictions on the light
neutrino mass spectrum at tree level.
In particular, we find
an upper bound on the mass $m_\text{min}$ of the lightest neutrino,
as a function of the deviations from unitarity.
Taking into account the parametrisation~\eqref{eq:xc} for the matrix $X$
controlling deviations from unitarity, and denoting by
$d_{X_{i}}^2$ ($i=1,2,3$) the eigenvalues of $X^\dagger X$, we have:
\begin{align}
\mbox{tr}\left[ X^{\dagger }X\right]\,=\,
\mbox{tr}\left[ O_c^\dagger \, D^{-1}\, O_{c}\, d\right]
\,=\, d_{X_{1}}^{2}+d_{X_{2}}^{2}+d_{X_{3}}^{2}\,. \label{ei}
\end{align}%
From this, and recalling that $d=\mbox{diag}(m_1,m_2,m_3)$ and
$D=\mbox{diag}(M_1, M_2, M_3)$, we obtain
\begin{align}
\sum_k\,\frac{1}{M_k}
\left( m_{1}\left\vert O_{k1}^{c}\right\vert^{2}
+m_{2}\left\vert O_{k2}^{c}\right\vert^{2}
+m_{3}\left\vert O_{k3}^{c}\right\vert^{2}\right)
\,=\, d_{X_{1}}^{2}+d_{X_{2}}^{2}+d_{X_{3}}^{2}\,,
\label{ei1}
\end{align}
and conclude that%
\begin{align}
\frac{m_\text{min}}{M_{1}}\left( \left\vert O_{11}^{c}\right\vert ^{2}+\left\vert
O_{12}^{c}\right\vert ^{2}+\left\vert O_{13}^{c}\right\vert ^{2}\right)
\,<\,d_{X_{1}}^{2}+d_{X_{2}}^{2}+d_{X_{3}}^{2}\,,
\end{align}%
where naturally $M_{1}\leq M_{2}\leq M_{3}$ and $m_\text{min} = m_1$ ($m_3$) for normal (inverted) ordering.
Then, using the inequality $\sum_i \left\vert O_{1i}^{c}\right\vert
^{2}\geq 1$, valid for any complex orthogonal matrix, we find
\begin{align}
m_\text{min} \,<\, \left(d_{X_{1}}^{2}+d_{X_{2}}^{2}+d_{X_{3}}^{2}\right) M_{1} \,. \label{upm1}
\end{align}
As discussed, when one has one or more light sterile neutrinos, the typical
stringent conditions on the deviations from unitarity do not apply.
Thus, one may consider larger deviations from unitarity, even of the
order of the smallest $U_\text{PMNS}$ angle, i.e.~$\mathcal{O}(0.1)$~\cite{Blennow:2016jkn}.
Since in the scenarios of interest the lightest of the heaviest neutrinos
has a mass of $M_{1} \sim 1$ eV, using Eq.~\eqref{upm1} we
find a bound for the mass of the lightest neutrino:%
\begin{align}
m_\text{min} \,\lesssim \, 0.1 \text{ eV}\,.
\label{m1ev}
\end{align}
Note that this bound becomes stronger as one considers smaller
and smaller deviations from unitarity.
Taking into account the measured light neutrino mass-squared differences, we conclude
that the light neutrinos cannot have masses above $\mathcal{O}(0.1)$ eV
under these conditions, a statement which is also supported by cosmological bounds~%
\cite{Aghanim:2016yuo}.
\subsection{Neutrino Oscillations}
\label{sec:osc}
In the presence of deviations from unitarity,
neutrino oscillation probabilities are modified%
~\cite{Antusch:2006vwa,Blennow:2016jkn}.
If $n$ of the heavier neutrinos are accessible at oscillation experiments,
then a $3 \times (3 + n)$ submatrix $\Theta$ of $\mathcal{V}$ enters
the computation of oscillation probabilities,
\begin{align}
\Theta\,=\,
\left( \begin{array}{cc}
K & R_{3\times n}
\end{array} \right)\,,
\label{eq:wdef}
\end{align}
where $R_{3\times n}$ contains the first $n$ columns of $R$.
For a given experimental setup, and depending on their masses,
the heavier states may already be produced incoherently or instead
lose coherence before reaching the detector, due to wave-packet separation
(see e.g.~\cite{Cozzella:2018zwm}).
The probability of transition between flavour (anti-)neutrinos
\stackon[-.7pt]{$\nu$}{\brabar}$_\alpha$ and
\stackon[-.7pt]{$\nu$}{\brabar}$_\beta$,
or of survival for a given flavour ($\alpha = \beta$),
with $\alpha, \beta = e, \mu, \tau$, can be shown to take the form
\begin{equation}
\begin{aligned}
P_{\stackon[-.7pt]{$\nu$}{\brabar}_\alpha \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_\beta}(L,E)
\,=\,\frac{1}{(\Theta\Theta^\dagger)_{\alpha\alpha}(\Theta\Theta^\dagger)_{\beta\beta}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\alpha\beta}\right|^2
&- 4 \sum_{i>j}^{3+n}\,\re
\left(\Theta_{\alpha i}^*\,\Theta_{\beta i}\,\Theta_{\alpha j}\,\Theta_{\beta j}^*\right)
\sin^2 \Delta_{ij} \\
&\pm 2 \sum_{i>j}^{3+n}\,\im
\left(\Theta_{\alpha i}^*\,\Theta_{\beta i}\,\Theta_{\alpha j}\,\Theta_{\beta j}^*\right)
\sin 2 \Delta_{ij}
\Bigg] \,,
\end{aligned}
\label{eq:probability}
\end{equation}%
where the plus or minus sign in the second line refers to neutrinos or anti-neutrinos, respectively.
Here, $L$ denotes the source-detector distance, $E$ is the (anti-)neutrino energy,
and one has defined
\begin{align}
\Delta_{ij} \,\equiv \, \frac{\Delta m^2_{ij}\, L}{4E}
\,\simeq\, 1.27\,\frac{\Delta m_{ij}^{2}[\text{eV}^{2}]\,L[\text{km}] }{ E[\text{GeV}]}\,,
\end{align}
with mass-squared differences $\Delta m_{ij}^2 \equiv m_i^2 - m_j^2$, as usual.
Note that if $n=3$
then $\Theta \Theta ^{\dagger}=KK^{\dagger}+RR^{\dagger}=\mathds{1}_{3\times 3}$
due to the unitarity of the full $6\times 6$ mixing matrix $\mathcal{V}$
and Eq.~\eqref{eq:probability} reduces to the usual unitary formula.
It should be pointed out that the normalisation
$(\Theta\Theta^\dagger)_{\alpha\alpha}(\Theta\Theta^\dagger)_{\beta\beta}$
in \eqref{eq:probability} will cancel in the experimental event rates,
due to similar correction factors appearing in production rates
and detection cross-sections~\cite{Antusch:2006vwa,Cozzella:2018zwm}.
Nevertheless, we explicitly keep it in subsequent expressions.
Its deviation from unity will turn out to be negligible for our particular numerical examples.
The term proportional to $|(\Theta\Theta^\dagger)_{\alpha\beta}|^2$ is instead known to
correspond to a ``zero-distance'' effect~\cite{Langacker:1988ur,Antusch:2006vwa}.
It will also turn out to be negligible for our explicit numerical examples.
In what follows,
we will consider approximate forms of Eq.~\eqref{eq:probability},
having in mind SBL and long-baseline (LBL) experimental setups.
Since LBL experiments realistically need to take matter effects into account,
our formulae in those cases are simply indicative.
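For reference, a direct numerical implementation of the vacuum probability above may be sketched as follows (row indices $0,1,2$ standing for $e,\mu,\tau$; function and variable names are ours, for illustration):
\begin{verbatim}
import numpy as np

def oscillation_probability(Theta, masses, L_km, E_GeV, a, b, anti=False):
    # Theta: 3 x (3+n) complex submatrix (K R_{3xn}); masses in eV.
    N = Theta @ Theta.conj().T           # 3x3 normalisation matrix
    P = abs(N[a, b])**2                  # "zero-distance" term
    for i in range(Theta.shape[1]):
        for j in range(i):               # sum over i > j
            q = (Theta[a, i].conj() * Theta[b, i]
                 * Theta[a, j] * Theta[b, j].conj())
            Delta = 1.27 * (masses[i]**2 - masses[j]**2) * L_km / E_GeV
            P -= 4 * q.real * np.sin(Delta)**2
            P += (-2.0 if anti else 2.0) * q.imag * np.sin(2 * Delta)
    return P / (N[a, a].real * N[b, b].real)
\end{verbatim}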
\section{Structure of the Mass Matrix}
\label{sec:loopandsym}
\subsection{One-loop Corrections}
\label{sec:loop}
So far we have focused on neutrino masses and mixing at tree level.
However, in general, one expects one-loop corrections $\delta M_{L}$
to the $0_{(3\times 3)}$ block of $\mathcal{M}$ in Eq.~\eqref{vm0}.
As these are not guaranteed to be negligible,
one should keep track of them in order to properly scan the parameter space of seesaw models.
They are inherently finite and are given by~\cite{Grimus:2002nk, AristizabalSierra:2011mn}
(see also \cite{Grimus:2018rte}):
\begin{align}
\delta M_{L}\,=\,\delta M_{L}^{Z}+\delta M_{L}^{H}~,
\end{align}
where $\delta M_{L}^{Z}$ and $\delta M_{L}^{H}$ represent contributions
depending on the $Z$ and Higgs boson masses, $m_Z$ and $m_H$, respectively.
Explicitly, one has (see also Appendix A of Ref.~\cite{AristizabalSierra:2011mn}):
\begin{equation}
\begin{aligned}
\delta M_{L}^{Z} &\,=\,
\frac{3}{32\pi ^{2}\, v^2 }\,
\left( \begin{array}{cc} K & R \end{array} \right)\,
\frac{\mathcal{D}^{3}}{\mathcal{D}^{2}/{m_{Z}^{2}} - \mathds{1}}\,
\log \left( \frac{\mathcal{D}^{2}}{m_{Z}^{2}} \right)\,
\left( \begin{array}{c} K^T \\ R^T \end{array} \right)\,, \\[2mm]
\delta M_{L}^{H} &\,=\,
\frac{1}{32\pi ^{2}\, v^2 }\,
\left( \begin{array}{cc} K & R \end{array} \right)\,
\frac{\mathcal{D}^{3}}{\mathcal{D}^{2}/{m_{H}^{2}} - \mathds{1}}\,
\log \left( \frac{\mathcal{D}^{2}}{m_{H}^{2}} \right)\,
\left( \begin{array}{c} K^T \\ R^T \end{array} \right)\,,
\label{dM1}
\end{aligned}
\end{equation}
in a generic weak basis, with $v \simeq 174$ GeV being the Higgs VEV
and with $\mathcal{D}$, $K$ and $R$ given in Eqs.~\eqref{eq:d} and \eqref{eq:epl}.
This result can be cast in a simple form:
\begin{align}
\delta M_{L}\,=\,
K\,f(d)\,K^T + R\,f(D)\,R^T\,,
\label{eq:oneloop}
\end{align}
where naturally $f$ is applied element-wise to diagonal matrices,
with
\begin{align}
f(m) \,\equiv \, \frac{m^3}{(4\pi\,v)^2} \left(\frac{3\log(m/m_Z)}{m^2/m_Z^2 -1} + \frac{\log(m/m_H)}{m^2/m_H^2 -1}\right)\,.
\end{align}
Models with very small deviations from unitarity (standard seesaw)
have a very small $X$ and hence a correspondingly small $R = K X^\dagger$.
For these, the one-loop $\delta M_{L}$ corrections are negligible,
as can be seen from Eq.~\eqref{eq:oneloop}.
Namely (aside from the loop-factor suppression),
the terms with $K$ are suppressed by the light neutrino masses
$d$, whereas the effect of the heavier neutrino masses in $D$ is regulated
by the small entries of $R$.
However, in models with sizeable deviations from unitarity,
$R$ is not small and controlling $\delta M_{L}$
requires a mechanism such as a symmetry at the Lagrangian level.
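A numerical sketch of this correction, with $f$ applied element-wise, reads as follows (masses in GeV; the numerical inputs are approximate):
\begin{verbatim}
import numpy as np

v, mZ, mH = 174.0, 91.19, 125.1        # GeV (approximate inputs)

def f(mass):
    rZ, rH = (mass / mZ)**2, (mass / mH)**2
    return mass**3 / (4 * np.pi * v)**2 * (
        3 * np.log(mass / mZ) / (rZ - 1) + np.log(mass / mH) / (rH - 1))

def delta_ML(K, R, d, D):
    # One-loop correction: delta M_L = K f(d) K^T + R f(D) R^T,
    # with d, D arrays of light and heavy neutrino masses.
    return K @ np.diag(f(d)) @ K.T + R @ np.diag(f(D)) @ R.T
\end{verbatim}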
\subsection{Approximately Conserved Lepton Number}
\label{sec:approximateL}
Relatively light sterile neutrinos can arise naturally in a seesaw framework
in the presence of an approximately conserved lepton number~%
\cite{Shaposhnikov:2006nn, Kersten:2007vk, Ibarra:2010xw}.
Such a U(1)$_L$ symmetry, when exact, imposes specific textures on the mass matrices $m$ and $M$.
These textures may be slightly perturbed when the symmetry is approximate,%
\footnote{We allow for small perturbations to all entries of $m$ and $M$,
without any presumption regarding their origin.
The case where only $M$ departs from its symmetric texture,
which manifestly corresponds to a soft breaking of the lepton number symmetry,
was considered in Refs.~\cite{Lavoura:2000ci,Grimus:2001ex}.}
allowing for non-vanishing Majorana neutrino masses and non-trivial mixing.
We are interested in scenarios where at least one of the
mostly-sterile neutrinos is light, with a mass of $\mathcal{O}$(eV),
in order to establish a connection to the SBL anomalies.
We are further looking for situations where some of the Yukawa
couplings are of order one.
The choice of lepton charges should then be such that,
in the exact conservation limit:
i) $M$ has zero determinant,%
\footnote{In previous work \cite{Agostinho:2017wfs}, several cases were analysed following the U(1)$_L$
charge assignment $\lambda_\nu = (1,-1,0)$ and $\lambda_L=(1,1,1)$, which
however implies $\det \,M \neq 0$ in the symmetric limit.}
and ii) not all entries of $m$ are small.
These conditions limit the possible U(1)$_L$ charge assignments.
The possibility of having a conserved (non-standard) lepton number
has been considered in the past~\cite{Wolfenstein:1981kw,Leung:1983ti,Branco:1988ex}.
Following the analysis of Ref.~\cite{Branco:1988ex},
we work in a certain `symmetry' weak basis in which
lepton charge vectors $\lambda_\nu$ and $\lambda_L$
are assigned to the three right-handed neutrino singlets
and to the three lepton doublets, respectively.
As anticipated in section~\ref{sec:param}, one generically has $\Sigma \neq \mathds{1}$
in Eq.~\eqref{eq:Zgeneral}.
Up to permutations, there are only 4 non-trivial
choices of U(1)$_L$ charges leading to an $M$ with zero determinant
in the exact conservation limit:
$\lambda_\nu=(1,1,0)$, $\lambda_\nu=(1,-1,-1)$,
$\lambda_\nu=(1,1,1)$ and $\lambda_\nu=(0,0,1)$.
Of these four, $\lambda_\nu=(1,1,1)$ is not viable as it imposes $M=0$,
and $\lambda_\nu=(0,0,1)$ is discarded since
requiring controlled loop corrections in our framework
effectively reduces it to the case with $\lambda_\nu = (1,-1,-1)$.
We look into the remaining two options $\lambda_\nu=(1,1,0)$ and $\lambda_\nu=(1,-1,-1)$ in what follows.
Given $\lambda_\nu$, the choice of $\lambda_L$ follows
from the requirements that the seesaw mechanism is operative for all light neutrinos
and that all left-handed neutrinos are allowed to couple to
the right-handed ones~\cite{Branco:1988ex}.
\subsubsection{Case I: \texorpdfstring{$\lambda_\nu = (1,1,0)$}{lambda nu = (1,1,0)}}
\label{sec:caseI}
For this case, the only sensible choice for the doublet charges is
$\lambda_L = (0,0,0)$.
The mass matrices in the symmetric limit read:
\begin{align}
m\,=\,\left(
\begin{array}{ccc}
0 & 0 & a \\
0 & 0 & b \\
0 & 0 & c%
\end{array}%
\right) \,,\quad
M\,=\,\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & M_3%
\end{array}%
\right) \,.
\label{eq:MI}
\end{align}
Breaking the symmetry will generate the light neutrino masses, two
(mostly-)sterile states with masses $M_1$ and $M_2$ that can be much smaller than
$M_3$, and a heavy sterile with a mass close to $M_3$.
As expected, some Yukawa couplings remain of $\mathcal{O}(1)$,
which can also be understood from Eq.~\eqref{eq:mdirac}, expressing the
dependence of the Dirac mass matrix $m$ on the sterile masses contained in $D$.
This case is further separated into two subcases: one can allow for a hierarchy $M_2 \gg M_1$ ({\bf case Ia}),
which may arise in a scenario of stepwise symmetry breaking, or instead focus on a single new light-sterile scale,
with $M_1 \sim M_2$ ({\bf case Ib}).
\subsubsection{Case II: \texorpdfstring{$\lambda_\nu = (1,-1,-1)$}{lambda nu = (1,-1,-1)}}
\label{sec:caseII}
For this case, one is instead led to
$\lambda_L = (1,1,1)$. In the exact conservation limit,
the mass matrices are given by:
\begin{align}
m\,=\,\left(
\begin{array}{ccc}
a & 0 & 0 \\
b & 0 & 0 \\
c & 0 & 0%
\end{array}%
\right)\,,\quad
M\,=\,\left(
\begin{array}{ccc}
0 & A & B \\
A & 0 & 0 \\
B & 0 & 0%
\end{array}%
\right)\,.
\label{eq:MII}
\end{align}
In this limit, one has two degenerate neutrinos with mass
$\sqrt{|A|^2+|B|^2}$ and opposite CP parities, forming a single heavy Dirac particle.
Breaking the symmetry will allow for the generation of light neutrino masses
and for another massive sterile state to arise, with a mass that can be much smaller than $|A|$ and $|B|$.
It will additionally lift the mass degeneracy for the Dirac neutrino,
producing a pseudo-Dirac neutrino pair~\cite{Wolfenstein:1981kw,Petcov:1982ya}.
As pointed out in~\cite{Ibarra:2011xn}, a strong mass degeneracy translates into a
symmetry in the $R$ block of the mixing matrix, namely $R_{\alpha 2}\simeq \pm i\, R_{\alpha 3}$ ($\alpha = e,\mu,\tau$).
Such a relation can be seen to play a fundamental role in suppressing the effect of the
large masses $M_2$ and $M_3$ in the one-loop correction $\delta M_L$, see Eq.~\eqref{eq:oneloop}.
It signals that one is close to the limit of lepton number conservation,
even if $R_{\alpha 2}$ and $R_{\alpha 3}$ are not extremely suppressed.%
\footnote{Nonetheless, it is true that in the exact conservation limit $d=X=R=0$.}
One is then allowed to have relatively large Yukawa couplings
even if $M_2 \simeq M_3$ are not as large as the $M_3$ of case I.
This can be seen from Eq.~\eqref{eq:mdirac},
which can be written in the form $m \,=\, R\, D \left( Z^{-1}\right)^* $.
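This relation is also convenient numerically. A minimal sketch, assuming NumPy arrays \texttt{R}, \texttt{D} and \texttt{Z} holding the corresponding $3\times 3$ blocks (the variable names are ours, not part of any library):
\begin{verbatim}
import numpy as np

# Dirac mass matrix from m = R D (Z^{-1})^*, cf. Eq. (mdirac);
# R, D, Z are 3x3 complex arrays defined elsewhere
m = R @ D @ np.linalg.inv(Z).conj()
\end{verbatim}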
The mass of the pseudo-Dirac pair can be at the TeV scale~%
\cite{Ibarra:2010xw,Ibarra:2011xn,Dinh:2012bp,Cely:2012bz,Penedo:2017knr},
since the size of the lightest neutrino masses is protected by approximate lepton number conservation.
The same symmetry and effects are present in the examples given in Ref.~\cite{Agostinho:2017wfs}.
In the following section, we perform a numerical analysis focusing on cases Ia, Ib and II
and incorporating an eV sterile neutrino in the seesaw spectrum while allowing for a
mixing matrix $K$ with sizeable deviations from unitarity.
\section{Numerical Analysis and Benchmarks}
\label{sec:numeric}
For each of the cases Ia, Ib and II defined in the previous section,
we explicitly provide a numerical benchmark for the seesaw mass matrices,
and explore the parameter space of qualitatively similar seesaw structures.
As anticipated in section~\ref{sec:osc}, we further provide approximate forms
of the transition probabilities of muon to electron (anti-)neutrinos,
$P_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}$,
obtained from Eq.~\eqref{eq:probability} while having in mind SBL and LBL setups,
for each of the three scenarios.
Given that recent global fits~\cite{deSalas:2018bym,Esteban:2018azc} disfavour
a light neutrino mass spectrum with inverted ordering (IO) with respect to one with normal ordering (NO)
at more than the $3\sigma$ level, we restrict the mass ordering to NO in our numerical examples.
Before proceeding, note that the three scenarios of interest exhibit
some correspondence to the commonly considered
3+1+1 (case Ia), 3+2 (case Ib), and 3+1 (case II)
schemes, see for instance~\cite{Giunti:2019aiy}.
Thus, even though the connection to the latter is not exact --
in particular, the spectrum of case Ib is not that of a typical 3+2 scenario --
it may prove useful to consider quantities therein defined in our analysis,
namely~\cite{Gariazzo:2015rra}
\begin{align}
\sin^2 2 \vartheta^{(k)}_{\mu e}
\,\equiv\, 4\, \big|\Theta_{\mu k}\big|^2 \,\big|\Theta_{e k}\big|^2
\,,
\label{eq:sthmue}
\end{align}
with $k=4$ in the 3+1 case, while $k=4,5$ for the other two cases.
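This quantity is immediate to evaluate from the active rows of the full mixing matrix. A minimal sketch, assuming a complex NumPy array \texttt{Theta} with rows ordered $(e,\mu,\tau)$ and columns labelling mass eigenstates (names and conventions are ours):
\begin{verbatim}
import numpy as np

def sin2_2theta_mue(Theta, k):
    # sin^2 2theta_mue^(k) = 4 |Theta_{mu k}|^2 |Theta_{e k}|^2,
    # with k the 1-based mass-eigenstate index (k = 4, 5, ...)
    e, mu = 0, 1
    return 4.0 * abs(Theta[mu, k - 1])**2 * abs(Theta[e, k - 1])**2
\end{verbatim}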
According to the global fit to SBL data of Ref.~\cite{Gariazzo:2015rra},
explaining the observed anomalies requires
$\Delta m^2_{41} \in [0.87,\,2.04]$ eV$^2$ and
$\sin^2 2 \vartheta^{(4)}_{\mu e} \in [6.5\times 10^{-4},\, 2.6 \times 10^{-3}]$ ($99.7\%$ CL)
in the 3+1 scheme. This result may also be of relevance in the 3+1+1 scheme.
Although we take these intervals as guidelines in our numerical explorations,
it is not our aim to address the tensions in the current experimental situation of the SBL anomalies.
Thus, we only restrict our sterile neutrino parameter space at the outset through the conservative bounds
$\sum_i |R_{\alpha i}|^2 < 0.1$ ($\alpha = e,\mu,\tau$),
and via the constraints of \cite{Astier:2001yj, Astier:2003gs, Adhikari:2016bei}
on mixing matrix elements corresponding to
large mass-squared differences $\Delta m^2 \sim 10\text{ eV}^2 - 1\text{ keV}^2$,
as anticipated in section~\ref{sec:devunit}.
\subsection{Case Ia: \texorpdfstring{$M_1\ll M_2\ll M_3$}{M1 << M2 << M3}}
\begin{table}
\renewcommand{\thetable}{\arabic{table}a}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lr}
\toprule
& {\bf Case Ia} numerical benchmark \\
\midrule
\addlinespace
$m$ (GeV) &
$\begin{bmatrix*}[r]
( 2.11 -5.58\, i)\times 10^{-11} & ( 1.29 +1.65\, i)\times 10^{-9} & 11.2 -10.9\, i \\
( 0.85 +2.22\, i)\times 10^{-10} & (-5.29 +3.99\, i)\times 10^{-9} & 10.4 + 0.4\, i\\
(-0.26 +1.98\, i)\times 10^{-10} & (-4.51 -1.05\, i)\times 10^{-9} & -10.5 -34.6\, i
\end{bmatrix*}$ \\
\addlinespace
$M$ (GeV) &
$\begin{bmatrix*}[r]
8.93\times 10^{-10} & 4.45\times 10^{-11} & 1.28\times 10^{-13} \\
4.45\times 10^{-11} & 1.00\times 10^{-6} & 6.22\times 10^{-11} \\
1.28\times 10^{-13} & 6.22\times 10^{-11} & 5.00\times 10^{14}
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
$K$ &
$\begin{bmatrix*}[r]
-0.797 +0.071 \, i & 0.578 +0.006 \, i & -0.115+0.096 \, i \\
0.293 -0.086 \, i & 0.575 +0.027 \, i & 0.719+0.010 \, i \\
-0.516 -0.004 \, i & -0.570 +0.020 \, i & 0.606
\end{bmatrix*}$
\\
\addlinespace
$R$ &
$\begin{bmatrix*}[r]
0.024 -0.057 \, i & ( 1.29 +1.65 \, i)\times 10^{-3} & (-2.24+2.18 \, i) \times 10^{-14} \\
0.093 +0.223 \, i & (-5.29 +3.99 \, i)\times 10^{-3} & (-2.08+0.08 \, i) \times 10^{-14} \\
-0.026 +0.199 \, i & (-4.51 -1.05 \, i)\times 10^{-3} & (-2.10+6.92 \, i) \times 10^{-14}
\end{bmatrix*}$
\\
\addlinespace
$X$ &
$\begin{bmatrix*}[r]
-0.003-0.015 \, i & 0.102 +0.023 \, i & 0.050 -0.317 \, i \\
(-5.12+1.72 \, i)\times 10^{-4} & ( 0.46 -4.33 \, i)\times 10^{-3} & (-7.30-2.18 \, i)\times 10^{-3} \\
( 0.23+5.33 \, i)\times 10^{-14} & (-3.44 +2.75 \, i)\times 10^{-14} & ( 0.36-4.41 \, i)\times 10^{-14}
\end{bmatrix*}$
\\
\addlinespace
$O_c$ (tree level) \!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!&
$\begin{bmatrix*}[r]
-0.53 +0.12 \, i & 0.22 -1.12 \, i & -1.41 -0.22 \, i \\
0.22 +0.56 \, i & -1.50 -0.13 \, i & -0.30 +1.03 \, i \\
1.00 -0.06 \, i & 0.23 +0.25 \, i & -0.14 -0.01 \, i
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
Masses &
$\begin{matrix*}[l]
m_1 \simeq 1.06\times 10^{-3}\text{ eV}\,,\quad & m_2 \simeq 8.48\times 10^{-3}\text{ eV} \,,\quad & m_3 \simeq 5.02\times 10^{-2}\text{ eV} \,,\,\\
M_1 \simeq 1.00\text{ eV}\,,\, & M_2 \simeq 1.00\text{ keV} \,,\, & M_3 \simeq 5.00\times 10^{14}\text{ GeV}
\end{matrix*}$ \\
\addlinespace
\midrule
\addlinespace
$3\nu$ $\Delta m^2$ &
$\Delta m^2_\odot = \Delta m^2_{21} \simeq 7.08 \times 10^{-5}\text{ eV}^2\,,\,
\quad\,\,\,\,
\Delta m^2_\text{atm} = \Delta m^2_{31} \simeq 2.52 \times 10^{-3}\text{ eV}^2$
\\
\addlinespace
$3\nu$ mixing angles \!\!\!\!\!\!\!\!\!\!\!\!&
$\sin^2 \theta_{12} \simeq 0.344\,,\,\quad\,\,\,\,
\sin^2 \theta_{23} \simeq 0.585\,,\,\quad\,\,\,\,
\sin^2 \theta_{13} \simeq 0.0236$
\\
\addlinespace
$3\nu$ CPV phases \!\!\!\!\!\! &
$\delta \simeq 1.21 \pi\,,\,\quad\,\,\,\,
\alpha_{21} \simeq 0.06 \pi\,,\,\quad\,\,\,\,
\alpha_{31} \simeq 0.06 \pi$
\\
\addlinespace
\midrule
\addlinespace
$\sin^2 2 \vartheta^{(i)}_{\mu e}$ &
$\sin^2 2 \vartheta_{\mu e}^{(4)} \simeq 8.8\times 10^{-4}\,,\,\quad\,\,\,\,
\sin^2 2 \vartheta_{\mu e}^{(5)} \simeq 7.7\times 10^{-10}$
\\
\bottomrule
\end{tabular}
\caption{Numerical benchmark for case Ia. The ordering of light neutrinos is NO.
From the input matrices $m$ and $M$, and taking into account one-loop corrections, the other quantities here listed follow.
It should be noted that $O_c$ of Eq.~\eqref{eq:xc} is only defined at tree level.
Values for the mixing angles and CPV phases of the $3\nu$-framework in the standard parametrisation~\cite{PDG2019}
are extracted by identifying the unitary matrix $V$ with a unitary $3\times 3$ PMNS mixing matrix.}
\label{tab:Ia}
\end{table}
The numerical data for the benchmark corresponding to this case is given in Table~\ref{tab:Ia},
where the one-loop correction of Eq.~\eqref{eq:oneloop} has been taken into account.
Apart from the three light mostly\discretionary{-}{-}{-}active neutrinos,
the spectrum includes three mostly-sterile neutrinos with masses
$M_1 \sim 1$ eV, $M_2 \sim 1$ keV,
and $M_3$ a few orders of magnitude below the grand unification (GUT) scale, $M_3 \sim 10^{14}$ GeV.
The keV-scale neutrino may be a viable dark matter candidate~\cite{Adhikari:2016bei, Boyarsky:2018tvu}.
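All derived quantities in Table~\ref{tab:Ia} follow from diagonalising the full $6\times 6$ symmetric mass matrix built out of $m$ and $M$. A minimal sketch of this step via a standard Takagi factorisation (an eigendecomposition-based construction of ours; it assumes a non-degenerate spectrum, and in practice the huge hierarchy of scales calls for extended-precision arithmetic):
\begin{verbatim}
import numpy as np

def takagi(A):
    # Takagi factorisation of a complex symmetric A:
    # returns (masses, V) with V unitary and V^T A V = diag(masses)
    m2, V = np.linalg.eigh(A.conj().T @ A)   # squared masses, ascending
    d = np.diag(V.T @ A @ V)                 # diagonal up to phases
    V = V * np.exp(-0.5j * np.angle(d))      # absorb the phases
    return np.sqrt(m2), V

# full seesaw mass matrix from the Dirac and Majorana blocks
Mfull = np.block([[np.zeros((3, 3)), m], [m.T, M]])
masses, Vfull = takagi(Mfull)
\end{verbatim}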
For the spectrum of case Ia, one has $n=2$ in Eq.~\eqref{eq:wdef}.
In the context of a LBL experiment (e.g.~DUNE~\cite{DUNE}),
the expression of Eq.~\eqref{eq:probability}
applied to the transition probability of muon to electron (anti-)neutrinos
can, in this case, be approximated by:
\begin{equation}
\begin{aligned}
P^\text{LBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
&\frac{1}{(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\mu e}\right|^2
\\ &
- 4 \sum_{i>j}^3\,\re
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin^2 \Delta_{ij}
\pm 2 \sum_{i>j}^{3}\,\im
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin 2 \Delta_{ij}
\\ &
- 4 \,\cdot\,\frac{1}{2}\,\re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
- 4 \,\cdot\,\frac{1}{2}\,\re
\left(\Theta_{\mu 5}^*\,\Theta_{e 5}\,\sum_{j=1}^4\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\Bigg] \,,
\label{eq:LBL1Ia}
\end{aligned}
\end{equation}
where terms depending on $\Delta_{4j},\,\Delta_{5j} \gg 1$
have been replaced by their averaged versions
($\sin^2 \Delta_{ij} \to 1/2$, $\sin 2\Delta_{ij} \to 0$).
While the normalisation and the first term in this equation signal the loss of unitarity and
a zero-distance effect, respectively,
the last two terms explicitly represent the effects
of the two lightest mostly-sterile states in oscillations.
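A direct numerical transcription of Eq.~\eqref{eq:LBL1Ia} may be useful. The helper below is ours: \texttt{Theta} holds the $3\times 5$ active rows for the five relevant states, \texttt{Delta} the phases $\Delta_{ij}$ of the three light states, and the sign convention (upper sign for neutrinos) is our assumption:
\begin{verbatim}
import numpy as np

def P_LBL_mue(Theta, Delta, antinu=False):
    e, mu = 0, 1
    sgn = -1.0 if antinu else 1.0
    TT = Theta @ Theta.conj().T
    P = abs(TT[mu, e])**2                       # zero-distance term
    for i in range(1, 3):                       # oscillating light terms
        for j in range(i):
            q = (Theta[mu, i].conj() * Theta[e, i]
                 * Theta[mu, j] * Theta[e, j].conj())
            P += (-4 * q.real * np.sin(Delta[i][j])**2
                  + sgn * 2 * q.imag * np.sin(2 * Delta[i][j]))
    for k, jmax in ((3, 3), (4, 4)):            # averaged sterile terms
        P -= 2 * (Theta[mu, k].conj() * Theta[e, k]
                  * sum(Theta[mu, j] * Theta[e, j].conj()
                        for j in range(jmax))).real
    return P / (TT[mu, mu].real * TT[e, e].real)
\end{verbatim}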
If one is in a condition similar to that of
the numerical benchmark of Table~\ref{tab:Ia},
for which $|(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee} - 1|$
and $|(\Theta\Theta^\dagger)_{\mu e}|^2$ are negligible,
this expression can be further approximated by:
\begin{align}
P^\text{LBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
P^{\text{LBL, }3\nu}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
+ \frac{1}{2}\sin^2 2 \vartheta^{(4)}_{\mu e}
\,,
\label{eq:LBL2Ia}
\end{align}
where we have defined a $3\nu$-framework transition probability which, however, incorporates the effects of deviations of $K$ from unitarity,
\begin{align}
P^{\text{LBL, }3\nu}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\equiv\,
- 4 \sum_{i>j}^3\,\re
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin^2 \Delta_{ij}
\pm 2 \sum_{i>j}^{3}\,\im
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin 2 \Delta_{ij}
\,,
\end{align}
and have used the definition of Eq.~\eqref{eq:sthmue},
the unitarity of the full $6\times 6$ mixing matrix, and the fact that
$|\Theta_{\alpha 4}|^2 (= |R_{\alpha 1}|^2) \gg |\Theta_{\alpha 5}|^2 (= |R_{\alpha 2}|^2) \ggg |R_{\alpha 3}|^2$.
In a SBL experiment (e.g.~MicroBooNE~\cite{MicroBooNE}),
the relevant form of Eq.~\eqref{eq:probability} for
$\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e$
transitions is:
\begin{equation}
\begin{aligned}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
&\frac{1}{(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\mu e}\right|^2
- 4 \,\cdot\,\frac{1}{2}\,\re
\left(\Theta_{\mu 5}^*\,\Theta_{e 5}\,\sum_{j=1}^4\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\\ &
- 4 \,\re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin^2 \Delta_{41}
\pm 2 \,\im
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^{3}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin 2 \Delta_{41}
\Bigg] \,,
\label{eq:SBL1Ia}
\end{aligned}
\end{equation}
with $\Delta_{41}\simeq \Delta_{42}\simeq \Delta_{43}$, and
where terms depending on $\Delta_{5j} \gg 1$
have been replaced by their averaged versions
($\sin^2 \Delta_{5j} \to 1/2$, $\sin 2\Delta_{5j} \to 0$).
In this context, one is sensitive to oscillations due to the scale of the mass-squared differences $\Delta m^2_{4j}$ with $j=1,2,3$,
while the oscillations pertaining to smaller mass-squared differences have not yet had a chance to develop.
Finally, if one is in a condition similar to that of
the numerical benchmark, this expression can be simply approximated by:
\begin{align}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\, \sin^2 2 \vartheta^{(4)}_{\mu e}\,\sin^2 \Delta_{41} \,,
\label{eq:SBL2Ia}
\end{align}
where once again one has taken into account
the unitarity of the full mixing matrix and the fact that
$|R_{\alpha 1}|^2 \gg |R_{\alpha 2}|^2 \ggg |R_{\alpha 3}|^2$.
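To get a feel for the numbers, one can use the engineering form of the phase, $\Delta_{41} \simeq 1.267\,\Delta m^2_{41}[\text{eV}^2]\,L[\text{km}]/E[\text{GeV}]$. A minimal sketch (the baseline and energy below are illustrative choices of ours, not tied to a specific experiment):
\begin{verbatim}
import numpy as np

def P_SBL_mue(s22th4, dm2_41_eV2, L_km, E_GeV):
    # Eq. (SBL2Ia): effective two-flavour appearance probability
    Delta41 = 1.267 * dm2_41_eV2 * L_km / E_GeV
    return s22th4 * np.sin(Delta41)**2

# benchmark values of Table 1a: sin^2 2theta ~ 8.8e-4, Dm2_41 ~ 1 eV^2
print(P_SBL_mue(8.8e-4, 1.0, 0.5, 0.7))   # ~ 5e-4
\end{verbatim}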
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Case_Ia.png}
\caption{
Active-sterile mixing measure $\sin^2 2\vartheta_{\mu e}^{(4)}$
{\it versus} the lightest neutrino mass $m_\text{min}$ from a scan of the case-Ia parameter space,
with NO ($m_\text{min} = m_1$). The heavy spectrum at tree level has
$M_1 = 1$ eV and $M_2 = 1$ keV, while three values of
the heaviest mass are considered, $M_3 = 10^{13}\, (10^{14})\, [5\times 10^{14}]$ GeV,
corresponding to the black (dark blue) [light blue] points in the scatter plot.
The horizontal green band shows the $99.7\%$ CL interval of Ref.~\cite{Gariazzo:2015rra},
while the vertical red exclusion band is obtained by combining
the most stringent bound on the sum of light neutrino masses from cosmology,
$\sum_i m_i < 0.12$ eV ($95\%$ CL)~\cite{Vagnozzi:2017ovm,Aghanim:2018eyx},
with the $3\sigma$ ranges of mass-squared differences.
The dark green contour delimits the region inside which loop-stable points have been found (see text),
while the benchmark of Table~\ref{tab:Ia} is marked in yellow.}
\label{fig:Ia}
\end{figure}
To further explore the parameter space of case Ia, we have
produced numerical seesaw structures by specifying
tree-level values of the unitary part $V$ of the mixing matrix $K$,
the mostly-active and mostly-sterile masses in $d$ and $D$,
and by scanning the complex orthogonal matrix $O_c$, parametrised
as a product of three complex rotations times a sign corresponding to its determinant.
We are interested in seesaw structures qualitatively similar to our benchmark,
so that we specify (at tree level) $M_1 = 1$ eV and $M_2 = 1$ keV,
while considering three different values for the heaviest neutrino mass,
$M_3 \in \{10^{13},\, 10^{14},\, 5\times 10^{14}\}$ GeV.
While the lightest neutrino mass $m_\text{min}$ is scanned
in the range $[10^{-4},\,0.1]$ eV, the remaining elements of $d$ are fixed
by specifying the solar and atmospheric mass differences.
The $3\nu$ mixing angles and Dirac CPV phase entering $V$ as well as
the aforementioned $3\nu$ mass-squared differences
are chosen to be the central values of the global fit of Ref.~\cite{Esteban:2018azc}.
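The scan over $O_c$ is simple to implement. A minimal sketch of the parametrisation just described, i.e.~a product of three complex rotations times an overall sign (the Gaussian sampling of the complex angles is an arbitrary choice of ours):
\begin{verbatim}
import numpy as np

def complex_rotation(theta, i, j, n=3):
    # rotation by a complex angle: R R^T = 1, but R is not unitary
    R = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

def random_Oc(scale=1.0, sign=+1):
    th = (np.random.randn(3) + 1j * np.random.randn(3)) * scale
    return sign * (complex_rotation(th[0], 0, 1)
                   @ complex_rotation(th[1], 0, 2)
                   @ complex_rotation(th[2], 1, 2))
\end{verbatim}
One can check that $O_c\,O_c^T = \mathds{1}$ holds to machine precision for any complex angles, while individual entries of $O_c$ may grow large, as in the benchmark of Table~\ref{tab:II}.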
We stress that, as was the case for the numerical benchmark of Table~\ref{tab:Ia},
$3\nu$ mixing angles and CPV phases obtained while
identifying $V$ with a unitary $3\times 3$ mixing matrix
are expected to deviate slightly from the mixing angles and CPV phases arising in
a parametrisation of the full $6\times 6$ mixing matrix $\mathcal{V}$,
due to deviations from unitarity.
In Figure~\ref{fig:Ia}
we show the values of $\sin^2 2\vartheta_{\mu e}^{(4)}$ in Eq.~\eqref{eq:sthmue}
against the values of the lightest neutrino mass, for the numerical examples found for case Ia.
Only points for which $\mbox{tr}\left[m\,m^\dagger\right] \in [0.01,\,1]\,v^2$ are kept.%
\footnote{One may avoid very small Yukawa couplings by choosing appropriate values for $M_3$ and $O_c$.}
The horizontal green band highlights the range of $\sin^2 2\vartheta_{\mu e}^{(4)}$
preferred by the global fit of Ref.~\cite{Gariazzo:2015rra} and cited at the beginning of this section.
The dark green contour instead delimits the region inside which relatively loop-stable points can be found,
i.e.~points which,
after the one-loop correction of Eq.~\eqref{eq:oneloop} has been implemented,
still have $3\nu$ mass-squared differences and mixing angles (extracted from $V$)
inside the $3\sigma$ ranges of the fit~\cite{Esteban:2018azc}.
From the figure it can be seen that
raising the scale of $M_3$ will lower
the scale of the light neutrino masses,
disallowing too large values of $m_\text{min}$.
The approximations used in deriving the oscillation formulae
of Eqs.~\eqref{eq:LBL2Ia} and~\eqref{eq:SBL2Ia}
hold for all the plotted points.
Some quantities of potential phenomenological relevance, unrelated to
neutrino oscillations, include
the effective electron neutrino mass in $\beta$-decay, $m_\beta$,
the absolute value of the effective neutrino Majorana mass controlling the rate
of neutrinoless double beta ($(\beta\beta)_{0\nu}$-)decay, $|m_{\beta\beta}|$,
and the $\mu \to e \gamma$ branching ratio, $BR(\mu \rightarrow e\gamma)$.
For all numerical examples pertaining to case Ia which are stable under loop corrections,
the latter is unobservably small, $BR(\mu \rightarrow e\gamma) \ll 10^{-30}$,
while the former two are bounded by $m_\beta < 9.4$ meV and $|m_{\beta\beta}| < 6.7$ meV,
and hence still out of reach of present and near-future experiments.
In the computation of $|m_{\beta\beta}|$, the effects of the eV- and keV-scale
neutrinos have been taken into account.
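For reference, both effective masses follow directly from the electron row of the mixing matrix. A minimal sketch (helpers of ours; the $|m_{\beta\beta}|$ formula assumes all included states are much lighter than the $\sim 100$ MeV momentum scale of the decay, which holds for the light, eV- and keV-scale states but not for $M_3$):
\begin{verbatim}
import numpy as np

def m_beta(Ve, masses):
    # m_beta = sqrt( sum_i |V_ei|^2 m_i^2 )
    return np.sqrt(np.sum(np.abs(Ve)**2 * masses**2))

def m_betabeta(Ve, masses):
    # |m_bb| = | sum_i V_ei^2 m_i |  (light-state approximation)
    return abs(np.sum(Ve**2 * masses))
\end{verbatim}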
In the presence of a relatively large active-sterile mixing,
future KATRIN-like experiments may be sensitive
to the existence of sterile neutrinos with $\mathcal{O}$(eV) masses~\cite{Riis:2010zm}.
This sensitivity is controlled by $|R_{e1}|^2 = |\mathcal{V}_{e4}|^2$, which is found to be bounded by $|R_{e1}|^2 \lesssim 0.02$
for the loop\discretionary{-}{-}{-}stable numerical examples of this case.
Sterile neutrinos with $\mathcal{O}$(keV) masses may instead be detectable via kink-like signatures
in next-generation $\beta$-decay experiments,
even in the presence of small mixing $|R_{e2}|^2= |\mathcal{V}_{e5}|^2 \sim 10^{-6}$~\cite{Mertens:2014nha}.
\subsection{Case Ib: \texorpdfstring{$M_1\sim M_2\ll M_3$}{M1 \textasciitilde{} M2 << M3}}
\begin{table}
\addtocounter{table}{-1}
\renewcommand{\thetable}{\arabic{table}b}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lr}
\toprule
& {\bf Case Ib} numerical benchmark \\
\midrule
\addlinespace
$m$ (GeV) &
$\begin{bmatrix*}[r]
( 0.46 - 2.57\, i) \times 10^{-10} & ( 2.37 + 0.54\, i) \times 10^{-10} & 11.24 - 2.72\, i \\
(-5.50 - 1.04\, i) \times 10^{-10} & ( 0.68 - 6.20\, i) \times 10^{-10} & 8.90 -27.50\, i \\
(-3.69 + 1.78\, i) \times 10^{-10} & (-1.60 - 4.45\, i) \times 10^{-10} & -1.85 + 0.43\, i\end{bmatrix*}$ \\
\addlinespace
$M$ (GeV) &
$\begin{bmatrix*}[r]
2.88\times 10^{-9} & 8.24\times 10^{-11} & 1.41\times 10^{-11}\\
8.24\times 10^{-11} & 2.87\times 10^{-9} & 1.42\times 10^{-11}\\
1.41\times 10^{-11} & 1.42\times 10^{-11} & 1.00\times 10^{14}
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
$K$ &
$\begin{bmatrix*}[r]
-0.799 +0.137 \, i & 0.558 +0.001 \, i & 0.116 -0.071 \, i \\
0.272 -0.172 \, i & 0.582 -0.036 \, i & -0.695 +0.014 \, i \\
-0.480 +0.099 \, i & -0.560 +0.141 \, i & -0.620 -0.019 \, i
\end{bmatrix*}$
\\
\addlinespace
$R$ &
$\begin{bmatrix*}[r]
0.039 +0.077\, i & 0.067 -0.040\, i & (-1.12 +0.27\, i)\times 10^{-13} \\
0.156 -0.105\, i & -0.097 -0.170\, i & (-0.89 +2.75\, i)\times 10^{-13} \\
0.061 -0.140\, i & -0.115 -0.071\, i & ( 1.85 -0.43\, i)\times 10^{-14}
\end{bmatrix*}$
\\
\addlinespace
$X$ &
$\begin{bmatrix*}[r]
-0.003 +0.009\, i & 0.073 -0.064 \, i & -0.168 -0.196 \, i \\
-0.009 -0.005\, i & 0.049 +0.078 \, i & 0.170 -0.185 \, i \\
(1.40 -5.37 \, i)\times 10^{-14} & (-1.47 -1.74 \, i)\times 10^{-13} & (0.37 +2.24 \, i)\times 10^{-13}
\end{bmatrix*}$
\\
\addlinespace
$O_c$ (tree level) \!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!&
$\begin{bmatrix*}[r]
-1.06 -0.51 \, i & -1.10 -1.29 \, i & -1.51 +1.30 \, i \\
0.75 -1.08 \, i & 1.40 -0.83 \, i & -1.46 -1.35 \, i \\
0.91 +0.31 \, i & -0.60 +0.44 \, i & 0.32 -0.05 \, i
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
Masses &
$\begin{matrix*}[l]
m_1 \simeq 0.24\times 10^{-3}\text{ eV}\,,\quad & m_2 \simeq 8.76\times 10^{-3}\text{ eV} \,,\quad & m_3 \simeq 5.00\times 10^{-2}\text{ eV} \,,\,\\
M_1 \simeq 3.00\text{ eV}\,,\, & M_2 \simeq 3.16\text{ eV} \,,\, & M_3 \simeq 1.00\times 10^{14}\text{ GeV}
\end{matrix*}$ \\
\addlinespace
\midrule
\addlinespace
$3\nu$ $\Delta m^2$ &
$\Delta m^2_\odot = \Delta m^2_{21} \simeq 7.66 \times 10^{-5}\text{ eV}^2\,,\,
\quad\,\,\,\,
\Delta m^2_\text{atm} = \Delta m^2_{31} \simeq 2.50 \times 10^{-3}\text{ eV}^2$
\\
\addlinespace
$3\nu$ mixing angles \!\!\!\!\!\!\!\!\!\!\!\!&
$\sin^2 \theta_{12} \simeq 0.327\,,\,\quad\,\,\,\,
\sin^2 \theta_{23} \simeq 0.562\,,\,\quad\,\,\,\,
\sin^2 \theta_{13} \simeq 0.0232$
\\
\addlinespace
$3\nu$ CPV phases \!\!\!\!\!\! &
$\delta \simeq 1.26 \pi\,,\,\quad\,\,\,\,
\alpha_{21} \simeq 0.11 \pi\,,\,\quad\,\,\,\,
\alpha_{31} \simeq 0.22 \pi$
\\
\addlinespace
\midrule
\addlinespace
$\sin^2 2 \vartheta^{(i)}_{\mu e}$ &
$\sin^2 2 \vartheta_{\mu e}^{(4)} \simeq 1.1\times 10^{-3}\,,\,\quad\,\,\,\,
\sin^2 2 \vartheta_{\mu e}^{(5)} \simeq 9.2\times 10^{-4}$
\\
\bottomrule
\end{tabular}
\caption{The same as Table~\ref{tab:Ia} for case Ib.}
\label{tab:Ib}
\end{table}
The numerical data for the benchmark corresponding to this case is given in Table~\ref{tab:Ib}.
Apart from the three light mostly-active neutrinos,
the spectrum includes three mostly-sterile neutrinos with masses
$M_1 \sim M_2 \sim 3$ eV, such that $M_2^2 - M_1^2 \simeq 1$ eV$^2$,
while $M_3 \sim 10^{14}$ GeV.
For the spectrum of case Ib, one has $n=2$ in Eq.~\eqref{eq:wdef}.
In a LBL context, the expression of Eq.~\eqref{eq:probability}
applied to the transition probability of muon to electron (anti-)neutrinos
can be approximated by the same expression~\eqref{eq:LBL1Ia} given for case Ia.
Once again,
the last two terms in that equation explicitly show the effects
of the two lightest mostly-sterile states in oscillations.
If one is in a condition similar to that of
the benchmark of Table~\ref{tab:Ib},
for which $|(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee} - 1|$
and $|(\Theta\Theta^\dagger)_{\mu e}|^2$ are negligible,
this expression can be further approximated by:
\begin{align}
P^\text{LBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
P^{\text{LBL, }3\nu}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,+\, \frac{1}{2}
\Big[
\sin^2 2 \vartheta^{(4)}_{\mu e}
+\sin^2 2 \vartheta^{(5)}_{\mu e}
+4\, \re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\Theta_{\mu 5}\,\Theta_{e 5}^*\right)
\Big] \,,
\label{eq:LBL2Ib}
\end{align}
where we have used the unitarity of the full $6\times 6$ mixing matrix,
and the fact that $|R_{\alpha 1}|^2 \sim |R_{\alpha 2}|^2 \ggg |R_{\alpha 3}|^2$.
The latter prevents us from neglecting $|R_{\alpha 2}|^2$ (and hence $\sin^2 2\vartheta^{(5)}_{\mu e}$)
with respect to $|R_{\alpha 1}|^2$ (and $\sin^2 2\vartheta^{(4)}_{\mu e}$), as we did in the previous case.
In a SBL context,
the relevant form of Eq.~\eqref{eq:probability} for
$\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e$
transitions in case Ib is:
\begin{equation}
\begin{aligned}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
&\frac{1}{(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\mu e}\right|^2
\\ &
- 4 \,\cdot\, \frac{1}{2} \,\re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
- 4 \,\cdot\, \frac{1}{2} \,\re
\left(\Theta_{\mu 5}^*\,\Theta_{e 5}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\\ &
- 4 \,\re
\left(\Theta_{\mu 5}^*\,\Theta_{e 5}\,\Theta_{\mu 4}\,\Theta_{e 4}^*\right)
\sin^2 \Delta_{54}
\pm 2 \,\im
\left(\Theta_{\mu 5}^*\,\Theta_{e 5}\,\Theta_{\mu 4}\,\Theta_{e 4}^*\right)
\sin 2 \Delta_{54}
\Bigg] \,,
\label{eq:SBL1Ib}
\end{aligned}
\end{equation}
where terms depending on the large $\Delta_{4j}$ and $\Delta_{5j}$ ($j=1,2,3$)
have been replaced by their averaged versions.
It is clear that this case does not correspond to a typical 3+2 scenario
(see for instance~\cite{Gariazzo:2015rra}),
since one has $\Delta m^2_{4j},\,\Delta m^2_{5j} \sim 10$ eV$^2$ for $j=1,2,3$.
Hence, one can be sensitive to oscillations due to the mass-squared difference $\Delta m^2_{54} \sim 1$ eV$^2$,
while oscillations pertaining to larger differences are averaged out and
those driven by smaller mass-squared differences are underdeveloped.
If one is in a condition similar to that of
the numerical benchmark, this expression can be approximated by:
\begin{equation}
\begin{aligned}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
\frac{1}{2} \Big( \sin^2 2 \vartheta^{(4)}_{\mu e} +\sin^2 2 \vartheta^{(5)}_{\mu e} \Big)
&+ 4 \,\re\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\Theta_{\mu 5}\,\Theta_{e 5}^*\right)\cos^2 \Delta_{54}\\
&\mp 2 \,\im\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\Theta_{\mu 5}\,\Theta_{e 5}^*\right)\sin 2 \Delta_{54}
\,,
\label{eq:SBL2Ib}
\end{aligned}
\end{equation}
where once again we have taken into account
the unitarity of the full mixing matrix and the fact that
$|R_{\alpha 1}|^2 \sim |R_{\alpha 2}|^2 \ggg |R_{\alpha 3}|^2$.
Notice that, unlike the typical 3+2 case,
oscillations here depend on the square of the cosine of the relevant $\Delta_{ij}$.
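This feature is easy to verify numerically. A minimal sketch of Eq.~\eqref{eq:SBL2Ib} (a helper of ours; the mapping of the $\mp$ sign onto neutrinos and antineutrinos is our reading of the convention):
\begin{verbatim}
import numpy as np

def P_SBL_Ib(Te4, Tmu4, Te5, Tmu5, Delta54, antinu=False):
    s4 = 4 * abs(Tmu4)**2 * abs(Te4)**2
    s5 = 4 * abs(Tmu5)**2 * abs(Te5)**2
    q = Tmu4.conj() * Te4 * Tmu5 * Te5.conj()
    sgn = 1.0 if antinu else -1.0
    return (0.5 * (s4 + s5) + 4 * q.real * np.cos(Delta54)**2
            + sgn * 2 * q.imag * np.sin(2 * Delta54))
\end{verbatim}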
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Case_Ib.png}
\caption{The average of $\sin^2 2\vartheta_{\mu e}^{(4)}$ and $\sin^2 2\vartheta_{\mu e}^{(5)}$
{\it versus} the lightest neutrino mass $m_\text{min}$, from a scan of the case-Ib parameter space,
with NO ($m_\text{min} = m_1$). The heavy spectrum at tree level has
$M_1 = 3.00$ eV and $M_2 = 3.16$ eV, while three values of
the heaviest mass are considered, $M_3 = 10^{13}\, (10^{14})\, [5\times 10^{14}]$ GeV,
corresponding to the black (dark blue) [light blue] points in the scatter plot.
The vertical red band corresponds to the cosmological constraint, as in Figure~\ref{fig:Ia}.
The dark green contour delimits the region inside which loop-stable points have been found,
while the benchmark of Table~\ref{tab:Ib} is marked in yellow.}
\label{fig:Ib}
\end{figure}
To further explore the parameter space of case Ib, we have
produced numerical seesaw structures qualitatively similar to the benchmark
by following the procedure described while discussing case Ia.
We have specified (at tree level) $M_1 = 3.00$ eV and $M_2 = 3.16$ eV,
and have considered three different values for the heaviest neutrino mass,
$M_3 \in \{10^{13},\, 10^{14},\, 5\times 10^{14}\}$ GeV.
In Figure~\ref{fig:Ib}
we show the values of the average of $\sin^2 2\vartheta_{\mu e}^{(4)}$ and $\sin^2 2\vartheta_{\mu e}^{(5)}$
against the values of the lightest neutrino mass, for the numerical examples found for case Ib.
The former quantity is expected to represent the order of magnitude of potential signals of this case
in SBL and LBL experiments. Only points for which $\mbox{tr}\left[m\,m^\dagger\right] \in [0.01,\,1]\,v^2$ are kept.
As before, the dark green contour delimits the region inside which relatively loop-stable points can be found.
Raising the scale of $M_3$ will again lower
the scale of the light neutrino masses.
The approximations used in deriving the oscillation formulae of Eqs.~\eqref{eq:LBL2Ib} and~\eqref{eq:SBL2Ib}
are valid for all the plotted points.
For all numerical examples pertaining to case Ib which are stable under loop corrections,
$BR(\mu \rightarrow e\gamma) \ll 10^{-30}$ is unobservably small,
while one finds $m_\beta < 9.3$ meV and $|m_{\beta\beta}| < 4.6$ meV,
still out of reach of present and near-future experiments.
In the computation of $|m_{\beta\beta}|$, the effects of both eV-scale
neutrinos have been taken into account.
One additionally finds the bounds $|R_{e1}|^2,\,|R_{e2}|^2 \lesssim 0.01$
for the loop-stable numerical examples of this case.
\subsection{Case II: \texorpdfstring{$M_1\ll M_2\sim M_3$}{M1 << M2 \textasciitilde{} M3}}
\begin{table}
\addtocounter{table}{-1}
\renewcommand{\thetable}{\arabic{table}c}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lr}
\toprule
& {\bf Case II} numerical benchmark \\
\midrule
\addlinespace
$m$ (GeV) &
$\begin{bmatrix*}[r]
-4.15 + 0.47 \,i & ( 4.51 - 1.49 \,i)\times 10^{-9} & (-1.59 + 0.13 \,i)\times 10^{-9} \\
3.98 + 6.17 \,i & (-5.04 - 4.64 \,i)\times 10^{-9} & ( 1.52 + 2.31 \,i)\times 10^{-9} \\
1.53 + 6.58 \,i & (-1.90 - 2.68 \,i)\times 10^{-9} & ( 0.59 + 2.59 \,i)\times 10^{-9}
\end{bmatrix*}$ \\
\addlinespace
$M$ (GeV) &
$\begin{bmatrix*}[r]
2.18\times 10^{-6} & 1390 & 2.96 \\
1390 & -2.19\times 10^{-6} & 5.52\times 10^{-7} \\
2.96 & 5.52\times 10^{-7} & 3.33\times 10^{-9}
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
$K$ &
$\begin{bmatrix*}[r]
0.825 +0.061 \,i & 0.536 +0.027 \,i & -0.092 +0.108 \,i \\
-0.302 +0.113 \,i & 0.581 -0.017 \,i & 0.728 -0.052 \,i \\
0.455 +0.054 \,i & -0.599 +0.075 \,i & 0.651 +0.002 \,i
\end{bmatrix*}$
\\
\addlinespace
$R$ &
$\begin{bmatrix*}[r]
0.063 -0.056 \,i & ( 2.11 -0.24 \,i)\times 10^{-3} & (-0.24 -2.11 \,i)\times 10^{-3} \\
-0.066 -0.147 \,i & (-2.03 -3.13 \,i)\times 10^{-3} & (-3.13 +2.03 \,i)\times 10^{-3} \\
-0.021 -0.036 \,i & (-0.79 -3.35 \,i)\times 10^{-3} & (-3.35 +0.79 \,i)\times 10^{-3}
\end{bmatrix*}$
\\
\addlinespace
$X$ &
$\begin{bmatrix*}[r]
0.042 +0.014 \,i & 0.007 +0.099 \,i & -0.069 +0.140 \,i \\
( 1.48 +0.64 \,i)\times 10^{-3} & ( 2.30 +0.66 \,i)\times 10^{-4} & (-2.11 +4.89 \,i)\times 10^{-3} \\
(-0.64 +1.48 \,i)\times 10^{-3} & (-0.66 +2.30 \,i)\times 10^{-4} & (-4.89 -2.11 \,i)\times 10^{-3}
\end{bmatrix*}$
\\
\addlinespace
$O_c$ (tree level) \!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!&
$\begin{bmatrix*}[r]
-0.21 +0.62 \,i & -1.02 +0.06 \,i & 0.62 + 0.31 \,i \\
(-1.11 +2.54 \,i)\times 10^4 & (-1.05 +2.42 \,i)\times 10^3 & ( 2.55 +1.12 \,i)\times 10^4 \\
(-2.54 -1.11 \,i)\times 10^4 & (-2.42 -1.05 \,i)\times 10^3 & (-1.12 +2.55 \,i)\times 10^4
\end{bmatrix*}$
\\
\addlinespace
\midrule
\addlinespace
Masses &
$\begin{matrix*}[l]
m_1 \simeq 4.65\times 10^{-3}\text{ eV}\,,\quad & m_2 \simeq 9.47\times 10^{-3}\text{ eV} \,,\quad & m_3 \simeq 5.01\times 10^{-2}\text{ eV} \,,\,\\
M_1 \simeq 1.00\text{ eV}\,,\, & M_2 \simeq 1390\text{ GeV} \,,\, & M_3 \simeq 1390\text{ GeV}
\end{matrix*}$ \\
\addlinespace
\midrule
\addlinespace
$3\nu$ $\Delta m^2$ &
$\Delta m^2_\odot = \Delta m^2_{21} \simeq 6.80 \times 10^{-5}\text{ eV}^2\,,\,
\quad\,\,\,\,
\Delta m^2_\text{atm} = \Delta m^2_{31} \simeq 2.48 \times 10^{-3}\text{ eV}^2$
\\
\addlinespace
$3\nu$ mixing angles \!\!\!\!\!\!\!\!\!\!\!\!&
$\sin^2 \theta_{12} \simeq 0.298\,,\,\quad\,\,\,\,
\sin^2 \theta_{23} \simeq 0.563\,,\,\quad\,\,\,\,
\sin^2 \theta_{13} \simeq 0.0212$
\\
\addlinespace
$3\nu$ CPV phases \!\!\!\!\!\! &
$\delta \simeq 1.32 \pi\,,\,\quad\,\,\,\,
\alpha_{21} \simeq 1.99 \pi\,,\,\quad\,\,\,\,
\alpha_{31} \simeq 0.02 \pi$
\\
\addlinespace
\midrule
\addlinespace
$\sin^2 2 \vartheta^{(i)}_{\mu e}$ &
$\sin^2 2 \vartheta_{\mu e}^{(4)} \simeq 7.4\times 10^{-4}$
\\
\bottomrule
\end{tabular}
\caption{The same as Table~\ref{tab:Ia} for case II.
For this benchmark, $M_3 - M_2 \simeq 7.6$ eV $\ll M_{2,3}$.}
\label{tab:II}
\end{table}
The numerical data for the benchmark corresponding to this case is given in Table~\ref{tab:II}.
Apart from the three light mostly-active neutrinos,
the spectrum includes a mostly-sterile neutrino with mass $M_1 \sim 1$ eV
and a pair of quasi-degenerate neutrinos with masses $M_2 \simeq M_3 \sim 1$ TeV.
From Table~\ref{tab:II} one sees that the symmetry in the last two columns of $R$ (recall section~\ref{sec:caseII})
is tied to an analogous symmetry in the last two rows of $X$ and of $O_c$.
The latter can be understood from Eqs.~\eqref{eq:epl} and \eqref{eq:xc}.
For the spectrum of case II, one has $n=1$ in Eq.~\eqref{eq:wdef}.
In a LBL context, the expression of Eq.~\eqref{eq:probability}
applied to the transition probability of muon to electron (anti-)neutrinos
can be approximated by:
\begin{equation}
\begin{aligned}
P^\text{LBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
&\frac{1}{(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\mu e}\right|^2
- 4 \,\cdot\,\frac{1}{2}\,\re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\\ &
- 4 \sum_{i>j}^3\,\re
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin^2 \Delta_{ij}
\pm 2 \sum_{i>j}^{3}\,\im
\left(\Theta_{\mu i}^*\,\Theta_{e i}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin 2 \Delta_{ij}
\Bigg] \,,
\label{eq:LBL1II}
\end{aligned}
\end{equation}
where terms depending on $\Delta_{4j} \gg 1$
have been replaced by their averaged versions.
If one is in a condition similar to that of
the benchmark of Table~\ref{tab:II},
for which $|(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee} - 1|$
and $|(\Theta\Theta^\dagger)_{\mu e}|^2$ are negligible,
this expression can be further approximated by:
\begin{align}
P^\text{LBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
P^{\text{LBL, }3\nu}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
+ \frac{1}{2}\sin^2 2 \vartheta^{(4)}_{\mu e}
+ 4\,\re\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,R_{\mu 2}\,R_{e 2}^*\right)
\,.
\label{eq:LBL2II}
\end{align}
Here, we have used the unitarity of the full $6\times 6$ mixing matrix,
and the approximate symmetry $R_{\alpha 2}\simeq i\, R_{\alpha 3}$.
If, additionally $|\Theta_{\alpha 4}|^2 = |R_{\alpha 1}|^2 \gg |R_{\alpha 2}|^2 \simeq |R_{\alpha 3}|^2$,
the last term can be neglected and one recovers Eq.~\eqref{eq:LBL2Ia} of case Ia.
In a SBL context,
the relevant form of Eq.~\eqref{eq:probability} for
$\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e$
transitions in case II is:
\begin{equation}
\begin{aligned}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
\frac{1}{(\Theta\Theta^\dagger)_{\mu\mu}(\Theta\Theta^\dagger)_{ee}}
\Bigg[
\left|(\Theta\Theta^\dagger)_{\mu e}\right|^2
&- 4 \,\re
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^3\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin^2 \Delta_{41}
\\ &
\pm 2 \,\im
\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\sum_{j=1}^{3}\,\Theta_{\mu j}\,\Theta_{e j}^*\right)
\sin 2 \Delta_{41}
\Bigg] \,,
\label{eq:SBL1II}
\end{aligned}
\end{equation}
with $\Delta_{41}\simeq \Delta_{42}\simeq \Delta_{43}$.
One is thus sensitive to oscillations due to the scale of mass-squared differences $\Delta m^2_{4j}$ with $j=1,2,3$,
while the oscillations pertaining to smaller mass-squared differences have not yet developed.
If one is in a condition similar to that of
the numerical benchmark, this expression can be approximated by:
\begin{align}
P^\text{SBL}_{\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e}
\,\simeq\,
\left[
\sin^2 2 \vartheta^{(4)}_{\mu e} + 8\,\re\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,R_{\mu 2}\,R_{e 2}^*\right)
\right]\sin^2 \Delta_{41}
\mp 4\,\im\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,R_{\mu 2}\,R_{e 2}^*\right)\sin 2 \Delta_{41}
\,,
\label{eq:SBL2II}
\end{align}
where once again the unitarity of the full mixing matrix has been taken into account,
as well as the relation $R_{\alpha 2}\simeq i\, R_{\alpha 3}$.
If also $|R_{\alpha 1}|^2\gg |R_{\alpha 2}|^2 \simeq |R_{\alpha 3}|^2$,
then the two terms containing $R_{\alpha 2}$ in this equation
can be neglected and one recovers Eq.~\eqref{eq:SBL2Ia} of case Ia.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Case_II.png}
\caption{
Active-sterile mixing measure $\sin^2 2\vartheta_{\mu e}^{(4)}$
{\it versus} the lightest neutrino mass $m_\text{min}$ from a scan of the case-II parameter space,
with NO ($m_\text{min} = m_1$). The heavy spectrum at tree level has
$M_1 = 1$ eV, while three values of the heaviest quasi-degenerate masses are considered,
$M_2 \simeq M_3 = 8\,v\, (14\,v)\, [20\,v]$,
corresponding to the black (dark blue) [light blue] points in the scatter plot.
Here, $v \simeq 174$ GeV is the Higgs VEV.
The horizontal green band shows the $99.7\%$ CL interval of Ref.~\cite{Gariazzo:2015rra},
and the vertical red band corresponds to the cosmological constraint, as in Figure~\ref{fig:Ia}.
The dark green contour delimits the region inside which loop-stable points have been found,
while the benchmark of Table~\ref{tab:II} is marked in yellow.}
\label{fig:II}
\end{figure}
To further explore the parameter space of case II, we have
produced numerical seesaw structures qualitatively similar to our benchmark
by following a procedure similar to that of case Ia.
We have specified (at tree level) $M_1 = 1$ eV
and three different values for the second heaviest neutrino mass,
$M_2\, (\simeq M_3) \in \{8,\,14,\, 20\}\,v$, where $v\simeq 174$ GeV is the Higgs VEV.
We have further scanned the mass splitting $M_3 - M_2$ in the interval $[0.02,\,200]$ eV.
In Figure~\ref{fig:II}
we show the values of $\sin^2 2\vartheta_{\mu e}^{(4)}$ in Eq.~\eqref{eq:sthmue}
against the values of the lightest neutrino mass, for the numerical examples found for case II.
Only points for which $\mbox{tr}\left[m\,m^\dagger\right] \in [0.001,\,1]\,v^2$ are kept.
As before,
the horizontal green band highlights the range of $\sin^2 2\vartheta_{\mu e}^{(4)}$
preferred by the global fit of Ref.~\cite{Gariazzo:2015rra} and cited at the beginning of the present section,
while the dark green contour delimits the region inside which relatively loop-stable points can be found.
The approximations used in deriving the oscillation formulae of Eqs.~\eqref{eq:LBL2II} and~\eqref{eq:SBL2II}
are valid for all the plotted points.
For the numerical examples pertaining to case II which are stable under loop corrections,
one can obtain values of $BR(\mu \rightarrow e\gamma)$ close to the MEG upper bound of $4.2 \times 10^{-13}$.
Points with larger values of the branching ratio are excluded from our scan.
For the benchmark of Table~\ref{tab:II} one has $BR(\mu \rightarrow e\gamma) \simeq 2.0 \times 10^{-13}$.
Such effects can be probed by the MEG II update~\cite{Cattaneo:2017psr},
which is expected to increase the present sensitivity of MEG by one order of magnitude.
One also finds the bounds $m_\beta < 15$ meV, $|m_{\beta\beta}| < 27$ meV,
and $|R_{e1}|^2 \lesssim 0.02$ for the loop-stable numerical examples of this case.
While KATRIN will seek to improve the current bound on $m_\beta$ down to $0.2$ eV,
values of $|m_{\beta\beta}| \gtrsim 10^{-2}$ eV may be probed in
the next generation of $(\beta\beta)_{0\nu}$-decay experiments~\cite{Vergados:2016hso}.
Concerning the prospect of detecting the heavy neutrino pair in future collider searches,
the reader is further referred to the review~\cite{Antusch:2016ejd}.
If, unlike in our benchmark, the heavy neutrino pair had a mass
in the $1-100$ GeV range and were sufficiently long-lived,
it might lead to displaced vertex signatures~\cite{Antusch:2016vyf}
and produce resolvable neutrino-antineutrino oscillations at colliders~\cite{Antusch:2017ebe}.
Finally, the pseudo-Dirac pair of case II might play a role
in explaining the baryon asymmetry of the Universe
through resonant leptogenesis~\cite{Pilaftsis:2003gt}.
In such a scenario, one should carefully
take into account the washout from the interactions of the
lighter sterile neutrino species.%
\footnote{For an $M_1$ of case II in the range $[0.1,\,50]$ keV,
see the ISS(2,3) analysis of Ref.~\cite{Abada:2017ieq}.}
These interactions may need to be non-standard
in order to reconcile the light sterile neutrino paradigm
with cosmology.
\vskip 2mm
The presented explicit numerical examples are merely illustrative.
However, they give credit to our claim that
models exhibiting an approximate lepton number symmetry
with at least one sterile neutrino mass at the eV scale
are viable and could play a part in explaining the SBL anomalies.
In the next section we look into CP Violation in the present framework in some detail.
\section{CP Violation in this Framework}
\label{sec:cp}
\subsection{Remarks on CP Violation Measurements}
In order to analyse CP Violation effects,
it is instructive to define CP asymmetries $A_{\nu \overline{\nu}}^{\alpha \beta}$
at the level of oscillation probabilities (see e.g.~\cite{Gandhi:2015xza}):
\begin{align}
A_{\nu \overline{\nu}}^{\alpha \beta}
\,\equiv\,
\frac
{P_{\nu_\alpha \to \nu_\beta} - P_{\overline{\nu}_\alpha \to \overline{\nu}_\beta}}
{P_{\nu_\alpha \to \nu_\beta} + P_{\overline{\nu}_\alpha \to \overline{\nu}_\beta}}
\,\equiv\,
\frac{\Delta P_{\alpha\beta}}
{P_{\nu_\alpha \to \nu_\beta} + P_{\overline{\nu}_\alpha \to \overline{\nu}_\beta}}
\,.
\end{align}
We restrict our discussion to the vacuum case, keeping in mind that in a realistic
context the breaking of CP and CPT due to the asymmetry of the matter which neutrinos traverse
should be taken into account. The requirement of CPT invariance results in the relations
$\Delta P_{\alpha\beta} = -\Delta P_{\beta\alpha}$ and $\Delta P_{\alpha \alpha} = 0$.
From the unitarity of the full mixing matrix, one further has
\begin{align}
\sum_\beta\,\Delta P_{\alpha\beta} = 0\,,
\end{align}
for any $\alpha$, with $\alpha$ and $\beta$ running through the whole index set,
$\alpha,\beta=e$, $\mu$, $\tau$, $s_1,\ldots$, $s_q$.
In a $3\times 3$ unitary context, these relations imply that there is only one
independent difference, which can be chosen as $\Delta P_{e\mu}$.
As shown in~\cite{Gandhi:2015xza}, in a $4\times 4$ unitary framework
they imply the existence of 3 independent differences, say $\Delta P_{e \mu}$,
$\Delta P_{\mu \tau}$, and $\Delta P_{\tau e}$.
In the $6\times 6$ unitary case, we find instead that there are 10 independent differences $\Delta P_{\alpha \beta}$ (see also~\cite{Reyimuaji:2019wbn}),
of which only the three involving active neutrinos alone are experimentally relevant.
Thus, one should generically expect different values for $\Delta P_{e \mu}$,
$\Delta P_{\mu \tau}$, and $\Delta P_{\tau e}$ in a given seesaw-type model.
Using Eq.~\eqref{eq:probability}, with $n$ mostly-sterile neutrinos accessible at
an oscillation experiment, one finds:
\begin{align}
\Delta P_{\alpha\beta} \,=\,
\frac{4}{(\Theta\Theta^\dagger)_{\alpha\alpha}(\Theta\Theta^\dagger)_{\beta\beta}}
\, \sum_{i>j}^{3+n}\,\im
\left(\Theta_{\alpha i}^*\,\Theta_{\beta i}\,\Theta_{\alpha j}\,\Theta_{\beta j}^*\right)
\sin 2 \Delta_{ij}\,.
\end{align}
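A numerical transcription of this expression (a helper of ours; \texttt{Theta} holds the three active rows for the $3+n$ accessible states, \texttt{alpha} and \texttt{beta} are flavour row indices, and \texttt{Delta} contains the phases $\Delta_{ij}$):
\begin{verbatim}
import numpy as np

def Delta_P(Theta, alpha, beta, Delta):
    TT = Theta @ Theta.conj().T
    N = Theta.shape[1]                  # 3 + n mass eigenstates
    s = 0.0
    for i in range(1, N):
        for j in range(i):
            q = (Theta[alpha, i].conj() * Theta[beta, i]
                 * Theta[alpha, j] * Theta[beta, j].conj())
            s += q.imag * np.sin(2 * Delta[i][j])
    return 4 * s / (TT[alpha, alpha].real * TT[beta, beta].real)
\end{verbatim}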
Even if none of the new sterile states is accessible (corresponding to $n=0$),
one still expects $\Delta P_{e \mu}$, $\Delta P_{\mu \tau}$, and $\Delta P_{\tau e}$
to be independent, as the relevant $3\times 3$ mixing submatrix $\Theta \,(= K)$ is not unitary.
This means that it is possible for CP invariance to hold in one oscillation channel, such as
$\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_e$,
and yet be violated in another, such as
$\stackon[-.7pt]{$\nu$}{\brabar}_\mu \rightarrow \stackon[-.7pt]{$\nu$}{\brabar}_\tau$.
Indeed, one has:
\begin{align}
\Delta P_{\mu \tau} \,&=\, \Delta P_{e \mu} \,+\,
\frac{4}{\prod_{\alpha = e,\mu,\tau} (\Theta\Theta^\dagger)_{\alpha\alpha}}
\, \sum_{i>j}^{3}\,\im
\Big[\Theta_{\mu i}^*\,\Theta_{\mu j}\Big(
\Theta_{e i}\,\Theta_{e j}^* \, (\Theta\Theta^\dagger)_{\tau\tau}
+ \Theta_{\tau i}\,\Theta_{\tau j}^*\, (\Theta\Theta^\dagger)_{ee}
\Big)\Big]
\sin 2 \Delta_{ij}
\,,\\
\Delta P_{\tau e} \,&=\, \Delta P_{e \mu} \,-\,
\frac{4}{\prod_{\alpha = e,\mu,\tau} (\Theta\Theta^\dagger)_{\alpha\alpha}}
\, \sum_{i>j}^{3}\,\im
\Big[\Theta_{e i}^*\,\Theta_{e j}\Big(
\Theta_{\mu i}\,\Theta_{\mu j}^* \, (\Theta\Theta^\dagger)_{\tau\tau}
+ \Theta_{\tau i}\,\Theta_{\tau j}^*\, (\Theta\Theta^\dagger)_{\mu\mu}
\Big)\Big]
\sin 2 \Delta_{ij}
\,.
\end{align}
It is then possible to have a zero $\Delta P_{e\mu}$ while $\Delta P_{\mu \tau }$ and/or $\Delta P_{\tau e}$ are non-zero.
Notice that if $\Theta$ here were unitary, one would recover $\Delta P_{e \mu} = \Delta P_{\mu \tau} = \Delta P_{\tau e}$.
Thus, deviations from unitarity are a potential source of CP Violation. This should come as no surprise,
if one recalls that $\eta$ in Eq.~\eqref{eq:khu} is a complex hermitian matrix containing, in general, CPV physical phases.
For the cases analysed in sections~\ref{sec:loopandsym} and~\ref{sec:numeric},
one has $n = 1,2$. Explicit expressions for the CP asymmetries relevant in a SBL context can be obtained
from the approximate relations~\eqref{eq:SBL2Ib} and~\eqref{eq:SBL2II} of cases Ib and II, respectively.
Instead, from the relation \eqref{eq:SBL2Ia} one sees that SBL CP asymmetries for case Ia are negligible.
One has, for case Ib:
\begin{align}
\Delta P_{e\mu}^\text{SBL,\:Ib} \,\simeq\,
4 \,\im\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,\Theta_{\mu 5}\,\Theta_{e 5}^*\right)\sin 2 \Delta_{54}
\,,
\end{align}
while for case II:
\begin{align}
\Delta P_{e\mu}^\text{SBL,\:II} \,\simeq\,
8 \,\im\left(\Theta_{\mu 4}^*\,\Theta_{e 4}\,R_{\mu 2}\,R_{e 2}^*\right)\sin 2 \Delta_{41}
\,.
\end{align}
\subsection{CP-odd Weak Basis Invariants}
\label{sec:WBinv}
In section \ref{sec:param} we have shown (see also Ref.~\cite{Branco:2001pq})
that in the present framework, where three right-handed neutrinos have been added to the SM,
there are 6 CPV phases. They can be made to appear in the Dirac mass matrix $m$
by changing to the weak basis (WB) where the charged lepton mass matrix $m_l$
and the Majorana mass matrix $M$ are diagonal and real.
In the study of CP Violation, it is very useful to construct CP-odd WB invariants
following the procedure introduced for the first time for the quark sector
in Ref.~\cite{Bernabeu:1986fc}, see also~\cite{Branco:1999fs}.
This procedure was later applied by different
authors~\cite{Branco:1986gr,Pilaftsis:1997jf,Branco:1998bw,Branco:2001pq,%
Davidson:2003yk,Branco:2005jr,Dreiner:2007yz,Wang:2014lla}
to the leptonic sector, in order to build CP-odd WB invariants relevant in
several different contexts.
Such invariants can be calculated in any convenient WB and their non-vanishing
signals the presence of CP-breaking.
We define six WB invariants which are sensitive to the leptonic CPV phases:%
\begin{equation}
\begin{aligned}
i_R \,&=\, \im \,\mbox{tr} \left[ M^\dagger\, M \, m^\dagger \,m\, (M^\dagger\, M)^2\, (m^\dagger\, m)^2 \right]\,, \\[2mm]
j_R^{(1)} \,&=\, \im \,\mbox{tr} \left[ M^{-1}\,m^T\,m^*\,M\,m^\dagger\,m \right]\,, \\[2mm]
j_L^{(1)} \,&=\, \im \,\mbox{tr} \left[ M^\dagger\,M\,m^\dagger\,h_\ell\,m\,m^\dagger\,m \right]\,,
\end{aligned}
\qquad
\begin{aligned}
i_L \,&=\, \im \mbox{tr} \,\left[ h_\ell \, m \,m^\dagger\, h_\ell^{2}\, (m\, m^\dagger)^2\right]\,,\\[2mm]
j_R^{(2)} \,&=\, \im \,\mbox{tr} \left[ M^{-1}\,m^T\,m^*\,M\,(m^\dagger\,m)^2\right]\,,\\[2mm]
j_L^{(2)} \,&=\, \im \,\mbox{tr} \left[ M^\dagger\,M\,m^\dagger\,h_\ell\,m\,(m^\dagger\,m )^2 \right]\,,
\end{aligned}
\label{eq:js}
\end{equation}
where we have assumed $M$ to be invertible and have additionally defined $h_\ell \equiv m_l\,m_l^\dagger$.
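Being traces, these quantities are basis independent and thus provide a convenient numerical CP test. A minimal sketch evaluating all six (a helper of ours; the matrix products follow Eq.~\eqref{eq:js} literally, with $M$ assumed invertible):
\begin{verbatim}
import numpy as np

def wb_invariants(m, M, ml):
    mdag = m.conj().T
    hl = ml @ ml.conj().T               # h_ell = m_l m_l^dagger
    hm = mdag @ m                       # m^dagger m
    hmL = m @ mdag                      # m m^dagger
    hM = M.conj().T @ M                 # M^dagger M
    Minv = np.linalg.inv(M)
    imtr = lambda A: np.trace(A).imag
    return {
        "i_R":  imtr(hM @ hm @ hM @ hM @ hm @ hm),
        "i_L":  imtr(hl @ hmL @ hl @ hl @ hmL @ hmL),
        "j_R1": imtr(Minv @ m.T @ m.conj() @ M @ hm),
        "j_R2": imtr(Minv @ m.T @ m.conj() @ M @ hm @ hm),
        "j_L1": imtr(hM @ mdag @ hl @ m @ hm),
        "j_L2": imtr(hM @ mdag @ hl @ m @ hm @ hm),
    }
\end{verbatim}
All six values vanish (up to rounding) for CP-conserving inputs, e.g.~real $m$ and $M$ with diagonal $m_l$, and are generically non-zero for complex matrices.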
To see how the above invariants capture the 6 leptonic CPV phases, consider the
aforementioned WB of $m_l$ and $M$ diagonal and real:
$h_\ell = \mbox{diag}(m_e^2,\, m_\mu^2,\, m_\tau^2)$ and
$M = \tilde{D} = \mbox{diag}(\tilde{M}_1,\, \tilde{M}_2,\, \tilde{M}_3)$.
Recall that in this basis the full neutrino mass matrix $\mathcal{M}$ is not diagonal
and therefore the $\tilde{M}_i$ do not coincide with the physical masses $M_i$.
We further consider the singular value decomposition of $m$:
\begin{align}
m\,=\,V_L\,d_m\,V_R\,,
\end{align}
with $V_{L,R}$ unitary and $d_m = \mbox{diag}({d}_1,\,{d}_2,\,{d}_3)$ real and positive.
The 6 physical CPV phases of interest are contained in $m$, since 3 out of
its original 9 can be removed by rephasing left-handed fields.
A parametrisation of $V_L$ and $V_R$ which captures explicitly these phases is:
\begin{align}
V_L\,=\,V_{\delta_L}\,K_L\,,
\qquad
V_R\,=\,V_{\delta_R}\,K_R\,,
\end{align}
with $K_{L,R}\equiv \mbox{diag}(1,\, e^{i\alpha_{L,R}},\, e^{i\beta _{L,R}})$ and
\begin{align}
V_{\delta_{L,R}}\,\equiv \,O_{23}\,\mbox{diag}(1,1,e^{i\delta
_{L,R}})\,O_{13}\,O_{12}\,,
\end{align}
the $O_{ij}$ being ordinary real rotation matrices in the $i$-$j$ plane, e.g.
\begin{align}
O_{23}(\theta_{23L})\,=\,
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos \theta_{23L} & \sin \theta_{23L} \\
0 & -\sin \theta_{23L} & \cos \theta_{23L}
\end{pmatrix}\,.
\end{align}
The phases of interest are then manifestly $\alpha_{L,R}$, $\beta_{L,R}$
and $\delta_{L,R}$.
Using this parametrisation, the invariants can be cast in the forms:
\begin{equation}
\begin{aligned}
i_R \,&=\, \mathcal{K}_{i_R}\, \sin \delta_R \,, \quad
i_L \, =\, \mathcal{K}_{i_L}\, \sin \delta_L \,,\\[3mm]
j_R^{(a)} \,&=\, \mathcal{K}_{j_R^{(a)}}^{\delta_R}\, \sin \delta_R
\,+\, \mathcal{K}_{j_R^{(a)}}^{2\alpha_R}\, \sin 2\alpha_R
\,+\, \mathcal{K}_{j_R^{(a)}}^{2\beta_R} \, \sin 2\beta_R \,, \\[3mm]
j_L^{(a)} \,&=\, \mathcal{K}_{j_L^{(a)}}^{\delta_R}\, \sin \delta_R
\,+\, \mathcal{K}_{j_L^{(a)}}^{\delta_L}\, \sin \delta_L
\,+\, \mathcal{K}_{j_L^{(a)}}^{\alpha_L}\, \sin \alpha_L
\,+\, \mathcal{K}_{j_L^{(a)}}^{\beta_L} \, \sin \beta_L \,,
\label{eq:js1}
\end{aligned}
\end{equation}
with $a=1,2$. Explicit expressions for the $\mathcal{K}$ coefficients
are given in Appendix~\ref{app:WBinv}.
It is clear that $i_R$ and $i_L$ are sensitive to CPV values of $\delta_R$ and $\delta_L$, respectively,
while the $j_R^{(1,2)}$ ($j_L^{(1,2)}$) are further sensitive to
$\alpha_R$ and $\beta_R$ ($\alpha_L$ and $\beta_L$).
\section{Summary and Conclusions}
\label{sec:conclusions}
We have seen that in the framework of the type-I seesaw mechanism
one can naturally have at least one sterile neutrino with a mass
of around one eV.
This can be inferred using a general exact parametrisation,
defined in \cite{Agostinho:2017wfs}, which is valid irrespective
of the size and structure of the neutrino mass matrix.
Thus we are able to analyse a general seesaw where not all
of the three mostly\discretionary{-}{-}{-}sterile neutrinos need to be very
heavy. We have focused on models where at least one of the
sterile neutrinos is light and its mixing with the active
neutrinos is small enough to respect experimental bounds but
sufficiently large to be relevant to low energy phenomenology
-- for instance, providing a natural explanation to the
short-baseline anomalies.
In section~\ref{sec:framework}, we have shown how the usual
seesaw formulae have to be generalised in order to be applicable
to the special region of parameters which we are considering.
In particular, we have written the full neutrino mixing matrix
in terms of a $3\times 3$ unitary matrix and a $3\times 3$
general complex matrix, which encodes the deviations from unitarity.
The latter was further parametrised at tree level in terms of
neutrino masses and a complex orthogonal matrix.
We carefully distinguish approximate relations from exact ones,
the latter being valid in any seesaw regime.
Namely, we have found an exact formula for the neutrino
Dirac mass matrix $m$ in terms of neutrino masses, neutrino mixing
and deviations from unitarity, which generalises the usual
Casas-Ibarra parametrisation of $m$. We additionally derive
an exact seesaw-like relation, equating the product of all neutrino
masses to the square of the absolute value of $\det m$.
In section~\ref{sec:devunit}, we have further discussed
the parametrisation of deviations from unitarity
as well as constraints on said deviations in our framework.
These significantly depend on the masses of the heavy neutrinos.
In this context, we also find a bound on the lightest neutrino mass $m_\text{min}$,
useful whenever a light sterile is present in the seesaw spectrum.
For the cases of interest, with an eV-scale sterile neutrino and large deviations
from unitarity, one has $m_\text{min} \lesssim 0.1$ eV.
In sections~\ref{sec:loopandsym} and~\ref{sec:numeric} we give examples of viable textures
with at least one sterile neutrino with a mass at the eV scale.
Such light sterile states arise naturally by imposing an approximately conserved lepton number symmetry.
Before the breaking, and for an appropriate assignment of leptonic charges, the lightest
neutrinos are massless at tree level. After the breaking, the lightest neutrinos acquire calculable masses,
with mass differences in agreement with experiment, after the relevant one-loop correction to the zero block
of the neutrino mass matrix has been taken into account. This correction is cast in a simple form,
highlighting the cancellations required by radiative stability, in section~\ref{sec:loop}.
We identify two symmetric textures (I and II) of the neutrino mass matrix which
allow for a separation of high (TeV -- GUT) and low ($\lesssim$ keV) scales.
We then concentrate on three particular scenarios, with differing spectra $(M_1,\, M_2,\,M_3)$ of heavy neutrinos:
case Ia, for which $M_1\ll M_2\ll M_3$; case Ib, with $M_1\sim M_2\ll M_3$; and case II, where $M_1\ll M_2\sim M_3$.
Numerical benchmarks are given for each of these three cases in Tables~\ref{tab:Ia}\,--\,\ref{tab:II}.
Related regions in parameter space are explored in Figures~\ref{fig:Ia}\,--\,\ref{fig:II},
which show that these models can accommodate enough active-sterile mixing
to play a role in the explanation of short\discretionary{-}{-}{-}baseline anomalies.
Since the formulae for neutrino oscillation probabilities are modified
in the presence of deviations from unitarity,
we present, for each case, approximate expressions for muon to electron (anti-)neutrino transition probabilities,
quantifying the impact of light sterile states on oscillations, for both short- and long-baseline experiments.
Attention is further given to the future testability of the proposed models
through non-oscillation effects of the extra neutrino states.
We conclude our work in section~\ref{sec:cp} by discussing CP Violation in the type-I seesaw framework
under analysis.
At the level of oscillation probability asymmetries,
we have found that deviations from unitarity may source CP Violation,
with generically independent effects in the standard transition channels.
We have also constructed 6 CP-odd weak basis invariants which are sensitive to the CP-violating phases
in the lepton sector. This last point has been shown explicitly,
for a particular choice of weak basis and parametrisation of $m$.
\section*{Acknowledgements}
This work was supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT, Portugal)
through the projects CFTP-FCT Unit 777 (UID/FIS/00777/2013 and UID/FIS/00777/2019),
CERN/FIS-PAR/0004/2017, and PTDC/FIS-PAR/29436/2017 which are partially funded
through POCTI (FEDER), COMPETE, QREN and EU. The work of PMFP was supported by
several short term Master-type fellowships from the CFTP and CERN projects listed above. At
present PMFP acknowledges support from FCT through PhD grant SFRH/BD/145399/2019.
The work of JTP has been supported by the PTDC project listed above. GCB and MNR thank
the CERN Theoretical Physics Department, where part of this work was done, for hospitality
and partial support and also acknowledge participation in the CERN Neutrino Platform --
Theory working group (CENF-TH).
The recent isolation of single layers of carbon atoms\cite{graph1}
has attracted growing attention in the transport properties of
graphene-based devices. In these systems unconventional phenomena
like the half-integer quantum Hall effect\cite{qhe} and the Klein
tunneling\cite{klein} have been observed. Such peculiar behavior
stems from the relativistic character of the electrons in the
carbon honeycomb lattice. Closed to Dirac point the charge
carriers behave as two-dimensional (2D) massless Dirac
fermions\cite{review} and have very high mobility\cite{morozov}.
This fact has stimulated theoretical and experimental
investigations into graphenic ultrafast devices\cite{thz0} like
field effect transistors\cite{fet}, \textit{p-n} junction diodes
and THz detectors\cite{thz1,thz2}. In these systems it is crucial
to have full control of the electronic response after the sudden
switch-on of an external perturbation, and the study of the
real-time dynamics is becoming increasingly important. The
investigation of the transient response also is of fundamental
interest. One of the most debated aspects of the transport
properties of graphene is the minimum conductivity
$\sigma_{\mathrm{min}}$ at the Dirac point.
From the theoretical point of view the problem arises from the
fact that the value of $\sigma_{\mathrm{min}}$ is sensitive to the
order in which certain limits (zero disorder and zero frequency)
are taken\cite{ziegler}, thus producing different values around
the quantum $e^{2}/h$\cite{ziegler,peres,varlamov,beenakker}. Very
recently Lewkowicz and Rosenstein\cite{lew} overcame this ambiguity by
employing a time-dependent approach in which the dc conductivity is
calculated by solving the quench dynamics of 2D Dirac excitations
after the sudden switching of a constant electric field.
Interestingly their approach does not suffer from the use of any
regularization related to the Kubo or Landauer formalism and
yields $\sigma_{\mathrm{min}}=\pi e^{2}/2h$. Despite the large
effort devoted to the study of the transport properties of
graphenic systems, a genuine real-time analysis which treats on an
equal footing transient effects and the long-time response of
graphene nanoribbons in contact with semi-infinite reservoirs is
still missing.
In this Letter we study the time-dependent transport properties of
undoped graphene nanoribbons with finite width and virtually
infinite length after the sudden switch-on of an external bias
voltage. The geometry that we consider is sketched in
Fig.\ref{fig1}. The nanoribbon is divided in three regions, namely
left (L) and right (R) semi-infinite graphenic reservoirs and
a central (C) region of length $L=aN_{c}$, where $N_{c}$ is the
number of cells along the longitudinal $x$ direction and $a=2.46$
\AA $\,$ is the graphene lattice constant. The width of the ribbon
is $W=a\sqrt{3}N_{y}$, where $N_{y}$ is the number of cells along
the transverse $y$ direction, in which periodic boundary
conditions are imposed\cite{blanter,katnelson,guinea}.
The three regions are linked via transparent interfaces, in such a
way that in equilibrium the system is translationally invariant
along the $x$ direction, see Fig.\ref{fig1}.
\begin{figure}[h]
\includegraphics[height=6.1cm ]{honew6.pdf}
\caption{Schematic representation of the system. The nanoribbon
has semi-infinite L and R graphenic reservoirs and periodic
boundary conditions are imposed along the $y$ direction. The bias
voltage profile with a linear drop inside the C region is also
shown.}
\label{fig1}
\end{figure}
Once the system is driven out of equilibrium, the quench dynamics
involves high energy excitations and the Dirac-cone approximation,
which is valid only at energies lower than 1 eV, is inaccurate.
For this reason we adopt a tight-binding description of the
system, with Hamiltonian given by
\begin{equation}
H=H_{0}+U(t)=v\sum_{\langle i,j \rangle }c^{\dagger}_{i}c_{j}+
\theta(t)\sum_{i}V_{i}c^{\dagger}_{i}c_{i} \, ,
\label{ham}
\end{equation}
where the spin index has been omitted and $v=2.7$ eV is the
hopping integral of graphene. The first sum runs over all the
pairs of nearest neighbor sites of the ribbon honeycomb lattice
and $c^{(\dagger)}_{i}$ is the annihilation (creation) operator of
a $\pi$ electron on site $i$. Here we use the collective index
$i=\{p, i_{y},i_{x} \}$ to identify a site in the nanoribbon such
that $p=\alpha,\beta$ indicates the two inequivalent longitudinal
zig-zag chains, $i_{y}$ denotes the cell in the $y$ direction, and
$i_{x}$ is the position in the $x$ direction. $H_{0}$ describes
the translationally invariant equilibrium system, while $U(t)$ is
the bias perturbation with (non self-consistent) voltage
profile\cite{lineardrop,lineardrop1,lineardrop2} given by the
function $V_{i}$
\begin{equation}
V_{i}=\left\{
\begin{array}{ll}
V/2 & \quad i \in \mathrm{L} \\
V/2-E \, i_{x} & \quad i \in \mathrm{C}\\
-V/2 & \quad i \in \mathrm{R}
\end{array}
\right. \,,
\label{profile}
\end{equation}
where $V$ is the total applied voltage and $i_{x} \in (0,L)$ is
the $x$-coordinate of the site $i$ in the C region, which is
subject to the uniform electric field $E=V/L$ (see
Fig.\ref{fig1}). The above modelling of the bias profile could
find an approximate realization, e.g., in a planar junction in
which the L and R regions of the nanoribbon are on top of metallic
electrodes.
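For concreteness, the profile of Eq.(\ref{profile}) can be encoded in a
few lines. The following minimal Python sketch (our own illustration,
with $i_{x}$ treated as a continuous coordinate and the function name
ours) makes the piecewise-linear drop explicit:
\begin{verbatim}
def bias_profile(ix, V, L):
    """Piecewise-linear potential of Eq. (profile); ix is the
    longitudinal coordinate of site i (ix <= 0: L lead, ix >= L: R lead)."""
    if ix <= 0.0:
        return V / 2.0
    if ix >= L:
        return -V / 2.0
    return V / 2.0 - (V / L) * ix   # uniform field E = V/L in region C
\end{verbatim}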
The time-dependent total current $I(t)$ flowing across the
interface in the middle of the C region is written as
\begin{equation}
I(t)
=2\sum_{i_{y}=1}^{N_{y}}\sum_{p=\alpha,\beta}I^{(p)}_{i_{y}}(t) \,
,
\end{equation}
where the factor 2 accounts for the spin degeneracy and
$I^{(p)}_{i_{y}}$ is the current flowing across the
$p=\alpha,\beta$ chain in the $i_{y}$-th cell, in the middle of
the C region (see Fig.\ref{fig1}).
Since periodic boundary conditions are imposed along the $y$
direction, the current $I^{(p)}_{i_{y}}$ does not depend on the
cell and chain indices $i_{y}$ and $p$, and hence
$I(t)=4N_{y}\bar{I}(t)$, where we have defined $\bar{I} \equiv
I^{(\alpha)}_{i_{y}}=I^{(\beta)}_{i_{y}}$ for any $i_{y}$.
\begin{figure}[h]
\includegraphics[height=3.5cm ]{ladder.pdf}
\caption{Ladder model for fixed transverse momentum $k_{y}$. The
bias profile is the same as in Fig.\ref{fig1}.}
\label{fig1a}
\end{figure}
Since the transverse momentum $k_{y}=2\pi n/\sqrt{3}aN_{y}$ (with
$n=0,\dots,N_{y}-1$) is conserved, the current $\bar{I}$ can be
written as
\begin{equation}
\bar{I}(t)=\frac{1}{N_{y}}\sum_{k_{y}}\bar{I}_{k_{y}}(t) \,.
\label{currsum}
\end{equation}
It is seen that the total current of the original $N_{y}$-wide
problem can be calculated by summing the currents
$\bar{I}_{k_{y}}$ coming from $N_{y}$ independent 1-wide ladder
problems\cite{blanter} with staggered $k_{y}$-dependent
transverse hopping (see Fig.\ref{fig1a}), with the same bias
profile as in Eq.(\ref{profile}).
In order to calculate $\bar{I}_{k_{y}}$ it is convenient to
introduce the $y$-Fourier transform of the original electron
operators $c_{\{p,k_{y},i_{x} \}}=(N_{y})^{-1/2}
\sum_{i_{y}}e^{ik_{y}i_{y}\sqrt{3}a}c_{\{p,i_{y},i_{x} \}}$ in
terms of which the current $\bar{I}_{k_{y}}$ can be cast as
\begin{equation}
\bar{I}_{k_{y}}(t)=\frac {2\, e\, v}{\hbar}\mathrm{Re} \left[
G^{<}_{\{p,k_{y},\frac{L}{2}\};\{p,k_{y},\frac{L}{2}+\frac{a}{2}\}}(t,t)
\right] \,,
\end{equation}
where $G^{<}$ is the lesser Keldysh Green's function
\begin{equation}
G^{<}_{\{p,k_{y},i_{x}\};\{r,q_{y},j_{x}\}}(t_{1},t_{2})=i\langle
c^{\dagger}_{\{p,k_{y},i_{x} \}} (t_{1}) c_{\{r,q_{y},j_{x}\}}
(t_{2})\rangle \,.
\end{equation}
The time-evolution of the Green's function is evaluated according
to
\begin{eqnarray}
G^{<}_{\{p,k_{y},i_{x}\};\{p,k_{y},j_{x}\}}(t,t)= \nonumber \\
i \left[ e^{-iH t} f(H_{0}) e^{iH t}
\right]_{\{p,k_{y},i_{x}\};\{p,k_{y},j_{x}\}} \, ,
\label{evol}
\end{eqnarray}
where we recall that there is no actual dependence on $p$ and
where $f$ is the Fermi distribution function. For each transverse
momentum $k_{y}$ the current $\bar{I}_{k_{y}}$ is numerically
calculated by computing the exact time evolution of the
corresponding ladder system in Fig.\ref{fig1a}, where we take the
reservoirs with a \textit{finite} length $L_{r}=a N_{r}$. This
approach allows us to reproduce the time evolution of the
infinite-leads system up to a time $T_{\mathrm{max}} \approx
2L_{r} /v_{F}$, where $v_{F}$ is the Fermi velocity\cite{perf}.
For $t > T_{\mathrm{max}}$ electrons have time to propagate till
the far boundary of the leads and back, yielding undesired finite
size effects in the calculated current. Accordingly we choose
$N_{r}$ such that $T_{\mathrm{max}}$ is much larger than the time
at which the steady-state is reached.
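For a ladder with leads of finite length $L_{r}$, Eq.(\ref{evol}) can be
evaluated by exact diagonalization. The following minimal Python sketch
(our own illustration, with dense matrices, zero temperature, $\hbar=1$,
and a function name of ours) makes the procedure explicit:
\begin{verbatim}
import numpy as np

def lesser_green(H0, H, t, mu=0.0):
    """G^<(t,t) = i exp(-iHt) f(H0) exp(+iHt), cf. Eq. (evol), at zero
    temperature with chemical potential mu (hbar = 1)."""
    e0, v0 = np.linalg.eigh(H0)                 # equilibrium spectrum
    occ = (e0 < mu).astype(float)               # step-function occupations
    fH0 = (v0 * occ) @ v0.conj().T              # f(H0) as a matrix
    e, v = np.linalg.eigh(H)                    # biased Hamiltonian H = H0 + U
    U = (v * np.exp(-1j * e * t)) @ v.conj().T  # exp(-i H t)
    return 1j * U @ fH0 @ U.conj().T
\end{verbatim}
The bond current $\bar{I}_{k_{y}}$ then follows from the real part of
the corresponding off-diagonal element of the returned matrix, and the
lead length $N_{r}$ enters only through the dimension of $H_{0}$ and $H$.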
\begin{figure}[h]
\includegraphics[height=4.5cm ]{current.pdf}
\caption{Total time dependent current $I(t)$ with geometric
parameters $L\approx 25$ nm, $W \approx 106$ nm and applied
voltage $V=8\times10^{-4}$ Volt.
Current is in units of $V \sigma_{0}$, where $\sigma_{0}=e^{2}/h$
is the quantum of conductance and time in units of
$t_{v}=2.5\times 10^{-16}$ s. The dashed line represents the
long-time conductance $\sigma_{2}/\sigma_{0}=4$ given by the
Landauer formula\cite{onipko2}, while the dotted line represents
the combination $(\sigma_{\mathrm {min}} \times
W/L)/\sigma_{0}=\pi/2 \times W/L $.}
\label{fig2}
\end{figure}
For practical purposes we represent $H_{0}$ and $H(t)$ in a hybrid
basis, in which the Hamiltonians of the isolated (equilibrium and
biased) L and R reservoirs are diagonal in the set of states
derived analytically in Ref.\onlinecite{onipko}, while the
Hamiltonian describing the C region remains represented in the
basis $\{p,k_{y},i_{x}\}$. The enormous advantage of such a choice
is clarified in the following. Since we are interested in the dc
conductance we set a small bias $V \ll \hbar v_{F}/eW$. In this way
we ensure that at long times a linear regime is established in
which only the electrons right at the Dirac point (with $k_{y}=0$)
contribute to the total current. This agrees with the Landauer
formula, according to which only the states within the bias window
contribute at the steady-state. On the other hand during the
transient \textit{all} transverse modes are excited and the sum in
Eq.(\ref{currsum}) must be computed including the complete set of
$k_{y}$. Indeed we have checked numerically that in the limit
$t\rightarrow \infty $ all currents $\bar{I}_{k_{y}}$, except the
one with $k_{y}=0$, vanish. We have also observed that the damping
time of $\bar{I}_{k_{y}>0}(t)$ goes like $k_{y}^{-1}$ and the
calculation of the currents with small $k_{y}$ requires a very
long propagation before the zero-current steady-state is
approached. As a consequence very large values of $N_{r}$ are
needed, making the computation in principle too demanding. Such
numerical difficulty is, however, compensated by the fact that,
after a transient time of order $L/v_{F}$, for $k_{y}$ close to
the Dirac point only a few low-energy states contribute to
$\bar{I}_{k_{y}}(t)$. Therefore for any given $k_{y}>0$ we
introduce an energy cutoff $\Lambda_{k_{y}}\approx 10 \hbar v_{F}
k_{y}$ in the reservoir Hamiltonians and retain only the
lead-eigenstates with transverse momentum $k_{y}$ and energy in
the range $(-\Lambda_{k_{y}},\Lambda_{k_{y}})$. This allows us to
deal with very long leads ($N_{r}\gg 1$) and can be explicitly
implemented by using the analytic eigenstates of a rectangular
graphenic macromolecule\cite{onipko}. The C region (which is the
one where we calculate the current) is treated exactly within the
original full basis $\{p,k_{y},i_{x}\}$, but the overall
computational cost remains moderate.
\begin{figure}[h]
\includegraphics[height=4.5cm ]{transient.pdf}
\caption{Short time transient current $I(t)$ for different ribbon
widths $W_{1}=106$ nm, $W_{2}=27$ nm, $W_{3}=13$ nm. The
rest of parameters, dotted/dashed lines and units are as in
Fig.\ref{fig2}.} \label{fig3}
\end{figure}
In Fig.\ref{fig2} we show the time-dependent current $I(t)$
calculated for a nanoribbon of central length $L\approx 25$ nm
($N_{c}=100$) and width $W \approx 106$ nm ($N_{y}=250$) with an
applied voltage $V=8\times10^{-4}$ Volt and zero temperature. The
very early transient regime ($t\lesssim \hbar/v \equiv t_{v}$)
depends on the details with which the electric field $E$ has been
switched on (suddenly in time in the present case). For $t_{v}
\lesssim t \lesssim L/v_{F}$, however, the time evolution only
depends on the geometry of the device and on the potential
profile. Within this time domain, if the ribbon is wide enough,
the ballistic electron dynamics of the system probes only the bulk
properties of graphene, since the particles do not have time to
explore the reservoirs, where $E=0$. According to the Drude
picture, the ballistic transport in bulk materials subjected to
uniform electric fields produces a time-dependent conductance
given by
\begin{equation}
\sigma(t)=\frac{\gamma t}{1+(\omega t)^{2}} \, , \label{drude}
\end{equation}
where $\gamma$ is a constant proportional to the density of
states. In ordinary solids the dc conductance ($\omega \rightarrow
0$) increases linearly in time up to the breakdown of the
ballistic regime, in which one has to replace $t$ with the finite
scattering time $\tau$. However in pure graphene the transport is
ballistic up to very large length-scales, of the order of a micron,
and $\sigma$ can apparently diverge, producing a Drude peak.
Nevertheless in undoped graphenic samples the density of states at
the Fermi level vanishes, thus making the product $\gamma t $
constant at long times. This subtle compensation is at the origin
of the finite minimal dc conductivity of 2D pure
graphene\cite{lew} and of the difficulties in constructing a
suitable propagation scheme in finite width nanoribbons. In
Fig.\ref{fig2} we see that $W \approx 100$ $n$m is enough to
observe such compensation. The current $I(t)$, instead of
increasing linearly in time, reaches a temporary plateau with
average current $I_{1}$ which lasts till $t \approx L/v_{F}
\approx 60 t_{v}$. On the contrary in an ordinary system (e.g. in
a 2D square lattice model) the current would be linear in time up
to $L/v_{F}$, producing the well known Drude peak in the bulk
limit $L,W \to \infty $ at fixed $E$. We would like to observe
that the plateau at $I_{1}$ is reached via transient oscillations
with frequency $2v /\hbar$\cite{lew}. Interestingly such frequency
is not displayed by any of the individual currents
$\bar{I}_{k_{y}}(t)$, but appears only as a cumulative effect
after the summation in Eq.(\ref{currsum}) is performed. Therefore
it is a a genuine bulk property and may be at the origin of the
resonant effect predicted to occur in optical response of graphene
right at $\omega=2 v/\hbar$\cite{acgraphene}.
\begin{figure}[h]
\includegraphics[height=4.5cm ]{currobc.pdf}
\caption{Short time transient current $I(t)$ for three different
metallic armchair ribbons with open boundary conditions. The
ribbon parameters are $L=5.5$ nm, $W_{1}=3.6$ nm, $W_{2}=2.1$
nm, $W_{3}=0.6$ nm and the applied bias is $V=0.03$ Volt.
Units and dotted lines are as in Fig.\ref{fig2}. The dashed line
represents the long-time conductance $\sigma_{2}/\sigma_{0}=2$
given by the Landauer formula\cite{peres,onipko2}.}
\label{fig4}
\end{figure}
From the first plateau of $I(t)$ we can provide an independent
evaluation of the minimal conductivity of graphene. According to
its definition, the conductivity $\sigma_{1}$ of a bulk system
subjected to a small constant electric field $E$ is given by the
current density $J$ divided by $E$:
\begin{equation}
\sigma_{1}=\frac{J}{E}=\frac{I_{1} }{V }\frac{L}{W} \, .
\label{sigma1}
\end{equation}
By exploiting Eq.(\ref{sigma1}) it is seen that our data are
consistent with the value $\sigma_{1} = \pi e^{2}/2h \equiv
\sigma_{\mathrm{min}}$ with excellent precision, see
Fig.\ref{fig2}.
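As a rough consistency check (ours), inserting the geometry used in
Fig.\ref{fig2} into Eq.(\ref{sigma1}) with
$\sigma_{1}=\sigma_{\mathrm{min}}$ gives the expected plateau level
\begin{equation*}
I_{1} \approx \sigma_{\mathrm{min}} \frac{W}{L} V = \frac{\pi}{2}
\times \frac{106}{25} \, V\sigma_{0} \approx 6.7 \, V\sigma_{0} \, ,
\end{equation*}
which is precisely the height of the dotted line in Fig.\ref{fig2}.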
Further we have investigated the formation of the first plateau as
a function of the ribbon width $W$. In Fig.\ref{fig3} we see that
for narrow ribbons with $W \approx 10$ nm finite-size effects
produce a drastic deviation from the ideal graphene bulk. The
transient current does not show a temporary saturation, but grows
with an approximate linear envelope (up to a time $\sim L/v_{F}$),
in qualitative agreement with the Drude behavior given in
Eq.(\ref{drude}).
At times larger than $\sim L/v_{F}$ the electrons start exploring
the reservoirs, where the electric field is zero. At this point a
standard dephasing mechanism sets in and the current tends to its
true final steady-state. As discussed above, however, such value
is reached after a very slow damping process, in which the current
displays decaying oscillations with dominant frequency
$\bar{\omega} = 2 \pi v_{F} /W$ (see Fig.\ref{fig2}), $\hbar
\bar{\omega}$ being the energy spacing between the transverse
energy subbands of the ribbon. We have checked numerically that
the asymptotic value $I_{2}$ agrees well with the Landauer formula
and does not depend on $L$ and $W$, provided that $V \ll \hbar
v_{F}/eW$. Thus the conductance of the device is simply extracted
as $\sigma_{2}=I_{2} /V$. Our numerical data provide
$\sigma_{2}=4e^{2}/h$ with high numerical accuracy. This value is
indeed twice the conductance of metallic nanoribbons, and this is
due to the chosen periodic boundary conditions\cite{onipko2}. Thus
we have calculated $I(t)$ also in the case of open boundary
conditions. Unfortunately, as discussed above, the lack of
translational invariance along the transverse direction makes the
computation much more demanding, and only system with small $L$
and $W$ can be studied within the present approach. In
Fig.\ref{fig4} we show $I(t)$ for three different metallic
armchair nanoribbons with open boundaries. It can be seen that
already for $W \approx 2$--$4$ nm there is a tendency to form the
universal first plateau leading to $\sigma_{\mathrm{min}}$, while
for $W \lesssim 1$ nm the current $I(t)$ grows linearly in time
until $t \approx L/v_{F}$. On the other hand at long times the
current tends clearly to the Landauer value, consistent with
$\sigma_{2}=2e^{2}/h$, independent of the aspect
ratio\cite{peres,onipko}.
In summary we pointed out the subtle difficulties in constructing
a reliable method to perform time-evolutions of finite width
graphene nanoribbons and proposed an efficient numerical scheme to
overcome them. We presented a real-time study of the transport
properties of these systems in contact with virtual semi-infinite
reservoirs in the linear regime. We have shown that for large
enough undoped samples the time-dependent current displays two
plateaus. From the first of these plateaus we can extract an
independent measure of the minimal conductivity $\pi e^{2}/2h$ of
bulk graphene by resorting to the aspect ratio $L/W$ of the device.
The second plateau corresponds to reaching the steady-state and is
independent of the geometry. Here the conductance is $2e^{2}/h$,
which coincides with the Landauer result for metallic nanoribbons.
To conclude we wish to point out that in the presence of an ac bias, the
time-dependent conductivity can be used to obtain the optical
conductivity $\sigma_{\mathrm{ac}}$ of graphene. It was shown
experimentally\cite{ac1} and explained theoretically\cite{ac2}
that $\sigma_{\mathrm{ac}}$ is almost $\omega$-independent and
equals $\sigma_{\mathrm{min}}$ with high accuracy over a wide
range of frequencies. Remarkably to extract the universal value of
$\sigma_{\mathrm{ac}}$ high frequency signals with $\omega \sim
1/t_{v}$ have been employed\cite{ac1}. Therefore we believe that a
real-time approach like the one presented here is needed to
elucidate the crossover from the dc case to ultrafast scenarios in
which the period of the ac signal is comparable with the intrinsic
hopping time of the bulk system.
\section{Introduction}
\label{sec:intro}
Boundary integral equation methods are useful for solving boundary
value problems for linear, elliptic, partial differential equations
~\cite{guenther1996partial, kress1999linear}. Rather than solving the
partial differential equation directly, one represents the solution as
a layer potential, an integral operator applied to a density. The
density is the solution of an integral equation on the boundary of the
domain that includes the prescribed boundary data. This formulation
offers several advantages for the numerical solution of boundary value
problems. First, the dominant computational cost is from the integral
equation on the boundary whose dimension is lower than that of the
domain. Second, this boundary integral equation can be solved to very
high order using Nystr\"om methods~\cite{atkinson1997secondkind,
delves1988computational}. Finally, the solution, given by this layer
potential, can be evaluated anywhere in the domain without restriction
to a particular mesh. For these reasons, boundary integral equations
have found broad applications, including in fluid mechanics and
electromagnetics.
One particular challenge in using boundary integral equation methods
is the so-called close evaluation problem~\cite{barnett2014evaluation,
helsing2008evaluation}. Since a layer potential is an integral over
the boundary, it is natural to evaluate it numerically using the same
quadrature rule used in the Nystr\"om method to solve the boundary
integral equation. In that case, we say that the layer potential is
evaluated using its native quadrature rule. Numerical evaluation of
the layer potential using its native quadrature rule inherits the high
order accuracy associated with solving the boundary integral equation,
except for points close to the boundary. For these close evaluation
points, the native quadrature rule produces an $O(1)$ error. This
$O(1)$ error is due to the sharply peaked kernel of
the layer potential leading to its nearly singular behavior.
There are several problems that require accurate layer
potential evaluations close to the boundary of the domain. For
example, models of swimming micro-organisms, suspensions of
droplets, and blood cells in Stokes flow use boundary integral methods
~\cite{smith2009boundary, barnett2015spectrally, marple2016fast,
keaveny2011applying}. The key to these problems is the accurate
computation of velocity fields or forces close to the boundary as
these quantities provide the physical mechanisms leading to locomotion
and other phenomena of interest. Another example is in the field of
plasmonics~\cite{Maier07}, where one seeks to gain control of light at
the sub-wavelength scales for applications such as
nano-antennas~\cite{akselrod2014probing,novotny2011antennas} and
sensors~\cite{mayer2008label,sannomiya2008situ}. Surface plasmons are
sub-wavelength fields localized at interfaces between nano-scale metal
obstacles and their surrounding dielectric background medium. Thus,
these problems require accurate computation of electromagnetic fields
near interfaces. These problems and others motivate
the need to address the close evaluation problem.
The close evaluation problem for layer potentials has been studied
previously for Laplace's equation. For example, Beale and
Lai~\cite{beale2001method} have studied this problem in two dimensions
by first regularizing the nearly singular kernel and then adding
corrections for both the discretization and the regularization. The
result of this approach is a uniform error in space. This method has
been extended to three-dimensional problems~\cite{beal2016asimple}.
Helsing and Ojala~\cite{helsing2008evaluation} have studied the
Dirichlet problem in two dimensions by combining a globally
compensated quadrature rule along with interpolation to achieve very
accurate results over all regions of the domain.
Barnett~\cite{barnett2014evaluation} has also studied this problem in
two dimensions. In that work, Barnett has established a bound for the
error associated with the periodic trapezoid rule. We make use of
this result in our work below. To address the $O(1)$ error in the
close evaluation problem, Barnett has used surrogate local expansions
with centers placed near, but not on, the boundary.
This new method, called quadrature by expansion (QBX), leads to very
accurate evaluations of the layer potential close to the
boundary. Further error analysis of this method and extensions to
evaluations on the boundary for the Helmholtz equation is presented in
Kl\"{o}ckner {\it et al}~\cite{klockneretal2013}.
For the special case of rectangular domains, Fikioris
{\it et al}~\cite{fikioris1987strongly, fikioris1988strongly} have
addressed the close evaluation problem by removing problematic terms
from the explicit eigenfunction expansion of the fundamental
solution.
Here, we develop a new method to address the close
evaluation problem. We first determine the asymptotic behavior of
the sharply peaked kernel and then use that to approximate the layer
potential. Doing so relieves the quadrature rule from having to
integrate over a sharp peak. Instead, the quadrature rule is used to
correct the error made by this approximation. This new method is
accurate, efficient, and easy to implement.
In this paper, we study the close evaluation problem in two dimensions
for Laplace's equation. We use a Nystr\"om method based on the
periodic trapezoid rule to solve the boundary integral equation. We
study the double-layer potential for the interior Dirichlet problem
and the single-layer potential for the exterior Neumann problem. For
both of these problems, we introduce an asymptotic expansion for the
sharply peaked kernel of the layer potential for close evaluation
points, which is the main cause for error. Using the Fourier series
of this asymptotic expansion, we compute its contribution to the layer
potential with spectral accuracy. Through several examples, we show
that this asymptotic method effectively reduces errors in the close
evaluation of layer potentials.
The remainder of this paper is as follows. In Section \ref{sec:circle}
we study in detail the illustrative example of the interior Dirichlet
problem for a circular disk. For this problem, we obtain an explicit
error when using the periodic trapezoid rule to evaluate the
double-layer potential. This error motivates the use of an asymptotic
expansion for the sharply peaked kernel in the general method we
develop in Section \ref{sec:doublelayer} to evaluate the double-layer
potential for the interior Dirichlet problem. In Section
\ref{sec:singlelayer}, we extend this method to the single-layer
potential for the exterior Neumann problem. We discuss the general
implementation of these methods in Section
\ref{sec:implementation}. Section
\ref{sec:conclusions} gives our conclusions.
\section{Illustrative example: interior Dirichlet problem for a
circular disk}
\label{sec:circle}
We first study the close evaluation problem for
\begin{subequations}
\begin{gather}
\Delta u = 0 \quad \text{in $D = \{ r < a \}$}, \label{eq:2.1a}\\
u=f \quad \text{on $\partial D = \{ r = a \}$}. \label{eq:2.1b}
\end{gather}
\label{eq:2.1}
\end{subequations}
For this problem, we compute an explicit error when
applying an $N$-point periodic trapezoid rule (PTR$_{N}$). This
error reveals the key factors leading to the large errors observed
in the close evaluation problem. Moreover, this analysis provides
the motivation for the general asymptotic method.
It is well understood that the solution of \eqref{eq:2.1} is given by
Poisson's formula~\cite{strauss1992partial}. Here, we seek the
solution as the double-layer potential~\cite{kress1999linear},
\begin{equation}
u(\mathbf{x}) = \frac{1}{2\pi} \int_{|\mathbf{y}| = a}
\frac{\mathbf{n}_{y} \cdot (\mathbf{x} - \mathbf{y})}{|\mathbf{x} -
\mathbf{y}|^{2}} \mu(\mathbf{y}) \mathrm{d}\sigma_{y}.
\label{eq:2.2}
\end{equation}
Here, $\mathbf{x} \in D$ denotes the evaluation point,
$\mathbf{y} \in \partial D$ denotes the variable of integration,
$\mathbf{n}_{y}$ denotes the unit, outward normal at $\mathbf{y}$, and
$\mathrm{d}\sigma_{y}$ denotes a differential boundary element. The
density, $\mu(\mathbf{y})$, satisfies the following boundary integral
equation,
\begin{equation}
- \frac{1}{2} \mu(\mathbf{y}) - \frac{1}{4\pi a} \int_{|\mathbf{y}'
| = a} \mu(\mathbf{y}') \mathrm{d}\sigma_{y'} = f(\mathbf{y}),
\label{eq:2.3}
\end{equation}
from which we determine that
\begin{equation}
\mu(\mathbf{y}) = \frac{1}{2\pi a} \int_{| \mathbf{y}' | = a}
f(\mathbf{y}') \mathrm{d}\sigma_{y'} - 2 f(\mathbf{y}).
\label{eq:2.4}
\end{equation}
To numerically evaluate \eqref{eq:2.2}, we substitute the
parameterization, $\mathbf{x} = (r \cos t^{\ast}, r \sin t^{\ast})$,
and $\mathbf{y} = (a \cos t, a \sin t)$, with $r < a$, and
$t^{\ast}, t \in [0,2\pi]$, and obtain
\begin{equation}
u(r,t^{\ast}) = \frac{1}{2\pi} \int_0^{2\pi}
\left[ \frac{ a r \cos ( t - t^{\ast} ) - a^{2}}{a ^{2}+ r^2 - 2
ar \cos( t - t^{\ast} ) } \right] \mu(t) \mathrm{d}t =
\frac{1}{2\pi} \int_{0}^{2\pi} K(t - t^{\ast}) \mu(t) \mathrm{d}t.
\label{eq:2.5}
\end{equation}
Using PTR$_{N}$ with points $t_{j} = 2\pi j/N$ for
$j = 1, \cdots, N$, to approximate \eqref{eq:2.5}, we obtain
\begin{equation}
u(r,t^{\ast}) \approx U^{N}(r,t^{\ast}) = \frac{1}{N} \sum_{j =
1}^{N} K\left( \frac{2\pi j}{N} - t^{\ast} \right) \mu\left(
\frac{2\pi j}{N} \right).
\label{eq:2.6}
\end{equation}
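To make the close evaluation error concrete, the following minimal
Python sketch (our own illustration, not the code used for the figures)
applies \eqref{eq:2.6} to the constant density $\mu = 1$, for which the
Fourier series \eqref{eq:2.9} below shows that the exact value is
$u = -1$; the computed error can be compared directly with the
asymptotic prediction \eqref{eq:2.15}:
\begin{verbatim}
import numpy as np

def K(s, r, a=1.0):
    """Kernel of Eq. (2.5)."""
    return (a*r*np.cos(s) - a**2) / (a**2 + r**2 - 2*a*r*np.cos(s))

N, a, tstar = 128, 1.0, 0.3
tj = 2*np.pi*np.arange(1, N + 1) / N
for r in (0.5, 0.9, 0.99):
    U = np.mean(K(tj - tstar, r))      # PTR_N with mu = 1; exact u = -1
    x = (r/a)**N                       # asymptotic error, Eq. (2.15)
    E = (x**2 - x*np.cos(N*tstar)) / (1 + x**2 - 2*x*np.cos(N*tstar))
    print(f"r = {r}: PTR error = {U + 1.0:+.3e}, Eq.(2.15) = {E:+.3e}")
\end{verbatim}
The printed errors are negligible for $r$ well inside the disk and grow
to $O(1)$ as $r \to a$.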
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{introfigure_5-eps-converted-to.pdf}
\includegraphics[width=0.4\linewidth]{introfigure_6-eps-converted-to.pdf}
\caption{[Left] Plot of the kernel, $K(t - t^{\ast})$, defined in
\eqref{eq:2.5} with $a = 1$ for $r = 0.9$ (dot-dashed curve),
$0.95$ (dashed curve), and $0.99$ (solid curve). [Right] Plot of
the kernel, $K(t - t^{\ast})$, with $a = 1$, and $r = 0.99$ (solid
curve), and the corresponding piecewise linear approximation
associated with PTR$_{128}$ (dot-dashed curve).}
\label{fig:1}
\end{figure}
The error, $U^{N}(r,t^{\ast}) - u(r,t^{\ast})$, is not uniform in $D$.
In particular, $U^{N}$ is very accurate for evaluation points far away
from the boundary. On the other hand, the error is $O(1)$ when $r \sim a$.
The reason for this large error is due to the kernel,
$K(t - t^{\ast})$. In Fig.~\ref{fig:1}, we show plots of $K$ as a
function of $t - t^{\ast}$. The left plot of Fig.~\ref{fig:1} shows
that $K$ becomes sharply peaked about $t = t^{\ast}$ when $r \sim a$.
We do not evaluate the double-layer potential on the
boundary. Nonetheless, because $K$ becomes sharply peaked as
$r \to a$, we say that the double-layer potential is nearly singular.
The right plot of Fig.~\ref{fig:1} shows that the piecewise linear
approximation of $K$ used by PTR$_{128}$ will grossly overestimate
the magnitude of the double-layer potential for $r = 0.99 a$. It is
this error that leads to the $O(1)$ errors produced by PTR$_{N}$ for
close evaluation points. Barnett~\cite{barnett2014evaluation} has
shown that this error is $O(1)$ for $a - r = O(1/N)$. In light of
this result, we say that the error exhibits a boundary layer of
thickness $O(1/N)$ in which it undergoes rapid growth.
In the limit as $N \to \infty$, PTR$_{N}$ converges because the
boundary layer vanishes at a rate of $O(1/N)$. However, that is not
the limit we consider here. Rather, we study the limit as the
evaluation point approaches the boundary with $N$ fixed. For that
case, PTR$_{N}$ is unable to accurately capture the sharp peak of
the kernel about $t = t^{\ast}$ that forms as $r \to a$.
Using the error associated with
PTR$_{N}$~\cite{davis1959ptr}, we find $U^{N}$
defined in \eqref{eq:2.6} satisfies
\begin{equation}
U^{N}(r,t^{\ast}) = u(r,t^{\ast}) + \sum_{\substack{l = -\infty\\l
\neq 0}}^{\infty} \hat{p}[lN],
\label{eq:2.7}
\end{equation}
where
\begin{equation}
\hat{p}[k] = \frac{1}{2\pi} \int_{0}^{2\pi} K(t - t^{\ast}) \mu(t)
e^{-\mathrm{i} k t } \mathrm{d}t.
\label{eq:2.8}
\end{equation}
The error in \eqref{eq:2.7} arises from the aliasing of high frequencies. To
determine $\hat{p}[k]$ explicitly, we use the Fourier series
representation of the kernel,
\begin{equation}
K(t-t^{\ast}) = \frac{ ar \cos ( t - t^{\ast} )
-a^2 }{a ^{2}+ r^2 - 2 ar\cos ( t - t^{\ast} )} =
-\frac{1}{2} - \frac{1}{2} \sum_{m = -\infty}^{\infty}
\left( \frac{r}{a} \right)^{|m|} e^{\mathrm{i} m (t - t^{\ast})},
\label{eq:2.9}
\end{equation}
and of the density
\begin{equation}
\mu(t) = \sum_{n = -\infty}^{\infty} \hat{\mu}[n] e^{\mathrm{i} n t},
\label{eq:2.10}
\end{equation}
to find
\begin{equation}
K(t - t^{\ast}) \mu(t) = -\frac{1}{2} \sum_{n =
-\infty}^{\infty} \hat{ {\mu}}[n] e^{\mathrm{i} n t} -\frac{1}{2}
\sum_{m = -\infty}^{\infty} \left( \frac{r}{a} \right)^{|m|}
e^{-\mathrm{i} m t^{\ast}} \sum_{n = -\infty}^{\infty} \hat{\mu}[n]
e^{\mathrm{i} (m + n) t}.
\label{eq:2.11}
\end{equation}
Substituting \eqref{eq:2.11} into \eqref{eq:2.8}, and
rearranging terms, we find that
\begin{equation}
\hat{p}[k] = -\hat{\mu}[k] -\frac{1}{2} \sum_{m = 1}^{\infty} \left(
\frac{r}{a} \right)^{m} \left( e^{\mathrm{i} m t^{\ast}}
\hat{\mu}[k+m] + e^{-\mathrm{i} m t^{\ast}} \hat{\mu}[k-m] \right).
\label{eq:2.12}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{introfigure_9.pdf}
\caption{Contour plot of $\log_{10} |E^{N}(r,t)|$ where $E^{N}$ is
given by \eqref{eq:2.15} with $a = 1$ and $N = 128$.}
\label{fig:2}
\end{figure}
We now obtain the error in using PTR$_{N}$ to
evaluate the double-layer potential by substituting \eqref{eq:2.12}
into \eqref{eq:2.7} which yields
\begin{multline}
E^{N}(r,t^{\ast}) = U^{N}(r,t^{\ast}) - u(r,t^{\ast}) = \sum_{l =
1}^{\infty} \left\{ -\hat{\mu}^{\ast}[lN] - \frac{1}{2} \sum_{m =
1}^{\infty} \left[ \left( \frac{r}{a} \right)^{m} \left(
\hat{\mu}^{\ast}[lN-m] e^{\mathrm{i} m t^{\ast}} +
\hat{\mu}^{\ast}[lN+m]
e^{-\mathrm{i} m t^{\ast}} \right) \right] \right\}\\
+ \sum_{l = 1}^{\infty} \left\{ - \hat{\mu}[lN] -\frac{1}{2} \sum_{m
= 1}^{\infty} \left[ \left( \frac{r}{a} \right)^{m} \left(
\hat{\mu}[lN+m] e^{\mathrm{i} m t^{\ast}} + \hat{\mu}[lN-m]
e^{-\mathrm{i} m t^{\ast}} \right) \right] \right\}.
\label{eq:2.13}
\end{multline}
Here, we have assumed $\mu$ is real, so
$\hat{\mu}[-k] = \hat{\mu}^{\ast}[k]$, where $[ \cdot ]^{\ast}$
denotes complex conjugation. Suppose we have chosen $N$ to be large
enough so that $\hat{\mu}[lN] \ll 1$ for $l > 0$.
For that case, only terms in \eqref{eq:2.13} proportional to
$\hat{\mu}[lN-m]$ will substantially contribute to the error. By
neglecting the other terms, we obtain
\begin{equation}
E^{N}(r,t^{\ast}) \sim - \sum_{l = 1}^{\infty} \sum_{m
= 1}^{\infty} \left( \frac{r}{a} \right)^{m} \left[ \text{Re}\{
\hat{\mu}[lN-m] \} \cos( m t^{\ast} ) + \text{Im}\{
\hat{\mu}[lN-m] \} \sin( m t^{\ast} ) \right].
\label{eq:2.14}
\end{equation}
Equation \eqref{eq:2.14} is the asymptotic error made by
PTR$_{N}$. The key point is that
when $N$ is fixed,
this error is not uniform for $r \in [0,a)$. When $r \ll a$, we see
that the error is much smaller than when $r \sim a$. Consider the
specific case in which $\mu = 1$, so that $\hat{\mu}[0] = 1$ and
$\hat{\mu}[k] = 0$ for all $k \neq 0$. For that case, \eqref{eq:2.14}
simplifies to
\begin{equation}
E^{N}(r,t^{\ast}) \sim \frac{\left( \frac{r}{a} \right)^{2N} -
\left( \frac{r}{a} \right)^{N} \cos (N t^{\ast})}{1 + \left(
\frac{r}{a} \right)^{2N} - 2 \left( \frac{r}{a} \right)^{N} \cos
(N t^{\ast})}.
\label{eq:2.15}
\end{equation}
According to \eqref{eq:2.15},
$| E^{N}(a(1-\epsilon),t^{\ast}) | = O((1-\epsilon)^{N}) =
O(e^{-\epsilon N})$,
and $| E^{N}(r,t^{\ast}) | \to 1/2$ as $r \to a$. These results show
that $E^{N}$ has a boundary layer of thickness $O(1/N)$ where it
exponentially increases to values that are $O(1)$. Fig.~\ref{fig:2}
shows a plot of \eqref{eq:2.15} over the entire circular disk and a
close-up near the boundary. These plots show the boundary layer about
$r = a$ where the error attains $O(1)$ values. In practice, we would
like to set $N$ based on the resolution required to solve the boundary
integral equation. It is neither desirable nor practical to increase
$N$ just to reduce aliasing in the evaluation of the double-layer
potential. In light of this, we make the following observations.
\begin{itemize}
\item Equation \eqref{eq:2.13} gives the error incurred by
PTR$_{N}$ to approximate the double-layer
potential. This error is due to aliasing. Equation \eqref{eq:2.14}
gives the asymptotic approximation of this error when the $N$-point
grid sufficiently samples $\mu$.
\item The aliasing error is not uniform with respect to $r$. For the
case in which $\mu = 1$, the asymptotic error simplifies to
\eqref{eq:2.15}. From this result, we find that the error grows
rapidly and becomes $O(1)$ in a boundary layer of thickness $O(1/N)$
near the boundary. This boundary layer is shown in Fig.~\ref{fig:2}.
\item For points within the boundary layer, the sharply peaked kernel
causes aliasing due to insufficient resolution. Fig.~\ref{fig:1}
shows how the sharp peak of the kernel when $r/a = 0.99$ is
under-resolved on the grid for PTR$_{128}$.
\end{itemize}
Alternatively, by substituting \eqref{eq:2.9} and
\eqref{eq:2.10} into \eqref{eq:2.5}, we obtain
\begin{equation}
u(r,t^{\ast}) = \sum_{n = -\infty}^{\infty} \hat{K}^{\ast}[n]
\hat{\mu}[n] e^{- \mathrm{i} n t^\ast} \approx \sum_{n =
-N/2}^{N/2-1} \hat{K}^{\ast}[n] \hat{\mu}[n] e^{- \mathrm{i} n
t^\ast}.
\label{eq:2.16}
\end{equation}
Since the coefficients, $\hat{\mu}[n]$ for $n = -N/2, \cdots, N/2-1$,
can be computed readily using the Fast Fourier Transform, we introduce
the truncated sum as an approximation in \eqref{eq:2.16}. The decay of
$\hat{\mu}[n]$ controls the error of this approximation. Therefore,
choosing $N$ to accurately solve the boundary integral equation yields
a spectrally accurate approximation of the double-layer potential.
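As an illustration (ours, with a helper name of our choosing), the
truncated sum in \eqref{eq:2.16} amounts to a single FFT of the density
samples; note that $\hat{K}[n]$ is real here:
\begin{verbatim}
import numpy as np

def u_disk_spectral(mu_samples, r, a, tstar):
    """Truncated spectral sum of Eq. (2.16); mu_samples holds mu(t_j)
    at t_j = 2*pi*j/N, j = 0, ..., N-1."""
    N = mu_samples.size
    mu_hat = np.fft.fft(mu_samples) / N           # hat{mu}[n]
    n = np.fft.fftfreq(N, d=1.0/N).astype(int)    # 0,...,N/2-1,-N/2,...,-1
    K_hat = -0.5 * (r/a)**np.abs(n)               # from Eq. (2.9), n != 0
    K_hat[0] = -1.0                               # hat{K}[0] = -1
    return np.real(np.sum(K_hat * mu_hat * np.exp(-1j*n*tstar)))
\end{verbatim}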
For general problems, we do not know $\hat{K}[n]$ explicitly as we do
here. Instead, we compute an asymptotic expansion of
the sharply peaked kernel. We determine the Fourier
coefficients of this asymptotic expansion explicitly. Hence, we
evaluate its contribution to the double-layer potential using an
approximation like the one in \eqref{eq:2.16}. By removing the
kernel's sharp peak in this way, we are left with a smooth function to
integrate using the PTR$_{N}$. We present this
method to evaluate the double-layer potential for the
interior Dirichlet problem for Laplace's equation in Section
\ref{sec:doublelayer}, and the single-layer potential for the exterior
Neumann problem for Laplace's equation in Section
\ref{sec:singlelayer}.
\section{Double-layer potential for the interior Dirichlet problem}
\label{sec:doublelayer}
Consider a simply connected, open set denoted by
$D \subset \mathbb{R}^2$, with analytic boundary $\partial D$. Let
$\overline{D} = D \cup \partial D$. The function
$u \in C^2(D) \cap C^1(\overline{D})$ satisfies
\begin{subequations}
\begin{gather}
\Delta u = 0 \quad \text{in $D$}, \label{eq:3.1a}\\
u = f \quad \text{on $\partial D$}, \label{eq:3.1b}
\end{gather}
\label{eq:3.1}
\end{subequations}
with $f$ an analytic function. We seek $u$ as the
double-layer potential~\cite{kress1999linear},
\begin{equation}
u(\mathbf{x}) = \frac{1}{2\pi} \int_{\partial D}
K(\mathbf{x},\mathbf{y})\mu(\mathbf{y})\mathrm{d}\sigma_{y}, \quad
\mathbf{x} \in D,
\label{eq:DLP}
\end{equation}
with
\begin{equation}
K(\mathbf{x},\mathbf{y})=\mathbf{n}_{y} \cdot \frac{\mathbf{x} -
\mathbf{y}}{| \mathbf{x} - \mathbf{y} |^{2}}.
\label{eq:DLP-kernel}
\end{equation}
The density, $\mu$, satisfies the boundary integral equation,
\begin{equation}
- \frac{1}{2} \mu(\mathbf{y}) + \frac{1}{2\pi} \int_{\partial D}
K(\mathbf{y},\mathbf{y'})\mu(\mathbf{y}') \mathrm{d}\sigma_{y'} =
f(\mathbf{y}), \quad \mathbf{y} \in \partial D.
\label{eq:DLP-BIE}
\end{equation}
In what follows, we assume that we have solved \eqref{eq:DLP-BIE}
using PTR$_{N}$.
\begin{figure}[t]
\centering
\def\svgwidth{0.4\columnwidth}
\input{sketch.pdf_tex}
\caption{Sketch of the quantities introduced in
\eqref{eq:DLP-target} to study evaluation points close to the
boundary.}
\label{fig:sketch}
\end{figure}
To evaluate \eqref{eq:DLP} when $\mathbf{x}$ is close to the boundary,
we set
\begin{equation}
\mathbf{x} = \mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast},
\label{eq:DLP-target}
\end{equation}
where $\mathbf{y}^{\ast}$ is the closest point to $\mathbf{x}$ on
the boundary, $\mathbf{n}_{y}^{\ast}$ is the unit, outward
normal at $\mathbf{y}^{\ast}$, $\kappa^{\ast}$ is the signed
curvature at $\mathbf{y}^{\ast}$, and $\epsilon >0$ is a small
parameter. Fig.~\ref{fig:sketch} gives a sketch of these
quantities. Substituting \eqref{eq:DLP-target} into
\eqref{eq:DLP-kernel} yields
\begin{equation}
K \left(\mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast},
\mathbf{y} \right) =
\frac{|\kappa^{\ast}|}{\epsilon} \frac{\mathbf{n}_{y}
\cdot |\kappa^{\ast}| ( \mathbf{y}^{\ast} - \mathbf{y} )/\epsilon -
\mathbf{n}_{y} \cdot \mathbf{n}_{y}^{\ast}}{| \kappa^{\ast} (
\mathbf{y}^{\ast} - \mathbf{y} )/\epsilon |^{2} - 2
\mathbf{n}_{y}^{\ast} \cdot |\kappa^{\ast}| ( \mathbf{y}^{\ast} -
\mathbf{y} )/\epsilon + 1}.
\label{eq:asympt-kernel}
\end{equation}
We have written $K$ in \eqref{eq:asympt-kernel} to reveal its inherent
dependence on the stretched variable,
$\mathbf{y} = \mathbf{y}^{\ast} + \epsilon
\mathbf{Y}/|\kappa^{\ast}|$.
\subsection{Matched asymptotic expansion of the kernel}
\label{ssec:asymp}
We determine the matched asymptotic expansion of
\eqref{eq:asympt-kernel} \cite{bender1999advanced}. Consider first
the outer expansion in which $\mathbf{y}^{\ast}$ and $\mathbf{y}$ are
held fixed and $\epsilon \to 0^{+}$, so that
$|\mathbf{Y}| \to \infty$. To leading order, we find that
\begin{equation}
K^{\text{out}} \sim -\frac{|\kappa^{\ast}|}{\epsilon}
\frac{\mathbf{n}_{y} \cdot \mathbf{Y}}{|\mathbf{Y}|^{2}} =
\frac{\mathbf{n}_{y} \cdot ( \mathbf{y}^{\ast} - \mathbf{y} )}{|
\mathbf{y}^{\ast} - \mathbf{y} |^{2} }.
\label{eq:DLP-outer}
\end{equation}
The error of \eqref{eq:DLP-outer} is $O(\epsilon)$. Since this outer
expansion
is the kernel in \eqref{eq:DLP-BIE}, we find that
\begin{equation}
\frac{1}{2\pi} \int_{\partial D} K^{\text{out}}(\mathbf{y}^{\ast} -
\mathbf{y}) \mu(\mathbf{y}) \mathrm{d}\sigma_{y} =
f(\mathbf{y}^{\ast}) + \frac{1}{2} \mu(\mathbf{y}^{\ast}).
\label{eq:outerDLP}
\end{equation}
The inner expansion is \eqref{eq:asympt-kernel}
written in terms of the stretched variable, $\mathbf{Y}$. We seek
an explicit parameterization of this inner expansion
using $ \mathbf{y}(t) = ( y_{1}(t), y_{2}(t) )$, with
$t \in [0,2\pi]$. It follows that
$\mathrm{d}\sigma_{y} = | \mathbf{y}'(t) | \mathrm{d}t$, the unit
tangent is
$\boldsymbol{\tau}_{y}(t) = (y_{1}'(t),y_{2}'(t))/|\mathbf{y}'(t)|$,
the outward unit normal is
$\mathbf{n}_{y}(t) = (y_{2}'(t),-y_{1}'(t))/|\mathbf{y}'(t)|$, and the
signed curvature is
$ \kappa(t) = ( y_{1}'(t) y_{2}''(t) - y_{1}''(t)
y_{2}'(t))/|\mathbf{y}'({t})|^{3}$.
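As a concrete instance (a Python sketch of ours, using the star-shaped
boundary $r(t) = 1.55 + 0.4 \cos 5t$ of Section~\ref{ssec:DLPresults}),
these geometric quantities can be evaluated as:
\begin{verbatim}
import numpy as np

def boundary_quantities(t):
    """Speed |y'|, unit tangent, outward normal, and signed curvature
    for y(t) = r(t) (cos t, sin t) with r(t) = 1.55 + 0.4 cos(5t)."""
    r, rp, rpp = 1.55 + 0.4*np.cos(5*t), -2.0*np.sin(5*t), -10.0*np.cos(5*t)
    y1p  = rp*np.cos(t) - r*np.sin(t)
    y2p  = rp*np.sin(t) + r*np.cos(t)
    y1pp = rpp*np.cos(t) - 2*rp*np.sin(t) - r*np.cos(t)
    y2pp = rpp*np.sin(t) + 2*rp*np.cos(t) - r*np.sin(t)
    speed = np.hypot(y1p, y2p)
    tau   = np.array([y1p, y2p]) / speed
    nrm   = np.array([y2p, -y1p]) / speed
    kappa = (y1p*y2pp - y1pp*y2p) / speed**3
    return speed, tau, nrm, kappa
\end{verbatim}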
Let $\mathbf{y}^{\ast} = \mathbf{y}(t^{\ast})$ and
$\kappa^{\ast} = \kappa(t^{\ast})$ with $t^{\ast} \in [0,2\pi]$. We
introduce the stretched parameter, $t = t^{\ast} + \epsilon T$, and
find by expanding about $\epsilon = 0$ that
\begin{equation}
\mathbf{y}(t^{\ast} + \epsilon T) = \mathbf{y}(t^{\ast}) +
\epsilon T | \mathbf{y}'(t^{\ast}) | \boldsymbol{\tau}_{y}(t^{\ast})
- \frac{1}{2} \epsilon^{2} T^{2} \left[ \kappa^{\ast} |
\mathbf{y}'(t^{\ast}) |^{2} \mathbf{n}_{y}(t^{\ast}) - (
\boldsymbol{\tau}_{y}(t^{\ast}) \cdot \mathbf{y}''(t^{\ast}) )
\boldsymbol{\tau}_{y}(t^{\ast}) \right] + O(\epsilon^{3}).
\label{eq:3.10}
\end{equation}
It follows that
\begin{equation}
\mathbf{n}_{y}(t^{\ast} + \epsilon T) \cdot |\kappa^{\ast}| (
\mathbf{y}(t^{\ast}) -
\mathbf{y}(t^{\ast} + \epsilon T ) ) = -\frac{1}{2} \epsilon^{2} T^{2}
\gamma^{\ast} + O(\epsilon^{3}),
\label{eq:3.11}
\end{equation}
\begin{equation}
\mathbf{n}_{y}(t^{\ast}) \cdot |\kappa^{\ast}| ( \mathbf{y}(t^{\ast}) -
\mathbf{y}(t^{\ast} + \epsilon T ) ) = \frac{1}{2} \epsilon^{2} T^{2}
\gamma^{\ast} + O(\epsilon^{3}),
\label{eq:3.12}
\end{equation}
\begin{equation}
\mathbf{n}_{y}(t^{\ast} + \epsilon T ) \cdot
\mathbf{n}_{y}(t^{\ast}) = 1 - \frac{1}{2} \epsilon^{2} T^{2}
|\gamma^{\ast}|^{2} + O(\epsilon^{3}),
\label{eq:3.13}
\end{equation}
and
\begin{equation}
| \kappa(t^{\ast}) [ \mathbf{y}(t^{\ast}) - \mathbf{y}(t^{\ast} +
\epsilon T ) ] |^{2} = \epsilon^{2} T^{2} | \gamma^{\ast} | +
O(\epsilon^{3}),
\label{eq:3.14}
\end{equation}
with
$\gamma^{\ast} = \text{sgn}[\kappa^{\ast}] | \kappa^{\ast}
\mathbf{y}'(t^{\ast}) |^{2}$. Here, $\text{sgn}[x] = x/|x|$ for
$x \ne 0$ and $\text{sgn}[x] = 0$ for $x = 0$. Substituting
\eqref{eq:3.11} -- \eqref{eq:3.14} into \eqref{eq:asympt-kernel}, we
find that
\begin{equation}
K^{\text{in}}(T;\epsilon) = |\kappa(t^{\ast})| \frac{ - \epsilon
-\frac{1}{2} \epsilon^{2} T^{2} \gamma^{\ast} +
O(\epsilon^{3})}{\epsilon^{2} T^{2} | \gamma^{\ast} | +
\epsilon^{2} + O(\epsilon^{3})}.
\label{eq:3.15}
\end{equation}
Next, we substitute
$\epsilon^{2} T^{2} \sim 2 - 2 \cos( t - t^{\ast} )$ into
\eqref{eq:3.15} and determine that the leading order asymptotic
behavior of $K^{\text{in}}$ is given by
\begin{equation}
K^{\text{in}}(t - t^{\ast};\epsilon) \sim |\kappa(t^{\ast})| \frac{ -
( \gamma^{\ast} + \epsilon ) + \gamma^{\ast} \cos(t -
t^{\ast})}{(2 | \gamma^{\ast} | + \epsilon^{2} ) - 2 |
\gamma^{\ast} | \cos(t - t^{\ast})}.
\label{eq:DLP-inner}
\end{equation}
The error of \eqref{eq:DLP-inner} is at most $O(\epsilon)$.
To form the leading order matched asymptotic expansion,
we establish asymptotic matching in the overlap region
of the outer and inner expansions. We first evaluate
\eqref{eq:DLP-outer} in the limit as
$\mathbf{y} \to \mathbf{y}^{\ast}$ and find that
\begin{equation}
K^{\text{out}} \to -
\frac{\kappa^{\ast}}{2}, \quad \mathbf{y} \to \mathbf{y}^{\ast}.
\end{equation}
Next, we evaluate \eqref{eq:DLP-inner} in the limit as
$\epsilon \to 0^{+}$ and find that
\begin{equation}
K^{\text{in}} \to - \frac{\kappa^{\ast}}{2}, \quad
\epsilon \to 0^{+}.
\end{equation}
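Indeed, for fixed $t \neq t^{\ast}$ the $\epsilon$-dependent terms in
\eqref{eq:DLP-inner} drop out and
\begin{equation*}
K^{\text{in}} \to |\kappa^{\ast}| \,
\frac{-\gamma^{\ast} \left( 1 - \cos(t - t^{\ast}) \right)}
{2 |\gamma^{\ast}| \left( 1 - \cos(t - t^{\ast}) \right)}
= - \frac{|\kappa^{\ast}| \, \text{sgn}[\gamma^{\ast}]}{2}
= - \frac{\kappa^{\ast}}{2},
\end{equation*}
since $\text{sgn}[\gamma^{\ast}] = \text{sgn}[\kappa^{\ast}]$.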
Thus, the overlapping value is $-\kappa^{\ast}/2$. It
follows that the matched asymptotic expansion for the kernel of the
double-layer potential is given by
\begin{equation}
K = K^{\text{out}} + K^{\text{in}} + \frac{\kappa^{\ast}}{2} + O(\epsilon),
\quad \epsilon \to 0^{+},
\label{eq:DLP-matched}
\end{equation}
with $K^{\text{out}}$ given in \eqref{eq:DLP-outer} and
$K^{\text{in}}$ given in \eqref{eq:DLP-inner}. This matched asymptotic
expansion has an $O(\epsilon)$ error because the error
of $K^{\text{out}}$ given by \eqref{eq:DLP-outer} is $O(\epsilon)$,
and that of $K^{\text{in}}$ given by \eqref{eq:DLP-inner} is at most
$O(\epsilon)$. For example, we plot $K$, $K^{\text{in}}$, and $K^{\text{out}}$ in
Fig.~\ref{fig:4} as a function of $t - t^{\ast}$ with $t^{\ast} = \pi$
and $\epsilon = 0.1$ for the boundary curve $r(t) = 1 + 0.3 \cos 5
t$. The right plot shows the $L_{\infty}$-error
made by \eqref{eq:DLP-matched} as a function of $ \epsilon$.
The solid curve is the linear fit through these data and has slope
$1.2034$ consistent with the $O(\epsilon)$ error.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\linewidth]{AsymptoticDLPkernel_v2-eps-converted-to.pdf}
\includegraphics[width=0.40\linewidth]{AsymptoticDLPerror-eps-converted-to.pdf}
\caption{[Left] Plot of the kernel, $K$, given in
\eqref{eq:asympt-kernel} (solid curve) and the leading order
behavior of its inner expansion, $K^{\text{in}}$ given in
\eqref{eq:DLP-inner} (dashed curve) and its outer expansion,
$K^{\text{out}}$ given in \eqref{eq:DLP-outer} (dotted curve) as a
function of $t - t^{\ast}$ with $t^{\ast} = \pi$ and
$\epsilon = 0.1$ for the boundary curve,
$r(t) = 1 + 0.3 \cos 5 t$. [Right] $L_{\infty}$--error of the
matched asymptotic expansion given in \eqref{eq:DLP-matched}
evaluated at $t^{\ast} = \pi$ for $\epsilon = 0.0001$, $0.001$,
$0.01$, and $0.1$. These computed errors are plotted as
circles. The solid curve gives the result of fitting this data to
the function, $C \epsilon^{p}$. This fit produced $p = 1.2034$
indicating the $O(\epsilon)$ error of the matched asymptotic
expansion.}
\label{fig:4}
\end{figure}
\subsection{Fourier coefficients of $K^{\text{in}}$}
The inner expansion given by \eqref{eq:DLP-inner} accurately captures
the sharp peak of the kernel at $t = t^{\ast}$ in the
limit as $\epsilon \to 0^{+}$ as shown in
Fig.~\ref{fig:4}. To avoid using PTR$_{N}$ to
integrate over this sharp peak, we seek the Fourier coefficients,
\begin{equation}
\hat{K}^{\text{in}}[n] = \frac{1}{2\pi} \int_{0}^{2\pi}
K^{\text{in}}(t;\epsilon) e^{-\mathrm{i} n t} \mathrm{d}t,
\label{eq:3.19}
\end{equation}
so that we may use an approximation similar to that given in
\eqref{eq:2.16}. To do so, we rewrite \eqref{eq:DLP-inner} as
\begin{equation}
K^{\text{in}}(t - t^{\ast};\epsilon) = - \frac{| \kappa(t^{\ast})
|}{C_{0}} \frac{ \frac{1}{2} A_{0} - A_{1} \cos(t - t^{\ast})}{ 1 +
C_{1} \cos(t - t^{\ast} )},
\label{eq:DLP-FP}
\end{equation}
with $A_{0} = 2 ( \gamma^{\ast} + \epsilon ),$
$A_{1} = \gamma^{\ast},$ $C_{0} = 2 | \gamma^{\ast} | + \epsilon^{2},$
and $C_{1} = - 2 |\gamma^{\ast}| / C_{0}.$ Equation \eqref{eq:DLP-FP}
gives $K^{\text{in}}$ as a rational function of trigonometric
polynomials which have been studied by Geer~\cite{geer1995rational} in
the context of constructing Fourier-Pad\'e approximations. Since
$|C_{1}| < 1$, we have
\begin{equation}
\hat{K}^{\text{in}}[n] =
\begin{cases}
\frac{1 + \rho^{2}}{1 - \rho^{2}} \left( \frac{A_{0}}{2} + A_{1}
\rho \right), & n = 0\\
\frac{1 + \rho^{2}}{1 - \rho^{2}} \left( \frac{A_{0}
\rho^{|n|}}{4} + A_{1} ( \rho^{|n|-1} + \rho^{|n|+1} )
\right), & n \neq 0
\end{cases},
\label{eq:DLP-Fourier}
\end{equation}
where $\rho = \left(\sqrt{1 - C_{1}^{2}} - 1 \right)/C_{1}$.
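These coefficients are inexpensive to tabulate. A minimal Python sketch
(ours) follows; any overall prefactor of \eqref{eq:DLP-FP} not written
out in \eqref{eq:DLP-Fourier} is left to the caller, and the
coefficients $A_{0}$, $A_{1}$, $C_{1}$ may be taken either from the
definitions above or from \eqref{eq:DLP-FPcoeffs} below:
\begin{verbatim}
import numpy as np

def Kin_hat(n, A0, A1, C1):
    """Fourier coefficients of Eq. (DLP-Fourier)."""
    rho  = (np.sqrt(1.0 - C1**2) - 1.0) / C1
    pref = (1.0 + rho**2) / (1.0 - rho**2)
    if n == 0:
        return pref * (0.5*A0 + A1*rho)
    m = abs(n)
    return pref * (0.25*A0*rho**m + A1*(rho**(m-1) + rho**(m+1)))
\end{verbatim}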
We find that we can improve on our approximation by considering the
specific case in which the boundary is a circle of radius $a$. For
that case, $K^{\text{out}} = - \kappa^{\ast}/2$ which cancels with the
asymptotic matching term in \eqref{eq:DLP-matched}. If we set
\begin{subequations}
\begin{align}
A_{0} &= 2 ( \gamma^{\ast} + \epsilon - \epsilon | \gamma^{\ast} |),\\
A_{1} &= \gamma^{\ast} - \epsilon | \gamma^{\ast} |,\\
C_{0} &= 2 ( | \gamma^{\ast} | - \epsilon \gamma^{\ast} ) +
\epsilon^{2},\\
C_{1} &= - 2 ( | \gamma^{\ast} | - \epsilon \gamma^{\ast} ) / C_{0},
\end{align}
\label{eq:DLP-FPcoeffs}
\end{subequations}
instead of the coefficients defined above, we find that
\eqref{eq:DLP-FP} gives the exact evaluation of the kernel at
$r = a ( 1 - \epsilon )$. For this reason, we use
\eqref{eq:DLP-FPcoeffs} in \eqref{eq:DLP-FP} and
\eqref{eq:DLP-Fourier} in practice. These coefficients just include
the $O(\epsilon^3T^2)$ terms in the asymptotic
expansion of $K^{\text{in}}$ for a general boundary.
To compute the contribution by $K^{\text{in}}$ to the
double-layer potential, we use the approximation
\begin{equation}
\frac{1}{2\pi} \int_{0}^{2\pi} K^{\text{in}}(t - t^{\ast};\epsilon)
\mu(t) | \mathbf{y}'(t) | \mathrm{d}t \approx \sum_{n = -N/2}^{N/2-1}
\hat{K}^{\text{in} \ast}[n] \hat{\mu}_y[n] e^{-\mathrm{i}n t^\ast},
\label{eq:innerDLP}
\end{equation}
with
\begin{equation}
\hat{\mu}_y[n] = \frac{1}{2\pi} \int_{0}^{2\pi} \mu(t) | \mathbf{y}'(t)
| e^{-\mathrm{i} n t} \mathrm{d}t.
\label{eq:DLP-muhat}
\end{equation}
We use \eqref{eq:DLP-Fourier} and compute \eqref{eq:DLP-muhat} using
the Fast Fourier Transform to evaluate the approximation in
\eqref{eq:innerDLP}. Provided that $N$ is chosen to solve boundary
integral equation \eqref{eq:DLP-BIE} so that $\mu(t) | \mathbf{y}'(t)
|$ is sufficiently resolved, the approximation in \eqref{eq:innerDLP}
is spectrally accurate.
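In practice, \eqref{eq:innerDLP} is one FFT followed by a weighted
sum. A minimal Python sketch (ours, reusing \texttt{Kin\_hat} from the
previous sketch) reads:
\begin{verbatim}
def inner_sum(mu_speed, tstar, A0, A1, C1):
    """Spectrally accurate inner contribution, Eq. (innerDLP);
    mu_speed holds mu(t_j) |y'(t_j)| on the PTR grid."""
    N = mu_speed.size
    mu_hat = np.fft.fft(mu_speed) / N               # Eq. (DLP-muhat) via FFT
    n = np.fft.fftfreq(N, d=1.0/N).astype(int)
    Kh = np.array([Kin_hat(k, A0, A1, C1) for k in n])
    return np.real(np.sum(np.conj(Kh) * mu_hat * np.exp(-1j*n*tstar)))
\end{verbatim}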
\subsection{Evaluating the double-layer potential}
The new method developed here for close evaluation of the double-layer
potential uses \eqref{eq:DLP-matched}. For convenience, let us
introduce the residual kernel,
\begin{equation}
\tilde{K} = K - K^{\text{out}} - K^{\text{in}} -
\frac{\kappa^{\ast}}{2}.
\label{eq:3.27}
\end{equation}
By construction, $\tilde{K} = O(\epsilon)$ and, more importantly, it does not have a
sharp peak about $t = t^{\ast}$. We rewrite the double-layer
potential as
\begin{multline}
u\left(\mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right) = \frac{1}{2\pi}
\int_{0}^{2\pi} \tilde{K}(t - t^{\ast};\epsilon) \mu(t) |
\mathbf{y}'(t) | \mathrm{d}t + \frac{1}{2\pi} \int_{0}^{2\pi}
K^{\text{out}}(t - t^{\ast};\epsilon)
\mu(t) | \mathbf{y}'(t) | \mathrm{d}t \\
+ \frac{1}{2\pi} \int_{0}^{2\pi} K^{\text{in}}(t -
t^{\ast};\epsilon) \mu(t) | \mathbf{y}'(t) | \mathrm{d}t +
\frac{\kappa^{\ast}}{4\pi} \int_{0}^{2\pi} \mu(t) |
\mathbf{y}'(t) | \mathrm{d}t.
\label{eq:3.26}
\end{multline}
Substituting \eqref{eq:outerDLP} and \eqref{eq:innerDLP} into
\eqref{eq:3.26}, we obtain
\begin{equation}
u\left(\mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right) \approx \frac{1}{2\pi}
\int_{0}^{2\pi} \left[ \tilde{K}(t - t^{\ast};\epsilon) +
\frac{\kappa^{\ast}}{2} \right] \mu(t) | \mathbf{y}'(t) |
\mathrm{d}t + f(\mathbf{y}(t^{\ast})) + \frac{1}{2} \mu(t^{\ast})
+ \sum_{n = -N/2}^{N/2-1} \hat{K}^{\text{in} \ast}[n]
\hat{\mu}_y[n] e^{- \mathrm{i} n t^\ast}.
\label{eq:3.29}
\end{equation}
Applying PTR$_{N}$ with $t_{j} = 2\pi j/N$ to the
remaining integral in \eqref{eq:3.29}, we arrive at
\begin{equation}
u\left(\mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right)\approx \frac{1}{N} \sum_{j =
1}^{N} \left[ \tilde{K}( t_{j} - t^{\ast};\epsilon ) +
\frac{\kappa^{\ast}}{2} \right] \mu(t_{j}) |\mathbf{y}'(t_{j})|
+ f(\mathbf{y}(t^{\ast})) + \frac{1}{2} \mu(t^{\ast}) + \sum_{n =
-N/2}^{N/2-1} \hat{K}^{\text{in} \ast}[n] \hat{\mu}_y[n] e^{-
\mathrm{i} n t^\ast}.
\label{eq:DLP-asymptotic}
\end{equation}
Equation \eqref{eq:DLP-asymptotic} gives our method for computing the
double-layer potential for close evaluation points. It avoids aliasing
incurred by the sharp peak of $K^{\text{in}}$ by using
\eqref{eq:innerDLP}. Integration of $K^{\text{out}}$
is replaced by
$f(\mathbf{y}^{\ast}) + \frac{1}{2} \mu(\mathbf{y}^{\ast})$, which
comes from evaluating boundary integral equation \eqref{eq:DLP-BIE}
at $\mathbf{y}^{\ast}$.
PTR$_{N}$ is now used only to integrate the term with
the kernel, $\tilde{K} + \kappa^{\ast}/2$.
This term is important for taking into account additional, non-local
contributions to the double-layer potential, which may be
significant.
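A sketch of how the terms of \eqref{eq:DLP-asymptotic} fit together is
given below (our interface: \texttt{Ktilde} is a user-supplied callable
implementing the residual kernel of \eqref{eq:3.27} on the PTR grid,
and \texttt{inner\_sum} is the helper defined above):
\begin{verbatim}
def u_close(eps, jstar, t, mu, speed, kappa, f_star, Ktilde, A0, A1, C1):
    """Close evaluation of the double-layer potential,
    Eq. (DLP-asymptotic); f_star = f(y(t*))."""
    tstar, ks = t[jstar], kappa[jstar]
    smooth = np.mean((Ktilde(t - tstar, eps) + 0.5*ks) * mu * speed)  # PTR_N
    return (smooth + f_star + 0.5*mu[jstar]
            + inner_sum(mu*speed, tstar, A0, A1, C1))
\end{verbatim}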
\subsection{Numerical results}\label{ssec:DLPresults}
We present results of this method for evaluating the double-layer
potential by computing the harmonic function,
$u(\mathbf{x}) = -\frac{1}{2\pi} \log | \mathbf{x} - \mathbf{x}_{0} |$
with $\mathbf{x}_{0} = (1.85,1.65)$. We compute the solution
interior to the boundary curve, $r(t) = 1.55 + 0.4 \cos 5 t$. The
Dirichlet data in \eqref{eq:3.1b} is determined by evaluating the
harmonic function on the boundary. Boundary integral equation
\eqref{eq:DLP-BIE} is solved using PTR$_{N}$ and we
use the resulting density, $\mu(t_{j})$ with $t_{j} = 2\pi j/N$ for
$j = 1, \cdots, N$ in the double-layer potential. We evaluate the
double-layer potential using two methods: (1)
PTR$_{N}$ and (2) asymptotic PTR$_{N}$, the new method given in
\eqref{eq:DLP-asymptotic}. We present the solution on a body-fitted
grid in which evaluation points are found by moving
along the normal into the domain from boundary grid
points. The solution is evaluated on a grid of $200$ equispaced
points along each normal starting at the boundary until we reach a
distance $1/\kappa_{\max}$ from the boundary, where
$\kappa_{\max} = \max_{0 \le t^{\ast} < 2 \pi} |\kappa(t^{\ast})|$.
This grid captures the boundary layer, but does not coincide exactly
with it since the boundary layer depends on the local curvature.
For regions of high curvature, this body-fitted grid
extends beyond the boundary layer.
In Fig.~\ref{fig:6} we show the errors
($\log_{10}$-scale) in computing the double-layer potential using
PTR$_{128}$ and asymptotic PTR$_{128}$. These results show that
asymptotic PTR$_{N}$ produces errors that are several
orders of magnitude smaller than those of the
PTR$_{N}$. To give an indication of this
improvement, the $L_{\infty}$ error is $8.03$ for
PTR$_{128}$ and $1.85 \times 10^{-4}$ for asymptotic
PTR$_{128}$. To examine this error in more detail,
we show in Fig.~\ref{fig:7} a plot of the error in computing the
double-layer potential evaluated at the points indicated in
Fig.~\ref{fig:6} ($t^{\ast} = 0$, $t^{\ast} = \pi/2$, and
$t^{\ast} = \pi$) as a function of $\epsilon$. These three cases are
plotted over different ranges of $\epsilon$ corresponding to
$0 < \epsilon < \kappa(t^{\ast})/\kappa_{\max}$. We observe that
asymptotic PTR$_{N}$ does significantly better than
PTR$_{N}$ for small $\epsilon$ as expected. It
reduces the $O(1)$ error by at least four orders of magnitude. We
find similar results over all values of
$t^{\ast}$.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75\linewidth]{plots_DLP_v3.pdf}
\caption{[Left] Plot of absolute error
($\log_{10}$-scale) in computing the double-layer potential
using PTR$_{128}$ for the boundary
$r(t) = 1.55 + 0.4 \cos 5 t$ for the Dirichlet data,
$f(\mathbf{y}) = \frac{1}{2\pi} \log | \mathbf{y} - \mathbf{x}_{0} |$
with $\mathbf{x}_{0} = (1.85, 1.65)$. [Right] Plot of
absolute error ($\log_{10}$-scale) in computing
the double-layer potential using the asymptotic PTR$_{128}$
given in \eqref{eq:DLP-asymptotic} for the same problem. The
``$\times$'' symbols on the boundary indicates the points
corresponding to $t^{\ast}=0$, $t^{\ast}=\pi/2$, and
$t^{\ast}=\pi$. }
\label{fig:6}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.31\linewidth]{error_t1_v3-eps-converted-to.pdf} \, \,
\includegraphics[width=0.31\linewidth]{error_t2_v3-eps-converted-to.pdf} \, \,
\includegraphics[width=0.31\linewidth]{error_t3_v3-eps-converted-to.pdf}
\caption{Plot of the absolute error as a function of $\epsilon$ made
in evaluating the double-layer potential for different $t^\ast$
values using PTR$_{128}$ (solid curve) and using
asymptotic PTR$_{128}$ (dashed curve).}
\label{fig:7}
\end{figure}
We can further improve the new method using the identity for the
double-layer potential~\cite{kress1999linear} (see
\eqref{kernel_identities}) to rewrite \eqref{eq:DLP} as follows:
\begin{equation}
u(\mathbf{x}) = \frac{1}{2\pi} \int_{\partial D}
K(\mathbf{x},\mathbf{y})(\mu(\mathbf{y}) - \mu(\mathbf{y}^\ast)
)\mathrm{d}\sigma_{y} - \mu(\mathbf{y}^\ast), \quad
  \mathbf{x} \in D.
\label{eq:DLP+subtraction}
\end{equation}
In \eqref{eq:DLP+subtraction}, the integrand is now smoother as it
vanishes at the point $\mathbf{y} = \mathbf{y}^\ast$, and the error
using PTR$_{N}$ drastically decreases. Applying
asymptotic PTR$_{N}$ to \eqref{eq:DLP+subtraction},
we obtain
\begin{equation}
u\left(\mathbf{y}^{\ast} - \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right) \approx
\frac{1}{N} \sum_{j = 1}^{N} \left[ \tilde{K}( t_{j} -
t^{\ast};\epsilon ) + \frac{\kappa^{\ast}}{2} \right]
(\mu(t_{j})-\mu(t^\ast)) |\mathbf{y}'(t_{j})| +
f(\mathbf{y}(t^{\ast})) + \sum_{n = -N/2}^{N/2-1}
\hat{K}^{\text{in} \ast}[n] \hat{\mu}^\ast_y[n] e^{- \mathrm{i} n
t^\ast}
\label{eq:asymptoticDLP+subtraction}
\end{equation}
with
\begin{equation}
\hat{\mu}^\ast_y[n] = \frac{1}{2\pi} \int_{0}^{2\pi} (\mu(t)
-\mu(t^\ast)) | \mathbf{y}'(t) | e^{-\mathrm{i} n t} \mathrm{d}t.
\label{eq:DLP-muhatast}
\end{equation}
For the example problem discussed above, the $L_{\infty}$ error is
$7.13 \times 10^{-5}$ for PTR$_{128}$ applied to
\eqref{eq:DLP+subtraction}, and $2.54 \times 10^{-5}$ for asymptotic
PTR$_{128}$ given by
\eqref{eq:asymptoticDLP+subtraction}.
This additional improvement based on \eqref{eq:DLP+subtraction} is only valid for the
double-layer potential and does not generalize.
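
To illustrate the gain from this subtraction, the following Python
sketch (our own toy example on the unit circle, where
$|\mathbf{y}'(t)| = 1$ and the kernel reduces to
$\mathbf{n}_{y} \cdot (\mathbf{x} - \mathbf{y})/|\mathbf{x} -
\mathbf{y}|^{2}$, cf.~\eqref{kernel_identities}) compares the plain and
subtracted PTR$_{N}$ evaluations against an over-resolved reference.
\begin{verbatim}
import numpy as np

mu = lambda t: np.cos(t)              # smooth test density on the unit circle

def dlp_ptr(x, N, tstar=None):
    """PTR_N for (1/2pi) * int K(x,y) mu dsigma; if tstar is given, use the
    subtraction identity u = (1/2pi) int K (mu - mu*) dsigma - mu*."""
    t = 2 * np.pi * np.arange(1, N + 1) / N
    yy = np.stack([np.cos(t), np.sin(t)], axis=1)  # boundary = outward normals
    diff = x[None, :] - yy
    K = np.sum(yy * diff, axis=1) / np.sum(diff**2, axis=1)
    dens = mu(t) if tstar is None else mu(t) - mu(tstar)
    val = np.sum(K * dens) / N                     # |y'(t)| = 1 here
    return val if tstar is None else val - mu(tstar)

eps, tstar = 0.05, 0.3
x = (1 - eps) * np.array([np.cos(tstar), np.sin(tstar)])  # just inside

ref = dlp_ptr(x, 2**14)               # over-resolved reference value
print(abs(dlp_ptr(x, 128) - ref))         # plain PTR_128: large error
print(abs(dlp_ptr(x, 128, tstar) - ref))  # subtracted PTR_128: much smaller
\end{verbatim}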
\section{Single-layer potential for the exterior Neumann problem}
\label{sec:singlelayer}
We now consider the exterior Neumann problem,
\begin{subequations}
\begin{gather}
\Delta v = 0 \quad \text{in $\mathbb{R}^{2} \backslash
\bar{D}$}, \label{eq:4.1a}\\
\frac{\partial v}{\partial n} = g \quad \text{on $\partial
D$}, \label{eq:4.1b}
\end{gather}
\label{eq:4.1}
\end{subequations}
with $g$ denoting an analytic function satisfying
\begin{equation}
\int_{\partial D} g(\mathbf{y}) \mathrm{d}\sigma_{y} = 0.
\label{eq:4.2}
\end{equation}
We seek $v$ as the single-layer potential~\cite{kress1999linear},
\begin{equation}
v(\mathbf{x}) = \frac{1}{2\pi} \int_{\partial D}
S(\mathbf{x},\mathbf{y}) \varphi(\mathbf{y}) \mathrm{d}\sigma_{y},
\quad \mathbf{x} \in \mathbb{R}^{2} \backslash \bar{D},
\label{eq:SLP}
\end{equation}
with
\begin{equation}
S(\mathbf{x},\mathbf{y}) = - \log | \mathbf{x} - \mathbf{y} | .
\label{eq:SLP-kernel}
\end{equation}
The density, $\varphi(\mathbf{y})$, satisfies the boundary integral
equation,
\begin{equation}
- \frac{1}{2} \varphi(\mathbf{y}) + \frac{1}{2\pi} \int_{\partial D}
\frac{\partial S(\mathbf{y},\mathbf{y'}) }{\partial n_{y} }
\varphi(\mathbf{y}') \mathrm{d}\sigma_{y'} = g(\mathbf{y}), \quad
\mathbf{y} \in \partial D.
\label{eq:SLP-BIE}
\end{equation}
To study the close evaluation of \eqref{eq:SLP}, we now set
\begin{equation}
\mathbf{x} = \mathbf{y}^{\ast} + \frac{\epsilon}{|\kappa^{\ast}|}
\mathbf{n}_{y}^{\ast}.
\label{eq:SLP-target}
\end{equation}
Substituting \eqref{eq:SLP-target} into \eqref{eq:SLP-kernel}, we obtain
\begin{equation}
S\left(\mathbf{y}^{\ast} + \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast},
\mathbf{y} \right) = - \log \epsilon +
\log | \kappa^{\ast} | - \frac{1}{2} \log\left( | \kappa^{\ast}
( \mathbf{y}^{\ast} - \mathbf{y} )/\epsilon |^{2} + 2
\mathbf{n}_{y}^{\ast} \cdot | \kappa^{\ast} | ( \mathbf{y}^{\ast}
- \mathbf{y} )/\epsilon + 1 \right).
\label{eq:asympt-SLPkernel}
\end{equation}
Just as we have done for $K$ in \eqref{eq:asympt-kernel}, we have
written \eqref{eq:asympt-SLPkernel} to show the underlying dependence
on the stretched variable,
$\mathbf{y} = \mathbf{y}^{\ast} + \epsilon
\mathbf{Y}/|\kappa^{\ast}|$.
The outer expansion of \eqref{eq:asympt-SLPkernel} is
$S^{\text{out}} \sim - \log |\mathbf{y^\ast} -\mathbf{y}|$.
This outer expansion is singular. In contrast to the double-layer
potential, this outer expansion does not correspond to the kernel of
boundary integral equation \eqref{eq:SLP-BIE}. One could use a high
order quadrature rule that explicitly takes this
singularity into account~\cite{sidi1988quadrature, kress1991boundary}. However,
we choose not to use one here because we find in the numerical
examples below that our method significantly reduces the dominant error.
To compute the inner expansion, $S^{\text{in}}$, we introduce the
stretched parameter, $t = t^{\ast} + \epsilon T$ into the same
parameterization of the boundary used for the double-layer
potential. Making use of \eqref{eq:3.11} and \eqref{eq:3.13}, we find
by expanding as $\epsilon \to 0^{+}$ that
\begin{equation}
S^{\text{in}}(T;\epsilon) = \log | \kappa^{\ast} | - \frac{1}{2}
\log\left( \epsilon^{2} T^{2} | \gamma^{\ast} | + \epsilon^{2} +
O(\epsilon^{3}) \right).
\label{eq:4.7}
\end{equation}
Substituting $\epsilon^{2} T^{2} \sim 2 - 2 \cos( t - t^{\ast} )$, we
find that to leading order,
\begin{equation}
S^{\text{in}}(t - t^{\ast};\epsilon) \sim \log | \kappa^{\ast} | -
\frac{1}{2} \log\left( ( 2 | \gamma^{\ast} | + \epsilon^{2} ) - 2 |
\gamma^{\ast} | \cos( t - t^{\ast}) \right).
\label{eq:4.8}
\end{equation}
\subsection{Fourier coefficients of $S^{\text{in}}$}
Using the modified coefficients introduced in \eqref{eq:DLP-FPcoeffs},
we write \eqref{eq:4.8} as
\begin{equation}
S^{\text{in}}(t - t^{\ast};\epsilon) \sim \log |\kappa^{\ast}| -
\frac{1}{2} \log C_{0} - \frac{1}{2} \log[ 1 + C_{1} \cos(t -
t^{\ast}) ].
\label{eq:4.10}
\end{equation}
We now seek to compute
\begin{equation}
\hat{S}^{\text{in}}[n] = \delta_{n,0} \left[ \log |\kappa^{\ast}|
- \frac{1}{2} \log C_{0} \right] - \frac{1}{2\pi} \int_{0}^{2\pi}
\frac{1}{2} \log[ 1 + C_{1} \cos(t - t^{\ast}) ] e^{-\mathrm{i} n t}
\mathrm{d}t,
\label{eq:4.11}
\end{equation}
with $\delta_{n,0}$ denoting the Kronecker delta. To compute the
integral in \eqref{eq:4.11}, we start with
\begin{equation}
    \frac{\mathrm{d}}{\mathrm{d}t} \log[ 1 +
    C_{1} \cos(t - t^{\ast}) ] = -\frac{C_{1} \sin(t - t^{\ast})}{1 +
    C_{1} \cos(t - t^{\ast})}.
\label{eq:4.12}
\end{equation}
The right-hand side of \eqref{eq:4.12} is another example of a
rational trigonometric function studied by
Geer~\cite{geer1995rational}. It can be readily shown that
\begin{equation}
\frac{1}{2\pi} \int_{0}^{2\pi} \frac{\sin(t -
t^{\ast})}{1 + C_{1} \cos(t - t^{\ast})} e^{\mathrm{i} n t}
\mathrm{d}t = \text{sgn}(n) \frac{\mathrm{i}}{2} \frac{1 +
\rho^{2}}{1 - \rho^{2}} \left( \rho^{|n| - 1} - \rho^{|n| + 1}
\right),
\end{equation}
where $\rho = \left( \sqrt{1 - C_{1}^{2}} - 1 \right)/C_{1}$, so that $|\rho| < 1$. It follows
from term-by-term integration of the Fourier series with these
coefficients that
\begin{equation}
\hat{S}^{\text{in}}[n] = \begin{cases}
\displaystyle \log | \kappa^{\ast} | - \frac{1}{2} \log C_{0} -
\frac{C_{1}}{2} \frac{1 + \rho^{2}}{1 - \rho^{2}} \left(
\frac{1}{\rho} - \rho \right) \log( 1 - \rho) & n = 0,\\
\displaystyle \frac{C_{1}}{4 |n|} \frac{1 +
\rho^{2}}{1 - \rho^{2}} \left( \rho^{|n| - 1} - \rho^{|n| + 1}
\right) & n \neq 0.
\end{cases}
\label{eq:4.14}
\end{equation}
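
These coefficients can also be checked numerically. The short Python
sketch below (our own; the values $C_{0} = 2|\gamma^{\ast}| +
\epsilon^{2}$ and $C_{1} = -2|\gamma^{\ast}|/C_{0}$ are inferred here by
matching \eqref{eq:4.8} with \eqref{eq:4.10} and should be taken from
\eqref{eq:DLP-FPcoeffs}) tabulates $\hat{S}^{\text{in}}[n]$ directly by
applying the FFT to \eqref{eq:4.10} on a fine grid, which provides an
independent check of \eqref{eq:4.14}.
\begin{verbatim}
import numpy as np

kappa_star, gamma_star, eps = 1.2, 0.8, 1e-2   # placeholder sample values
C0 = 2 * abs(gamma_star) + eps**2              # assumed modified coefficients
C1 = -2 * abs(gamma_star) / C0                 # so C0*(1 + C1*cos) matches (4.8)

M = 2**16                                      # fine grid: slow coefficient decay
t = 2 * np.pi * np.arange(M) / M               # take t* = 0 for the check
S_in = np.log(abs(kappa_star)) - 0.5 * np.log(C0 * (1 + C1 * np.cos(t)))
S_hat = np.real(np.fft.fft(S_in)) / M          # ~ (1/2pi) int S_in e^{-int} dt
print(S_hat[:5])                               # compare against (4.14)
\end{verbatim}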
\subsection{Evaluating the single-layer potential}
Given the inner expansion computed above, our method for evaluating
the single-layer potential is to compute an approximation of
\begin{equation}
v\left(\mathbf{y}^{\ast} + \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right) = \frac{1}{2\pi}
\int_{0}^{2\pi} \tilde{S}(t - t^{\ast}) \varphi(t) | \mathbf{y}'(t) |
\mathrm{d}t + \frac{1}{2\pi} \int_{0}^{2\pi} S^{\text{in}}(t -
    t^{\ast};\epsilon) \varphi(t) | \mathbf{y}'(t) | \mathrm{d}t,
\label{eq:4.9}
\end{equation}
with $\tilde{S} = S - S^{\text{in}}$. We use
PTR$_{N}$ to evaluate the first integral with kernel,
$\tilde{S}$, and a truncated convolution sum to evaluate the second
integral with kernel, $S^{\text{in}}$,
\begin{equation}
v\left(\mathbf{y}^{\ast} + \frac{\epsilon}
{ |\kappa^{\ast}| }\mathbf{n}_{y}^{\ast} \right) \approx
\frac{1}{N} \sum_{j = 1}^{N} \tilde{S}(t_{j} - t^{\ast})
\varphi(t_{j}) | \mathbf{y}'(t_{j}) | + \sum_{n = -N/2}^{N/2-1}
\hat{S}^{\text{in}}[n] \hat{\varphi}_{y}[n] e^{-\mathrm{i} n t^\ast},
\label{eq:4.15}
\end{equation}
where we compute
\begin{equation}
\hat{\varphi}_{y}[n] = \frac{1}{2\pi} \int_{0}^{2\pi} \varphi(t) |
\mathbf{y}'(t) | e^{-\mathrm{i} n t} \mathrm{d}t,
\label{eq:4.16}
\end{equation}
using the Fast Fourier Transform. Just as with the double-layer
potential, provided that $N$ is chosen so that PTR$_{N}$ solves boundary
integral equation \eqref{eq:SLP-BIE} with sufficient accuracy, the
truncated convolution sum in \eqref{eq:4.15} is spectrally accurate.
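
To make \eqref{eq:4.15} concrete, consider the following Python sketch
(our own toy exterior problem on the unit circle, for which
$\kappa^{\ast} = 1$, $|\mathbf{y}'(t)| = 1$, and, with the convention
inferred from \eqref{eq:4.7}, $|\gamma^{\ast}| = 1$; the density
$\varphi(t) = \cos 3t$ yields the closed-form exterior solution $v =
\cos(3\theta)/(6 r^{3})$). For simplicity, the sketch computes the
Fourier coefficients of $S^{\text{in}}$ by an FFT on a refined grid
rather than from the closed form \eqref{eq:4.14}.
\begin{verbatim}
import numpy as np

phi = lambda t: np.cos(3 * t)                  # density on the unit circle
v_exact = lambda rr, th: np.cos(3 * th) / (6 * rr**3)

N, eps, tstar = 128, 1e-2, 0.7
t = 2 * np.pi * np.arange(N) / N
# Exact kernel S = -log|x - y| for x = y* + eps*n* (kappa* = 1 here).
S = -0.5 * np.log((1 + eps)**2 + 1 - 2 * (1 + eps) * np.cos(t - tstar))

v_ptr = np.mean(S * phi(t))                    # (1) plain PTR_N

# (2) asymptotic PTR_N: subtract the inner expansion (4.8) ...
S_in = lambda th: -0.5 * np.log((2 + eps**2) - 2 * np.cos(th))
v_smooth = np.mean((S - S_in(t - tstar)) * phi(t))

# ... and add back the S_in part as a truncated convolution sum.
M = 2**15                                      # refined grid for s_hat[n]
s_hat = np.real(np.fft.fft(S_in(2 * np.pi * np.arange(M) / M))) / M
n = np.fft.fftfreq(N, d=1.0 / N).astype(int)   # integer modes -N/2 .. N/2-1
phi_hat = np.fft.fft(phi(t)) / N               # coefficients of phi*|y'|
conv = np.real(np.sum(s_hat[n % M] * phi_hat * np.exp(1j * n * tstar)))

v_asym = v_smooth + conv
exact = v_exact(1 + eps, tstar)
print(abs(v_ptr - exact), abs(v_asym - exact))  # split evaluation: far smaller
\end{verbatim}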
\subsection{Numerical examples}\label{ssec:SLPresults}
We present results for the evaluation of the single-layer potential by
computing the harmonic function,
$v(\mathbf{x}) = ( \mathbf{x} - \mathbf{x}_{0} ) / | \mathbf{x} -
\mathbf{x}_{0} |^{2}$,
with $\mathbf{x}_0 = (0.1,0.4)$. We compute the solution exterior to
the boundary curve, $r(t) = 1.55 + 0.4 \cos 5 t$. The Neumann
data in \eqref{eq:4.1b} is determined by computing the normal
derivative of the harmonic function on the boundary. Boundary
integral equation \eqref{eq:SLP-BIE} is solved using
PTR$_{N}$ and we use the resulting density
$\varphi(t_{j})$ with $t_{j} = 2\pi j/N$ for $j = 1, \cdots, N$ in the
single-layer potential. We evaluate the single-layer potential using
two methods: (1) PTR$_{N}$ and (2) asymptotic
PTR$_{N}$, the new method given in \eqref{eq:4.15}.
We modify the body-fitted grid described above for the evaluation of
the double-layer potential to evaluate exterior points. The solution
is evaluated on a grid of $200$ equispaced points along each normal
starting at the boundary until we reach a distance $1/\kappa_{\max}$
from the boundary.
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\linewidth]{plot_SLP_v3.pdf}
\caption{[Left] Plot of the absolute error
($\log_{10}$-scale) in computing the single-layer potential
using PTR$_{128}$ for the boundary
$r(t) = {1.55 + 0.4 \cos 5 t }$ for the Neumann data,
$g(\mathbf{y}) = \frac{\partial v}{\partial \mathbf{n}}$ with
$v(\mathbf{x}) = ( \mathbf{x} - \mathbf{x}_{0} ) / | \mathbf{x} -
\mathbf{x}_{0} |^{2}$,
$\mathbf{x}_{0} = {(0.1, 0.4)}$. [Right] Plot of the absolute
error ($\log_{10}$-scale) in computing the
single-layer potential using asymptotic PTR$_{128}$ given in
    \eqref{eq:4.15} for the same problem. The ``$\times$'' symbols on
    the boundary indicate the points corresponding to $t^{\ast}=0$,
    $t^{\ast}=\pi/2$, and $t^{\ast}=\pi$. }
\label{fig:8}
\end{figure}
In {Fig.}~\ref{fig:8} we show the absolute error
($\log_{10}$-scale) in computing the single-layer potential using
the PTR$_{128}$ and using asymptotic
PTR$_{128}$. The single-layer potential kernel is
not as sharply peaked as the double-layer potential kernel, so the
error in evaluating the single-layer potential is less than the error
when evaluating the double-layer potential. Even so, we still observe
a boundary layer of thickness $O(1/N)$ in which the error is $O(1)$
due to aliasing when using PTR$_{N}$. Asymptotic
PTR$_{N}$ effectively reduces the error in the
boundary layer. To give an indication of this improvement, the
$L_{\infty}$ error is $0.113$ for PTR$_{128}$ and
$5.39 \times 10^{-5}$ for asymptotic PTR$_{128}$. In
Fig.~\ref{fig:9}, we plot the error in computing the single-layer
potential evaluated at the points indicated in Fig.~\ref{fig:8}
($t^{\ast} = 0$, $t^{\ast} = \pi/2$ and $t^{\ast} = \pi$) as a
function of $\epsilon$ with
$0 < \epsilon < \kappa(t^{\ast})/\kappa_{\max}$. These plots show
that the asymptotic method reduces the error by at least 3 orders of
magnitude for small $\epsilon$. We find similar results over all
values of $t^{\ast}$. For the case in which $t^{\ast} = \pi$, the
error of asymptotic PTR$_{N}$ becomes larger than
that for PTR$_{N}$ for $0.36 < \epsilon < 0.52$. For
this particular boundary curve, $\kappa_{\max}$ is attained at
$t^{\ast} = \pi$. Hence, the body-fitted grid at $t^{\ast} = \pi$
plots the single-layer potential over $0 < \epsilon < 1$. For this
range of $\epsilon$, we consider points outside the boundary layer
where PTR$_{N}$ is competitive with, and may even
become more accurate than, asymptotic PTR$_{N}$. In
fact, that is what is observed in Fig.~\ref{fig:9} for
$t^{\ast} = \pi$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.31\linewidth]{error_t1SLP_v3-eps-converted-to.pdf} \, \,
\includegraphics[width=0.31\linewidth]{error_t2SLP_v3-eps-converted-to.pdf} \, \,
\includegraphics[width=0.31\linewidth]{error_t3SLP_v3-eps-converted-to.pdf} \, \,
\caption{Plot of the absolute error as a function of $\epsilon$ made
in evaluating the single-layer potential for different $t^\ast$
using PTR$_{128}$ (solid curve) and using
asymptotic PTR$_{128}$ (dashed curve).}
\label{fig:9}
\end{figure}
\section{General implementation}
\label{sec:implementation}
In the results presented above, we are using a body-fitted grid, in
which evaluation points are found by moving along the
normal into the domain from boundary integration points. Often the
solution is needed at more generally defined points. Asymptotic
PTR$_{N}$ tacitly assumes in \eqref{eq:DLP-target}
for interior problems, or \eqref{eq:SLP-target} for exterior problems,
that $\mathbf{y}^{\ast}$ is the unique point on the boundary closest to
the evaluation point, $\mathbf{x}$. In the examples, the
closest point on the boundary, $\mathbf{y}^{\ast}$, coincides with a
PTR$_{N}$ grid point from which we extended in the
normal direction. We discuss here the more general problem.
Suppose we have an evaluation point in the domain, $\mathbf{x}$. Then
the first problem to address is whether $\mathbf{x}$ is, in fact,
close enough to the boundary to warrant special attention and the
use of asymptotic PTR$_{N}$. To solve this problem,
we make use of the identity for the double-layer
potential~\cite{kress1999linear},
\begin{equation}
\frac{1}{2\pi}\int_{\partial D} \frac{\mathbf{n}_y \cdot (\mathbf{x}
- \mathbf{y})}{|\mathbf{x} - \mathbf{y}|^2} \, \mathrm{d} \sigma_y
= \begin{cases}
-1 & \mathbf{x} \in D\\
-\frac{1}{2} & \mathbf{x} \in \partial D\\
\,\,\, 0 & \mathbf{x} \in \mathbb{R}^2 \setminus \overline{D}
\end{cases}.
\label{kernel_identities}
\end{equation}
Evaluating \eqref{kernel_identities} using PTR$_{N}$
suffers from the same aliasing problem that the more general layer
potential evaluations do. Thus, we use it to determine if $\mathbf{x}$
lies within the boundary layer. To do this, we set a user-defined
threshold for the error. If the error in evaluating
\eqref{kernel_identities} using PTR$_{N}$ is less
than this threshold value, we keep the result computed using
PTR$_{N}$. Otherwise, we use the appropriate
asymptotic approximation.
To use these asymptotic approximations, we must determine the
parameter, $t^{\ast}$, where
$t^{\ast} = \arg\min_{0 \le t < 2 \pi} | \mathbf{x} - \mathbf{y}(t)
|^{2}$.
For a general boundary, this problem may not have a unique
solution. In practice, we find a unique minimizer for evaluation
points that are identified to be in the boundary layer using
PTR$_{N}$ evaluation of \eqref{kernel_identities}.
Once $t^{\ast}$ is determined, all other quantities required for the
asymptotic approximations follow. Finally, we evaluate the solution of
the boundary value problem at hand at any point $\mathbf{x}$ using
either PTR$_{N}$ or asymptotic
PTR$_{N}$.
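
The following Python sketch (our own, again on a unit-circle toy
geometry) illustrates both ingredients of this procedure: the
PTR$_{N}$ evaluation of \eqref{kernel_identities} as a boundary-layer
detector, and a simple coarse-plus-refined search for $t^{\ast}$.
\begin{verbatim}
import numpy as np

N = 128
t = 2 * np.pi * np.arange(N) / N
y = np.stack([np.cos(t), np.sin(t)], axis=1)   # unit circle; |y'| = 1, n_y = y

def identity_ptr(x):
    """PTR_N value of (1/2pi) int n_y.(x-y)/|x-y|^2 dsigma (exactly -1 in D)."""
    diff = x[None, :] - y
    return np.mean(np.sum(y * diff, axis=1) / np.sum(diff**2, axis=1))

def t_star(x, refine=101):
    """arg min_t |x - y(t)|^2: coarse grid search, then a local sweep."""
    k = np.argmin(np.sum((x[None, :] - y)**2, axis=1))
    ts = t[k] + np.linspace(-np.pi / N, np.pi / N, refine)
    d2 = (x[0] - np.cos(ts))**2 + (x[1] - np.sin(ts))**2
    return ts[np.argmin(d2)]

threshold = 1e-8
for r in (0.5, 0.9, 0.99):                     # targets at increasing depth
    x = r * np.array([np.cos(0.3), np.sin(0.3)])
    err = abs(identity_ptr(x) + 1.0)
    if err > threshold:                        # inside the boundary layer
        print(r, err, "use asymptotic PTR, t* =", round(t_star(x), 4))
    else:
        print(r, err, "plain PTR is sufficient")
\end{verbatim}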
We present results of this generalized method for evaluation of the
double-layer potential and single-layer potential for the same
problems presented in Section \ref{ssec:DLPresults} and
\ref{ssec:SLPresults}, respectively. We use a threshold of
$1 \times 10^{-8}$, as described above, to determine when the
evaluation point is inside the boundary layer and asymptotic
PTR$_{N}$ method is to be used. In
{Fig.}~\ref{fig:10} we show the error in computing the double-layer
potential using PTR$_{256}$ and asymptotic
PTR$_{256}$ when solving on a Cartesian grid with
mesh size $h = 0.005$ within the boundary curve. Similarly, we present
the evaluation of the single-layer potential in
{Fig.}~\ref{fig:12}. Here, we are computing on a Cartesian grid with
mesh size $h = 0.005$ exterior to the boundary curve. These results
show, just as for the body-fitted grid,
that the error made by asymptotic PTR$_{N}$ is
several orders of magnitude smaller than that made by
PTR$_{N}$. However, there are more variations in
these errors because $\mathbf{y}^\ast$ does not always coincide with a
quadrature point. We choose to use 256 quadrature points here as this
is what is actually needed to solve the boundary integral equations
for the densities such that $\mu(t) | \mathbf{y}'(t) |$ and
$\varphi(t) | \mathbf{y}'(t)|$ are sufficiently resolved. We were able
to use fewer points for the body-fitted grid as this restriction is not
as strict when $\mathbf{y}^\ast$ is a quadrature point.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{DLP_regulargrid_v5-eps-converted-to.pdf}
\caption{[Left] Plot of the absolute error
($\log_{10}$-scale) in computing the double-layer
potential using PTR$_{256}$ for the boundary
$r(t) = {1.55 + 0.4 \cos 5 t }$ for the Dirichlet data,
$f(\mathbf{y}) = \frac{1}{2\pi} \log | \mathbf{y} - \mathbf{x}_{0}
|$
with $\mathbf{x}_{0} = {(1.85, 1.65)}$. We evaluate the solution
inside the domain on a regular grid. [Right] Plot of the absolute
error ($\log_{10}$-scale) in computing the
    double-layer potential using asymptotic
    PTR$_{256}$ given in
    \eqref{eq:DLP-asymptotic} for the same problem. The asymptotic
method is used in a boundary layer determined by a threshold on
the error from evaluating \eqref{kernel_identities}.}
\label{fig:10}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{SLP_regular_v5-eps-converted-to.pdf}
\caption{[Left] Plot of the absolute error
($\log_{10}$-scale) in computing the single-layer
potential using PTR$_{256}$ for the boundary
$r(t) = {1.55 + 0.4 \cos 5 t }$ for the Neumann data,
$g(\mathbf{y}) = \frac{\partial v}{\partial \mathbf{n}}$ with
$v(\mathbf{x}) = ( \mathbf{x} - \mathbf{x}_{0} ) / | \mathbf{x} -
\mathbf{x}_{0} |^{2}$,
    $\mathbf{x}_{0} = {(0.1, 0.4)}$. [Right] Plot of
    the absolute error ($\log_{10}$-scale) in
    computing the single-layer potential using the asymptotic
    PTR$_{256}$ given in \eqref{eq:4.15} for the same
problem. The asymptotic method is used in a boundary layer
determined by a threshold on the error from evaluating
\eqref{kernel_identities}.}
\label{fig:12}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have presented a new method to address the close evaluation
problem. When solving boundary value problems using boundary
integral equation methods, the solution is evaluated at desired
points within the domain by numerically evaluating layer potentials.
Using the same quadrature rule that is used to solve the integral
equation for this evaluation achieves high order accuracy everywhere
in the domain, except close to the boundary where an $O(1)$ error is
incurred. The new method developed here takes advantage of the
knowledge of the sharply peaked kernel of layer potentials close to
the boundary to reduce this error by several orders of magnitude. We
have used asymptotic methods to analyze the kernels of the double- and
single-layer potentials to relieve the numerical method from having to
integrate over this sharply peaked kernel. The resulting method is
straightforward to implement. We have presented results for both the
interior Dirichlet problem and exterior Neumann problem for Laplace's
equation and show a reduction in error of four to five orders of
magnitude in the solution evaluation close to the
boundary. Furthermore, we have presented how to generalize this method
to solve within the whole domain, including how to determine when to
use the new asymptotic method.
This asymptotic method has been recently applied to acoustic
scattering problems by sound-soft obstacles~\cite{carvalho2017}. For
those problems, the sharp peaks in the kernels for the single- and
double-layer potentials, both of which are needed, have the same
character as those for Laplace's equation discussed here. Hence, only
small modifications are needed to apply these methods to wave
propagation problems. We are currently extending this asymptotic
method to three-dimensional problems. Furthermore, we are working on
different applications which include extending the method presented
here to the Stokes equations and studying surface plasmons.
\section*{Acknowledgments}
The authors thank Fran\c{c}ois Blanchette and Boaz Ilan for their
thoughtful discussions leading up to this work. S. Khatri was
supported in part by the National Science
Foundation (PHY-1505061). A. D. Kim acknowledges support by Air
Force Office of Scientific Research (FA9550-17-1-0238) and the
National Science Foundation.
Observation of the light diffused from a scattering sample is a widely
used, well-known tool applied to the study of biophysical systems,
colloidal suspensions, and complex fluids
\cite{scheffold2007}. Homodyne dynamic light scattering and heterodyne
detection are based on interference and, in particular, on the
observation of speckle patterns. In this case, a long coherence is
needed because the light path difference of the interfering beams must
not exceed the coherence length. On the other hand, the disappearance
of interference beyond the coherence length may be exploited to select a
well-defined slab from a thick sample, for example in optical coherence
tomography \cite{huang1991}.
A single-mode laser beam has a very long longitudinal coherence and is
transversally coherent across its section. In a beam with a short
longitudinal coherence, the regions in which the electric field is
correlated are slabs perpendicular to the beam direction. Using a
dispersing optical element, it is possible to manipulate such a beam
so that the coherent slabs are skewed by an angle $\sigma$ with
respect to the plane perpendicular to the beam direction
\cite{picozzi2002}. This effect has been extensively studied in the
context of ultra-short optical pulses \cite{martinez1986, porras2003},
and it can be utilized to achieve more efficient nonlinear pulse
generation (by increasing the phase-matching condition)
\cite{martinez1989, szabo1990}, or to avoid some linear side effects
(such as group velocity mismatch and walk-off) \cite{ditrapani1998}.
In this paper, we describe the scattering of a skewed coherence beam
by a random medium. In the near-field images of the sample, we observe
that a skewed coherence beam gives rise to a speckle pattern that is not
visible using short-coherence illumination, although the coherence
length is identical. Thanks to the visibility of the speckle pattern,
our experimental setup with the skewed coherence beam can be used as a
so-called Scattering In Near Field (SINF) device \cite{brogioli2008,
croccolo2011} that operates in a heterodyne detection configuration.
Accordingly, we are able to obtain heterodyne light scattering
measurements despite the short coherence of the illumination.
\section{Experimental setup}
\begin{figure}
\includegraphics{setup.eps}
\caption{
\label{fig:setup}
A schematic of the experimental setup. (a), generation of
the skewed beam. The coherent slabs of the original beam are
perpendicular to the propagation direction. In the first order
diffracted beam the slabs are skewed with respect to the propagation
direction. (b), the sample (colloidal suspension) and collection
optics. Typical near-field images obtained with various beams
are shown: a laser beam (c), a short coherence beam (d), and
a skewed coherence beam with skew angle of $\sigma=47^{\circ}$ (e).
The scattering medium is a water suspension of 80~nm
polystyrene nanospheres. }
\end{figure}
Our short-coherence light source is a laser diode
driven below threshold (Sacher Lasertechnik SAL-0660-025).
The emission has a maximum at a wavelength of 660~nm, with an
8~nm FWHM bandwidth. The regions of the emitted beam in
which the field is coherent consist of thin slabs that are orthogonal
to the direction of propagation and whose thickness equals the
longitudinal coherence length (i.e. about 17~$\mu$m).
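This value is consistent with estimating the coherence length as
$l_{c} \simeq \lambda^{2}/(\pi\,\Delta\lambda) =
(660~\mathrm{nm})^{2}/(\pi \times 8~\mathrm{nm}) \approx 17~\mu$m; we
note that the exact numerical prefactor in such estimates depends on
the assumed spectral line shape, so this is only an order-of-magnitude
check.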
To obtain the desired skewed coherence, the beam is
diffracted by a reflective grating \cite{porras2003, picozzi2002},
with 600 lines per millimeter, blazed at $17.5^{\circ}$, and the first
order diffracted beam is selected (see Fig.~\ref{fig:setup}a). In the
resultant beam, the regions in which the field is coherent are slabs
that are skewed (inclined) by an angle $\sigma$ with respect to a direction
perpendicular to the propagation direction \cite{porras2003,
picozzi2002}. In our case, the skew angle $\sigma$ can be adjusted
from about $\sigma=20^{\circ}$ to $\sigma=50^{\circ}$ by changing the
incidence angle of the beam with respect to the grating.
As shown in Fig.~\ref{fig:setup}b,
the skewed beam passes through a 1~mm-thick cell containing
the scattering sample (e.g. a colloidal suspension).
The intensity of the light in a plane close to
the exit face of the cell is imaged by a CCD camera (Luca Andor) through
a high numerical aperture microscope objective (MO; Nikon CFI Plan
Apochromat 63X NA 1.4), which collects both the transmitted and
scattered beams (heterodyne detection).
\section{Experimental results}
The images obtained by illuminating the sample with a laser beam
(Fig.~\ref{fig:setup}c) clearly show the usual near field speckle patterns
\cite{goodman_speckles, giglio2001},
with a strong contrast, due to the random interference of the
scattered beams. When a non-skewed
beam (i.e. $\sigma=0$) with a short temporal coherence is used
(Fig.~\ref{fig:setup}d), the speckles are almost invisible
and appear extremely smeared because no interference can occur.
Quite surprisingly, the images obtained with the skewed beams
(i.e. $\sigma>0$) do show speckles (Fig.~\ref{fig:setup}e), although
their texture is different from that which is observed for laser light.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a), $\sigma = 0^{\circ}$ & (b), $\sigma =22^{\circ}$\\
\includegraphics{spectrum0.eps} &
\includegraphics{spectrum22.eps}
\\
(c), $\sigma = 34^{\circ}$ & (d),
$\sigma = 47^{\circ}$\\
\includegraphics{spectrum34.eps} &
\includegraphics{spectrum47.eps}\\
\multicolumn{2}{c}{(e)
\includegraphics{cone.eps}
}
\end{tabular}
\caption{
\label{fig:spectra}
Power spectra. (a), (b), (c), (d),
Two-dimensional false-color power spectra of the near-field speckle
patterns of the images obtained with an impinging beam with a short
coherence, at various skew angles $\sigma$. (a), the spectrum of
the image in Fig.~\ref{fig:setup}d, using a non-skewed beam. (b),
(c), spectra of the images at a skew angle $\sigma=22^{\circ}$ and
$\sigma=34^{\circ}$. (d), the spectrum of the image in
Fig.~\ref{fig:setup}e, using a skew angle $\sigma=47^{\circ}$. The
scattering medium is a water suspension of 80~nm polystyrene
particles. (e), A three dimensional schematic of the
relationship among the wave vector $\vect{K_I}$ of the impinging beam,
the wave vectors $\vect{K_S}$ of the scattered beams, and the wave
vectors $\vect{q}=(q_x,q_y)$ of the interference fringes in the image
plane. The most intense points of the power spectrum are generated by
light scattered along a cone, whose axis (green dashed line) is
perpendicular to the coherent slabs. }
\end{center}
\end{figure}
To characterize the speckle fields, we evaluated the power
spectra of the images obtained with beams skewed at various angles;
the results are reported in Fig.~\ref{fig:spectra}.
The power spectrum at $\sigma=0$ exhibits a weak signal for the wave
vectors $\vect{q}$ close to the center of the Fourier-space image:
this corresponds to the faint speckles.
As the skew angle $\sigma$ is increased, higher frequency modes
appear in the images; in the power spectra, they appear as ellipses.
The size of the ellipse in the power spectrum increases as the skew
angle $\sigma$ is increased.
These images can be interpreted as holograms of the scattered field.
Each Fourier mode of the near-field image (with wave vector $\vect{q}$)
is generated by the interference between the most intense transmitted
beam (with wave vector $\vect{K_I}$) and a single scattered beam (with
wavevector $\vect{K_S}$). The relationship among $\vect{K_I}$,
$\vect{K_S}$ and $\vect{q}$ is shown graphically in
Fig.~\ref{fig:spectra}e; the wave vector $\vect{q}$ is the projection
of the transferred wave vector $\vect{Q}=\vect{K_S}-\vect{K_I}$ in the
image plane \cite{brogioli2009bis}. The SINF technique
\cite{brogioli2008, brogioli2009, brogioli2009bis, cerbino2008} exploits
this relationship to measure the scattering intensity by acquiring near-field
images. In this case, the following observations can be made:
{\em i}) the observed ellipses correspond to scattering along a cone;
{\em ii}) the impinging beam direction is a generatrix of the cone;
{\em iii}) the axis of the cone is orthogonal to the coherent slabs.
In reality, the
sample scatters at all angles, but only the scattering along the
cone beats coherently with the local oscillator to provide a
detectable heterodyne signal. Thus it will be called the ``scattering
detection cone'' (SDC).
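
This geometric relationship can be reproduced numerically. In the
following Python sketch (our own illustration, not part of the data
analysis), the cone axis is tilted by $\sigma$ from the impinging wave
vector, $\vect{K_S}$ runs along the cone with $|\vect{K_S}| =
|\vect{K_I}|$ so that the beam is a generatrix, and $\vect{q}$ is the
transverse part of $\vect{K_S} - \vect{K_I}$; the locus of $\vect{q}$
is the ellipse observed in the power spectra, and, consistent with
observations {\em i})--{\em iii}), the forward direction
($\vect{q} = 0$) lies on it.
\begin{verbatim}
import numpy as np

lam = 660e-9                                   # source wavelength (m)
k = 2 * np.pi / lam
sigma = np.deg2rad(47)                         # skew angle of the coherent slabs

# Cone axis: normal to the slabs, tilted by sigma from the beam direction z.
axis = np.array([np.sin(sigma), 0.0, np.cos(sigma)])
e1 = np.array([np.cos(sigma), 0.0, -np.sin(sigma)])  # orthonormal frame
e2 = np.array([0.0, 1.0, 0.0])                       # completing the axis

# K_S on the cone of half-angle sigma about the axis, |K_S| = |K_I| = k.
# At phi = pi, K_S coincides with K_I: the beam is a generatrix of the cone.
phi = np.linspace(0.0, 2 * np.pi, 361)
KS = k * (np.cos(sigma) * axis
          + np.sin(sigma) * (np.cos(phi)[:, None] * e1
                             + np.sin(phi)[:, None] * e2))

KI = k * np.array([0.0, 0.0, 1.0])             # impinging beam along z
q = (KS - KI)[:, :2]                           # projection on the image plane
print(q[:, 0].max() - q[:, 0].min(),           # ellipse extents (rad/m):
      q[:, 1].max() - q[:, 1].min())           # k*sin(2*sigma) by 2*k*sin(sigma)
\end{verbatim}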
\begin{figure}
\begin{center}
\includegraphics{multiple_scattering.eps}
\caption{
\label{fig:scattering}
Measured scattering intensity. The sample consists of 400~nm polystyrene
particles in water at various volume fractions $\phi$. The data were obtained
using a laser beam (a) or a skewed $\sigma=47^{\circ}$
beam (b). The continuous line represents the Mie theory prediction
for 400~nm dielectric spheres.
}
\end{center}
\end{figure}
Following the idea of SINF, we can recover the scattering intensities
from the power spectra. The results for both a laser beam and a skewed
coherence beam are shown in Fig.~\ref{fig:scattering}. In both cases,
at a low volume fraction, the measured
scattering intensity closely follows the Mie theory
\cite{vandehulst}. This shows that the detected light, scattered along
the SDC, has the same direction and intensity as the light scattered
by the particles, and the SDC acts only as a mask. By skewing the
coherence, we can perform heterodyne detection, despite the
short coherence of our light source.
When using laser light, we observe a progressive departure from
the Mie theory for large scattering wave
vectors \cite{vandehulst}, which can easily be interpreted as due to
multiple scattering. In contrast, the use of the skewed beam
(Fig.~\ref{fig:scattering}b) results in an almost perfect overlap
between the data and the theoretical results for single scattering,
independent of the nanoparticle concentration. This experiment shows that the use
of skewed coherence beams in a heterodyne scattering detection setup
suppresses the detection of multiple scattered light.
\section{Discussion}
Figure~\ref{fig:explanation} presents a schematic explanation of the
obtained results. A coherent slab impinges on a set of particles,
which emit secondary waves. Interference only occurs where
there is overlap between the coherent slab and
the parts of the secondary waves that are coherent with it. In the
section shown, overlap only occurs at two points, which define
the two directions (indicated by arrows) along which the scattered
waves interfere with the impinging beam. One is always along the forward
direction, and the other is at a scattering angle of $2\sigma$. This
direction is identical for every particle. In three dimensions, the
intersection of the coherent slab with the secondary spherical wave
occurs along a circle, which defines a scattering cone with
properties identical to those of the SDC.
\begin{figure}
\begin{center}
\includegraphics{explanation.eps}
\end{center}
\caption{
\label{fig:explanation}
A qualitative interpretation of the scattering mechanism in the presence
of a skewed beam.
The impinging beam propagates from left to right
in the direction shown by the arrow. The skewed coherent slab is shown
as a yellow stripe. When it hits the particles (blue dots),
secondary spherical waves are generated. The parts of the
secondary wave that are coherent with the slab are the spherical shells
(orange rings). Heterodyne interference is only observed where
the coherent slab and the coherent spherical shells overlap.}
\end{figure}
From this simplified picture of the interference generated by
a skewed beam, one can also infer why multiple scattering is not detected
in this configuration. In fact, light that has been scattered more than
once will arrive at
the screen plane with a time shift greater than the beam's temporal
coherence, thereby preventing interference.
When we take into account the thickness of the coherent slabs, we
notice that the secondary waves indeed interfere with the impinging
beam for an interval of directions around $\vartheta=2\sigma$, whose
extent $\delta \vartheta$ increases as the distance from the coherent
slab of the impinging beam to the emitting particle decreases. In
particular, if we analyze a plane inside the sample, the particles
whose distance from the detection plane is less than the coherence
length can give a heterodyne signal at every angle. In other words,
only a thin volume around the detection plane contributes to the
scattering signal at a generic angle, but the whole volume of the
sample contributes to the heterodyne signal along the SDC. This
explains why the heterodyne signal along the SDC is much higher than
along other directions.
In general, the manipulation of the coherent slab allows one to select the
volume of the sample which is detected: in our experiments, we are
interested in extending this volume up to the whole sample; on the
other hand, the so-called ``optical coherence tomography''
\cite{huang1991} exploits the short coherence for selecting a
well-defined slice of the sample (this is the meaning of ``tomography'',
namely ``drawing the image of a slice''). This view makes evident the
deep analogy between the rejection of the multiple scattering in our
experiment and the rejection of the slices other than the selected one
in optical coherence tomography.
\section{Applications}
Currently, the skew angle of a beam can be measured only
using non-linear optics, and only in the case of high-energy
ultra-short pulses \cite{tipa}. However, the phenomenon described here
enables the development of a much simpler technique based on observing
the SDC in the power spectra; the axis of the SDC directly provides the normal to
the coherence slabs. Such a technique would also be effective for
faint beams, and for both continuous waves and ultra-short pulses.
Furthermore, multiple scattering is a serious issue affecting
scattering measurements, and several techniques have been developed to
suppress the contributions of multiple scattering
(e.g. ref.~\cite{moitzi2009, pusey1999} and references therein).
This experiment shows that the use of skewed coherence beams in
a heterodyne scattering detection setup suppresses multiple
scattering and represents the most natural and effective way
to reach this target.
In the case of X-rays \cite{sutton2008, sutton1991, mochrie1997,
cerbino2008bis, nugent2010} the coherence conditions necessary for
obtaining interference can be achieved by synchrotron radiation or
FEL. However, this only occurs after filtering through a
monochromator, which strongly reduces the photon flux. Our findings
suggest that skewing the coherence of a short-coherence X-ray beam can
enable heterodyne detection.
The actual optical scheme will depend on the application. For example,
the main beam of a small-angle X-ray scattering (SAXS) apparatus can
be used. In this case, the angular dispersion of the beam is much less
than 1/1000 of a radian, which ensures a transversal coherence length
of the order of a micron, while the longitudinal coherence length is of the
order of a few nanometers. In order to obtain the skewed-coherence
beam, a transversally coherent region of the main beam is selected by
means of a pin-hole, the resulting beam is sent through a transmission
grating with a line spacing of the order of 100~nm, and the
first-order diffracted beam is selected by a second pin-hole. The
resulting beam has a skew angle of the order of milliradians, and
the corresponding SDC has an aperture of the same order, which is in
the range usually detected with SAXS. The SINF detection scheme can
still be applied with a variant with respect to the above-described
method. Instead of taking an image of a plane, we will place an
intensity mask (a transmission grating) on the same plane, with the
wave vector we want to measure. The transmitted intensity will
represent the amplitude of the corresponding Fourier mode of the image,
and will represent the heterodyne signal.
\section{Conclusions}
We have shown that the use of a skewed-coherence beam makes it possible
to detect a heterodyne signal even with short-coherence light. We show that it
is possible to skew the coherence of a short-coherence beam by means
of an optical system including a diffraction grating. However, the
skewing optical system does not increase the coherence length, nor does it
act as a narrow-bandwidth filter. The detection is possible only for
light scattered along the SDC, while the other scattering directions
give a negligible heterodyne signal. The axis of the SDC is
perpendicular to the coherent slabs, and thus the detection of the SDC
represents a very effective method for measuring the coherence
skewness of either a continuous wave or a pulsed beam. When applied
to quite turbid samples, the technique has the remarkable advantage of
suppressing the multiple scattering contribution of the scattering
signal. We suggest that the phenomenon presented here can be used as a
means to perform heterodyne scattering measurements with any
short-coherence radiation, and the application of the technique to
X-rays has been discussed.
\section*{Bibliography}
The propagation of very low frequency (VLF) radio signals within the
Earth-Ionosphere waveguide is strongly affected by changes of the solar
UV, e. g., during the day/night slow variation or the rapid changes
during a solar flare.
The upper boundary of this waveguide is the D-region of the ionosphere,
which starts at $\sim$ 70-80 km above the surface of the Earth, and is
formed by the Lyman-$\alpha$ radiation from the Sun, which ionizes
molecular nitrogen and oxygen, nitric oxide, and various atoms such
as sodium and calcium \citep{Nicolet60}.
During a solar eclipse, the UV solar
flux decreases and consequently, the rate of ionization in the D-region is
strongly reduced, causing an elevation of the
effective height of the reflecting ionospheric layer.
Thus the conditions in the lower ionosphere approach those observed during night-time~\citep{MendesDaCosta:1995, Tereshchenko:2015},
making solar eclipses very helpful natural experiments for studying the ionization dynamics of the lower ionosphere.
The effects of solar eclipses on the amplitude and phase of VLF signals, and the consequent estimation of the ionospheric reflection height at the times of maximum phase of the eclipse, have been investigated by several authors
using different techniques \citep[e.g.][]{MendesDaCosta:1995, Clilverd:2001, Guha:2012, Kaufmann:1968, De:2010, De:1997}. For instance,
using the Long Wave Propagation Capability waveguide code (LWPC) to calculate the change of the ionospheric reflection height
from its
unperturbed value of $H'=71$ km,
\citet{Clilverd:2001} found
that the
maximum effects during the August 11, 1999 eclipse occurred when
the effective height parameter $H'$ was 79 km, on a GCP of 1245 km at a steep angle with respect to the totality path of the eclipse.
Also, these effects were investigated
through the measurements of VLF sferics by \citet{Guha:2012} who
calculated increases of the reflection height of $\sim 4.85$ km on a
GCP $>$ 10,000 km for the July 22, 2009 eclipse;
and
$\sim 5.14$ km on a GCP $<$ 10,000 km for the January 15, 2010 eclipse.
By comparing the eclipse with the day/night phase changes,
\cite{MendesDaCosta:1995} estimated the maximum phase retardation
as 43$\%$ of the total diurnal average phase change, which
corresponds to a rise of the VLF effective reflection height of 6.18 km on a GCP of 2820 km.
Furthermore, via a quantitative relationship between the phase delay and the reflection height change, \cite{De:2010}
estimated a rise of the VLF reflection height of around 3.75 km on a GCP of 5761 km
during the August 1, 2008 eclipse.
Recently,
\cite{2019JGRA..124..616V} estimated a
reflection height increase of 3 km on a GCP of 4800 km for the
July 22, 2009 eclipse; and also \cite{2012P&SS...73..310P} reported a
3 km increase on a GCP of 5700 km for the January 15, 2010 eclipse.
Besides ground observations, rocket measurements of the total electron
density have been performed during solar eclipses,
reporting effective VLF reflection height increases of $\sim 9$ and
$\sim 8$ km
during the eclipses observed on
November 12, 1966 in Brazil and March 7, 1970 in Virginia, respectively \citep{Clilverd:2001}.
It is worth noting that during a total solar eclipse the magnitude of
VLF signal amplitude decrease depends on GCP length and its
orientation with totality path, time of eclipse, and the eclipse
magnitude \citep{2019JGRA..124..616V}. Therefore one can expect slightly different decreases in signal amplitude for propagation paths of similar lengths.
In this paper we present the VLF observation of the ``Great American''
eclipse of August 21, 2017 performed with the LAVNet-Mex receiver station
\citep{Borgazzi:2014} in Mexico City, Mexico (Sec. \ref{sec:exp}).
We studied the eclipse effects on the phase and amplitude of the NDK transmitter
signal (25.2 kHz, from La Moure, ND, USA) over the corresponding path length of
3007.15 km (Sec. \ref{sec:obs}).
deviation model that includes eclipse effects, such as shortening of
the propagation path between transmitter and receiver, and increases
in the effective reflection height of the ionosphere (Sec. \ref{sec:model}).
We also have modeled the effects of a C3 flare occurred at the time of
the maximum occultation (Sec. \ref{sec:flare}). Finally our summary is
in Sec. \ref{sec:summary}.
\section{Experimental setup} \label{sec:exp}
For this study,
we used the measurements of phase and amplitude of the VLF waves detected with the Latin American Very Low Frequency Network at Mexico (LAVNet-Mex) receiver station, which
operates at the frequency range of 10-48 kHz and is located at the
Geophysics Institute of the National Autonomous University of Mexico
(UNAM), at 99$^{\circ}$ 11$'$ W, 19$^{\circ}$ 20$'$ N,
\citep{Borgazzi:2014}. LAVNet-Mex is formed by two loop-type antennas;
each one has a very low noise preamplifier in differential input
configuration, and uses a commercial high quality sound card as a
digitizer. The system bandwidth is 40 kHz centered at 30 kHz, the
voltage gain is 51.88 dB and the common-mode rejection ratio is 74.83
dB. The two wire-loop antennas (N-S and E-W configuration) have 100
turns, mounted on aluminum square frame with 1.8 m per side, which
gives around 324 m$^{2}$ of effective area, \citep{Borgazzi:2014}. We
achieved successful phase measurements with the use of a compact GPS
receiver through a one pulse per second (PPS) signal. Then, the sound card's crystal-clock signal is locked to the GPS internal clock (PPS). The resulting signal phase has a precision of less than $\sim 1^{\circ}$ \citep{Raulin:2010}.
\section{The Great American Solar Eclipse} \label{sec:eclipse}
The Great American solar eclipse (August 21, 2017) began over the
North Pacific Ocean; the inland totality first occurred at $\sim$ 17:17
UT in Oregon; the last contact with the mainland occurred at around
18:48 UT in South Carolina; and finally, the eclipse ended over the North Atlantic Ocean.
The maximum duration of totality was 2m40s. This eclipse provided us with a rare opportunity to study the effects on the propagation of VLF waves within the Earth-ionosphere waveguide because the path of totality crossed the propagation path between the transmitter and the receiver.
The path and locations of the NDK transmitter and the LAVNet-Mex receiver (RX) with respect to the path of totality on August 21, 2017, are shown in Figure \ref{fig:Figure 1}. The great circle path between the transmitter and the receiver is 3007.15 km long. The continuous line represents the center of the shadow, while the dashed lines represent the lower and the upper limits of the totality shadow. Point A (40$^{\circ}$ 54$'$ N, 98$^{\circ}$ 27$'$ W) represents the location of the maximum eclipse on our propagation path,
at 18:00 UT.
We note that NDK-RX path crosses the path of totality almost perpendicularly. At the receiver site
the maximum obscuration was $\sim 27\%$ at around 18:20 UT, while at the transmitter site NDK (46$^{\circ}$ 22$'$ N, 98$^{\circ}$ 20$'$ W) the obscuration was $\sim 83\%$ at around 17:57 UT.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{figure1_small.png}
\caption[f1]{Map of the path of the August 21, 2017 eclipse. The
continuous line represent the center of the totality, whereas the lower
and upper limits are marked with dashed lines. The great circle
from the transmitter (NDK) to the receiver (RX) in Mexico City,
Mexico is marked with the thick full line. The point A on the map
represents the location of the maximum of the eclipse on our propagation path, the maximum phase of the eclipse at point A occurred at 18:00 UT.}
\label{fig:Figure 1}
\end{center}
\end{figure}
\section{VLF Observations}\label{sec:obs}
In Figure \ref{fig:Figure 2} we show the phase (a) and the amplitude
(b) variation of the VLF signals propagating along NDK-RX path, from
10:00 UT on August 21, 2017 to 10:00 UT next day, for the N-S antenna. We marked sunrise (SR) and sunset (SS) according to the diurnal pattern where the phase increases/decreases abruptly,
at $\sim$ 12:00 UT and $\sim$ 01:30 UT the next day,
respectively. The night-time value of phase is $\sim 0^{\circ}$, while
the unperturbed daytime value is $\sim 260^{\circ}$.
The amplitude shows
similar patterns of sunrise/sunset with an unperturbed daytime value of
$\sim 8$ dB.
The vertical dashed lines indicate the beginning
(16:34 UT), the maximum (18:00 UT) and the end (19:26 UT) of the
eclipse along our propagation path.
The bottom panel in Figure \ref{fig:Figure 2} represents a rough
schematic of the diurnal phase change. We are interested in the ratio
of the phase change due to the solar eclipse ($\Delta \Phi_E$) and the
phase change between the night and day during the sunrise ($\Delta
\Phi_{SR}$) and sunset ($\Delta \Phi_{SS}$). The maximum phase change
during the eclipse accounted for $\sim$25$\%$ of the total diurnal
average phase change. Assuming a total diurnal change (at the
reference height) of $\sim$ 20-30 km and an upper limit of the
daytime D-region of 90 km of altitude, the eclipse phase change represents a rise
in the effective reflection height of the ionosphere of $\sim$ 5-8 km.
In Table \ref{tab:Table 1} we present the measured phase change, the ratios of eclipse with respect to day-night phase change and the estimated change in the effective reflection height of the ionosphere.
In panels (c) and (d) of Figure \ref{fig:Figure 2}, we present a
detailed view of the phase and the amplitude changes during the time
of the solar eclipse (16:00 - 20:00 UT).
For the sake of simplicity,
we shifted to zero the values of the phase and the amplitude during unperturbed times.
The phase reaches a minimum of -63.36$^{\circ}$ at 18:05 UT.
We note that
at the time of the eclipse
a C3.0 solar flare occurred, with a maximum at 17:57 UT (in X-ray flux). The flare was powerful enough to shift the phase towards positive values, thus distorting the eclipse pattern.
In this way,
the real minimum should have occurred a few minutes prior to the measured one, at 18:00 UT, which is the time of totality on our propagation path (point A in Figure \ref{fig:Figure 1}).
The change in amplitude reaches a minimum of -4.90 dB at 17:47 UT,
however the effect of the C3.0 flare on amplitude is more profound
than the case of the phase.
Differences between amplitude and phase time-behavior are regularly
seen at sunrise and sunset \citep{wait1968mode, 1983ZaMM...63..281L,davies65} as clearly seen at 11:30 UT (sunrise) and at 03:30 UT (sunset) in panels (a) and (b) of Figure 2. These are due to the different response of the phase delay and amplitude to the modal conversions due to the sudden changes of the height and surface characteristics of the waveguide \citep{wait1968mode}.
Similar differences are seen during the eclipse (panels (c) and (d) of
Figure 2). The phase delay is more sensitive than the amplitude to the
small changes of the Sun illumination before the second contact.
As the quiescent D-layer uplifts at the beginning of the eclipse, the phase velocity of the first-order mode decreases and so does the phase shift. The change of the phase-shift slope may occur when the waveguide reaches an altitude high enough to allow a significant second-order mode to propagate along with the first-order mode, causing a further reduction of the phase velocity.
\begin{figure
\begin{center}
\includegraphics[scale=0.25]{figure2and3_merged2.png}
\caption[f2]{
Phase (a) and amplitude (b) variations of the VLF signal along
NDK-RX path, from 10:00 UT on August 21, 2017 to 10:00 UT next day,
of the N-S antenna. We marked the sunrise (SR) and the sunset (SS)
where the phase changes abruptly, according to the diurnal pattern.
A detailed view of the phase (c) and the amplitude (d) at the time of the eclipse (16:00 to 20:00 UT). The vertical dashed lines indicate the beginning (16:34 UT), the maximum (18:00 UT) and the end (19:26 UT) of the eclipse along our propagation path. The maximum of the C3.0 solar flare, at 17:57 UT is marked.
The bottom panel schematically represents the diurnal phase change. $\Delta \Phi_E$ represents a phase change due to the eclipse, while $\Delta \Phi_{SR}$ and $\Delta \Phi_{SS}$ represent a phase change between the night and the day, one at sunrise time (SR) and one at sunset time (SS). }
\label{fig:Figure 2}
\end{center}
\end{figure}
\begin{table
\begin{center}
\begin{tabular}{c|c|c|c }
\hline
& \textbf{$\Delta \Phi$($^{\circ}$)} & \textbf{Ratio} & \textbf{$\Delta$z(km)}\\
\textbf{Sunrise (SR)} & 300.15 $\pm$ 4.21 & 0.24 $\pm$ 0.01 & (4.8-7.2) $\pm$ 0.1\\
\textbf{Sunset (SS)} & 273.40 $\pm$ 3.21 & 0.27 $\pm$ 0.01 & (5.4-8.1) $\pm$ 0.1\\
\textbf{Eclipse (E)} & 72.84 $\pm$ 2.60 & & \\
\hline
\end{tabular}
\end{center}
\caption{The measured phase change (column 2, $\Delta \Phi$) during the sunrise (SR) and sunset (SS), and the corrected value between noon and the eclipse minimum (E).
The ratios between phase changes and the expected change of the ionospheric reflection layer, $\Delta$z, are in columns 3 and 4.}
\label{tab:Table 1}
\end{table}
\section{The Eclipse Model} \label{sec:model}
In this section we present a model of the phase deviation during the
eclipse. We consider effects such as the equivalent shortening of the
propagation path between transmitter and receiver, and increasing of
the effective ionospheric reflection height. We start with the
equation formulated by \citet{1959ITAP....7..154W},
see also
\citet{1972JATP...34..255D} and \citet{ Muraoka:1977}:
\begin{equation}
\label{eq:Equation 1}
\Delta \Phi = 360^{\circ}\frac{d}{\lambda}\left(\frac{1}{2a}+\frac{\lambda^2}{16z^3}\right)\Delta z,
\end{equation}
where $\Delta \Phi$ is the phase delay, $d$ is the distance between
transmitter and receiver, $\lambda$ is the wavelength of the VLF wave,
$a$ is the radius of the Earth, $\Delta z$ is the variation of the reflection
height and $z$ is the daytime altitude of the ionosphere. In this
analysis we used z =
70.5 km as proposed by \cite{Thomson:2010}. We note that Equation
\ref{eq:Equation 1} is valid for a propagation path of distance $d \ge
3000$~km
illuminated by the Sun;
our propagation path is at this lower limit,
even during the
eclipse, when only a small part of the path is totally obscured by the
Moon.
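
For reference, the constant in Equation \ref{eq:Equation 1} can be
evaluated with a short Python sketch (our own) using the parameters of
our path, $f = 25.2$ kHz, $d = 3007.15$ km and $z = 70.5$ km:
\begin{verbatim}
import numpy as np

c, f = 2.998e8, 25.2e3        # speed of light (m/s), NDK carrier (Hz)
lam = c / f                   # wavelength, about 11.9 km
a = 6371e3                    # Earth radius (m)
d = 3007.15e3                 # NDK-RX great circle path (m)
z = 70.5e3                    # unperturbed daytime reflection height (m)

# Equation (1): phase delay per unit rise of the reflection height.
dphi_dz = 360.0 * (d / lam) * (1.0 / (2 * a) + lam**2 / (16 * z**3))
print(dphi_dz * 1e3)           # about 9.4 degrees of phase per km of Dz
print(72.84 / (dphi_dz * 1e3)) # naive Dz (km) if the whole path were eclipsed
\end{verbatim}
The naive estimate of roughly 7.7 km obtained from the measured
$72.84^{\circ}$, as if the entire path were affected, is smaller than
the value obtained from the full model below, which accounts for the
shortened illuminated path.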
The
steps of the eclipse modelling are illustrated in Figure
\ref{fig:model}. First,
in order to filter out the effect of the C3.0 solar flare,
we fitted the observed phase profile at the time of totality (excluding the time interval of the flare) with
a model constructed as a
sum of Gaussian curves. Due to the asymmetric nature of the phase time profile, the best fit
was achieved with six Gaussian components (dashed black line in panel a).
Next, we propose that the illuminated propagation path distance is a
function of time, since the fraction of the path covered by the
eclipse shadow changes rapidly. Thus we can write Equation \ref{eq:Equation 1} as:
\begin{equation}
\label{eq:Equation 2}
\Delta \Phi(t) = C_1 \Delta d(t) \Delta z(t),
\end{equation}
where $C_1 =
360^{\circ}\frac{1}{\lambda}(\frac{1}{2a}+\frac{\lambda^2}{16z^3})$ is
a constant, $\Delta d(t)$ is a time function of the propagation path
distance and $\Delta z(t)$ is a time function of the reflection height
variation. We can rewrite Equation \ref{eq:Equation 2} in terms of
relative changes as:
\begin{equation}
\label{eq:Equation 3}
\widetilde{\Phi(t)} = C_2 \widetilde{d(t)}\Delta z(t),
\end{equation}
where the relative phase change is:
\begin{equation}
\label{eq:Equation 4}
\widetilde{\Phi(t)} = \frac{\Delta \Phi(t)}{\Phi_0} = \frac{\Phi_0-\Phi(t)}{\Phi_0} = 1 -\frac{\Phi(t)}{\Phi_0};
\end{equation}
and the relative propagation path distance is:
\begin{equation}
\label{eq:Equation 5}
\widetilde{d(t)} = \frac{d(t)}{d}= \frac{d- \Delta d(t)}{d}=1 - \frac{\Delta d(t)}{d};
\end{equation}
$C_2 = C_1 d/\Phi_0$ is a constant where $\Phi_0 = 260.15^{\circ}$
is the daytime value of the phase; $\Phi(t)$ is the measured
phase; and $\Delta d(t)$ represents the distance along the propagation path covered by the shadow.
To find out $\widetilde{d(t)}$ we proceed as follows
\begin{itemize}
\item
Compute, with a time cadence of 1 minute, the altitude and azimuth angles of the Sun and the Moon, as
seen by virtual observers located at a set of evenly spaced points
(every 10 km) along the propagation path NDK-RX.
\item
With this, compute the percentage of the solar disk area $A$ covered
by the Moon (via a two-circle intersection approach, see
\ref{apendix:covered} for details and the sketch after this list), seen at each observational
point along the propagation path ($d$), as a function of time ($t$),
obtaining the curve $A(d,t)$
($A_{jk}$ in \ref{apendix:matrix}).
Note that, for a fixed time $k$ the function $A(d,t_k)$ represents
the covered area of the solar disk along the entire path (this is equivalent to $A_{k}^{C}$ in \ref{apendix:matrix}).
\item
For consistency and to facilitate the computations, in this step,
instead of $A(d,t_k)$ we use its percentage (where 100\% is the maximum
of the solar disk area covered by the Moon, observed over the entire time
interval and over all observational points; later on we
recover the actual value via a weighting process in \ref{apendix:matrix}).
Then,
compute the distance $\Delta d(t_k, A_m)$
(or $\Delta d_{km}$ in \ref{apendix:matrix})
at a set of evenly spaced percentage values.
As an example, Figure \ref{fig:model}b shows
$A(d,t_k)$ for $t_k =$ 18:00 UT with a continuous line, and the
corresponding $\Delta d(t_k, A_m)$ for $m=60\%$ is marked by the
horizontal dashed line.
\item
Substitute $\Delta d(t,A_m)$ in Equation \ref{eq:Equation 5} to obtain
$\widetilde{d(t,A_m)}$; this is the illuminated distance as
a function of time and percentage ($m$).
For instance,
Figure \ref{fig:model}c shows two iso-curves at $ m=30 \%$ and
$80 \%$, with long and short dashed lines, respectively; and the colored
contours are the same but for different values of m.
\item
Finally, we define the total change of illuminated distance as the
envelope of all the
iso-curves. This is shown by the continuous line in Figure
\ref{fig:model}c (see \ref{apendix:matrix} for the
computational details).
\end{itemize}
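
As a complement to \ref{apendix:covered}, the two-circle intersection
step can be sketched in a few lines of Python (our own illustration;
here $R_{s}$ and $R_{m}$ denote the apparent angular radii of the Sun
and the Moon, and $d_{c}$ the angular separation of their centers, all
in the same units):
\begin{verbatim}
import numpy as np

def covered_fraction(Rs, Rm, dc):
    """Fraction of the solar disk covered by the Moon (circle overlap)."""
    if dc >= Rs + Rm:                      # disks do not overlap
        return 0.0
    if dc <= abs(Rs - Rm):                 # one disk contains the other
        return min(1.0, (Rm / Rs)**2)
    # area of the lens formed by two intersecting circles
    a1 = Rs**2 * np.arccos((dc**2 + Rs**2 - Rm**2) / (2 * dc * Rs))
    a2 = Rm**2 * np.arccos((dc**2 + Rm**2 - Rs**2) / (2 * dc * Rm))
    a3 = 0.5 * np.sqrt((-dc + Rs + Rm) * (dc + Rs - Rm)
                       * (dc - Rs + Rm) * (dc + Rs + Rm))
    return (a1 + a2 - a3) / (np.pi * Rs**2)

# Equal apparent radii with centers half a radius apart:
print(covered_fraction(0.267, 0.267, 0.1335))   # ~0.69 of the disk covered
\end{verbatim}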
Once we know the effect of the eclipse on the propagation path, we can
quantify the increase of the effective reflection height of the
ionosphere caused by the Moon's shadow. In order to find $\Delta z(t)$
(Equation \ref{eq:Equation 3}), we perform a discrete convolution over
two finite sequences:
\begin{equation}
\label{eq:Equation 6}
(f \ast g)[n] = \sum_{m=-M}^{M} f[n-m]g[m],
\end{equation}
where $f$ and $g$ are one-dimensional input arrays whose convolution
is defined at each overlapping point; in our case $f = 1/\widetilde{d(t)}$ and $g =
\widetilde{\Phi(t)}$. The resulting convolution (see Figure
\ref{fig:model}d) is proportional to $C_2 \Delta z(t)$, which is
dimensionless. By normalizing it, we are left with a dimensionless
function of the reflection height variation, $\widetilde{\Delta z(t)} =
\Delta z(t)/b$, where $b$ is an unknown parameter in units of
km. Putting together Equations \ref{eq:Equation 3}, \ref{eq:Equation 4}
and \ref{eq:Equation 5}, we obtain the model of the phase
$\Phi_M$ as:
\begin{equation}
\label{eq:Equation 7}
\Phi_{M}(t) = \Phi_0 - C_1 d(t) \widetilde{\Delta z(t)} b.
\end{equation}
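For completeness, the convolution and normalisation steps described above can be sketched with NumPy (following the linear convolution described in the SciPy documentation cited in the Acknowledgements); \texttt{f} and \texttt{g} are assumed to be sampled on the same 1-minute grid:
\begin{verbatim}
import numpy as np

# f = 1 / d_tilde(t) and g = Phi_tilde(t), as defined in the text
conv = np.convolve(f, g, mode='same')   # proportional to C_2 * Delta z(t)

# normalise to obtain the dimensionless height profile of Equation 8
dz_tilde = conv / np.max(conv)          # Delta z(t) / b
\end{verbatim}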
The value of the parameter $b$ is still unknown. Therefore, we
compute $\chi^2$ between the modelled $\Phi_{M}(t)$ and the measured $\Phi(t)$ for $b \in [0,20]$ km, with a step of 0.1 km, and adopt the value of $b$ that minimises $\chi^2$. This directly gives us the profile of the reflection height variation at the time of the eclipse,
\begin{equation}
\label{eq:Equation 8}
\Delta z(t) = \widetilde{\Delta z(t)} b.
\end{equation}
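A minimal sketch of this grid search, assuming the arrays \texttt{phi\_obs}, \texttt{sigma\_phi} (the measurement uncertainty, not quoted explicitly in the text), \texttt{d\_t} ($= d(t)$) and the constants \texttt{C1} and \texttt{phi0} are defined as above:
\begin{verbatim}
import numpy as np

b_grid = np.arange(0.0, 20.0 + 0.1, 0.1)            # km, step of 0.1 km
chi2 = np.empty_like(b_grid)
for i, b in enumerate(b_grid):
    phi_model = phi0 - C1 * d_t * dz_tilde * b       # Equation 7
    chi2[i] = np.sum(((phi_obs - phi_model) / sigma_phi)**2)

b_best = b_grid[np.argmin(chi2)]                     # ~9.3 km in our case
\end{verbatim}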
The best fit between our model and the measured phase is achieved for
$b = 9.3 \pm 0.1$ km (see Figure \ref{fig:model}e). This means that the
estimated rise in reflection height at the time of the maximum eclipse
(18:00 UT) on the propagation path NDK-RX is about $\Delta z_{max} =
9.3 \pm 0.1$ km (see Figure \ref{fig:model}f), so that the effective
reflection height of the ionosphere was $z_{max} = 80.0 \pm 0.5$ km at
the time of totality.
As seen in Table \ref{table:dh},
the computed $\Delta z$ (marked as ``This work'')
is slightly higher than similar results obtained with the VLF
technique, although it is in very good agreement with the
$\Delta z$ measured using rockets.
\begin{table}
\begin{tabular}{rrcc}
\hline
GCP (km) & $\Delta z$ (km) & Date & Source \\
\hline
1245 & 8.00 & 11-Aug-1999 & {\cite{Clilverd:2001}} \\
2820 & 6.18 & 30-Jun-1992 & {\cite{MendesDaCosta:1995}} \\
3007 & 9.30 & 21-Aug-2017 & This work, Model\\
3007 & 5-8 & 21-Aug-2017 & This work, Ratio\\
4800 & 3.00 & 22-Jul-2009 & {\cite{2019JGRA..124..616V}} \\
5700 & 3.00 & 15-Jan-2010 & {\cite{2012P&SS...73..310P}} \\
5761 & 3.75 & 1-Aug-2008 & {\cite{De:2010}} \\
$< 10 000$ & 5.14 & 15-Jan-2010 & {\cite{Guha:2012}} \\
$>10 000$ & 4.85 & 22-Jul-2009 & {\cite{Guha:2012}} \\
Rocket & 8.00 & 7-Mar-1970 & {\cite{Clilverd:2001}} \\
Rocket & 9.00 & 12-Nov-1966 & {\cite{Clilverd:2001}} \\
\hline
\end{tabular}
\caption{Change of $\Delta z$ during solar eclipses measured by
different authors using VLF waves, except for the last two rows, where
the measurements were made by rockets. ``This work, Model''
refers to our eclipse model (Sect. \ref{sec:model}) and ``This work,
Ratio'' refers to the basic comparison between the eclipse effect and
the sunrise/sunset transition (Sect. \ref{sec:obs}).}
\label{table:dh}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{figure4and5_merged2.png}
\caption[f4]{(a) The normalized measured phase $\Phi(t)/\Phi_0$ (red)
approximated by the sum of six Gaussian profiles (dashed black line).
(b) Percentage of the covered solar disk as a function of the distance
along the NDK-RX propagation path at 18:00 UT (continuous line); the
horizontal dashed line marks the distance where the solar disk was
60\% covered at this specific time.
(c) Iso-curves of the covered percentage as a function of time (dashed
lines mark the 30\% and 80\% levels, and colored contours mark other
percentage levels) and the resulting envelope ($\widetilde{d(t)}$)
over the entire time range (thick solid line).
(d) The discrete convolution between $f = 1/\widetilde{d(t)}$ (dashed
line) and $g = \widetilde{\Phi(t)}$ (dot-dashed line); the convolution
(continuous line) is proportional to $C_2 \Delta z(t)$.
(e) Our model (black), which best fits the measured phase (red) when $b = 9.3 \pm 0.1$ km.
(f) Change of the ionospheric reflection height during the eclipse.}
\label{fig:model}
\end{center}
\end{figure}
\section{The Flare Input} \label{sec:flare}
In this section we investigate the effects of the C3.0 flare that
occurred at the time of the eclipse, reaching its maximum X-ray flux at
17:57 UT. This event presents a unique opportunity to study the
ionospheric response to the excess flux of a solar flare under eclipsed
conditions, i.e. with low background radiation.
We first take the measured phase data (NDK-RX) and subtract from them
the 6-Gaussian fit, which describes the minimum expected value of the
phase at the time of totality over our propagation path. In this way we
isolate the time profile of the C3.0 flare.
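A minimal sketch of this step, assuming the time axis \texttt{t}, the measured phase \texttt{phi\_obs} and an 18-parameter initial guess \texttt{p0\_guess} (six Gaussians) as inputs:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(t, *p):
    # Sum of N Gaussians; p = (A1, mu1, s1, ..., AN, muN, sN).
    y = np.zeros_like(t)
    for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / s)**2)
    return y

popt, pcov = curve_fit(multi_gauss, t, phi_obs, p0=p0_guess)
flare_profile = phi_obs - multi_gauss(t, *popt)  # excess phase of the flare
\end{verbatim}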
In order to compare the duration and shape of the solar X-ray flux
excess and the associated ionospheric effect observed through the VLF
data, we plot (see Figure \ref{fig:Figure 5}) the normalized VLF phase
response and the solar X-ray flux from the GOES satellite in the
1-8 \AA~ and 0.5-4.0 \AA~ bands. We notice a much sharper profile than
the regular response usually observed in non-eclipse conditions; i.e.,
the phase change (due to the solar flare) closely follows the X-ray
flux. This could be attributed to the different (eclipsed) background
conditions in the lower ionosphere, causing an increase in the
effective recombination coefficient. A detailed study of this event
will be published elsewhere.
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{figure5_goes_vs_lavnet.png}
\caption[f4]{Solar flare X-ray flux from the GOES satellite at 0.5-4.0 \AA~
(black line) and 1-8 \AA~ (blue line). The vertical axis represents
the normalized GOES flux and the normalized VLF phase (red) for the NDK data.}
\label{fig:Figure 5}
\end{center}
\end{figure}
We use the ionospheric height profile
$z = z(t)$, as well as the propagation
distance function $d = d(t)$ (see Section \ref{sec:model})
and Equation \ref{eq:Equation 1}; in this case, $\Delta \Phi$ is the
phase difference caused by the C3.0 flare, $z(t) = z_0 +
\Delta z_e(t)$ is the ionospheric height profile under eclipse
conditions, and $\Delta z_e(t)$ is the eclipse-induced change of
height (after the convolution of Eq. \ref{eq:Equation 8}; see Figure \ref{fig:model}f).
We then compute the flare-induced ionospheric height difference $\Delta
z_f(t)$ to quantify the flare contribution to the total effective
ionospheric height change, and compare the ionospheric height profiles
for the solar eclipse alone and for the solar eclipse combined with the
solar flare input (see Figure \ref{fig:Figure 6}).
Our results are in good agreement with those obtained by
\citet{2018AdSpR..62..651C}, who measured the signal amplitude at two
receiving stations in North America, YADA in McBaine and KSTD in
Tulsa. At the latter they observed similar effects caused by a C3.0 flare.
The authors used the LWPC code and Wait's two-component D-region
ionospheric model to numerically reproduce the observed signal
amplitude variation at both receiving locations.
Their altitude profile at maximum electron density for the combined
effects (solar eclipse and solar flare) corresponds well with our
results (see Figure 8 in \citealt{2018AdSpR..62..651C}). This is
remarkable given the relative simplicity of our approach.
\begin{figure}
\begin{center}
\includegraphics[scale=0.80]{ionosphere_height.png}
\caption[f4]{Comparison of the ionospheric height for the solar
eclipse alone and for the solar eclipse combined with the solar flare
input. The solar flare contributes a maximum change of height of about $-3 \pm 1$ km.}
\label{fig:Figure 6}
\end{center}
\end{figure}
\section{Summary and discussion} \label{sec:summary}
We have used the Latin American Very Low Frequency Network at Mexico
(LAVNet-Mex) station to study the effects of the August 21, 2017 total
solar eclipse on the D-region of the Earth's ionosphere.
Our main goal was to estimate the rise of the reflection height at the
time of the eclipse via the measured phase deviation of the received
signal from NDK transmitter in ND, USA (at a fixed frequency of 25.2 kHz).
We presented the experimental setup used for phase and amplitude measurements of the received VLF waves.
We also described the so-called ``Great American Solar Eclipse'' and
its path across the mainland, which crossed almost perpendicularly
the 3007.15 km long propagation path NDK-RX. As expected, the eclipse
caused a decrease in both the phase and the amplitude of the VLF signal.
During the eclipse, a substantial part of the ionosphere above our propagation path
received less ionizing radiation from the Sun. Therefore, to estimate
the rise of the VLF reflection height,
we first approached the problem in a very simple way, using as
reference the total night/day change of 20-30 km (i.e. at sunrise and
sunset) and calculating the ratio between the maximum eclipse phase
change and the day-night phase change.
The eclipse accounted for $\sim 25\%$ of the total diurnal average phase change,
equivalent to a rise in reflection height of about 5-8 km.
Around the totality time (17:57 UT) a C3.0 solar flare occurred,
powerful enough to shift the phase and amplitude to more positive
values and distort the eclipse pattern. To remove its effects, we
modelled the phase profile during the eclipse totality as a sum of
Gaussian functions.
In Section \ref{sec:model} we described our model, which uses the
eclipse geometry to obtain the area of the solar disk occulted by the
Moon as a function of time. At each time step, the model considers the
percentage of GCP darkening and derives the expected phase and height
variations of the D-region using the \citet{1959ITAP....7..154W},
\citet{1972JATP...34..255D} and \citet{Muraoka:1977} formulations for
VLF signal propagation in the Earth-ionosphere waveguide.
In particular, we found that at the time of the eclipse totality (18:00 UT),
the rise in reflection height was $\Delta z_{max} = 9.3 \pm 0.1$ km.
Therefore, the effective reflection height of the ionosphere rose to approximately $z_{max} = 80.0 \pm 0.5$ km at the time of totality.
It is important to note that our estimates of the reflection height
variation during the eclipse, obtained with two different methods
(ratio calculation and eclipse modeling), show good agreement with the
results from similar measurements (Table \ref{table:dh}).
In particular, the analysis of the ratio between the ionospheric
changes during the eclipse and the day/night transition
(Sect. \ref{sec:obs}) agrees very well with the results obtained with
similar GCPs and analysis methods (rows 2 and 4 of
Table \ref{table:dh}), whereas the result of the eclipse model
(Sect. \ref{sec:model}) is slightly higher than other similar VLF
observations, although in very good agreement with the rocket
observations (rows 3, 10 and 11 of Table \ref{table:dh}).
These differences are expected, since the amount of the reflection
height variation during an eclipse depends on the GCP length and on
the relative direction of the GCP and the path of the eclipse totality
\citep{2019JGRA..124..616V}. In our case, the GCP distance was
3007.15 km and it crossed the path of the total eclipse almost
perpendicularly.
Finally, our model of the C3.0 X-ray flare observed during the eclipse is
in very good accordance with the results obtained through the LWPC simulations reported in \citet{2018AdSpR..62..651C} for a short propagation path. This supports the use of our model to study the eclipsed ionosphere.
\section*{Acknowledgements}
R. Vogrin\v{c}i\v{c} acknowledges support from the Public Scholarship, Development, Disability and Maintenance Fund of the Republic of Slovenia.
A. Lara thanks UNAM-PASPA for partial support.
J.P. Raulin thanks funding agencies CAPES
(PRINT-88887.310385/2018-00 and Proc. 88881.310386/2018–01) and CNPq (312066/2016-3) for partial support.
The eclipse timing and location information used in this paper was obtained from the NASA website at \url{eclipse.gsfc.nasa.gov} (Fred Espinak).
Solar flare soft X-ray fluxes were obtained from the GOES satellite database at \url{ftp.swpc.noaa.gov} (prepared by the U.S. Dept. of Commerce, NOAA, Space Weather Prediction Center).
%
The description of the linear convolution used for our eclipse model was
obtained from the SciPy website at \url{docs.scipy.org}.
\section*{References}
\bibliographystyle{elsarticle-harv}
\section{Introduction}\label{sec:intro}
The last decades have seen dramatic advances in our knowledge of galaxy formation and evolution \citep{Giavalisco02,Renzini06,Silk12,Carilli13,Madau14,Naab17}. The global star-formation rate density (SFRD) has been found to rise during cosmic reionisation from $z \sim 10$, peak at $1 < z < 3$, and finally decrease by a factor of $\sim 10$ to the local Universe \citep{Lilly96,Bouwens11,Madau14}. Several studies suggest that, at all epochs, the bulk of the star-formation activity takes place in galaxies lying on the \lq\lq Main-Sequence\rq\rq{} (MS): a tight correlation between the star-formation rate (SFR) and the stellar mass ($\mathrm{M_{*}}$; \citealt{Daddi07,Rodighiero11,Speagle14,Santini17}). Therefore, we have indications that most of the stars in the Universe formed along the MS, at the peak of the SFRD at $z\sim2$. However, we are still trying to understand the main mechanisms responsible for the rapid increase of the SFRD at $z < 6$. One possible explanation is an increase in the gas fraction along with a rise in the star-formation efficiency per unit mass, possibly driven by galaxy mergers \citep{Genzel15,Silverman15,Scoville16}.
At $z>3$ the cosmic SFRD is almost exclusively constrained by UV-selected samples \citep{Bouwens12a,Bouwens12b,Schenker13,Oesch15}, lacking information about the star formation obscured by dust. Rest-frame UV-selected galaxies must be corrected for dust absorption: wrong dust corrections can lead to large uncertainties on the SFR estimates and, consequently, to an incorrect picture of the star-formation history (SFH) of the Universe \citep{Gallerani10,Castellano14,Scoville15,Alvarez16}. At the same time, heavily dust-obscured star-forming galaxies (SFGs) may be completely missed by surveys probing rest-frame UV/optical emission.
With the advent of new facilities, such as the Atacama Large Millimeter Array (ALMA), a population of faint, dusty SFGs has been confirmed at high redshift, e.g. sub-millimeter galaxies \citep{Dunlop04,Daddi09,Riechers10,Huang14,Santini16}, ALMA-only sources (e.g. \citealt{Williams19}), the extremely red objects selected with $H$ and IRAC colors (HIEROs galaxies) from \cite{Wang16}. While the bulk of these objects peaks at $2<z<3$, a significant tail of higher redshift, dusty galaxies without optical/near-infrared (NIR) detections is in place at $z>4$ \citep{Capak08,Daddi09,Riechers10,Walter12,Riechers13,Pavesi18}. For instance, \cite{Walter12} combined measurements from the IRAM Plateau de Bure Interferometer (PdBI) and the Jansky Very Large Array (VLA) to put constraints on the dust-obscured starburst HDF850.1, one of the first detected optical/NIR invisible galaxies. This source is at $z=5.18$ among an overdensity of galaxies at the same redshift, with a [CII]/FIR luminosity ratio comparable to that observed in local SFGs. In addition, many of these objects are extreme starbursts, such as HFLS3. This source is confirmed to be at $z=6.34$ exploiting information from different molecular and atomic fine structure cooling lines and shows a large FIR luminosity (i.e. $\mathrm{L_{FIR}\sim2 \times 10^{13}}$ $\mathrm{L_\odot}$) and SFR $>10^3$ $\mathrm{M_{\odot}/yr}$ \citep{Riechers13}.
An in-depth study of this elusive population of galaxies is necessary in order to complete the census of SFGs at high redshift contributing to the cosmic SFH as well as to better understand the early phases of the galaxy formation \citep{Blain02,Casey14}.
In this context, the ALMA Large Program to INvestigate [CII] at Early times (ALPINE; B\'ethermin et al. in prep.; \citealt{Faisst19}; \citealt{LeFevre19}) is going to improve our knowledge of the obscured star formation at $z>4$. It takes advantage of observations of the singly ionised carbon [CII] at 158\thinspace$\mathrm{\mu m}$ and its adjacent FIR continuum for a sample of 118 SFGs in the Cosmic Evolution Survey (COSMOS; \citealt{Scoville07a,Scoville07b}) and the Extended Chandra Deep Field South (E-CDFS; \citealt{Giavalisco04,Cardamone10}) fields. These sources are spectroscopically confirmed to be at $4 < z < 6$ with the Visible Multi-Object Spectrograph (VIMOS) at the Very Large Telescope (VLT; \citealt{LeFevre03,LeFevre15}) and with the DEep Imaging Multi-Object Spectrograph (DEIMOS) at the Keck II telescope \citep{Faber03,Hasinger18}.
The [CII] line is one of the strongest lines in the FIR band (e.g. \citealt{Stacey91}) as it is one of the main coolants of the interstellar medium (ISM; \citealt{Carilli13}). Since it has a lower ionisation potential than neutral hydrogen (HI), i.e. 11.3 eV compared to 13.6 eV, this line can trace different gas phases, such as dense photodissociation regions (PDRs; \citealt{Hollenbach99}), neutral diffuse gas (e.g. \citealt{Wolfire03,Vallini15}), and diffuse ionised gas (e.g. \citealt{Cormier12}). In principle, in order to remove the ambiguity on the interpretation of the [CII] emission, the relative contribution of the various gas phases should be assessed. However, different studies suggest that the bulk of the [CII] emission arises from the external layers of molecular clouds heated by UV photons in PDRs \citep{Stacey91,Madden97,Kaufman99,Cormier15,Pavesi16}; thus, this line can be used as a tracer of star formation (e.g. \citealt{DeLooze14}; but see also \citealt{Zanella18} who suggest that [CII] is a better tracer of the molecular gas). Therefore, the combination of FIR continuum and UV measurements, together with the [CII] observations, will provide, at the redshift explored by the ALPINE survey, an estimate of the total (obscured and unobscured) star formation in these galaxies, corresponding to 80-95$\%$ of the cosmic SFRD at $4<z<6$ \citep{Casey12,Bouwens16,Capak15,Aravena16,Novak17}.
The remaining 5-20$\%$ of star formation, which is not traced by UV data, is probed by a blind survey, obtained at no additional observational cost, covering an area of 25 arcmin$^2$ beyond the targeted sources, where many galaxies have been serendipitously detected so far (Loiacono et al. in prep.). Among these, several sources are invisible in the optical bands. The study of these objects is crucial for obtaining a robust estimate of the total SFRD at $z > 4$ and for characterising the overall population of high-redshift SFGs.
In this work, we discuss the nature of a galaxy (hereafter, Gal-A) randomly discovered in the field of the ALPINE target DEIMOS$\_$COSMOS$\_$665626 (hereafter, DC$\_$665626). The galaxy has a spatial offset of $\sim 6$ arcsec (1 arcsec is $\sim 7$ kpc at $z = 4.583$, the redshift of the target) from DC$\_$665626; it does not show any optical counterpart at the position of the emission detected with ALMA and, for this reason, its nature remains ambiguous. Moreover, since Gal-A is the brightest galaxy detected in line emission among all those having no optical counterpart and serendipitously observed in ALPINE so far (Loiacono et al. in prep.), this work can be exploited as a benchmark for future analyses of these types of sources.
The paper is organised as follows: in Section 2 we introduce the available data we have for Gal-A, and explain the methods used to analyse this source. We present the results in Section 3 and discuss them in Section 4, trying to constrain the nature of the galaxy. Summary and conclusions are provided in Section 5.
Throughout this paper, we assume a $\Lambda$CDM cosmology with $\mathrm{H_0}$ = 70 km/s/Mpc, $\mathrm{\Omega_m}$ = 0.3, and $\mathrm{\Omega_\Lambda}$ = 0.7 \citep{Planck18}. We furthermore use a \cite{Chabrier03} initial mass function (IMF) and AB magnitudes.
\section{Observations and Data Reduction}
\subsection{ALMA data}
DC$\_$665626 has been observed with ALMA in Band 7 ($\nu_{\mathrm{obs}}=[275-373]$ GHz) on 25 May 2018 (Cycle 5; Project 2017.1.00428.L, PI O. Le F\`evre) using 45 antennas with the C43-2 array configuration (with a maximum baseline of $\sim$ 250 m). The on-source integration time is 16 minutes, with a total elapsed time of 37 minutes.
The spectral setup consists of two sidebands with a frequency range of $\mathrm{\Delta_{\nu}^{l} \simeq [339-343]}$ GHz and $\mathrm{\Delta_{\nu}^{u} \simeq [351-355]}$ GHz for the lower and upper sidebands, respectively. Each sideband is made up of two spectral windows (SPWs) of width 1.875 GHz, each containing 128 channels of 15.625 MHz width (the sidebands overlap by 7 channels), with a typical rms of 0.6 mJy beam$^{-1}$ per channel. The flux and phase are calibrated using the standard calibrators J1058+0133 and J0948+0022, respectively.
The data are analysed using standard pipelines for ALMA reduction included in the software CASA \citep{McMullin07}, version 5.4.0. The imaging is obtained by running the \texttt{TCLEAN} task on the visibilities, setting a threshold of 3$\mathrm{\sigma_{rms}}$ on the noise level when cleaning the data (where $\mathrm{\sigma_{rms}}$ is obtained from the dirty image), and with a natural weighting scheme to increase the sensitivity.
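As an illustration, the imaging step corresponds to a CASA call of the following form; the measurement set name, cell size, image size and iteration cap are placeholders, while the natural weighting and the $3\sigma_{\rm rms}$ threshold (using the typical 0.6 mJy beam$^{-1}$ channel rms quoted above) follow the setup described in the text.
\begin{verbatim}
# run inside CASA 5.4
tclean(vis='DC_665626.ms',            # placeholder file name
       imagename='DC_665626_cube',
       specmode='cube',               # one image plane per channel
       weighting='natural',           # maximise sensitivity
       threshold='1.8mJy',            # ~3 sigma_rms from the dirty image
       niter=10000,                   # illustrative iteration cap
       cell='0.15arcsec',             # illustrative cell/image sizes
       imsize=256,
       interactive=False)
\end{verbatim}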
\subsection{Identification of the serendipitous source}
As part of the COSMOS field \citep{Scoville07a,Scoville07b}, which is one of the most thoroughly studied regions of the sky so far, multi-wavelength data are available for the whole ALPINE sample, including high-resolution Hubble Space Telescope (HST) imaging \citep{Koekemoer07,Koekemoer11}, and photometry from the Canada-France-Hawaii Telescope (CFHT), the Spitzer telescope and other facilities \citep{Capak07,Laigle16}. Spectroscopic redshifts are available from large optical spectroscopic campaigns at the VLT (VUDS; \citealt{LeFevre15}) and Keck (DEIMOS; \citealt{Hasinger18}). Multi-band photometry and spectroscopic data allow us to build spectral energy distributions (SEDs) and to derive robust parameters including SFRs and stellar masses through SED-fitting \citep{Faisst19}. Through this analysis, we find that DC$\_$665626 has log$\mathrm{(M_{*}/M_{\odot})} = 9.21^{+0.16}_{-0.18}$, log$\mathrm{(SFR/[M_{\odot} yr^{-1}]) = 0.71^{+0.29}_{-0.18}}$, and a spectroscopic redshift of $z_\mathrm{spec} = 4.583$, obtained from Ly$\alpha$ emission and ISM absorption lines in the observed-frame optical spectrum.
\begin{figure}
\includegraphics[width=\columnwidth]{CII_mapV8.png}
\caption{Continuum-subtracted moment-0 map of Gal-A. The ALPINE target DC$\_$665626, Gal-A and Gal-B are labelled. Line and continuum emissions are shown with grey and purple contours starting from $4\sigma$ and $3\sigma$ (at step of 2$\sigma$), respectively. The white ellipse in the bottom left corner is the synthesized beam.}
\label{fig:maps}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CII_line_final_newV4.png}
\caption{Emission line flux at the position of Gal-A (black histogram) as a function of the observed frequency. The solid red curve represents the gaussian fit on the line. The dashed grey line marks the zero-flux level. Also shown is the velocity offset on the top axis.}
\label{fig:CII_spec}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1.7\columnwidth]{cutouts_V5.png}
\end{center}
\caption{Cutouts centred on Gal-A in different photometric filters, from HST/ACS \citep{Koekemoer07} and Subaru, UltraVISTA and Spitzer \citep{Capak07,Laigle16}. Grey and purple contours are $>3\sigma$ line and continuum emissions (at step of 2$\sigma$), respectively. Gal-A, Gal-B, and Gal-C are labelled in the upper-left plot of the figure. Wavelengths increase from the upper-left to the bottom-right corner.}
\label{fig:cutouts}
\end{figure*}
Since the ALPINE target DC$\_$665626 is at $z_\mathrm{spec}=4.583$, the [CII] emission from this source ($\mathrm{\nu_{rest} = 1900.54}$ GHz) is expected to be redshifted at around $\mathrm{\nu_{obs}=340.42}$ GHz, falling inside the lower sideband of the observed ALMA spectrum. When we inspect the cube, together with the [CII] emission coming from DC$\_$665626 (at 4.4$\sigma$; B\'ethermin et al. in prep.), we identify a more significant emission with a spatial offset of $\sim 6$'' ($\sim 40$ proper kpc at $z\sim4.6$) with respect to the ALPINE target. We refer to the source of this emission as Gal-A (RA: 10:01:13.82, Dec: +02:18:40.66), that is detected both in continuum and in line emission at 5$\sigma$ and 12$\sigma$, respectively. Fig. \ref{fig:maps} shows the continuum-subtracted moment-0 map of Gal-A (see section \ref{sec:analysis}). Also shown are the synthesized beam with a size of 1.08'' $\times$ 0.85'' at P.A. = - 80$^{\circ}$, and another galaxy (Gal-B) detected at 9$\sigma$ in continuum only northwards of the offset emission ($\sim$ 2'' away from Gal-A when considering the peak position of the two emissions).
We show in Fig. \ref{fig:CII_spec} the spectrum of the emission line observed at the position of Gal-A; it is extracted from a circular region 2'' wide, including the 2$\sigma$ contours from the moment-0 map of the source. Using the \textit{spectral profile tool} within the CASA viewer, we fit the line profile with a Gaussian function finding a full width at half maximum (FWHM$_\mathrm{line}$) of $308 \pm 34$ km/s and a peak frequency at $\mathrm{\nu_{peak}} = 340.76$ GHz.
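A minimal sketch of an equivalent Gaussian fit with \texttt{scipy} (the CASA viewer tool was used in practice); \texttt{nu\_obs} and \texttt{flux} are the channel frequencies in GHz and the extracted flux densities, and the initial guess is illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5                       # speed of light [km/s]

def gauss(nu, A, nu0, sigma):
    return A * np.exp(-0.5 * ((nu - nu0) / sigma)**2)

popt, pcov = curve_fit(gauss, nu_obs, flux, p0=[5.0, 340.7, 0.2])
nu_peak = popt[1]                                        # GHz
fwhm_ghz = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[2]
fwhm_kms = fwhm_ghz / nu_peak * C_KMS                    # ~308 km/s
\end{verbatim}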
Though DC$\_$665626 is detected in [CII] in spatial coincidence with its UV emission, we consider the possibility that the emission centred at the position of Gal-A is connected with that of the ALPINE target. The displacement between [CII] and UV/Ly$\alpha$ emission has already been observed in high-redshift galaxies (\citealt{Gallerani12,Willott15}; Cassata et al. in prep.; but see also \citealt{Bradac17}). It is also reproduced by radiative transfer simulations as a consequence of the strong stellar feedback which could quench the [CII] emission in the central region of the galaxies, allowing it to arise mostly from infalling or satellite clumps of neutral gas around them \citep{Vallini13,Maiolino15}. However, these models predict spatial offsets up to $\sim 1-2$ arcsec ($\sim 7-14$ kpc at the redshift of the target), well below the offset that we measure in this case ($\gtrsim 6$ arcsec). Therefore, we exclude that the observed ALMA emission at the position of Gal-A is directly linked to DC$\_$665626.
\subsection{Multi-wavelength photometry of Gal-A}
As Gal-A lies in the COSMOS field \citep{Laigle16}, we exploit all the available multi-wavelength photometry in order to identify the counterpart associated with the discovered emission. In Fig. \ref{fig:cutouts} we present some cutouts centred on this galaxy in different photometric filters, from UV to FIR. Gal-B, which in the COSMOS2015 catalogue \citep{Laigle16} has a photometric redshift $z = 2.249^{+0.223}_{-0.151}$, is visible in most of the photometric bands. Another foreground galaxy, labelled Gal-C in Fig. \ref{fig:cutouts}, is well detected in the images from optical to NIR wavelengths, and has $z = 2.021^{+0.123}_{-0.116}$ in COSMOS2015. Conversely, Gal-A is not clearly visible in any optical/NIR filter except for the UltraVISTA $K_s$ band, even though it is not listed as a detection in the UltraVISTA DR4 catalogue \citep{McCracken12}.
In more detail, to reproduce the SED of Gal-A, we use observations in the $u^{*}$ band from MegaCam on CFHT, as well as the $B$, $V$, $r^{+}$, $i^{+}$, and $z^{++}$ filters from Suprime-Cam on Subaru, in order to set an upper limit to the optical emission of the source. NIR constraints come from the $J$, $H$, and $K_s$ bands from VIRCAM on the VISTA telescope. Finally, we obtain information on the SED up to $\sim8$ $\mu$m in the observed-frame from the IRAC channels on Spitzer. For each band, we centre a fixed aperture of 1.4'' diameter on Gal-A (enclosing the 3$\sigma$ contours of the emission line detected by ALMA) and estimate its flux. The rms is computed as the average rms within several apertures (of 1.4'' diameter) placed in different regions of the sky, close to the source but away from evident emission. As expected, we do not find any significant detection of our source in the optical bands. Some marginal detections are present in the $B$, $r^{+}$, and $i^{+}$ filters, but they are all below $2\sigma$ and could be partially contaminated by the emission of Gal-C. For this reason, we consider the fluxes measured in these bands as upper limits. The same argument applies to the VISTA filters, except for the $K_s$ band in which, as mentioned above, a faint emission arises at the position of Gal-A. Making use of \texttt{SExtractor} \citep{Bertin96}, we manage to deblend the analysed galaxy from the other two nearby sources, obtaining an estimate of its apparent magnitude in this band. Through this analysis, Gal-A is detected at $\sim$ 2.3$\sigma$, with an AB magnitude $K_s = 24.8 \pm 0.5$, which is very close to the corresponding 3$\sigma$ limiting magnitude of $\sim25$ from the UltraVISTA DR4 catalogue.
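A minimal sketch of this fixed-aperture photometry, here written with the \texttt{photutils} package rather than the exact tools used for the catalogue work; the pixel position of Gal-A, the pixel scale (arcsec/pixel) and the list of empty-sky positions are assumed as inputs:
\begin{verbatim}
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

r_pix = 0.7 / pix_scale               # 1.4'' diameter -> 0.7'' radius
ap = CircularAperture((x_gal, y_gal), r=r_pix)
flux = aperture_photometry(image, ap)['aperture_sum'][0]

# rms from several empty-sky apertures of the same size
sky = [aperture_photometry(image, CircularAperture(p, r=r_pix))
       ['aperture_sum'][0] for p in sky_positions]
rms = np.std(sky)
\end{verbatim}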
Finally, a weak emission seems to arise at the position of Gal-A in the IRAC bands. However, as shown in Fig. \ref{fig:cutouts}, this could be partially contaminated by the emission of the two nearby galaxies at $z\sim2$ in the 3.6 and 4.5 $\mu$m bands, while it seems to emerge from the background at 8.0 $\mu$m, where Gal-B and Gal-C become fainter. We find that, in the \textit{Rainbow} catalogue \citep{Perez08,Barro11a,Barro11b}, Gal-B and Gal-C have been deblended in all the four IRAC channels using the Subaru $r$ band as a prior for the two sources, while no counterpart of Gal-A is present.
In order to extract the photometric information on Gal-A from the IRAC bands, we attempt a deblending procedure using the 2D GALFIT fitting algorithm \citep{Peng02}. We model Gal-B and Gal-C as point-like sources, using their optical positions and deblended fluxes from \textit{Rainbow} as a first guess, and considering for each IRAC channel its typical PSF ($\sim 2''$). To obtain the Gal-A flux in each channel, we perform aperture photometry at the position of Gal-A in the residual maps\footnote{As we use a fixed aperture of 1.4'' (which is smaller than the typical PSF of the IRAC channels) centred on Gal-A to obtain the photometry in each band, we apply aperture corrections to estimate the total fluxes in the IRAC filters. In particular, we divide the flux measured in these bands by 0.61, 0.59, 0.49 and 0.45, going from 3.6 to 8.0 $\mu$m}. We are aware that with this procedure we may underestimate the flux of Gal-A in the IRAC channels, as we are spreading the global flux of the three components over only two sources. To account for this, when performing SED-fitting (see Section \ref{section:mstar}) we decide to consider IRAC fluxes ranging between the deblended (lower) and blended (higher) values. We find, however, that our conclusions do not depend on this assumption; in fact, we obtain similar results when using the deblended fluxes in the SED-fitting. As an alternative approach, we tried to fit a three-component model, leaving the flux of Gal-A as a free parameter and using the ALMA continuum peak position as a prior. However, probably due to the small distance between the galaxies, the code is not able to perform the fit. Table \ref{tab:phot} summarises the photometric information we obtain for Gal-A; this is exploited in section \ref{section:mstar} to estimate the stellar mass of this galaxy.
\begin{table}
\begin{center}
\begin{tabular}{c c c c c}
\hline
\hline
Instrument & Filter & Central $\mathrm{\lambda}$ & Observed flux\\
/Telescope & & [$\mu$m] & [$\mu$Jy]\\
\hline
MegaCam/CFHT & $u^{*}$ & 0.3783 & $<1.93\times10^{-2}$ \\ \hline
Suprime-Cam & $B$ & 0.4458 & $<4.12\times10^{-2}$\\
/Subaru & $V$ & 0.5478 & $<6.90\times10^{-2}$\\
& $r^{+}$ & 0.6289 & $<6.88\times10^{-2}$\\
& $i^{+}$ & 0.7684 & $<9.32\times10^{-2}$\\
& $z^{++}$ & 0.9037 & $<2.92\times10^{-1}$\\
\hline
VIRCAM & $J$ & 1.2495 & $<3.70\times10^{-1}$\\
/VISTA & $H$ & 1.6553 & $<5.22\times10^{-1}$\\
& $K_s$ & 2.1640 & (4.25$\pm$1.85)$\times10^{-1}$\\
\hline
IRAC/Spitzer & ch1 & 3.5634 & $<1.10$\\
& ch2 & 4.5110 & $<1.36$\\
& ch3 & 5.7593 & $<2.12$\\
& ch4 & 7.9595 & $<4.27$\\
\hline
\hline
\end{tabular}
\caption{Summary of available data for Gal-A in each photometric band used for the SED-fitting (see section \ref{section:mstar}). The first two columns are the instruments (with their telescopes) and filters used. Central wavelength is the mean wavelength weighted by the transmission of the filter. In the last column, the fluxes (which are all 2$\sigma$ upper limits except for the $K_s$ band) of Gal-A are shown. For the IRAC channels, we report the upper limits obtained by measuring the flux of Gal-A before the deblending procedure (see text). Except for the $K_s$ detection (which is obtained with \texttt{SExtractor}), all the estimated photometry is directly obtained from the maps with an aperture of 1.4'' diameter centered on Gal-A.}
\label{tab:phot}
\end{center}
\end{table}
\begin{table*}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
\hline
& $\mathrm{\nu_{rest}}$ & $z_\mathrm{gal}$ & $\mathrm{log(L_{line})}$ & $\mathrm{log(L_{FIR})}$ & log(SFR) \\
& [GHz] & & [L$_{\odot}$] & [L$_{\odot}$] & [M$_{\odot}$/yr] \\
\hline
CO(9-8) & 1036.9 & 2.043 & 8.04 $\pm$ 0.04 & 11.44 $\pm$ 0.50 & 1.45 $\pm$ 0.50 \\
CO(10-9) & 1152.0 & 2.381 & 8.20 $\pm$ 0.04 & 11.42 $\pm$ 0.50 & 1.43 $\pm$ 0.50 \\
{[CII]} & 1900.5 & 4.577 & 8.88 $\pm$ 0.04 & 11.38 $\pm$ 0.50 & 1.38 $\pm$ 0.50 \\
\hline
\hline
\end{tabular}
\caption{Summary of the physical parameters estimated for the three possible emission lines attributed to Gal-A. The first three columns report the considered emission line, its rest-frame frequency, and the redshift $z_\mathrm{gal}$ derived using the observed peak frequency, respectively. The fourth and fifth columns list the line luminosity ($\mathrm{L_{line}}$) and the total infrared luminosity ($\mathrm{L_{FIR}}$) for each emission line, respectively. Finally, the last column reports the SFRs, directly computed from the FIR luminosities following \citet{Kennicutt98}.}
\label{tab:param}
\end{center}
\end{table*}
\subsection{Analysis of the serendipitous source}
\label{sec:analysis}
Since Gal-A shows no optical counterpart, we do not know a priori the nature of the emission line; it could be [CII] emission at a similar redshift of DC$\_$665626 (i.e. $z\sim4.6$), but also high-J CO transitions are expected ($\mathrm{J_{up}} > 3$) at the observed frequencies in ALMA Band 7, although at lower redshift \citep{Carilli13}.
In this work, we consider only the two high-J CO transitions with $\mathrm{J_{up}}=9,10$ which fall into the SPW of observation at $z\gtrsim2$. Indeed, \cite{Ilbert13} claim that galaxies at $z<2$ (corresponding in our case to lower CO transitions) should be more easily detected in UV/optical filters, with a fraction always greater than 95\% of sources detected in at least four photometric bands, from UV to NIR \citep{Ilbert06}. Therefore, if our source was at $z<2$, we would expect it to be visible in the optical bands shown in Fig. \ref{fig:cutouts}.
For these reasons, in this work we discuss the nature of Gal-A considering three transitions as possible interpretations for the observed emission: [CII] at $\mathrm{\nu_{rest}}=1900.5$ GHz, CO(9-8) at $\mathrm{\nu_{rest}}=1036.9$ GHz, and CO(10-9) at $\mathrm{\nu_{rest}}=1152.0$ GHz. As the observed emission line has a peak frequency of 340.76 GHz, Gal-A would be at redshift $z_\mathrm{gal} = 4.577$, $z_\mathrm{gal} = 2.043$ and $z_\mathrm{gal} = 2.381$ for [CII], CO(9-8) and CO(10-9), respectively. Table \ref{tab:param} lists the considered transitions and their rest frequencies, as well as the corresponding redshift for Gal-A in the three cases.
To estimate the intensity of the line and continuum emissions from Gal-A, we separate these components using the CASA \texttt{IMCONTSUB} task; in particular, providing as input all the channels in the SPWs that are free of line emission, this task creates a continuum map of the source and a continuum-subtracted cube. We then select all the consecutive channels having emission above 1$\mathrm{\sigma_{spec}}$ (i.e. the rms estimated from the line spectrum) encompassing the emission line in order to compute the moment-0 map with the CASA \texttt{IMMOMENTS} task.
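As an illustration, the two CASA steps can be sketched as follows; the file names and channel ranges are placeholders to be set from the line-free and line-encompassing channels identified above:
\begin{verbatim}
# run inside CASA; file names and channel ranges are placeholders
imcontsub(imagename='DC_665626_cube.image',
          linefile='gal_A.line',       # continuum-subtracted cube
          contfile='gal_A.cont',       # continuum map
          fitorder=0,
          chans='0~40;60~127')         # line-free channels

immoments(imagename='gal_A.line',
          moments=[0],                 # integrated line intensity
          chans='45~55',               # channels encompassing the line
          outfile='gal_A.mom0')
\end{verbatim}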
\begin{figure*}
\begin{center}
\includegraphics[width=13cm,angle=0]{LCO_LCII_V5.png}
\caption{Left panel: empirical relations between CO(9-8) (solid brown line), CO(10-9) (solid blue line) and FIR luminosity \citep{Liu15} with overlaid the values for individual local galaxies as brown and blue open squares, respectively (Liu et al. 2015, private communication). The two stars are the values found for Gal-A in this work (same color legend). Error bars are estimated by propagating the error of the line flux on L$\mathrm{_{CO}}$, and assuming a variation of 0.5 dex for $\mathrm{L_{FIR}}$. Also shown are the values obtained for high-redshift sub-mm galaxies/quasi-stellar objects (QSOs) as the orange and grey filled circles, in case of CO(9-8) and CO(10-9) transitions, respectively \citep{Carilli13,ALMA15,Carniani19}. Right panel: [CII] as a function of FIR luminosity for several kinds of objects at different redshifts. Black crosses are local SFGs \citep{Malhotra01b}; brown diamonds are $z = 1-2$ galaxies, including starburst- and AGN-dominated sources \citep{Stacey10}; magenta triangles are $z = 4.1-7.1$ QSO host galaxies \citep{Pety04, Maiolino05, Iono06, Maiolino09, Wagg10, Willott13}; $z = 4-7$ SFGs are the cyan, green, and orange hexagons \citep{Lagache18}. The dashed grey line represents the average [CII]-to-FIR ratio for local galaxies \citep{Ota14}. The yellow star shows the position of our source. Error bars are estimated by propagating the error of the line flux on L$_\mathrm{[CII]}$, and assuming a variation of 0.5 dex for $\mathrm{L_{FIR}}$.}
\label{fig:LCII}
\end{center}
\end{figure*}
The line and continuum fluxes are computed using the CASA \texttt{IMFIT} task. We define a region surrounding the emissions and then select only the pixels with a flux density larger than 2$\mathrm{\sigma}$: since the size of the emission region is comparable with the clean beam size, we assume that the source is unresolved and we take the peak flux as the total flux. We obtain $\mathrm{S_{cont}} = 245 \pm 24$ $\mu$Jy and $\mathrm{S_{line}\Delta v = 1.19 \pm 0.11}$ Jy km/s for the continuum and the line, respectively.
We derive the total infrared (between 8 and 1000 $\mu$m) luminosity of the source, in the three cases, assuming a shape of its SED from \cite{Magdis12}, and normalizing its flux to $\mathrm{S_{cont}}$, which is the observed flux at $\sim 845-880$ $\mu$m; according to \cite{Kennicutt98}, this luminosity also provides a good estimate of the obscured SFR. We obtain $\mathrm{log(L_{FIR}/L{_\odot})}=11.38\pm0.5$ in case of [CII] emission, $\mathrm{log(L_{FIR}/L{_\odot})}=11.44\pm0.5$ for CO(9-8), and $\mathrm{log(L_{FIR}/L{_\odot})}=11.42\pm0.5$ for CO(10-9) emissions. The uncertainties on the FIR luminosities are calculated by adding in quadrature the error on the continuum flux ($\sim0.04$ dex, which directly affects the $\mathrm{L_{FIR}}$ estimates), and a systematic error of 0.5 dex which takes into account possible variations in the luminosity caused by different FIR SED templates; as can be seen, this latter term dominates over the uncertainty on the flux. Following Eq. (4) in \cite{Kennicutt98}, these FIR luminosities translate into SFRs ranging from 24 to 28 $\mathrm{M_{\odot}/yr.}$\footnote{We scale the SFR from Salpeter to Chabrier IMF by dividing by 1.7 (e.g. \citealt{Zahid12}).} Finally, we estimate the line luminosities as in \cite{Solomon92} using the following relation:
\begin{ceqn}
\begin{equation}
\mathrm{L_{line} = 1.04 \times 10^{-3} \hspace{0.5mm} S_{line}\Delta v \hspace{0.5mm} D^2_L \hspace{0.5mm} \nu_{obs} \hspace{1mm} [L_{\odot}]},
\label{eq:L_CII}
\end{equation}
\end{ceqn}
\noindent where $\mathrm{D_L}$ is the luminosity distance of the source in Mpc, and $\mathrm{\nu_{obs}}$ the observed peak frequency in GHz. We thus obtain $\mathrm{log(L_{[CII]}/L{_\odot})}=8.88\pm0.04$, $\mathrm{log(L_{CO}/L{_\odot})}=8.04\pm0.04$ for CO(9-8) and $\mathrm{log(L_{CO}/L{_\odot})}=8.20\pm0.04$ for CO(10-9), where the uncertainties are computed by propagating the line flux error on the above equation. Table \ref{tab:param} reports all the above-mentioned physical quantities computed for Gal-A.
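For reference, a minimal Python sketch of these computations, using \texttt{astropy} for the luminosity distance; the numerical values reproduce the [CII] case of Table \ref{tab:param}, and the Kennicutt (1998) calibration with the factor of 1.7 for the Chabrier IMF follows the text:
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def line_luminosity(S_dv, nu_obs, z):
    # Equation 1: S_dv in Jy km/s, nu_obs in GHz -> L_line in L_sun
    DL = cosmo.luminosity_distance(z).value   # Mpc
    return 1.04e-3 * S_dv * DL**2 * nu_obs

L_cii = line_luminosity(1.19, 340.76, 4.577)  # ~7.6e8 L_sun (log ~ 8.88)

def sfr_from_lfir(L_fir_lsun):
    # Kennicutt (1998), converted to solar units; /1.7 rescales
    # the Salpeter-based calibration to a Chabrier IMF
    return 1.72e-10 * L_fir_lsun / 1.7         # M_sun / yr

sfr = sfr_from_lfir(10**11.38)                 # ~24 M_sun / yr
\end{verbatim}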
\section{Results}
\subsection{On the nature of the serendipitous source}
With only the information from the ALMA Band 7 line and continuum, and with no detections in optical bands, unveiling the nature of Gal-A is a challenging task. We use the physical quantities estimated in section \ref{sec:analysis} to draw plausible conclusions about this source.
Fig. \ref{fig:LCII} (left panel) shows the correlation between L$\mathrm{_{CO}}$ (for the (9-8) and (10-9) transitions) and $\mathrm{L_{FIR}}$ for a compilation of SFGs in the literature, together with the expected position of Gal-A; the respective best-fitting lines on the individual data are also shown (solid lines, \citealt{Liu15}). It is worth noting that the reported values are for local galaxies, spanning a FIR luminosity range between $\sim 10^8-10^{12}$ L$_{\odot}$. However, the empirical correlations continue to apply even including high-redshift galaxies (the high-redshift points in the figure); indeed, in this case, as shown in \cite{Liu15}, the results of the fit do not significantly change. We then note that the computed $\mathrm{L_{FIR}}$ of \cite{Liu15} are integrated between $40-400$ $\mu$m, which is a narrower range than the one adopted in this paper. In order to take this difference into account, we rescale the FIR luminosities of Gal-A in Fig. \ref{fig:LCII} to the same integration interval as in \cite{Liu15}, for consistency ($\mathrm{L_{FIR}^{8-1000}/L_{FIR}^{40-400} \sim 1.4}$, on average). It can be seen that, for both possible CO transitions, our galaxy would be an outlier of the empirical relations found by \cite{Liu15}, if it were at $z\sim2$. However, considering the large uncertainties on $\mathrm{L_{FIR}}$ (i.e. 0.5 dex), Gal-A could still be part of the lower envelope of local SFGs in the figure, tracing high-density regions ($\mathrm{n_{H2,crit}}\sim10^5-10^6$ $\mathrm{cm}^{-3}$; \citealt{Carilli13}) where star formation may occur.
In the right panel of Fig. \ref{fig:LCII} we plot the [CII] luminosity as a function of $\mathrm{L_{FIR}}$ in case Gal-A was a [CII] emitter at $z \sim 4.6$, along with other results from several authors for different types of objects (e.g. \citealt{Malhotra01b}, \citealt{Stacey10}). Our source perfectly sits on the local SFGs relation, with $\mathrm{log(L_{[CII]}/L_{FIR})} \sim -2.5$; possibly, this galaxy may belong to the high-redshift SFGs population which extends to $\mathrm{log(L_{FIR}/L_{\odot})} \sim 11$. As previously said, the [CII] line is mostly produced by UV radiation field in star-forming regions (e.g. \citealt{Cormier15}), then it can trace the SFR. Therefore, as the FIR emission marks out the SFR of a source, the relation between [CII] luminosity and $\mathrm{L_{FIR}}$ translates into a correlation between $\mathrm{L_{[CII]}}$ and the SFR of a galaxy; Gal-A follows this relation, not showing the typical [CII] deficit which arises at $\mathrm{L_{FIR}>10^{11}}$ $\mathrm{L_{\odot}}$ \citep{Luhman98,Malhotra01a,Luhman03,Lagache18}.
These results suggest that our source, randomly detected in the DC$\_$665626 field, may more likely be a strongly obscured [CII] emitter at high redshift. However, to validate this hypothesis, more data are needed.
\subsection{Estimate of the stellar mass}
\label{section:mstar}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{spectra_V2.png}
\caption{SEDs of Gal-A at $z=4.6$ (top panel) and $z=2.2$ (bottom panel). The green and blue curves are the best-fit models computed with the MAGPHYS and LePHARE codes, respectively. Upper limits on the flux, as reported in Table \ref{tab:phot}, are shown in black. The orange points with the error bars are the detection in the UltraVISTA $K_s$ band and the observed ALMA continuum in Band 7.}
\label{fig:SED}
\end{center}
\end{figure}
We derive the stellar mass of Gal-A through SED-fitting using LePHARE \citep{Arnouts99, Ilbert06}.
We use a synthetic set of templates of SFGs based on stellar population synthesis models from \cite{Bruzual03}. We explore constant, exponentially declining (with $\tau=0.1,0.3,1,3$ Gyrs) and delayed (with $\tau=0.1,0.5,1,3$ Gyrs) SFHs. To account for metallicity dependence, we use models with solar ($\mathrm{Z_{\odot}}$) and sub-solar (0.2 $\mathrm{Z_{\odot}}$) metallicity. We then account for dust attenuation using the \cite{Calzetti00} attenuation law with a stellar $E_s(B-V)$ ranging from 0 to 0.7 in steps of 0.05. Following \cite{Ilbert09}, we also add the contribution of rest-frame UV and optical emission lines in the different filters. Finally, following \cite{Faisst19}, we perform the fit in flux density space and add systematic errors (depending on the filter) in order to avoid the $\chi^2$ computation to be dominated by small errors.
Fig. \ref{fig:SED} shows the SEDs obtained with LePHARE (blue curves) from the best-fit between the models and the photometry of Gal-A at $z=4.6$ and $z=2.2$. In the first case, the best-fit is given by an exponentially declining model with $\tau=3.0$ Gyrs while, at $z=2.2$, a delayed $\tau$-model with $\tau=0.5$ Gyrs better reproduces the observations.
Since Gal-A is very faint from optical to NIR wavelengths, we decide to perturb the flux in each filter by its relative rms to test the dependence of the fitting on the observed photometry of the galaxy. We thus run a Monte Carlo simulation, building 1000 perturbed SEDs that we then refit, in order to obtain a better estimate of the above-mentioned physical parameters from their probability distributions. In more detail, we extract the perturbed flux in each band from a Gaussian distribution centred on the measured flux and with standard deviation equal to the measured rms. We list our results in Table \ref{tab:fit}. At $z=4.6$, these results point towards the solution for which Gal-A is a young, dusty SFG. Moreover, as can be seen, the SFR and the FIR luminosity are in fair agreement with the corresponding quantities in Table \ref{tab:param}. Adopting the same procedure for the SED-fitting at $z=2.2$, we find that Gal-A should be a less massive and dustier galaxy with respect to the previous case.
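A minimal sketch of the perturbation step; \texttt{flux\_bands} and \texttt{rms\_bands} are assumed to hold the measured fluxes and uncertainties of Table \ref{tab:phot}, and each realisation is subsequently refit with LePHARE:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n_real = 1000

# each perturbed flux is drawn from a Gaussian centred on the measured
# flux, with standard deviation equal to the measured rms
perturbed = rng.normal(loc=flux_bands, scale=rms_bands,
                       size=(n_real, len(flux_bands)))
\end{verbatim}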
We further compare the results obtained with LePHARE with the MAGPHYS code \citep{daCunha08}, in which we also include the observed ALMA continuum in Band 7. The best models that fit the observations are shown in Fig. \ref{fig:SED} as the green curves. The results from the best-fit, both at $z=2.2$ and $z=4.6$, are very similar to those of LePHARE within the uncertainties. This reassures us about the robustness of our estimates.
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{SFR_vs_massV10.png}
\caption{Star-forming MS relations (dot-dashed lines; \citealt{Speagle14}) at redshift 2.2 (left panel) and 4.6 (right panel). The grey bands indicate the scatter from the MS ($\pm0.3$ dex width). The orange stars represent the positions of our source in the diagram, given by the estimated stellar mass from the SED-fitting and the SFR from the FIR luminosity of Gal-A. For the case at $z=4.6$, we also show the positions of the ALPINE galaxies at $4.4\leq z \leq 4.6$ (small circles).}
\label{fig:mlim}
\end{center}
\end{figure*}
With the stellar mass obtained from the SED-fitting and the SFR measured from the FIR luminosity of the source, we determine the position of Gal-A along the MS of SFGs. In Fig. \ref{fig:mlim} we show the MS relations, assuming a Chabrier IMF, at $z = 2.2$ (left panel) and $z = 4.6$ (right panel) obtained by \cite{Speagle14} combining measurements from previous works in literature. Should the source be at $z=2.2$, it would lie $\sim2\sigma$ above the MS, towards the region populated by starburst galaxies.
If the source is at $z=4.6$, instead, it would sit on its corresponding MS. In this case, we also show the location of the ALPINE sample (in the redshift range $4.4\leq z\leq4.6$) in the figure. The ALPINE galaxies have ages in the range $7.8\lesssim \mathrm{log(Age)} \lesssim 9.0$ and $E_s(B-V)$ between 0 and 0.5 \citep{Faisst19}. Gal-A has a similar age to those estimated for the ALPINE targets; moreover, its SFR and $\mathrm{M_{*}}$ are comparable with those of the ALPINE sources and place it along the MS at $z=4.6$. However, the mean $E_s(B-V)$ of the ALPINE galaxies is $\sim0.1$, while Gal-A has $E_s(B-V)\sim0.4$, lying on the tail of the distribution of the color excess, and making it undetected in the optical bands. In this scenario, we should expect an entire population of optically-invisible SFGs, still to be observed, which might significantly contribute to the cosmic SFRD at early times.
\begin{table}
\begin{center}
\begin{tabular}{c c c}
\hline
\hline
Physical & z=2.2 & z=4.6\\
parameters & &\\
\hline
$E_s(B-V)$ & $0.5\pm0.1$ & $0.4\pm0.1$\\
$\mathrm{log(Age/Gyrs)}$ & $7.9\pm0.3$ & $7.8\pm0.1$\\
$\mathrm{log(SFR/M_{\odot}yr^{-1})}$ & $1.3\pm0.4$ & $2.1\pm0.3$\\
$\mathrm{log(L{_{FIR}}/L_{\odot})}$ & $11.0\pm0.1$ & $11.6\pm0.2$\\
$\mathrm{log(M_{*}/M_{\odot})}$ & $9.1\pm0.2$ & $9.7\pm0.2$\\
\hline
\hline
\end{tabular}
\caption{Physical parameters estimated from the SED-fitting at $z=2.2$ and $z=4.6$. Each value represents the mean of the probability distribution obtained by perturbing the photometry of Gal-A 1000 times and fitting that photometry with the models. The uncertainties are given by the 16th and 84th percentiles of the distributions.}
\label{tab:fit}
\end{center}
\end{table}
\subsection{Estimate of the dynamical mass}
In this paragraph we attempt an estimate of the galaxy dynamical mass (M$_\mathrm{dyn}$) obtained from the FWHM of the observed emission line. Following \cite{Wang13}, we assume a rotating disk geometry for the gas as a first approximation; in this way, $\mathrm{M_{dyn} = 1.16\times10^5 \hspace{0.5mm} v_{cir}^2 \hspace{0.5mm} D}$, where $\mathrm{v_{cir} = 0.75 \hspace{0.5mm} FWHM_{line}/sin(i)}$ is the circular velocity of the gas disk in km/s (with $i$ the inclination angle between the gas disk and the line of sight), and D is the disk diameter in kpc. Since Gal-A is not resolved, we take the FWHM of the major axis of the 2D Gaussian fitted to the emission line as the size of our galaxy ($1.06 \pm 0.04$ arcsec, which corresponds to $7.09 \pm 0.27$ kpc at $z \sim 4.6$, and to $8.99 \pm 0.34$ kpc at $z \sim 2.2$). We derive dynamical masses (uncorrected for galaxy inclination) of $\mathrm{M_{dyn} \hspace{0.5mm} sin^2(i) = 4.4 \times 10^{10}}$ $\mathrm{M_{\odot}}$ and $5.6 \times 10^{10}$ $\mathrm{M_{\odot}}$ for $z = 4.6$ and $z=2.2$ respectively, with a 25\% uncertainty obtained from individual errors on the FWHM$_\mathrm{line}$ and on the size of the source. Following \cite{Capak15}, we assume two values for the inclination angle, $\mathrm{sin(i)}=0.45$ and $\mathrm{sin(i)}=1$, ranging from a nearly face-on to an edge-on disk. When $\mathrm{sin(i)=1}$, the previous dynamical masses remain unchanged; however, in the case with $\mathrm{sin(i)=0.45}$, $\mathrm{M_{dyn}}$ increases by a factor of $\sim5$. This reflects the large uncertainties on the size and geometry of the source, which cannot be well constrained with the current data and our poor resolution.
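A minimal sketch of this estimate; the numbers reproduce the inclination-uncorrected $z=4.6$ case quoted above:
\begin{verbatim}
import numpy as np

def m_dyn(fwhm_kms, D_kpc, sin_i=1.0):
    # Wang et al. (2013): M_dyn = 1.16e5 * v_cir^2 * D  [M_sun]
    v_cir = 0.75 * fwhm_kms / sin_i        # circular velocity [km/s]
    return 1.16e5 * v_cir**2 * D_kpc

print(m_dyn(308.0, 7.09, sin_i=1.0))       # ~4.4e10 M_sun (edge-on, z=4.6)
print(m_dyn(308.0, 7.09, sin_i=0.45))      # ~5x larger (nearly face-on)
\end{verbatim}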
Furthermore, this approximation could cease to be valid in case the stellar mass of the source is smaller than the mass threshold above which galaxies are thought to form ordered disks. For instance, \cite{Simons15} found a so-called \lq\lq mass of disk formation\rq\rq{} of $\mathrm{log(M_{*}/M{_\odot})}=9.5$ above which the majority of the galaxies of their sample are rotation-dominated; below this threshold there is instead a large scatter and the galaxies could be either rotation-dominated disks and asymmetric or compact galaxies without any sign of rotation. At $z=2.2$, Gal-A should have $\mathrm{log(M_{*}/M{_\odot})}=9.1$, therefore it is prone to this kind of issue.
For comparison, we also run the 3D-BAROLO algorithm (3D-Based Analysis of Rotating Objects from Line Observations; \citealt{DiTeodoro15}) on the continuum-subtracted data cube to obtain a more accurate estimate of the dynamical mass. This code creates synthetic 3D observations of the galaxy and compares them with the input cube, finding the kinematical and geometrical parameters which best describe the data. It is particularly useful to retrieve information on low-resolution data where the kinematics is biased by the size of the beam, as in this case. We find $\mathrm{log(M_{dyn}/M_{\odot}) = 10.4\pm1.0}$ for $z=4.6$ and $\mathrm{log(M_{dyn}/M_{\odot}) = 10.5\pm1.0}$ for $z=2.2$. These results are in reasonable agreement with the former, given the large error on $\mathrm{M_{dyn}}$. In particular, at both redshifts, $\mathrm{M_{dyn}/M_{*}} > 1$, likely indicating that the galaxy has recently begun forming stars, resulting in small stellar masses and large gas fractions. However, given the large uncertainties on the dynamical mass, this result is not conclusive.
\section{Discussion}
In light of the above results, it seems more likely that Gal-A is a dust-obscured galaxy at $z=4.6$. From the [CII]/FIR diagnostic, our source presents similar properties to a large population of SFGs in the literature. In addition, the SED-fitting reveals a large dust attenuation as expected for such an obscured galaxy, and places Gal-A along the MS at $z\sim4.6$. Nevertheless, we cannot exclude the possibility that the observed emission line is associated with a dusty, less massive source at $z\sim2.2$, with a $\sim2\sigma$ scatter from the MS and having relatively higher CO luminosities than those typical of local SFGs and high-z sub-millimeter galaxies. In this latter case, the (spectroscopic) redshift of Gal-A would also be comparable with the (photometric) redshifts of Gal-B and Gal-C, possibly suggesting an ongoing merger at that epoch. However, to test this hypothesis, more kinematic information is needed.
In the most likely scenario in which Gal-A is at $z\sim4.6$, it may be part of the same dark matter halo as DC$\_$665626. In this case, we can assume a stellar mass $-$ halo mass (SMHM) relationship in order to estimate some physical properties of the halo. There are several ways to derive this relation; e.g. \cite{Behroozi10,Behroozi13} used the abundance matching technique to explore the SMHM relation out to $z\sim8$ assuming that the most massive galaxies are monotonically assigned to the most massive halos. Another approach is the Halo Occupation Distribution modeling, which assumes that the number of galaxies in a given dark matter halo depends only on the halo mass; \cite{Harikane16} used this method to reproduce the SMHM relation out to $z=7$, obtaining results in agreement with \cite{Behroozi13}. In particular, since Gal-A has a larger stellar mass than DC$\_$665626, we can suppose that the ALPINE target is a satellite galaxy of our serendipitous source embedded in its dark matter halo. In this case, from the stellar mass of Gal-A (i.e. log$\mathrm{(M_{*}/M_{\odot})} = 9.7 \pm 0.2$), the previously discussed models predict a halo mass between log$\mathrm{(M_{h}/M_{\odot})\sim11.5}$ and log$\mathrm{(M_{h}/M_{\odot})\sim11.7}$. Using the empirical model by \cite{Mashian15}, which links the SFR of the central galaxy to its host halo mass via abundance matching techniques, $\mathrm{M_h}$ also translates into an SFR between $\sim20$ and 40 $\mathrm{M_{\odot} yr^{-1}}$, in agreement, within the uncertainties, with the value estimated from the FIR continuum for Gal-A, i.e. SFR $\sim24$ $\mathrm{M_{\odot} yr^{-1}}$. Exploiting this information and following \cite{Lapi18}, we compute the virial radius of the halo as $\mathrm{R_H \equiv [3 M_H/4\pi \rho_c \Delta_H E_z]^{1/3}}$, where $\mathrm{\rho_c \approx 2.8 \times 10^{11} h^2}$ $\mathrm{M_{\odot}/Mpc^3}$ is the critical density, $\mathrm{\Delta_H \simeq 18 \pi^2 + 82[\Omega_m(1+z)^3/E_z-1] - 39[\Omega_m(1+z)^3/E_z-1]^2}$ is the non-linear density contrast at collapse, and $\mathrm{E_z = \Omega_\Lambda + \Omega_m (1+z)^3}$ is a redshift-dependent factor; we obtain $\mathrm{R_H \sim 39-45}$ kpc. Comparing this result to the observed spatial offset between our source and DC$\_$665626 ($\sim 40$ kpc), we may conclude, according to this scenario, that the main ALPINE target could be a low-mass satellite in the dark matter halo of Gal-A.
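A minimal sketch of the virial radius computation from the expressions above, assuming $h=0.7$:
\begin{verbatim}
import numpy as np

h, Om, OL = 0.7, 0.3, 0.7
RHO_C = 2.8e11 * h**2                       # critical density [M_sun/Mpc^3]

def r_virial(M_h, z):
    # Lapi et al. (2018): R_H = [3 M_H / (4 pi rho_c Delta_H E_z)]^(1/3)
    Ez = OL + Om * (1.0 + z)**3
    x = Om * (1.0 + z)**3 / Ez - 1.0
    Delta_H = 18.0 * np.pi**2 + 82.0 * x - 39.0 * x**2
    R_mpc = (3.0 * M_h / (4.0 * np.pi * RHO_C * Delta_H * Ez))**(1.0 / 3.0)
    return R_mpc * 1e3                      # kpc

print(r_virial(10**11.6, 4.6))              # ~42 kpc, within the 39-45 kpc range
\end{verbatim}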
It is worth noting that we obtain similar results even in the opposite case in which Gal-A is a satellite galaxy of DC$\_$665626. Following the same procedure explained above, and since DC$\_$665626 has $\mathrm{log(M_{*}/M_{\odot}) \sim 9.2}$, we obtain $\mathrm{log(M_{h}/M_{\odot})\sim11.4}$ and $\mathrm{log(SFR/[M_{\odot} yr^{-1}])}\sim1.0$ (which is consistent with the SFR of the ALPINE target obtained through SED-fitting). In turn, this provides $\mathrm{R_H \sim 36}$ kpc, which is again comparable with the observed offset between the two galaxies.
Finally, Gal-A may also be part of the massive proto-cluster of galaxies PCI J1001+0220 located at $z=4.57$ in the COSMOS field \citep{Lemaux18}. In fact, our source lies well inside the 2 Mpc boundary used for spectroscopic membership in that work, with a systemic velocity offset $< 350$ km/s. This strengthens the hypothesis that this source is at $z\sim4.6$.
\section{Summary and Conclusions}
In this paper we present the characterisation of Gal-A, a galaxy serendipitously discovered in one of the ALPINE pointings. This source is detected in both line and continuum emission and does not show any counterpart from the UV to the FIR, except in the $K_s$ band from UltraVISTA (DR4). This leads to large uncertainties on the nature of the observed emission line, i.e. [CII] at $z_\mathrm{gal} = 4.577$, or CO(9-8) or CO(10-9) at $z_\mathrm{gal} = 2.043$ and 2.381, respectively.
Although we cannot definitively exclude that Gal-A is a dust-obscured galaxy at $z\sim2.2$, the analysis undertaken in this work suggests that this source is more likely a $z\sim4.6$ MS SFG missed by UV/optical surveys because of its high level of dust obscuration. Moreover, several dusty galaxies without optical/NIR detections have already been confirmed at this epoch, mostly as extreme starbursts (e.g. \citealt{Riechers13,Riechers17,Alcalde19}); Gal-A could be part of this elusive population of sources, with a smaller luminosity and/or mass. In this case, we compute an SFR $\sim 24$ $\mathrm{M_{\odot}/yr}$ from the FIR luminosity, $\mathrm{log(M_{*}/M_{\odot})} \sim 9.7$, and an age of $\sim70$ Myr from Monte Carlo simulations on the SED-fitting procedure.
Whether the emission comes from CO or [CII], both cases presented above are undoubtedly interesting. If it were at $z\sim2.2$, our galaxy would increase the sample of high-J CO emitters at high redshift, enabling a more in-depth study of the excitation conditions of the molecular gas in these sources; only a handful of such objects have been detected so far, and most of them appear to be associated with active galactic nucleus (AGN) activity \citep{Weiss07,Riechers11,Riechers13}. Should the serendipitous emission be [CII] instead, we would have identified an SFG invisible to optical-NIR observations. [CII] emission traces recent star formation in the galaxy, and the ALPINE survey will allow us to quantify how many objects similar to the one analysed in this work we will be able to discover. In fact, among the serendipitous sources found in ALPINE, there is a high fraction of objects without UV/optical counterparts (Loiacono et al. in prep.). Thanks to ALPINE, we are now able to estimate the overall contribution of these dust-obscured galaxies to the SFRD in the early Universe.
We plan to follow up this source spectroscopically in order to firmly establish the nature of its emission line. For instance, ALMA observations in Band 6 could reveal [NII] emission at 205 $\mu$m rest-frame if the galaxy is at $z\sim4.6$; in this case the [CII]/[NII] ratio would also provide the fraction of [CII] emission arising from the ionised gas, i.e. from star-forming regions \citep{Oberst06,Oberst11,Zhao16}. X-shooter at the Very Large Telescope (VLT) could also help unveil the redshift of this source by observing [OII] emission at $z=4.6$, or H$\alpha$ emission redshifted into the NIR region of the spectrum at $z\sim2.2$. However, these observations could be hampered by the large $E_s(B-V)$ found for this source, which makes it invisible in optical filters. Finally, the Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope (JWST) will be a powerful facility for the follow-up of this kind of source as well.
\section*{Acknowledgements}
This paper is based on data obtained with the ALMA observatory, under the Large Program 2017.1.00428.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. This work has made use of the Rainbow Cosmological Surveys Database, which is operated by the Centro de Astrobiología (CAB/INTA), partnered with the University of California Observatories at Santa Cruz (UCO/Lick, UCSC). We thank D. Liu and collaborators for providing us with individual values of CO and FIR luminosities estimated in their work. S.B., A.C., C.G., F.L., F.P., G.R., and M.T. acknowledge the support from grant PRIN MIUR 2017. L.V. acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant agreement No. 746119. D.R. acknowledges support from the National Science Foundation under grant numbers AST-1614213 and AST-1910107 and from the Alexander von Humboldt Foundation through a Humboldt Research Fellowship for Experienced Researchers. G.C.J. acknowledges ERC Advanced Grant 695671 ``QUENCH'' and support by the Science and Technology Facilities Council (STFC). S.F. is supported by the Cosmic Dawn Center of Excellence funded by the Danish National Research Foundation under the grant No. 140.
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
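For example, a minimal author block combining an affiliation with an email footnote might look like this (the name and address are of course placeholders):
\begin{verbatim}
\author[K. T. Smith]{Keith T. Smith$^{1}$\thanks{E-mail:
[email protected]}\\
$^{1}$Affiliation 1}
\end{verbatim}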
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt]
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt]
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
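As a minimal illustration of these commands (one possible usage, following the syntax described above):
\begin{verbatim}
The force is $\mathbfit{F} = m\mathbfit{a}$, the
Hessian matrix is $\mathbfss{H}$, and the gradient
is $\nabla\phi$.
\end{verbatim}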
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
If a figure or table is too long to fit on a single page it can be split it into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or latex distribution.
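For example, a typical command-line sequence (assuming a file called \texttt{paper.tex}; the exact commands depend on your compiler) is:
\begin{verbatim}
pdflatex paper
bibtex paper
pdflatex paper
pdflatex paper
\end{verbatim}
\noindent The repeated \LaTeX\ runs are needed so that the citations and cross-references resolve correctly.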
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the next section heading.
\section{Introduction}
The Spitzer Space Telescope has revolutionized the observational
characterization of exoplanets by detecting infrared emission from
these objects; measurements have been reported for HD 209458b
\citep{deming05a}, TrES-1 \citep{charbonneau05}, HD 189733b
\citep{deming06}, and $\upsilon$ Andromedae b \citep{harrington06}. HD
209458b, the first reported transiting exoplanet
\citep{charbonneau00}, is located at a distance of 47 pc and has a G0
stellar primary (V = 7.6 mag). The most recent system parameters for
this hot Jovian exoplanet have been established by \cite{knutson07},
with $P = 3.52474859 \pm 0.00000038$ days, $M_{planet} = (0.64 \pm
0.06) M_{J}$, an eccentricity consistent with zero, and
$R_{P}=(1.320\pm0.025)R_{J}$, 10-20\% larger than predicted by
irradiated planet models. The detection of infrared emission from hot
Jovian exoplanets has stimulated extensive theoretical work on the
atmospheric structure and emission of these planets. Constraining the
model predictions for infrared emission from hot Jovian atmospheres is
an important motivation for current observing programs.
Spectral characterization of hot Jovian exoplanets is a high priority
and is essential for understanding atmospheric composition and
properties. Spectroscopic detection of exoplanet emission has proved
challenging from the ground \citep{richardson03,deming05b}; space-based
infrared spectroscopy is particularly appealing due to the absence of
an atmosphere, improved signal-to-noise ratio (SNR), and instrument
stability. Recently, the announcements of a Spitzer/IRS detection of a
featureless emission spectrum from HD 189733b \citep{grillmair07} and
of a spectrum containing emission features from HD 209458b
\citep{richardson07} have generated great interest. However,
observations with the Spitzer IRS instrument are complicated by
systematic errors that are large compared to the observable signature.
Some of these systematic errors introduce wavelength-dependent
effects; thus, careful calibration and validation is essential. In
this paper we present results based on a new approach for calibrating
the major instrument systematic effects affecting these observations.
Using data taken from the Spitzer archive, we have determined the
spectrum of HD 209458b using two semi-independent methods.
\section{Observations}
\label{observations}
The observations we analyzed (originally proposed by Richardson et
al. 2007) were taken with the Spitzer Space Telescope \citep{werner04}
using the Infrared Spectrograph (IRS; Houck et al. 2004). The data
were taken on 6 July 2005 and 13 July 2005 as two separate
Astronomical Observing Requests (AORs 14817792 and 14818048) and
provide approximately continuous coverage of the secondary eclipse
event (see Fig. \ref{fig:initialTimeSeries}). The timing of the
observations is well suited for application of the secondary eclipse
technique (also termed ``occultation spectroscopy"), in which data
from portions of the orbit where light originates from the
``star+planet" and ``star" are subtracted to obtain the planet's
emission \citep{richardson03}. For both sets of observations, the IRS
instrument was operated in first order (7.5 to 15.2 $\mu$m) at low
spectral resolution (R=60-120; SL1) with a nod executed at the midpoint of
the observations. This observational sequence provides two completely
independent data sets that span an interval covering the sequence:
\begin{enumerate}
\item before eclipse (flux originates from star+planet),
\item ingress (planet flux contribution changing with time),
\item secondary eclipse (flux originates from star only),
\item egress (planet flux contribution changing with time), and
\item after eclipse (flux originates from star+planet).
\end{enumerate}
\noindent Each nod contains 140 samples with an integration time
of 60 seconds each.
To determine the orbital phase of HD 209458b we used the results by
\cite{knutson07} for both the period and the ephemeris. The time for
each data point was determined using the {$\rm DATE\_OBS$} keyword in
the header, which was then converted to Julian date using the IDL
routine JDCNV.pro from the IDL astronomy library. We then converted to
heliocentric Julian date (HJD) using the IDL routine {$\rm
HELIO\_JD.pro$} (also from the astronomical library) for direct
comparison with the \cite{knutson07} results. The phase was then
estimated by $phase = \mathrm{mod}(time\_in\_HJD - ephemeris,\
period)/period$. In what follows, we will refer to the segment
of the orbital phase when both star and planet are visible as ``SP".
Similarly, we refer to the segment of orbital phase when only the star
is visible (when the planet is passing behind the star) as ``S". To
determine the planet spectrum, we have applied the analysis described
below to the spectral range of the IRS SL1 module.
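As an illustration of the phase calculation (using an arbitrary illustrative time offset, since the ephemeris zero point is taken from \cite{knutson07} and not repeated here), for $time\_in\_HJD - ephemeris = 10.20$ days and a period of 3.52474859 days,
\begin{equation}
phase = \frac{\mathrm{mod}(10.20,\ 3.52474859)}{3.52474859} = \frac{3.1505}{3.52474859} \simeq 0.894 .
\end{equation}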
\section{Analysis}
\label{analysis}
The extracted flux density time series suffers from four kinds of
temporal changes (see Fig. \ref{fig:initialTimeSeries}) that
completely dominate (by a factor of $\sim$10) the expected signature
of secondary eclipse flux decrement of $\sim$0.0025 \citep{deming05a}.
These effects are (i) a flux offset between nods, (ii) a periodic flux
modulation, (iii) initial flux stabilization, and (iv) monotonic flux
drift within a nod. These temporal changes are not random; a scatter
diagram shows that the flux density values are highly correlated
(correlation coefficients of $\sim$0.99). We find that these four
major temporal flux density changes listed above are caused by (in
order of importance) errors in telescope pointing, background
subtraction, and latent charge accumulation.
Effective calibration of these systematic effects can be challenging
to demonstrate. To test our control of the systematic errors, we
developed two methods for estimating the exoplanet spectrum. The
first method is differential and has the property that errors which
are ``common-mode'' in wavelength are rejected. The second method is
an absolute method and results in an exoplanet spectrum in units of Jy. A
schematic picture of our data reduction method is shown in
Fig. \ref{fig:cal_diagram}; central to our approach is comparing the
results of the two semi-independent estimates of the exoplanet
spectrum. Because the two methods interact with systematic errors
differently, the comparison is useful for assessing the
level of uncalibrated, residual systematics. In this section, we
describe the initial data extraction, the major systematic errors, and
each of our spectral extraction methods. In Section 4, we discuss the
comparison between the differential and absolute methods for
obtaining the planet spectrum. We then present our results and
discuss the implications. We also discuss the differences between the
methods and results of our approach and previous work.
\subsection{Data Extraction}
Our initial data extraction method is an extension of the method
described by \cite{bouwman06}. The series of extracted images is
used to define a median background image for each of the two nod
positions. The median background image (for each nod position) is
then subtracted from all the individual observations with the source
in the other nod position; this generates the background subtracted
images. We then identify bad pixels using a median filter and visual
inspection. The bad pixels are then corrected using an approach
similar to the Nagao-Matsuyama filtering method \citep{nagano79}.
The source spectrum is then extracted using the method developed by
\cite{higdon04} and implemented in the SMART data reduction package.
The spectra were extracted using a fixed-width aperture of 6 pixels
centered on the position of the source. The exact source position
relative to the slit was determined by fitting a $\mathrm{sinc}$ profile to the
spectra in the dispersion direction using the collapsed and normalized source
profile. The accuracy to which the source position can be determined
is about 0.02 pixels. This, together with the aperture width of 6
pixels, ensures that any flux variability due to slight changes in the
positioning of the aperture is far smaller than the expected planetary
flux.
\subsection{Systematic Errors}
Here we discuss the origin and chromaticity of the three significant
systematic errors present in these data. There may be other
systematic errors as well, but they, and the residuals of the errors
we explicitly deal with, are smaller than the uncertainty level
achieved in our calibration. We acknowledge that there are different
points of view regarding the calibration of IRS data for determining
exoplanet spectra \citep{richardson07,grillmair07} and that these
approaches may perform similar (but not identical) corrections to the
data while ascribing the underlying systematics to different causes.
However, ours is the only method that allows determining the absolute
planet spectrum.
\subsubsection{Pointing Errors}
The periodic and linear drift components of the Spitzer pointing error
have been documented with long-duration IRAC observations
\citep{morales06}. Pointing errors cause modulation of the measured
flux because telescope motion perpendicular to the slit axis changes
the position of the stellar image with respect to the spectrometer
entrance slit; this causes changes in the vignetting of the stellar
image. Even small pointing errors change how the wings of the point
spread function (PSF) are vignetted. Since the PSF size is
proportional to wavelength, the measured flux changes due to pointing
are wavelength dependent. In the absence of other effects, the
measured flux density, $S(\lambda)$, is
\begin{equation}
S(\lambda) = {\bf \zeta}(y,\lambda) F(\lambda)
\end{equation}
where $F$ is the ``true'' flux density, $\zeta$ is the
pointing-induced fractional flux density ($\zeta =1$ for no pointing
error), and $y$ is the angular error with respect to the spectrometer
entrance slit center position in units of pixels. In principle, if
$\zeta$ can be determined, the effects of pointing error can be
corrected.
We determined $\zeta(y,\lambda)$ by using the spectral map
observations of IRS calibrators HD 173511 (AOR 13481216) and HR 7341
(AOR 16295168). The spectral map data consist of a series of pointed
observations in which the star spectrum is measured on a
two-dimensional grid ($7\times7$ and $5\times23$ positions, respectively, for these
AORs). For each scan perpendicular to the slit axis, we normalized
the measured spectrum by the spectrum measured at the nominal slit
center position, $\zeta(y) = S_{y}/S_{0}$. Assuming the slit has a
constant width, we combined the normalized measurements from all the
slit scans. This resulted in a series of values at each nominal
pointing position perpendicular to the slit axis ($y \in [y_{1},
y_{2}, y_{3}, \ldots]$). The difference in these values at each
nominal slit scan position reflects a pointing error that can be
corrected for in an iterative process. We defined a ``template'' by
taking the average value of the points at each pointing offset
position. The individual slit scan data were then shifted in the
horizontal axis and renormalized to minimize the $\chi^{2}$ value of
the shifted curve with respect to the template. After all the slit
scans had been shifted and renormalized, a linear interpolation was
done to find revised values for the template function at the nominal
pointing offset positions transverse to the slit axis. The individual
slit scan data were then shifted and renormalized again for a best fit
to the revised template function. This process was iterated until
convergence was reached; it resulted in pointing-error-corrected,
slit-scanned data. We determined $\zeta$ by fitting a cubic-spline at
each wavelength through the shifted and renormalized slit scan
measurements (see Fig. \ref{fig:pointingCorrect} top).
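In compact form (notation introduced here for convenience), each slit scan $i$ is assigned a shift $\Delta y_{i}$ and gain $a_{i}$ chosen to minimize
\begin{equation}
\chi^{2}_{i} = \sum_{j} \left[ a_{i} \, \zeta_{i}(y_{j} + \Delta y_{i}) - T(y_{j}) \right]^{2} ,
\end{equation}
where $T$ is the current template; the template is then rebuilt from the shifted, renormalized scans and the fit is repeated until the $\Delta y_{i}$ and $a_{i}$ converge.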
While a periodic pointing error component is frequently seen in
Spitzer observations, it is not necessarily repeatable in terms of
shape or amplitude \citep{carey07}. The IRS data we analyzed for HD
209458 show a periodic modulation of the measured flux (see Fig.
\ref{fig:initialTimeSeries}) that could be due to the Spitzer pointing
error. To test the hypothesis that changes in the measured flux are
due to pointing errors, we modelled the pointing error periodic motion
in both the spatial and spectral axis. This leads to an elliptical
motion that creates a symmetric profile about individual maxima and
minima. The asymmetric profiles in these data require the addition of
a harmonic term for angular velocity; when this is incorporated, the
pointing error is given by
\begin{equation}
\dot{\theta} = \dot{\theta}_{0} + A_{\theta} \sin(\omega t-\phi_{\theta}) ,
\end{equation}
\begin{equation}
x = x_{o} + m_{x} t + A_{x} \cos[\omega \theta(t)-\phi_{x}] ,
\end{equation}
\begin{equation}
y = y_{o} + m_{y} t + A_{y} \cos[\omega \theta(t)-\phi_{y}] ,
\end{equation}
where $t$ is time, $x$ is the position parallel to the slit axis (the
spatial dimension on the array), $y$ is the position perpendicular to
the slit axis (the spectral dimension of the array), $x_{o}$ and
$y_{o}$ are initial offsets, $m_{x}$ and $m_{y}$ are the linear drift
terms, $\dot{\theta}$ is the angular velocity, $A$ is the amplitude,
$\omega$ is the frequency, and $\phi$ is the phase. The normalization
of $t$ and $\dot{\theta}$ is determined by the conditions
\begin{equation}
t \in [0,2\pi] , \qquad
\int_0^{2\pi} \dot{\theta} \, dt = 2\pi , \qquad
A_{\theta} \leq \dot{\theta}_{0} .
\end{equation}
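\noindent Integrating the expression for $\dot{\theta}$ (taking $\theta(0)=0$) gives
\begin{equation}
\theta(t) = \dot{\theta}_{0} \, t + \frac{A_{\theta}}{\omega} \left[ \cos\phi_{\theta} - \cos(\omega t - \phi_{\theta}) \right] ,
\end{equation}
so the argument $\omega \theta(t)$ in the expressions for $x$ and $y$ advances non-uniformly in time; this is what skews the otherwise symmetric sinusoidal modulation about its extrema.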
\noindent We determined the parameters for the $x$ and $\theta$
components of our pointing model by fitting to the source motion along
the slit axis using the following steps for the data in each AOR:
\begin{enumerate}
\item{\bf Determine position:} For each measurement in the time
series, we constructed the spatial profile at each spectral channel.
These profiles are normalized by wavelength in the spatial axis and
shifted so that they can be ``stacked'' coherently. A median spatial
profile is then determined. This median spatial profile is then fit
with the function $\mathrm{sinc}^{2}(x)$. The fitted position of the maximum
of $\mathrm{sinc}^{2}(x)$ as a function of time is used as the measure of
telescope pointing changes in the spatial axis.
\item {\bf Linear fit:} We fit and removed the linear component,
$m_{x}$, of the source position in the spatial axis of the data in
each nod.
\item{\bf Determine the frequency:} To determine the frequency,
$\omega$, of the periodic oscillation in each nod, we took the Fourier
transform of the linearly detrended position function in the slit
spatial axis. The normalized frequency values were the same within
the errors, and the mean frequency was used in the remaining analysis.
\item{\bf Characteristic profile:} We folded the data, computed a
median profile and local standard deviation, applied a 10 $\sigma$
clip to remove discrepant points, and determined the mean profile.
\item {\bf Determine model parameters:} We determine the values for
$x_{o}, A_{\theta}$, $A_{x}$, $\phi_{\theta}$, $\phi_{x}$ by fitting
the predicted position along the slit axis to the measured source
position along the slit axis (see Fig. \ref{fig:positionFit}); the
values for these parameters are given in Table 1.
\end{enumerate}
\noindent At this point, we only need to determine the values of
$(y_{0},m_{y},A_{y},\phi_{y})$ to completely describe the telescope
pointing. Judicious selection of values for
$(y_{0},m_{y},A_{y},\phi_{y})$ produces an estimate of the cross-slit
position changes that (with appropriate normalization) agree
remarkably well with the intensity time series (see Fig.
\ref{fig:positionFit} bottom) and successfully reproduce the
asymmetric component in the shape of the periodic modulation.
The excellent agreement between our simple pointing model and the
observed changes in the measured flux confirm that, in the case of a
point source, the IRS measurement of the flux density is affected by
the position of the (stellar) image in the spectrometer entrance slit.
The results (see Fig. \ref{fig:positionFit} bottom) imply that
pointing changes as small as $\sim$ 10 milliarcseconds have an effect
on the measured IRS flux for bright, point-like objects. Because the
PSF size is a function of wavelength, the pointing error effect is
chromatic. The asymmetry in the PSF wings also causes pointing errors
to be asymmetric with respect to the nominal center of the slit (this
can be seen in Fig. \ref{fig:pointingCorrect}).
Equipped with equations 1 and 4, we can now decompose the changes in
the measured flux density into three specific kinds of pointing
errors, all of which can be seen in Fig. \ref{fig:initialTimeSeries}.
Each of these pointing errors contributes a specific component of $y$.
Note that the values of $A_{y}$ \& $\phi_{y}$ are the same for all
AOR/nod combinations, while $y_{o}$ \& $m_{y}$ are different for each
AOR/nod combination.
\begin{itemize}
\item initial peakup/nod error - The pointing error associated with
the initial peakup or nod operation. The high-accuracy peakup, used
for these observations, has a 1-$\sigma$ error circle radius of 0.4
arcseconds. This translates into a flux uncertainty of $\sim5-10$\%.
When a nod is executed, there is significant motion perpendicular to
the slit axis. This is the reason why the median flux density
differs in each AOR/nod combination. The initial error is static and represents
a constant offset described by $y=f(y_{0})$.
\item pointing drift - During an observation, there is a slow linear
drift in pointing during each nod. The drift rate is larger at the
nod2 position. The slow pointing drift rate ranges from 3 mas/hr to
19 mas/hr (based on a 1.85 arcsecond per pixel plate scale and a nod
duration of 2.9026 hr; see Table 1). This linear pointing error is
described by $y=f(m_{y})$.
\item periodic error - The Spitzer telescope has a known periodic
pointing error $\sim \pm 30$ milliarcseconds. This is the error that
causes the clear periodic modulation of the flux. The periodic
position changes are described by $y=f(A_{y},\phi_{y})$.
\end{itemize}
\subsubsection{Background Correction}
In the mid-infrared, accurate measurement of the infrared source flux
requires subtraction of the background due to local zodiacal emission.
To remove the background contribution to the spectrum, we construct
and subtract a median background image. However, this median image
must be constructed with care as there is a systematic error in the
background estimate due to leakage from the bright source. This
leakage is manifested as a flux density offset between the background
at the nod1 and nod2 positions. In principle, this offset could be
caused by structure in the background. However, inspection of IRS
calibrator star data shows that the difference in the background
between the nods is systematic in that it occurs for all the multiply
observed IRS calibrators we checked; the effect is highly repeatable
and is proportional to the measured source flux. IRS calibrators
observed with a series of slit offsets show the measured source flux
decreases with the slit offset from the target, and the background
offset is proportional to the measured flux. This suggests that some
of the light from the source contaminates the background through the
wings of the PSF. Because the Spitzer PSF is asymmetric
\citep{bayard04}, the leakage differs in nod 1 and nod 2.
To determine the amount of a point source contamination in the
background estimate, we have used observations covering an interval of
approximately three years for five IRS calibrator stars (HR 6606, HR
7341, HD 166780, HD 173511, HR 6348), together with the assumption of
a locally uniform background. The IRS calibrators we selected were
observed in the nominal nod1 and nod2 positions for both SL1 and SL2
modes. These stars were observed on a regular basis throughout the
Spitzer operational period up to the time of these observations. Each
star was typically observed at least 20 times over a three year
interval. Thus, slit precession over a period of one year is a strong
test of our assumption of uniform background.
We determined the source contamination in the background by
subtracting two SL1 background positions when the star is in the SL1
and SL2 positions. The background source contribution function,
$BSCF$, for the nod1 position has the form
\begin{equation}
BSCF_{nod1} = \frac{S_{leak}(nod1)}{S_{source}(nod2)} =
\frac{B_{nod1}(SL1,nod2) - B_{nod1}(SL2,nod1)}{S'(nod2) -
B_{nod2}(SL2,nod1)} \left( \frac{RSRF_{nod2}}{RSRF_{nod1}} \right),
\end{equation}
where $B_{nod1}(SL1,nod2)$ is the background at the nod1 position
measured when the source is located at the SL1, nod2 position;
$B_{nod1}(SL2,nod1)$ is the background at the nod1 position measured
when the source is located at the SL2, nod2 position; $S'(nod2)$ is
the measured source flux at the SL1, nod2 position with no background
correction; and $RSRF$ is the relative spectral response function at
either the nod1 or nod2 source position. For each term, the subscript
denotes the position on the array where the value was measured while
the source position at the time of the measurement is indicated in
parentheses. Thus, $B_{nod1}(SL2,nod1)$ is the background measured at
the SL1, nod1 position when the source is located at the SL2, nod1
position. Since we are calibrating SL1 data, the background is always
measured in the SL1 slit. However, determining the $BSCF$ requires
using data when a star was observed with both the SL1 and SL2 slits.
The $BSCF_{nod2}$ is similarly defined except that all nod1 instances
become nod2 and {\it vice versa}. The corrected background
flux density at the two SL1 nod positions is then
\begin{equation}
S(nod1) = \left[ S'(nod1) - B_{nod1}(SL1,nod2) \right] + BSCF_{nod1} \times S(nod2),
\end{equation}
and
\begin{equation}
S(nod2) = \left[ S'(nod2) - B_{nod2}(SL1,nod1) \right] + BSCF_{nod2} \times S(nod1).
\end{equation}
The system of linear equations is then solved for the background
corrected, measured source flux density, $S$, at each nod position.
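Explicitly, writing $D_{1} = S'(nod1) - B_{nod1}(SL1,nod2)$, $D_{2} = S'(nod2) - B_{nod2}(SL1,nod1)$, $c_{1} = BSCF_{nod1}$ and $c_{2} = BSCF_{nod2}$ (shorthand introduced here for compactness), the solution of the two equations above is
\begin{equation}
S(nod1) = \frac{D_{1} + c_{1} D_{2}}{1 - c_{1} c_{2}} , \qquad
S(nod2) = \frac{D_{2} + c_{2} D_{1}}{1 - c_{1} c_{2}} .
\end{equation}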
This results in an estimate of the $BSCF$ each time the calibrator
stars were observed. We then averaged the results for all the
calibrator observations to determine a mean $BSCF$ (see
Fig. \ref{fig:background}). The uncertainty in the $BSCF$ at each
wavelength was determined by the standard deviation in the mean.
Applying the $BSCF$ substantially reduces the background flux density
offset between the nod1 and nod2 positions (see
Fig. \ref{fig:background}). For wavelengths shorter than 13 $\mu$m,
the correction we derive is $\sim$0.4\% for nod 1 and $\sim$0.9\% for
nod 2. This means that there can be $\sim$ 0.5\% of the source flux
present in the wings of the PSF $\sim$ 20 arcseconds away from the
observed source position. Thus, the contamination of the background by
the source is of the same magnitude as the signal from the exoplanet.
This correction for source contamination of the background may not be
necessary for many observations. However, for high dynamic range
measurements on bright point sources, neglecting the leakage of the
source into the background estimate introduces systematic errors in
the data for each nod.
\subsubsection{Latent Charge Accumulation}
In the context of the IRS instrument, latent charge accumulation has
been reported by several authors
\citep{grillmair07,richardson07,deming06}, and is sometimes termed
``charge trapping''. Currently, the details of the semi-conductor
physics that produce the effect are not well understood. Empirically,
the responsivity of a pixel initially depends on the illumination.
When the flux density time series is median-normalized (e.g.
$F_{i}(\lambda)/\langle F(\lambda)\rangle$), the light curves at each wavelength can
be ``stacked'' \citep{richardson07}. The effect of latent charge
accumulation can be seen at the beginning of each AOR in
Fig. \ref{fig:initialTimeSeries}; the effect is characterized by a
rapid initial increase in the measured flux density, which then
approaches an equilibrium. If one excludes the first $\sim $ 20
points in each AOR and finds the slope of a best fit line to the data,
the slope in nod2 is greater than the slope in nod1. As
Fig. \ref{fig:positionFit} shows, a simple pointing model explains the
changes in the measured flux after the first $\sim$ 20 minutes. Note
that it is possible to confuse the linear component of the pointing
drift with the effect of latent charge accumulation after the first
$\sim$ 20 minutes. By explicitly modelling the pointing, our analysis
breaks this degeneracy and allows us to separate the effects of these
two systematic errors. We conclude that latent charge effects are
negligible after the first 20 minutes. We omitted the data affected
by latent charge accumulation from further analysis so that the
effects of latent charge do not impact our estimate of the planet
spectrum.
\subsection{Spectral Response Function}
After the initial extraction, the data were background corrected using
the background correction discussed above. The next stage of the
calibration was to derive and apply a spectral response function.
Using the IRS calibrator $\eta$ Dor, we derived our own spectral
response function, the $SRF$, for the nominal nod positions and
extraction aperture. We selected this source for defining the
spectral response function because it has the same brightness as HD
209458 and thus should minimize any remaining instrument residuals.
Both HD 209458 and the $\eta$ Dor data were extracted using identical
methods, and both incorporate identical methods for background
correction. Thus, the treatment of both the calibrator and source data
sets is fully self-consistent. In the case of AOR 14818048, an
additional calibration step for the $SRF$ was required because the
observations were not carried out at the nominal nod positions. To
determine the changes in the $SRF$ for other (but still relatively
nearby) slit positions, we used observations of the IRS calibrator
stars HD 42525 and HR 7341, which were observed at intervals along the
IRS slit. We interpolated between observing positions to determine
how the calibrator star's spectrum changed as a function of slit
position and used this information to renormalize the $SRF$ derived,
using $\eta$ Dor for the nod positions used in AOR 14818048.
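To make the calibration sequence concrete, the sketch below outlines how a
spectral response function of this kind can be derived from a calibrator and
applied to the science target. It is an illustrative outline only: the array
names are placeholders, and the calibrator model flux is assumed to come from
a stellar atmosphere model, not from our actual pipeline code.
\begin{verbatim}
import numpy as np

def derive_srf(cal_counts, cal_model_flux):
    # SRF: measured signal per unit of true flux density, per
    # wavelength channel; both inputs share one wavelength grid.
    return np.asarray(cal_counts) / np.asarray(cal_model_flux)

def apply_srf(src_counts, srf):
    # Calibrated flux density of the science target.
    return np.asarray(src_counts) / srf
\end{verbatim}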
At this point, the data were ready for extraction of the spectrum. Of
the three major systematic errors, the background had been removed (at
this point in the calibration sequence) by explicit calibration. The
effects of latent charge were removed by excluding the affected data
from the spectral estimation. However, the pointing error remained
uncorrected. In what follows, two methods were used to correct the
pointing error and extract the planet spectrum.
\subsection{Spectral Estimation}
\subsubsection{Differential Method}
This approach assumes that changes in the measured flux have a
wavelength-independent component, characterized by $G(t)$, which can
vary on a timescale of minutes, and a wavelength-dependent component,
$G(\lambda)$, which is stable for a given nod but can change between
nods. The $G(\lambda)$ term is removed by construction of a spectral
flat. We derived a spectral flat for each nod by comparing the
average flux in each spectral channel to the flux, $F(\lambda)$, of a
stellar photosphere model for HD 209458 \citep{kurucz92} normalized to
the 12 $\mu$m flux. Thus at each wavelength, $\lambda$, the spectral
flat was defined as the inverse of
$[S_{S}(\lambda)/F_{Kurucz}(\lambda)]\times[F_{Kurucz}(12)/S_{S}(12)]$
where $S_{S}$ is the measured flux observed in interval S. After
normalization of each nod by the associated spectral flat field, the
data were assumed to vary only in time; a more extensive discussion of
this technique can be found in \cite{bryden06} and \cite{beichman06}.
To reject the wavelength-independent $G(t)$ term, we constructed a
differential observable using the following method. During period SP
(star+planet), the measured flux, $S_{SP}(\lambda)$, can be written as
\begin{equation}
S_{SP}(\lambda) = G(t) \left[ F_{S}(\lambda) + F_{P}(\lambda)\right].
\end{equation}
\noindent This can be expanded as
\begin{equation}
S_{SP}(\lambda) = G(t) F_{S}(\lambda) \left[ 1 +
\frac{F_{P}(\lambda)}{F_{P}(\lambda^{'})}
\frac{F_{P}(\lambda^{'})}{F_{S}(\lambda^{'})}
\frac{F_{S}(\lambda^{'})}{F_{S}(\lambda)} \right],
\end{equation}
\noindent where $F(\lambda)$ is the true source flux, the
subscripts refer to the star or planet, and $\lambda^{'}$ is a
reference wavelength selected for the comparison. We set the transit
depth at $\lambda^{'}$ to a plausible value such that
$\beta=(F_{P}(\lambda^{'})/F_{S}(\lambda^{'}))<<1$ and the transit
depth at $\lambda$, relative to the transit depth at $\lambda^{'}$, is
$\alpha=(F_{P}(\lambda)/F_{P}(\lambda^{'}))$. $S_{SP}(\lambda)$ can
then be expressed in terms of $\alpha$ and $\beta$ as
\begin{equation}
S_{SP}(\lambda) =G(t) F_{S}(\lambda) \left[ 1 + \alpha \beta
\frac{F_{S}(\lambda^{'})}{F_{S}(\lambda)} \right].
\end{equation}
\noindent During period S (star only), the measured signal, $S_{S}$, is
$S_{S}(\lambda) = G(t) F_{S}(\lambda)$. The ratio of the two
wavelengths during the SP and S periods is
$R_{SP}=S_{SP}(\lambda)/S_{SP}(\lambda^{'})$ and
$R_{S}=S_{S}(\lambda)/S_{S}(\lambda^{'})$. The advantage of taking
the ratio is that the wavelength-independent gain term, $G(t)$, drops out.
Appropriate substitution, and solving for $\alpha$, yields
\begin{equation}
\alpha = [R_{SP} (1 + \beta) - R_{S}] / \beta .
\end{equation}
\noindent The observables are $R_{SP}$ and $R_{S}$, and $\alpha$ is
the measure of the brightness of the planet at $\lambda$ compared to
the planet brightness at $\lambda^{'}$. The results in
Fig. \ref{fig:specCompare} reflect a value for $\beta=0.003$.
However, the results for the spectral slope are not strongly dependent
on the assumed value for $\beta$, and we explicitly measured the
eclipse depth in any case (using the absolute method).
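For concreteness, a minimal sketch of the differential observable follows; it
is a schematic rendering of the relations above, not the actual reduction
code, and the input spectra are assumed to be background corrected and flat
fielded as described.
\begin{verbatim}
import numpy as np

def planet_contrast(S_sp, S_s, ref_idx, beta=0.003):
    # S_sp : mean spectrum during the star+planet (SP) interval
    # S_s  : mean spectrum during the star-only (S) interval
    # ref_idx : channel index of the reference wavelength lambda'
    # beta : assumed transit depth F_P/F_S at lambda'
    R_sp = S_sp / S_sp[ref_idx]   # G(t) cancels in the ratio
    R_s  = S_s  / S_s[ref_idx]
    return (R_sp * (1.0 + beta) - R_s) / beta   # alpha
\end{verbatim}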
We summarize the steps in the differential method as follows:
\begin{enumerate}
\item Treat each AOR and nod combination as an independent secondary
eclipse measurement with an independent calibration; this leads to
four independent estimates of the planet spectrum.
\item Normalize the data in a given nod by dividing the flux
density by the 12 $\mu$m value at each sample in the time series.
\item Average the SP and S intervals in the time series to get the
``star only'' and ``star+planet'' spectra.
\item Normalize a Kurucz model for HD 209458 flux density by the model's
12 $\mu$m prediction.
\item Construct a ``super flat'' by dividing the normalized Kurucz
model into the normalized data (for both SP and S intervals).
\item Estimate the planet spectrum by subtracting the S interval
spectrum from the SP interval spectrum.
\end{enumerate}
The four estimates of the exoplanet spectrum are then averaged to
create the final spectrum. The errors are estimated by taking the
average value of the differences in the estimate at each wavelength.
To assess the magnitude of residual systematics, we made a comparison
(see Fig. \ref{fig:specCompare}) between the spectra from each of the
four independent AOR and nod combinations. In the central region of
the IRS SL1 instrument bandpass, the spectra are in relatively good
agreement. However, this agreement becomes worse at either end of the
instrument bandpass; this is especially true for wavelengths between
7.5 and 9 $\mu$m. The reason for this is that the assumption that
$G(\lambda)$ is time invariant is only a first order approximation.
Because we have normalized by the 12 $\mu$m flux density, the effect
of the small, uncorrected pointing errors within a nod is greatest at
the band edges (see Fig. \ref{fig:pointingCorrect}). To determine the
best estimate of the differential spectrum, the four independent
differential spectra are averaged together.
\subsubsection{Absolute Method}
Here we describe how to apply the $\zeta$
correction to the source and calibrator data. Although we do not
know {\it a priori} what the telescope pointing error is, we can
determine the correct flux density for a given pointing error, $y$,
using $F=S/\zeta(y)$. From Eq. 4, we know that $y =
f(y_{0},m_{y},A_{y},\phi_{y})$. Thus, our task is to identify the
correct values for $\{y_{0},m_{y},A_{y},\phi_{y}\}$. One way to do this
is to require that the absolute spectrum be self-similar. We
implemented this by constructing all unique combinations of the
relation
\begin{equation}
{\bf R(i,j)} =
F_{i}(\lambda,\theta)/F_{j}(\lambda,\theta) =
\frac{S(t_{i}) \zeta(y_{j})}{S(t_{j}) \zeta(y_{i})}
\end{equation}
for the SP and S portions of each nod separately, where $i$ and $j$
are individual measurements in the time series. We then iteratively
searched this space to determine the values of
$\{y_{0},m_{y},A_{y},\phi_{y}\}$ (given in Tab. 2),
which resulted in most closely approximating ${\bf
R(i,j)}=1$. Fig. \ref{fig:timeSeries} shows the result of the
application of the pointing offset correction, and the secondary
eclipse event is directly visible. Similarly, we applied the $\zeta$
correction to $\eta$ Dor and determined the pointing offset by
requiring spectral self-similarity. As with the differential method,
we evaluated the internal consistency of the pointing correction by
comparing the spectra from both nods in both AORs. The agreement
between the absolute spectra is excellent (see
Fig. \ref{fig:method2check}), and we now compare the
differential and absolute spectra to assess the level of any residual
systematics.
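The self-similarity requirement lends itself to a simple brute-force search.
The sketch below assumes a pointing model with static, linear drift, and
single-frequency periodic terms, $y(t) = y_{0} + m_{y}t + A_{y}\sin(\omega t
+ \phi_{y})$, and a known throughput curve $\zeta(y)$; both stand in for the
actual Eq. 4 and the measured $\zeta$, and the frequency $\omega$ is treated
as given.
\begin{verbatim}
import itertools
import numpy as np

def fit_pointing(S, t, zeta, omega, grids):
    # S : (n_time, n_wave) spectra within one nod and interval
    # zeta : callable mapping offset y to flux throughput (array)
    # grids : candidate values for y0, m_y, A_y, phi_y
    best, best_p = np.inf, None
    for y0, m, A, phi in itertools.product(
            grids["y0"], grids["m_y"], grids["A_y"], grids["phi_y"]):
        y = y0 + m * t + A * np.sin(omega * t + phi)
        F = S / zeta(y)[:, None]            # F = S / zeta(y)
        R = F[:, None, :] / F[None, :, :]   # all ratios R(i, j)
        cost = np.nanmean((R - 1.0) ** 2)   # want R(i, j) ~ 1
        if cost < best:
            best, best_p = cost, (y0, m, A, phi)
    return best_p
\end{verbatim}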
\section{Discussion}
In this section, we compare the results of the differential and
absolute methods we used to extract the planet spectrum. The
assumption of wavelength-dependent stability used in the differential
method is evaluated. We discuss our estimate of the eclipse
depth and spectral features; we interpret these results in the context
of recent models. We also discuss the significant differences between
our analysis methods and results and those of \cite{richardson07}.
\subsection{Comparison of differential and absolute methods}
A fundamental strength of our approach is the use of two
semi-independent methods to demonstrate understanding and calibration
of the dominant systematic errors. As Fig. \ref{fig:specCompare}
shows, the agreement between the differential and absolute planet
spectrum estimation methods is excellent over most of the instrument
passband. While agreeing within the errors, between 7.5 and 9 $\mu$m
the differential spectrum lies systematically below the absolute spectrum.
This is caused by small pointing errors occurring within a nod that are
not removed by the differential method. Because the internal scatter
of the absolute method is similar at all wavelengths, we consider the
absolute spectra to be the best estimate of the planet spectrum.
That the two spectral extraction procedures yield consistent results
is encouraging and gives us a high degree of confidence that the
calibration of systematic errors is successful within the error
bars. Given that the differential method appears to make no specific
correction for pointing, one might wonder why the agreement with the
absolute method is so good. The source of the agreement is that the
normalization by the Kurucz model corrects for any chromatic error,
pointing or otherwise, {\it so long as the chromatic error changes in
time are relatively small}. Thus, normalization by the Kurucz model
corrects the chromatic error produced by the largest pointing errors,
which are static and occur during the initial peak-up and during the
nod. Because the periodic pointing errors are relatively small, the
change in the measured flux during a nod is, to first order,
wavelength independent, and thus the spectral flat field is a good
approximation for the flux correction due to the initial pointing
error. Thus, the agreement between the differential and absolute
spectral estimation methods supports the original assumption that the
$G(\lambda)$ term is relatively (but not completely) constant during a
nod. The increased size of the error bars in the differential method
results from the periodic component of the pointing errors.
\subsection{Eclipse Depth}
We have determined an average eclipse depth for the data by
normalizing the absolute (pointing error corrected) flux density time
series at each wavelength by the median values of the time series,
$F(\lambda)/\langle F(\lambda)\rangle$. This is then averaged over wavelength to
develop a broad-band light curve. The result of this can be seen in
Fig. \ref{fig:eclipseDepth}; the broad-band light curve clearly shows
the eclipse and the transitions between ingress and egress. We can
derive four independent estimates (one for each nod) of the broad-band
eclipse depth, and these are consistent within the errors. After
averaging the individual estimates, we find the average eclipse depth
between 7.6 and 14 $\mu$m to be 0.00315$\pm$0.000315. This minor
restriction in wavelength was implemented to exclude the channels with
lower SNR. When compared to theoretical models \citep{burrows06}, the
measured eclipse depth suggests that substantial heat redistribution
from the dayside to the nightside is occurring. This evidence of heat
redistribution is similar to the interpretation given to observations
of HD 189733b by \cite{grillmair07} and \cite{knutson07}.
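The broad-band depth estimate itself reduces to a few array operations,
sketched below; the eclipse masks are assumptions standing in for the known
ephemeris.
\begin{verbatim}
import numpy as np

def eclipse_depth(F, in_ecl, out_ecl):
    # F : (n_time, n_wave) pointing-corrected flux densities,
    #     restricted to the 7.6-14 micron channels
    # in_ecl, out_ecl : boolean masks over the time axis
    norm = F / np.median(F, axis=0)   # F(lambda)/<F(lambda)>
    lc = norm.mean(axis=1)            # broad-band light curve
    return lc[out_ecl].mean() - lc[in_ecl].mean()
\end{verbatim}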
\subsection{Planet Spectrum}
We have determined the planet spectrum (see
Fig. \ref{fig:absoluteSpec}) and find it to range from about 600
$\mu$Jy at 7.5 $\mu$m to about 200 $\mu$Jy at 15.2 $\mu$m. The SNR in
the spectrum ranges from $\sim$ 10 at short wavelengths to $\sim$ 2 at
the longest wavelengths. To our knowledge, this is the first
determination of the absolute spectrum of exoplanet emission. Our
results for the spectral shape agree well with previous work (see
Fig. \ref{fig:richardsonSpec}) albeit with improved SNR; specifically,
we confirm the marginal detection by \cite{richardson07} of a narrow
feature near 7.7 $\mu$m. For most wavelengths, the planet spectrum is
characterized by approximately featureless emission. However, between
7.5 and 8.5 $\mu$m, there is evidence for one broad (previously
unreported) and one narrow (previously reported) spectral feature.
Both the absolute spectrum and the contrast spectrum show evidence of
a possible $\sim$ 0.5 $\mu$m-wide feature centered around 8.1 $\mu$m,
with a significance of about 4 $\sigma$. This broad feature
represents a flux deficit from the local trend and could be due to
absorption. At the full spectral resolution, there is also a
suggestion of a narrow feature around 7.7 $\mu$m. This narrow feature
candidate could be either in absorption (a deficit relative to the
local trend in the 7.57 and 7.63 $\mu$m channels) or in emission (an
excess relative to the local trend in the 7.69 and 7.75 $\mu$m
channels). The shape of the broad feature causes us to favor the
hypothesis of a narrow absorption feature in the 7.57 and 7.63 $\mu$m
channels. However, movement of any one of these four spectral points
(7.57, 7.63, 7.69, and 7.75 $\mu$m) by $\sim$1.5 $\sigma$ towards the
local trend would convert this candidate feature into an outlier
consistent with a normal measurement distribution. The narrow feature
candidate is sufficiently marginal that additional observations are
required to confirm or rule out a spectral feature at this wavelength.
The indication that the spectrum of HD 209458b contains one broad and
one narrow feature between 7.5 and 8.5 $\mu$m is supported by the
\cite{richardson07} measured spectrum. Indeed, the striking
qualitative agreement (one broad and one narrow feature) between
previous work and our results for the spectral modulation between 7.5
and 8.5 $\mu$m is a strong indication that this modulation is real.
Although \cite{richardson07} did not discuss the broad feature, it is
present in their spectrum and we confirm their measurement. While we
cannot totally exclude the possibility of some residual instrument
systematic, it is highly significant that the shape of this spectral
modulation is consistent using three independent methods conducted by
two independent groups. Because of the repeatability of the result
and the maturity of the exoplanet spectrum determination, the spectral
modulation between 7.5 and 8.5 $\mu$m is likely real and may serve as
a useful constraint on models for emission from HD 209458b.
In the interpretation of the previous results for these data,
\cite{richardson07} reported the detection of a broad emission feature
centered at 9.65 $\mu$m, identified as a silicate feature, and a
narrow emission feature centered at 7.78 $\mu$m. We find no evidence
to support the identification of a 9.65 $\mu$m feature in our
spectrum. Additional averaging and scrolling median filtering does
not reveal any candidate feature with the characteristics claimed by
\cite{richardson07}. It is possible that the narrow feature
identified by \cite{richardson07} corresponds to the 7.69 and 7.75
$\mu$m channels in our analysis. If this is the case, the difference
in wavelength is possibly due to the non-standard wavelength
calibration method used by \cite{richardson07} (see discussion below).
However, we stress that the candidate {\it absorption} feature at 7.57
$\mu$m is at least as likely as an {\it emission} feature at 7.69
$\mu$m.
\subsection{Differences with Previous Work}
There are several significant differences in our data calibration
method and results when compared to the approach used by
\cite{richardson07}. Our approach explicitly corrects for the
telescope pointing error and source leakage into the background; both
of these effects are chromatic errors capable of introducing
systematic errors in a spectrum. We also use two methods, one
differential and one absolute, to extract the spectrum of the
exoplanet, and we demonstrate good agreement between the methods.
Unlike the previous work, we are able to explicitly measure the
secondary eclipse depth from the IRS data. The improved SNR and lower
internal scatter in our spectrum allow a clear identification of the
spectral modulation between 7.5 and 8.5 $\mu$m, and rule
out the possibility of significant silicate emission at 9.65 $\mu$m.
Below, we explain some of the important details in the differences
between our methods and results and those of \cite{richardson07}.
\begin{itemize}
\item {\bf absolute spectrum} (result): We have determined the
spectrum of HD 209458b in Jy. To our knowledge, this is the first
absolute determination of an exoplanet emission spectrum.
\item {\bf eclipse depth} (result): We explicitly determine the
broad-band eclipse depth from the IRS data at high SNR ($10\sigma$).
This determines the eclipse depth in the IRS SL1 instrument passband
and avoids the uncertainty associated with incomplete matching of the
IRS wavelength coverage to the 8 $\mu$m IRAC channel.
\item{\bf spectral features} (result): We find no evidence for the
silicate feature identified by \cite{richardson07}. There is the
possibility of a narrow candidate feature at $\sim$7.7 $\mu$m, but at
the 1.5-$\sigma$ level it is consistent with noise. In addition, the
position of the 7.64 and 7.70 $\mu$m spectral points relative to
their neighbors makes this candidate feature as likely to be an absorption
feature as an emission feature.
\item {\bf wavelength calibration} (method): As part of the spectral
extraction process, using SMART, we include the wavesamp.tbl table
calibration file provided by the Spitzer Science Center. This
approach implements an interpolation method to determine how
fractions of a pixel contribute to a given wavelength. This approach
accounts for the spectral tilt and curvature and provides
Nyquist sampling of the spectra in the dispersion direction. In
contrast, the wavelength definition used by \cite{richardson07} is
based on the b0\_wavesamp\_wave.fits file
which, according to the IRS handbook, is for notional purposes only
and should not be used for a scientific analysis. It is likely that
relying on the b0\_wavesamp\_wave.fits file
for the wavelength definition is why the wavelength scales for the two
AORs are different in the \cite{richardson07} analysis.
\item {\bf background correction} (method): Our background subtraction
approach includes a correction for contamination from the source.
This is a wavelength-dependent effect, which is of the order of the
secondary eclipse depth. Failure to correct for source leakage in a
normal background subtraction approach causes a wavelength-dependent
error if the data in a nod are simply adjusted (the ``multiplicative
factor'' of \cite{richardson07}) to make the time series continuous.
\item {\bf pointing correction} (method): Our method includes a
specific correction for the pointing error, which corrects the static
offset, periodic changes, and linear drift error terms in the telescope
pointing. Uncorrected pointing errors that change with time
introduce spectral errors.
\item {\bf spectral response function} (method): Our determination of
the spectral response function includes a correction for both the
pointing error and the source contamination of the background. The
spectral response function derivation is required for an absolute
exoplanet spectrum.
\item {\bf error estimate} (method): Our error bars are determined by
the standard deviation in the mean of multiply determined quantities
(e.g. the background corrected and pointing corrected time series) and
by the root sum of squares for combined quantities. The error bars in the
\cite{richardson07} analysis are determined by offsetting the time
series by one time step, subtracting the original time series, and then
determining the standard deviation in the mean of the resulting time
series (in every spectral channel). This approach removes the effect of
all systematic error with timescales longer than $\sim$ 2 minutes and
thus has the potential to underestimate the measurement uncertainty.
\end{itemize}
\section{Conclusions}
Our results for the spectrum of HD 209458b are consistent with a
smooth, largely featureless spectrum ranging from about 600 $\mu$Jy at
7.5 $\mu$m to about 200 $\mu$Jy at 14 $\mu$m. However, there is
evidence of a spectral feature between 7.5 and 8.5 $\mu$m. We find
evidence for a broad $\sim$ 0.5 $\mu$m wide feature, centered at
approximately 8.1 $\mu$m, that is possibly due to absorption. Near 7.7
$\mu$m we find a narrow feature candidate that could be either
absorption or emission, depending on wavelength and local baseline trend
assumptions; this candidate feature is only $\sim$ 1.5 $\sigma$ from
being consistent with noise. We find no evidence for the silicate
feature reported in \cite{richardson07}. The relatively
smooth character of the HD 209458b spectrum suggests the planet
emission is dominated by purely thermal emission over most of the IRS
SL1 passband. However, the spectral modulation between 7.4 and 8.4
$\mu$m is significant and suggests that the dayside vertical
temperature profile of the planet atmosphere is not entirely
isothermal \citep{fortney06}.
We are able to make a direct measurement of the eclipse depth.
Between 7.6 and 14.2 $\mu$m we find an average eclipse depth of
0.00315$\pm$0.000315; when compared to planet emission models such as
\cite{burrows07}, the measured eclipse depth is suggestive of
substantial heat redistribution between the nightside and dayside.
Similar conclusions have been drawn for observations of HD 189733b
\citep{grillmair07,knutson07}.
The methods we have developed for calibration of the background and
pointing errors represent a significant improvement in the
state of the art for IRS calibrations on bright objects. Using a
simple pointing model and requiring self-consistency of the spectrum
for the ``star+planet'' and ``star'' portions of the time series, we
are able to optimally recover the spectrum of HD 209458b. By applying
our calibration of ({\it i}) source contribution to the background and
({\it ii}) pointing errors to the definition of the spectral response
function, we have achieved an absolute flux density calibration
approaching 0.1\%. This implies that our calibration method is
suitable for spectroscopy of emission from the nightside of exoplanets
and would significantly increase the SNR for IRS spectra of relatively
bright point sources.
\acknowledgments
We thank the original PI team for the proposal and preparation of the
AORs required to obtain these data. We appreciate the comments of the
anonymous referee who encouraged us to fully describe our calibration
process and extend our calibration method to the entire IRS SL1
passband; these suggestions led to the detection of spectral
modulation that was outside our original spectral passband. We also
thank Drake Deming for several helpful conversations regarding the
reduction of secondary eclipse data. We thank John Bayard for several
helpful discussions concerning Spitzer pointing errors and Sara Seager
for discussion regarding the possible role of clouds in exoplanet
atmospheres.
\section{Introduction}
Machine learning (ML) has recently been applied to accelerate the solution of optimization problems, with mixed-integer linear programming (MILP) being one of the most active research areas~\cite{bengio,kotary2021end,mazyavkina2021reinforcement}. A MILP is an optimization problem that involves both continuous and discrete variables, and aims to minimize or maximize a linear objective function $\boldsymbol{c}^{\intercal}\boldsymbol{x}$ over its decision variables $\boldsymbol{x} \in \mathbb{Z}^{|\mathbb{J}|} \times \mathbb{R}^{n - |\mathbb{J}|}$ while satisfying a set of $m$ linear constraints $\boldsymbol{A}\boldsymbol{x}\leq \boldsymbol{b}$. Here, $\mathbb{J} \subseteq \{1,\cdots,n\}$, with $|\mathbb{J}|\geq 1$, corresponds to the set of indices of integer variables. Similarly, integer programming (IP) problems are of the same form with only discrete variables, i.e., $\boldsymbol{x} \in \mathbb{Z}^{n}$. The MILP problem is written as:
\begin{align}
\label{eqn:MILP}
z^{IP} = \text{min}\{\boldsymbol{c}^{\intercal}\boldsymbol{x} \ \ | \ \ \boldsymbol{A}\boldsymbol{x}\leq \boldsymbol{b}, \ \ \boldsymbol{x} \in \mathbb{Z}^{|\mathbb{J}|} \times \mathbb{R}^{n - |\mathbb{J}|} \}
\end{align}
The MILP formalism is widely used in supply chain and logistics, production planning, etc. While the MILP~\eqref{eqn:MILP} problem is NP-hard in general, modern solvers are able to effectively tackle large-scale instances, often to global optimality, using a combination of exact search and heuristic techniques. The backbone of MILP solving is the implementation of a tree search algorithm, \emph{Branch and Bound} (B\&B) \cite{b_and_b}, which relies on repeatedly solving computationally tractable versions of the original problem where discrete variables are relaxed to take on continuous values. Formally, the linear programming (LP) relaxation of problem~\eqref{eqn:MILP} is:
\begin{align}
\label{eqn:LP}
z^{LP} = \text{min}\{\boldsymbol{c}^{\intercal}\boldsymbol{x} \ \ | \ \ \boldsymbol{A}\boldsymbol{x}\leq \boldsymbol{b}, \ \ \boldsymbol{x} \in \mathbb{R}^{n} \}
\end{align}
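As a small illustration, the relaxation~\eqref{eqn:LP} of a toy instance can
be solved with an off-the-shelf LP routine; the data below are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])                # min c^T x
A = np.array([[1.0, 1.0], [2.0, 0.5]])    # s.t. Ax <= b, x >= 0
b = np.array([4.0, 5.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x, res.fun)   # fractional optimum x*_LP and bound z^LP
\end{verbatim}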
To render the B\&B search more efficient, valid linear inequalities (or tightening constraints) to problem~\eqref{eqn:MILP}, known as \textit{cutting planes}, are added to LP relaxations of the MILP with the aim of producing tighter relaxations and thus better lower bounds to problem~\eqref{eqn:MILP}, as illustrated in Figure~\ref{fig:cut_2d}; this approach is referred to as the Branch and Cut (B\&C) algorithm. Cuts are essential for MILP solving, as they can significantly reduce the feasible region of the B\&B algorithm and exploit the structure of the underlying combinatorial problem, which a pure B\&B approach does not do. The incorporation of cutting planes in the B\&B search necessitates appropriate filtering and selection, as there can be many such valid inequalities and adding them to a MILP comes at a cost in computation time. As such, \textit{cut selection} has been an area of active research in recent years.
Various families of general-purpose and problem-specific cuts have been studied theoretically and implemented in modern-day solvers~\cite{Santanu}. However, there is an overall lack of scientific understanding regarding many of the key design decisions when it comes to incorporating cuts in B\&B. Traditionally, the management of cutting plane generation and selection is governed by hard-coded parameters and manually-designed expert heuristic rules that are based on limited empirical results. These rules include deciding:
\begin{itemize}
\item[--] the number of cuts to generate and select;
\item[--] the number of cut generation and selection rounds;
\item[--] whether to add cuts at the root node of the tree only or at all nodes of the B\&B tree;
\item[--] which metrics to use to score cuts for selection;
\item[--] which cuts to remove and when to remove them.
\end{itemize}
A cut selection strategy common in modern solvers is to use a scoring function parameterized by a linear weighted sum of cut quality metrics that aim to gauge the potential effectiveness of a single cut. This process is iteratively used to produce a ranking among a set of candidate cuts. However, manually-designed heuristics and fixed weights may not be optimal for all MILP problems \cite{ACS}, and as such, researchers have proposed using ML to aid in cut selection. Figure \ref{fig:cut_2d} gives the reader a 2D visualization of cuts and potential selection rules.
Recently, the field of ML for cutting plane selection has gained significant attention in MILP~\cite{Columbia,huawei,tree_complexity,local_cuts,ACS,look_ahead,Analytic_Centers} and even mixed-integer \textit{quadratic} programming~\cite{BalteanLugojan2019ScoringPS} and \textit{stochastic} integer programming~\cite{Benders_cut}. To select more effective cutting planes, researchers have proposed ML methods ranging from reinforcement to imitation and supervised learning. This survey aims to provide an overview of the field of ML for MILP cut selection starting with the relevant ML and MILP background (Section~\ref{sec:background}), the state of cut selection in current MILP solvers (Section~\ref{sec:cutsel}), recently proposed ML approaches (Section~\ref{sec:ml}), relevant learning theory (Section~\ref{sec:theory}), and avenues for future research (Section~\ref{sec:conclusion}).
\begin{figure}[t]
\centering
\resizebox{\columnwidth}{!}{
\includegraphics[scale = 0.39]{2d_cuts_new1.png}
} \caption{A 2-dimensional integer program. The optimum of the LP relaxation is shown as a blue star, whereas the integer optimum is shown as a red star. The colored cuts separate the LP optimum as desired. The best cut is cut 1 as it produces the convex hull, shaded in pale green; evaluation metrics for this cut are calculated and shown above the graph.}
\label{fig:cut_2d}
\end{figure}
\section{Background}
\label{sec:background}
\subsection{Integer programming and Cutting planes}
Cutting planes (or \emph{cuts}) are valid linear inequalities to problem~\eqref{eqn:MILP} of the form $\boldsymbol{\alpha}^T x \leq \beta, \boldsymbol{\alpha} \in \mathbb{R}^n, \beta \in \mathbb{R}$. They are ``valid'' in the sense that adding them to the constraints of the LP relaxation in~\eqref{eqn:LP} is guaranteed not to cut off any feasible solutions to~\eqref{eqn:MILP}. Additionally, one seeks cuts that separate the current LP solution, $x^*_{LP}$, from the convex hull of integer-feasible solutions; see Figure~\ref{fig:cut_2d}. While adding more cuts can help achieve tighter relaxations in principle, a clear trade-off exists: as more cuts are added, the size of the LP relaxation grows, resulting in an increased cost in LP solving at the nodes of the B\&B tree~\cite{Tobias_thesis}. Adding too few cuts, however, may lead to a large number of nodes in the search tree as more branching is required.
We note that the so-called \textit{cutting plane method} can theoretically solve integer linear programs by iteratively solving relaxed versions of the given problem and then adding cuts to separate the fractional relaxed solution $x^*_{LP}\in \mathbb{R}^n$, terminating when $x^*_{LP}\in \mathbb{Z}^n$. Despite theoretical finite convergence results for the cutting plane method using Gomory cuts, numerical errors and design decisions such as cut selection will often prevent convergence to an optimal solution in practice.
\subsection{ML for Combinatorial Optimization }
The use of ML in MILP and combinatorial optimization (CO) has recently seen some success, with a diversity of approaches in the literature \cite{bengio} that can be split into two main categories. The first is to directly predict near-optimal solutions conditioned on the representation of particular instances. Notable examples of this include learning for quadratic assignment problems \cite{nowak}, solving CO problems with pointer networks \cite{Vinyals}, and using attention networks to solve travelling salesman problems \cite{att_tsp}. These approaches aim to completely replace traditional solvers with ML models and are hence very appealing given their black-box nature. In contrast, a second approach focuses on automating decision-making in solvers through learned inductive biases. This can take on the form of replacing certain challenging algorithmic computations with rapid ML approximations or using newly learned heuristics to optimize solution time. Notable examples include the learning of computationally challenging variable selection rules for B\&B \cite{khalil2016learning,alvarez2017machine,GCNN,zarpellon2021parameterizing}, learning to schedule heuristics \cite{learn_to_schedule}, or learning variable biases \cite{mip_gnn}.
\noindent\textbf{Representing MILPs for ML.} One of the key challenges in applying ML to MILP is the need for efficient and clever feature engineering. This requires a deep understanding of solver details, as well as a thorough understanding of the underlying problem structure. In recent years, graph neural networks (GNNs) have emerged as a popular architecture for several ML applications for MILP~\cite{cappart}. GNNs have the ability to handle sparse MILP instances and exhibit permutation invariance, making them well-suited for representing MILP instances. The GNN operates on the so-called \textit{variable-constraint graph} (VCG) of a MILP. The VCG has $n$ variable nodes and $m$ constraint nodes corresponding to the decision variables and constraints of~\eqref{eqn:MILP}. An edge between a variable node $j$ and a constraint node $k$ represents the presence of variable $x_j$ in constraint $k$ (i.e., $A_{kj} \neq 0$), where the weight of the edge is $A_{kj}$.
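A minimal sketch of the VCG construction is given below; feature design,
where most of the engineering effort lies, is deliberately omitted.
\begin{verbatim}
import numpy as np

def variable_constraint_graph(A):
    # Bipartite edge list of the VCG for an m x n matrix A:
    # one edge (constraint k, variable j, weight A[k, j]) per
    # nonzero coefficient.
    rows, cols = np.nonzero(A)
    return [(k, j, A[k, j]) for k, j in zip(rows, cols)]
\end{verbatim}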
\subsection{Common Learning Paradigms in ML for CO}
\noindent\emph{\textbf{Supervised Learning}}: The simplest and most common learning paradigm is supervised learning (SL), where the learning algorithm aims to find a function (model) $f :X \rightarrow Y,\ f \in F$, where $F$ is the function's hypothesis space, given a labelled dataset of $N$ training samples of the form $\{(x_1, y_1),\cdots,(x_N, y_N)\}$, where $x_i\in X$ represents the feature vector of the $i$-th sample and $y_i\in Y$ its label. The goal is to find an $f$ that minimizes a loss function $L(y_i,\hat{y}_i)$ measuring how well the predictions $\hat{y}_i$ of $f$ \emph{fit} the training data, with the hope of generalizing to unseen test instances.
\noindent\emph{\textbf{Reinforcement Learning}}: A Markov Decision Process (MDP) is a mathematical framework for modelling sequential decision-making problems, commonly used in reinforcement learning (RL). At time step $t\geq0$ of an MDP, an agent in state $s_t \in \mathcal{S}$ takes an action $a_t\in \mathcal{A}$ and transitions to the next state $s_{t+1} \sim p(\cdot|s_t,a_t)$, incurring a scalar reward $r_t\in\mathbb{R}$. A policy, denoted by $\pi$, provides a mapping $\mathcal{S} \mapsto P(\mathcal{A})$ from any state to a distribution over actions $\pi(\cdot|s_t)$. The goal of an MDP is to find a policy $\pi$ that maximizes the expected cumulative reward over a horizon $T$, i.e., $\max_{\pi} J(\pi) = \mathbb{E}_{\pi}[\sum_{t=0}^{T-1}\gamma^t r_t]$, where $\gamma \in (0,1]$ is the discount factor.
\noindent\emph{\textbf{Imitation Learning}}: Imitation Learning (IL) aims to find a policy $\pi$ that mimics the behaviour of an expert or oracle in a given task through demonstrations. This is often formalized as an optimization problem of the form $\min_{\pi} L(\pi,\hat{\pi})$, where $\hat{\pi}$ is the expert policy and $L$ is a loss function measuring the difference between the expert and the learned policy. IL can be seen as a special case of RL where the agent's objective is to learn a policy that maximizes the likelihood of the expert's actions instead of the expected cumulative reward. In ML-for-MILP, IL (and SL) has been used to amortize the cost of powerful yet computationally intractable scoring functions.
Such functions appear in cut generation/selection~\cite{Amaldi2014CoordinatedCP,Coniglio} and have been recently approximated using IL~\cite{look_ahead}, as we will discuss hereafter.
\section{Cutting Planes in MILP solvers}
\label{sec:cutsel}
\subsection{Cut generation}
At a node of the search tree during the B\&C process and prior to branching, the solver runs the cutting plane method for a pre-specified \emph{number of separation rounds}, where each round $k$ involves i) solving a continuous relaxation $P^{(k)}$ to get a fractional $x^k_{LP}$; ii) generating cuts $\mathcal{C}^k$, referred to as the \emph{cut-pool}, followed by selecting a subset $\mathcal{S}^{k}\subseteq\mathcal{C}^k$; iii) adding $\mathcal{S}^{k}$ to the relaxation and proceeding to round $k+1$. After $k$ separation rounds, the LP relaxation $P^k$ consists of the original constraints $\boldsymbol{A}\boldsymbol{x}\leq \boldsymbol{b}$ and any cuts of the form $(\boldsymbol{\alpha},\beta)$ that have been selected.
Concretely, we write $P^k$ as:
\begin{equation}
\begin{aligned}
P^k &=\{ \boldsymbol{A}\boldsymbol{x}\leq \boldsymbol{b}, \ \boldsymbol{\alpha}^T \boldsymbol{x} \leq \beta \ \forall \ (\boldsymbol{\alpha},\beta) \in \bigcup_{i=1}^{k} \mathcal{S}^i, \ \ \boldsymbol{x} \in \mathbb{R}^{n} \}\\
&=\{\boldsymbol{A}^k\boldsymbol{x}\leq \boldsymbol{b}^k, \boldsymbol{x} \in \mathbb{R}^{n} \} \quad \text{with } \ \boldsymbol{A}^0=\boldsymbol{A}, \ \boldsymbol{b}^0=\boldsymbol{b}
\end{aligned}
\end{equation}
\noindent \emph{Global cuts} are cuts that are generated at the root node whereas \emph{local cuts} are cuts generated at nodes further down the tree. Traditionally, solely relying on global cuts is referred to as \emph{Cut \& Branch}, in comparison to B\&C which uses both global and local cuts. Solvers use various hard-coded parameters determined experimentally to control the number of separation rounds, types of generated cuts, their frequency, priority among separators, whether to use local cuts or not, among others. We use SCIP's internal cut selection subroutine to highlight key decisions regarding cut selection. However, similar methodologies are used in other MILP solvers.
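The separation loop can be summarized as follows; the three callables are
assumptions standing in for the solver's LP engine, its separators, and its
cut selector.
\begin{verbatim}
import numpy as np

def separation_rounds(A, b, c, solve_lp, generate_cuts,
                      select_cuts, n_rounds=10):
    # solve_lp(A, b, c)         -> fractional optimum of P^k
    # generate_cuts(A, b, x_lp) -> cut-pool C^k of (alpha, beta)
    # select_cuts(pool, x_lp)   -> chosen subset S^k
    for _ in range(n_rounds):
        x_lp = solve_lp(A, b, c)
        pool = generate_cuts(A, b, x_lp)
        for alpha, beta in select_cuts(pool, x_lp):
            A = np.vstack([A, alpha])   # append the cut to P^k
            b = np.append(b, beta)
    return A, b
\end{verbatim}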
\subsection{Cut selection}
During the cut selection phase, the solver selects a subset of cuts $S^k$ from the cut-pool to add to the MILP. In particular, SCIP scores each cut using a simple linear weighted sum of cut quality metrics from the MILP literature:
\begin{equation}
\label{eqn:ScipScore}
S = \lambda_1 \textbf{eff} + \lambda_2 \textbf{dcd} + \lambda_3 \textbf{isp} + \lambda_4 \textbf{obp}, \quad
\boldsymbol{\lambda} \geq \boldsymbol{0}, \ \boldsymbol{\lambda} \in \mathbb{R}^4
\end{equation}
Here, the weights ($\lambda_1,\lambda_2,\lambda_3,\lambda_4$) are solver parameters that correspond to four metrics: efficacy (\textbf{eff}), directed cutoff distance (\textbf{dcd}), integer support (\textbf{isp}), and objective parallelism (\textbf{obp}). Cuts are then added greedily by the highest-ranking score $S$ and added to $S_k$, followed by filtering the remaining cuts for parallelism. This is done until a pre-specified number of cuts have been selected or no more candidate cuts remain. The current default weights for SCIP version 8.0 \cite{scip8} are $\mathbb{\lambda_{DEF}}^T = [0.9, 0.0, 0.1, 0.1]$.
\subsection{Cut Metrics}
Cheap metrics such as the ones above are used to gauge the potential bound improvement that will result from adding a cut ($\boldsymbol{\alpha},\beta$) to a current relaxation with optimal solution $x_{LP}$ and best known incumbent $\hat{x}$.
\emph{Efficacy} is the Euclidean distance between the hyperplane $\boldsymbol{\alpha}^T x = \beta$ and $x_{LP}$ and can be interpreted as the \emph{distance cut off by a cut} \cite{Wesselmann}, expressed as:
\begin{equation}
\label{eqn:eff}
\text{\textbf{eff}}(\boldsymbol{\alpha},\beta,x_{LP})\coloneqq \frac{\boldsymbol{\alpha}^T x_{LP} - \beta}{\|\boldsymbol{\alpha}\|}.
\end{equation}
\emph{Directed cutoff distance} \cite{scip_6} is the distance between the hyperplane $\boldsymbol{\alpha}^T x = \beta$ and $x_{LP}$ in the direction of $\hat{x}$ and is measured as:
\begin{equation}
\label{eqn:dcd}
\text{\textbf{dcd}}(\boldsymbol{\alpha},\beta,x_{LP},\hat{x})\coloneqq \frac{\boldsymbol{\alpha}^T x_{LP} - \beta}{|\boldsymbol{\alpha}^T\mathbf{y}|}, \ \mathbf{y}\coloneqq\frac{\hat{x}-x_{LP}}{\|\hat{x}-x_{LP}\|}.
\end{equation}
The \emph{support of a cut} is the fraction of coefficients $\boldsymbol{\alpha}_i$ that are non-zero; sparser cuts are preferred for computational efficiency and numerical stability~\cite{Santanu}. The integer support takes this notion one step further by considering the fraction of non-zero coefficients that correspond to integer variables, measured as:
\begin{equation}
\label{eqn:isp}
\text{\textbf{isp}}(\boldsymbol{\alpha}) \coloneqq \frac{\sum_{i\in\mathbb{J}}^{}\text{NZ}(\boldsymbol{\alpha}_i)}{\sum_{i=1}^{n}\text{NZ}(\boldsymbol{\alpha}_i)}, \ \text{NZ}(\boldsymbol{\alpha}_i) \coloneqq \begin{cases}
0 & \text{if } \boldsymbol{\alpha}_i=0 \\
1 & \text{else}
\end{cases} \\
\end{equation}
\emph{Objective parallelism} is measured by considering the cosine of the angle between $c$ and $\boldsymbol{\alpha}$, with $\textbf{obp}(\boldsymbol{\alpha},c) = 1$ for cuts parallel to the objective function:
\begin{equation}
\label{eqn:obp}
\text{\textbf{obp}}(\boldsymbol{\alpha},c)\coloneqq \frac{|\boldsymbol{\alpha}^T c|}{\|\boldsymbol{\alpha}\| \|c\| }
\end{equation}
\noindent More directly useful, but expensive, evaluation metrics can be measured by solving the relaxation obtained after adding the selected cuts and observing its objective value. Specifically, the \emph{integrality gap} (IG) after separation round $k$ is given by the bound difference $g^k \coloneqq z^{IP} -z^{k}\geq 0$, whereas the integrality gap closed (IGC) is measured as:
\begin{equation}
IGC^{k}\coloneqq\frac{g^0-g^k}{g^0}=\frac{z^k-z^0}{z^{IP}-z^0} \in [0,1]
\end{equation}
and represents the factor by which the integrality gap is closed between the first relaxation $P^0$ and the relaxation $P^k$ obtained after $k$ separation rounds \cite{Columbia}.
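All of these quantities are inexpensive to evaluate. A numpy sketch follows,
with the incumbent, the LP optimum, and the integer index set as inputs; the
default weights mirror the SCIP values quoted above.
\begin{verbatim}
import numpy as np

def eff(a, beta, x_lp):
    return (a @ x_lp - beta) / np.linalg.norm(a)

def dcd(a, beta, x_lp, x_inc):
    y = (x_inc - x_lp) / np.linalg.norm(x_inc - x_lp)
    return (a @ x_lp - beta) / abs(a @ y)

def isp(a, int_idx):
    nz = a != 0
    return nz[int_idx].sum() / nz.sum()

def obp(a, c):
    return abs(a @ c) / (np.linalg.norm(a) * np.linalg.norm(c))

def score(a, beta, x_lp, x_inc, c, int_idx,
          lam=(0.9, 0.0, 0.1, 0.1)):     # SCIP 8.0 defaults
    return (lam[0] * eff(a, beta, x_lp)
            + lam[1] * dcd(a, beta, x_lp, x_inc)
            + lam[2] * isp(a, int_idx)
            + lam[3] * obp(a, c))

def igc(z0, zk, z_ip):                   # integrality gap closed
    return (zk - z0) / (z_ip - z0)
\end{verbatim}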
\subsection{Precursors to Learning to Cut}
Since cut selection is not an exact science and no formal guideline exists, traditional methods to find good cut selection parameters for Eq.~\eqref{eqn:ScipScore} rely on performing parameter sweeps using appropriately designed grid searches. For instance, the first large-scale computational experiment regarding cut selection was presented in \cite{Tobias_thesis}, and many more computational studies in SCIP have since been published \cite{scip7,scip8}. The study in \cite{Tobias_thesis} is the basis for SCIP's scoring function: it tests many configurations of the cut metrics presented in \cite{Wesselmann} for Eq.~\eqref{eqn:ScipScore} and demonstrates a large decrease in overall solution time and number of B\&B nodes when the parameters are tuned properly.
Overall, the many hard-coded parameters that are used in MILP solvers can be tuned either by MILP experts or through black-box algorithm configuration methods. These include grid search, but also more sophisticated methods such as sequential model-based optimization methods \cite{SMAC3} or black-box local search~\cite{xu2011hydra}.
\begin{table*}[t]
\centering
\rowcolors{2}{gray!15}{white}
\begin{tabular}{p{2.9cm}p{2.5cm}p{1.5cm}p{1.3cm}p{3.4cm}p{1.2cm}p{1.4cm}}
\toprule
\textbf{Paper} & \textbf{Learning task} & \textbf{Solver} & \textbf{ML paradigm} & \textbf{Instance type/source} & \textbf{Instance size} & \textbf{ML model} \\
\midrule
\midrule
\cite{Columbia} & score single cut & Gurobi & RL & Synthetic (Packing, Binary Packing, Max Cut, Production Planning) & small & Attention \& LSTM \\
\cite{look_ahead} & score single cut & SCIP & IL & Synthetic + NN verification & large & GNN \\
\cite{BalteanLugojan2019ScoringPS} & score single cut & CPLEX & SL & QP + QCQP & large & MLP \\
\cite{Benders_cut} & classify single cut & Bender's for 2SP & SL & CFLP + CMND & medium & SVM \\
\cite{huawei} & score bag of cuts & Proprietary solver & SL & Proprietary data & large & MLP \\
\cite{ACS} & learning weights for cut scoring & SCIP & RL & MIPLIB 2017 & large & GNN \\
\cite{local_cuts} & learning when to cut & Xpress & SL & MIPLIB 2017 + Proprietary benchmark & large & Random Forest \\
\bottomrule
\end{tabular}
\caption{Table summarizing and categorizing tasks tackled by research embedding ML for cut management in optimization solvers. The three instance size classifications, small, medium, and large correspond to instances with $n \times m$ in the range $[0,1000],[1000,5000],[5000,\infty]$ respectively. Additionally, QCQP refers to quadratically constrained QPs and Synthetic refers to the 4 IP instances presented in \protect\cite{Columbia} found in the first row.}
\label{tab:table_TASKS}
\end{table*}
\section{Learning to Cut}
\label{sec:ml}
The research on ``\emph{Learning to Cut}'' can be categorized along three axes: the choice of the cut-related learning task, the ML paradigm used, and the optimization problem class of interest (MILP or others). We use these axes to organize the survey. Table \ref{tab:table_TASKS} provides a classification of the surveyed papers.
\subsection{Directly scoring individual cuts in MILP}
\subsubsection*{Scoring using Reinforcement Learning~\cite{Columbia}}
\citeauthor{Columbia},~\citeyear{Columbia}, were the first to motivate and experimentally validate the use of \emph{any} {learning} for cut selection in MILP solvers. The authors present an MDP formulation of the iterative cutting plane method (discussed in Section~\ref{sec:background}) for Gomory cuts from the LP tableau. A single cut is selected in every round via a neural network (NN) that predicts cut scores and produces a corresponding ranking. Given that GNNs were still in their infancy at the time, the authors instead used a combination of attention networks for order-agnostic cut selection and an LSTM network for IP size invariance. The authors used evolutionary strategies as their learning algorithm and considered the following baseline selection policies: maximum violation, maximum normalized violation, lexicographical rule, and random selection.
At iteration $t$ of the proposed MDP, the state $s_t \in \mathcal{S}$ is defined by $\{\mathcal{C}^{(t)},x^*_{LP}(t),P^{(t)}\}$ and the discrete action space $\mathcal{A}$ includes the available actions given by $\mathcal{C}^{(t)}$, i.e., the Gomory cuts parameterized by $\boldsymbol{\alpha}_i \in \mathbb{R}^n,\beta_i\in\mathbb{R} \ \forall \ i \in \{1,\dots,|\mathcal{C}^{(t)}|\}$ that could be added to the relaxation $P^{(t)}$. The reward $r_t$ is the objective value gap between two consecutive LP solutions, i.e., $r_t\coloneqq\boldsymbol{c}^{T}[x^*_{LP}(t+1) - x^*_{LP}(t)] \geq 0$, which, when combined with a discount factor $\gamma < 1$, encourages the agent to reduce the IG and reach optimality as fast as possible. Given a state $s_t = \{\mathcal{C}^{(t)},x^*_{LP}(t),P^{(t)}\}$ and an action $a_t$ (i.e., a chosen Gomory cut $\boldsymbol{\alpha}_i^Tx\leq\beta_i$), the new state $s_{t+1} = \{\mathcal{C}^{(t+1)},x^*_{LP}(t+1),P^{(t+1)}\}$ is determined by i) solving the new relaxation $P^{(t+1)} = P^{(t)} \cup\{\boldsymbol{\alpha}_i^Tx\leq \beta_i\}$ using the simplex method to get $x^*_{LP}(t+1)$, and ii) generating the new set of Gomory cuts $\mathcal{C}^{(t+1)}$ read from the simplex tableau.
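A schematic roll-out of this MDP is given below; \texttt{solve\_lp} and
\texttt{gomory\_cuts} stand in for a simplex solve that exposes its tableau
and for the textbook Gomory read-off, and \texttt{policy} is any learned
ranking model. The objective vector and LP solutions are assumed to be numpy
arrays.
\begin{verbatim}
def run_episode(P, c, policy, solve_lp, gomory_cuts,
                T=30, gamma=0.99):
    # solve_lp(P, c)   -> (x_lp, tableau) of the relaxation P
    # gomory_cuts(tab) -> candidate cuts [(alpha_i, beta_i), ...]
    # policy(state)    -> index of the cut to add
    ret = 0.0
    x_lp, tab = solve_lp(P, c)
    for t in range(T):
        cuts = gomory_cuts(tab)
        if not cuts:
            break
        alpha, beta = cuts[policy((cuts, x_lp, P))]
        P = P + [(alpha, beta)]          # tighten the relaxation
        x_next, tab = solve_lp(P, c)
        ret += gamma ** t * (c @ x_next - c @ x_lp)  # reward r_t
        x_lp = x_next
    return ret
\end{verbatim}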
The RL approach significantly outperformed all baselines by effectively closing the IG with the fewest number of cuts for four sets of synthetically generated IP instances. They demonstrated generalization across the instance types as well as across instance sizes in two test-bed environments: 1) the pure cutting plane method, and 2) B\&C using Gurobi callbacks.
However, limitations of this work include weak baselines, the restriction to Gomory cuts and a state encoding that does not scale well for large-scale instances as the input to the NN includes all constraints and available cuts. Additionally, the instance sizes considered were fairly small for research in MILP which may have been acceptable given that this was early work in this space. Although many recent papers outperform this approach, it is significant given that it is the first and sole paper that appropriately defines an RL task for cut selection in the cutting plane method or B\&C.
\subsubsection*{Scoring using Imitation Learning~\cite{look_ahead}}
In a recent paper, \citeauthor{look_ahead} demonstrate the strength of a greedy selection rule that explicitly looks ahead to select the cut that yields the best bound improvement, but they note that this approach is too expensive to be deployed in practice. They propose the lookahead score, $s_{LA}$, that measures the increase in LP relaxation value obtained from adding a cut $C_j$ to an LP relaxation $P$. Formally, $C_j \in \mathcal{C}$ where $\mathcal{C}$ is a pool of candidate cuts, and $C_j$ is parameterized by $(\boldsymbol{\alpha},\beta)$. Let $z^j$ denote the optimal value of LP relaxation $P^j \coloneqq P \cup \{\boldsymbol{\alpha}^T x \leq \beta \}$, the new relaxation obtained by adding $C_j$ to $P$. The (non-negative) lookahead score then reads:
\begin{align}
s_{LA}(C_j,P) \coloneqq z^j - z.
\end{align}
In response to this metric's computational intractability, the authors propose a NN architecture, NeuralCut, that is trained using imitation learning with $s_{LA}$ as its expert. The prohibitive cost of the expensive lookahead, which requires solving one LP per cut to obtain $z^j$, is thus alleviated by an approximation of the score.
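A direct implementation makes the cost explicit: one additional LP solve per
candidate cut. In the sketch below, \texttt{solve\_lp} is a placeholder
returning the optimal value of the relaxation given by $(A, b, c)$.
\begin{verbatim}
import numpy as np

def lookahead_scores(A, b, c, cuts, solve_lp):
    # cuts : candidate (alpha, beta) pairs from the cut-pool
    z = solve_lp(A, b, c)            # current bound z
    scores = []
    for alpha, beta in cuts:
        A_j = np.vstack([A, alpha])  # P^j = P + {alpha^T x <= beta}
        b_j = np.append(b, beta)
        scores.append(solve_lp(A_j, b_j, c) - z)  # s_LA = z^j - z
    return np.array(scores)
\end{verbatim}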
The authors collect a dataset of expert samples by running the cut selection process for 10 rounds and recording the cuts, LP solutions, and scores from the lookahead expert, creating samples specified by $\{\mathcal{C},P,\{s_{LA}(C_j,P)\}_{C_j\in\mathcal{C}}\}$. They use this expert data to learn a scoring function $\hat{s}$ that mimics the lookahead expert by minimizing a soft binary entropy loss over all cuts,
\begin{gather}
L(s) \coloneqq -\frac{1}{|\mathcal{C}|}\sum_{C \in \mathcal{C}}^{}q_C \log s_C + (1-q_C)\log (1-s_C),
\end{gather}
where $q_C \coloneqq \frac{s_{LA}(C)}{s_{LA}(C^*_{LA})}$ and $C^*_{LA} = \text{argmax}_{C \in \mathcal{C}} s_{LA}(C)$. To encode the cut selection decision that is described by the cut-pool and LP relaxation pair $(\mathcal{C},P)$, the authors use a tripartite graph whose nodes hold feature vectors for variables and constraints of $P$ (\textbf{Vars} and \textbf{Cons}) as well as cuts from $\mathcal{C}$ (\textbf{Cuts}). In this graph, an edge exists between \textbf{Vars} and \textbf{Cons} (resp. \textbf{Vars} and \textbf{Cuts}) if that variable appears in a constraint (resp. a cut), with the edge weight corresponding to the nonzero coefficient. The weights between \textbf{Cons} and \textbf{Cuts} are a measure of their similarity (i.e., parallelism).
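In code, the loss reduces to a few lines; this is a sketch of the objective
only, with \texttt{pred} assumed to be the sigmoid outputs of the scorer just
described.
\begin{verbatim}
import torch

def neuralcut_loss(pred, s_la, eps=1e-8):
    # pred : predicted scores in (0, 1), one per candidate cut
    # s_la : lookahead scores of the same cuts
    q = s_la / (s_la.max() + eps)   # q_C = s_LA(C)/s_LA(C*_LA)
    return -(q * torch.log(pred + eps)
             + (1 - q) * torch.log(1 - pred + eps)).mean()
\end{verbatim}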
The 4 synthetic IP instance classes from \cite{Columbia} were used in this work. Only large instances were used given that small and medium-sized instances were observed to be too easy to solve. To evaluate their approach, the GNN is deployed for 30 consecutive separation rounds and adds a single cut per round in a pure cutting plane setting. The results show that NeuralCut exhibits great generalization capabilities, a close approximation of the lookahead policy, and outperforms the approach in \cite{Columbia} as well as many of the manual heuristics in \cite{Wesselmann} for 3 out of the 4 instances types; the packing instances tied for many methods and did not significantly benefit from a lookahead scorer or NeuralCut. To stress-test their approach, the authors employ NeuralCut at the root node in B\&C for a challenging dataset of NN verification problems \cite{Vinod_Nair} which are harder for SCIP to solve due to their larger size and notoriously weak formulations. They demonstrated clear benefits to the learned cut scorer.
A drawback of $s_{LA}$ is its limitation to scoring \textit{a single cut} due to the computational intractability of scoring a subset of cuts, of which there are combinatorially many. Additionally, although \cite{look_ahead} improves on \cite{Columbia}, both approaches have the inherent flaw of scoring each cut independently without taking into account the \emph{collaborative nature} of the selected cuts (i.e, complementing each other and uniquely collaborating in tightening relaxations).
\subsection{Directly scoring individual cuts for Non-convex Quadratic Programming and Stochastic Programming}
The first work to incorporate any type of \emph{learning} for data-driven cut selection policies, even prior to~\cite{Columbia}, is that in~\cite{BalteanLugojan2019ScoringPS}. It similarly focuses on estimating lookahead scores for cuts. The lookahead criterion in their setting, non-convex quadratic programming (QP), involves solving a semidefinite program which is not viable in a B\&C framework. Although a different optimization setting, many ideas from MILP still apply and a similar approach of employing a NN estimator that predicts the objective improvement of a cut is used. However, a supervised regression task is considered which resulted in a trained multilayer perceptron (MLP) that exhibited speed-ups for evaluating cut selection measures approximately on the order of $2$x, $30$x and $180$x when compared to LAPACK's eigendecomposition method \cite{anderson1999lapack}, Mosek solver \cite{aps2019mosek} and SeDuMi solver \cite{polik2007sedumi} respectively.
Another optimization setting where appropriate cut selection is crucial is two-stage stochastic programming (2SP)~\cite{ahmed2010two}. Traditional solution techniques to 2SP include using Bender's decomposition which leverages problem structure through objective function approximations and the addition of cuts to sub-problem relaxations and a relaxed master problem. The authors in \cite{Benders_cut} leverage SL to train support vector machines (SVM) for the binary classification of the usefulness of a Bender's Cut and observe that their model allows for a reduction in the total solving time for a variety of 2SP instances. More specifically, solution time reductions ranging from $6\%$ to $47.5\%$ were observed on test instances of capacitated facility location problems (CFLP) and slightly smaller reductions were observed for multi-commodity network design (CMND) problems.
\subsection{Directly scoring a bag of cuts for MILP}
In contrast to learning to score individual cuts, the authors in \cite{huawei} tackle cut selection through \textit{multiple instance learning} \cite{MIL} where they train a NN, in a supervised fashion, to score a \emph{bag} of cuts for Huawei's proprietary commercial solver. More specifically, the training samples, denoted by the tuple $\{\mathcal{P},C^\prime,r\}$, are collected using active and random sampling \cite{bello2016neural} where $r$ is the reduction ratio of solution time for a given MILP, with relaxation $\mathcal{P}$, when adding the bag of cuts $C^\prime$. The authors formulate the learning task as a binary classification problem, where the label of a bag $C^\prime$ is $1$ if $r$ ranks in the top $\rho\%$ highest reduction ratios for a given MILP, ($0$ otherwise), and $\rho \in (0,100)$ is a tunable hyper-parameter controlling the percentage of positive samples. At test time, the NN is used to assign scores to all candidate cuts and then select the top $K\%$ cuts with the highest predicted scores, where $K$ is another hyper-parameter. Other notable design decisions include designing bag features from aggregated cut features and a cross-entropy loss with L2 regularization to combat overfitting.
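The labelling scheme can be sketched as follows; the reduction ratios $r$ are
assumed to come from solver runs on the sampled bags for a single MILP
instance.
\begin{verbatim}
import numpy as np

def bag_labels(ratios, rho=30.0):
    # ratios : solve-time reduction ratio r of each sampled bag
    # rho    : percentage of bags labelled positive
    thr = np.percentile(ratios, 100.0 - rho)
    return (np.asarray(ratios) >= thr).astype(int)
\end{verbatim}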
The data for this work consisted of synthetic MILP problems (Set Cover, Knapsack, Planning, and General MIP) solved within 25 seconds and large real-world production planning problems with on the order of $10^7$ variables. The results clearly demonstrate the benefit of a learned scorer by comparing their approach to rules from \cite{Wesselmann} and to a fine-tuned manual selection rule used by Huawei's proprietary solver.
Once again, this method suffers from fixing the size/ratio of selected cuts, $K$, and scores cuts independently, which neglects the preferred collaborative nature of selected cuts. Note that this is despite training the model to predict the quality of a bag of cuts: at test time, a ``bag'' has only a single cut. Additionally, non-learnable hyper-parameters, such as the fixed ratio of high-scoring cuts $\rho$ or $K$, make the framework susceptible to manual tuning and overfitting.
\subsection{Learning Adaptive Cut Selection Parameters}
Rather than directly predicting cut scores,~\citeauthor{ACS},~\citeyear{ACS}, motivate learning instance-specific weights, $\boldsymbol{\lambda}_{\text{ACS}} \in \mathbb{R}^4$, for the SCIP cut scoring function in Eq.~\eqref{eqn:ScipScore}.
The goal is to improve over the default parameterization $\boldsymbol{\lambda}_{\text{DEF}}$ in terms of relative gap improvement (RGI) at the root node LP with 50 separation rounds of 10 cuts per round and a best-known primal bound.
Besides the learning approach proposed by the authors, a grid search over convex combinations of the four weights, $\sum_{i=1}^{4}\lambda_i = 1$, where $\lambda_i = \frac{\beta_i}{10}, \beta_i \in \mathbb{N}$, was performed individually for a large subset of MIPLIB 2017 \cite{gleixner2021miplib} instances. This experiment demonstrates the potential for improvement that one could get with instance-specific weights. The resulting parameters, referred to as $\boldsymbol{\lambda}_{\text{GS}}$, yielded a median RGI of 9.6\%.
The GNN architecture and VCG features are based on \cite{GCNN}, but LP solution features are not used. The output of the model, $\mu \in \mathbb{R}^4$, represents the mean of a multivariate normal distribution $\mathcal{N}_4(\mu,\gamma \mathbf{I})$, with $\gamma \in \mathbb{R}$ (a hyper-parameter), that is sampled to generate instance-specific parameters $\boldsymbol{\lambda}_{\text{ACS}}$. Although the authors claim to use RL to train their GNNs, they fail to appropriately define the sequential nature of their MDP given that the time horizon, $T$, is 1. In their MDP, an action $a_t$ corresponds to a weight configuration $\boldsymbol{\lambda}_{\text{ACS}}$ sampled from $\mathcal{N}_4(\mu,\gamma \mathbf{I})$, which will in turn result in an RGI that is used as the instant reward $r_t$. As such, we consider their work to belong to instance-specific algorithm configuration~\cite{malitsky2014instance}, and the gradient descent approach used to train the GNN can be seen as approximating the unknown mapping from (instance, parameter configuration) to RGI.
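At deployment, the procedure amounts to one forward pass and a Gaussian
perturbation, as sketched below; \texttt{gnn} denotes the trained model
applied to the instance's VCG, and at test time one may simply use the mean.
\begin{verbatim}
import numpy as np

def sample_cut_weights(gnn, vcg, gamma=0.1, rng=None):
    # mu = gnn(vcg) in R^4; weights ~ N(mu, gamma * I)
    rng = np.random.default_rng() if rng is None else rng
    mu = gnn(vcg)
    return rng.normal(loc=mu, scale=np.sqrt(gamma))
\end{verbatim}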
GNNs trained on an individual-instance basis were able to achieve a relative gap improvement of 4.18\%. However, when trained over MIPLIB 2017 itself, a median RGI of 1.75\% was achieved, whereas a randomly initialized GNN produced an RGI of 0.5\%. The authors also observe that $\mathbb{\lambda_{GS}}$ and $\mathbb{\lambda_{ACS}}$ differ heavily from $\mathbb{\lambda_{DEF}}$, as seen in Table~\ref{tab:lambdas}, and that none of the values tend towards zero, meaning that every metric can provide utility depending on the given instance.
\begin{table}[h]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c|ccc|ccc}
\toprule
Method & \multicolumn{3}{c}{Grid Search $\mathbb{\lambda_{GS}}$} & \multicolumn{3}{c}{ACS approach $\mathbb{\lambda_{ACS}}$} \\
\midrule
Parameter & Mean & Median & Std. Dev & Mean & Median & Std. Dev \\
\midrule
$\lambda_1$ (\textbf{eff}) & 0.179 & 0.100 & 0.216 & 0.294 & 0.286 & 0.122\\
\midrule
$\lambda_2$ (\textbf{dcd}) & 0.241 & 0.200 & 0.242 & 0.232 & 0.120 & 0.274 \\
\midrule
$\lambda_3$ (\textbf{isp}) & 0.270 & 0.200 & 0.248 & 0.257 & 0.279 & 0.088 \\
\midrule
$\lambda_4$ (\textbf{obp}) & 0.310 & 0.300 & 0.260 & 0.216 & 0.238 & 0.146 \\
\bottomrule
\end{tabular}
}
\caption{Statistics for $\mathbb{\lambda_{GS}}$ and $\mathbb{\lambda_{ACS}}$ from \protect\cite{ACS}.}
\label{tab:lambdas}
\end{table}
In comparison to learning to score cuts directly, this methodology has clear limitations that should be addressed. The performance of this approach is inherently upper bounded by the performance of the metrics being considered in the scoring function. Additionally, the VCG encoding of a given MILP in \cite{ACS} is agnostic to the set of candidate cuts $\mathcal{C}^k$, unlike the tripartite graph from \cite{look_ahead}. It is also well known that MILP solvers suffer from performance variability when changing minute parameters, even ones as simple as the random seed, see \cite{Mip_var}. For this reason, any agent trained with such a method will try to learn optimal cut selection parameters for a specific solver environment with specific parameters being run on specific hardware.
\subsection{Learning \textit{when} to cut}
\citeauthor{local_cuts},~\citeyear{local_cuts}, focus on applying ML to an old and rather Hamletic cut-related question: \emph{To cut or not to cut}? The authors highlight that there is very little understanding of whether a given MIP instance benefits from using only global cuts or also using local cuts, i.e., to Cut (at the root node only) \& Branch or to Branch \& Cut (at all nodes of the tree). They refer to these two alternatives as \emph{``No Local Cuts''} (NLC) and \emph{``Local Cuts''} (LC), respectively, and demonstrate that if access to a perfect decision oracle were possible, a speed-up of 11.36\% would be attainable w.r.t. the average solver runtime on a large subset of MIPLIB 2017~\cite{gleixner2021miplib} instances for FICO Xpress, a commercial MIP solver.
The authors use SL to identify MIP instances that exhibit clear performance gains with one method over the other. Rather than considering this problem as a binary classification task, it is tackled as a regression task predicting the speed-up factor between LC and NLC. The motivation for this is two-fold: first, the ultimate goal is to improve average solver runtime, which is a \emph{numerical metric} rather than a \emph{categorical metric}. Second, there are some instances where this decision has negligible impact on solver runtime, which complicates the creation of class labels in the classification approach.
By utilizing feature engineering inspired by the literature on ML for MILP, the authors represent a MILP instance using a 32-dimensional vector that incorporates both static and dynamic features. Static features refer to those that are solver-independent and closely related to the MILP formulation and combinatorial structure, such as the presence or absence of sparsity/symmetry, types of variables and constraints as well as their interactions. On the other hand, dynamic features are solver-dependent and provide insight into the solver's current behaviour and understanding of a given MILP at different stages in the solution process. These features include information about the performance of the presolver, the magnitude of the problem data after scaling, among others.
The best results were provided by a random forest (RF), which exhibited a speedup of $8.5\%$ on the train set and $3.3\%$ on the test set, prompting further successful experiments that resulted in the implementation of the RF as a default heuristic in the new version of the FICO Xpress MIP solver.
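A minimal sketch of this regression setup is given below, with scikit-learn's random forest standing in for the implementation inside FICO Xpress; the feature extraction and runtime measurements are assumed given, and the log transform of the speed-up factor is an illustrative choice rather than a detail confirmed by the paper:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def speedup_targets(runtime_nlc, runtime_lc):
    # Regression target: log speed-up factor of LC over NLC, so
    # positive values favour Branch & Cut (local cuts) and negative
    # values favour Cut & Branch (root-node cuts only).
    return np.log(np.asarray(runtime_nlc) / np.asarray(runtime_lc))

def train_lc_predictor(features, runtime_nlc, runtime_lc):
    # features: (n_instances, 32) array of static + dynamic features.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, speedup_targets(runtime_nlc, runtime_lc))
    return model  # model.predict(x) > 0  =>  enable local cuts
\end{verbatim}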
\section{Theoretical results}
\label{sec:theory}
The nascent line of theoretical work on the sample complexity of learning to tune algorithms for MILP and other NP-Hard problems~\cite{balcan2021much} has also been applied to cut selection. The setting is as follows: there is an unknown probability distribution over MILP instances of which one can access a random subset, the training set. The ``parameters'' being learned are $m$ weights which, when applied to the corresponding constraints in the MILP, generate a single cut. The works \cite{balcan2022structural} and~\cite{tree_complexity} study the number of training instances required to accurately estimate the expected size of the B\&C search tree for a given parameter setting. The expectation here is over the full, unknown distribution over MILP instances. Theorem 5.5 of~\cite{balcan2022structural} shows that the sample complexity is \textit{polynomial} in $n$ and $m$ for Gomory Mixed-Integer cuts, a positive result which is nonetheless not directly useful in practice.
The authors in \cite{ACS} prove that fixed cut selection weights $\lambda$ for Eq.~\eqref{eqn:ScipScore} do not always lead to finite convergence of the cutting plane method and are hence not optimal for all MILP instances. More specifically, they consider a convex combination of two metrics (\textbf{isp} and \textbf{obp}) parameterized by a single $\lambda \in \mathbb{R}$, a finite discretization of the $\lambda$ range, and an infinite family of MILP instances together with an infinite set of family-wide valid cuts that can be generated. A pure cutting plane method adding a single cut per round fails to terminate at optimality on these instances for every $\lambda$ value in the discretization, whereas it terminates for an infinite number of alternative $\lambda$ values.
\section{Conclusion and Future Directions}
\label{sec:conclusion}
Given the rise in the use of machine learning in combinatorial and integer optimization and the challenges that still exist within, this survey serves as a starting point for future research in integrating data-driven decision-making for cut management in MILP solvers and other related discrete optimization settings. We summarized and critically examined the state-of-the-art approaches whilst demonstrating why ML is a prime candidate to optimize decisions around cut selection and generation, both through a practical and theoretical lens.
Given the reliance on many default parameters that do not consider the underlying structure of a given general MILP instance, learning techniques that aim to find instance-specific parameter configurations is an exciting area of future research for both MILP and ML communities.
Many additional future directions remain for the research surrounding Learning to Cut. These range from revisiting algorithmic configuration for cut-related parameters and using ML to identify new, strong scoring metrics, to simultaneously learning how to select cuts as well as how many to select and in what order, and embedding ML in other cut components such as cut generation or removal.\\
A diverse set of challenges arise from the aforementioned discussion on the literature for Learning to Cut:\\
\noindent\textbf{A standardized solver setting:} There is a need for establishing a \textbf{\emph{common solver setting}} for learning to select cuts given that much of the research varies in solvers and solver parameter settings (e.g., disabling or enabling presolve and heuristics, loading in best known primal solutions, root node restriction, fixed number of cuts, etc.).\\
\noindent\textbf{Fair Baselines:} There is a lack of fair baselines for ML methods that appropriately balance solver viability and the computational expense of training and data collection, which may or may not warrant the use of complex methods such as ML. For instance, \cite{ACS} clearly motivates instance-specific weights; however, given the relatively marginal learning capabilities and even smaller generalization capabilities, we believe methods like algorithmic configuration should also be considered as baselines.\\
\noindent\textbf{Collaborative Nature of Cut Selection:} There is a lack of research into ML for cut selection that speaks to the collaborative nature and interactions among selected cuts on top of other potential learning opportunities such as learning how many cuts to select. This aspect has been explored in the MILP literature \cite{Coniglio}, demonstrating its importance for efficient cut selection, but is yet to make it into the realm of Learning to Cut.\\
\noindent\textbf{Large-scale parameter exploration:} There is an overall lack of non-commercial and publicly available large-scale instance datasets for the assessment of cut parameter configuration. \cite{local_cuts} is a prime example showing not only that many decisions in MILP solvers are still not truly understood, but also that ML is a strong candidate to learn optimal instance-specific decision-making in complex algorithmic settings. For example, a recent ``challenge'' paper \cite{contardo2022cutting} shows small experiments on MIPLIB 2010~\cite{koch2011miplib} that go against the common belief among MILP researchers that conservative cut generation in B\&B is preferred.
\clearpage
\bibliographystyle{named}
\def\section{\@startsection {section}{1}{\z@}{+3.0ex plus +1ex minus
+.2ex}{2.3ex plus .2ex}{\large\bf\boldmath}}
\def\subsection{\@startsection{subsection}{2}{\z@}{+2.5ex plus +1ex
minus +.2ex}{1.5ex plus .2ex}{\normalsize\bf\boldmath}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{+3.25ex plus
+1ex minus +.2ex}{1.5ex plus .2ex}{\normalsize\it}}
\oddsidemargin -0.5cm
\evensidemargin -0.5cm
\marginparwidth 68pt
\marginparsep 10pt
\topmargin 0cm
\headheight 0pt
\headsep 0pt
\footskip 30pt
\textheight 22cm
\textwidth 16.5cm
\columnsep 10pt
\columnseprule 0pt
\graphicspath{{.}{plots/}}
\def\CC#1{{\bf \textcolor{red}{CC: {#1}}}}
\begin{document}
\thispagestyle{empty}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{flushright}
\end{flushright}
\vspace{1cm}
\begin{center}
{\Large {\bf Four-lepton
$Z$ boson decay constraints on the SMEFT}}
\\[3.5em]
{\large
Radja~Boughezal$^1$, Chien-Yi~Chen$^2$, Frank~Petriello$^{1,2}$ and Daniel~Wiegand$^{1,2}$
}
\vspace*{1cm}
{\sl
$^1$ HEP Division, Argonne National Laboratory, Argonne, Illinois 60439, USA \\[1ex]
$^2$ Department of Physics \& Astronomy, Northwestern University,\\ Evanston, Illinois 60208, USA
}
\end{center}
\vspace*{2.5cm}
\begin{abstract}
We discuss how four-lepton decays of the $Z$ boson probe currently unconstrained flat directions in the parameter space of the Standard Model Effective Field Theory (SMEFT). We derive the constraints from these decays on four-lepton operators in the SMEFT and show how the LHC data for this process complements probes from neutrino-trident production. Future differential measurements with high-luminosity data can strongly constrain four-lepton operators and remove all flat directions in the four-muon sector of the SMEFT. We comment briefly on the possibility of using rare $Z$-decays to $\tau$-leptons to probe untested directions in the SMEFT parameter space.
\end{abstract}
\setcounter{page}{0}
\setcounter{footnote}{0}
\newpage
\section{Introduction}
The search for physics beyond the Standard Model (SM) at the Large
Hadron Collider (LHC) and other experiments has so far yielded no new
particles. This lack of evidence for new electroweak-scale physics
suggests that there is a mass gap between the SM and the next energy
scale at which new particles appear. Although the search for new
particles will continue in the future at the high-luminosity LHC, it
is becoming increasingly important to search for potentially small and subtle
indirect signatures of new physics, and to understand the constraints imposed by current data on high-scale new physics. A systematic framework for
characterizing deviations from the SM in the presence of no new
electroweak-scale particle is the SM effective field theory (SMEFT).
The SMEFT is constructed by allowing higher-dimensional
operators containing only SM fields that respect the SM gauge
symmetries. These operators are suppressed by an energy scale $\Lambda$ at
which the effective theory breaks down and new fields must be added to
the Lagrangian. The leading lepton-number conserving dimension-6 operators
characterizing deviations from the SM have been
classified~\cite{Buchmuller:1985jz, Arzt:1994gp, Grzadkowski:2010es}.
Significant effort has been devoted to performing global
analyses of the available data within the SMEFT
framework with varying assumptions~\cite{Han:2004az,Pomarol:2013zra,Chen:2013kfa,Ellis:2014dva,Wells:2014pga,Falkowski:2014tna,deBlas:2016ojx,Cirigliano:2016nyn,Hartmann:2016pil,Falkowski:2017pss,Biekotter:2018rhp,Grojean:2018dqj,Hartland:2019bjb,Brivio:2019ius,Aoude:2020dwv}.
Since the general dimension-6 SMEFT Lagrangian contains 2499 parameters for three
generations assuming baryon-number conservation, quite often additional
flavor symmetries such as Minimal Flavor Violation (MFV) are assumed
in order to reduce the number of Wilson coefficients. Assuming MFV implies that the flavor structure of the SMEFT Wilson coefficients are carried by combinations of Yukawa matrices. This leads to several familiar intuitions~\cite{Alonso:2013hga,Cullen:2020zof}: for example, that the coefficients of scalar and dipole operators are suppressed by small fermion masses for the lighter generations, and that vector four-fermion interactions are generation-independent. Flavor assumptions such as MFV lead to several advantages. In general fits without such flavor assumptions, flat directions exist since current
experimental constraints cannot access all possible Wilson coefficients. MFV also effectively suppresses strongly-constrained flavor-violating effects.
Despite these advantages it remains important to extend fits within the SMEFT framework beyond the MFV assumption. Going beyond MFV allows global fits to encompass a broader range of ultraviolet completions. For example, models which attempt to explain discrepancies in rare $B$-meson decays have a structure that violates lepton flavor universality~\cite{Altmannshofer:2017fio}. Allowing for flavor structure in the SMEFT requires addressing and removing the flat directions between Wilson coefficients that appear. The removal of flat directions in fits to SMEFT Wilson coefficients requires the use of additional processes and experiments~\cite{Boughezal:2020uwq}. In this work we point out that the rare $Z$ boson decays to four-leptons offer the potential to
probe combinations of four-fermion Wilson coefficients not accessible
in other measurements. In particular, only a single combination of
four-muon Wilson coefficients is currently constrained in global fits
by the neutrino-trident production process $\gamma^{*} \nu_{\mu} \to \nu_{\mu}\mu^+\mu^-$~\cite{Altmannshofer:2014pba,Falkowski:2017pss}. Four-muon
$Z$ boson decays at the LHC probe orthogonal combinations of these
Wilson coefficients, allowing for a complete determination of the four-muon operators in the SMEFT. The potential of four-lepton decay modes to constrain physics beyond the SM has been investigated previously, particularly in the context of $Z^{\prime}$ models~\cite{Altmannshofer:2014pba,Rainbolt:2018axw}.
We study the constraints imposed by current LHC
data, as well as potential future constraints at a high-luminosity
LHC. Current measurements of this mode consider only the total rate. We point out that differential measurements can completely determine all four-muon Wilson coefficients, which motivates their study with future high-luminosity data. Although we focus here on the four-muon mode
as experimental searches for $Z\to 4\mu$ exist, other channels such as
$Z\to2\tau 2\mu$ and $Z\to 4\tau$ may provide probes of completely
untested parameters in the SMEFT. We comment briefly on this possibility in our conclusions.
Our paper is organized as follows. We review aspects of the SMEFT needed for our analysis in Section~\ref{sec:smeft}. In Section~\ref{sec:inc} we study the constraints imposed by inclusive LHC measurements of the $Z \to 4 \mu$ decay rate on the SMEFT four-muon operators. We also discuss their complementarity with constraints from neutrino-trident production. We discuss what can be learned from future differential LHC measurements in Section~\ref{sec:diff}. Finally, we conclude in Section~\ref{sec:conc}.
\section{Review of the SMEFT} \label{sec:smeft}
We review in this section aspects of the SMEFT relevant for
our analysis of four-muon decays of the $Z$ boson. The SMEFT is an extension of the SM Lagrangian to include terms
suppressed by an energy scale $\Lambda$ at which the ultraviolet completion
becomes important and new particles beyond the SM appear. Truncating the expansion in $1/\Lambda$ at dimension-6, and
ignoring operators of odd-dimension which violate lepton number, we
have
\begin{equation}
{\cal L} = {\cal L}_{SM}+ \frac{1}{\Lambda^2}\sum_i C_{i} {\cal
O}_{i} + \ldots,
\end{equation}
where the ellipsis denotes operators of higher dimensions. The Wilson
coefficients $C_i$ defined above are dimensionless. When
computing the $Z$ boson decay width we consider only the leading
interference of the SM amplitude with the dimension-6 contribution.
This is consistent with our truncation of the SMEFT expansion above,
since the dimension-6 squared contributions are formally of the
same order in the $1/\Lambda$ expansion as the dimension-8 terms, which we neglect. The Wilson coefficients are renormalization-scheme dependent quantities. In an $\overline{\text{MS}}$ scheme they become scale-dependent and run with energy. As we perform only a leading-order analysis in this manuscript, we neglect this running.
Corrections to the $Z \to 4 l$ decay widths come from two
sources: shifts of the $Z \bar{l} l$ and $\gamma \bar{l}l$ vertices
that scale as $v^2/\Lambda^2$ where $v$ is the Higgs vev, and
four-fermion operators which scale as $E^2/\Lambda^2$ where $E$ is the
characteristic energy scale of the process. Note that the $\gamma \bar{l}l$ vertex is shifted from the SM expression in the $(G_{\mu},M_W,M_Z)$ input parameter scheme~\cite{Brivio:2017btx} adopted here since the electromagnetic coupling is shifted. We summarize in
Table~\ref{tab:ffops} the dimension-6 operators that shift the decay
width at leading-order in its perturbative expansion.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c||c|c|}
\hline
${\cal O}_{\substack{ll \\ prst}}$ & $(\bar{l}_p\gamma^{\mu} P_L l_r) (\bar{l}_s\gamma_{\mu} P_L l_t)$
& ${\cal O}_{\substack{le\\prst}}$ & $(\bar{l}_p\gamma^{\mu}P_L l_r) (\bar{l}_s\gamma_{\mu} P_R l_t)$ \\
${\cal O}_{\substack{ee\\prst}}$ & $(\bar{l}_p\gamma^{\mu} P_Rl_r)(\bar{l}_s\gamma_{\mu} P_Rl_t)$
&${\cal O}_{\phi WB}$ & $\phi^\dagger \tau^I \phi W_{\mu\nu}^I B^{\mu\nu}$ \\
${\cal O}_{\phi D}$&$(\phi^\dagger D^\mu \phi)^\ast(\phi^\dagger D_\mu \phi)$
&${\cal O}_{\substack{\phi e\\rs}}$ &$i(\phi^\dagger \overleftrightarrow{D}_\mu \phi)(\bar{l}_r \gamma^\mu P_R l_s)$ \\
${\cal O}_{\substack{\phi l\\rs}}^{(1)}$ &$i(\phi^\dagger \overleftrightarrow{D}_\mu \phi)(\bar{l}_r \gamma^\mu P_L l_s)$
&${\cal O}_{\substack{\phi l\\rs}}^{(3)}$ &$i(\phi^\dagger \overleftrightarrow{D^I}_\mu \phi)(\bar{l}_r\tau^I \gamma^\mu P_L l_s)$ \\
\hline
\end{tabular}
\caption{Dimension-6 operators contributing to the decay $Z\to 4l$. The last five operators only lead to overall shifts of the SM $Z\overline{l}l$ and $\gamma\overline{l}l$ vertices.\label{tab:ffops}}
\end{table}
Here, $l$ denotes a Dirac lepton, $\phi$ the Higgs boson, and $W$ and $B$ the field-strength tensors of the SU(2)$\times$U(1) gauge bosons. $p,r,s,t$ denote generation indices. We have introduced explicit projection
operators $P_{L,R}$ to denote the projections onto left-handed
doublets and right-handed singlets. All operators containing the Higgs field $\phi$ only shift the
$Z\bar{l}l$ and $\gamma\bar{l}l$ vertices and can be combined into shift constants $\delta g_{Z}^L$, $\delta g_{Z}^R$, $\delta g_{\gamma}^L$ and $\delta g_{\gamma}^R$ respectively. We note that $\delta g_{\gamma}^R=\delta
g_{\gamma}^L = \delta g_{\gamma}$. Explicit expressions for these shifts are given in Appendix~\ref{app:A}.
There are potentially additional dimension-6 contributions from operators modifying the total $Z$ boson decay width that appear in the denominator of the branching ratio. To study these effects we express each $Z$ boson partial width in terms of its dimension-4 and dimension-6 contribution:
\begin{equation}
\Gamma_i = \Gamma_i^{(4)}+\Gamma_i^{(6)},
\end{equation}
where the dimension-6 contribution is assumed to be small. The branching ratio for $Z \to 4\mu$ can then be written as
\begin{equation}
\text{BR}(Z\to 4\mu) = \frac{\Gamma(Z\to 4\mu)}{\sum_i \Gamma_i}\approx \frac{\Gamma^{(4)}(Z\to 4\mu)}{\sum_i \Gamma_i^{(4)}}\left[ 1+\frac{\Gamma^{(6)}(Z\to 4\mu)}{\Gamma^{(4)}(Z\to 4\mu)}-\frac{\sum_i \Gamma_i^{(6)}}{\sum_i \Gamma_i^{(4)}}\right]
\end{equation}
where the sum over $i$ includes all $Z$ boson decay modes and we have expanded to linear order in the dimension-6 corrections. The second term in the square bracket above comes from the dimension-6 corrections to the total decay width. Since $\sum_i \Gamma_i^{(4)} \gg \Gamma^{(4)}(Z\to 4\mu)$, the only significant corrections from this last term come from the large $Z$ partial decay widths. We assume that the dominant corrections come from the $Z \to \bar{f}f$ decay widths. The corrections to these widths come from shifts in the left and right-handed couplings of the $Z$ boson to the different fermions, and are analogous to the operators leading to the shifts of the leptonic vertices in Table~\ref{tab:ffops} above. We will study the effects from shifts to the $Z\to \bar{f}f$ decays and absorb them into global shift factors $\delta g^{L,R}_{Zf}$ for each fermion species.
Finally, we note that dipole operators that can potentially contribute vanish for massless fermions upon truncation of the EFT expansion to dimension-6, which we assume here. The explicit expressions for all decay widths in terms of the SMEFT coefficients can be found in Appendix~\ref{app:A}.
\section{Constraining the four-lepton operators} \label{sec:inc}
The coefficients parameterizing the SMEFT contributions to the decay $Z \to 4\mu$ depend on the matrix elements describing the process and the imposed experimental cuts. We evaluate them numerically at leading-order using the {\tt Madgraph} package {\tt SMEFTsim}~\cite{Brivio:2017btx}. The UV scale $\Lambda$ is set to the Higgs vacuum expectation value $v=246$ GeV for
comparison with previous results in the literature~\cite{Falkowski:2017pss}. Explicit expressions for the four-lepton decay widths are given in Appendix~\ref{app:B}. We express the deviation for the four-muon decay mode in terms of the normalized branching ratio
\begin{align}
\frac{\text{BR}(Z \to 4 \mu)}{\text{BR}_{SM}} = 1&+a_{ll}
C_{\substack{ll \\ 2222}}+a_{le} C_{\substack{le \\ 2222}}+a_{ee}
C_{\substack{ee\\2222}} +a_{Zl}^L \delta g_{Zl}^L +a_{Zl}^R \delta g_{Zl}^R
+a_{\gamma \mu} \delta g_{\gamma \mu} \nonumber\\
&+a_{Z\nu}^L \delta g_{Z\nu}^L+a_{Zu}^L \delta g_{Zu}^L+a_{Zu}^R \delta g_{Zu}^R+a_{d}^L \delta g_{Zd}^L+a_{d}^R \delta g_{Zd}^R ,
\label{eq:BRdef}
\end{align}
where we assume lepton and quark flavor universality for the vertex shift operators for clarity of this argument. We note from the expression above that this decay is directly sensitive to the four-muon couplings in the SMEFT, making it of interest for accessing these
operators.
We discuss next how well measurements of the inclusive $Z \to 4\mu$ decay
width can constrain the four-muon SMEFT operators defined in the
previous section. We first show that after accounting for the strong constraints on the vertex shifts from $Z\to 2f$ decays at LEP and other experiments,
measurements of the $Z \to 4 l$ decay are primarily
sensitive to leptonic four-fermion operators, which are not as strongly bounded yet. The relevant comparison to establish
whether $Z$ vertex shifts or four-fermion terms dominate the
SMEFT correction is the size of $a_i C_i$ for each term defined in
Eq.~(\ref{eq:BRdef}). We evaluate
the branching ratio imposing $80\,\text{GeV}<m_{4l}<100\,\text{GeV}$
and $m_{ll}>4$ GeV for all fermion
pairs, consistent with experimental analyses, and obtain the
results for the $a_i$ shown in Table~\ref{tab:ffopscoeff}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$a_{ll}$ & $a_{ee}$ & $a_{le}$ & $a_{Zl}^L$ & $a_{Zl}^R$ & $a_{\gamma\mu}$&$a^L_{Z\nu}$ &$a_{Zu}^L$&$a_{Zu}^R$&$a_{Zd}^L$&$a_{Zd}^R$ \\
\hline
0.025 & 0.016 & 0.009 & 4.2 & -3.3&4.0 &-0.40&-0.39&-0.071&-0.87&-0.027 \\
\hline
\end{tabular}
\caption{Results for the $a_i$ coefficients given the cuts $80\,\text{GeV}<m_{4l}<100\,\text{GeV}$
and $m_{ll}>4$ GeV. For comparison with the available $Z$-pole bounds we assume flavor universality for the vertex-shift operators.\label{tab:ffopscoeff}}
\end{table}
The lepton vertex-shift $a_{Zl}^{L,R}$ factors are in general two orders of magnitude larger
than the four-muon $a$ coefficients for the relevant experimental cuts. However, the
$\delta g_{Zl}^{L,R}$ are constrained to be $2 \times 10^{-4}$ or
smaller from LEP data~\cite{ALEPH:2005ab}.
The hadronic vertex shifts that enter the branching fraction through the total width are similarly constrained through the available LEP data, though the bounds are generally weaker than the ones on the leptonic coefficients by a factor $3$ to $5$. However, the shift factors $a_{Zu,d}^{L,R}$ are small, and these corrections are numerically negligible. The relevant bounds on the Wilson coefficients that enter the vertex shifts from the literature are adapted from~\cite{Dawson:2019clf} and are summarized in Table~\ref{tab:bounds}. The single combination of
four-muon couplings probed by neutrino trident production is only
constrained to the $2 \times 10^{-1}$ level~\cite{Falkowski:2017pss}, and the two orthogonal combinations are not bounded at all. Therefore the $a_i C_i$ factors for the four-muon couplings are
allowed to be at least an order of magnitude larger than those of the
vertex shifts. In what follows we assume the vertex shifts are
strongly constrained by other measurements and neglect them,
consistent with the above observation.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c| c|| c|c|}
\hline
$|C_{\phi D}|$ & $<0.0012$&$|C_{\phi l}^{(1)}|$&$<0.0006$ \\
$|C_{\phi WB}|$ & $<0.0017$ &$|C_{\phi l}^{(3)}|$&$<0.0029$ \\
$|C_{ll}|$ & $<0.0006$& $|C_{\phi e}|$&$<0.0003$ \\
$|C_{\phi q}^{(1)}|$ & $<0.0023$& $|C_{\phi q}^{(3)}|$&$<0.0005$ \\
$|C_{\phi u}|$ & $<0.0073$& $|C_{\phi d}|$&$<0.014$ \\
\hline
\end{tabular}
\caption{$68\%$ confidence-level (C.L.) bounds for a single Wilson coefficient. The UV scale is set to $\Lambda = 246\,\textrm{GeV}$. The bounds are derived assuming flavor universality. These numbers are adapted from Ref.~\cite{Dawson:2019clf}. \label{tab:bounds}}
\end{center}
\end{table}
\subsection{Single Wilson-coefficient constraints from inclusive LHC measurements}
Both ATLAS and CMS have performed measurements of the $Z \to 4l$
branching ratios~\cite{CMS:2012bw,Khachatryan:2016txa,Sirunyan:2017zjc,Aad:2014wra}. These experiments are summarized in
Ref.~\cite{Rainbolt:2018axw}, where a combination of existing
results is also given. The combined measurement of the
$Z \to 4l$ branching ratio is:
\begin{equation}
\text{BR}(Z \to 4l) = (4.58 \pm 0.26) \times 10^{-6}.
\end{equation}
The measurements are scaled via a Monte-Carlo simulation to the
following common phase-space region:
\begin{equation}
80\, \text{GeV}<m_{4l}<100\, \text{GeV},\;\;\; m_{l^+l^-}>4\, \text{GeV}
\end{equation}
where $m_{l^+l^-}$ refers to the invariant mass of any combination of
oppositely-charged leptons. As our interest is in the four-muon mode,
we convert this combination to a result for $\text{BR}(Z \to
4\mu)$ as follows. For each of the experimental measurements in Table~1 of Ref.~\cite{Rainbolt:2018axw} we scale the central value by the leading-order ratio
$\Gamma(Z \to 4\mu)/\Gamma(Z \to 4l)$ computed using {\tt Madgraph},
and the statistical uncertainty by $\sqrt{\Gamma(Z \to 4l)/\Gamma(Z
\to 4\mu)}$. Combining the results yields
\begin{equation}
\text{BR}(Z \to 4\mu) = (1.21 \pm 0.41) \times 10^{-6}.
\end{equation}
We have checked that including the correlation coefficients listed in
Table~2 of Ref.~\cite{Rainbolt:2018axw} does not
significantly change this result. We estimate the expected
constraints from an inclusive measurement at a high-luminosity LHC by
assuming that both statistical and systematic errors scale as
$1/\sqrt{{\cal L}}$, where ${\cal L}$ is the integrated luminosity.
We show the current constraints and those from a HL-LHC assuming 3000 fb$^{-1}$ below in
Table~\ref{tab:fit} for each four-muon Wilson coefficient turned on
separately. The current constraints on the Wilson coefficients from the inclusive branching ratio measurement are quite weak. They become much stronger with the full HL-LHC data set.
\begin{table}[ht]
\centering
\begin{tabular}{|c||c|c|}
\hline
& Current ($Z\to 4\mu$) & HL-LHC ($Z\to 4\mu$) \\
\hline
$|C_{\substack{ll \\ 2222}}|$ & $<10.5 $ &$< 1.0 $ \\
$|C_{\substack{ee \\ 2222}}|
$ & $<16.9 $ &$< 1.6 $ \\
$|C_{\substack{le \\ 2222}}|
$ & $<28.8 $ &$< 2.7 $ \\
\hline
\end{tabular}
\caption{Single parameter constraints on the Wilson coefficients of the four-$\mu$ operators at 68\% CL for both the current LHC data and a projection
based on the HL-LHC with a luminosity of 3 ab$^{-1}$. For the
projection of the uncertainties at the HL-LHC, we assume that all uncertainties scale as $\frac{1}{\sqrt{N}}$.
\label{tab:fit}}
\end{table}
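As an illustration of the $1/\sqrt{{\cal L}}$ scaling behind Table~\ref{tab:fit}, the projection of a single-parameter bound can be reproduced in a few lines; note that the current effective luminosity used below is a placeholder inferred from the ratio of the two columns, not a number quoted in the text:
\begin{verbatim}
import math

def project_bound(bound_now, lumi_now, lumi_future):
    # Bounds on Wilson coefficients scale linearly with the branching
    # ratio uncertainty, here assumed to shrink as 1/sqrt(luminosity).
    return bound_now * math.sqrt(lumi_now / lumi_future)

# With a placeholder current luminosity of ~27 fb^-1:
# 10.5 * sqrt(27/3000) ~ 1.0, matching the C_ll row of the table.
print(project_bound(10.5, 27.0, 3000.0))
\end{verbatim}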
We note that the choice of the phase-space constraint $m_{l^+l^-}$ was
not optimized for SMEFT studies. However, the effect of increasing
this cut does not lead to stronger constraints with the current LHC
data. We estimate this by using {\tt Madgraph} to compute the change
in branching ratio and consequently statistical error that occurs by
increasing the cut on $m_{l^+l^-}$.
Although increasing the cut increases the size of the SMEFT-induced
deviation since it grows with energy, the corresponding increase in
the statistical error overwhelms this growth and leads to weaker bounds.
\subsection{Complementarity with neutrino trident production}
\label{sec:ntrident}
Another constraint on four-muon operators comes from the
neutrino-trident production process $\nu_{\mu} \gamma^{*} \to
\nu_{\mu} \mu^+\mu^-$ which occurs in the Coulomb field of a heavy nucleus. Formulae for the deviation of this process
from SM predictions within the SMEFT framework are given in Ref.~\cite{Falkowski:2017pss}. We reproduce this deviation below:
\begin{align}
\frac{\sigma_\textrm{trident}}{\sigma_\textrm{trident}^\textrm{SM}} = 1 + \frac{2}{(1+4s_W^2+8s_W^4)}\frac{v^2}{\Lambda^2}&\left\{(C_{\substack{ll\\1221}} - C_{\substack{ll\\2222}}) (1 + 2 s_W^2) - 2 s_W^2 C_{\substack{le\\2222}} +
2 (\delta g^L_{W \mu}\right.\nonumber\\
&+ \delta g^L_{Z\mu} - \delta g^L_{Z\nu_\mu} + 2s_W^2 \delta g^L_{W\mu} + 2 \delta g^L_{Z\mu} s_W^2 +
2s_W^2 \delta g^R_{Z\mu} \nonumber\\
&\left.+ 8 s_W^4 \delta g^L_{Z\nu_\mu} - (1 + 2 s_W^2)\delta g^L_{We})\right\}
\label{eq:ntrident}
\end{align}
where $\delta g^L_{W\mu}$ is the shift to the $W\mu\nu_\mu$ vertex. Its explicit expression in terms of standard SMEFT operators is given in Appendix~\ref{app:A}. We see that the deviation depends on the following combination of four-muon
Wilson coefficients:
\begin{equation}
\hat{C}_{\overset{ll}{2222}} = C_{\substack{ll\\2222}}+\frac{2
s_W^2}{1+2s_W^2} C_{\substack{le\\2222}}.
\end{equation}
From
this we see that this measurement is proportional to only a single
combination of $C_{\substack{ll\\2222}}$ and $C_{\substack{le\\2222}}$,
and is insensitive to $C_{\substack{ee\\2222}}$. The $Z \to 4\mu$
decay is sensitive to all three operators in a different combination
than neutrino-trident production. Once differential measurements are
made with higher luminosities, all three four-muon Wilson coefficients
can be separately determined from a combination of neutrino-trident production and LHC data.
To demonstrate what can be learned from a combination of neutrino-trident production and inclusive $Z \to 4\mu$ measurements at the LHC we perform fits to the inclusive LHC measurement
and neutrino-trident production data from the experiments CCFR~\cite{Mishra:1991bv} and CHARM-II~\cite{Geiregat:1990gz}. We consider two
different choices of Wilson coefficients:
\begin{enumerate}
\item $C_{\substack{ll\\2222}}$ and $C_{\substack{ee\\2222}}$
non-zero;
\item $C_{\substack{le\\2222}}$ and $C_{\substack{ee\\2222}}$
non-zero.
\end{enumerate}
The results of these fits are shown in Figs.~\ref{fig:inccllcee}
and~\ref{fig:incclecee}. The solid bands refer to the constraints
from neutrino trident production. We see that this data is not sensitive to $C_{\substack{ee\\2222}}$.
The ellipses refer to current LHC constraints, and projections for 300
fb$^{-1}$ and 3000 fb$^{-1}$ of integrated luminosity. Including the LHC data removes the flat direction that occurs due to the insensitivity of neutrino-trident production to $C_{\substack{ee\\2222}}$. We note that
the constraints from neutrino trident production on $C_{\substack{ll\\2222}}$ and $C_{\substack{le\\2222}}$ are stronger than the current LHC bounds, with these coefficients constrained to
be less than unity while current LHC data only requires
$C_{\substack{ee\\2222}}\lesssim 20$. The power of the LHC measurement
increases with higher luminosities. With 3000 fb$^{-1}$ the constraints
on $C_{\substack{ee\\2222}}$ approach the level of the neutrino-trident production bounds on $C_{\substack{ll\\2222}}$ and $C_{\substack{le\\2222}}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{2d-Cll-Cee-NT}
\caption{68\% C.L. bounds on the combination of inclusive LHC $Z \to 4 \mu$ data and neutrino trident production assuming non-zero $C_{\overset{ll}{2222}}$ and $C_{\overset{ee}{2222}}$}
\label{fig:inccllcee}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{2d-Cle-Cee-NT}
\caption{68\% C.L. bounds on the combination of inclusive LHC $Z \to 4 \mu$ data
and neutrino trident production assuming non-zero $C_{\overset{le}{2222}}$ and $C_{\overset{ee}{2222}}$ }
\label{fig:incclecee}
\end{figure}
\section{Differential measurements with future LHC data} \label{sec:diff}
The fits in the previous section to the inclusive $Z \to 4 \mu$ measurement and
neutrino-trident production probe two independent combinations of the
three four-muon Wilson coefficients. With differential measurements
of $Z \to 4 \mu$, all three coefficients can be determined. Enough $Z \to 4 \mu$ events will be
available to allow for differential measurements with high-luminosity LHC data. Four-lepton final
states are defined by five angles~\cite{Gao:2010qx}: $\theta_1$,
$\theta_2$, $\theta^{*}$, $\Phi_1$, and $\Phi$. We illustrate their definitions in
Fig.~\ref{fig:angles}. When defining these angles we have several choices of
pairing each muon with an anti-muon. Labeling the muon momenta as
$p_1$, $p_2$ with $p_T(p_1) > p_T(p_2)$, and the anti-muons as $p_3$,
$p_4$ with $p_T(p_3) > p_T(p_4)$, we find that if we look at
single-differential distributions, the pairing that gives
the most discrimination between SMEFT-induced deviations and the SM is
$p_1$ with $p_4$ and $p_2$ with $p_3$.
\begin{figure}[htbp]
\centering
\includegraphics[width=.75\textwidth]{angles1}
\caption{Illustration of the angles characterizing the decay $Z\to 4\mu$ in the rest frame of the $Z$ boson. The nomenclature is adapted from~\cite{Gao:2010qx}. $Z'$ denotes the direction of the boost of the highest $p_T$ muon-system. Not shown is the angle $\Phi_1$ which is between the normal vectors of the planes spanned by the $Z$ axis and $Z'$ as well as the one spanned by the highest $p_T$ muons.}
\label{fig:angles}
\end{figure}
To demonstrate that this is the optimal pairing we perform simple
one-dimensional fits between the SM and the SMEFT with a single Wilson
coefficient turned on. We define the following test statistic:
\begin{equation}
\label{chisq}
\chi^2 \equiv \sum_{i=1}^{\# \rm{of \; bins}} {\frac{(N_i^{\rm SMEFT} - N_i^{\rm SM} )^2}{(\sigma_i^{\rm SM})^2}}
\end{equation}
where the number of bins is set to 10 and $N_i^{\rm SM} (N_i^{\rm SMEFT})$ stands for the number of SM (SMEFT) events in the $i$th bin.
$ \sigma_i^{\rm SM} =\sqrt{N_i^{SM}}$ represents the standard
deviation of the $i$th bin. For each of the four-muon Wilson
coefficients we use this test statistic to probe the sensitivity of each
variable to deviations for each pairing. The results are shown in
Tables~\ref{tab:chi13} and \ref{tab:chi14}, where we have highlighted
the three most discriminating cases for each pairing. We see that the
$p_1-p_4$ pairing is generally more sensitive, and that the most
discriminating variables are $\theta_1$ and $\theta_2$.
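A minimal implementation of this statistic, assuming the binned event counts are available as arrays, reads:
\begin{verbatim}
import numpy as np

def chi2(n_smeft, n_sm):
    # Sum over bins of (N_i^SMEFT - N_i^SM)^2 / N_i^SM,
    # with Poisson errors sigma_i = sqrt(N_i^SM).
    n_smeft = np.asarray(n_smeft, dtype=float)
    n_sm = np.asarray(n_sm, dtype=float)
    return float(np.sum((n_smeft - n_sm) ** 2 / n_sm))

# Evaluated on 10-bin histograms of each angle for each muon pairing,
# the largest values single out cos(theta_1) and cos(theta_2) for
# the p1-p4 pairing as the most discriminating observables.
\end{verbatim}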
\begin{table}[ht]
\centering
\begin{tabular}{c||c|c|c|c|c}
($l_1$, $l_3$) & $\cos\theta^*$ & $\cos\theta_1$ & $\cos\theta_2$ & $\Phi_1$ & $\Phi$ \\
\hline\hline
$C_{\substack{ll\\2222}}$ & 39.8 & \textcolor{red}{73.3} & 18.1 & 9.7 & 15.2 \\
$C_{\substack{ee\\2222}}$ & 37.3 & 41.6 & 14.0 & 17.0 & 16.9 \\
$C_{\substack{le\\2222}}$ & 16.0 & \textcolor{red}{51.0} & 18.7 & 10.2 & \textcolor{red}{76.3} \\
\hline\hline
\end{tabular}
\caption{$\chi^2$ values for the five single-differential distributions. $l_1$ and $l_3$ ($l_2$ and $l_4$) are grouped together in the same decay plane. The three largest $\chi^2$ values are highlighted.}\label{tab:chi13}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{c||c|c|c|c|c}
($l_1$, $l_4$) & $\cos\theta^*$ & $\cos\theta_1$ & $\cos\theta_2$ & $\Phi_1$ & $\Phi$ \\
\hline\hline
$C_{\substack{ll\\2222}}$ & 24.6 & 77.8 & 61.6 & 24.1 & 65.3 \\
$C_{\substack{ee\\2222}}$ & 11.6 & 44.9 & \textcolor{red}{102.1} & 21.0 & 69.6 \\
$C_{\substack{le\\2222}}$ & 6.6 & \textcolor{red}{375.2} & \textcolor{red}{335.5} & 25.5 & 48.7 \\
\hline\hline
\end{tabular}
\caption{$\chi^2$ values for the five single-differential distributions. $l_1$ and $l_4$ ($l_2$ and $l_3$) are grouped together in the same decay plane here. The three largest $\chi^2$ values are highlighted.
\label{tab:chi14}}
\end{table}
To determine the sensitivity of the future LHC data to four-muon
Wilson coefficients, and its complementarity with neutrino-trident
production, we perform fits to both data sets. We study both 300 fb$^{-1}$ and 3000 fb$^{-1}$ to mimic future LHC data sets.
We construct a two-dimensional differential distribution based on the variables $\theta_1$ and $\theta_2$, which were found above to be
the most sensitive to SMEFT-induced deviations. While a more sophisticated multi-variate analysis could potentially improve the results found here, we believe that our approach captures the essence of what can be learned from differential measurements. We define a $\chi^2$ function
as follows:
\begin{equation}
\label{chisq-joint}
\chi^2 \equiv \sum_{i=1}^{\# \rm{of \; bins}} {\frac{(N_i^{\rm theo} - N_i^{\rm exp} )^2}{(\sigma_i^{\rm exp})^2}} + \sum_{j} {\frac{(f_j^{\rm theo} - f_j^{\rm exp} )^2} {(\sigma_j^{\rm exp})^2}}
\end{equation}
where the first term accounts for predicted future LHC data for $Z\to 4\mu$. $i$ ranges from 1 to the
number of bins of a given differential distribution. In constructing
our binning we impose the requirement $N_i > 10$ so that we can assume
Gaussian errors. The cuts used in
Ref.~\cite{Sirunyan:2017zjc} are applied. We conservatively use the systematic uncertainty from Ref.~\cite{Sirunyan:2017zjc}, neglecting possible improvements with future LHC data, and assume that it is constant and uncorrelated for all bins. We stress that this is only a simple estimate of the LHC potential, and is meant to motivate more detailed future experimental studies. The statistical uncertainty of the $i$th bin is assumed to be $\sqrt{N_i}$. The second term in
Eq.~(\ref{chisq-joint}) accounts for the neutrino-trident experimental measurements discussed in Section~\ref{sec:ntrident}. $f_j^{\rm theo}$ denotes the theoretical prediction for the neutrino-trident cross section given in Eq.~(\ref{eq:ntrident}), while the $f_j^{\rm exp}$ are the experimental measurements from CCFR and CHARM-II. The $\sigma_j^{\rm exp}$ in the denominator denote the experimental errors.
To permit simple two-dimensional representations of our results we allow $C_{\substack{ll\\2222}}$ and $C_{\substack{le\\2222}}$ to be
non-zero. Only a single combination of these parameters can be
determined from neutrino-trident production, so this example will
study how well differential LHC measurements can help break the remaining degeneracy between Wilson coefficients that occurs given only the inclusive branching ratio measurement. For comparison we also fit to the inclusive LHC
measurement. The
results assuming differential LHC measurements are
shown in Fig.~\ref{fig:diffcllcle2}. For comparison the result assuming only an inclusive branching ratio measurement with 3000 fb$^{-1}$ is shown as well. The improvement going from inclusive to differential
measurements at the LHC is significant, with bounds on
$C_{\substack{le\\2222}}$ improving from ${\cal O}(10)$ to ${\cal
O}(1)$. This strong improvement is in large part due to the sign of the SMEFT deviations changing in different regions of $(\theta_1,\theta_2)$ space, which is partially averaged out in the inclusive analysis, while the differential analysis resolves the opposite-sign contributions. The flat direction in the $C_{\substack{le\\2222}}$ versus
$C_{\substack{ll\\2222}}$ plane present with just neutrino trident production leads to the elongated shape of the constraint ellipse in this figure. This is removed by the high-luminosity LHC data.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{2d-diff-Cll-Cle-com}
\caption{68\% C.L. bounds on the combination of differential LHC $Z \to 4 \mu$ data
and neutrino trident production assuming non-zero $C_{\overset{ll}{2222}}$ and $C_{\overset{le}{2222}}$ }
\label{fig:diffcllcle2}
\end{figure}
\section{Conclusions} \label{sec:conc}
In this paper we have studied what can be learned about four-muon
operators in the SMEFT from rare $Z \to 4\mu$ decays at the LHC.
Measurements of this decay mode constrain linear combinations of
Wilson coefficients not accessible in other processes. We determined
the constraints imposed on these coefficients from current
measurements of the inclusive branching ratio, and showed their
complementarity with existing constraints from neutrino-trident
production. Future differential measurements of $Z \to 4\mu$ have the
potential to completely determine the four-muon Wilson coefficients in
SMEFT. We show that strong bounds on all four-muon interactions can
be obtained assuming 3000 fb$^{-1}$ of integrated luminosity at the
LHC.
We have focused in this paper on the $Z\to 4\mu$ decay since experimental searches for this channel exist. However, measurements of the decay $Z\to2\mu 2\tau$ would probe SMEFT Wilson coefficients
such as $C_{\overset{ll}{2332}}$, $C_{\overset{le}{2332}}$, and $C_{\overset{ee}{2332}}$, while $Z\to 4\tau$ would probe $C_{\overset{ll}{3333}}$, $C_{\overset{le}{3333}}$, and $C_{\overset{ee}{3333}}$. Only $C_{\overset{le}{2332}}$ is weakly constrained by $\tau$-decays. The remaining coefficients are completely untested. The suggested rare $Z$-decays would provide the first tests
of this unknown sector of the SMEFT. To our knowledge these rare $Z$-decays into $\tau$-leptons have not been considered. However, searches for Higgs bosons into these final states have been performed at the LHC~\cite{Khachatryan:2015nba,Khachatryan:2017mnf}. We encourage the ATLAS and CMS collaborations to perform these searches with future data.
\section*{Acknowledgments}
We thank W.~Hopkins for helpful discussions. R.~B. is supported by the DOE contract DE-AC02-06CH11357. C.-Y.~C. is
supported by the NSF grant NSF-1740142. F.~P. and D.~W. are supported
by the DOE grants DE-FG02-91ER40684 and DE-AC02-06CH11357.
\section{Introduction}
\label{sec:introduction}
The origin of the dichotomy between radio-quiet and radio-loud
objects is still a matter of debate: is it a question of {\it nature} or of {\it nurture}? Both intrinsic differences in the central engine and extrinsic differences in the surrounding medium have been considered. It is intriguing that optical AGN (Seyfert-like type) are {\sl mainly} found in late-type, disk galaxies (both in the local and in the distant Universe, see e.g. \citealt{schawinski10}), while this is not the case for radio-loud AGN. These are {\sl mainly} hosted by
early-type galaxies (\citealt{best05} and refs therein).
The fact that Seyfert galaxies do have, on average, radio core powers lower than those of radio galaxies
suggests that they are weaker radio sources from the outset (see e.g. \citealt{bick98}) and points, therefore, to a connection with intrinsic
differences, e.g. in BH mass and/or spin. However, the characteristics of
the inter-stellar medium (ISM) in which a radio source is born appear
also to play a role in its further evolution.
It has been suggested, for example, that entrainment by the radio jet may
have some influence on the level of kiloparsec-scale radio emission
\citep{bick98} and/or that in radio sources hosted by spiral
galaxies, even initially relativistic jets can be rapidly
decelerated by collisions in a dense surrounding broad line region or
ISM (see e.g. \citealt{taylor89}). Connected to this, the
possibility that the radio emission could actually be temporarily
enhanced by such interactions \citep[see e.g.][]{gopal91} further
complicates our classification of radio sources.
Although it is extremely rare for nearby radio galaxies to be hosted by genuine disk galaxies, a few
examples do exist that have been studied in detail (see Table \ref{tab:list} for a summary). For example, the spiral galaxy 0313-192 in
the Abell cluster A428 hosts a large double-lobed Fanaroff-Riley I (FR
I) radio source \citep{ledlow98,ledlow01,keel06}, while NGC~612 is an
S0 galaxy with a powerful FR-I/FR-II hybrid radio source and a
large-scale star-forming H{\,\small I}\ disk \citep{veroncetty01,emonts08}. B2~0722+30 is another FR I radio
galaxy whose jets are mis-aligned to the galaxy disk, but appear to be
aligned with an H{\,\small I}\ bridge towards a nearby, interacting pair of
galaxies \citep{emonts09}. Other possible examples include 3C~293
\citep{vanbreugel84} and 3C~305 \citep{heckman82}, whose host galaxy morphologies
are highly disturbed due to recent galaxy interactions, but show disk-like
characteristics. An other very recent example is the intermediate redshift disk-like objects that shows spectacular radio lobe structures on scales of several 100 kpc suggestive of relicts \cite{hota11}.
These rare cases of disk-dominated radio galaxies provide an excellent
opportunity for studying the host galaxy properties and environmental
effects that could be important for the triggering and/or evolution of
their radio sources. In addition, a detailed knowledge of these
systems provides valuable information for a comparison with
studies at high redshift. Some of these studies are now in progress,
exploring whether disk-like host galaxies may be much more common at
high than at low redshift \citep{norris08}. However, much more
will be done in the near future, due to planned large radio and optical
surveys.
\begin{figure*}
\centerline{\psfig{file=morganti_Fig1.eps,width=16cm,angle=0}}
\caption{{\sl Left}: Optical $r'$-band GMOS-S image of PKS~1814-637 obtained from the
Gemini South. The bright peak coincides with a foreground
star. Despite the presence of the star, the extended disky component and
dust-lane structure in the galaxy are clearly visible. {\sl Right}:
Composite radio map showing
the 2.3~GHz VLBI image (Tzioumis et al. 2002) with the
8.4 GHz (i.e. high resolution) image from Ojha et al. (2004)
superimposed. This overlay illustrates the presence of a compact component (SE of
the brighter lobe) that becomes prominent in the high frequency
observations: we identify this component with the radio core.
The image at 2.3~GHz
was obtained from the SHEVE array; the peak
level is 1.7 Jy and contours are shown at
-1.5,1.5,3,6,12,18,35,50,65,80 \% of the peak (from Tzioumis et
al. 2002).}
\label{fig:img1814s}
\end{figure*}
In this paper we present a detailed study of a newly found, extreme
example of a radio source hosted by a disk galaxy (see Fig. \ref{fig:img1814s}). Unlike
all objects mentioned above, the radio power of PKS~1814-637 (P$_{\rm{5 GHz}} = 4.1 \times 10^{25}$ W
Hz$^{-1}$, \citealt{tadhunter93, morganti03, morganti01}) falls well above the radio power boundary between FRI and FRII radio sources
(P$_{\rm{5 GHz}} \sim 10^{25}$ W Hz$^{-1}$; \citealt{fanaroff74}). For
comparison, its radio power is two orders of magnitude higher than the most powerful
radio Seyfert (NGC~1068, P$_{\rm{5 GHz}} \sim 10^{23}$ W Hz$^{-1}$), and more than a factor of four higher than the next most powerful radio source hosted by a disk galaxy (see Table \ref{tab:list}). Interestingly, radio-loud narrow line Seyfert 1 (NLS1) galaxies can reach radio powers comparable to PKS~1814-637 (see e.g. \citealt{foschini11} and refs therein), but because their radio emission is likely dominated by beamed jet emission, their {\sl intrinsic} jet (and extended radio lobe) powers might be orders of magnitude lower, making them more similar to Seyferts or low power FRIs.
Because of all this, PKS~1814-637 stands out as a particularly interesting object.
\begin{table}
\begin{tabular}{llll}
\hline
Object &Redshift($z$) &P$_{5~GHz}$ &Reference \\
& &W Hz$^{-1}$ & \\
\hline
NGC~612 &0.0298 &$8\times10^{24}$ &1\\
0313-192 &0.0671 &$9\times10^{23}$ &2,3,4\\
B2~0722+30 &0.0188 &$6\times10^{22}$ &4,5\\
3C293 &0.0450 &$9\times10^{24}$ &6\\
3C305 &0.0416 &$4\times10^{24}$ &7 \\
Speca & 0.1378 & $3\times10^{24*}$ & 8 \\
PKS~1814-637 &0.0641 &$4\times10^{25}$ &9 \\
\hline
\end{tabular}
\caption{The properties of powerful radio sources hosted by disk galaxies.
References: 1. \citet{emonts08}; 2. \citet{ledlow01}; 3. \citet{keel06}; 4. NASA
Extragalactic Database; 5. \citet{emonts09}; 6. \citet{vanbreugel84}; 7.
\citet{heckman82}; 8. \citet{hota11}; 9. this paper.
$^*$ Total flux derived from NVSS at 1.4~GHz and extrapolated to 5~GHz assuming $\alpha=0.7$.}
\label{tab:list}
\end{table}
From the radio perspective, PKS~1814-637 is a Compact Steep Spectrum (CSS)
radio source \citep{tzioumis02} (see also Fig.~\ref{fig:img1814s}, right), of about 480 pc linear size\footnote{Throughout this paper
we use a Hubble constant $H_{\rm o}$= 70 km s$^{-1}$ Mpc$^{-1}$ and
$\Omega_\Lambda=0.7$ and $\Omega_{\rm M} = 0.3$. At the distance of PKS~1814-637 this results in 100 mas =
120 pc, \cite{wright06}.}. The consensus is that the majority of such
sources are young radio sources. However, this class can also include
cases that are unable to become large due to confinement of the radio source by the dense ISM (see e.g.
\citealt{reynolds97,kunert04,orienti10b} and refs therein). Finally, H{\,\small I}\ in absorption with high optical depth has been detected
in PKS~1814-637 \citep{veroncetty00,morganti01} and has motivated the
VLBI follow up that is presented in this paper.
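The angular scale quoted in the footnote follows directly from the adopted cosmology and can be checked in a few lines (using astropy, with the footnote's parameters as the only inputs):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # flat, so Omega_Lambda = 0.7
z = 0.0641                             # redshift of PKS 1814-637

scale = cosmo.kpc_proper_per_arcmin(z).to(u.pc / u.mas)
print(scale)                          # ~1.2 pc/mas: 100 mas ~ 120 pc
print((480 * u.pc / scale).to(u.mas))  # ~400 mas linear size
\end{verbatim}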
Here we will use new optical, IR and H{\,\small I}\ observations of PKS~1814-637
to investigate the morphology of the host galaxy,
study its ISM, and attempt to understand why such a
strong radio source is located in this unusual host.
The paper is organised in the following way. In Section 2 we
characterise the optical morphology by analysing deep Gemini optical
images recently presented in \cite{ramos11} as part of a larger deep
optical imaging survey of the 2~Jy sample. In Section 3, we derive
an accurate systemic velocity of the galaxy by
analysing previously unpublished near-IR spectroscopic data and by reanalysing our
optical spectroscopy results \citep{holt08}. We also include an investigation of the emission line kinematics and discuss the optical spectral classification of the AGN. In Sec. \ref{sec:spitzer}
we discuss our recent mid-IR Spitzer data (see also \citealt{dicken11}) and in Sec. \ref{sec:radio} we
discuss new radio VLBI observations obtained with the Australian Long Baseline Array (LBA). We bring all the results together
in Section 6 and we further discuss the implication for high-$z$ observations in Sec. 7.
\section{Host galaxy morphology}
\label{sec:modelling}
\begin{figure*}
\centerline{
\begin{tabular}{ccc}
\hspace*{-1.5cm}\psfig{file=morganti_Fig2a.ps,width=6cm,angle=0} &
\hspace*{-1.5cm}\psfig{file=morganti_Fig2b.ps,width=6cm,angle=0} &
\hspace*{-1.5cm}\psfig{file=morganti_Fig2c.ps,width=6cm,angle=0} \\
(a) & (b) & (c) \\
\end{tabular}
}
\caption{{\sc galfit} modelling results. (a) Gemini Optical $r^{\prime}$-band GMOS-S image contours of PKS1814-637, (b) best fitting
{\sc galfit} model and (c) residuals. See Section 2 for details. Note that the orientation of these images differs from that of the image displayed in Fig. \ref{fig:img1814s}.}
\label{fig:galfit}
\end{figure*}
Although the morphology of the host galaxy of PKS~1814-637 already
appears to be exceptional when compared to other radio galaxies
(cf. deep imaging of the 2Jy sample presented in \citealt{ramos11}), we now quantify this via modelling of the optical morphology.
Deep optical broad ($r'$) band images of PKS~1814-637 were obtained using the
Gemini Multi-Object Spectrograph South (GMOS-S) on the 8.1-m Gemini South telescope at Cerro Pach{\'{o}}n, Chile (see Figure 1). For
a full discussion of the observations and data reduction procedure we refer readers to \citet{ramos11}.
We have modelled the optical image using {\sc galfit} (\citealt{peng02,peng10};
version 3.0)\footnote{{\sc galfit} is a well-documented two-dimensional fitting algorithm which allows
the user to simultaneously fit a galaxy image with an arbitrary number
of different model components, in order to extract structural parameters of the galaxy.
The model galaxy is convolved with a point spread function (PSF) and, using the
downhill-gradient Levenberg-Marquardt algorithm, is matched to the observational
data via the minimization of the $\chi^2$ statistic.}.
This has been a challenging process because of the dust lane, as well as the presence
of the bright foreground star 1.53 arcsec in projection from the
nucleus of the radio galaxy. Shorter
exposure time images were used in the modelling in order to avoid saturation effects. We derived a
PSF profile by extracting 2D images of stars in the
GMOS-S image, normalizing to unit flux and taking an average profile weighted
by the signal-to-noise ratio of the component extracted stellar profiles.
The host galaxy was modelled over an 84$\times$84 kpc$^2$ area using
a S\'ersic profile \citep{sersic63}, and two Gaussian components for fitting the
foreground star and the unresolved nuclear point source emission from the AGN.
All model parameters, including the host galaxy, star and AGN centroids
were allowed to vary freely. We also left the residual background
level as an additional free parameter. In order to obtain a reasonable
fit, it was necessary to mask the dust lane.
Since the galaxy is in a crowded field, we also iteratively modelled all neighbouring stars
and galaxies that interfere with the host galaxy model fit. Once a good model for these adjacent
objects had been obtained, their parameters were held fixed, effectively removing them from consideration.
The final reduced-$\chi^2$ value was determined after repeatedly modelling
all other objects in the field of view, resulting in a near-ideal value of 1.053.
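To illustrate the kind of decomposition performed here, the sketch below sets up an analogous fit with astropy's modelling framework rather than {\sc galfit} itself; the synthetic image, initial parameter values and dust-lane mask are placeholders, and the PSF convolution step performed by {\sc galfit} is omitted for brevity:
\begin{verbatim}
import numpy as np
from astropy.modeling import models, fitting

# Composite model as in the text: Sersic host + Gaussian foreground
# star + Gaussian unresolved AGN (all values are placeholders).
host = models.Sersic2D(amplitude=1.0, r_eff=30.0, n=2.0, x_0=100.0,
                       y_0=100.0, ellip=0.5, theta=1.0)
star = models.Gaussian2D(amplitude=20.0, x_mean=90.0, y_mean=110.0,
                         x_stddev=2.0, y_stddev=2.0)
agn = models.Gaussian2D(amplitude=2.0, x_mean=100.0, y_mean=100.0,
                        x_stddev=2.0, y_stddev=2.0)
model = host + star + agn

yy, xx = np.mgrid[0:200, 0:200]
image = model(xx, yy) + 0.01 * np.random.default_rng(0).normal(size=xx.shape)

mask = np.abs(yy - xx) > 3          # stand-in for masking the dust lane
fitter = fitting.LevMarLSQFitter()  # Levenberg-Marquardt, as in galfit
best = fitter(model, xx[mask], yy[mask], image[mask])
print(best.r_eff_0.value, best.n_0.value)  # host R_eff, Sersic index
\end{verbatim}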
The best fit to the optical image of PKS~1814-637 is a S\'ersic profile with index $n=2$, which is shown in Figure
\ref{fig:galfit}. Models using profiles typical of elliptical galaxies (e.g. S\'ersic $n=4$) fail to fit the optical image of PKS~1814-637. The estimated
effective radius of the single $n=2$ model is $R_{eff}$=6.4 kpc,
and the position angle of its major axis $PA=-57\degr$, ellipticity $b/a=0.48$,
and magnitude
$r^{\prime}=15.75$ mag. The AGN magnitude was found to be $r^{\prime}$=20.98 mag and that of the
foreground star $r^{\prime} = 15.47$ mag.
Other models were also tried, such as excluding the very faint AGN component and
using a two-component S\'ersic fit. None of these alternatives provided a better fit to
the data, and so the simplest model providing a good fit is preferred.
Figure \ref{fig:galfit} shows the contour plots of only the central 50$\times$50 kpc$^2$ of
the modelled region, together with contours of the best-fitting model overlaid on a grey-scale
of the model-subtracted residual images in the same region. The residuals of the fit clearly show the central dust lane, aligned with a disk of
$\sim$45 kpc in length that is heavily warped at its extremes. The fact that the dust lane intercepts the bulge of the galaxy close to the position of the nucleus implies that the inner disk of the galaxy is observed close to edge-on ($i > 80\degr$).
To summarise, {\sc galfit} modelling is consistent with PKS 1814-637 having a strong disk morphology, confirming the results based on
visual examination of the images. In addition, the heavily warped outer parts of the disk provide evidence
that the galaxy has undergone a recent galaxy interaction, perhaps related to the triggering of the
activity.
\begin{figure}
\centerline{\psfig{file=morganti_Fig3.ps,width=8cm,angle=0.}}
\caption[Nuclear [O{\,\small III}]\ line profile.]{Model fit to the
[O{\,\small III}]$\lambda\lambda$4959,5007 emission-line doublet in the nuclear aperture. The solid lines represent the
star-subtracted spectrum and the overall model fit to the doublet. The components are: narrow (dotted) and
broad (dashed). The two components are overplotted on the radial
velocity profile in Figure \ref{fig:kinematics}. }
\label{fig:o3}
\end{figure}
\section{Re-deriving the systemic velocity }
\label{sec:optical}
In order to understand the role of the various gaseous components
in PKS~1814-637, it is essential to determine an accurate
systemic velocity. Like the morphological study, the spectroscopic investigation of PKS~1814-637 is complicated by
the presence of the bright foreground star close to the nucleus of
the galaxy. As shown in \cite{holt08}, the nuclear emission line profiles
can be modelled using two Gaussian components (see also Figure 3). The
narrowest component is spectroscopically unresolved, and the broader
component has a FWHM of 569 $\pm$ 35 km s$^{-1}$.
\begin{figure}
\centerline{\hspace*{-0.5cm}\psfig{file=morganti_Fig4.ps,width=9cm,angle=0}}
\caption[Redshift comparison.]{Comparison of the redshifts derived
from the various observations. The horizontal lines represent the
original (dotted; from \protect\citealt{holt08}) and new (solid; this paper)
interpretations of the systemic velocity. The vertical dot-dashed line
labelled `shallow' marks the range of velocities over which the
shallow H{\,\small I}\ absorption is observed \protect\citep{morganti01}. The data are also summarised
in Table 1.}
\label{fig:kinematics}
\end{figure}
As for many other sources in their sample of compact radio sources, \cite{holt08} assumed that the optical narrow component
in PKS~1814-637 represents
the systemic velocity. However, this result has always been considered uncertain: only half of the optical
radial velocity profile (or `rotation curve') is observed due to the presence of
the bright foreground star, which completely dominates the flux on
one side of the galaxy (see Figure 1 of \citealt{holt08} and Figure B9 of \citealt{holt09}). Here we re-analyse the
optical spectroscopic data with the help of other available data.
Figure \ref{fig:kinematics} shows the redshifts derived from the two
optical components from \cite{holt08} and from the deep and shallow
H{\,\small I}\ components from \citet{morganti01}, see also
Sec. \ref{sec:radio} for more details. Also plotted are the
redshifts derived from new Spitzer data (see Sec. \ref{sec:spitzer}) and
near-IR data from NTT/SOFI (M. Bellamy, priv. comm.).
Combining the new data and the original optical data, it is clear
from Fig. \ref{fig:kinematics} that a more
likely interpretation of the kinematics is that the detected optical
broad component is consistent with the systemic velocity, as
this is also consistent with all other measured redshifts. While
the optical broad component has a relatively large FWHM, this can
be explained in terms of unresolved rotation in the inner disk.
In this interpretation,
the optical narrow component is associated with the (quiescent) disk of the galaxy
rotating away from the observer on the west side of the galaxy; the rotation
of the large-scale disk is
not clearly detected on the east side of the nucleus due to the
bright foreground star, but it is detected in the nuclear aperture along with the
broad component (see Figure 3). The failure to detect a blueshifted narrow component
in the nuclear aperture (corresponding to the part of the extended disk rotating towards us)
is likely to be due to a combination of uneven gas emission and dust obscuration.
Hence, in our new kinematic interpretation, no nuclear outflow is observed
in this source, making it unusual amongst compact radio
sources (c.f. \cite{holt08}). The new, heliocentric corrected systemic redshift of
PKS 1814-637 is 0.06412$\pm$0.00014. The kinematic offsets of the various
components have been calculated with respect to this new systemic
redshift and are presented in Table \ref{tab:kinematics}.
\begin{figure}
\centerline{\psfig{figure=morganti_Fig5.eps,width=7cm,angle=0}}
\caption{Plot of [O{\,\small III}]\ luminosity versus 5~GHz radio power. The points represent radio sources from the 2Jy sample (Dicken et al. 2011), with the CSS/GPS objects marked as stars. }
\label{fig:o3_rad}
\end{figure}
\section{Optical spectral classification}
Recently there has been much speculation that the optical spectral classification of a radio galaxy is strongly correlated with the rate or mode of accretion of material onto its central supermassive black hole. Radio galaxies with strong optical emission lines (also labelled ``high excitation galaxies'', HEGs), including narrow line radio galaxies (NLRG), broad line radio galaxies (BLRG) and quasars, are thought to be energised by the accretion of cool/warm material via a thin accretion disk. On the other hand, weak line radio galaxies (WLRG, sometimes labelled ``low excitation galaxies'', LEGs) may be powered by the Bondi accretion of the hot phase of the ISM. Details about this classification can be found e.g. in \cite{hardcastle06,buttiglione10}. In this context, it is interesting to consider the optical spectral classification of PKS~1814-637.
In terms of its emission line luminosity, PKS~1814-637 falls (by an order of magnitude) well below
the correlation between emission line luminosity and radio power (see Fig. \ref{fig:o3_rad}). In this sense
it is similar to the WLRG, which are defined to
have small [OIII]$\lambda$5007 emission line equivalent widths ($EW_{[OIII]} < 10$ \AA, see
Tadhunter et al. 1998). However, in contrast to the other WLRG in the 2Jy sample, our previous spectroscopic
investigation of PKS~1814-637 \citep[see][]{holt08} suggested a higher equivalent width and a classification as a NLRG.
Unfortunately, our previous estimate of the equivalent
width derived from the long-slit spectrum was potentially hampered by the (uncertain) subtraction of the continuum associated with the
bright star near the nucleus. Therefore we have re-estimated it using better data. Our Gemini images, which have good seeing, allow more reliable
subtraction of the star and determination of the galaxy continuum flux
in the aperture used for the spectroscopic observations.
They confirm that the nuclear continuum level is similar to that in the spectrum presented in \cite{holt08}. Combining this information
with the most reliable estimate of the [O{\,\small III}]\ emission line flux (Tadhunter et al. 1993), we find that PKS~1814-637 has an [O{\,\small III}]\ equivalent width in the range 50--100\,\AA, whereas we define WLRG to have EW$_{\rm [OIII]} <$ 10\,\AA. Therefore this
object is truly ambiguous: it appears like a NLRG in terms of [O{\,\small III}]\ EW, but
more like a WLRG in terms of the [OIII] emission line luminosity;
PKS~1814-637 is classified as a NLRG, despite its low L$_{\rm [OIII]}$, because it
has an unusually low (stellar) continuum flux. We can
naturally link this with the unusual morphology of the host
galaxy: a disk galaxy with a relatively low central surface brightness, rather
than an elliptical with a high central surface brightness.
It is interesting that the emission line luminosity of PKS~1814-637
(and indeed other WLRG) falls within the range measured for Seyfert
galaxies in the local Universe. This leads to the intriguing
conclusion that {\sl if it were not for the radio data,
PKS~1814-637 would have been classified as a typical Seyfert galaxy in
terms of its optical and mid-IR spectra (see below), and optical
morphology}.
\begin{table*}
\caption[Gas kinematics.] {Summary of the kinematic data. Columns are:
(1) measured emission/absorption line, (2) emission/absorption line
component, (3 \& 4) velocity width (FWHM) and error in km s$^{-1}$, (5 \& 6)
velocity shift and error (km s$^{-1}$) with respect to the assumed
systemic velocity (taken to be the nuclear broad component of {[O
III]}; see Section \ref{sec:optical}), (7 \& 8) redshift and error with respect to
the broad component of {[O III]} and (9) references: H08:
\protect\citet{holt08}; M01: \protect\citet{morganti01}; D11: Dicken
et al. 2011; V0: \protect\citet{veroncetty00}. $^{a}$ Full
width at zero intensity (FWZI) of the broad, shallow
absorption. $\dagger$
\citet{veroncetty00} quote the FWZI; we have estimated
the FWHM from their figures, and also list their data here.}
\label{tab:kinematics}
\begin{center}
\begin{tabular}{ll rrrrrrr}\hline\hline
Lines & Component & Velocity & $\Delta$ & Velocity & $\Delta$
& $z$ & $\Delta$ & Ref\\
& & width & & shift & & & \\
&& (km s$^{-1}$) & (km s$^{-1}$) &(km s$^{-1}$) &(km s$^{-1}$) & \\
(1) & (2) & (3)& (4)& (5)& (6)& (7) & (8) & (9)\\ \hline
\multicolumn{9}{l}{\bf Optical} \\
{[O III]} & n & unres & & +162 & 21& 0.06466 & 0.00007 & H08\\
&b & 569 & 35 & 0 & 20 &0.06412 &0.00014 & H08\\
& 1 Gaussian & 411 & 17 & & & & &H08\\
\\
\multicolumn{9}{l}{\bf Radio}\\
H{\,\small I}\ & n & $\sim$50 & & -24 & &0.06404&&M01 \\
 & b & $\sim$280$^{a}$ & &162 to -119 & & &&M01 \\
 & & 62 $\dagger$ & & -192$\dagger$ & & && V0$\dagger$\\
\\
\multicolumn{9}{l}{\bf Mid-IR} \\
{[Ne II]}, {[Ne III]} & 1 Gaussian & &&-87&60& 0.06383 &0.00020 & D11 \\
{[O IV]}, H$_{2}$ \\
\\
\multicolumn{9}{l}{\bf near-IR} \\
Paschen $\alpha$ & & & & -126 & 60 & 0.06370 & 0.00015 & \\
{[Fe II]} & &&&-6 & 30 & 0.0641 & 0.0001 & \\
S(3) &&&&-36 & 90 & 0.0640 & 0.0003 & \\
S(1) &&&&-36 & 60 & 0.0640 & 0.00015 &\\ \hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}
\centerline{\psfig{figure=morganti_Fig6.eps,width=16cm,angle=0}}
\caption{{\it Spitzer} Mid-infrared spectrum of PKS~1814-637. Fine
structure lines are indicated as well as the position of strong PAH
emission bands at 6.2, 7.8, 8.6 and 11.3 microns. Note the strong 10 micron silicate absorption feature.}
\label{fig:PKS1814_Spitzer}
\end{figure*}
\section{The ISM of PKS~1814: Spitzer mid-IR data}
\label{sec:spitzer}
Figure \ref{fig:PKS1814_Spitzer} shows the Spitzer IRS spectrum
obtained as part of our multi-wavelength investigation of the 2~Jy
sample (Dicken et al. 2011). The spectrum shows prominent fine
structure lines, as well as PAH and H$_2$ emission features, and a silicate absorption band.
The detection of emission lines in the mid-IR spectra of radio
galaxies is common, and the presence of high ionisation potential lines, such as
the [OIV] and [NeV] lines, clearly detected in PKS~1814-637,
indicates a significant AGN photoionised component (see e.g. \citealt{ogle10}
and refs. therein). However, the
detection of H$_2$ and PAH features is rare
in radio galaxies in general --- only 5 objects (12\%) in the 2Jy
sample show significant H$_2$
emission, and only 10 objects (23\%) have detected PAH bands
\citep{dicken11}.
The most prominent feature in the mid-IR spectrum of
PKS~1814-637 is the 10\um\ silicate absorption feature. The absorption
depth of this feature in PKS~1814-637 is relatively high ($\tau_{10\mu m} = 0.48$)
compared with those measured for the 40\% of NLRG in the 2~Jy sample that show silicate absorption in their mid-infrared spectra.
In the context of orientation based unification
schemes for AGN, the detected silicate absorption is consistent with
an AGN viewed edge-on that is obscured by circum-nuclear
dust, i.e. a NLRG. However, the 10\um\ silicate
absorption feature in PKS~1814-637 is unique among the CSS sources in the 2~Jy sample. This fact may be related to the large-scale dusty disk
component that is more prominent in the case of
PKS~1814-637 than in any other radio galaxy in the 2~Jy sample.
Overall, the mid-infrared spectral data for PKS~1814-637 provide
evidence for the presence of a rich ISM, with a range of ionized gas, molecular
and dust features detected. Although strong PAH and H$_2$ features (but not silicate absorption: see above) appear to be a relatively common feature of
CSS sources \citep{dicken11}, their detection is rare for powerful radio galaxies in general. Indeed, the overall character
of the mid-IR spectrum of PKS~1814-637 --- with its mix of high and low ionization
fine structure, H$_2$, silicate and PAH features --- shows greater similarity
to Seyfert galaxies \citep{gallimore10,baum10} than it does to typical radio galaxies \citep{ogle06,dicken11},
as already pointed out in the previous section.
\section{The ISM of PKS~1814: radio continuum and 21-cm atomic neutral hydrogen}
\label{sec:radio}
\subsection{Previous radio continuum VLBI observations}
PKS~1814-637 was observed in VLBI at 2.3 GHz by \cite{tzioumis02}. Fig. \ref{fig:img1814s} (right)
shows the complex structure of the source which has an overall extent of $\sim 400$ mas (i.e. $\sim 480$ pc).
Two lobe-like structures with very different morphologies are observed: the southern region is dominated by two components with
similar brightness embedded in a relatively large diffuse component, while the northern region shows a prominent bright component with possible
North-South symmetrical extensions. Just over 50\% of the total flux density of the source is detected in the VLBI observations, indicating the
presence of a more diffuse extended component (but limited to the
arcsec scale as no extended component was reported from the ATCA
observations).
A weak radio continuum component is also observed (at the 5\% level)
between the two major lobes in the 2.3~GHz image, which corresponds to
a 15$\sigma$ detection. Interestingly, 8.4~GHz VLBI observations
\citep{ojha04} confirm the presence and prominence of this component
at high frequency, suggesting this component is likely to be the radio
core in PKS~1814-637 (see Fig. \ref{fig:img1814s} right).
As a final remark, it is worth mentioning that the radio structure is
not perpendicular to the galactic disk and dust lane (see
Fig. \ref{fig:img1814s}) as in many dust lane galaxies (e.g. Cen A),
but forms an angle of about 50$^\circ$ (projected) to the galactic disk and dust
lane.
\begin{figure}
\vspace{-3mm}
\centerline{\psfig{figure=morganti_Fig7.ps,width=8cm,angle=0}}
\vspace{0.3mm}
\caption{PKS~1814-637 at 1335 MHz as obtained from the line-free channels of
the observations presented in this paper. }
\label{fig:img1814_1335}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=morganti_Fig8.ps,width=8cm,angle=0}}
\vspace{0.3mm}
\caption{Comparison of the ATCA (black) and LBA (red) integrated H{\,\small I}\ absorption profiles.}
\label{fig:HIcomparison}
\end{figure}
\subsection{LBA observations: data reduction and radio continuum}
H{\,\small I}\ observations centred at 1336~MHz were obtained with the Australian Long Baseline Array (LBA) on 27 Nov 1998.
The array comprised four stations; Parkes (64 m), Mopra (22 m), the Australia Telescope Compact Array (5$\times$22-m dishes as
tied-array), and the Mount Pleasant 26-m antenna of the University of Tasmania. We used a recording band of 8~MHz width in each
circular polarisation and 256 spectral channels.
The editing and part of the calibration of the data were done in {\sc aips}, and the data were then transferred to {\sc miriad} \citep{sault95} for the bandpass calibration. The calibration
of the bandpass was performed using additional observations of the strong
calibrator PKS~1921--293.
The resulting velocity resolution is $\sim 7$ $\,$km$\,$s$^{-1}$\ before Hanning
smoothing.
The line cube was made using uniform weighting after subtracting
the continuum emission from the $uv$-data using the line-free channels. The noise per channel is $\sim 4$ mJy beam$^{-1}$\ after
Hanning smoothing and the restoring beam size is $33 \times 16$ mas (p.a.\ = $-72.4^{\circ}$).
A continuum image was obtained using the
line-free channels. The image is shown in Fig. \ref{fig:img1814_1335} and it was obtained using {\sc aips}
and {\sc difmap}. The beam size is $36 \times 18$ mas (p.a.\ = $- 76^{\circ}$). These data were originally intended to provide information
about the spectral index (in combination with the 2.3~GHz data). However, the data quality prevented us from fully achieving this goal.
The rms noise in the image is $\sim 12$ mJy/beam. Although the quality is clearly inferior to that of the 2.3~GHz image of Tzioumis et
al. (2002), with an rms noise more than twice as high, the 1.3~GHz image confirms the overall structure of the
source.
The total flux detected is $S_{\rm 1.3~GHz} = 11.5$ Jy. Compared with the 13.5 Jy detected with ATCA observations it
confirms that a fraction of the flux (at least 2 Jy,
i.e. 15\%) is likely undetected because it originates in diffuse, low surface
brightness emission. Because of the calibration problems mentioned above, we could only attempt an estimate of the
integrated spectral index (using the 1.3~GHz and 2.3~GHz images convolved to the
same restoring beam). Between 1.3~GHz and 2.3~GHz we obtain values of
$\alpha^{1.3}_{2.3} \sim -1.4$ for the southern lobe and
$\alpha^{1.3}_{2.3} \sim -1.15$ for the northern
lobe, consistent with our assumption that both structures are in fact radio lobes.
Higher quality data will be necessary for an accurate estimate of the spectral index.
\subsection{Results from the H{\,\small I}\ absorption}
The LBA observations show that the H{\,\small I}\ absorption is extended and complex on the VLBI scale.
The VLBI H{\,\small I}\ observations recover a large fraction of the absorbed flux observed at low resolution with the ATCA (see Fig. \ref{fig:HIcomparison}).
The ATCA profile already suggested the presence of at least two
components of H{\,\small I}\ absorption: a deep and relatively narrow component
(with high optical depth, $\tau \sim 20$\%) and broad and shallow
wings. Interestingly, these components have, on the VLBI scale,
different spatial distributions. As can be seen in
Fig. \ref{fig:Panel}, the deep absorption is extended and covers the
entire source, while the shallow wings are more localised. The
redshifted wing is observed {\sl only} against the northern lobe while
the blueshifted wing is detected against the southern lobe. The
redshifted wing appears to be more prominent than the blueshifted one.
The column density is $\sim 3 \times 10^{20}$ cm$^{-2}$ for the deep
component, assuming a spin temperature T$_{spin}$ (the excitation temperature describing the relative population of the two hyperfine levels) of $\sim$ 100 K.
As discussed in Sec. \ref{sec:optical}, the central velocity of the
deep absorption is consistent with the results from the optical and IR emission lines
and is defined as systemic velocity. Interestingly, the range in
velocity covered by the H{\,\small I}\ in the broad wings is comparable to the
velocity range seen in the ionised gas (see Fig. \ref{fig:kinematics}
and Table \ref{tab:kinematics}).
The H{\,\small I}\ absorption can be interpreted as coming from two separate
gaseous structures, with the deep absorption due to cold gas
located at a large distance from the nucleus and radio source, likely associated with the large-scale disk of the host
galaxy. This would explain why this component is detected over the
entire source and peaks around the systemic velocity. Furthermore,
the high optical depth would be due to gas with a column density
similar to that found in large-scale disks, e.g. in Seyfert galaxies
but also in radio-quiet early-type galaxies \citep{gallimore99}, and
characterised by a T$_{spin}$ $\sim$ 100 K.
Considering that the large-scale, galactic disk is seen edge-on, the
fact that the deep component is projected over the entire VLBI source
would imply a thickness of the disk of the order of 400 - 500 pc,
comparable to what is expected for these types of structures in
spiral galaxies (in particular in the outer regions where they tend
to flare). The shallow, broad components could, instead, trace
a circumnuclear disk located closer to the radio source: the broad widths of the shallow, shifted absorption features would then be due
to unresolved rotation projected along the line of sight.
If this circumnuclear disk has approximately the same orientation as the
large-scale one, the detection of a velocity gradient would then be due to
the misalignment of the radio structure compared to the gaseous
disk. This would explain the observed velocity gradient (redshifted
against the northern lobe and blueshifted against the southern
lobe). The low optical depth observed even on the VLBI scale for the
broad component would then not be due to a filling factor effect but could
instead be due to a higher spin temperature (i.e. T$_{spin} \gta 1000$
K) of the gas in this structure, resulting from its vicinity to the AGN. This would be consistent with what is found for other compact sources (see e.g. \citealt{holt06}).
\begin{figure*}
\centerline{\psfig{figure=morganti_Fig9.eps,width=14cm,angle=0}}
\caption{VLBI 2.3~GHz image of PKS~1814-637 (left) from
\cite{tzioumis02}. The locations of the two {\sl integrated}
H{\,\small I}\ spectra shown on the right are marked. The two spectra clearly illustrate that the deep
absorption is present at both locations while the broad, shallow
absorption changes drastically going from the northern to the
southern lobe.}
\label{fig:Panel}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
The analysis of multi-wavelength data (imaging and spectroscopy) has
revealed that PKS~1814-637 is an intriguing radio galaxy. The
modelling of the optical image has shown the presence of a prominent
disk component, confirmed by the best fitting S\'ersic $n=2$ profile.
This is interesting given the high radio power of
PKS~1814-637: this is the first radio galaxy of such power (FRII-like)
found to reside in a disk dominated galaxy, and the only one of this
type in a complete southern sample of powerful radio sources (2~Jy
sample, see e.g. \citealt{ramos11}). This raises the question of how
this disk galaxy succeeded in producing such a powerful radio source, and why
this is not a more common phenomenon.
\subsection{PKS~1814-637: an "impostor" radio galaxy?}
\label{sec:imposter}
Given the evidence for a rich ISM we have found for PKS~1814-637 as well as its highly distorted
radio morphology, we argue that the interaction of the radio plasma
with the ISM must have had a major impact on the characteristics of
this source.
In a study of a sample of nearby radio galaxies,
\cite{tadhunter11} noted the high incidence of
CSS/GPS sources in particular among the objects showing the presence
of a young stellar population component in their host galaxies. This
has been suggested to be the result of an observational selection
effect where the strong interaction of the radio jets with the rich ISM,
characteristic of objects resulting from major mergers, may influence
the conversion of the jet power into radio luminosity. Due to the
compression of the magnetic field and the increased density of particles,
the radio luminosity will be boosted. This would make these objects
more likely to enter a radio flux selected sample of objects.
Indeed, the possibility of a variation in the efficiency with which beam power is
converted into radio emission has also been suggested by
\cite{gopal91} in order to explain the dependence of the linear size of
powerful radio sources on redshift.
Thus, PKS~1814-637 could represent one of the best examples supporting
this scenario and be an {\sl "impostor"} in the 2~Jy sample: an
intrinsically lower power object that is selected in the sample because
of the rich ISM that may contribute to the efficient conversion of jet power
into radio emission.
This idea is also supported by the fact that, although PKS~1814-637 is
classified as a NLRG on the basis of the optical spectrum (see
Sec. \ref{sec:optical}), its [O{\,\small III}]\ luminosity is lower than one would
expect on the basis of its radio power \citep{morganti97,holt09}. In Figure \ref{fig:o3_rad} we plot
[O{\,\small III}]\ emission line luminosity versus 5~GHz radio luminosity for the 2~Jy sample, where
$L_{\rm [OIII]}$ is known to be a good indicator of AGN power. This shows
that, on average, CSS/GPS sources
lie below the distribution of radio galaxies for any given
$L_{\rm Radio}$\footnote{Similar results are found correlating $L_{\rm Radio}$ with
other AGN indicators in the mid-infrared i.e. [O{\,\small IV}]\ 25.89\um\ and 24\um\ luminosity.}, providing evidence that these objects have enhanced radio
emission for their AGN power. It is notable that PKS~1814-637 has the lowest
[O{\,\small III}]\ luminosity out of all the objects classified as NLRG -- including the other CSS/GPS sources -- in the 2~Jy
sample.
As discussed in Section \ref{sec:optical}, PKS~1814-637 has an emission line luminosity, but not equivalent
width, similar to that of WLRG, as seen in Figure \ref{fig:o3_rad}. WLRG have also been shown
to have intrinsically weak AGN for their radio power, and the properties of their local ISM
are also hypothesised to boost the radio luminosity \citep{dicken09}.
In this context, it is interesting to
note that large H{\,\small I}\ disks in a sample of radio galaxies have been
found {\sl only} around compact radio sources
\citep{emonts07}. PKS~1814-637 could be another example of this trend,
considering that the characteristics of the deep H{\,\small I}\ absorption
suggest that it originates from a large-scale disk (not seen in
emission because the object is too far away and the sensitivity of the
ATCA is too low). This suggests that
there exists a group of compact radio sources with large
reservoirs of cool gas, either originating from gas-rich major mergers or
present in pre-existing disks,
where the radio emission has been
boosted, at least temporarily, by the interaction of the radio plasma
with the rich ISM. Therefore, as the jets expand beyond the central regions of the galaxy that contain the rich ISM,
their radio luminosities will decline until they drop below the flux limit of the particular radio sample being considered.
Such a scenario implies a short lifetime and agrees with the apparent rarity of objects like PKS~1814-637.
In addition to this, because of the many characteristics that we have
identified in common between PKS~1814-637 and Seyfert galaxies, this
group of objects may even represent a "missing link" between radio
galaxies and Seyferts.
\subsection{The lack of outflow}
If the conclusion about the systemic velocity is confirmed, it means that, unlike the majority of high radio power CSS/GPS \citep{holt08}, PKS~1814-637 does not show evidence
for a fast outflow of ionised or atomic neutral gas. This may seem surprising if, as suggested in the previous section, the radio emission is really boosted by strong jet-cloud interactions.
The orientation of the source may help to explain the lack of evidence for an outflow: a relation between the orientation of the source and the amplitude of the outflow has been presented in \citet{holt09} (i.e. higher inclination corresponds to lower outflow velocity). Since the source lies close to the plane of the sky, it could be more difficult to detect any outflow produced along the radio axis in PKS~1814-637. However, if this were the case, we might expect to see strong line broadening effects at the site of the jet-cloud interaction (e.g. due to expansion of the radio lobes perpendicular to the radio jets). It is also possible that the emission line outflow (but not a neutral outflow) could be hidden by the near edge-on dusty disk at optical wavelengths. In this case the emission lines would represent more extended material that is illuminated by the AGN but not directly involved in the outflow.
One possible solution may be that the shock induced by the radio jets is so strong, and the gas heated to such high temperatures, that it does not have the chance to cool and to radiate in optical emission lines or absorb in H{\,\small I}. Therefore, we only see strong emission from the shock-photoionized precursor gas. This is what we think may be happening in Coma A to the SE of the nucleus \citep{solorzano03}, where the jet appears to make a "direct hit" on a dense cloud (and the radio source is clearly deflected at this location); however, despite the strong high ionization line emission associated with the cloud, we do not see {\sl any} sign of disturbance in the emission line kinematics.
If this is the case, one may expect to detect X-ray (bremsstrahlung) emission. New Chandra observations have recently been obtained and show a soft excess (B. Mingo, priv. comm.). However, the lack of adequate spatial resolution will likely make it difficult to confirm whether this excess is due to bremsstrahlung emission.
\subsection{Link to Seyfert galaxies}
We have pointed out throughout this paper that similarities exist between PKS~1814-637 and Seyfert galaxies.
We also remarked on the basis of our optical data in Sec. \ref{sec:optical}, as well as from our mid-IR data (Sec. \ref{sec:spitzer}), that without the information from the radio data, PKS~1814-637 would have been likely classified as a typical Seyfert galaxy in terms of its optical and mid-IR spectra, and optical morphology.
However, PKS~1814-637 has a much higher radio power than even the most powerful Seyfert galaxies. In the case of the radio-loud NLS1 mentioned in the introduction, their radio powers, comparable to that of PKS~1814-637 (see e.g. \citealt{yuan08,foschini11} and refs therein), are likely due to their radio emission being dominated by beamed jet emission. Furthermore, the host galaxies of radio-loud NLS1 have in general not yet been well characterised and, therefore, it is not clear whether these objects are hosted by disk galaxies. Two examples (see \citealt{zhou07,foschini11}) suggest that this could be the case for at least some of them. If this is confirmed for a larger sample, it may indicate a possible, interesting link between this group of objects and objects like PKS~1814-637.
However, if we consider unbeamed, nearby Seyfert galaxies, even in those where clear evidence of jet-cloud interactions has been found, the radio emission does not reach the level observed in PKS~1814-637. An example is IC~5063, a radio-loud Seyfert galaxy in which a strong ongoing interaction is observed between the radio plasma and the ISM (\citealt{oosterloo00, morganti07} and refs therein). The radio emission is aligned with the dust lane, and the lobe interacting with a cloud of molecular gas is also much brighter in the radio \citep{oosterloo00}. This object may represent another example of radio emission boosted by a jet-cloud interaction; however, the total radio power is almost two orders of magnitude lower than in PKS~1814-637.
Thus, it is unlikely that the difference between the radio power of Seyfert galaxies and that of PKS~1814-637 is solely due to the interaction; it is probable that the jet in PKS~1814-637 is intrinsically more powerful than in typical Seyfert galaxies, perhaps related to a higher bulge and black hole mass. To investigate this, we have attempted to estimate the mass of the black hole in PKS~1814-637 using different methods.
First, we have used the velocity gradient observed in the shallow H{\,\small I}\ component -- that we have identified with a possible circumnuclear disk -- as a probe of the BH mass. Ignoring projection effects that cannot be quantified, the H{\,\small I}\ velocities range from $+180$ to $-100$ km/s relative to the systemic velocity. These velocities are measured at the location of the VLBI lobes, i.e. at a projected distance of about 100 pc from the core. Under these conditions, we estimate a relatively high value for the BH mass in the range between $3 \times 10^8$ and $10^9${$M_\odot$}.
Alternatively, we can also make use of known BH--bulge mass relations \citep{magorrian98} and derive the BH mass from the bulge mass obtained from the K-band images \citep{inskip10} and from the modelling of the galaxy.
We have used the K-band magnitude from \cite{inskip10} to obtain a proxy and an upper limit to the bulge mass. The K-band magnitude (K$=12.52$) is equivalent to an observed flux of $3.77\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ A$^{-1}$. Assuming a galaxy with a 12~Gyr old stellar population, the SED proposed by \cite{maraston09} and the luminosity distance of $\sim$ 265 Mpc, the observed flux matches a galaxy mass of $\sim 3\times 10^{11}$ {$M_\odot$}. Using this mass and the correlation shown in Fig. 2 of \cite{haring04}, we derive a BH mass again in the range $\sim 3 \times 10^8$ to $\sim 10^9${$M_\odot$}, nicely consistent with the value we derived from the H{\,\small I}. A similar value of $6 \times 10^8${$M_\odot$}\ is obtained by following instead \cite{marconi03}. These black hole mass estimates for PKS~1814-637 are higher than those estimated for any of the nearby Seyfert galaxies in the sample of \citet{mclure01, mclure02}, once corrected to our cosmology.
Finally, from the modelling of the optical images (see Sec. \ref{sec:modelling}) we can derive an estimate of the bulge mass inside the derived effective radius ($6.45 \pm 0.01$ kpc) using the integrated bulge magnitude ($r'$-band = 15.75 mag), and thus estimate a lower limit to the BH mass. We can use the absolute magnitude derived in this way (M$_r = -21.52$) to extract, from the correlation shown in \cite{mclure01}, the BH mass.
Again, the bulge luminosity of PKS~1814-637 appears to be located at the higher end of the distribution of Seyfert galaxies confirming that, unlike a typical Seyfert galaxy, PKS~1814-637 is hosted by a more early-type galaxy (S0-like), i.e. a galaxy with a large bulge.
Following the prediction of \cite{magorrian98} (see also \citealt{mclure01} for details), the BH mass would be just below $\sim 10^8${$M_\odot$}. However, the plot in \cite{mclure01} shows a large scatter in the values derived for Seyfert galaxies with comparable bulge mass, reaching up to a few times 10$^8${$M_\odot$}.
Thus, PKS~1814-637 appears to host a rather massive BH, comparable only with the most massive BHs found in Seyfert galaxies.
The combination of the strong interaction with the surrounding ISM discussed in the previous section and the massive black hole (related to the large bulge of the host galaxy) could provide the right conditions for this galaxy to host a powerful radio AGN.
\section{Radio sources like PKS~1814-637: how rare?}
\label{sec:rare}
The case of PKS~1814-637 is particularly interesting because it is the first case of a {\sl powerful} radio source in a disk galaxy.
As described above, this allows us to understand the conditions in which a powerful radio source can be triggered even when hosted by a disk-like galaxy. In addition, it also helps our understanding of whether this type of AGN and radio source was more common in the early Universe.
The idea of the interaction between radio plasma and dense ISM affecting the radio emission has indeed been proposed for high-$z$ radio galaxies to explain e.g. the correlation between the steepness of the spectral index and the redshift of the sources \citep{athreya98, klamer06}.
The possibility of a higher incidence of powerful radio sources associated with disk galaxies at high-$z$ has been raised by \citet{norris08}, although they have so far identified only one possible candidate in their deep field. More recently, \cite{schawinski10} have studied the optical morphologies of the host galaxies of X-ray selected AGN (10$^{42} < L_X < 10^{44}$ erg s$^{-1}$) at $1.5<z<3$ in the CDF-S. The majority of these AGN are hosted by disk galaxies, consistent with what is found for Seyfert galaxies in the local Universe (see also \citealt{cisternas11}).
Using the deep VLA radio observations presented by \cite{manieri08}, we find that two of the AGN selected by \cite{schawinski10} appear to have radio counterparts of 3.7 and 0.07 mJy at 1.4~GHz. These sources are both at $z \sim 1.6$; converting these fluxes to radio luminosities, the sources
have radio powers of log P$_{\rm 1.4GHz} = 25.6$ W/Hz and 24.1 W/Hz respectively. Considering the limited coverage of this deep field (at most 50 sq arcmin), this represents an interesting result that may indicate that high power radio sources hosted in disk galaxies could indeed be more common at higher redshift.
The availability of large surveys coming on-line in the near future from new radio facilities (e.g. LOFAR, ASKAP, Apertif) should allow the existence of this population of sources to be confirmed.
\begin{acknowledgements}
CRA acknowledges Christian Leipski for very useful comments on the GALFIT fitting.
The Australia Telescope Compact Array, the Parkes telescope, the Mopra telescope and the Long Baseline Array are part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
The authors would like to thank the referee, Luigi Foschini, for his very constructive comments.
This research has made use of the NASA Extragalactic Database (NED), whose contributions to this paper are gratefully acknowledged. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\end{acknowledgements}
\bibliographystyle{aa}
\section*{Acknowledgements}
This work was supported by the 'Data Modeling Community Engagement in Health Emergencies' project funded by the Bill \& Melinda Gates Foundation, including contributions from the Institute for Disease Modeling. The Sierra Leone Ebola Database (SLED) data utilized in this paper were accessed as a component of this project. The authors wish to thank the Ministry of Health and Sanitation of the Government of Sierra Leone, the SLED team in Sierra Leone and the US Centers for Disease Control (in particular Dr. Yelena Gorina, Negasi Beyene and John Redd) for their support in reviewing the project and facilitating data access. LHD, JB, LS, DP and BMA were supported by Bill \& Melinda Gates through the Global Good Fund. LHD and JGY acknowledge support from the National Institutes of Health 1P20 GM125498-01 Centers of Biomedical Research Excellence Award. AA acknowledges financial support from the Sentinelle Nord initiative of the Canada First Research Excellence Fund and from the Natural Sciences and Engineering Research Council of Canada (project 2019-05183). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
\section*{Additional information}
\textbf{Competing financial interests} The authors declare that they have no
competing financial interests.
\section{Introduction}
Molecules are natural quantum mechanical platforms where several atoms are interlinked via electronic bonds. The inherent coupling between the electronic transitions at optical frequencies and the mechanical nuclear motions (vibrons) at terahertz frequencies renders molecular systems ideal for the realization of quantum optomechanical effects. This is however different from the radiation pressure coupling mechanism in macroscopic systems, as optomechanical interactions in molecules intrinsically occur in a hybrid fashion involving a two-step process of photon-electron and vibron-electron (vibronic) interactions \cite{moerner2002adozen, roelli2016molecular, benz2016single, neuman2019quantum}. The vibronic coupling resembles the radiation pressure Hamiltonian (via a boson-spin replacement) which can be in the strong coupling regime since the strength of the coherent coupling can be comparable to the vibrational frequency. At cryogenic temperatures (e.g., at $T\sim 4\,\mathrm{K}$), molecular vibrations are in their quantum ground state thus circumventing usual complications arising from additional optical cooling requirements \cite{mahler1996molecular}. Moreover, naturally occurring or engineered differences in the curvatures of the ground and excited state potential surfaces of the molecular electronic orbitals can lead to the direct generation of non-classical squeezed vibrational wavepackets \cite{averbukh1993optimal}. These aspects suggest that molecular systems offer natural platforms, where one can exploit the inherent opto-vibrational coupling as a quantum resource.\\
\indent When molecules couple to their condensed-matter environment, e.g.~in the solid state, the mechanical modes of localized intramolecular vibrations (vibrons) are augmented by collective delocalized vibrational excitations of the host material (phonons), which allow for electron-phonon (polaron) couplings. In practice, coupling to a large number of phonon modes makes the study of molecular vibrations in the solid state notoriously challenging. Some of the challenges can be tamed under cryogenic conditions where experiments manage to reduce phonon coupling on the so-called zero-phonon line (ZPL) of the transition between $\ket{g, n_\nu=0}$ and $\ket{e, n_\nu=0}$ sufficiently to reach its natural linewidth limit. This can be verified in ensemble measurements, e.g.~via hole burning, or in single-molecule spectroscopy \cite{rigler2001single}. A good example of an experimental platform is provided by dibenzoterrylene (DBT) molecules embedded in anthracene crystals [see Fig.~\ref{fig1}(a)], exhibiting a lifetime-limited linewidth and near-unity radiative yields at cryogenic temperatures \cite{nicolet2007single, kozankiewicz2014single, lombardi2018photostable, polisseni2016stable, wang2019turning}. However, even if vibrational spectroscopy at the single-molecule level is readily accessible in the laboratory \cite{makarewicz2012vibronic, deperasinska2011single}, a quantitative understanding of the couplings between the molecular vibrational modes and their internal and external degrees of freedom is still largely missing. In particular, a detailed study of decoherence sources is necessary.\\
\indent An open quantum system approach, such as employed in our treatment, can shed light onto a few aspects of coherent and incoherent vibrational dynamics and onto the light-matter interactions in the presence of vibrons, phonons and cavity-localized photon modes. Our formalism makes use of quantum Langevin equations which allows us to follow the evolution of system operators such as the electronic coherence and vibrational quadratures and to derive analytical results for the time dynamics of both expectation values and two-time correlations (needed for the computation of emission and absorption spectra). We find that closely spaced molecules can experience collective vibrational relaxation, an effect similar to the sub- and superradiance of quantum emitters in the electromagnetic vacuum. This can be exploited to decouple collective two-molecule vibrational states from the decohering phononic environment leading to the possibility of coherently mapping motion onto light and vice versa. In addition, at the level of the pure light-matter interface, coupling to confined optical cavity modes can increase the oscillator strength of the molecule by effectively reducing vibronic couplings \cite{wang2019turning}.
\indent Our formalism also allows us to treat problems relevant to experiments in cavity quantum electrodynamics with molecules, where standard concepts such as strong coupling or the Purcell effect can suffer important modifications once couplings between electronic transitions and vibrations are taken into account. To this end, we make use of analytical tools based on quantum Langevin equations \cite{reitz2019langevin} to account for an arbitrary number of vibron and phonon modes. Earlier theoretical works have either traced out the typically fast vibrational degrees of freedom \cite{wang2017coherent, haakhsqueezed2015}, used limited numerical simulations, or focused mostly on aspects such as vibrational relaxation in solids \cite{rebane1970impurity,bondybev1984relaxation, knox2002low, hill1988vibrational}, electron-phonon and electron-vibron couplings \cite{sild1988zero, hochstrasser1972phonon, vogel1986theory}, temperature dependence of the zero-phonon linewidth \cite{mccumber1963linewidth, silsbee1962thermal} and anharmonic effects \cite{kenkre1994theory, waxer1997molecular}.
However, it should also be borne in mind that the relevance of our treatment is not restricted to the physical system considered here as very similar effects also occur in related solid-state emitters such as quantum dots or vacancy centers in diamond. The coupling of such systems to photonic nanostructures has been studied quite extensively over the last years \cite{ilessmith2017limits, norambuena2016microscopic, kaer2010nonmarkovian, mccutcheon2010quantum, nazir2016modeling, lodahl2015interfacing, brash2019light,englund2010deterministic}. There is, furthermore, a general current interest in impurities interacting with a quantum many-body environment, such as molecular rotors immersed in liquid solvents \cite{lemeshko2015rotation, lemeshko2017quasiparticle}, Rydberg impurities in quantum gases \cite{schmidt2016mesoscopic} or magnetic polarons in the Fermi-Hubbard model \cite{koepsell2019imaging}. Our treatment can then be understood as a general model for the coupled dynamics of spin systems to many, possibly interconnected, bosonic degrees of freedom as illustrated in Fig.~\ref{fig1}(c).
\section{Model}
\label{model}
\subsection{General considerations}
\begin{figure}[t]
\includegraphics[width=0.96\columnwidth]{Fig1.pdf}
\caption{\emph{Model}. (a) Schematic representation of a host crystal containing molecular impurities (e.g.~DBT) which is placed inside an optical cavity. For simplicity we restrict our treatment to a single nuclear coordinate. The molecules are either illuminated directly by a laser with amplitude $\eta_\ell$ or indirectly driven via the cavity mode with amplitude $\eta_c$. (b) Jablonski scheme of a molecule in a solid-state environment showing vibrational and phononic sublevels of both electronic excited $\ket{e}$ and ground $\ket{g}$ state. Excitation of the molecule is typically followed by quick vibrational relaxation (wavy lines). (c) Illustration of the relevant couplings between electronic $\{\sigma\}$, localized vibrational $\{Q\}$, phononic $\{q_k\}$ and optical $\{a\}$ degrees of freedom as well as decay processes indicated by wavy lines (spontaneous emission rate $\gamma$, cavity decay rate $\kappa$, and phonon decay rate $\gamma_k^\text{ph}$).}
\label{fig1}
\end{figure}
\indent We develop here a complex model where all interactions between light, electronic transitions, vibrons and phonons are taken into account for finite temperatures. We derive general expressions for the light scattered by a molecular system (of one or more molecules) embedded in a solid-state environment outside or inside an optical cavity [see Fig.~\ref{fig1}(a)]. As schematically illustrated in Fig.~\ref{fig1}(c), the light (mode $a$) couples to electronic transitions (Pauli operator $\sigma$) via a Tavis-Cummings Hamiltonian. These are in turn affected by the vibronic coupling to one or more molecular vibrations, which leads to the red-shifted Stokes lines in emission [cf.~Fig.~\ref{fig1}(b)]. We focus here on a single mode with relative motion coordinate $Q$ for the sake of simplicity. The solid-state matrix supports a multitude of bosonic phonon modes with displacements $q_k$ ($k$ from 1 to $N$) which directly modify the electronic transition, leading to the occurrence of phonon wings in the emission and absorption spectra. In addition, molecular vibrons can deposit energy into phonons via a displacement-displacement interaction, leading to an irreversible process of vibrational relaxation. \textcolor{black}{We will start with the description of the vibrational relaxation process in Sec.~\ref{vibrationalrelaxation} since all subsequent effects will depend on this mechanism}. We show that linear phonon-vibron couplings can already result in irreversible vibrational relaxation involving both single- and multi-phonon processes. Moreover, such dynamics can be either Markovian or non-Markovian, depending on the relation between the vibrational frequency and the maximum phonon frequency. For closely spaced molecules, the same formalism allows for the derivation of collective relaxation dynamics exhibiting effects similar to super/subradiance in dense quantum emitter systems. Classical light driving is included in Sec.~\ref{molecularspectroscopy}, where we follow a quantum Langevin equations approach to derive absorption spectra for coherently driven molecules under the influence of vibronic and phononic couplings as well as thermal and finite-size effects. We show that, interestingly, the vibronic and electron-phonon couplings do not cause any dephasing dynamics even at high temperatures, i.e.~the zero-phonon line is mainly lifetime-limited in the linear coupling model. Finally, for molecular polaritonic systems in a cavity setting, we derive transmission functions of the cavity field (see Sec.~\ref{cavityspectroscopy}), showing the reduction of the vacuum Rabi splitting with increasing vibronic and phononic coupling, as well as phononic signatures in the Purcell regime. The effect of temperature on the asymmetry of cavity polaritons is quantified by deriving effective rate equations for the polariton cross-talk dynamics.
\subsection{Hamiltonian formulation}
We consider one molecule (later we extend to more than one) embedded in a bulk medium comprised of $N$ unit cells. \textcolor{black}{Our perturbational assumption is that, since the bulk is large, the guest molecule does not significantly change the overall modes of the bulk}. The electronic degrees of freedom of the molecule are denoted by states $\ket{g}$ and $\ket{e}$ with the former at zero energy and the latter at $\omega_0$ (we set $\hbar$ to unity), corresponding to a lowering operator $\sigma=\ket{g}\bra{e}$. We assume only a pair of ground and excited potential landscapes with identical curvature along the nuclear coordinate and make the harmonic approximation, where the motion of the nuclei can be described by a harmonic vibration at frequency $\nu$ and bosonic operators $b$ and $b^\dagger$, satisfying the usual bosonic commutation relations $[b,b^\dagger]=1$.
From the displacement between the minima of the two potential landscapes one obtains a vibronic coupling quantified by a dimensionless factor $\lambda$ (the square root of the Huang-Rhys parameter) and described by a standard Holstein-Hamiltonian \cite{holstein1959study},
\begin{equation}
\label{holsteinelvib}
H_{\text{el-vib}}=-\lambda\nu \sqrt{2} \sigma^\dagger \sigma Q,
\end{equation}
where $Q=(b+b^\dagger)/\sqrt{2}$ is the dimensionless position operator of the vibronic degree of freedom (the momentum quadrature is given by $P=i(b^\dagger-b)/\sqrt{2}$). The Holstein coupling also leads to a shift of the electronic excited state energy $\omega_0+\lambda^2\nu$, which is removed by the diagonalizing polaron transformation $\mathcal{U}_{\text{el-vib}}$. The polaron transformation $\mathcal{U}_{\text{el-vib}}=e^{i\sqrt{2}\lambda P\sigma^\dagger\sigma}=\ket{g}\bra{g}+\mathcal{B}^\dagger \ket{e}\bra{e}$ can be seen as a conditional displacement affecting only the excited state, where $\mathcal{B}^\dagger=e^{i\sqrt{2}\lambda P}$ is the inverse displacement operator for the molecular vibration creating a coherent state when applied to vacuum: $\mathcal{B}^\dagger\ket{0_\nu}=\ket{-\lambda}$. \textcolor{black}{The Hamiltonian in Eq.~(\ref{holsteinelvib}) does not consider nonadiabatic vibronic coupling which would lead to off-diagonal coupling terms (proportional to $\sigma_x$ and $\sigma_y$) and which could drive electronic transitions. Such nonadiabatic terms become relevant if two potential surfaces come close to each other \cite{ulusoy2019modifying}. In Appendix \ref{offdiagonal} we briefly discuss how one could treat such terms in the Langevin equations of motion.} One could also consider a difference in curvatures between ground (frequency $\nu$) and excited state (frequency $\bar{\nu}$) potential surfaces which would result in a quadratic coupling term $H_{\text{el-vib}}^{\text{quad}}=\beta Q^2\sigma^\dagger\sigma$ with squeezing parameter of the vibrational wavepacket $\beta=(\bar{\nu}^2-\nu^2)/(2\nu)$. We will assume that the vibron \textcolor{black}{quickly} thermalizes with the environment (via the \textcolor{black}{fast} mechanism of vibron-phonon coupling described below) at temperature $T$ and achieves a steady state thermal occupancy $\bar{n}=[\exp({ \nu/(k_\text{B}\text{T})})-1]^{-1}$.
The electronic transition is coupled to the quantum electromagnetic vacuum which opens a radiative decay channel with collapse operator $\sigma$ via spontaneous emission at rate $\gamma$. For a general collapse operator $\cal{O}$ with rate $\gamma_{\cal{O}}$ we model the dissipative dynamics via a Lindblad term ${\cal{L}}_{\cal{O}}[\rho]=\gamma_{\cal{O}}\left\{2\cal{O}\rho \cal{O}^\dagger-\rho\cal{O}^\dagger\cal{O}-\cal{O}^\dagger\cal{O}\rho\right\}$ applied as a superoperator to the density operator $\rho$ of the system. The vibronic coupling leads to the presence of Stokes lines in emission and to a mismatch between the molecular emission and absorption profiles. Following the stochastic quantum evolution of a polaron operator $\tilde{\sigma}=\mathcal{B}^\dagger\sigma$ (vibrationally dressed Pauli operator for the electronic transition) analytical solutions for the absorption and emission spectra of the molecule can be derived in the presence of vibrons~\cite{reitz2019langevin}. \\
\indent In addition to the coupling to internal vibrations of its nuclei, the electronic transition is also modified through coupling to the delocalized phonon modes of the crystal. We describe the bulk modes as a bath of independent harmonic oscillators with bosonic operators $c_k$ and $c_k^\dagger$ and frequencies $\omega_k$. The electron-phonon coupling (see Appendix \ref{derivation} for derivations) can then be cast in the same Holstein form as for the vibron
\begin{equation}
H_{\text{el-phon}}= -\sum_k \lambda_k \omega_k\sqrt{2}\sigma^\dagger\sigma q_k,
\end{equation}
where the displacement operators refer to each individual collective phonon mode $q_k=(c_k+c^\dagger_k)/\sqrt{2}$ (the momentum operator is given by $p_k=i(c_k^\dagger-c_k)/\sqrt{2}$). The coupling factors $\lambda_k$ depend on the specifics of the molecule and the bulk crystal. Similarly to the vibronic case, the electron-phonon interaction can be diagonalized by means of a polaron transformation $\mathcal{U_{\text{el-phon}}}=\ket{g}\bra{g}+\mathcal{D}^\dagger \ket{e}\bra{e}$, whereby $\mathcal{D}^\dagger=\prod_k \mathcal{D}_k^\dagger=e^{\sum_k i \sqrt{2}\lambda_k p_k}$ is the product of all phonon mode displacements, signifying a collective transformation for all phonon modes. We will assume that the bulk is kept at a constant temperature and is always in thermal equilibrium with the individual mode thermal average occupancies amounting to $\bar{n}_k=[\exp({ \omega_k/(k_\text{B}\text{T})})-1]^{-1}$. The coupling to the phonons gives rise to a multitude of sidebands in the absorption and emission spectra which coalesce into a phonon wing that becomes especially important at elevated temperatures. We will then follow the temporal dynamics of a collective polaron operator $\tilde{\sigma}=\mathcal{D}^\dagger\mathcal{B}^\dagger\sigma$ which includes both vibronic and electron-phonon couplings.\\
\indent Phonons also affect the dynamics of the vibrational mode. Modifications of the bond length associated with the molecular vibration lead to a force on the surrounding crystal (and vice versa), giving rise to a displacement-displacement coupling,
\begin{equation}
\label{Hvibphon}
H_{\text{vib-phon}}=-\sum_k \alpha_k q_k Q\,.
\end{equation}
The coupling coefficients $\alpha_k$ are explicitly derived in Appendix \ref{derivation}. In the limit of large bulk media, this Hamiltonian can lead to an effective irreversible dynamics, i.e.~a vibrational relaxation effect. This is the Caldeira-Leggett model widely treated in the literature as it leads to a non-trivial master equation evolution which cannot be expressed in Lindblad form and is \textcolor{black}{cumbersome} to solve analytically \cite{caldeira1983path, caldeira1981influence, hu1992quantum}. To circumvent this difficulty, we follow the formalism of Langevin equations under the concrete conditions imposed by the one-dimensional situation considered here. We are then in a position to identify the Markovian versus non-Markovian regimes of vibrational relaxation conditioned on the phonon spectrum, namely on the maximum phonon frequency $\omega_{\text{max}}$ of the system. We can additionally account for a finite phonon lifetime by including a decay rate $\gamma_k^{\text{ph}}$ for each phonon mode. \\
\indent To perform spectroscopy, we add a laser drive modeled as $H_{\ell}=i\eta_{\ell} \left(\sigma^\dagger e^{-i\omega_{\ell}t}-\sigma e^{i\omega_{\ell}t}\right)$ with amplitude $\eta_{\ell}$. \textcolor{black}{We will assume weak driving such that the assumption of thermal equilibrium is still valid}. Furthermore, to treat various aspects of molecular polaritonics, we describe the dynamics of a hybrid light-matter platform by adding the coupling of a confined optical mode at frequency $\omega_c$ to the electronic transition via a Jaynes-Cummings interaction
\begin{equation}
H_\text{JC}= g(a\sigma^\dagger+\sigma a^\dagger).
\end{equation}
The bosonic operator $a$ satisfies the commutation relation $[a,a^\dagger]=1$ and the coupling is given by $g=[d_{\text{eg}}^2\omega_c/(2\epsilon_0 V)]^{1/2}$, where $d_{\text{eg}}$ is the electronic transition dipole moment and $V$ is the quantization volume ($\epsilon_0$ is the vacuum permittivity). Spectroscopy of the cavity-molecule system can be performed by adding a cavity pump $H_{\ell}=i\eta_{\text{c}} \left(a^\dagger e^{-i\omega_{\ell}t}-a e^{i\omega_{\ell}t}\right)$ with amplitude $\eta_{\text{c}}$. The cavity loss is modeled as a Lindblad process with collapse operator $a$ and rate $\kappa$. In standard cavity QED, depending on the magnitude of the coherent exchange rate $g$ relative to the loss rates $\kappa$ and $\gamma$, one can progressively advance from a strong cooperativity Purcell regime to a strong coupling regime where polaritons emerge. We will mainly focus on analytical derivations of the effects of electron-vibron and electron-phonon couplings at finite temperatures on the emergence of a spectral splitting in the strong coupling regime as well as the transmission in the Purcell regime.
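For orientation, the following minimal Python sketch evaluates the coupling $g$ in SI units (with $\hbar$ restored, $g=d_{\text{eg}}[\omega_c/(2\hbar\epsilon_0 V)]^{1/2}$) for two illustrative mode volumes; the dipole moment, wavelength, and volumes below are assumptions made for the sake of the order-of-magnitude estimate, not values from the text.
\begin{verbatim}
# Minimal sketch (Python): order-of-magnitude estimate of g.
# Dipole, wavelength, and mode volumes are illustrative assumptions.
import numpy as np

hbar, eps0 = 1.054571817e-34, 8.8541878128e-12   # SI units
debye = 3.33564e-30                               # C*m
d_eg = 3 * debye                                  # assumed molecular dipole
lam0 = 600e-9                                     # assumed ZPL wavelength
omega_c = 2 * np.pi * 2.998e8 / lam0              # cavity frequency

for V in (1e-18, (40e-9)**3):                     # microcavity vs plasmonic
    g = d_eg * np.sqrt(omega_c / (2 * hbar * eps0 * V))
    print(f"V = {V:.1e} m^3  ->  g/2pi = {g/2/np.pi:.3e} Hz")
\end{verbatim}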
\section{Vibrational relaxation}
\label{vibrationalrelaxation}
\begin{figure*}[t]
\includegraphics[width=2.0\columnwidth]{Fig2.pdf}
\caption{\emph{Markovian vs.~non-Markovian vibrational relaxation}. (a) Real and imaginary part of the frequency-dependent non-Markovian response function $\Gamma(\omega)$. The dotted horizontal lines $\Gamma_r(\omega) =\Gamma_{\text{m}}$ and $\Gamma_i (\omega)=0$ show the Markovian limit $\omega_{\text{max}}\to \infty$. (b) Logarithmic plot of the mechanical susceptibility $|\chi(\omega)|^2$ in the Markovian regime $\omega_{\text{max}}\gg\nu$ (dashed curve) and for a finite cutoff $\omega_{\text{max}}=1.4\nu$ (solid curve). In the latter case, the cutoff at $\pm\omega_{\text{max}}$ is indicated by the vertical dotted lines. (c) Relaxation of molecular vibrational energy $E_\nu (t)$ due to coupling to $N=500$ phonon modes (simulation of classical equations of motion) in the Markovian limit $\omega_{\text{max}}=7\nu$ for $\Gamma_{\text{m}}=\nu/20$ and comparison with Brownian motion theory (red dashed curve). (d) Vibrational relaxation for identical $\Gamma_{\text{m}}$ but in the non-Markovian regime $\omega_{\text{max}}=\nu$. Exponential decay with $\Gamma_{\text{m}}$ (dashed red curve) then fails to predict the behavior. In both cases we assumed a finite phonon lifetime with constant $\mathcal{Q}$-factor $\mathcal{Q}=\omega_k/\gamma_k^\text{ph}=50$.}
\label{fig2}
\end{figure*}
A decay path for the molecular vibration stems from its coupling to the bath of phonon modes supported by the bulk. While it is generally agreed that nonlinear vibron-phonon couplings contribute to the vibrational relaxation process, especially in the higher temperature regime \cite{nitzan2017energy}, we restrict our treatment to a coupling in the bilinear form of Eq.~\eqref{Hvibphon}. To understand the physical picture, we first show that in perturbation theory the bilinear Hamiltonian leads to a competition between fundamental processes that involve the decay of a vibrational quantum into superpositions of either single phonon states or many phonons adding together in energy to the initial vibrational state energy. Afterwards we proceed by writing a set of coupled deterministic equations of motion for the vibrational quadratures of the molecule $\left\{Q,P\right\}$ and the collective normal modes of crystal vibrations $\left\{q_k,p_k\right\}$. This allows for the elimination of the phonon degrees of freedom and the derivation of an effective Brownian noise stochastic evolution model for the molecular vibrations. We illustrate regimes of Markovian and non-Markovian dynamics and show that an equivalent approach tailored to two molecules can lead to collective vibrational relaxation strongly dependent on the molecule-molecule separation.
\subsection{Fundamental vibron-phonon processes}
Let us consider an initial state containing a single vibrational quantum $\ket{1_{\nu}, \text{vac}_{\text{ph}}}$ that evolves according to the vibron-phonon bilinear Hamiltonian of Eq.~\eqref{Hvibphon}. We aim to follow the fundamental processes leading to the energy of the vibration being deposited in superpositions of single- or multi-phonon states. We move to the interaction picture via $U = e^{iH_{0}t}$, where the free Hamiltonian is $H_{0} = \nu b^{\dagger}b + \sum_{k=1}^{N} \omega_{k} c_{k}^{\dagger}c_{k}$. The time-dependent interaction picture Hamiltonian thus becomes
\begin{eqnarray}
\label{Eq2}
\tilde{H}\! =\! -\sum_{k=1}^{N}\! \alpha_{k}\! \left( e^{-i\nu t}b + e^{i\nu t}b^{\dagger}\right) \!\left(e^{-i\omega_{k} t}c_{k} + e^{i\omega_{k} t}c_{k}^{\dagger} \right)\!.
\end{eqnarray}
The formal solution of the Schr\"{o}dinger equation can then be written as a Dyson series $\ket{\phi(t)} = \mathcal{T} e^{-i\int_{0}^{t}d\tau \tilde{H}(\tau)}\ket{1_{\nu}, \text{vac}_{\text{ph}}}$. We can proceed by evaluating the first term in the series which leads to (see Appendix \ref{vibronphonon} for details) resonant scattering ($\omega_{k} = \nu$) into single-phonon states $\ket{0_{\nu},1_k}$ at perturbative rate $\alpha_k t$ as well as off-resonant scattering ($\omega_{k} \neq \nu$) into states $\ket{0_{\nu},1_k}$ with probability inversely proportional to the detuning $\omega_{k} -\nu$. We note that for $\nu > \omega_{\text{max}}$, only off-resonant transitions are possible. The next order of perturbation theory, however, leads to multi-phonon processes where resonant transitions to states containing three phonons $\ket{0_{\nu},1_{j_1},1_{j_2},1_{j_3}}$ become possible. The resonance condition reads $\omega_{j_1} + \omega_{j_2} + \omega_{j_3} =\nu $ for $j_1 \neq j_2 \neq j_3$, and the corresponding amplitude is a sum over terms $\alpha_{j_1}\alpha_{j_2}\alpha_{j_3}t/[(\omega_{j_2} + \omega_{j_3})(\omega_{j_3}-\nu)]$. For $\nu \leq \omega_{\text{max}}$, these terms are small with respect to the first-order resonant rates and are, in total, comparable to the off-resonant single-phonon scattering terms.
\subsection{Effective Brownian noise model}
Formal elimination of the phonon modes (see Appendix \ref{browniansection} for details) leads to an effective Brownian motion equation for the momentum of the vibrational mode
\begin{align}
\label{dotp}
\dot{P}=-\tilde{\nu}Q -\Gamma\ast P+\xi,
\end{align}
while the displacement follows the unmodified equation $\dot{Q}=\nu P$. The effect of the phonon bath is twofold: (i) it can shift the vibrational frequency to $\tilde{\nu}=\nu-\nu_s$ and (ii) it leads to a generally non-Markovian decay kernel expressed as a convolution $\Gamma\ast P=\int_{0}^{\infty}dt'\Gamma(t-t')P(t')$. For the particular case considered in the Appendix, the crystal-induced frequency shift is expressed as
\begin{equation}
\nu_s=\nu\frac{(\Delta k)^2}{2k_0k_{\text{M}}},
\end{equation}
where $k_0$ denotes the spring constant of the host crystal, $k_{\text{M}}$ represents the spring constant of the vibron, and $\Delta k$ is a measure for the coupling of the molecule's relative motion to the bulk. For a discrete system, the expression for the damping kernel $\Gamma(t)=\sum_k (\alpha_k^2\nu/\omega_k)\cos(\omega_k t)\Theta(t)$ involves a sum over all phonon modes, which can be turned into the following expression in the continuum limit ($N\to\infty$)
\begin{equation}
\Gamma(t-t')=\Gamma_{\text{m}}\frac{J_1(\omega_{\text{max}}|t-t'|)}{|t-t'|}\Theta(t-t').
\end{equation}
Here, $J_n(x)$ denotes the $n$-th order Bessel function of the first kind, $\Theta(t)$ stands for the Heaviside function, and $\Gamma_{\text{m}}={2\nu\nu_s}/{\omega_{\text{max}}}$ is the decay rate in the Markovian limit. A similar expression is known from the Rubin model \cite{rubin1963momentum}, where one considers the damping of a single mass defect in a 1D harmonic crystal. The zero-average Langevin noise term $\xi$ is determined by the initial conditions of the phonon bath and can be expressed in discrete form as $\xi(t)=\sum_k \alpha_k \left(q_k(0)\cos(\omega_k t)+p_k (0)\sin(\omega_k t)\right)$. We can treat Eq.~(\ref{dotp}) more easily in Fourier space, where the convolution becomes a product
\begin{align}
-i\omega P(\omega)=-\tilde{\nu}Q(\omega)-\Gamma(\omega)P(\omega)+\xi(\omega),
\end{align}
and the Fourier transform of the non-Markovian decay kernel $\Gamma(\omega)$ generally contains a real and imaginary part $\Gamma(\omega)=\Gamma_r(\omega)+i\Gamma_i (\omega)$. Figure~\ref{fig2}(a) shows a plot of $\Gamma_r(\omega)$ and $\Gamma_i(\omega)$ where we can interpret the imaginary part as a frequency shift which is largest around $\omega=\pm\omega_{\text{max}}$. Together with the transformed equation for the position quadrature $-i\omega Q(\omega)=\nu P(\omega)$ we then obtain an algebraic set of equations which allows us to calculate any kind of correlations for the molecular vibration, both in time and frequency domains. This will be needed later on for computing the optical response of the molecule.
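A minimal numerical sketch of these frequency-domain quantities (in Python, with illustrative parameters of our own choosing) evaluates the kernel $\Gamma(t)$, its in-band real and imaginary parts [as plotted in Fig.~\ref{fig2}(a)], and the resulting susceptibility $|\chi(\omega)|^2$:
\begin{verbatim}
# Minimal sketch (Python): non-Markovian kernel and susceptibility.
# All parameter values are illustrative.
import numpy as np
from scipy.special import j1

nu = 1.0                       # vibrational frequency (sets the unit)
omega_max = 1.4 * nu           # phonon band edge
Gamma_m = 0.05 * nu            # Markovian decay rate

# time-domain kernel Gamma(t) = Gamma_m J1(omega_max t)/t  (t > 0)
t = np.linspace(1e-4, 50, 1000)
Gt = Gamma_m * j1(omega_max * t) / t

# in-band Fourier transform Gamma(omega) = Gamma_r + i Gamma_i
w = np.linspace(-0.999 * omega_max, 0.999 * omega_max, 2001)
Gr = Gamma_m * np.sqrt(omega_max**2 - w**2) / omega_max
Gi = Gamma_m * w / omega_max

# |chi|^2 = w^2 / [(nu^2 - w^2 + Gi*w)^2 + (Gr*w)^2]
chi2 = w**2 / ((nu**2 - w**2 + Gi * w)**2 + (Gr * w)**2)
print("sideband peak at omega =", w[np.argmax(chi2 * (w > 0))])
\end{verbatim}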
\begin{figure*}[t]
\includegraphics[width=2.078\columnwidth]{Fig3.pdf}
\caption{\emph{Collective vibrational effects}. (a) Collective interaction kernel $\Gamma_{12}(\tau)=\Gamma_{21}(\tau)$ as a function of time delay $\tau$ and separation $j$. In the crystal, only integer values of $j$ are permitted (dashed vertical lines). (b) Comparison between individual decay (dashed curve, assuming two identical molecules $\Gamma_1(\tau)=\Gamma_2(\tau)$) and collective interaction for increasing distance $j$ (as indicated by the numbers above the curves). (c) Real and (d) imaginary part of $\Gamma_{12}(\omega)$ between $-\omega_{\text{max}}$ and $+\omega_{\text{max}}$ for $j=1$ (solid), $2$ (dashed), and $3$ (dotted).}
\label{fig3}
\end{figure*}
\subsection{Markovian versus non-Markovian regimes}
The Markovian limit is achieved when the vibrational frequency lies well within the phonon spectrum $\omega_\text{max}\gg\nu$ and $\Gamma(\omega)$ becomes flat in frequency space: $\Gamma(\omega)=\Gamma_{\text{m}}$. In this case the memory kernel tends to a $\delta$-function: $\Gamma(t)=2\Gamma_{\text{m}}\delta(t)\Theta(t)$ with the convention $\Theta(0)=1/2$. In the continuum limit, the correlations at different times are
\begin{align}
\braket{\xi(t)\xi(t')}=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{-i\omega(t-t')} S_{\text{th}}(\omega),
\end{align}
where the noise spectrum is expressed similarly to the standard thermal spectrum of a harmonic oscillator in thermal equilibrium, $S_{\text{th}}(\omega)=[{\Gamma_r(\omega)\omega}/\nu] \left[\coth\left(\beta\omega/2\right)+1\right]$, in terms of the inverse temperature $\beta=(k_\text{B} T)^{-1}$. The difference lies in the frequency dependence of the real part of the decay rate function
\begin{align}
\Gamma_r(\omega)=\Gamma_{\text{m}}\frac{\sqrt{\omega_{\text{max}}^2-\omega^2}}{\omega_{\text{max}}}\Theta(\omega_{\text{max}}\!-\!\omega)\Theta(\omega_{\text{max}}\!+\!\omega),
\end{align}
where the Heaviside functions provide a natural cutoff of the spectrum at $\pm\omega_{\text{max}}$. While in the time domain, the noise is only $\delta$-correlated at high temperatures and $\omega_{\text{max}}\to\infty$, the noise is always $\delta$-correlated (yet colored) in the frequency domain $\braket{\xi(\omega)\xi(\omega')}=S_{\text{th}}(\omega)\delta(\omega+\omega')$. This property is helpful for analytical estimation of the molecular absorption and emission in the presence of non-Markovian vibrational relaxation. In frequency space, the response of the vibron to the input noise of the phonon bath is characterized by the susceptibility $\chi(\omega)={-i\omega}\left[{\nu^2-\omega^2-i\Gamma(\omega)\omega}\right]^{-1}$ defined by $P(\omega)=\chi(\omega)\xi(\omega)$ (for simplicity we assumed $\tilde{\nu}\approx\nu$).
In Fig.~\ref{fig2}(b), we plot $|\chi(\omega)|^2$ for the two cases $\omega_{\text{max}}\gg\nu$ (Markovian limit) and $\omega_{\text{max}}\approx\nu$ (non-Markovian regime) for identical $\Gamma_{\text{m}}$. While in the Markovian regime, the susceptibility has two approximately Lorentzian sidebands with linewidth $\Gamma_{\text{m}}$ centered around $\pm\nu$, the finite frequency cutoff in the non-Markovian case leads to an unconventional lineshape with reduced linewidth and slight frequency shift. In the time domain, we can simulate the microscopic classical equations of motion for a large number of phonon modes and compare the results to the standard Markovian limit obtained from Brownian motion theory. This is illustrated in Figs.~\ref{fig2}(c) and (d) where we simulate the average energy of the vibron mode $E_\nu(t)=\left(\bar{P}(t)^2+\bar{Q}(t)^2\right)/2$ for classical observables $\{\bar{P},\bar{Q}\}$ interacting with $N=500$ phonon modes in the Markovian and non-Markovian regimes, respectively. While one obtains an exponential decay with $\Gamma_{\text{m}}$ in the Markovian regime [Fig.~\ref{fig2}(c)], in the non-Markovian case [Fig.~\ref{fig2}(d)] one finds a slower nonexponential decay (for identical $\Gamma_{\text{m}}$) which does not reach zero for long times.\\
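The time-domain simulations of Figs.~\ref{fig2}(c) and (d) can be reproduced qualitatively with a short script. The following Python sketch integrates the classical equations of motion for a vibron coupled to $N$ phonon modes; the discretized couplings $\alpha_k$ are chosen such that the discrete kernel $\sum_k(\alpha_k^2\nu/\omega_k)\cos(\omega_k t)$ approaches the continuum form $\Gamma_{\text{m}}J_1(\omega_{\text{max}}t)/t$, and all numerical values are illustrative:
\begin{verbatim}
# Minimal sketch (Python): classical vibron relaxation into N phonons.
# Set omega_max = 1.0 to explore the non-Markovian regime of Fig. 2(d).
import numpy as np
from scipy.integrate import solve_ivp

nu, omega_max, Gamma_m, N, Qfac = 1.0, 7.0, 0.05, 500, 50.0
wk = omega_max * (np.arange(1, N + 1) - 0.5) / N   # phonon frequencies
dw = omega_max / N
# discretization reproducing Gamma(t) = Gamma_m J1(omega_max t)/t
alpha = np.sqrt(2 * Gamma_m / (np.pi * omega_max)
                * np.sqrt(omega_max**2 - wk**2) * wk * dw / nu)
gk = wk / Qfac                                     # phonon damping (constant Q)

def rhs(t, y):
    Q, P, qk, pk = y[0], y[1], y[2:2 + N], y[2 + N:]
    return np.concatenate(([nu * P, -nu * Q + alpha @ qk],
                           wk * pk, -wk * qk + alpha * Q - gk * pk))

y0 = np.concatenate(([1.0, 0.0], np.zeros(2 * N)))  # excited vibron, cold bath
sol = solve_ivp(rhs, (0, 100), y0, max_step=0.02)
E = 0.5 * (sol.y[0]**2 + sol.y[1]**2)               # vibron energy E_nu(t)
print(E[0], E[-1])
\end{verbatim}
Note that the small frequency shift $\nu_s$ emerges automatically from the coupled dynamics and does not need to be inserted by hand.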
\indent The time domain correlations can be easily computed from the thermal spectrum convoluted with the modified mechanical susceptibility in the Fourier domain
\begin{align}
\label{fouriertransformp}
\braket{P(t)P(t')}=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{-i\omega(t-t')}|\chi(\omega)|^2 S_{\text{th}}(\omega),
\end{align}
which also includes the non-Markovian regime. At low temperatures $\beta^{-1}\ll \nu$ the sideband at $-\nu$ is suppressed and the thermal spectrum can be approximated as $S_{\text{th}}(\omega)=[2{\Gamma_r(\omega)\omega}/\nu ] \Theta(\omega)$. This two-time correlation function of the momentum quadrature will be required later in the calculation of molecular spectra in section \ref{molecularspectroscopy}.
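In practice, the integral in Eq.~(\ref{fouriertransformp}) can be evaluated by direct quadrature over the band. A minimal Python sketch, using the low-temperature form of $S_{\text{th}}$ quoted above and the in-band imaginary part $\Gamma_i(\omega)=\Gamma_{\text{m}}\omega/\omega_{\text{max}}$ (parameters illustrative):
\begin{verbatim}
# Minimal sketch (Python): <P(t)P(t')> by quadrature over the band,
# low-temperature (nbar = 0) noise spectrum; values illustrative.
import numpy as np

nu, omega_max, Gamma_m = 1.0, 1.3, 0.1
w = np.linspace(-0.999 * omega_max, 0.999 * omega_max, 4001)
Gr = Gamma_m * np.sqrt(omega_max**2 - w**2) / omega_max
Gi = Gamma_m * w / omega_max
chi2 = w**2 / ((nu**2 - w**2 + Gi * w)**2 + (Gr * w)**2)
Sth = 2 * Gr * w / nu * (w > 0)          # low-T approximation

tau = np.linspace(0, 40, 400)
corr = np.trapz(chi2 * Sth * np.exp(-1j * np.outer(tau, w)),
                w, axis=1) / (2 * np.pi)
print(corr[0])                           # ~ <P^2> = 1/2 at nbar = 0
\end{verbatim}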
\subsection{Collective vibrational effects}
A collection of impurity molecules sitting close to each other within the same crystal will see the same phonon bath and can, therefore, undergo a collective vibrational relaxation process. This is similar to the phenomenon of subradiance/superradiance of quantum emitters commonly coupled to an electromagnetic environment, where the rate of photon emission from the whole system can be smaller or larger than that of an individual isolated emitter. In order to elucidate this aspect, we follow the approach sketched above, i.e.~we eliminate the phonon modes to obtain a set of coupled Langevin equations for two molecules situated $2j$ sites apart from each other:
\begin{subequations}
\label{collectivevib}
\begin{align}
\dot{P}_1=-\tilde{\nu}_1 Q_1-\Omega Q_2-\Gamma_1\ast P_1-\Gamma_{12}\ast P_2+\xi_1,\\
\dot{P}_2=-\tilde{\nu}_2 Q_2 -\Omega Q_1-\Gamma_2\ast P_2-\Gamma_{21}\ast P_1+\xi_2.
\label{collective}
\end{align}
\end{subequations}
The mutually induced (small) energy shift $\Omega=\sum_k {\alpha_{k,1}\alpha_{k,2}}/{\omega_k}$ and the mutual damping kernels $\Gamma_{12}(t)=\nu_2\sum_k ({\alpha_{k,1}\alpha_{k,2}}/{\omega_k})\cos(\omega_k t)\Theta(t)$ and $\Gamma_{21}(t)=\nu_1\sum_k ({\alpha_{k,1}\alpha_{k,2}}/{\omega_k})\cos(\omega_k t)\Theta(t)$ are strongly dependent on the intermolecular separation $2j$ (see Appendix \ref{collectivevibrationalrelaxation} for the full expressions), whereas the individual decay terms $\Gamma_1$ and $\Gamma_2$ are given by the expressions derived previously. Importantly, the noise terms $\xi_1$ and $\xi_2$ are now no longer independent of each other but correlated according to a separation-dependent expression specified in Appendix \ref{collectivevibrationalrelaxation}. In the continuum limit $N\to\infty$, the collective interaction kernels can be approximated with the aid of higher-order Bessel functions (assuming identical molecules, $\nu_1=\nu_2$, and consequently $\Gamma_{12}(t)=\Gamma_{21}(t)$):
\begin{align}
\Gamma_{12}(t-t')=\Gamma_{\text{m}}\frac{4j J_{4j}(\omega_{\text{max}} |t-t'|)}{|t-t'|}\Theta(t-t').
\end{align}
In Fig.~\ref{fig3}(a), we plot the collective decay kernel as a function of time and intermolecular separations $2j$. The collective effects do not occur instantaneously but in a highly time-delayed fashion [cf.~Fig~\ref{fig3}(b)]. We can interpret the collective interaction as an exchange of phonon wavepackets between the two molecules, where the wavepackets are traveling with the group velocity $v_g =d\omega/dq$ of the crystal (lattice constant $a$) at a maximum speed $v_g^{\text{max}}\approx a\omega_{\text{max}}/2$ (the high frequency components towards the band edge are slower). This leads to an approximate time of $\tau=4j\omega_{\text{max}}^{-1}$ for the wavepacket to propagate from one molecule to the other.\\
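The delayed onset of the collective kernel is easy to verify numerically; the following minimal Python sketch locates the first maximum of $\Gamma_{12}(\tau)$, which lies near the phonon travel time $\tau\approx 4j/\omega_{\text{max}}$ (parameters illustrative):
\begin{verbatim}
# Minimal sketch (Python): delayed onset of the collective kernel.
import numpy as np
from scipy.special import jv

Gamma_m, omega_max = 1.0, 1.0        # illustrative units
tau = np.linspace(1e-3, 60, 6000)
for j in (1, 2, 3):                  # molecules 2j sites apart
    G12 = Gamma_m * 4 * j * jv(4 * j, omega_max * tau) / tau
    print(j, tau[np.argmax(G12)], 4 * j / omega_max)  # peak vs 4j/w_max
\end{verbatim}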
\indent The collective interaction will also lead to a modification of the vibrational lifetimes of the molecules which we want to describe in the following. To this end, one can again proceed with a Fourier analysis of Eqs.~(\ref{collectivevib}). The expression for the non-Markovian collective interaction kernel in frequency space (between $-\omega_{\text{max}}$ and $+\omega_{\text{max}}$) reads
\begin{align}
\Gamma_{12}(\omega) =& i\Gamma_{\text{m}} U_{4j-1}\left(\frac{\omega}{\omega_{\text{max}}}\right)\frac{\sqrt{\omega_{\text{max}}^2-\omega^2}}{\omega_{\text{max}}}\\\nonumber
&+\Gamma_{\text{m}} T_{4j}\left(\frac{\omega}{\omega_{\text{max}}}\right),
\end{align}
where we introduced the Chebyshev polynomials of the first ($T_n$) and second kind ($U_n$). We are interested in the real part of the above expression, which gives rise to a collectively induced modification of the vibrational lifetime, while the imaginary part corresponds again to a frequency shift. In Figs.~\ref{fig3}(c) and (d) we plot the real and imaginary parts of $\Gamma_{12}(\omega)$ for small distances $j$, respectively.
In the Markovian limit $\omega_{\text{max}}\gg\nu$ the kernels become flat in frequency space and one can approximate $\Gamma_{12}(\omega)=\Gamma_{12}(0)=\Gamma_{\text{m}}$ and consequently $(\Gamma_{12}\ast P_i)\approx \Gamma_{\text{m}}P_i$ with $i=\{1,2\}$. A diagonalization can be performed by moving to collective quadratures $P_+=P_1+P_2$ and $P_- = P_1 - P_2$ (and identically for the positions), for which the equations of motion decouple
\begin{subequations}
\begin{align}
\dot{P}_+&\approx -(\tilde{\nu}+\Omega)Q_+-2\Gamma_{\text{m}}P_++\xi_1+\xi_2,\\
\dot{P}_-&\approx -(\tilde{\nu}-\Omega)Q_-+\xi_1-\xi_2.
\end{align}
\end{subequations}
While one of the collective modes undergoes relaxation at an increased rate $2\Gamma_{\text{m}}$, the orthogonal collective mode eventually decouples from the phononic environment. Of course, as the derivation we have performed is restricted to one-dimensional crystals, it would be interesting to explore this effect in three-dimensional scenarios where both longitudinal and transverse phonon modes have to be considered, with effects stemming from the molecular orientation as well as the influence of anharmonic potentials. A recent theoretical work also discusses phonon-bath-mediated interactions between two molecular impurities immersed in nanodroplets with respect to the rotational degrees of freedom of the molecules \cite{li2020intermolecular}.
\section{Fundamental spectral features}
\label{molecularspectroscopy}
Let us now consider a molecule driven by a coherent light field. We will make use of and extend the formalism of Ref.~\cite{reitz2019langevin} to compare the effects of Markovian versus non-Markovian vibrational relaxation, the phonon imprint on spectra, and temperature effects. To derive the absorption profile of a laser-driven molecule, one can compute the steady-state excited-state population $\mathcal{P}_{\text{e}}=\braket{\sigma^\dagger\sigma}=\eta_\ell \left[\braket{\sigma}+\braket{\sigma}^*\right]/(2\gamma)$. The average steady-state dipole moment can formally be written as (\textcolor{black}{note that we are assuming weak driving conditions $\eta_\ell\ll\gamma$ such that the laser drive only probes the linear response of the dipole}):
\begin{align}
\braket{\sigma}=\eta_\ell\int_{-\infty}^t dt' e^{-\left[\gamma-i\left(\omega_\ell-{\omega}_0\right)\right](t-t')} \braket{\mathcal{B}(t)\mathcal{B}^\dagger (t')}.
\label{integralsigma}
\end{align}
The important quantity to be estimated is the correlation function for the displacement operators of the molecular vibration $\braket{\mathcal{B}(t)\mathcal{B}^\dagger (t')}$, which is fully characterized by the Huang-Rhys factor $\lambda^2$ and the second-order momentum correlation functions:
\begin{equation}
\braket{\mathcal{B}(t)\mathcal{B}^\dagger (t')}=e^{-2\lambda^2\left(\braket{P^2}-\braket{P(t)P(t')}\right)}.
\end{equation}
The stationary correlation $\braket{P(t)^2}=\braket{P^2}=1/2+\bar{n}$ includes the temperature of the environment and does not depend on the details of the decay process. The two-time correlations $\braket{P(t)P(t')}$ (and consequently the vibrational linewidths of the resulting optical spectrum) are crucially determined by the details of the dissipation model \textcolor{black}{derived in section \ref{vibrationalrelaxation}}. In order to capture the non-Markovian character of the vibrational relaxation, we extend the method used in Ref.~\cite{reitz2019langevin} by computing correlations in the Fourier domain and then transforming to the time domain.
\subsection{The non-Markovian vibrational relaxation regime}
Let us first consider the imprint of the particularities of the vibrational relaxation process onto the absorption and emission spectra when molecule-light interactions are taken into account.
For the calculation of the momentum correlation function $\braket{P(t)P(t')}$ one has to evaluate the integral in Eq.~(\ref{fouriertransformp}), where the susceptibility weighted with the thermal spectrum is given by the general expression
\begin{figure}[t]
\includegraphics[width=1.02\columnwidth]{Fig4.pdf}
\caption{\emph{Molecular spectroscopy}. (a) Real part of the correlation function $\braket{P(t)P(t')}$ for Markovian decay (dashed) versus non-Markovian decay (solid, cutoff at $\omega_{\text{max}}=1.3\nu$) for $\Gamma_{\text{m}} =0.1\nu$ and at zero temperature $\bar{n}=0$. (b) Comparison between the resulting absorption spectra (steady state population $\mathcal{P}_{\text{e}}$ normalized by steady-state population of resonantly driven two-level system $\mathcal{P}_0$) in the Markovian and non-Markovian regimes for the same parameters as in (a) and $\lambda=1$, $\gamma=\Gamma_{\text{m}}/4$. (c) Effect of thermal occupation $\bar{n}$ on absorption spectra ($\lambda=1$) without vibrational relaxation $\Gamma_{\text{m}}=0$ and (d) including vibrational relaxation $\Gamma_{\text{m}}=8\gamma$ (assuming Markovian decay).}
\label{fig4}
\end{figure}
\begin{align}
\label{nonmarksusc}
|\chi(\omega)|^2 S_{\text{th}}(\omega)=\frac{\Gamma_r (\omega)\omega^3\left[\coth(\frac{\beta\omega}{2})+1\right]/\nu}{\left(\nu^2\! -\! \omega^2\! +\Gamma_i (\omega)\omega \right)^2\! +\! \Gamma_r(\omega)^2\omega^2},
\end{align}
with $\Gamma_i(\omega)=\Gamma_{\text{m}} {\omega}/\omega_{\text{max}}$ between $-\omega_{\text{max}}$ and $+\omega_{\text{max}}$. As discussed in the previous section, the real part of $\Gamma(\omega)$ determines the decay rate while the imaginary part leads to a frequency shift. Generally, performing the integral over the expression in Eq.~(\ref{nonmarksusc}) is difficult since the line shapes can be very far from simple Lorentzians. However, assuming a good oscillator ($\Gamma_{\text{m}} \ll \nu$) and consequently a sharply peaked susceptibility that only picks frequencies around the vibrational resonance, we can obtain an effective modified frequency $\nu'$ and decay rate $\Gamma'$ in the non-Markovian regime (however assuming $\omega_{\text{max}}>\nu$) with $\nu'=\left[\nu^2+\Gamma_i(\nu)\nu\right]^{1/2}$ and $\Gamma'=\Gamma_r(\nu')$. By expanding Eq.~(\ref{nonmarksusc}) around the poles of the denominator $\omega=\pm\nu'+\delta$ and assuming $|\delta|\ll|\nu'|$, one can then calculate the temperature-dependent momentum correlation function in the non-Markovian regime:
\begin{align}
\braket{P(t)P(t')}\! =\! \left[\!\left(\bar{n}\!+\!\frac{1}{2}\right)\!\cos(\nu'\tau)\!-\!\frac{i}{2}\sin(\nu'\tau)\!\right]\! e^{-\frac{\Gamma'}{2}|\tau|},
\end{align}
with time delay $\tau=t-t'$. This allows for an analytical evaluation of the integral in Eq.~(\ref{integralsigma}) (see Appendix \ref{calcabsorptionspectrum} for detailed calculation) and leads to a steady-state excited-state population of
\begin{align}
\label{pss}
\frac{\mathcal{P}_{\text{e}}}{\eta_\ell^2}=\!\sum_{n=0}^\infty\sum_{l=0}^n\frac{L(n)B(n,l)\left(\gamma\!+\!n\frac{\Gamma'}{2}\right)/\gamma}{(\gamma\!+\!n\frac{\Gamma'}{2})^2+\left[(\omega_\ell\!-\!\omega_0)-(n\!-\!2l){\nu'}\right]^2},
\end{align}
where we introduced $L(n)=e^{-\lambda^2(1+2\bar{n})}\frac{\lambda^{2n}}{n!}$ and $B(n,l)=\binom{n}{l}\left(\bar{n}+1\right)^{n-l}\bar{n}^l$. One can immediately obtain the result for the Markovian limit by replacing $\nu'\to\nu$ and $\Gamma'\to\Gamma_{\text{m}}$. Figures \ref{fig4}(a),(b) show a comparison between the momentum correlation function and the resulting steady-state population (normalized by the steady-state population of a resonantly driven two-level system $\mathcal{P}_0={\eta_\ell^2}/{\gamma^2}$) in the Markovian and non-Markovian regimes (for fixed $\Gamma_{\text{m}}$). We can see that non-Markovianity leads to modified spectral positions and linewidths of the vibronic sidebands while the ZPL is not affected by the dissipation process. Equation~(\ref{pss}) is a sum over Lorentzians comprising a series of blue-shifted lines with index $n$, arising from the electron-vibration interaction and weighted by a Poissonian distribution. Thermal occupation of the vibrational states can, however, counteract this effect by leading to red-shifted lines in absorption [see Fig.~\ref{fig4}(c)] with index $l$, weighted by a binomial distribution. As shown in Fig.~\ref{fig4}(d), for large vibrational relaxation rates $\Gamma_{\text{m}}\gg\gamma$ the sidebands will be suppressed and absorption and emission of the molecule will mostly occur on the ZPL transitions $\ket{g,m_\nu}\leftrightarrow\ket{e,m_\nu}$. While in the case of zero temperature the ZPL is solely determined by $n=0$, for finite temperatures all terms with $n=2l$ can contribute to it.
An important quantity is the Franck-Condon factor $f_{\text{FC}}$ which measures the reduction of the ZPL intensity due to coupling to internal vibrations. This factor is given by the average displacement squared $f_{\text{FC}}=\braket{\mathcal{B}^\dagger}^2=e^{-\lambda^2(1+2\bar{n})}$ and does not depend on the vibrational relaxation of the molecule.
Using the fact that $(\bar{n}+1)/\bar{n}=e^{\beta\nu}$, one can express Eq.~(\ref{pss}) as a sum over just a single index in the limit $2\lambda^2\sqrt{\bar{n}(\bar{n}+1)}\ll 1$ (see Appendix \ref{calcabsorptionspectrum} for derivation) as
\begin{align}
\frac{\mathcal{P}_{\text{e}}}{\eta_\ell^2}=\sum_{n=-\infty}^\infty \frac{f_{\text{FC}}\left(\frac{\bar{n}+1}{\bar{n}}\right)^{\frac{n}{2}}I_n\left(2\lambda^2\bar{N}\right)\left(\gamma\!+\!|n|\frac{\Gamma'}{2}\right)/\gamma}{(\gamma+|n|\frac{\Gamma'}{2})^2+(\omega_\ell-\omega_0-n\nu')^2},
\end{align}
with $I_n (x)$ denoting the \textit{modified} Bessel functions of the first kind and $\bar{N}=\sqrt{\bar{n}(\bar{n}+1)}$. This expression is similar to the result known from the standard Huang-Rhys theory for emission and absorption \cite{huang1950theory}, but it now additionally includes the vibrational relaxation rate $\Gamma'$. The ZPL contribution ($n=0$) at resonance is thus simply given by $\mathcal{P}_{\text{e}}={\eta_\ell^2f_{\text{FC}}}/{\gamma^2}$. The emission spectrum can be calculated from the Fourier transform of the two-time correlations $\braket{\sigma^\dagger (\tau)\sigma (0)}$. Considering the decay of an initially excited molecule, one finds that the emission spectrum is simply given as the mirror image (with respect to the ZPL) of the absorption spectrum, which is why we restrict ourselves to the calculation of the absorption profile.
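Both representations of the absorption spectrum are straightforward to evaluate numerically. The following Python sketch implements the double sum of Eq.~(\ref{pss}) and the single-index Bessel form, which should approximately agree in the stated limit $2\lambda^2\sqrt{\bar{n}(\bar{n}+1)}\ll 1$; the parameter values are illustrative choices satisfying this condition:
\begin{verbatim}
# Minimal sketch (Python): vibronic absorption, double vs single sum.
# Parameters illustrative; chosen so 2 lam^2 sqrt(nbar(nbar+1)) << 1.
import numpy as np
from math import factorial
from scipy.special import comb, iv

lam, nbar = 0.5, 0.1
gamma, Gp, nup = 0.02, 0.16, 1.0      # gamma, Gamma', nu' (units of nu')
wL = np.linspace(-4, 4, 1601)         # laser detuning omega_l - omega_0

def Pe_double(wL, nmax=20):
    out = np.zeros_like(wL)
    for n in range(nmax):
        Ln = np.exp(-lam**2 * (1 + 2 * nbar)) * lam**(2 * n) / factorial(n)
        for l in range(n + 1):
            B = comb(n, l) * (nbar + 1)**(n - l) * nbar**l
            width = gamma + n * Gp / 2
            out += Ln * B * width / gamma / \
                   (width**2 + (wL - (n - 2 * l) * nup)**2)
    return out

def Pe_single(wL, nmax=20):
    fFC = np.exp(-lam**2 * (1 + 2 * nbar))
    Nbar = np.sqrt(nbar * (nbar + 1))
    out = np.zeros_like(wL)
    for n in range(-nmax, nmax + 1):
        wn = fFC * ((nbar + 1) / nbar)**(n / 2) * iv(n, 2 * lam**2 * Nbar)
        width = gamma + abs(n) * Gp / 2
        out += wn * width / gamma / (width**2 + (wL - n * nup)**2)
    return out

print("max deviation:", np.max(np.abs(Pe_double(wL) - Pe_single(wL))))
\end{verbatim}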
\begin{figure*}[t]
\includegraphics[width=1.96\columnwidth]{Fig5.pdf}
\caption{\emph{Phonon imprint on absorption}. Absorption spectra (logarithmic scale) of zero-phonon line including phonon wing at different temperatures for (a) 1D density of states and $\lambda_{\text{e-ph}}^{\text{1D}}=0.03$ and (b) 3D density of states and $\lambda^{\text{3D}}_{\text{e-ph}}=0.02\,\mathrm{ps^2}$ with $\omega_{\text{max}}=3\,\mathrm{THz}$ in both cases. Insets show schematic of total molecular absorption spectrum with vibrational sidebands accompanied by phonon wings. (c) Debye-Waller factor $f_{\text{DW}}$ for a 3D density of states as a function of temperature with increasing coupling strength $\lambda_{\text{e-ph}}^{\text{3D}}$ (as indicated by arrow) in equidistant steps from $\lambda_{\text{e-ph}}^{\text{3D}}=0.01\,\mathrm{ps^2}$ to $\lambda_{\text{e-ph}}^{\text{3D}}=0.1\,\mathrm{ps^2}$ and cutoff frequency of $\omega_{\text{max}}=3\,\mathrm{THz}$.}
\label{fig5}
\end{figure*}
\subsection{Phonon imprint on spectra}
So far, we have considered the phonons only as a bath that provides vibrational relaxation for the molecule and have neglected the effect of electron-phonon coupling. However, this can become a dominant mechanism at larger temperatures, where all acoustic and optical phonon modes are thermally activated ($> 50\,\text{K}$) and the probability of a ZPL transition is very small. To include electron-phonon coupling, the expression for the steady-state dipole moment [cf.~Eq.~(\ref{integralsigma})] also has to account for the displacement of the electronic excited state caused by the phonons
\begin{align}
\label{ephsigma}
\braket{\sigma}&=\\\nonumber
&\eta_\ell\!\int_{-\infty}^t\! dt' e^{-\left[\gamma-i\left(\omega_\ell-\tilde{{\omega}}_0\right)\right](t-t')}\! \braket{\mathcal{B}(t)\mathcal{B}^\dagger (t')}\braket{\mathcal{D}(t)\mathcal{D}^\dagger (t')}.
\end{align}
Here, the coupling to phonons additionally leads to a renormalization of the electronic transition frequency $\tilde{\omega}_0=\omega_0-\sum_k \lambda_k^2\omega_k$ (polaron shift). \textcolor{black}{The expression in Eq.~(\ref{ephsigma}) now jointly contains all of the effects: electron-phonon coupling, electron-vibron coupling and vibrational relaxation (through the correlation function $\braket{\mathcal{B}(t)\mathcal{B}^\dagger(t')}$)}. Since we consider the phonon modes to be independent of each other, the displacement correlation function of the phonons can be factorized, $\braket{\mathcal{D}(t)\mathcal{D}^\dagger(t')}=\prod_k\braket{\mathcal{D}_k(t)\mathcal{D}_k^\dagger(t')}$, where the correlation for each single mode is given by $\braket{\mathcal{D}_k(t)\mathcal{D}_k^\dagger(t')}=\mathrm{exp}\left[{2\lambda_k^2 \left(\braket{p_k^2}-\braket{p_k(t)p_k(t')}\right)}\right]$. When replacing the sum over $k$ with an integral over $\omega$ in the continuum limit, this yields (neglecting phonon decay as it will not influence the spectra in the continuum limit):
\begin{align}
\label{correlationphonons}
\braket{\mathcal{D}(t)\mathcal{D}^\dagger (t')}&=\\\nonumber
&e^{\int_0^{\infty}\!d\omega\! \frac{J(\omega)}{\omega^2}\left[\coth\left(\frac{\beta\omega}{2}\right)\left(\cos\left(\omega\tau\right)-1\right)-i\sin\left(\omega\tau\right)\right]}.
\end{align}
Here, we have introduced the spectral density of the electron-phonon coupling $J(\omega)=\sum_k |\lambda_k\omega_k|^2\delta(\omega-\omega_k) =n(\omega)\lambda(\omega)^2\omega^2$ where $n(\omega)$ denotes the density of states. In the one-dimensional derivation considered here we obtain for the spectral density
\begin{align}
J^{\text{1D}}(\omega)=\lambda_{\text{e-ph}}^{\text{1D}}\cdot\omega\frac{\sqrt{\omega_{\text{max}}^2-\omega^2}}{\omega_{\text{max}}}\Theta(\omega_{\text{max}}-\omega).
\end{align}
The electron-phonon coupling constant $\lambda_{\text{e-ph}}^{\text{1D}}$ is derived in Appendix \ref{elphcoupling} and depends, among other things, on the displacement of the crystal atoms upon excitation of the molecule as well as on the spring constants between the molecule's atoms and the neighboring crystal atoms. Again, the cutoff at $\omega_{\text{max}}$ arises naturally from the dispersion of the crystal. In the continuum limit considered here, this spectral density would lead to a divergence of the integral in Eq.~(\ref{correlationphonons}) due to the high density of low-frequency phonons, which is a well known problem for 1D crystals \cite{kikas1996anomalous, hizhnyakov2012zero}. This issue can be addressed by considering only a finite-sized 1D crystal with a minimum phonon frequency cutoff $\omega_{\text{min}}>0$. However, one can instead also consider a spectral density stemming from a 3D density of states:
\begin{align}
J^{\text{3D}}(\omega)=\lambda_{\text{e-ph}}^{\text{3D}}\cdot\omega^3\frac{\sqrt{\omega_{\text{max}}^2-\omega^2}}{\omega_{\text{max}}}\Theta(\omega_{\text{max}}-\omega),
\end{align}
where the electron-phonon coupling constant $\lambda_{\text{e-ph}}^{\text{3D}}$ now has units $[\text{s}^2]$.
In Figs.~\ref{fig5}(a) and (b) we plot the resulting absorption spectrum of the ZPL for 1D and 3D densities of states, whereby the exact shape of the phonon wing is determined by the spectral density function $J(\omega)$. While analytical expressions for the integral in Eq.~(\ref{correlationphonons}) are difficult to obtain in the continuum case, we can express the absorption spectrum of the ZPL including phonon sideband in terms of discrete lines as
\begin{align}
\frac{\mathcal{P}_{\text{e}}}{\eta_\ell^2}\!=\!\sum_{\{n_k\}}^{\infty}\!\sum_{\{l_k\}}^{\{n_k\}}\frac{\prod_{k=1}^{N}L_k(n_k)B_k(n_k,l_k)}{\gamma^2\!+\!\left[\omega_\ell-\tilde{\omega}_0\!-\!\sum_{k=1}^N (n_k\!-\!2l_k)\omega_k\right]^2}.
\end{align}
Here the sum runs over all $\{n_k\}=n_1,\hdots,n_N$ and $\{l_k\}=l_1,\hdots,l_N$. This can be seen as a generalization of the result in Eq.~(\ref{pss}) for many modes where the $N$ phonon modes are indexed by $k$ and the function $L_k (n_k)$ accounts for the displacement of the excited state while the binomial distribution $B_k (n_k, l_k)$ accounts for the thermal occupation of each mode. As one can see in Figs.~\ref{fig5}(a) and (b), thermal occupation of the phonons leads to red-shifted phonon sidebands in absorption and eventually to a symmetric absorption spectrum around the zero-phonon line in the limit of large temperatures. Note that here we did not explicitly include phonon decay $\gamma_k^{\text{ph}}$ as it does not influence the absorption spectra in the continuum limit (the phonon peaks overlap and are not resolved). However, one can easily account for a finite phonon lifetime by including it in the momentum correlations $\braket{p_k (t)p_k (t')}=\frac{1}{2}[(1\!+\!2\bar{n}_k)\cos(\omega_k \tau)\!-\!i\sin(\omega_k \tau)]e^{-\gamma_k^{\text{ph}}|\tau|}$.
Similarly to the Franck-Condon factor for vibrons, one defines the Debye-Waller factor $f_\text{DW}=\braket{\mathcal{D}^\dagger}^2=\mathrm{exp}\left[{-\int_0^\infty d\omega J(\omega)\omega^{-2}\coth(\beta\omega/2) }\right]$, which measures the reduction of the ZPL intensity due to the scattering of light into phonons. In Fig.~\ref{fig5}(c) we show the behavior of $f_\text{DW}$ in the 3D case for different coupling strengths at low temperatures, revealing a stronger temperature dependence for larger couplings. The total reduction of the ZPL intensity as compared to the two-level-system case is then given by the product $f_{\text{FC}}\cdot f_{\text{DW}}$.
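For the continuum case, a practical route is to evaluate the cumulant in Eq.~(\ref{correlationphonons}) on a time grid and Fourier transform the resulting dipole correlation. A minimal Python sketch for the 3D spectral density, which also returns the Debye-Waller factor; temperature is expressed as a frequency $k_\text{B}T/\hbar$ and all values are illustrative (the coupling constant follows Fig.~\ref{fig5}(b)):
\begin{verbatim}
# Minimal sketch (Python): ZPL + phonon wing from the cumulant.
# Values illustrative; kT means k_B T / hbar in THz.
import numpy as np

lam3d, omega_max = 0.02, 3.0        # ps^2 and THz, cf. Fig. 5(b)
gamma, kT = 1e-2, 2.0               # ZPL half width, temperature (THz)
w = np.linspace(1e-4, omega_max * 0.9999, 1500)
J = lam3d * w**3 * np.sqrt(omega_max**2 - w**2) / omega_max
coth = 1.0 / np.tanh(w / (2 * kT))

tau = np.linspace(0, 200, 2048)
phase = np.outer(tau, w)
phi = np.trapz(J / w**2 * (coth * (np.cos(phase) - 1)
                           - 1j * np.sin(phase)), w, axis=1)

f_DW = np.exp(-np.trapz(J / w**2 * coth, w))   # Debye-Waller factor
corr = np.exp(phi - gamma * tau)               # one-sided dipole correlation
spec = np.fft.fft(corr).real                   # absorption lineshape (unnorm.)
freq = 2 * np.pi * np.fft.fftfreq(tau.size, tau[1] - tau[0])
print("Debye-Waller factor:", f_DW)
\end{verbatim}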
\subsection{Dephasing}
Within the model we consider, where all interactions stem from a harmonic treatment of both intramolecular vibrations and crystal motion, the zero-phonon linewidth of the electronic transitions is largely independent of temperature. In reality, to account for higher temperature effects one needs contributions quadratic in the phononic displacements, which have been theoretically pointed out and experimentally observed \cite{muljarov2004dephasing,jakubczyk2016impact}. However, even in the linear regime, the fact that vibronic and electron-phonon couplings do not lead to significant dephasing is a non-trivial result. One could, e.g., expect that the Holstein Hamiltonian for electron-phonon coupling $H_{\text{H}}=\left[\omega_0-\sum_k \sqrt{2}\lambda_k\omega_k q_k\right]\sigma^\dagger \sigma+\sum_k \omega_k c_k^\dagger c_k$, in which the excited electronic level experiences a stochastic shift, should lead to a dephasing of the ground-excited coherence $\braket{\sigma}$. One reason is the similarity to the pure dephasing of a two-level transition subjected to a noisy laser undergoing evolution with the Hamiltonian $[\omega_0+\dot{\phi}(t)]\sigma^\dagger \sigma$, where the frequency is continuously shaken by a white noise stochastic term of zero average obeying $\braket{\dot{\phi}(t)\dot{\phi}(t')}=\gamma_{\text{deph}} \delta(t-t')$. It is straightforward to show that the time evolution of the coherence in this case becomes $\braket{\sigma(t)}=\braket{\sigma(0)}e^{-i\omega_0 t}e^{-\gamma_\text{deph}t}$, such that the correlations of the noise indicate the increase in the linewidth of the transition \cite{plankensteiner2016laser}. Similarly, one could expect that the zero-averaged quantum noise stemming from the shaking of the electronic transition in the Holstein Hamiltonian would lead to the same kind of effect. However, computing the exact time evolution of the coherence in the interaction picture [with Hamiltonian $\tilde{H}_{\text{H}}(t)$] one obtains:
\begin{align}
\braket{\sigma(t)}& = \braket{\sigma(0)} \mathcal{T}\{ e^{-i\int_0^t dt' \tilde{H}_{\text{H}}(t')} \}e^{-\gamma t} \\\nonumber
&= \braket{\sigma(0)} e^{-(\gamma+i\tilde{\omega}_0) t}\braket{\mathcal{D}(t)\mathcal{D}^\dagger(0)},
\end{align}
where the time-ordered integral can be resolved by a second-order Magnus expansion, confirming the result already known from the polaron picture. The correlation $\braket{\mathcal{D}(t)\mathcal{D}^\dagger(0)}=e^{-\lambda_k^2(2\bar{n}_k+1)}e^{\varphi(t)}$ with $\varphi(t)=\lambda_k^2\left[(2\bar{n}_k+1)\cos(\omega_k t)-i\sin(\omega_k t)\right]$ (for a single mode $\omega_k$) shows a cosine term similar to the dephasing case, but one which does not continuously increase in time. For small times $t\ll\omega_k^{-1}$, the cosine term can be expanded and the dephasing rate can be approximated by $\gamma_{\text{deph}}=\lambda_k^2(\bar{n}_k+1/2)\omega_k^2 t$, while for larger times the rate goes to zero (the time scale is set by $\gamma^{-1}$). In the continuum limit, the time-dependent dephasing rate $\gamma_{\text{deph}}(t)=-\Re\left[\dot{\varphi} (t)\right]$ reads
\begin{align}
\gamma_{\text{deph}}(t)=\int_0^\infty d\omega \frac{J(\omega)}{\omega}\coth\left(\frac{\beta\omega}{2}\right)\sin(\omega t).
\end{align}
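A minimal Python sketch evaluating this integral for the 1D and 3D spectral densities (with a small lower frequency cutoff regularizing the 1D case; coupling constants follow the illustrative values of Fig.~\ref{fig5}) makes the dimensional contrast explicit:
\begin{verbatim}
# Minimal sketch (Python): gamma_deph(t) for 1D vs 3D spectral densities.
# omega_min regularizes the 1D case; values illustrative.
import numpy as np

omega_max, kT, omega_min = 3.0, 2.0, 1e-3    # THz; kT = k_B T / hbar
w = np.linspace(omega_min, omega_max * 0.9999, 4000)
coth = 1 / np.tanh(w / (2 * kT))
J1d = 0.03 * w * np.sqrt(omega_max**2 - w**2) / omega_max
J3d = 0.02 * w**3 * np.sqrt(omega_max**2 - w**2) / omega_max

for t in (0.1, 1.0, 10.0):                   # times in ps
    r1 = np.trapz(J1d / w * coth * np.sin(w * t), w)
    r3 = np.trapz(J3d / w * coth * np.sin(w * t), w)
    print(f"t = {t:5.1f} ps   1D: {r1:.4f}   3D: {r3:.4f}")
\end{verbatim}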
In accordance with Figs.~\ref{fig5}(a),(b) we can see that linear Holstein coupling can consequently lead to a temperature-dependent zero-phonon line if there is a large density of low frequency (long wavelength) phonon modes with $\omega_k< \gamma$ which is the case in 1D but not in higher dimensions. This peculiarity of dephasing in 1D has already been discussed in the literature \cite{reichman1996on, kikas1996anomalous, hizhnyakov2012zero}. It is however also well established within the literature that the major contribution of the experimentally observed temperature-dependent broadening of the zero-phonon line is caused by a higher-order electron-phonon interaction of the form \cite{muljarov2004dephasing, osadko1985dephasing, reichman1996on, reigue2017probing}
\begin{align}
H_{\text{el-phon}}^{\text{quad}}=\sum_{k, k'}\beta_{k k'} q_k q_{k'} \sigma^\dagger\sigma,
\end{align}
with the coupling constant of the quadratic interaction $\beta_{k k'}$. This form of the interaction can stem either, within the harmonic assumption, from a difference in curvatures between the ground and excited state potential surfaces or from anharmonic potentials.
\section{Molecular Polaritonics}
\label{cavityspectroscopy}
It is currently of great interest to investigate the behavior of hybrid platforms containing organic molecules interacting with confined light modes such as those provided by optical cavities \cite{walther2006cavity,haroche1989cavity,wang2017coherent} or plasmonic nanostructures \cite{chikkaraddy2016singlemolecule, zengin2015realizing}. Such light-dressed platforms have been studied both at the single- and few-molecule level \cite{wang2017coherent, wang2019turning, chikkaraddy2016singlemolecule} as well as in the mesoscopic, collective strong-coupling limit \cite{shalabney2015coherent, lidzey1999room, holmes2004strong}. In these cases, the strong light-matter coupling leads to the formation of polaritonic hybrid states with both light and matter components. Experimental and theoretical works are currently exploring fascinating enhanced properties such as exciton and charge transport \cite{orgiu2015conductivity,schachenmayer2015cavity,feist2015extraordinary,hagenmuller2017cavity,hagenmuller2018cavity}, superconductive behavior \cite{sentef2018cavity, thomas2019exploring} and modified chemical reactivity \cite{hutchinson2012modifying,galego2016suppressing,herrera2016cavity,martinezmartinez2018can,kampschulte2018cavity}. \textcolor{black}{There is also recent interest in the modification of nonadiabatic light-matter dynamics at so-called conical intersections leading to fast nonradiative decay of electronic excited states \cite{ulusoy2019modifying, vendrell2018collective}}. It has been recently shown that the Purcell regime of cavity QED can result in a strong modification of the branching ratio of a single molecule and suppress undesired Stokes lines~\cite{wang2019turning}. Recent theoretical works account for the vibronic coupling of molecules by solving a Holstein-Tavis-Cummings Hamiltonian, which leads to the occurrence of polaron-polariton states, i.e.~light-matter states where the hybridized states between the bare electronic transition and the light field additionally get dressed by the vibrations of the molecules \cite{herrera2018theory, zeb2018exact, herrera2017dark, neuman2018origin, wu2016when, litinskaya2004fast, kansanen2019theory,reitz2019langevin}. Many models rely on numerical simulations and are based on following the evolution of state vectors under simplified assumptions, typically including only vibronic interactions and finite-temperature effects. We employ here the approach of the last section and add a Jaynes-Cummings interaction of a molecule in the phononic environment with a localized cavity mode. A weak laser drive maps the intracavity molecular polaritonics effects onto the cavity transmission profile, identifying polariton cross-talk effects at any temperature. Furthermore, we map the combined effect of vibronic and electron-phonon interactions onto the cavity output field.
\subsection{Cavity transmission}
\begin{figure*}[t]
\includegraphics[width=1.99\columnwidth]{Fig6.pdf}
\caption{\emph{Cavity-molecule spectroscopy}. Normalized cavity transmission $|\mathcal{T}(\omega)|^2$ at resonance $\omega_c=\omega_\ell$ in strong coupling as a function of frequency and thermal occupation $\bar{n}$, (a) neglecting electron-phonon coupling and (b) including electron-phonon coupling (using a 3D density of states) for $\lambda=0.8$, $\lambda_{\text{e-ph}}^{\text{3D}}=0.03\,\text{ps}^2$. The white lines show cross sections of the transmission profile for different thermal occupations $\bar{n}$. The dashed red lines show the effective Rabi splitting $2g_{\text{eff}}$. Other parameters: $\nu=6\,\text{THz}$, $\omega_{\text{max}}=3\,\text{THz}$, $g=\nu/2, \Gamma_{\text{m}}=0.08\nu$, $\kappa=1\,\text{THz}$. (c) Comparison of transmission in the Purcell regime between a pure two-level system (TLS) and a molecule subject to electron-phonon and electron-vibron coupling for $\lambda=0.8$, $\lambda_{\text{e-ph}}^{\text{3D}}=0.2\,\text{ps}^2$ at a temperature of $T=10\,\mathrm{K}$. Other parameters: $\omega_{\text{max}}=3\,\mathrm{THz}=1.5\kappa$, $\nu=6\,\mathrm{THz}=3\kappa$, $g=0.35\kappa$.}
\label{fig6}
\end{figure*}
We will consider a cavity mode which is driven with amplitude $\eta_c$ and start with a set of coupled Langevin equations for the electric field operator $a$ as well as the polaron operator $\tilde{\sigma}(t)=\mathcal{D}^\dagger(t)\mathcal{B}^\dagger(t)\sigma(t)$ in a rotating frame at the laser frequency $\omega_\ell$:
\begin{subequations}
\begin{align}
\label{equationscavitya}
\dot{a}&=-[\kappa-i(\omega_\ell-\omega_c)]a-ig\sigma+\sqrt{2\kappa}A_{\text{in}},\\
\dot{\tilde{\sigma}}&=-(\gamma+i\tilde{\omega}_0)\tilde{\sigma}-i g \mathcal{D}^\dagger\mathcal{B}^\dagger a +\sqrt{2\gamma}\mathcal{D}^\dagger\mathcal{B}^\dagger{\sigma}_{\text{in}},
\label{equationscavityb}
\end{align}
\end{subequations}
where we defined the effective cavity input $A_{\text{in}}=\eta_c/\sqrt{2\kappa}+a_{\text{in}}$ with zero-average input noise $a_{\text{in}}$ but non-vanishing correlation $\braket{a_{\text{in}}(t)a_{\text{in}}^\dagger(t')}=\delta(t-t')$. The electronic transition is also affected by a white noise input $\sigma_{\text{in}}$ with non-zero correlation $\braket{\sigma_{\text{in}}(t)\sigma_{\text{in}}^\dagger(t')}=\delta(t-t')$. We can formally integrate Eq.~(\ref{equationscavityb}) and substitute it in Eq.~(\ref{equationscavitya}):
\begin{widetext}
\begin{equation}
\braket{\dot{a}}=-\left[\kappa-i(\omega_\ell-\omega_c)\right]\braket{a}-g^2\int_{-\infty}^\infty dt' e^{-\left(\gamma+i\tilde{\omega}_0\right)(t-t')} \Theta(t-t') \braket{\mathcal{D}(t)\mathcal{D}^\dagger (t')}\braket{ \mathcal{B}(t) \mathcal{B}^\dagger(t')} \braket{a(t')}+\eta_c\,,
\label{cavitysubs}
\end{equation}
\end{widetext}
where we took averages and assumed factorizability between the optical and vibronic/phononic degrees of freedom, which is valid if the timescales of vibrational relaxation and cavity decay are separated, e.g.~$\Gamma_{\text{m}}\gg \kappa$. We notice that the second term in Eq.~(\ref{cavitysubs}) represents a convolution, since the correlation functions $\braket{\mathcal{D}(t)\mathcal{D}^\dagger (t')}$ and $\braket{\mathcal{B}(t)\mathcal{B}^\dagger (t')}$ only depend on the time difference $|t-t'|$. Denoting $H(t-t')=e^{-(\gamma+i\tilde{\omega}_0)(t-t')}\Theta(t-t') \braket{\mathcal{D}(t)\mathcal{D}^\dagger (t')}\braket{ \mathcal{B}(t) \mathcal{B}^\dagger(t')}$, the normalized cavity transmission amplitude $\mathcal{T}(\omega)=\frac{\braket{A_{\text{out}}(\omega)}}{\braket{A_{\text{in}}(\omega)}}$ can be derived from input-output relations as
\begin{align}
\mathcal{T}(\omega)=\frac{\kappa}{g^2 H(\omega)-i\omega+\left[\kappa-i(\omega_\ell-\omega_c)\right]},
\end{align}
where $H(\omega)$ is the Fourier transform of $H(t)$ and describes the optical response of the molecule to the light field \textcolor{black}{including electron-phonon, electron-vibron and vibron-phonon coupling}. If we neglect electron-phonon interactions ($\lambda_{\text{e-ph}}=0$) and assume, for the sake of simplicity, Markovian decay for the vibration (this can also be extended to the non-Markovian regime, \textcolor{black}{see Section (\ref{molecularspectroscopy})}), $H(\omega)$ acquires the form
\begin{align}
H(\omega)=\sum_{n=0}^\infty \sum_{l=0}^n \frac{L(n)B(n,l)}{(\gamma+n\frac{\Gamma_{\text{m}}}{2})-i[({\omega\! -\! \omega_0})-({n\! -\! 2l})\nu]}.
\end{align}
Again, the above expression indicates a series of sidebands with strength determined by the Huang-Rhys factor $\lambda^2$ and dependent on the thermal occupation $\bar{n}$. In the case of large \textcolor{black}{vibrational relaxation} $\Gamma_{\text{m}}\gg \gamma$ (which corresponds to the typical experimental situation), however, those sidebands are suppressed and the cavity will mostly couple to the ZPL transition. We can then define an effective Rabi coupling for the ZPL
\begin{align}
\label{effective}
g_{\text{eff}}=g\sqrt{ f_{\text{FC}}\cdot f_{\text{DW}}}\,,
\end{align}
which takes into account the reduction of the oscillator strength due to Franck-Condon and Debye-Waller factors. In Figs.~\ref{fig6}(a) and (b) we plot the cavity transmission at resonance $\omega_c=\omega_\ell$ for increasing thermal occupation with and without the influence of phonons and find that the splitting of the polariton modes is well described by Eq.~(\ref{effective}). This also manifests itself in the transmission signal in the Purcell regime characterized by weak coupling $g\!<\!|\kappa-\gamma|/2$, but large cooperativity $C=g^2/(\kappa\gamma)\gg 1$ which is a more realistic regime in currently available single-molecule experiments \cite{wang2017coherent, wang2019turning}. In Figure \ref{fig6}(c) we compare the transmission of a pure two-level system (obtained by setting $\lambda=0$, $\lambda_{\text{e-ph}}=0$) with a molecule in a solid-state environment. Here the ZPL appears as a dip in the transmission profile with an increase in width $\tilde{\gamma}=\gamma(1+C_{\text{eff}})$ proportional to the effective cooperativity $C_{\text{eff}}=g_{\text{eff}}^2/(\kappa\gamma)$. As compared to the two-level system case, the coupling to vibrons and phonons leads to a reduction in both width and depth of the antiresonance. If the cavity bandwidth is comparable to the maximum phonon frequency $\omega_{\text{max}}$, the imprint of the phonon wing can also be detected in the transmission signal of the cavity, which is relevant for plasmonic scenarios characterized by large bandwidths \cite{chikkaraddy2016singlemolecule} [see Fig.~\ref{fig6}(c)]. The sidebands of vibrons typically lie at frequencies outside the bandwidth of the cavity $\nu\gg\kappa$ and are consequently unmodified.
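The transmission formula above is simple to evaluate once $H(\omega)$ is truncated to a finite number of sidebands. The following Python sketch (Markovian vibrational decay, no electron-phonon coupling, illustrative parameters) computes $|\mathcal{T}(\omega)|^2$ at cavity-laser resonance and compares the extracted polariton splitting with $2g_{\text{eff}}=2g\sqrt{f_{\text{FC}}}$:
\begin{verbatim}
# Minimal sketch (Python): cavity transmission with vibronic H(omega).
# Markovian vibrational decay, no phonons; values illustrative.
import numpy as np
from math import factorial
from scipy.special import comb

nu, lam, nbar = 6.0, 0.8, 0.5        # THz; thermal occupation
gamma, Gm, kappa, g = 1e-3, 0.08 * 6.0, 1.0, 3.0
w = np.linspace(-8.0, 8.0, 8001)     # omega - omega_0; omega_l = omega_c

H = np.zeros_like(w, dtype=complex)
for n in range(25):
    Ln = np.exp(-lam**2 * (1 + 2 * nbar)) * lam**(2 * n) / factorial(n)
    for l in range(n + 1):
        B = comb(n, l) * (nbar + 1)**(n - l) * nbar**l
        H += Ln * B / ((gamma + n * Gm / 2) - 1j * (w - (n - 2 * l) * nu))

T2 = np.abs(kappa / (g**2 * H - 1j * w + kappa))**2
f_FC = np.exp(-lam**2 * (1 + 2 * nbar))
split = w[np.argmax(T2 * (w > 0))] - w[np.argmax(T2 * (w < 0))]
print("splitting:", split, " vs 2*g_eff:", 2 * g * np.sqrt(f_FC))
\end{verbatim}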
\subsection{Vibrationally mediated polariton cross-talk}
As shown in the previous sections, vibronic and electron-phonon couplings reduce the oscillator strength of the molecule and lead to decoherence, and are consequently considered detrimental. However, such couplings can also lead to interesting new physics: in Ref.~\cite{reitz2019langevin} it was already shown that vibrations can couple upper and lower polaritonic states in a dissipative fashion, resulting in an effective transfer of population from the upper to the lower polariton and consequently an asymmetric cavity transmission profile with a suppressed upper polaritonic peak (at zero temperature $\bar{n}=0$) and dominant emission occurring from the lower polariton [this can also be seen in Figs.~\ref{fig6}(a) and (b)]. We derive here a more general expression for the population transfer between polaritons, showing that for a finite thermal occupation of the vibrational mode $\bar{n}$ a transfer from the lower to the upper polariton can also be activated. Diagonalizing the Jaynes-Cummings part of the Hamiltonian at resonance $\omega_c=\omega_0$ by introducing annihilation operators for the upper and lower polariton, $U=(a+\sigma)/\sqrt{2}$ and $L=(a-\sigma)/{\sqrt{2}}$, the Holstein part of the Hamiltonian [Eq.~(\ref{holsteinelvib})] gives rise to a vibration-mediated interaction between the upper and lower polariton
\begin{align}
H_{\text{int}}=\frac{\lambda\nu}{2}(U^\dagger L+L^\dagger U)(b^\dagger+b).
\end{align}
\begin{subequations}
This can be interpreted as an exchange interaction which is mediated by either the destruction or creation of a vibrational quantum. From this one can derive equations of motion for the populations of upper and lower polaritonic states:
\begin{align}
\dot{\mathcal{P}}_U&=-2\gamma_+ \mathcal{P}_U +\lambda\nu\Im\braket{U^\dagger L (b^\dagger+b)},\\
\dot{\mathcal{P}}_L&=-2\gamma_- \mathcal{P}_L +\lambda\nu\Im\braket{L^\dagger U (b^\dagger+b)},
\end{align}
\end{subequations}
with the hybridized decay rates of the upper and lower polaritonic states $\gamma_\pm = (\kappa+\gamma)/2$. In the limit of fast \textcolor{black}{vibrational relaxation} $\Gamma_{\text{m}}\gg\kappa$, this can be turned into a set of rate equations with an effective excitation transfer rate $\kappa_+$ from the upper to the lower polariton and a transfer rate $\kappa_-$ from the lower to the upper polariton (for the detailed calculation see Appendix \ref{polcrosstalk}):
\begin{subequations}
\begin{align}
\dot{\mathcal{P}}_U&=-(2\gamma_+ +\kappa_+)\mathcal{P}_U +\kappa_-\mathcal{ P}_L,\\
\dot{\mathcal{P}}_L&=-(2\gamma_-+\kappa_-) \mathcal{P}_L +\kappa_+\mathcal{P}_U.
\end{align}
\end{subequations}
Under the assumption of weak vibronic coupling as compared to the splitting between the upper and lower polaritonic states, $\lambda\nu\ll 2g$, the rates can be calculated to first order as (again we assume Markovian decay for the vibration for the sake of simplicity):
\begin{subequations}
\begin{align}
\label{polaritonrates}
\kappa_{+}=\frac{1}{4}\frac{\lambda^2\nu^2\Gamma_{\text{m}}(\bar{n}+1)}{(\Gamma_{\text{m}}/2)^2+\left(\omega_+-\omega_--\nu\right)^2},\\
\kappa_{-}=\frac{1}{4}\frac{\lambda^2\nu^2\Gamma_{\text{m}}\bar{n}}{(\Gamma_{\text{m}}/2)^2+\left(\omega_+-\omega_--\nu\right)^2}.
\end{align}
\end{subequations}
Energy transfer between the polaritons can consequently occur if the Rabi splitting $\omega_+-\omega_-\approx 2g$ is roughly equal to the vibrational frequency. In the case of zero temperature ($\bar{n}=0$), the above equations reduce to the results presented in Ref.~\cite{reitz2019langevin}, which used a Lindblad decay model for the vibration instead of a Brownian noise model. The ratio $\kappa_-/\kappa_+=\bar{n}/(\bar{n}+1)$, which can be inferred from the polariton peak heights (for normalized Lorentzians the height and width are connected) and which tends to unity in the limit $\bar{n}\gg 1$, can be seen as a direct measure of temperature, as it does not depend on any other parameters. While for single molecules the condition $\omega_+-\omega_-\approx \nu$ is difficult to achieve for vibrational modes in the THz range, it can be achieved in the collective strong coupling regime for many molecules, where the coupling grows as $g\sqrt{N}$, or for single molecules with phononic modes in the GHz regime. \textcolor{black}{We also note that, in a similar fashion to the linear coupling, quadratic electron-phonon and vibronic couplings also give rise to a vibrationally mediated polariton cross-talk with coupling $H_{\text{int}}^{\text{quad}}=\beta(U^\dagger L+L^\dagger U)Q^2/2$ (for a single vibrational mode). To this end, one could again derive effective rate equations for the quadratically mediated population transfer between the polaritons, in a similar fashion as for the linear coupling case.}
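A minimal Python sketch of the rates in Eqs.~(\ref{polaritonrates}) illustrates the thermometer property of the ratio $\kappa_-/\kappa_+$ (all parameter values are illustrative):
\begin{verbatim}
# Minimal sketch (Python): polariton transfer rates and their ratio.
import numpy as np

lam, nu, Gm = 0.1, 1.0, 0.05
delta = 0.0                      # (omega_+ - omega_-) - nu, resonant case
for nbar in (0.0, 0.5, 2.0):
    pref = 0.25 * lam**2 * nu**2 * Gm / ((Gm / 2)**2 + delta**2)
    kp, km = pref * (nbar + 1), pref * nbar
    print(nbar, kp, km, "ratio:", km / max(kp, 1e-30),
          " vs nbar/(nbar+1):", nbar / (nbar + 1))
\end{verbatim}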
\section{Discussions. Conclusions}
\indent We have provided a new approach based on quantum Langevin equations for the analysis of the fundamental quantum states of molecules and their coupling to their surroundings. These features, which lie at the heart of molecular polaritonics, go well beyond the electronic degrees of freedom and address phenomena such as electron-vibron and electron-phonon couplings as well as vibron-phonon interactions resulting in the relaxation of molecular vibrations. In particular, we have provided analytical expressions for spectroscopic quantities such as molecular absorption and emission inside and outside optical cavities in the presence of vibrations and phonons at any temperature. Moreover, we have presented a model of vibrational relaxation that takes into account the structure of the surrounding phonon bath and makes a distinction between Markovian and non-Markovian regimes. We have demonstrated that the vibrational relaxation of a molecule is crucially determined by the structure of the bath, especially by the maximum phonon frequency $\omega_{\text{max}}$. For two molecules embedded in the same crystalline environment, we have shown that the vibrational modes of the spatially separated molecules can interact with each other, resulting in collective dissipative processes that allow for weaker relaxation of collective vibrations. In the strong coupling regime of cavity QED, we have derived temperature-dependent transfer rates for vibrationally mediated cross-talk between upper and lower polaritonic states, i.e.~hybrid light-matter states that are normally uncoupled in cavity QED studies of atomic systems. In this work, we based our model on first-principles derivations of the relevant coupling strengths for a single nuclear coordinate of a molecule embedded in a 1D chain. However, the calculations could be readily extended to 3D scenarios and compared with ab initio calculations for real materials. We point out that our theory could also be relevant for vacancy centers in diamond, where similar interactions between electronic degrees of freedom and both localized and delocalized phonon modes occur \cite{albrecht2013coupling, londero2018vibrational}. In the future we want to address the influence of higher-order interactions such as quadratic electron-phonon and vibron-phonon couplings, which are known to play an important role at elevated temperatures. \textcolor{black}{It could also be interesting to consider the cavity modification of the nonradiative relaxation of molecules at conical intersections \cite{ulusoy2019modifying}}. We also plan to investigate the collective radiation states of dense molecular ensembles in confined electromagnetic environments, such as those occurring in organic semiconductor microcavities.
\textit{Note added.} Recently, we became aware of a related study \cite{clear2020phonon}. \\
\section{Acknowledgments}
We acknowledge financial support from the Max Planck Society and from the German Federal Ministry of Education and Research, co-funded by the European Commission (project RouTe), project number 13N14839 within the research program ``Photonik Forschung Deutschland'' (C.~S., V.~S.~and C.~G.).
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\newcommand{\subsectiono}[1]{\subsection{#1}\setcounter{equation}{0}}
\newcommand{\zeta}{\zeta}
\newcommand{\stackrel{>}{\sim}}{\stackrel{>}{\sim}}
\newcommand{\stackrel{<}{\sim}}{\stackrel{<}{\sim}}
\newcommand{\Lambda}{\Lambda}
\newcommand{i}{i}
\newcommand{j}{j}
\def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}}
\def{\hbox{ 0\kern-1.5mm 0}}{{\hbox{ 0\kern-1.5mm 0}}}
\def{\wh a}{{\widehat a}}
\def{\wh b}{{\widehat b}}
\def{\wh c}{{\widehat c}}
\def\check{\check}
\def{\wh d}{{\widehat d}}
\newcommand{\check z}{\check z}
\newcommand{{\bf i}}{{\bf i}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\bea}[1]{\begin{eqnarray}\label{#1} }
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\wt J}{\widetilde J}
\newcommand{{\bf N}}{{\bf N}}
\newcommand{b}{b}
\newcommand{\refb}{\refb}
\newcommand{{\rm u}}{{\rm u}}
\newcommand{{\dot{\alpha}}}{{\dot{\alpha}}}
\newcommand{{\dot{\beta}}}{{\dot{\beta}}}
\newcommand{{\dot{\gamma}}}{{\dot{\gamma}}}
\newcommand{\beta}{\beta}
\newcommand{V}{V}
\newcommand{G}{G}
\newcommand{e}{e}
\newcommand{{\cal P}}{{\cal P}}
\newcommand{\VV_{\rm G}}{{\cal V}_{\rm G}}
\newcommand{\VV^c_{\rm G}}{{\cal V}^c_{\rm G}}
\usepackage{bm}
\usepackage[table]{xcolor}
\def\rpnote#1{{\color{magenta} #1}}
\def\arnote#1{{\color{blue} #1}}
\def\asnote#1{{\color{red} #1}}
\newcommand{{\bf M}}{{\bf M}}
\newcommand{\scalar}{{\cal V}_{\rm S}}
\newcommand{\wscalar}{\widetilde{\cal V}_{\rm B}}
\newcommand{\fermion}{{\cal V}_{\rm F}}
\newcommand{\wfermion}{\widetilde{\cal V}_{\rm F}}
\newcommand{\wt\Sigma}{\widetilde\Sigma}
\newcommand{\wt\Sigma^c}{\widetilde\Sigma^c}
\newcommand{(4)}{(4)}
\newcommand{\cL} {\{\hskip -4pt\{}
\newcommand{\cR} {\}\hskip -4pt\}}
\newcommand{\sL} {[\hskip -1.5pt[}
\newcommand{\sR} {]\hskip -1.5pt]}
\newcommand{{\overline{\RR}}}{{\overline{{\cal R}}}}
\newcommand{\tilde\omega}{\tilde\omega}
\def{\bf j}{{\bf j}}
\def{\bf N}{{\bf N}}
\newcommand{\mu}{\mu}
\newcommand{a}{a}
\def\textcolor{red}{\textcolor{red}}
\def\figpicardthree{
\def0.8{0.8}
\ifx0.8\undefined\def0.8{1}\fi
\unitlength 0.8 mm
\begin{picture}(130,80)(0,0)
\linethickness{0.1mm}
\put(70,0){\line(0,1){80}}
\linethickness{0.1mm}
\put(10,40){\line(1,0){120}}
\linethickness{0.3mm}
\linethickness{0.3mm}
\linethickness{0.3mm}
\linethickness{0.3mm}
\linethickness{0.3mm}
\linethickness{0.3mm}
\linethickness{0.5mm}
\qbezier(70,40)(80.38,40)(90,40)
\qbezier(90,40)(99.62,40)(110,40)
\qbezier(110,40)(120.44,40)(125.25,40)
\qbezier(125.25,40)(130.06,40)(130,40)
\linethickness{0.5mm}
\qbezier(70,40)(70,37.4)(70,35.59)
\qbezier(70,35.59)(70,33.79)(70,32.5)
\qbezier(70,32.5)(70.02,31.23)(68.81,27.62)
\qbezier(68.81,27.62)(67.61,24.02)(65,17.5)
\qbezier(65,17.5)(62.41,10.99)(60,6.78)
\qbezier(60,6.78)(57.59,2.57)(55,0)
\put(95,40){\makebox(0,0)[cc]{$\times$}}
\put(95,45){\makebox(0,0)[cc]{$\psi^0=1/a$}}
\put(95,65){\makebox(0,0)[cc]{complex $\psi^0$-plane}}
\put(60,45){\makebox(0,0)[cc]{$\psi^0=0$}}
\put(80,39.8){\makebox(0,0)[cc]{$\Rightarrow$}}
\put(120,39.8){\makebox(0,0)[cc]{$\Rightarrow$}}
\put(70,40){\makebox(0,0)[cc]{$\times$}}
\put(70,35){\makebox(0,0)[cc]{$\Uparrow$}}
\end{picture}
}
\begin{document}
\baselineskip 24pt
\begin{center}
{\Large \bf Normalization of D-instanton Amplitudes}
\end{center}
\vskip .6cm
\medskip
\vspace*{4.0ex}
\baselineskip=18pt
\centerline{\large \rm Ashoke Sen}
\vspace*{4.0ex}
\centerline{\large \it Harish-Chandra Research Institute, HBNI}
\centerline{\large \it Chhatnag Road, Jhusi,
Allahabad 211019, India}
\vspace*{1.0ex}
\centerline{\small E-mail: sen@hri.res.in}
\vspace*{5.0ex}
\centerline{\bf Abstract} \bigskip
D-instanton amplitudes suffer from various infrared divergences associated with tachyonic
or massless open string modes, leading to ambiguous contributions to string amplitudes.
It has been shown previously
that string field theory can resolve these ambiguities and lead
to unambiguous expressions for D-instanton contributions to string amplitudes, except for an
overall normalization constant that remains undetermined. In this paper we show
that string field theory, together with the world-sheet description of the amplitudes,
can also fix this normalization constant. We apply our analysis to the
special case of two dimensional string theory, obtaining results in
agreement with the matrix model
results obtained by Balthazar, Rodriguez and Yin.
\vfill \eject
\tableofcontents
\sectiono{Introduction} \label{s1}
D-instantons give a class of non-perturbative contributions to string amplitudes.
One characteristic of these contributions is the presence of an
overall multiplicative factor $e^{-C/g_s}$
where $g_s$ is the closed string coupling and $C$ is a
constant. Besides this factor, the amplitudes admit usual perturbation
expansion in powers of $g_s$. The contribution to an amplitude at any given order
in $g_s$ can be computed
using the standard world-sheet approach by including Riemann surfaces with boundaries
ending on the D-instanton, but at each order
one encounters certain infra-red divergences\cite{9407031,9701093,1907.07688}
that render the
amplitudes ambiguous. At any given order,
these ambiguities can be
encoded in a set of undetermined constants.
String field theory\cite{wittensft,9206084,9705241,1703.06410,1905.06785,okawa} provides an
unambiguous procedure for determining these constants, by identifying the
physical origin of these infrared divergences and rectifying them based on this
understanding\cite{1908.02782,2002.04043,2003.12076,2012.11624}.
So far this procedure
has been applied to two dimensional
string theory, for which there is a dual matrix model description that can be used to check
the results.
However previous analysis left one constant undetermined -- namely the overall
normalization of the D-instanton amplitude. Formally this is given by the exponential of the
annulus amplitude, with D-instanton boundary condition at the two boundaries and no
other vertex operator insertion. However the annulus amplitude is divergent due to the
presence of massless and tachyonic open string modes on the D-instanton.
In conventional string perturbation theory,
such diagrams are part of bubble diagrams and
drop out in the computation of physical amplitudes. However for D-instanton amplitudes
the situation is somewhat different since the D-instanton contribution to the amplitude
has to be first added to the perturbative amplitude and then the sum needs to be divided
by the sum of perturbative and D-instanton contribution to
bubble diagrams. Therefore the overall normalization is physically relevant,
and one expects that it should be possible to compute this in string theory. Since string
field theory is capable of making sense of infrared divergences in the amplitudes, the
natural expectation would be that string field theory should be able to give an
unambiguous result for the normalization constant.
However, when one tries to compute this using string field theory, which in this
case is a theory of open and closed strings, one finds that there is no internal consistency
requirement within string field theory that can be used to fix this normalization, since this
can be changed by adding a field independent constant to the string field theory action
that does not violate any constraint coming from the requirement of gauge invariance.
To overcome this problem, we shall take the viewpoint that the world-sheet approach already
fixes the normalization as the exponential of the annulus partition function, and the job of string
field theory is to simply give physical interpretation of the divergences of the amplitude and
render them finite based on this interpretation.
We show that the world-sheet result may be regarded as the gauge fixed version of a path
integral in string theory with a specific normalization,
and the divergences that we encounter arise due to breakdown
of the gauge choice. However the `gauge invariant' form of the path integral, expressed
as an integral over the full classical string field divided by the volume of the gauge group,
yields unambiguous result.
We apply this procedure
to the case of
two dimensional string theory, and find that the normalization of the one instanton amplitude
determined this way agrees with the results of the matrix model computed in \cite{1907.07688}
following the general formalism developed in \cite{9111035}.
The rest of the paper is organized as follows. In \S\ref{s2} we
express the exponential of the annulus partition function as a path integral over string fields
in the Siegel gauge. At this stage the path integral remains singular due to the existence of
zero modes, reflecting the singularity of
the annulus partition function.
In \S\ref{s3} we
trace these singularities to the breakdown of the Siegel gauge, and show that we can get
a finite result for the path integral by rewriting it in a `gauge invariant' form.
In \S\ref{s3.5} we calculate the multiplier factor -- the multiple of the steepest descent contour
of the D-instanton that forms part of the actual integration contour of the full string theory, and
show that after multiplying the exponential of the annulus partition function, computed in
\S\ref{s3}, by this factor, we get agreement
with
the matrix model result. In \S\ref{s4} we discuss possible application of our analysis to other
systems. In appendix \ref{sa} we show how the central result used in our analysis -- the
equivalence of the gauge invariant version of the path integral over string fields and the
Siegel gauge fixed version of the same path integral, can be proved directly using the
standard Faddeev-Popov approach instead of the abstract results of the Batalin-Vilkovisky
(BV) formalism\cite{bv1,bv2,henn,bocc1,bocc2,thorn,9205088}.
\sectiono{The normalization constant from the Siegel gauge path integral} \label{s2}
Our goal is to compute the normalization constant ${\bf N}$ appearing in the D-instanton
amplitudes. In the world-sheet description it is given by:
\begin{equation} \label{e1}
{\bf N} = \zeta\, \exp[{\cal A}]\, ,
\end{equation}
where ${\cal A}$ is the annulus partition function:
\begin{equation} \label{e2n}
{\cal A} = \int_0^\infty {dt\over 2t} \, Tr(e^{-2\pi t L_0})\, .
\end{equation}
Here $Tr$ denotes trace over all states of the open string projected into the Siegel gauge by
the projection operator $b_0c_0$,
weighted by $(-1)^F$ where $-(-1)^F$ denotes the grassmann
parity of the vertex operator corresponding to the state. The extra minus sign multiplying
$(-1)^F$ is a reflection of the fact that bosonic (fermionic) open string modes correspond
to grassmann odd (even) states in the world-sheet theory.
$\zeta$ in \refb{e1} is the multiplier factor that depends on how the
steepest descent contour associated with the D-instanton fits inside the actual
integration contour\cite{1206.6272,1511.05977,1802.10441}.
In particular $\exp[{\cal A}]$ represents the one loop contribution to the
path integral from the full steepest descent contour passing through the instanton
solution and
$\zeta$ reflects the multiple of the steepest descent contour that forms part of the
actual integration contour.
We shall see for example that in the two dimensional string theory, $\zeta=1/2$
up to a sign.\footnote{In string field theory, \refb{e1}
may be justified by demanding
that at the tachyon vacuum\cite{9911116}
${\bf N}$ must be 1 so that we get the usual
perturbative closed string amplitudes. Since the boundary state vanishes at the tachyon vacuum\cite{0810.1737},
we have ${\cal A}=0$ and therefore $e^{\cal A}=1$. Furthermore it will be seen in
\S\ref{s3.5}, Fig.~\ref{figpicardthree} that $\zeta=1$ at the perturbative vacuum. Therefore
\refb{e1} should not have any additional factor.}
The constant ${\bf N}$ given in \refb{e1}
is the overall multiplicative factor that appears in the instanton
induced effective action of the closed string fields\cite{2012.00041}.
This is related to the normalization constant ${\cal N}$ introduced
in \cite{1907.07688,1912.07170}, appearing
as a multiplicative factor in the S-matrix, via the relation ${\cal N}=i\,{\bf N}$.
In our analysis, we shall not be careful in fixing the
sign of ${\bf N}$, since this will be fixed at the end using separate considerations.
We can express \refb{e2n} as
\begin{equation}\label{e3}
{\cal A} = \int_0^\infty {dt\over 2t} \left[\sum_i e^{-2\pi t h^b_i} -\sum_j e^{-2\pi t h^f_j}\right]\, ,
\end{equation}
where $\{h^b_i\}$ and $\{h^f_j\}$ are the $L_0$ eigenvalues of the grassmann odd
and the grassmann even
states of the world-sheet CFT.
If we assume that the total number of bosonic modes equals
the total number of fermionic modes so that the integrand is finite as $t\to 0$, and furthermore
that the $h^b_i$ and $h^f_j$ are positive so that the integrand falls off exponentially as
$t\to\infty$, then the integral \refb{e3} is finite. In this case it gives the result
\begin{equation}
{\cal A}={1\over 2} \ln {\prod_j h^f_j\over \prod_i h^b_i}\, .
\end{equation}
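This follows from the Frullani identity (a standard result which we quote for completeness),
\begin{equation}
\int_0^\infty {dt\over t}\, \left(e^{-\alpha t}-e^{-\beta t}\right) = \ln{\beta\over \alpha}\, ,
\qquad \alpha,\beta>0\, ,
\end{equation}
applied pairwise with $\alpha=2\pi h^b_i$ and $\beta=2\pi h^f_j$, the factors of $2\pi$ cancelling in the ratio.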
Substituting this into \refb{e1}, we get
\begin{equation}
{\bf N} = \zeta\, \sqrt{\prod_j h^f_j\over \prod_i h^b_i}\, .
\end{equation}
In the system that we shall analyze,
$h^f_j$'s come in pairs of equal values so that we can write this as
\begin{equation} \label{e8pre}
{\bf N} = \zeta\, {\prod'_j h^f_j\over \sqrt{\prod_i h^b_i}}\, ,
\end{equation}
where $\prod'_j$ corresponds to the product running over only one
member for each pair.
We can express this as the result of integration over the bosonic variables $b_i$ and
fermionic variables $f_j,\widetilde f_j$ as follows:
\begin{equation}\label{e8}
{\bf N} =\zeta\,
\int \prod_i \, {db_i \over \sqrt{2\pi}}
\prod_j df_j \, d\widetilde f_j \, \exp\left[-{1\over 2} \sum_i h^b_i b_i^2
+ {\sum_j}' \, h^f_j \, \widetilde f_j \, f_j\right] \, .
\end{equation}
Equality of \refb{e1} and \refb{e8} is an identity when all the $h^b_i$'s and $h^f_j$'s are positive,
but we shall take \refb{e8} to be the defining expression for ${\bf N}$ even when this condition
fails.
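As a quick numerical sanity check (a minimal sketch with illustrative eigenvalues; the Grassmann integrals in \refb{e8} are evaluated analytically and give $\prod'_j h^f_j$), one may verify that $e^{\cal A}$ computed from \refb{e3} agrees with the Gaussian representation \refb{e8} when all eigenvalues are positive:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hb = [0.5, 2.0]   # sample bosonic L0 eigenvalues (all positive)
hf = [1.5, 1.5]   # sample fermionic L0 eigenvalues (a degenerate pair)

# proper-time representation of A
f = lambda t: (sum(np.exp(-2*np.pi*t*h) for h in hb)
               - sum(np.exp(-2*np.pi*t*h) for h in hf)) / (2*t)
A = quad(f, 0, np.inf)[0]
print(np.exp(A))   # 1.5 = sqrt(prod hf / prod hb)

# bosonic Gaussian integrals give prod_i 1/sqrt(h^b_i);
# the Grassmann pair gives prod'_j h^f_j = 1.5 analytically
bos = np.prod([quad(lambda b: np.exp(-0.5*h*b**2), -np.inf, np.inf)[0]
               / np.sqrt(2*np.pi) for h in hb])
print(1.5 * bos)   # 1.5, matching exp(A)
\end{verbatim}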
In particular, we shall apply this formalism to the D-instanton system, for which some of the
$L_0$ eigenvalues vanish and/or are negative.
A justification for this may be given as follows.
Instead of studying open strings on a single D-instanton, we can
take a system of two D-instantons separated along the Euclidean
time direction and analyze the states of the open string stretched between the pair of
D-instantons. In this case $L_0$ will get a non-vanishing contribution from the tension of
the stretched open string and the manipulations carried out above will be well defined for
sufficiently large separation.
We can recover the original system of interest by analytic continuation of this
result to zero separation and
using the fact that in this limit the spectrum of open strings with two ends lying on
different D-instantons coincides with the spectrum of open strings with both ends
lying on the same D-instanton.
Of course \refb{e8} is not
well defined in this limit due to the appearance of zero eigenvalues in the bosonic
and fermionic
sectors, and so it does not lead to a finite unambiguous result for ${\bf N}$ at this stage.
However, we shall see in \S\ref{s3} that it is possible to trace these zero eigenvalues to
a singular gauge choice and to transform \refb{e8} into the finite, unambiguous
result \refb{egaugeinv} using insights from string field theory.
In \refb{e8},
the variables $b_i$ may be interpreted as
the bosonic open string fields on the D-instanton, the variables $f_i,\widetilde f_i$ may be
interpreted as the
fermionic open string
fields on the D-instanton and the argument of the exponential may be
interpreted as the
quadratic part of the action of the open string field theory in the
Siegel gauge.
To see how this arises, we now review some basic aspects of string field theory.
The off-shell open string field describing the degrees of freedom of a D-instanton
is taken to be an arbitrary
element $|\Psi\rangle$ of $H$ --
the vector space of states of the open string, including
matter and ghost excitations. Let $\{|\phi_r\rangle\}$ be the set of basis
states in $H$. Then we can expand $|\Psi\rangle\in H$ as
\begin{equation}\label{esftexpansion}
|\Psi\rangle =\sum_r \chi^r |\phi_r\rangle\, .
\end{equation}
$\{\chi^r\}$'s are the degrees of freedom over which the path integral
is to be performed after suitable gauge fixing.
Even though we have referred to the $\chi^r$'s as fields, they
are actually zero dimensional fields -- ordinary variables -- since
on the D-instanton the open strings do not carry any continuous momentum labels.
Therefore it is more appropriate to call them modes.
$\chi^r$ has even (odd) grassmann parity if the ghost number of $\phi_r$ is
odd (even).
The kinetic term of the BV master action of string
field theory takes the form:
\begin{equation}\label{esftaction}
S=-{1\over 2} \langle \Psi|Q_B|\Psi\rangle\, ,
\end{equation}
where $Q_B$ is the world-sheet BRST operator. The minus sign in front of the
action is unusual, but has been introduced keeping in mind that we shall be using a
convention in which the Euclidean path integral is weighted by $e^S$.
In the BV formalism
the open string modes
multiplying states of ghost number $\le 1$ are regarded
as fields and the modes multiplying
states of ghost number $\ge 2$ are regarded as antifields.
If we introduce basis states $\{|\varphi_r\rangle\}$ in the ghost number
$\le 1$ subspace and $\{|\varphi^r\rangle\}$ in the ghost number $\ge 2$ subspace such that
\begin{equation} \label{eortho}
\langle \varphi^r |\varphi_s\rangle=\delta^r_s=\langle\varphi_s|\varphi^r\rangle, \qquad
\langle\varphi^r|\varphi^s\rangle
=0, \qquad \langle \varphi_r|\varphi_s\rangle=0\, ,
\end{equation}
and expand the string field as,
\begin{equation} \label{edefantifield}
|\Psi\rangle = \sum_r( \psi^r |\varphi_r\rangle + \psi_r |\varphi^r\rangle)\, ,
\end{equation}
then we call $\psi^r$ a field and $\psi_r$ the conjugate anti-field up to a sign.
The path integral
is carried out over a Lagrangian submanifold.
For our analysis it will be sufficient to consider a special class of
Lagrangian submanifolds in which,
for each pair $(\psi^r, \psi_r)$, we set either $\psi^r$ to 0 or $\psi_r$ to 0.
The path integral can be shown to be
formally independent of the choice of the Lagrangian submanifold.
The Siegel gauge corresponds to the choice of the Lagrangian submanifold
in which we impose the condition:
\begin{equation} \label{esiegelgauge}
b_0|\Psi\rangle=0\, .
\end{equation}
In this gauge the action \refb{esftaction} takes the form:
\begin{equation} \label{eactiongf}
S_{g.f.}= -{1\over 2} \langle \Psi|c_0 L_0|\Psi\rangle \,.
\end{equation}
If we choose the basis states $\{|\phi^{(n)}_r\rangle\}$ of ghost number $n$ in the Siegel
gauge, satisfying
\begin{equation}
b_0|\phi^{(n)}_r\rangle =0, \qquad \langle\phi^{(2-n)}_r|c_0|\phi^{(n)}_s\rangle =\delta_{rs}
\quad \hbox{for $n\le 1$}\, ,
\end{equation}
then by expanding $|\Psi\rangle$ in this basis and substituting in the action
\refb{eactiongf}, we recover the exponent in \refb{e8} if we identify the variables
$b_i,f_i$ and $\widetilde f_i$ as the coefficients of expansion of $|\Psi\rangle$ in this basis.
This shows that \refb{e8} may be given an interpretation as path integral over the
open string fields in the Siegel gauge.
Note however that \refb{e8} comes with a specific normalization
of the integration measure that
will be important for us. String field theory, by itself, cannot fix the overall normalization
of the measure, since this corresponds to adding a constant to the string field theory action,
and the requirement that the action satisfies the BV master equation
does not fix this constant.
\sectiono{`Gauge invariant' path integral} \label{s3}
Let us now focus on the specific case of the (1,1) D-instanton in two dimensional string theory.
In this case we have\cite{1912.07170}:
\begin{equation}
{\cal A} = \int_0^\infty {dt\over 2t} \, \left( e^{2\pi t}-1\right)\, .
\end{equation}
Comparing this with \refb{e3} we see that the contributions from all states with $L_0>0$
cancel between bosonic and fermionic states.
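Indeed, in terms of the Siegel gauge modes with $L_0\le 0$ that will be displayed in \refb{epsir} below, the two bosonic modes, the tachyon $\psi^0$ with $L_0=-1$ and $\psi^3$ with $L_0=0$, contribute $e^{2\pi t}+1$ to the integrand, while the two fermionic modes $\psi_1$ and $\psi^2$, both with $L_0=0$, contribute $-2$, reproducing $e^{2\pi t}-1$.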
It follows that in the path integral expression \refb{e8} for ${\bf N}$, we can drop the
integration over all states with $L_0>0$. Therefore we shall introduce a restricted string
field $|\Psi_R\rangle$
given by a linear combination of basis states with $L_0\le 0$. Before gauge fixing,
$|\Psi_R\rangle$ has the following expansion:
\begin{eqnarray}\displaystyle \label{epsir}
|\Psi_R\rangle &=& \psi^0 c_1|0\rangle + \psi_0 c_0c_1|0\rangle +
\psi^1 c_0|0\rangle + \psi^2|0\rangle + \psi_1 \, c_{-1} c_1|0\rangle + \psi_2 \, c_{-1}
c_0 c_1|0\rangle \nonumber \\ && + \, \psi^3 c_1\alpha_{-1}|0\rangle + \psi_3 c_0c_1 \alpha_{-1} |0\rangle\, ,
\end{eqnarray}
where $|0\rangle$ is the SL(2,R) invariant vacuum, $c_n$, $b_n$ are the usual ghost oscillators
and
$\alpha_m$ are the oscillators associated with the Euclidean time coordinate $X$, satisfying
$[\alpha_m,\alpha_n]=m\, \delta_{m+n,0}$.
In the $\alpha'=1$
unit the $X$'s satisfy the operator product expansion\footnote{We shall use the standard
doubling trick in which we regard $\partial X$ as an analytic function over the full complex plane, with
the understanding that $\partial X(z)$ for $z$ in the lower half plane actually represents
$-\bar\partial X(z)$. \label{fo1}}
\begin{equation}\label{exxope}
\partial X(z) \, \partial X(w) = -{1\over 2 (z-w)^2}\, .
\end{equation}
This leads to the following state operator correspondence:
\begin{equation} \label{exxope1}
c_1\alpha_{-1}|0\rangle = i\, \sqrt 2 \, c(0) \, \partial X(0)|0\rangle\, .
\end{equation}
The basis states in which we have expanded the
string field in \refb{epsir}
are normalized according to \refb{eortho} provided we choose:
\begin{equation}
\langle 0| c_{-1} c_0 c_1|0\rangle =1\, .
\end{equation}
In this case the $\{\psi^r\}$'s label
fields and the $\{\psi_r\}$'s label the conjugate anti-fields in the BV formalism.
In the Siegel gauge, the modes that survive are $\psi^0,\psi_1,\psi^2$ and $\psi^3$. Of these,
$\psi^0$ and $\psi^3$ are bosonic modes and $\psi_1$ and $\psi^2$ are fermionic
modes.
Therefore, \refb{e8} may now be written as:
\begin{equation}\label{esigact}
{\bf N} = \zeta\, \int {d\psi^0\over \sqrt{2\pi}} \int {d\psi^3\over \sqrt{2\pi}} \,
d\psi_1 \, d\psi^2 \, e^S\, .
\end{equation}
However, as
discussed in \cite{2002.04043,2012.11624},
this is a singular gauge choice due to the presence of the zero mode
$\psi_1$.
We avoid this problem by choosing the `gauge' in which $\psi_1=0$ but
$\psi^1\ne 0$. In this case all the anti-fields are set to zero and we
integrate over all the field modes $\psi^0,\psi^1,\psi^2,\psi^3$.\footnote{Consequences
of this for tree amplitudes have been discussed in
\cite{1912.05463,2002.04043,2006.16270}.}
Now
we have three bosonic modes $\psi^0,\psi^1,\psi^3$ and one fermionic mode
$\psi^2$.
The action \refb{esftaction} now takes the form:
\begin{equation} \label{eresaction}
S = - \left[ -{1\over 2} (\psi^0)^2 - (\psi^1)^2
\right]\, .
\end{equation}
This
shows that we still have a pair of zero modes -- one bosonic zero mode $\psi^3$ and
one fermionic zero mode $\psi^2$ over which we need to integrate.
Since $\psi^2$ is a grassmann odd variable,
naively the integral would vanish. However, the mode
$\psi^2$ is the ghost field associated with the string field theory gauge transformation
generated by $\theta|0\rangle$ for a parameter $\theta$, and the integration over $\psi^2$
can be interpreted as division by $\int d\theta$, with the integral running over the volume of
the gauge group\cite{2002.04043,2012.11624}.
This allows us to express \refb{esigact} as,
\begin{equation} \label{egaugeinv}
{\bf N} =\zeta\, \int {d\psi^0\over \sqrt{2\pi}} \int {d\psi^3\over \sqrt{2\pi}}
\, d\psi^1 \, e^S \Bigg/ \int d\theta
\, .
\end{equation}
In the BV formalism the equivalence of the `gauge invariant' form
of the path integral, where we set all the anti-fields to zero,
to the Siegel gauge fixed version,
is usually proved
at the level of correlation functions\cite{bocc1,bocc2,thorn} for which the overall normalization of the path
integral cancels. Since the normalization is important for us, we have shown
in appendix \ref{sa} that
the equality of \refb{esigact} and \refb{egaugeinv}
can be understood using the
standard Faddeev-Popov formalism.
We shall now show that \refb{egaugeinv} leads
to a finite, unambiguous result.
Let us first carry out the integral over $\psi^0$ and
$\psi^1$ by taking the integration contours to be the steepest descent contours.
Both of these
lie along the imaginary axis, and the final result takes the form:
\begin{equation} \label{enewN}
{\bf N} = - \zeta\, {1\over \sqrt 2} \, \int d\psi^3 \Bigg/ \int d\theta\, .
\end{equation}
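Explicitly, rotating the contours as $\psi^0=iu$ and $\psi^1=iv$ with $u,v$ real, the two Gaussian integrals give
\begin{equation}
\int_{-i\infty}^{i\infty} {d\psi^0\over \sqrt{2\pi}}\, e^{{1\over 2}(\psi^0)^2} = i\, , \qquad
\int_{-i\infty}^{i\infty} d\psi^1\, e^{(\psi^1)^2} = i\, \sqrt{\pi}\, ,
\end{equation}
and combining the product $i\times i\sqrt\pi=-\sqrt\pi$ with the factor of $1/\sqrt{2\pi}$ accompanying $d\psi^3$ in \refb{egaugeinv} yields the prefactor $-1/\sqrt 2$ in \refb{enewN}.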
The minus sign in \refb{enewN} is the result of the product of two $i$'s, one from having
to integrate the tachyon $\psi^0$ along the imaginary axis and the other from having to
integrate $\psi^1$ along the imaginary axis. However in the open string field theory, the
reality condition on the mode $\psi^1$ is $(\psi^1)^*=-\psi^1$\cite{9705038},
indicating that we should
carry out the path integral over the variable $i\psi^1$ instead of $\psi^1$.
This would remove
one factor of $i$ from \refb{enewN}. There is however a similar factor of $i$ involved in the
integration over
the gauge
transformation parameter $\theta$ in \refb{egaugeinv}. These effects cancel each other,
and so we shall proceed with \refb{enewN} without removing any factor of $i$.
This has been discussed in footnote \ref{fo3}.
The mode $\psi^3$ is related by field redefinition to
the collective mode corresponding to the freedom of translating
the D-instanton along the Euclidean time direction. If $\widetilde\phi$ denotes the
correctly normalized collective mode that measures the amount of translation along
the time coordinate, then the dependence of any amplitude on
$\widetilde \phi$ should be of the form $e^{-i\omega\widetilde\phi}$ where $\omega$ is the
total energy carried by all the external closed string states. Therefore the relation
between $\psi^3$ and $\widetilde \phi$ may be found by studying the coupling of $\psi^3$ to
an amplitude and comparing this with the expected coupling of $\widetilde\phi$ to the same
amplitude. Let us begin with a disk amplitude of a set of closed string states carrying
energies $\omega_1,\omega_2,\cdots$.
Since the vertex operator of the state associated with $\psi^3$ is given by
$i\, \sqrt 2 \, c \, \partial X$, inserting this into this amplitude will correspond to inserting the
integrated vertex operator
\begin{equation} \label{e26}
i\, \sqrt 2 \,\int \partial X(z) dz\, .
\end{equation}
Using the operator product expansion \refb{exxope}, and recalling that when we use the
doubling trick mentioned in footnote \ref{fo1}, insertion of a vertex operator
$e^{-i\omega_k X(w_k)}$ is implicitly accompanied by its image $e^{i\omega_k X(\bar w_k)}$,
we get
\begin{eqnarray}\displaystyle \label{eopecoll}
&& \left\langle i\, \sqrt 2 \,\int \partial X(z) dz\, \prod_k e^{-i\omega_k X(w_k)} \right\rangle
\nonumber\\
&=& i\, \sqrt 2 \,\sum_j \int dz \, \left[
\left\{ {i\omega_j \over 2 (z-w_j)} - {i\omega_j \over 2 (z-\bar w_j)}\right\}
\, \left\langle\prod_k e^{-i\omega_k X(w_k)} \right\rangle\right]
\nonumber \\
&=& -2\pi i \, {\omega\over \sqrt 2} \, \left\langle \prod_k e^{-i\omega_k X(w_k)}\right\rangle,
\qquad \omega\equiv\sum_j\omega_j \, .
\end{eqnarray}
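In arriving at the last line we have noted that the $z$ integral runs along the boundary of the upper half plane, where only the difference of the two pole terms is integrable; closing the contour in the upper half plane picks up the residue at $z=w_j$,
\begin{equation}
\int dz\, \left[{1\over z-w_j} - {1\over z-\bar w_j}\right] = 2\pi i\, .
\end{equation}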
Since we have not included any dependence on the string coupling in the quadratic terms
in the action, the open string vertex operator \refb{e26} should also carry
a factor of the open string coupling
$g_o\propto
\sqrt{g_s}$.
The precise relation between $g_o$ and $g_s$ was determined in \cite{9911116}
and takes the form $g_o^2 = g_s/ (2\pi^2)$
in the convention in which the instanton action
is given by $1/g_s$.
We shall proceed
for now by ignoring the factors of $g_o$ since $g_o$ (in)dependence of ${\bf N}$
has already been understood in \cite{private}.
At the end of this
section we shall briefly discuss $g_o$ dependence of different contributions to ${\bf N}$ and
show how they cancel.
\refb{eopecoll} now shows that
coupling of $\psi^3$ to an amplitude with closed string states carrying total
energy $\omega$ generates a factor of $-\sqrt 2\pi i\omega$. On the other hand, since the
dependence of an amplitude on the collective coordinate $\widetilde \phi$ is of the form
$e^{-i\omega\widetilde\phi}= (1-i\omega\widetilde\phi+\cdots)$,
the coupling of $\widetilde\phi$ to an amplitude with closed string state
carrying
energy $\omega$ generates a factor of $-i\omega$. This gives the identification
of $-\sqrt 2\pi i\omega\psi^3$ with $-i\omega\widetilde\phi$, in agreement with the results of
\cite{9911116}. Therefore in \refb{enewN} we can
make the replacement:
\begin{equation}\label{e312}
d\psi^3 = {1\over \sqrt 2 \pi} \, d\widetilde\phi\, .
\end{equation}
Integration over the collective mode
$\widetilde\phi$ generates the usual energy conserving delta function $2\pi\delta(\omega)$
that is
part of any amplitude and is not included in the normalization constant
${\bf N}$. Therefore we can now express \refb{enewN} as
\begin{equation} \label{enewerN}
{\bf N} = -\zeta\, {1\over \sqrt 2} \, {1\over \sqrt 2 \pi} \Bigg/ \int d\theta = -\zeta\, {1\over 2\pi}
\Bigg/ \int d\theta\, .
\end{equation}
We now turn to the evaluation of $\int d\theta$.
Physically, this gauge transformation is related by field redefinition to the
rigid U(1) gauge transformation that multiplies any state of the
open string, stretched from the
D-instanton to a second D-instanton, by $e^{i\widetilde\theta}$. Since $\widetilde\theta$
has period $2\pi$, in order
to determine the range of the $\theta$ integral, we need to find the relation between $\theta$
and $\widetilde\theta$. This in turn can be determined by comparing the string field theory gauge
transformation law generated by $\theta$ to the rigid U(1) gauge transformation with
parameter $\widetilde\theta$ for any state of the open string that connects the D-instanton to the
second D-instanton. This is achieved as follows:
\begin{enumerate}
\item As in \cite{2012.11624}, we shall work with a particular mode
$\xi$ that multiplies the vacuum state
$|0\rangle$
of the open string stretched
between the two D-instantons but the relation between $\theta$ and $\widetilde\theta$
is independent of this choice.
The conjugate
anti-field $\xi^*$ of $\xi$
will multiply the state $c_{-1} c_0 c_1|0\rangle$ of the open string that connects the
second D-instanton to the original D-instanton.
\item The vertex operators associated with the modes
$\xi$ and $\xi^*$ are accompanied by Chan-Paton
factors $\pmatrix{0 & 1\cr 0 & 0}$ and $\pmatrix{0 & 0\cr 1 & 0}$ respectively,
and
the open string mode $\psi^2$, that connects the original D-instanton to itself, carries
Chan-Paton factor $\pmatrix{1 & 0\cr 0 & 0}$.
\item It follows from the gauge transformation laws of the string field theory that
the gauge transformation of $\xi$ under the
gauge transformation generated by $\theta$
is given by the second derivative of the action with
respect to $\xi^*$ and $\psi^2$.
The leading contribution comes from the
$\xi$-$\xi^*$-$\psi^2$ coupling in the action arising from the disk amplitude.
Since two of the three vertex operators -- those associated with $\xi$ and $\psi^2$ are
just identity operators, the coefficient of this term is given by
\begin{equation} \label{eKapp}
\langle 0| c_{-1}c_0 c_1|0\rangle \, Tr\left[
\pmatrix{0 & 1\cr 0 & 0}
\pmatrix{0 & 0\cr 1 & 0}\pmatrix{1 & 0\cr 0 & 0}
\right] = 1 \, .
\end{equation}
This corresponds to the presence of a term
\begin{equation} \label{eghost0}
\xi\, \xi^*\, \psi^2
\end{equation}
in the action if we ignore the factors of $g_o$ as before.
\item
Taking the derivative of \refb{eghost0} with respect to $\xi^*$ and $\psi^2$, we see
that the gauge transformation generated by the parameter
$\theta$ takes the form $\delta\xi = \theta\xi$.
Comparing this with the infinitesimal rigid U(1) transformation
$\delta\xi=i\widetilde\theta \xi$,
we get $\theta=i\widetilde\theta$.\footnote{This factor of $i$ is the result of imposing the wrong
reality condition on the mode $\psi^2$ or equivalently the parameter $\theta$. We have not
corrected it since this cancels the factor of $i$ arising out of the wrong choice of reality
condition for the mode $\psi^1$. This has been discussed below \refb{enewN}. \label{fo3}
}
\end{enumerate}
This gives
\begin{equation} \label{efull}
{\bf N} = -\zeta\, {1\over 2\pi} \, \Bigg/ \int d\theta = \zeta\, {i\over 2\pi} \, \Bigg/ \int d\widetilde\theta
= \zeta\, {i\over 4\, \pi^2}\, .
\end{equation}
Finally we shall discuss the dependence of ${\bf N}$ on the string coupling. This has already
been fully understood in \cite{private} but we include the discussion here for completeness.
We denote by $g_o= \sqrt{g_s/(2\pi^2)}$ the open string coupling\cite{9911116}.
We shall work in the convention in which the kinetic term of the open string fields
has $g_o$ independent normalization, so that in the Siegel gauge the quadratic part of the
action is $g_o$ independent, in agreement with the $g_o$ independent exponent appearing
in \refb{e8}. In this convention, each open string vertex operator carries a factor of $g_o$.
This introduces an additional
factor of $g_o$ in \refb{e26}, \refb{eopecoll} and therefore a factor of $1/g_o$ in
the right hand sides of \refb{e312}, \refb{enewerN},\refb{efull}.
On the other hand the disk three point function
of three open string vertex operators now gets a factor of $g_o^{-2}$ from the disk, and
a factor of $g_o^3$ from the three open string vertex operators, producing a net
factor of $g_o$. Therefore \refb{eghost0} gets a factor of $g_o$, leading to the
gauge transformation law $\delta\xi=g_o \theta\xi$. Therefore we now have
$g_o\theta=i \, \widetilde\theta$, leading to an extra factor of $g_o$ on the right hand side
of \refb{efull}. This cancels the earlier factor of $1/g_o$ and leaves the right hand side
of \refb{efull} unchanged. Therefore ${\bf N}$ is $g_o$ independent.
\begin{figure}
\begin{center}
\figpicardthree
\end{center}
\vskip -.2in
\caption{The integration contour
in the complex $\psi^0$ plane.
\label{figpicardthree}
}
\end{figure}
\sectiono{The multiplier} \label{s3.5}
We now turn to the determination of the multiplier $\zeta$.
For this we need to know how the
steepest descent contour / Lefschetz thimble
passing through the saddle point representing the D-instanton fits inside the
actual integration cycle that computes the full amplitude in string theory\cite{1206.6272,1511.05977,1802.10441}.
For the case of two dimensional bosonic string theory
this was discussed in \cite{2012.00041} where it was argued that the actual integration contour
contains only half of this thimble. In brief, the argument can be stated as follows.
After integrating out the massive open string modes, the tachyon effective potential
on the D-instanton has a potential $V(\psi^0)$
that has a maximum at $\psi^0=0$ describing the
D-instanton and a minimum at some positive value $1/a$ describing the perturbative
vacuum where the potential vanishes.
The potential is unbounded from below as $\psi^0\to -\infty$. Therefore the integration
contour over $\psi^0$ cannot be taken to be along the real axis all the way to $-\infty$,
but near the perturbative
vacuum where the potential has a local minimum we expect the contour to lie along
the real axis. If we model the potential as
\begin{equation}\label{eppp3}
V(\psi^0) = - {1\over 2} (\psi^0)^2 + {1\over 3} a \, (\psi^0)^3 +{1\over 6 \, a^2}\, ,
\end{equation}
then one can easily see that the potential goes to $+\infty$ as we approach the asymptotic
region within three $60^\circ$ cones, centered around the lines $\psi^0=r$, $\psi^0=r\,
e^{2\pi i/3}$ and $\psi^0=r\, e^{-2\pi i/3}$
for real positive $r$. Therefore we can take the integration contour to interpolate between
the regions $\psi^0=e^{-2\pi i/3}\times\infty$ and $\psi^0=\infty$ as shown in
Fig.~\ref{figpicardthree} or we can choose
the complex conjugate contour. On the other hand, the
steepest descent contour for the saddle point at $\psi^0=1/a$, representing the perturbative
vacuum, lies along the real $\psi^0$ axis from 0 to $\infty$, while the steepest descent
contour for the saddle point at $\psi^0=0$, representing the D-instanton, consists of a
contour through the origin
that interpolates between the regions around $\psi^0= e^{-2\pi i/3}\times\infty$ and
$\psi^0=e^{2\pi i/3}\times\infty$. Therefore the integration contour shown in
Fig.~\ref{figpicardthree}
can be regarded as the union of the steepest descent contour of the saddle point at
$\psi^0=1/a$, and half of the steepest descent contour of the saddle point at $\psi^0=0$.
This gives
\begin{equation} \label{ezetafin}
\zeta={1\over 2}\, ,
\end{equation}
and
\begin{equation} \label{ehalf}
{\bf N}
= {i\over 8\, \pi^2}\, .
\end{equation}
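As a quick numerical check of the saddle structure of the model potential \refb{eppp3} underlying this counting (a minimal sketch; the value $a=2$ is an arbitrary illustrative choice, any $a>0$ gives the same structure):
\begin{verbatim}
# critical structure of V(psi) = -psi^2/2 + a psi^3/3 + 1/(6 a^2)
a = 2.0

V = lambda p: -0.5 * p**2 + (a / 3.0) * p**3 + 1.0 / (6.0 * a**2)
dV = lambda p: -p + a * p**2          # V'(psi) = psi (a psi - 1)
d2V = lambda p: -1.0 + 2.0 * a * p    # V''(psi)

for p in (0.0, 1.0 / a):
    print(f"psi = {p}: V' = {dV(p):+.3f}, V'' = {d2V(p):+.3f}, V = {V(p):+.4f}")
# psi = 0:   V' = 0, V'' = -1 (maximum, the D-instanton), V = 1/(6 a^2)
# psi = 1/a: V' = 0, V'' = +1 (minimum, perturbative vacuum), V = 0
\end{verbatim}
This confirms the maximum at $\psi^0=0$ and the vanishing minimum at $\psi^0=1/a$ used in the argument above.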
Let us now comment on the sign of ${\bf N}$, about which we have not been careful so far.
This clearly depends on the
choice of the full integration contour -- if instead of the contour shown in
Fig.~\ref{figpicardthree} we choose the complex conjugate contour, the sign of ${\bf N}$ will
change. The actual choice should be dictated by physical
considerations, e.g. if a D-instanton induced amplitude leads to violation of
unitarity, it should be describable by an effective Hamiltonian with negative imaginary
part, reflecting loss of probability due to possible transition to states that
have not been accounted for in the effective Hamiltonian.
As discussed in \cite{2012.00041},
the choice of sign given in \refb{ehalf} is the correct choice
according to this consideration. Therefore \refb{ehalf} gives the final result for the
normalization constant associated with single D-instanton amplitudes in two dimensional
string theory. This agrees with the result obtained in \cite{1907.07688}
by comparison with the matrix
model results for the instanton induced amplitudes,
after we multiply this by a factor of $i$ to
compute the normalization of the D-instanton contribution to the S-matrix elements.
\sectiono{Discussion} \label{s4}
The method described here can in principle be applied to other D-instanton systems,
e.g. general $(m,n)$ ZZ-instantons in two dimensional bosonic string theory\cite{1912.07170},
D-instantons in two dimensional type 0B string theory\cite{0307083,0307195}
and D-instantons
in type IIB string theory\cite{9701093,9712156}.
Part of the analysis that may be somewhat non-trivial is
the computation of the multiplier factor $\zeta$, since this requires the knowledge of how the
steepest descent contour / Lefschetz thimble associated with a particular D-instanton fits
into the full integration contour. For example, in the context of $(m,n)$ ZZ-instantons in two
dimensional string theory, this will
require the knowledge of
how the different ZZ-instantons are represented as different extrema in the
configuration space of string fields. However since the multiplier factors are just rational
numbers, we
only need topological information on the locations of various extrema in the
configuration space of string fields
instead of requiring detailed dynamical information. Therefore we do not consider this to be
an insurmountable problem. Similarly for computing the D-instanton contribution to the
type IIB string theory amplitude, we need to understand how the D-instantons, which in
this case represent complex saddle points, fit inside the integration contour over the string
fields.
\bigskip
\noindent {\bf Acknowledgement:}
I wish to thank Bruno Balthazar,
Victor Rodriguez, Xi Yin and Barton Zwiebach for many useful discussions.
This work was
supported in part by the
J. C. Bose fellowship of
the Department of Science and Technology, India and the Infosys chair professorship.
\section{Introduction}
The need for AI accountability has spurred the development of many explainable AI (XAI) techniques~\cite{abdul2018trends, adadi2018peeking, arrieta2020explainable, guidotti2018survey, hohman2018visual}.
However, current approaches tend to use rudimentary, off-the-shelf visualizations, such as bar or line charts and heat maps, that assume users are analytically-driven to study the visualizations.
Consequently, these are difficult to make sense of~\cite{kaur2022sensible}, too simplistic to provide effective feedback~\cite{poursabzi2021manipulating}, or require significant subsequent effort to interpret~\cite{dhanorkar2021needs}.
Additionally, since machine learning models make predictions based on learned rules, many XAI techniques produce explanations that also reason deductively.
However, humans can also reason with other processes~\cite{peirce1903harvard}.
Abductive reasoning is a particularly powerful approach to first generate hypotheses, then test them to determine why an observation or event occurred~\cite{popper2014conjectures}.
Hoffman et al.~\cite{hoffman2017explaining, hoffman2020explaining} and Wang et al.~\cite{wang2019designing} argued for the need for XAI to also support abductive explanations.
Abduction will allow the user and AI to reason with hypotheses from a shared domain,
and spare users the effort of contextualizing low-level explanations.
Towards the goal of human-like XAI~\cite{wang2019designing,zhang2022towards}, we propose an approach to leverage abduction to provide hypothesis-driven explanations
for complex real-world problems.
Yet, hypotheses for challenging tasks tend to be complex, so we need expressive representations for AI explanations.
Diagrams are used in many domains to explain sophisticated observations and events.
In physics, force diagrams can explain how objects move when interacting with other objects or fields.
In medicine, diagrams can describe the physical mechanisms of a disease.
Diagrams are distinct from visualization since they can encode inherent constraints based on hypotheses~\cite{shimojima1999graphic}, and provide a systematic approach to reading them, thus simplifying interpretation.
Indeed, diagrams are a generalization of visual and verbal representations~\cite{peirce1976new}, thus expanding the diversity of explanations.
\rev{Extending abductive reasoning, people engage in diagrammatic reasoning~\cite{hoffmann2010diagrams} to:
I) construct diagrams as consistent systems of representation,
II) perform experiments based on the rules of the diagrams, and
III) note the experiment results.}
Therefore, to reduce interpretation burden, we propose \textit{XAI diagrammatization} to
use abductive inference and
generate explanations in expressive and constrained diagrams.
Towards this goal, we made the following \textbf{contributions}:
\begin{enumerate}[label=\arabic*)]
\item Introduced \textit{Diagrammatization} \rev{as a design framework for diagrammatic reasoning in XAI to i) support abductive reasoning with hypotheses, ii) follow domain conventions, and iii) can be represented visually or verbally.}
\item Proposed \textit{DiagramNet}, a deep neural network to provide diagram-based, hypothesis-driven, abductive explanations by inferring to the best explanation while inferring the prediction label.
\item Targeted a clinical application and proposed \textit{clinically-relevant explanations} to diagnose cardiac disease using murmur diagrams. We mathematically formalized murmur shapes to predict them as explanations in DiagramNet.
\item Evaluated DiagramNet using a real-world heart auscultation dataset~\cite{yaseen2018classification} with multiple studies.
\begin{enumerate}[label=\alph*)]
\item \textit{Demonstration study} to illustrate that diagrammatization can diversely support abductive, contrastive, counterfactual, and case (example-based) explanations.
\item \textit{Modeling study} to show that abductive reasoning in DiagramNet improves both prediction performance and explanation faithfulness compared to baseline and alternative models, and
\item \textit{Qualitative user study} with medical domain experts to show that diagram-based explanations are more clinically sound, useful, and convincing than saliency map explanations.
\end{enumerate}
\item Discussed \textit{implications} for XAI and \textit{generalization} of diagrammatization to other application domains.
\end{enumerate}
\begin{figure}[t!]
\centering
\includegraphics[width=14.0cm]{figures/fig-concept-intepretability-gap.pdf}
\vspace{-0.1cm}
\caption{
\rev{Reasoning processes between the user and AI.
a) With current XAI explanations, the user abducts hypotheses to evaluate and interpret conclusions. This leaves an interpretability gap.
b) With ante-hoc diagrammatic explanations, the AI abducts hypotheses to justify its conclusions, and provides higher-level, domain-conventional explanations, where the user simply acknowledges.
Diagrammatic explanations are derived from diagrammatic reasoning to i) perform abductive-deductive reasoning with hypotheses, ii) follow domain conventions, and iii) can be represented visually or verbally.}
}
\label{fig:concept-interpretability-gap}
\vspace{-0.2cm}
\end{figure}
\section{Conceptual background: Abductive and diagrammatic reasoning}
\rev{Current XAI explanations show common visualizations (e.g., charts, saliency maps), but this requires users to form their own hypotheses to evaluate. This leaves an interpretability gap.
We propose ante-hoc \textit{diagrammatic explanation} to close this gap.
Here, the AI performs abduction to generate and evaluate its own hypotheses to justify its prediction.
The explanation follows diagrammatic reasoning to be consistent with the conventions in the target application domain, and best represent domain hypotheses.
Fig. \ref{fig:concept-interpretability-gap} illustrates the interpretability gap for users of current XAI, and how diagrammatic explanations can reduce interpretability burden for users.
In this section, we introduce the human reasoning processes of abductive and diagrammatic reasoning, to distinguish their nuances from reasoning processes and representations typically used in XAI.}
\subsection{Inferential Reasoning}
On observing an object or event, people engage in various reasoning processes.
Philosopher Charles S. Peirce defined 3 types of inferential reasoning: induction, deduction, and abduction~\cite{peirce1903harvard}.
Fig. \ref {fig:concept-induction-deduction-abduction} shows how they differ.
For pedagogical clarity, we use a stylized scenario of recognizing cats and dogs based on ear shape, and concept-based rather than causal explanations.
Later, we describe reasoning on a complex case of cardiac diagnosis with causal hypotheses in Fig. \ref{fig:concept-abduction-deduction-murmur}.
\begin{figure}[t!]
\centering
\includegraphics[width=15.0cm]{figures/fig-concept-abductive-explanations.pdf}
\vspace*{-0.15cm}
\caption{
Three processes of reasoning:
a) induction to infer the general theory from annotated and labeled instances,
b) deduction to infer labels of instances by following theory,
c) abduction to infer the best explanation from hypothesized labels.
For visual clarity, we present the case of recognizing cats and dogs based on ear shapes.
The Peircean abductive process starts with I) observing an instance, II) hypothesizing labels, III) inferring the best explanation, and IV) deducting on rules to infer the label.
Since Doge the dog has pointy ears, it is misclassified as a cat.
Image credits: “dog face” and “cat face” by “irfan al haq”, “Dog” by Maxim Kulikov
}
\label{fig:concept-induction-deduction-abduction}
\vspace{-0.3cm}
\end{figure}
\subsubsection{Induction}
People infer general rules and theories of objects and events by using inductive reasoning.
For example in Fig. \ref{fig:concept-induction-deduction-abduction}a, given several instances of the same labels (cat or dog), and annotated observation of specific features (whether the ears are pointy or floppy), induction would learn the theory relating the label and observation (pointy ears imply cat, and floppy ears imply dog).
Machine learning trains models using induction from training instances.
\subsubsection{Deduction}
This process uses predefined rules for inference.
Fig. \ref{fig:concept-induction-deduction-abduction}b shows that deductive reasoning starts with specific observations of the instance (ear shape), evaluating them against rules, and inferring the label based on which is true.
Humans can implicitly extract observations (dotted arrows), but machines need explicit feature extraction to infer them.
This reveals that a latent reasoning process is needed, and this is what abductive reasoning does.
\subsubsection{Abduction} \label{subsection:abductive-reasoning}
Harman defined abduction as ``inference to the best explanation'' \cite{harman1965inference}; instead of inferring a label, this infers the underlying reason,
which could be causal or non-causal~\cite{williamson2016abductive}.
Peirce and later Popper describe abduction as ``guessing'' hypotheses~\cite{peirce1903harvard,popper2014conjectures}
that need to be evaluated for plausibility.
Combining abduction with deduction supports the \textit{hypothetico-deductive} reasoning method~\cite{popper2014conjectures} of forming and testing hypotheses.
This is equivalent to the \textbf{Peircean abduction process} that Hoffman et al.~\cite{hoffman2017explaining,hoffman2020explaining} and Miller~\cite{miller2019explanation} highlight as relevant to XAI, which we elaborate:
\begin{enumerate}[label=\Roman*.]
\item \textit{Observe event}, noting relevant cues for further reasoning.
\item \textit{Generate plausible explanations} as potential causes of the observation,
e.g., identities, states, diseases.
\item \textit{Evaluate and judge plausibility of explanations} by applying a system of rules to compare evaluation results. We infer the strongest evidence (best explanation) that is consistent with the observation (from Step I).
\item \textit{Resolve explanation} by using the best inferred explanation deductively to infer the final decision (inferred label).
\end{enumerate}
Fig. \ref{fig:concept-induction-deduction-abduction}c to \ref{fig:concept-induction-deduction-abduction}b illustrate an example:
I) on observing Doge (red dotted circle, Fig. \ref{fig:concept-induction-deduction-abduction}c),
II) we hypothesize that it could be a cat or a dog.
III) Next, we abduct on the rules for cat and dog, and determine that it could have pointy or floppy ears, respectively.
We then evaluate the fit of the pointy and floppy ears hypotheses, judge that its ears are more pointy than floppy,
and infer that Doge has pointy ears.
IV) We then deduce on the rules for pointy ears to infer that Doge is a cat.
This illustrates how abduction helps people to form hypotheses to extract features for deduction.
Hence, like \cite{hoffman2017explaining, hoffman2020explaining, wang2019designing}, we argue that AI should integrate abductive reasoning to generate and evaluate explanations for its deductive inferences.
Specifically, we implemented the Peircean abductive reasoning process in our technical approach (Section \ref{sec:DiagramNet}).
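To make steps I--IV concrete, we include a minimal sketch of the abductive-deductive loop below; all names, templates, and scores are hypothetical placeholders for the cat/dog example, not our DiagramNet implementation:
\begin{verbatim}
# Minimal sketch of the Peircean abduction process (Steps I-IV).
RULES = {"pointy_ears": "cat", "floppy_ears": "dog"}  # deductive rules

def observe(instance):
    """Step I: extract cues from the observed instance."""
    return instance["features"]

def generate_hypotheses():
    """Step II: enumerate plausible explanations."""
    return list(RULES.keys())

def evaluate(cues, hypothesis):
    """Step III: judge plausibility of a hypothesis against the cues
    via a toy similarity score between cue value and hypothesis template."""
    templates = {"pointy_ears": 0.9, "floppy_ears": 0.1}
    return 1.0 - abs(cues["ear_pointiness"] - templates[hypothesis])

def abduce_then_deduce(instance):
    cues = observe(instance)                                  # I
    hypotheses = generate_hypotheses()                        # II
    best = max(hypotheses, key=lambda h: evaluate(cues, h))   # III
    return RULES[best], best                                  # IV

print(abduce_then_deduce({"features": {"ear_pointiness": 0.8}}))
# -> ('cat', 'pointy_ears'): Doge's pointy ears yield a cat label
\end{verbatim}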
\subsection{Diagrammatization as a general XAI representation \rev{for domain-specific conventions}}
\begin{figure}[t!]
\centering
\includegraphics[width=7.5cm]{figures/fig-concept-diagrammatization-types.pdf}
\vspace{-0.3cm}
\caption{
\rev{Venn diagram of diagrammatic explanations that encompass different types of visualization and verbalization explanations.
Specific AI explanations can contain one or multiple types.
Note that visual diagrams (e.g., trees, network graphs, murmur diagrams) are indeed a type of visualization, but diagrammatization is a schema that can also include verbalization.}
}
\label{fig:concept-diagrammatization-venn-diagram}
\vspace{-0.3cm}
\end{figure}
\rev{Next, we articulate how diagrammatic reasoning applies abduction on diagram representations for complex domains.}
As a literature review, we first introduce current XAI methods based on visualization and verbalization,
articulate how diagrammatization is a broader paradigm encompassing both\rev{, and
how diagrams can be more expressive and constrained to efficiently convey hypotheses and concepts to domain experts
(see Fig. \ref{fig:concept-diagrammatization-venn-diagram} and Table \ref{table:verbal-visual-diagram})}.
\begin{table}[b!]
\small
\centering
\vspace{-0.05cm}
\caption{
Diagrammatization design space with dimensions to compare verbal, visual, and diagram representations of XAI.
}
\vspace{-0.15cm}
\label{table:verbal-visual-diagram}
\begin{tabular}{llllllllll}
\hline
& & & \multicolumn{2}{c}{Representation System} & & \multicolumn{4}{c}{Representation Properties} \\ \cline{4-5} \cline{7-10}
\addlinespace[0.05cm]
&
&
&
\multicolumn{1}{c}{Consistency} &
\multicolumn{1}{c}{Rules} &
&
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Level\\ of states\end{tabular}} &
\multicolumn{1}{c}{Homomorphism} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Content\\ expressivity\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Inherent\\ constraints\end{tabular}} \\ \cline{1-2} \cline{4-5} \cline{7-10}
\addlinespace[0.1cm]
\multicolumn{2}{l}{\color{verbal_color}Verbalization} & & & & & & & & \\
& Symbolic & & High & Formalized & & Categorical & Low (mathematical) & Bounded & Logical, math \\
& Template-based & & High & Bounded & & Categorical & Low (descriptive) & Bounded & Taxonomical \\
& NL Generative & & Low & Implicit & & Categorical & Low (may be spurious) & Unbounded & None \\
\addlinespace[0.1cm]
\multicolumn{2}{l}{\color{visual_color}Visualization} & & & & & & & & \\
& Model-free & & Low & None & & Continuous & Low (by data type) & Some bounds & None \\
& Model-based & & High & Formalized & & Continuous & High (conceptual) & Bounded & Topological \\
& Example-based & & Low & Implicit & & Continuous & High (physical) & Some bounds & None \\
& Concept-based & & Medium & Bounded & & Continuous & Low (semantic) & Bounded & Taxonomical \\
\addlinespace[0.1cm]
\multicolumn{2}{l}{\color{diagram_color}Diagrammatization} &
&
High &
Formalized &
&
Continuous &
\begin{tabular}[c]{@{}l@{}}High (physical, \\ conceptual)\end{tabular} &
Some bounds &
\begin{tabular}[c]{@{}l@{}}Topological,\\ geometrical\end{tabular} \\
\hline
\end{tabular}
\vspace{-0.1cm}
\end{table}
\subsubsection{Visualization}
Leveraging visualization to augment human cognition, many XAI techniques are rendered in visual form. We organize them into four broad categories based on their semantic structures rather than visual format:
\begin{enumerate}[label=\alph*)]
\item \textit{\color{visual_color}Model-free}
explanations use generic, off-the-shelf, low-level visualizations.
These assume linear or univariate relationships between variables, and are meant to be accessible to a broad audience (though lay users may struggle to comprehend them~\cite{abdul2020cogam}).
Techniques include:
\textit{Bar charts}
to show feature attributions~\cite{ribeiro2016should}, weights of evidence~\cite{kulesza2009fixing,lim2011design}. Extensions use point clouds to show data distributions~\cite{lundberg2017unified}, or violin plots to show uncertainty~\cite{wang2021show}.
\textit{Line graphs}
to show nonlinear relationships, which can be estimated with partial dependence plots~\cite{krause2016interacting},
modeled with generalized additive models (GAM)~\cite{caruana2015intelligible,abdul2020cogam}, etc.
\textit{Scatter plots} to show multivariate relationships~\cite{cavallo2018visual} and clusters~\cite{ahn2019fairsight}.
\textit{Saliency maps}
to show important regions as heatmaps on images~\cite{selvaraju2017grad,bach2015pixel,zhou2016learning} or highlights on text~\cite{wang2022interpretable}.
\item \textit{\color{visual_color}Model-based}
explanations visualize the data structure of the prediction model or a simplified proxy.
Many use graph network or rule-based data structures, which are complex but known to data scientists.
Techniques include:
\textit{Neural network} activations~\cite{kahng2017cti}, canonical filters in CNNs~\cite{olah2017feature}, or distilled networks~\cite{bau2017network,hohman2019s}.
\textit{Decision trees} to show nodes and decision branches to explain system decisions~\cite{lim2009and}, medical diagnoses~\cite{wu2018beyond}, and step count behavior~\cite{lim2019does}.
\item \textit{\color{visual_color}Example-based}
explanations retrieve examples that are similar~\cite{koh2017understanding}, contrastive~\cite{cai2019effects}, or even adversarial~\cite{kim2016examples,woods2019adversarial} for users to compare with the current observation. Typically visualized in native format, e.g., images instead of charts.
\item \textit{\color{visual_color}Concept-based}
explanations increase interpretability by explaining with semantically meaningful concept vectors~\cite{kim2018interpretability}, conceptual attributes~\cite{koh2020concept}, or relatable cues~\cite{zhang2022towards}.
Interactive editing also helps with understanding~\cite{cai2019human,kahng2018gan,zhang2021method}.
\end{enumerate}
\subsubsection{Verbalization}
Instead of visual representations, explanations can also be written (or spoken) verbally.
This is done with logical syntax (symbolic) or more "naturally" with text. We organize verbalization explanations as follows.
\begin{enumerate}[label=\alph*)]
\item \textit{\color{verbal_color}Symbolic}
explanations use mathematical notation to describe logical relationships. Since math is written sequentially, it is sentential and verbal~\cite{chandrasekaran2005makes}.
\textit{Rules} are popular to explain the AI's decision logic and can be simplified with various regularizations~\cite{letham2015interpretable,lakkaraju2016interpretable}.
They are particularly useful to provide counterfactual explanations~\cite{ribeiro2018anchors,wachter2017counterfactual}.
Formal logic has also been used to provide abductive explanations with prime implicants~\cite{ignatiev2019abduction} and constraining deep models towards abductive rules~\cite{dai2019bridging}, though these explanations remain highly mathematical and inaccessible to non-technical users.
\item \textit{\color{verbal_color}Template-based}
text explanations are a straightforward way to convert symbolic expressions into text with a mapping function. They produce text explanations with fixed terms and sentence structures (e.g.,~\cite{abujabal2017quint}).
\item \textit{\color{verbal_color}Natural Language Generative (NLG)}
explanations are "natural" by emulating how humans communicate and explain~\cite{ehsan2018rationalization, rajani2019explain, rosenthal2016verbalization}.
These are trained by showing a machine state (e.g., game state, text and hypotheses) to human annotators who rationalize an explanation.
Training is labor intensive, and yet may be spurious, since human annotators reason independently of the machine.
Moreover, incorrect annotations cannot be easily validated.
\end{enumerate}
Ehsan et al.'s definition of \textit{rationalization} is particularly instructive: an NLG explanation justifies a model's decision \textit{"based on how a human would think"}, but does \textit{"not necessarily reveal the true decision making process"}~\cite{ehsan2019automated}.
In contrast, diagrammatization extends this to include visual diagrams and also reveals the true decision making process of the AI.
\subsubsection{Diagrammatization}
Peirce considered {\color{diagram_color}diagrams} as a general framework that encompasses graphical (visual), symbolic (equations), and sentential (verbal) representations with several elements:
an \textit{ontology} that defines the entities and their relations,
\textit{conventions} that prescribe how to interpret diagrams, and
\textit{rules} to evaluate experiments~\cite{peirce1976new}.
Hoffmann outlined steps for diagrammatic reasoning~\cite{hoffmann2010diagrams} that align with the aforementioned Peircean abduction process:
\begin{enumerate}[label=\Roman*.]
\item Construct a diagram by means of a \textit{consistent system of representation}.
\item Perform experiments upon this diagram according to the \textit{rules} of the chosen system of representation.
\item Note the \textit{results} of those experiments.
\end{enumerate}
From this, we analyze representations by their consistency and rules.
{\color{verbal_color}Generative text} verbalization is open-ended with low consistency and rules implicit to language and tacit knowledge.
{\color{verbal_color}Symbolic} and {\color{verbal_color}template-based} verbalizations have high consistency and are bounded formally or implicitly to rules due to their predefined structure.
{\color{visual_color}Model-free} and {\color{visual_color}example-based} visualizations have low consistency, since they render any data that fit their formats, though examples are bounded by natural variations.
{\color{visual_color}Concept-based} visualizations are more consistent by restricting to fixed concepts.
{\color{visual_color}Model-based} visualizations and {\color{diagram_color}diagrams} have high consistency and formal rules that obey conventions of their formats.
Shimojima identified dimensions to distinguish linguistic and graphical representations~\cite{shimojima1999graphic}.
We found some of them instructive and adapted them to differentiate diagrammatization, verbalization, and visualization representations for XAI.
\begin{enumerate}
\item [A.] \textit{Level of states}
describes whether the representations can be "digital" (categorical) or "analog" (continuous).
{\color{verbal_color}Verbalizations} impose categorical representation,
while {\color{visual_color}visualizations} and {\color{diagram_color}diagrams} also support continuous quantities.
\item [D.] \textit{Homomorphism}
refers to how analogous the diagram is to the represented domain.
{\color{verbal_color}Verbal} representations have low homomorphism, since people have to translate text into symbols and structures.
{\color{visual_color}Model-free} visualizations may have formats irrelevant to the domain (e.g., spectrogram of heart sounds).
{\color{visual_color}Model-based} visualizations may be homomorphic with the domain if chosen appropriately.
{\color{visual_color}Example-based} visualizations of instances in their native format are highly homomorphic.
{\color{visual_color}Concept-based} explanations have to be interpreted verbally, so have low homomorphism.
{\color{diagram_color}Diagrams} can be chosen to physically represent the notions familiar to domain experts, thus can have high homomorphism.
\item [E.] \textit{Content expressivity}
refers to whether the representation limits information expressiveness.
{\color{verbal_color}Generative text} verbalization is unbounded, since any text could be predicted.
{\color{visual_color}Model-free} and {\color{visual_color}example-based} visualizations limit the visual format, but any relevant value can be rendered.
Other representations are bounded by the graphical, symbolic, or template formats.
High expressivity is useful to show nuances for experts, but is overwhelming to non-experts.
\item [F.] \textit{Inherent constraints.}
All representations share \textit{extrinsic} constraints of the represented domain, but can impose differing \textit{inherent} constraints.
{\color{verbal_color}Generative text} verbalizations can include any words, so have no inherent constraints, while {\color{verbal_color}template-based text} is bounded to the taxonomy in the template.
{\color{visual_color}Concept-based} visualizations are also constrained by the taxonomy of concepts.
{\color{visual_color}Model-based} visualizations are constrained by topological structure (e.g., decision tree).
{\color{diagram_color}Diagrams} can be constrained by topological or geometric constraints (e.g., physics, time sequence).
\end{enumerate}
\subsubsection{Diagrammatization for various explanation types}
Diagrammatization supports multi-faceted explanations, namely:
\begin{enumerate}[label=\alph*)]
\item \textit{Abductive explanations}
to select the best-fitting hypothesis and show that as the explanation for the prediction.
\item \textit{Contrastive explanations}~\cite{miller2019explanation}
to describe the evidence for predicting alternative outcomes in the model. By generating a hypothesis for each outcome, this can show how well each hypothesis fits the current instance observation.
\item \textit{Counterfactual explanations}~\cite{wachter2017counterfactual}
to propose changes to input values to predict another outcome.
Though used mostly for symbolic reasoning with tabular data, this can also be used for unstructured data (e.g., images)~\cite{cai2019human,zhang2022towards}.
\item \textit{Case (example-based) explanations}
to show examples of similar or contrastive predictions~\cite{cai2019effects}
for comparison.
\end{enumerate}
\section{Domain theory: Clinical background}
Cardiovascular diseases caused an estimated 17.9 million deaths worldwide, accounting for 32\% of deaths in 2019~\cite{who2021cardiovascular}.
We aim to develop an early diagnosis AI system for heart disease
to augment clinicians with deficient auscultation skills~\cite{alam2010cardiac}.
When predictions impact people's lives, it is critical to provide explanations for review by relevant experts.
Here, we describe the background to clarify how our AI explanations are clinically-relevant for practicing clinicians.
\subsection{Heart auscultation}
Fig. \ref{fig:concept-heart-murmur-diagrams} (Left) shows a partial heart cycle with blood flowing into the left atrium, pumped into the left ventricle through the \textit{mitral valve}, and pumped out through the \textit{aortic valve}.
Valves prevent blood from flowing backward.
Their closing produces a "lub-dub" sound:
the 1st heart sound (commonly termed S1) "lub" is from the mitral valve, and the 2nd heart sound (S2) "dub" is from the aortic valve.
S1 and S2 demarcate the systolic (between S1 and S2) and diastolic phases of the heart cycle.
In heart auscultation, the clinician uses a stethoscope to listen for normal or abnormal sounds.
\subsection{Murmur diagrams to diagnose cardiac valvular diseases}\label{sec:murmurs}
Abnormal heart sounds --- "murmurs" --- may indicate heart disease.
Clinicians make diagnoses by listening to changes in murmur loudness.
These are commonly represented in murmur diagrams~\cite{judge2015heart} (Fig. \ref{fig:concept-heart-murmur-diagrams}, Right).
We describe four prevalent diseases: aortic stenosis (AS), mitral regurgitation (MR), mitral valve prolapse (MVP), and mitral stenosis (MS).
\begin{figure}[t!]
\centering
\includegraphics[width=14.0cm]{figures/fig-concept-murmur-diagrams-2.pdf}
\vspace*{-0.0cm}
\caption{
Left:
Anatomy of the heart showing two valves that may suffer from heart disease;
adapted from {\color[HTML]{0060df}\underline{\href{https://commons.wikimedia.org/wiki/File:Diagram_of_the_human_heart_(valves_improved).svg}{\smash{Diagram of the human heart}}}} by {\color[HTML]{0060df}\underline{\href{https://en.wikipedia.org/wiki/de:User:Ungebeten}{\smash{Ungebeten}}}} under the {\color[HTML]{0060df}\underline{\href{https://creativecommons.org/licenses/by-sa/3.0/deed.en}{CC BY-SA 3.0 license}}}.
Right:
Murmur diagrams showing typical murmurs for a) aortic stenosis (AS), b) mitral regurgitation (MR), c) mitral valve prolapse (MVP), d) mitral stenosis (MS), and their more severe variants (e-h)
with slightly different shapes.
Black rectangles indicate normal "lub" (S1) and "dub" (S2) sounds. Red areas indicate abnormal murmur sounds.
}
\label{fig:concept-heart-murmur-diagrams}
\vspace{-0.2cm}
\end{figure}
\begin{enumerate}[label=\arabic*)]
\item\textit{Aortic Stenosis (AS):}
the aortic valve leaflets stiffen due to calcification,
narrowing the valve opening (i.e., stenosis),
resulting in a high-pitched noise that increases as the valve opens and decreases as it closes.
This produces a \textit{crescendo-decrescendo} murmur during the systolic heart phase, visualized as a diamond shape (Fig. \ref{fig:concept-heart-murmur-diagrams}a).
In severe AS, the shape apex shifts later and is lower, due to delayed valve closure and weaker heart performance (Fig. \ref{fig:concept-heart-murmur-diagrams}e).
\item\textit{Mitral Regurgitation (MR):}
the mitral valve fails to fully close, allowing blood to flow backward (i.e., regurgitation).
This reverse flow is heard as a constant, high-pitched murmur during the systolic heart phase, and
is visualized as a \textit{uniform} low amplitude sound (Fig. \ref{fig:concept-heart-murmur-diagrams}b).
Sometimes, the mitral valve remains closed until mid-systole (Fig. \ref{fig:concept-heart-murmur-diagrams}f).
\item\textit{Mitral Valve Prolapse (MVP):}
the tendons keeping the mitral valve closed fail, causing the valve to pop open (prolapse) and allowing blood to regurgitate.
This opening is heard as a mid-systolic "click", visualized as a vertical line (Fig. \ref{fig:concept-heart-murmur-diagrams}c).
Often the regurgitation is audible as a uniform, high-pitched murmur, which is MVP with MR (Fig. \ref{fig:concept-heart-murmur-diagrams}g).
\item\textit{Mitral Stenosis (MS):}
the mitral valve leaflets fuse (i.e., stenosis) due to rheumatic heart disease,
reducing blood flow during the \textit{diastolic} heart phase (Fig. \ref{fig:concept-heart-murmur-diagrams}d).
After the S2 "dub",
the valve snapping open makes a "click" sound,
enabling large blood flow, followed by a decrescendo as flow reduces, then a constant low-pitch "rumble", and a crescendo before the next S1.
Severe MS has an earlier click during diastole and longer murmur decrescendo (Fig. \ref{fig:concept-heart-murmur-diagrams}h).
\end{enumerate}
\subsection{Abductive inference of best murmur shape explanation for cardiac diagnosis}
With this medical knowledge, the domain expert can diagnose using hypothetico-deductive reasoning.
Fig. \ref{fig:concept-abduction-deduction-murmur} applies the Peircean abduction process to a medical case.
On hearing a heart sound, the clinician
I) observes an abnormal murmur (red amplitude),
II) hypothesizes plausible diseases (AS, MS, MVP, MR) and matches each to a corresponding murmur shape,
III) abductively infers the most likely shape as crescendo-decrescendo that best fits the sound heard, and
IV) deductively infers the diagnosis of AS from the rule that the murmur was Systolic \textit{and} ($\cap$) Crescendo-Decrescendo ($\blacktriangle$).
\begin{figure}[t!]
\centering
\vspace*{-0.05cm}
\includegraphics[width=6.0cm]{figures/fig-concept-abductive-deductive-murmur.pdf}
\vspace{-0.15cm}
\caption{Illustration of the hypothetico-deductive reasoning process to diagnose cardiac disease using
abductive reasoning (red arrows) to infer the most likely explanatory murmur shape and
deductive reasoning (blue arrows) to infer the consequent diagnosis.}
\label{fig:concept-abduction-deduction-murmur}
\vspace{-0.2cm}
\end{figure}
\subsection{Current XAI for medicine and heart auscultation}
With the critical situations under which medical AI operates~\cite{lim2009assessing}, several works have pursued XAI for medicine.
Wang et al. showed how XAI can mitigate cognitive biases in medical diagnoses~\cite{wang2019designing}.
Cai et al. identified requirements for trust in medical AI, including needing to \textit{"compare and contrast AI schemas relative to known human decision-making schemas"}~\cite{cai2019hello}.
Cai et al. designed SMILY to find similar pathology cases by region, example, and concept~\cite{cai2019human}.
Lundberg et al. proposed tree-based explanations to address \textit{"model mismatch -- where the true relationships in data do not match the form of the model"}~\cite{lundberg2020local},
though models take more forms than trees.
Tjoa and Guan's review of medical XAI identified several challenges, including the lack of human interpretability, explanation unfaithfulness, and the need for data science training in medical education~\cite{tjoa2020survey}.
In contrast, Vellido argued for the \textit{"need to integrate the medical experts in the design of data analysis interpretation strategies"}~\cite{vellido2020importance}.
Similarly, we use diagrammatization to imbue medical expertise into XAI.
In this work, we focus on AI for diagnosing cardiac disease.
Much work has focused on electrocardiogram (ECG) data (e.g., \cite{siontis2021artificial}), and less on phonocardiograms (PCG) from heart auscultation.
Yet, the few works on PCGs focus on classifying normal or abnormal sounds (e.g., \cite{rubin2017recognizing}) or segmenting time (e.g., \cite{dwivedi2018algorithms}).
These lack clinical usefulness, since they do not provide a differential diagnosis to rank multiple plausible diagnoses.
Work on XAI for PCGs is even more sparse,
focusing on saliency maps on spectrograms~\cite{dissanayake2020robust, raza2022designing, ren2022deep};
we show later how clinicians are unconvinced with this format.
\subsection{Diagrammatization for murmur diagrams}
The complexity of biological processes demands diagrammatic reasoning in medicine.
Furthermore, clinical diagnosis is indeed a form of abductive reasoning, where the clinician infers the best disease cause (explanation) based on symptoms (observation).
Therefore, heart auscultations and murmur diagrams provide an ideal use case to study and demonstrate diagrammatization.
We characterize murmur diagrams in terms of the diagrammatization design dimensions:
\begin{itemize}
\item \textit{Consistent system of representation (ontology).}
Key concepts are audio volume (amplitude) over time, normal "lub" (S1) and "dub" (S2) sounds, and abnormal murmur sounds. Murmurs can be systolic or diastolic, have shape categories with specific slopes (crescendo, decrescendo, uniform), and may include "clicks".
\item \textit{Rules to interpret representation.}
Base: represent heart sounds with phonocardiograms (PCG) and draw amplitude over time.
Annotations: S1 and S2 positions are demarcated as tall rectangles, and murmur shapes are drawn with multi-part straight lines.
These conventions help with drawing, reading, and evaluating the diagrams.
\item \textit{Categorical and continuous level of states.}
For each diagnosis, the murmur shape must fit a categorical profile, but there is some flexibility (e.g., slope steepness, time span length) to support continuous variation in observations.
\item \textit{Bounded content expressivity.}
Murmur diagrams emphasize murmur shapes, and are bounded to show the amplitude.
They do not represent other information, such as pitch, stethoscope position, and sound radiation.
\item \textit{High physical and conceptual homomorphism.}
All clinicians are trained to interpret murmur diagrams; these diagrams can be overlaid on PCGs and intuitively represent how sound volume changes over time.
\item \textit{Geometrical inherent constraints.}
Murmur shapes are geometrically constrained to be between S1-S2 or S2-S1 and have positive, negative, or flat slopes. The shapes should also fit the amplitude data optimally.
\end{itemize}
\rev{These describe how diagrams are expressive, constrained, and conventional to convey murmur shape hypotheses from heart sounds to explain cardiac diagnosis.
Next, we describe our technical approach for the AI to perform abductive and diagrammatic reasoning, and generate diagrammatic explanations.
By demonstrating the AI's independent ante-hoc reasoning, which is clinician-like, we aim to increase its trustworthiness for clinicians.}
\section{Technical approach}
We developed an explainable model to predict cardiac diagnosis from phonocardiograms (PCG).
Following clinical practice, the model generates diagrammatic explanations with murmur diagrams based on sound.
We discuss generalization to other applications later in the Discussion.
We describe our data source, data preparation, baseline modeling, problem formulation of murmur shapes, proposed DiagramNet model, and alternative model for cardiac diagnosis prediction.
\subsection{Heart auscultation dataset, data preparation, and annotation}
\subsubsection{Dataset}
We trained models to predict cardiac diagnoses using the dataset by Yaseen et al. \cite{yaseen2018classification}.
It comprises 1000 audio recordings of heart cycles, each 1.15-3.99s long with 8 kHz sampling rate.
There are 200 recordings each of various diagnoses: normal (N), AS, MR, MVP, and MS.
We next describe how we preprocess the 1000 recordings into 14,672 instances, which is sufficiently large for deep learning (a base CNN achieves 86.0\% accuracy).
\subsubsection{Preprocessing}
We processed each \texttt{.wav} audio file into multiple 1D time-series tensors.
To
classify auscultations starting at any time point, we created instances based on sliding windows, with window length 1.0s (8000 samples) and stride 0.1s.
The window length was chosen such that each instance will likely contain only one heart cycle with 0 or 1 murmur,
thus simplifying predictions.
In total, we have 14,672 instances, which we split evenly (50/50) into training and test sets.
We ensured that all time windows from the same original audio file occur only in the training or the test set, never both.
We further extract the amplitude of the audio time series, $\bm{a} = \mathscr{A}(\bm{x})$, which is key to estimating murmur shapes.
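To make the preprocessing concrete, the minimal Python/NumPy sketch below slices one recording into overlapping 1-sec windows; the function and variable names are illustrative assumptions, not our exact pipeline code.
\begin{verbatim}
import numpy as np

SR = 8000           # 8 kHz sampling rate
WIN = 1 * SR        # 1.0-sec window = 8000 samples
STRIDE = SR // 10   # 0.1-sec stride = 800 samples

def make_instances(x: np.ndarray) -> np.ndarray:
    """Slice one recording into overlapping 1-sec instances."""
    starts = range(0, len(x) - WIN + 1, STRIDE)
    return np.stack([x[s:s + WIN] for s in starts])

# Example: a 2.5-sec recording yields 16 windows of 8000 samples each.
recording = np.random.randn(int(2.5 * SR))
instances = make_instances(recording)   # shape: (16, 8000)
\end{verbatim}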
\subsubsection{Annotation}
The dataset only contains diagnosis labels and lacks annotations of murmur locations.
Thus, we manually annotated the segments where each murmur occurs to derive $\tau_1$ and $\tau_L$ as the murmur start and end times, respectively. These are used for the supervised training of the murmur segment predictions.
Using this segment, we fit a nonlinear function describing the correct murmur shape to the data. This provides ground truth estimates of shape parameters $\bm{\theta} = (\bm{\tau},\bm{\pi})$, where $\bm{\tau}$ and $\bm{\pi}$ are time and slope parameters, respectively. Details are described later.
The annotations were performed and verified in consultation with our clinical collaborators who are cardiologists.
Since it was prohibitively expensive to recruit clinicians as annotators, we trained ourselves (computer scientists) to understand the domain concepts of auscultation and murmurs.
We took care to minimize annotation errors.
Two annotators checked the annotations for consistency with each diagnosis as described in Section \ref{sec:murmurs} using time series visualizations of all annotated PCGs.
Any discrepancies between annotators were reconciled through discussion.
Our clinical collaborators verified a subset of annotations.
All mislabeled annotations were corrected.
Our detailed description of the clinical background demonstrates the domain knowledge we acquired.
Our results (discussed later) suggest that annotation errors were limited, since we demonstrated improved model accuracy for all diagnoses.
\subsection{Base prediction model for cardiac diagnosis prediction}
We treat each audio time series like a 1D image, since all instances are fixed-length and single-channel.
We further concatenate displacement $\bm{x}$ and amplitude $\bm{a}$ into a 2-channel "image".
To compare with our full proposed model, we trained a base convolutional neural network (CNN) \cite{hershey2017cnn} as model $M_0$ on $(\bm{x},\bm{a})$ to predict cardiac disease $\hat{y}_0$ (see Fig. \ref{fig:architecture-base-cnn}).
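For concreteness, we show a hedged PyTorch sketch of such a base CNN below; the layer counts, kernel sizes, and pooling are illustrative assumptions rather than the exact architecture of $M_0$.
\begin{verbatim}
import torch
import torch.nn as nn

class BaseCNN(nn.Module):
    """Illustrative 1D CNN over a 2-channel (x, a) input."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.classifier = nn.Linear(32 * 32, n_classes)

    def forward(self, x, a):
        z = torch.stack([x, a], dim=1)    # (batch, 2, 8000) "image"
        z = self.features(z).flatten(1)   # (batch, 1024)
        return self.classifier(z)         # logits over 5 diagnoses

logits = BaseCNN()(torch.randn(4, 8000), torch.randn(4, 8000))
\end{verbatim}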
\begin{figure}[h!]
\centering
\vspace*{-0.05cm}
\includegraphics[width=5.4cm]{figures/fig-architecture-base-cnn.pdf}
\vspace*{-0.05cm}
\caption{
Base CNN model that inputs the displacement $\bm{x}$ and amplitude $\bm{a} = \mathscr{A}(\bm{x})$ to predict cardiac diagnosis $\hat{y}_0 = M_0(\bm{x},\bm{a})$.
}
\label{fig:architecture-base-cnn}
\vspace{-0.1cm}
\end{figure}
\subsection{Formalization of murmur shapes as piecewise linear functions}
To enable our model to predict various murmur shapes,
we formulated them as parametric nonlinear functions over time $f_y(t)$.
Since murmur shapes are defined with crescendo, decrescendo, and uniform slopes, we approximate each slope as a line. Thus, we model the total murmur shape as a \textit{piecewise linear function}, instead of other less relevant families of functions, e.g., sum of polynomials (Taylor series) or sine/cosine (Fourier series).
A Taylor series would introduce spurious artifacts, since its mathematical fit is clinically irrelevant, and would be unintuitive to interpret for non-mathematical applications.
A Fourier series, which spectrograms actually represent, would capture important frequency information in murmurs, but does not emphasize the recognizable murmur shapes.
\begin{table}[b!]
\small
\centering
\caption{
Formulations of murmur shapes for various cardiac diagnoses as piecewise linear functions $f_y({\color[HTML]{0070C0}t})$ of murmur amplitude (red shape) changes over time ${\color[HTML]{0070C0}t}$.
$[]$ represents the Iverson bracket, which is 1 if its internal expression is true and 0 otherwise.
}
\label{table:math-murmur-functions}
\begin{tabular}{clrlccc}
\hline
Diagnosis $y$ &
&
\multicolumn{1}{c}{Murmur Diagram} &
&
Phase &
Shape Function $f_y({\color[HTML]{0070C0}t})$ &
Parameters \\ \cline{1-1} \cline{3-3} \cline{5-7}
\addlinespace[1.0mm]
N &
&
\raisebox{-0.4cm}{\includegraphics[width=3.5cm]{figures/figs-math-piecewise-functions-N.pdf}} &
&
{\color[HTML]{C0C0C0} \textit{n.a.}} &
{\color[HTML]{C0C0C0} 0} &
{\color[HTML]{C0C0C0} $\varnothing$} \\
AS &
&
\raisebox{-0.4cm}{\includegraphics[width=3.5cm]{figures/figs-math-piecewise-functions-AS.pdf}} &
&
Systolic &
\begin{tabular}[c]{@{}l@{}}$[\tau_1 \leq {\color[HTML]{0070C0}t} < \tau_L](\pi_0 + \pi_1({\color[HTML]{0070C0}t} - \tau_1) $\\ $+ [\tau_2 \leq {\color[HTML]{0070C0}t}](-(\pi_1 + \pi_2)({\color[HTML]{0070C0}t} - \tau_2)))$\end{tabular} &
\begin{tabular}[c]{@{}c@{}}$\tau_1, \tau_L,$\\ $\pi_0, \pi_1, \pi_2$\end{tabular} \\
MR &
&
\raisebox{-0.4cm}{\includegraphics[width=3.5cm]{figures/figs-math-piecewise-functions-MR.pdf}} &
&
Systolic &
$[\tau_1 \leq {\color[HTML]{0070C0}t} < \tau_L] \pi_0$ &
\begin{tabular}[c]{@{}c@{}}$\tau_1, \tau_L,$\\ $\pi_0$\end{tabular} \\
MVP &
&
\raisebox{-0.4cm}{\includegraphics[width=3.5cm]{figures/figs-math-piecewise-functions-MVP.pdf}} &
&
Systolic &
\begin{tabular}[c]{@{}l@{}}$[\tau_1 \leq {\color[HTML]{0070C0}t} < \tau_L](\pi_0 + \pi_1({\color[HTML]{0070C0}t} - \tau_1) $\\ $+ [\tau_2 \leq {\color[HTML]{0070C0}t}](-2\pi_1({\color[HTML]{0070C0}t} - \tau_2)$\\ $+ [\tau_3 \leq {\color[HTML]{0070C0}t}](\pi_1({\color[HTML]{0070C0}t} - \tau_3))))$\end{tabular} &
\begin{tabular}[c]{@{}c@{}}$\tau_1, \tau_2, \tau_3, \tau_L,$\\ $\pi_0, \pi_1$\end{tabular} \\
\addlinespace[1.0mm]
MS &
&
\raisebox{-0.4cm}{\includegraphics[width=3.5cm]{figures/figs-math-piecewise-functions-MS.pdf}} &
&
Diastolic &
\begin{tabular}[c]{@{}l@{}}$[\tau_1 \leq {\color[HTML]{0070C0}t} < \tau_L](\pi_0 + \pi_1({\color[HTML]{0070C0}t} - \tau_1) $\\ $+ [\tau_2 \leq {\color[HTML]{0070C0}t}](-2\pi_1({\color[HTML]{0070C0}t} - \tau_2)$\\ $+ [\tau_3 \leq {\color[HTML]{0070C0}t}](\pi_1({\color[HTML]{0070C0}t} - \tau_3)$\\ $+ [\tau_4 \leq {\color[HTML]{0070C0}t}](\pi_2({\color[HTML]{0070C0}t} - \tau_4)))))$\end{tabular} &
\begin{tabular}[c]{@{}c@{}}$\tau_1, \tau_2, \tau_3, \tau_4, \tau_L,$\\ $\pi_0, \pi_1, \pi_2$\end{tabular} \\
\addlinespace[1.0mm]
\hline
\end{tabular}
\end{table}
All candidate murmur shapes share the murmur segment start $\tau_1$ and end $\tau_L$ time parameters, but can have varying number of time $\bm{\tau}$ and slope $\bm{\pi}$ parameters depending on the complexity of the shape.
Crescendos are modeled as lines with positive slope, decrescendos as lines with negative slope, and uniform with 0 slope.
Table \ref{table:math-murmur-functions} illustrates the murmur shapes mathematically with relevant parameters, and their shape function $f_y(t)$ equations (a code sketch of the AS function follows the list):
\begin{enumerate}[label=\arabic*)]
\item \textit{Normal (N)}
has no murmurs, so murmur segment start $\tau_1$ and end $\tau_L$ are undefined $\varnothing$, and $f_N(t) = 0$ by definition.
\item \textit{Aortic stenosis (AS)}
has a crescendo-decrescendo murmur defined with positive slope $\pi_1$ from $\tau_1$ to $\tau_2$ and negative slope $-\pi_2$ from $\tau_2$ to $\tau_L$.
The vertical position of the shape is anchored by the intercept term $\pi_0$.
\item \textit{Mitral regurgitation (MR)}
has a uniform murmur between $\tau_1$ and $\tau_L$ at amplitude level $\pi_0$.
\item \textit{Mitral valve prolapse (MVP)}
murmurs start with a "click" which we model as a short crescendo-decrescendo with slopes $\pi_1$ and $-\pi_1$ from $\tau_1$ through $\tau_2$ to $\tau_3$.
The uniform murmur spans from $\tau_3$ to $\tau_L$ with 0 slope.
If there is no subsequent MR murmur, then the region with uniform slope would just have 0 amplitude.
\item \textit{Mitral stenosis (MS)}
has a very similar shape to that of MVP, but it ends with a crescendo with positive slope $\pi_2$ from $\tau_4$ to $\tau_L$.
Also note that this murmur happens in the diastolic heart phase, not systolic.
\end{enumerate}
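To make the piecewise formulation concrete, the sketch below implements the AS shape function $f_{AS}(t)$ from Table \ref{table:math-murmur-functions} in NumPy, with boolean masks standing in for the Iverson brackets; the parameter values in the example are illustrative.
\begin{verbatim}
import numpy as np

def f_as(t, tau1, tau2, tauL, pi0, pi1, pi2):
    """AS crescendo-decrescendo: rise at slope pi1 from tau1,
    fall at slope -pi2 after the apex tau2, zero outside the murmur."""
    in_murmur = (tau1 <= t) & (t < tauL)   # Iverson [tau1 <= t < tauL]
    past_apex = (tau2 <= t)                # Iverson [tau2 <= t]
    shape = pi0 + pi1 * (t - tau1) - (pi1 + pi2) * past_apex * (t - tau2)
    return in_murmur * shape

t = np.linspace(0, 1, 8000)
amp = f_as(t, tau1=0.2, tau2=0.35, tauL=0.5, pi0=0.05, pi1=2.0, pi2=2.0)
\end{verbatim}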
\subsection{DiagramNet: Diagrammatic network with abductive explanations of murmur shapes}\label{sec:DiagramNet}
We introduce DiagramNet, a deep neural network meta-architecture that infers a prediction and performs abductive reasoning to infer the best explanation that is consistent with the observation and prediction.
We implement this for the application of diagnosing cardiac diseases (as described in the previous section), but note that the architecture is generalizable to other domains with formalized hypotheses.
Fig. \ref{fig:model-DiagramNet} shows that the model has 7 stages that correspond in sequence to the 4-step Peircean abductive reasoning process described in Section \ref{subsection:abductive-reasoning}.
\begin{figure}[h!]
\centering
\includegraphics[width=14.0cm]{figures/fig-architecture-DiagramNet.pdf}
\caption{
Modular architecture of DiagramNet deep neural network.
Each module is numbered to correspond to its key stages (1-7) and the steps of the Peircean abduction process (I to IV) in Section \ref{subsection:abductive-reasoning}.
Black arrows indicate feedforward activations; the blue arrow indicates an iterative nonlinear optimization to estimate the final murmur shape parameters. \textbf{Bold} variables are vectors or tensors, variables with a hat ($\string^$) indicate predicted values, and $\circ$ is the Hadamard operator for element-wise multiplication of 2 vectors, used to mask the amplitude to the murmur region.
Narrow rectangles indicate an input or predicted variable.
Other shapes indicate processes, such as trainable neural network blocks (capital letters), non-trainable heuristic processes (script letters), and vector operators.
}
\label{fig:model-DiagramNet}
\end{figure}
\subsubsection{Audio displacement and amplitude inputs}
Given the 1-sec (8000-sample) audio data as displacement $\bm{x}$, we extract the amplitude $\bm{a}$ and concatenate them as a 2-channel 1D tensor.
Although the convolutional layers of the CNN could learn frequency information from $\bm{x}$, explicitly computing $\bm{a}$ makes it easier for the model to learn patterns from amplitude.
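The amplitude operator $\mathscr{A}(\cdot)$ can be implemented in several ways; one common choice, sketched below with SciPy, is the magnitude of the analytic signal (Hilbert envelope). Treat this as an assumption of the sketch, not necessarily the exact operator we used.
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def amplitude(x: np.ndarray) -> np.ndarray:
    """Amplitude envelope of a 1D audio signal via the Hilbert transform."""
    return np.abs(hilbert(x))
\end{verbatim}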
\subsubsection{Murmur segmentation}
Next, we input $(\bm{x},\bm{a})$ into a U-Net~\cite{ronneberger2015unet} model $M_m$ to predict the time region of the murmur $\hat{\bm{m}}$, defined as a mask vector.
We used U-Net since it is popular for image segmentation~\cite{kohl2018probabilistic, ronneberger2015unet}.
However, it suffers from \textit{over-segmentation} by inferring multiple regions of murmurs in a single instance, although there should only be one.
As in \cite{farha2019ms}, we resolve this with a smoothing loss regularization using the truncated mean squared error:
$L_\mu = \frac{1}{T} \sum_t^T \min(\epsilon_t, \epsilon)$, where $\epsilon_t = (\log \hat{m}(t) - \log \hat{m}(t-1))^2$ is the squared log-difference and $\epsilon$ is the truncation hyperparameter.
This may still result in more than one segment, so we choose the longest of the remaining segments.
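A minimal PyTorch sketch of this smoothing regularizer and the longest-segment heuristic follows; the truncation value and function names are illustrative assumptions.
\begin{verbatim}
import torch

def smoothing_loss(m_hat: torch.Tensor, eps: float = 0.15) -> torch.Tensor:
    """Truncated MSE over log-differences of the predicted mask m_hat
    (shape: (batch, T)), discouraging jagged, over-segmented outputs."""
    d = torch.log(m_hat[:, 1:] + 1e-8) - torch.log(m_hat[:, :-1] + 1e-8)
    return torch.clamp(d ** 2, max=eps).mean()

def longest_segment(mask: torch.Tensor) -> tuple:
    """Return (start, end) indices of the longest contiguous run of 1s."""
    best, run_start, in_run = (0, 0), 0, False
    for t, v in enumerate(mask.tolist() + [0]):   # sentinel flushes last run
        if v and not in_run:
            run_start, in_run = t, True
        elif not v and in_run:
            if t - run_start > best[1] - best[0]:
                best = (run_start, t)
            in_run = False
    return best
\end{verbatim}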
\subsubsection{Initial prediction}
Now, we exploit the embedding representation learned when training murmur segmentation by
feeding it into fully-connected layers $F_{y_0}$ to predict an initial diagnosis $\hat{y}_0$.
This is similar to predicting the diagnosis with a base CNN, but benefits from the added multi-task learning towards predicting the murmur segment $\hat{\bm{m}}$ too.
\subsubsection{Murmur shape fit optimization}
Next, we extract the murmur amplitude by applying murmur segment $\hat{\bm{m}}$ as a mask on the full amplitude, i.e., $\tilde{\bm{a}}_m = \bm{a} \circ \hat{\bm{m}}$, where $\circ$ is the Hadamard element-wise multiplication.
We can then estimate murmur shapes focused on this region from $\tau_1=\hat{m}_1$ to $\tau_L=\hat{m}_L$ based on the shape functions $f_y(t)$ for each diagnosis $y$.
First, we estimate initial shape parameter values using heuristics based on typical characteristics $\tilde{\bm{\theta}}_0 = \Theta_0(\tilde{\bm{a}}_m)$ defined in Table \ref{table:math-initial-params}.
These do not have to be very accurate, since we will optimize them later. We briefly describe the heuristics:
\begin{enumerate}[label=\arabic*)]
\item \textit{Normal (N).}
No parameters are estimated, since no murmur is expected.
\item \textit{Aortic stenosis (AS).}
We estimate the apex of the crescendo-decrescendo to occur at the time $\tau_2 = \argmax_t(a)$ of highest amplitude $\max(a)$.
$\pi_0$ is just the amplitude at $\tau_1$, $\pi_1$ is the slope from the murmur start to apex, and $\pi_2 \approx \pi_1$.
\item \textit{Mitral regurgitation (MR).}
The shape is a flat line at the average amplitude of the murmur segment, $\bar{a}_m = \frac{1}{\tau_L - \tau_1} \sum_{\tau_1 < t < \tau_L} a(t)$.
\item \textit{Mitral valve prolapse (MVP).}
We estimate the apex at $\tau_2$ in the same way as for AS, and $\tau_3$ to occur at twice the distance from $\tau_1$ to $\tau_2$.
$\pi_0$ and $\pi_1$ are calculated the same way as for AS.
\item \textit{Mitral stenosis (MS).}
Heuristics do not estimate the time parameters consistently, so we use a data-driven approach based on the median time differences in the training dataset. These are calculated for $\tau_2$ and $\tau_4$ relative to the murmur start $\tau_1$ and end $\tau_L$ times, respectively.
$\tau_3$, $\pi_0$ and $\pi_1$ are calculated the same way as for MVP.
\end{enumerate}
With these initial shape parameter values $\tilde{\bm{\theta}}_0$, we compute the murmur shapes for all diagnoses $\tilde{\bm{h}}_m = \mathscr{F}_h(\tilde{\bm{\theta}}_0)$.
These may not fit well, so we optimize the fit $\mathscr{O}$
using L-BFGS \cite{nocedal2006numerical} to minimize the shape fit MSE, i.e., $\Breve{\bm{\theta}} = \argmin_{\bm{\theta}} ||\mathscr{F}_h(\bm{\theta}) - \tilde{\bm{a}}_m||_2^2$.
\rev{This iterative optimization is similar to approaches used in activation maximization~\cite{nguyen2016synthesizing} and CLIP~\cite{radford2021learning}.}
This results in the optimal murmur shapes $\Breve{\bm{h}}_m$ for all diagnoses that best fit the murmur amplitude data.
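The sketch below illustrates both steps for AS: heuristic initialization (per Table \ref{table:math-initial-params}) followed by refinement with SciPy's L-BFGS-B, reusing \texttt{f\_as} from the earlier sketch. The use of numerical gradients here is an assumption of the sketch, not necessarily our exact optimizer configuration.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def init_as(a_m, t, tau1):
    """Heuristic initial AS parameters: apex at the amplitude peak."""
    tau2 = t[np.argmax(a_m)]                  # apex time
    pi0 = np.interp(tau1, t, a_m)             # amplitude at murmur start
    pi1 = (a_m.max() - pi0) / (tau2 - tau1)   # crescendo slope
    return np.array([tau2, pi0, pi1, pi1])    # pi2 ~ pi1

def fit_as(a_m, t, tau1, tauL):
    theta0 = init_as(a_m, t, tau1)
    def sse(theta):
        tau2, pi0, pi1, pi2 = theta
        h = f_as(t, tau1, tau2, tauL, pi0, pi1, pi2)  # earlier sketch
        return np.sum((h - a_m) ** 2)
    res = minimize(sse, theta0, method="L-BFGS-B")
    return res.x, res.fun   # refined parameters, shape-fit SSE
\end{verbatim}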
{\tabulinesep=1.1mm
\begin{table}[t]
\small
\centering
\caption{
Heuristics to estimate initial values of murmur shape parameters for each plausible diagnosis $y$ based on predicted murmur segment start $\hat{m}_1$ and end $\hat{m}_L$, and data-driven approach (for MS).
Shape parameters for all diagnoses are predicted, regardless of predicted diagnosis.
$a$ represents all amplitudes in the observation, $\bar{a}_m$ is the average amplitude during the murmur, $a(t)$ represents the amplitude at time $t$, $\Delta\tau_{12} = \tau_2 - \tau_1$, $\Delta\tau_{4L} = \tau_L - \tau_4$, and $\mu_{0.5}(\Delta\tau)$ represents the median of all training set instances for $\Delta\tau$.
}
\label{table:math-initial-params}
\vspace{-0.2cm}
\begin{tabu}{rlccccclccc}
\hline
\addlinespace[0.02cm]
&
&
\multicolumn{5}{c}{Initial Time Parameters} &
&
\multicolumn{3}{c}{Initial Slope Parameters} \\ \cline{3-7} \cline{9-11}
\addlinespace[0.06cm]
$y$ &
&
$\tau_1$ &
$\tau_2$ &
$\tau_3$ &
$\tau_4$ &
$\tau_L$ &
&
$\pi_0$ &
$\pi_1$ &
$\pi_2$ \\ \cline{1-1} \cline{3-7} \cline{9-11}
\addlinespace[0.15cm]
N &
&
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
&
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} \\
AS &
&
\textit{$\hat{m}_1$} &
$\argmax_t (a)$ &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
\textit{$\hat{m}_L$} &
&
$a(\tau_1)$ &
$\frac{a(\tau_2)-\pi_0}{\Delta\tau_{12}}$ &
$\pi_1$ \\
MR &
&
\textit{$\hat{m}_1$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} &
\textit{$\hat{m}_L$} &
&
$\bar{a}_m$ &
{\color[HTML]{C0C0C0} $\varnothing$} &
{\color[HTML]{C0C0C0} $\varnothing$} \\
MVP &
&
\textit{$\hat{m}_1$} &
$\argmax_t (a)$ &
$\tau_1 + 2\Delta\tau_{12}$ &
{\color[HTML]{C0C0C0} $\varnothing$} &
\textit{$\hat{m}_L$} &
&
$a(\tau_1)$ &
$\frac{a(\tau_2)-\pi_0}{\Delta\tau_{12}}$ &
{\color[HTML]{C0C0C0} $\varnothing$} \\
MS &
&
\textit{$\hat{m}_1$} &
$\tau_1 + \mu_{0.5}(\Delta\tau_{12})$ &
$\tau_1 + 2\Delta\tau_{12}$ &
$\tau_L - \mu_{0.5}(\Delta\tau_{4L})$ &
\textit{$\hat{m}_L$} &
&
$a(\tau_1)$ &
$\frac{a(\tau_2)-\pi_0}{\Delta\tau_{12}}$ &
$\frac{a(\tau_L)-a(\tau_4)}{\Delta\tau_{4L}}$ \\ \hline
\end{tabu}
\end{table}
\vspace{-0.2cm}
}
\subsubsection{Shape fit measurement}
We calculate the MSE lack-of-fit of each murmur shape function $\Breve{\bm{h}}_m^{(y)}$ for each diagnosis $y$ to the amplitude $\tilde{\bm{a}}_m$ of the inferred murmur segment $\hat{\bm{m}}$, i.e., $\Breve{\bm{d}}^{(y)} = ||\Breve{\bm{h}}_m^{(y)} - \tilde{\bm{a}}_m||_2^2 / ||\hat{\bm{m}}||_1$.
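In code, this lack-of-fit is a one-liner; the sketch below assumes NumPy arrays for the fitted shape, the masked amplitude, and the binary segment mask.
\begin{verbatim}
import numpy as np

def lack_of_fit(h_m: np.ndarray, a_m: np.ndarray,
                m_hat: np.ndarray) -> float:
    """Segment-length-normalized MSE between fitted shape and amplitude."""
    return float(np.sum((h_m - a_m) ** 2) / np.sum(m_hat))
\end{verbatim}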
\subsubsection{Hypothesis-driven abductive prediction}
One would expect $\Breve{\bm{d}}$ to be negatively monotonic with the likelihood of predicting $y$, but there are discrepancies due to
hypothesis \textit{overfitting}.
More expressive shape functions can subsume simpler ones, i.e., $f_{MR} \subseteq f_{AS} \subseteq f_{MVP} \subseteq f_{MS}$.
For example, Fig. \ref{fig:demo-mvp}d shows MS overfitting for MVP.
To overcome this, we could regularize the hypotheses and penalize the loss for functions with more parameters.
However, this omits another rule that MS murmurs only occur in diastole, not systole, which can clearly inform whether a murmur is MVP or MS.
Instead, exploiting auxiliary information from earlier in the model helps.
Specifically, we extract the heart phase $\hat{\phi}_0$ from the initial diagnosis prediction $\hat{y}_0$, indicating whether the murmur is \textit{n.a.}, systolic, or diastolic. We then input $\Breve{\bm{d}}$ and $\hat{\phi}_0$ into fully-connected layers $F_{y_h}$ to obtain the hypothesis-driven prediction $\hat{y}_h$.
Note that this inference only uses shape fit and heart phase information, and does not need detailed amplitude information.
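A hedged PyTorch sketch of this head follows, with the heart phase encoded as a 3-way one-hot vector (\textit{n.a.}/systolic/diastolic); the layer sizes are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class HypothesisHead(nn.Module):
    """Small FC network over 5 shape-fit errors + 3-way phase one-hot."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(5 + 3, 16), nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, d_fit, phase):
        # d_fit: (batch, 5) lack-of-fit per diagnosis; phase: (batch, 3)
        return self.fc(torch.cat([d_fit, phase], dim=1))
\end{verbatim}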
\subsubsection{Final combined prediction}
Finally, we perform ensemble learning using both the initial $\hat{y}_0$ and hypothesis-driven $\hat{y}_h$ predictions for the final prediction, i.e., $\hat{y} = F_y(\hat{y}_0, \hat{y}_h)$.
This completes the model decision making process as: a) initial prediction, b) explanatory hypothesis and evaluation, c) resolved prediction and explanation.
In summary, the technical approach (stages 1-7) follows the Peircean abduction process:
\begin{enumerate}[label=\Roman*.]
\item \textit{Observe event} by observing displacement to interpret its amplitude (stage 1), and perceiving the murmur location (2).
\item \textit{Generate plausible explanations} by "guessing" the initial diagnosis (3), and generating murmur shape hypotheses (4).
\item \textit{Evaluate and judge plausibility} by measuring the fit of each hypothesis (5), and contextually judging the inference (6).
\item \textit{Resolve explanation} by combining the initial guess with abducted inference to make a final predicted diagnosis (7).
\end{enumerate}
\subsection{Alternative prediction model with spectrogram input and saliency map explanation}
To compare our proposed method with current XAI,
we implemented a spectrogram-based base CNN classifier (Fig. \ref{fig:model-spectrogram}).
Spectrograms are popular to extract features from high-frequency time series data~\cite{dissanayake2020robust, ren2022deep}.
They show how the frequency (pitch) content of the signal (y-axis) changes over time (x-axis), with each pixel indicating the amplitude of a specific frequency component (by color or numeric value).
Spectrograms are amenable to modeling with CNNs, since they are images.
Specifically, we use the mel spectrogram $\bm{s}$, since it is more sensitive to variations in lower frequencies.
Saliency maps are popular to explain which pixels were important for image-based predictions with CNNs~\cite{simonyan2014deep,selvaraju2017grad,zhou2016learning}.
For spectrograms, this indicates the frequencies at specific times that the model focused on.
We implement Grad-CAM~\cite{selvaraju2017grad} to generate saliency explanation $\hat{\bm{e}}_s$.
Despite their popularity,
we argue that using saliency maps neglects the interpretability needs of domain experts.
Specifically, clinicians are not trained on spectrograms, thus we hypothesize that saliency maps are less appropriate than murmur-shape diagrammatic explanations.
Also, similar to \cite{zhang2022towards}, we provide simplified saliency map explanations to show importance by time, time-saliency $\hat{\bm{e}}_t$,
by aggregating all saliency across frequencies $\Sigma_f$.
This is simpler and
does not require the user to understand spectrograms or note frequencies.
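A minimal sketch of this simplification is shown below, assuming librosa for the mel spectrogram; the saliency map itself would come from any standard Grad-CAM implementation.
\begin{verbatim}
import numpy as np
import librosa

def mel_spectrogram(x: np.ndarray, sr: int = 8000) -> np.ndarray:
    return librosa.feature.melspectrogram(y=x, sr=sr)  # (n_mels, frames)

def time_saliency(saliency_map: np.ndarray) -> np.ndarray:
    """Aggregate a (freq, time) saliency map into 1D importance over time."""
    e_t = saliency_map.sum(axis=0)
    return e_t / (e_t.max() + 1e-8)   # normalize to [0, 1]
\end{verbatim}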
These explanations were used as baseline comparisons in our qualitative user study, described next.
\begin{figure}[h!]
\centering
\vspace{-0.0cm}
\includegraphics[width=6.8cm]{figures/fig-architecture-spectrogram.pdf}
\vspace{-0.2cm}
\caption{
Alternative CNN model that inputs the mel spectrogram $\bm{s} = \mathscr{S}(\bm{x})$ of the audio to predict diagnosis $\hat{y}_s = M_s(\bm{s})$.
It generates a saliency map explanation $\hat{\bm{\alpha}}_s$ overlaid on $\bm{s}$ as a Spectrogram-saliency explanation $\hat{\bm{e}}_s$ or aggregated by time as Time-saliency $\hat{\bm{e}}_t$.
}
\label{fig:model-spectrogram}
\vspace{-0.2cm}
\end{figure}
\section{Evaluations}
We evaluated diagrammatization and DiagramNet in multiple stages:
1) a demonstration study showing the interpretability of abductive explanations;
2) a quantitative modeling study comparing DiagramNet against the baseline CNN models and other reasonable approaches; and
3) a qualitative user study with medical students investigating the usefulness of diagrammatic explanations compared to more common, but overly-technical saliency map explanations.
\subsection{Demonstration study: predictions and explanations}
We demonstrate abductive (best), alternative contrastive, counterfactual, and case explanations of diagrammatization.
\begin{enumerate}[label=\alph*)]
\item \textit{Abductive explanations.}
DiagramNet selects the most consistent murmur shape with its prediction, thus \textit{inferring to the best explanation}.
Fig. \ref{fig:demo-explanations} shows the best explanation for each diagnosis type.
Users can see the predicted murmur segment by the coverage (or absence) of the red shape, and how the shape fits the amplitude time series optimally.
\item \textit{Contrastive explanations.}
Fig. \ref{fig:demo-mvp} shows contrastive explanations for a case with MVP.
The murmur shape for MVP fits best among all diagnoses.
Note that since the MS shape function has more parameters than that for MVP, it can also fit this murmur data.
Hence, the MS hypothesis overfits the observation, but MVP is preferred since it is simpler and sufficient.
Furthermore, the MS murmur should be in diastole not systole, thus the murmur shape should be flat in this region.
See Appendix Table \ref{table:demo-mvp-predictions} for predicted shape parameters and goodness-of-fit MSE for each murmur shape.
\item \textit{Counterfactual explanations.}
These can be derived from the contrastive explanations to show how the murmur amplitude could be slightly different to be predicted as due to another diagnosis.
See Appendix Fig. \ref{fig:demo-mvp-counterfactual} for examples.
\item \textit{Case (example-based) explanations.}
We can retrieve instances that have good fits for specific murmur shapes.
Fig. \ref{fig:demo-as} demonstrates several cases of the crescendo-decrescendo murmurs representative of AS. This can help clinicians verify their current case, or review how robust the AI predictions are for non-standard expressions of the disease.
\end{enumerate}
\subsection{Modeling study}\label{sec:modeling-study}
Since we implemented DiagramNet $M$ as an ante-hoc explainable model, imbued with hypotheses
(i.e., murmur shapes for each diagnosis),
we expect it to perform better than other less knowledgeable models.
Hence, we quantitatively compared the prediction performance and explanation faithfulness of DiagramNet against other models.
In the appendix, Fig. \ref{fig:model-others} shows the architectures of these models.
We describe the models compared, evaluation metrics, and results.
\subsubsection{Comparison models}
For models that are a subset of DiagramNet ($M_0$ and $M_m$), this also serves as an ablation study to examine how adding new architectural features improves performance.
We also included alternative models that seem reasonable for predicting murmur shapes, but that we found to be inadequate.
In all, the models evaluated are:
\begin{enumerate}[label=\arabic*)]
\item $M_0(\bm{x},\bm{a}) = \hat{y}_0$, base CNN model trained on displacement $\bm{x}$ and amplitude $\bm{a}$ to predict diagnosis.
\item $M_s(\bm{s}) = \hat{y}_s$, base CNN model trained on spectrogram $\bm{s}$ to predict diagnosis, which is used in our user study.
\item $M_{\tau}(\bm{x},\bm{a}) = (\hat{y}_0,\hat{\bm{\tau}})$, multi-task model to predict diagnosis, and murmur segment start and end times. This is trained with supervised learning from $y$ labels and $\bm{\tau}=(\tau_1,\tau_L)$ annotations. This does not consider spatial information.
\item $M_{\theta}(\bm{x},\bm{a}) = (\hat{y}_0,\hat{\bm{\theta}})$, multi-task model to predict diagnosis, and murmur shape parameters. This is trained with $y$ labels and $\bm{\theta}=(\bm{\tau},\bm{\pi})$ annotations. This does not consider spatial or geometrical information.
\item $M_{m}(\bm{x},\bm{a}) = (\hat{y}_0,\hat{\bm{m}})$, encoder-decoder model to predict diagnosis, and murmur segment. Like $M_{\tau}$, this identifies the murmur start and end times, but by using U-Net \cite{ronneberger2015unet} to predict pixel locations of the murmur. This models spatial information through the transpose-CNN layers, so we expect it to be more accurate than $M_{\tau}$.
\item $M_{a_m}(\bm{x},\bm{a}) = (\hat{y}_0,\hat{\bm{a}}_m)$, encoder-decoder model to predict diagnosis, and murmur amplitude.
This explanation indicates that the model can "see" where the murmur is and reconstruct it.
\item $M(\bm{x},\bm{a}) = (\hat{y}_0,\hat{\bm{h}}_m, \hat{y})$, DiagramNet with initial $\hat{y}_0$ and final $\hat{y}$ predicted diagnoses, and hypotheses explanations $\hat{\bm{h}}_m$.
\end{enumerate}
\begin{figure}[h!]
\centering
\includegraphics[width=9.0cm]{figures/figs-explanations-all.pdf}
\vspace*{-0.25cm}
\caption{
Phonocardiogram (PCG) of cases with different cardiac diagnoses, showing the correct predicted murmur segments and murmur shapes to explain each diagnosis.
We provide interactive demos to explore shape functions for:
{\color[HTML]{0060df}\underline{\href{http://www.desmos.com/calculator/gjvrllldr0}{AS}}},
{\color[HTML]{0060df}\underline{\href{http://www.desmos.com/calculator/jmdhs9gsci}{MR}}},
{\color[HTML]{0060df}\underline{\href{http://www.desmos.com/calculator/raaap9gyqb}{MVP}}}, and
{\color[HTML]{0060df}\underline{\href{http://www.desmos.com/calculator/d559nqxzng}{MS}}}.
}
\label{fig:demo-explanations}
\vspace*{-0.1cm}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=11cm]{figures/figs-contrastive-explanations-MVP.pdf}
\vspace*{-0.25cm}
\caption{
Contrastive explanations of alternative diagnoses for an actual MVP case.
Murmur shapes for AS and MR clearly do not fit the murmur amplitude, but both MVP and MS shapes do, since the MS function overfits the MVP data.
}
\label{fig:demo-mvp}
\vspace{-0.1cm}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=11cm]{figures/figs-similarity-explanations-AS.pdf}
\vspace*{-0.25cm}
\caption{
Case (example-based) explanations of similar AS cases.
The user can compare how a specific case looks similar to other cases with the same diagnosis.
For example, (c) is missing the S2 sound, but this is similar to (a) with a very weak S2 sound too.
}
\label{fig:demo-as}
\vspace{-0.6cm}
\end{figure}
\subsubsection{Evaluation metrics}
We compared the models using various measures of
\textit{prediction performance} (accuracy, sensitivity, specificity) and
\textit{explanation faithfulness} (murmur overlap, murmur parameter estimation errors).
These were evaluated on a dataset of 7,262 1-sec instances.
For each instance, we calculated the following (a code sketch of the metrics follows the list):
\begin{itemize}
\item \textit{Prediction correctness} ($\uparrow$ better)
of whether the predicted diagnosis matches the actual diagnosis, i.e., $[y = \hat{y}]$.
We aggregated metrics for each diagnosis and included common metrics used in medicine.
\begin{itemize}[label=$\circ$]
\item \textit{Accuracy} is calculated by averaging correctness over the test set.
\item \textit{Sensitivity} ($TP/(TP+FN)$) measures how likely the model is to detect actual disease.
\item \textit{Specificity} ($TN/{(TN+FP)}$) measures how likely the model will not cause a false alarm.
\end{itemize}
\item \textit{Murmur segment Dice coefficient} ($\uparrow$ better)
measures the overlap between predicted $\hat{\bm{m}}$ and actual $\bm{m}$ murmur segments,
i.e., $s_\tau = 2(\bm{m} \cdot \hat{\bm{m}}) / (|\bm{m}|^2 + |\hat{\bm{m}}|^2)$.
For $M_\tau$ and $M_\theta$ that only predict parameters, we computed $\hat{\bm{m}} = [\tau_1 < \bm{t} < \tau_L]$.
\item \textit{Murmur segment parameter MSE} ($\downarrow$ better)
indicates how well the model predicted the start $\tau_1$ and end $\tau_L$ time parameters of the murmur, i.e., $\varepsilon_\tau = ||\tau_1-\hat{\tau}_1||_2^2 + ||\tau_L-\hat{\tau}_L||_2^2$.
\item \textit{Murmur shape parameters MSE} ($\downarrow$ better)
indicates how well the model predicted the murmur shape function parameters, i.e., $\varepsilon_\theta = ||\bm{\theta} - \hat{\bm{\theta}}_y||_2^2$, where $\bm{\theta}$ and $\hat{\bm{\theta}}_y$ are the actual and predicted for the correct diagnosis $y$.
\item \textit{Murmur shape fit MSE} ($\downarrow$ better)
indicates how well the predicted murmur shape $\hat{\bm{h}}_m$ (or reconstructed murmur amplitude $\hat{\bm{a}}_m$) fits the ground truth murmur amplitude $\bm{a}_m$, i.e., $\varepsilon_a = ||\bm{a}_m - \hat{\bm{h}}_m||_2^2$ or $\varepsilon_a = ||\bm{a}_m - \hat{\bm{a}}_m||_2^2$.
\end{itemize}
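For reference, the sketch below gives minimal NumPy implementations of these metrics; the variable names are illustrative.
\begin{verbatim}
import numpy as np

def dice(m: np.ndarray, m_hat: np.ndarray) -> float:
    """Dice coefficient between actual and predicted murmur masks."""
    return 2 * (m * m_hat).sum() / ((m ** 2).sum() + (m_hat ** 2).sum())

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)   # likelihood of detecting actual disease

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)   # likelihood of avoiding a false alarm
\end{verbatim}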
\subsubsection{Results}
Fig. \ref{fig:results-model-performances} shows the performance of all 7 models for four evaluation metrics. See Table \ref{table:results-performance} in Appendix for numeric details.
For base CNN models, predicting on the spectrogram ($M_s$) improved performance only very slightly over predicting on the amplitude ($M_0$), suggesting that CNNs can already model frequency information with their convolution filters.
Multi-task models ($M_\tau$, $M_\theta$) sacrificed diagnosis prediction accuracy to predict segment and shape parameters, yet still had high estimation errors and entirely inaccurate segment location predictions.
This suggests that merely treating parameters as stochastic variables to predict is less reliable than explicitly modeling spatial and geometric information.
Encoder-decoder models ($M_m$, $M_{a_m}$) performed better by more accurately predicting diagnoses than base CNN models, could reasonably locate segment regions, and had moderately low shape parameter and fit estimation errors.
DiagramNet was the best performing with highest diagnosis prediction accuracy, and very low shape parameter and fit estimation errors.
Due to end-to-end training with backpropagation, even its initial diagnosis $\hat{y}_0$ was better than that of other models, though its prediction based only on hypothesized abduction $\hat{y}_h$ was weaker.
Interestingly, although the murmur shape prediction $\hat{\bm{h}}_m$ is less expressive than the murmur amplitude prediction $\hat{\bm{a}}_m$, since it predicts straight lines, its fit error is still lower.
Hence, imbuing diagrammatic constraints in DiagramNet improved both its prediction performance and interpretability~\cite{rudin2019stop}.
Next, we examined the diagnostic performance for each cardiac disease.
Fig. \ref{fig:results-confusion-matrices} shows the confusion matrices for the base CNN model and the three diagnostic stages of DiagramNet.
Base CNN often confuses different diseases, unlike DiagramNet. Particularly, note how it confuses between MVP and MS due to their similar murmur shapes.
When predicting on the murmur shape fits $\hat{y}_h$, DiagramNet does confuse between systolic murmurs (AS, MR, MVP), but can accurately distinguish between MVP and MS due to considering heart phase information $\hat{\phi}_0$. The combined diagnosis prediction $\hat{y}$ ameliorates weaknesses in initial and fit-based predictions to produce a very clean confusion matrix.
Finally, Fig. \ref{fig:results-sensitivity-specificity} shows that DiagramNet has higher final sensitivity and specificity for all diagnoses than the base CNN model.
\begin{figure}[t]
\centering
\includegraphics[width=11.8cm]{figures/fig-results-model-performances.pdf}
\vspace*{-0.2cm}
\caption{
Results from the modeling study comparing DiagramNet with baseline and alternative models.
Model performance is measured with initial $y_0$, hypothesis-based $y_h$, and final $y$ diagnosis accuracy.
Explanation faithfulness is measured by the murmur segment $\bm{\tau}$ Dice coefficient, murmur shape parameter $\bm{\theta}$ MSE, murmur shape $\bm{h}_m$ fit MSE, and reconstructed murmur amplitude $\bm{a}_m$ MSE.
For blue metrics, higher is better; for red metrics, lower is better.
DiagramNet has highest prediction and segmentation accuracy, and lowest estimation error for parameter values and shape fits (all good).
See Table \ref{table:results-performance} (in Appendix) for numeric details.
}
\label{fig:results-model-performances}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=15.2cm]{figures/fig-results-confusion-matrices.pdf}
\vspace*{-0.2cm}
\caption{
Confusion matrices of diagnoses prediction for the base CNN and DiagramNet models.
This shows how each model may confuse one diagnosis for another.
For example, MS is regularly predicted as MVP in the base model, but much less with DiagramNet.
}
\label{fig:results-confusion-matrices}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=11.0cm]{figures/fig-results-sensitivity-specificity.pdf}
\vspace*{-0.2cm}
\caption{
Clinical performance of DiagramNet compared to the base CNN model for various diagnoses.
The final prediction $\hat{y}$ of DiagramNet is higher than baseline for all diagnoses, though the explanation-based predictions $\hat{y}_h$ are sometimes less accurate (specifically for AS) due to the simplicity of the abductive models.
}
\label{fig:results-sensitivity-specificity}
\vspace{-0.5cm}
\end{figure}
\clearpage
\subsection{Qualitative user study}
We evaluated the usefulness of diagrammatic explanations with a qualitative user study.
Since we implemented \nohyphens{DiagramNet} for heart murmurs, we needed to evaluate with domain experts.
Since all medical students learn about auscultation, they are suitable participants.
Furthermore, we did not conduct a summative evaluation due to the impracticality of recruiting many medical practitioners for sufficient statistical power.
Our key research questions were:
\begin{enumerate}[label=RQ\arabic*), left=1em..3.6em]
\item How do clinicians diagnose heart disease from auscultation \textit{without AI}?
\item How do clinicians interpret AI-based diagnosis \textit{without XAI}?
\item How do clinicians interpret AI-based diagnosis \textit{with XAI}? With diagrammatic or saliency explanations?
\end{enumerate}
\subsubsection{Experiment conditions and apparatus}
Participants used different user interfaces (UIs) of our heart auscultation diagnosis software tool, depending on the XAI condition.
All UIs were implemented on a black background to increase the visibility of the saliency maps.
In addition to the non-explainable baseline, there were three explainable UI variants:
\begin{enumerate}[label=\arabic*)]
\setcounter{enumi}{-1}
\item \textit{Baseline}
to play the heart sound and examine the phonocardiogram (PCG). After providing an initial diagnosis, the participant can click to reveal the AI's predicted diagnosis.
\item \textit{Murmur-diagram XAI}
that explains diagnosis prediction with a murmur diagram that overlays a red murmur shape on the predicted murmur region (Fig. \ref{fig:userstudy-shape}).
This diagrammatic explanation aligns with clinical training, so we expect it to be the most useful and trusted explanation type.
\item \textit{Spectrogram-saliency XAI}
that shows the PCG and spectrogram of the heart wave (Fig. \ref{fig:userstudy-spectrogram}).
On viewing, the saliency map is overlaid on the spectrogram. It retains colors for important pixels, making others translucent.
We included this since spectrograms are popular in audio AI and saliency maps are popular in XAI, but we expected it to be less trusted due to its non-use in clinical practice, spectrograms being overly technical, and saliency maps being potentially spurious.
\item \textit{Time-saliency XAI}
that explains diagnosis based on time saliency (Fig. \ref{fig:userstudy-time}),
which is presented in 1D along the time axis to indicate important regions.
This simpler saliency map does not require users to know of spectrograms.
\end{enumerate}
\subsubsection{Experiment method and procedure}
The study presented several cases to each participant; for each case, we performed a structured interview, observed how the participant interacted with the UI, and had them describe their thoughts using the think-aloud protocol.
We verified that participants had decent headphones to carefully hear the heart sounds.
We obtained ethics approval from our institution before commencing the study.
For each participant, the procedure was:
\begin{enumerate}[label=\arabic*)]
\item \textit{Introduction} about the experiment objective and procedure, and a primer on the cardiac diagnoses used in the study (N, AS, MVP, MR, or MS). We confirmed that all participants were familiar with these diagnoses.
\item \textit{Consent} to participate and have their voice and interactions recorded.
\item \textit{Tutorial} on the UI variants including how to interpret their explanations.
Since spectrograms are rather technical, we took care to teach how to interpret them, check for understanding later during think aloud, and clarify as needed.
\item Three UI sessions with
\begin{itemize}
\item \textit{Condition} randomly assigned to an XAI type (Murmur-diagram, Time-saliency, Spectrogram-saliency).
\item Up to two patient case trials, where
\begin{itemize}[label=$\circ$]
\item \textit{Case} is randomly chosen with a specific diagnosis (N, AS, MVP, MR, or MS).
As clinicians also use patient information (e.g., age, symptoms, medical history) when making diagnoses, we provided it on request.
\end{itemize}
\begin{enumerate}[label=\roman*)]
\item \textit{Initial diagnosis} is elicited from the participant to learn their decision and rationale based only on the audio and PCG. Participants using Spectrogram-saliency also see the spectrogram at this stage.
\end{enumerate}
\end{itemize}
\end{enumerate}
\begin{figure}[h!]
\centering
\includegraphics[width=12.5cm]{figures/fig-user-study-murmur-shape.pdf}
\vspace*{-0.25cm}
\caption{
User interface for the AI diagnosis system with Murmur-diagram XAI used in the user study.
The user can play the heart sound at various volumes and speeds, and view the phonocardiogram (PCG).
On clicking to view the explanation, the murmur diagram of the predicted diagnosis is overlaid on the PCG, showing the recognized murmur region and shape, as a shaded red area.
In this case, the model fit the crescendo-decrescendo shape in the murmur region to explain its prediction for AS.
}
\label{fig:userstudy-shape}
\vspace*{-0.25cm}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=12.5cm]{figures/fig-user-study-spectrogram.pdf}
\vspace*{-0.25cm}
\caption{
User interface for the spectrogram-based AI diagnosis system with Spectrogram-saliency XAI used in the user study.
The participant can view the PCG (top) and spectrogram (middle) of the heart sound.
For the spectrogram, we used the Viridis color map, where yellow-green indicates higher amplitude for the frequency at the time shown, and dark purple indicates lower amplitude.
After initial diagnosis, the participant can click to view the predicted diagnosis, and click to view the explanation.
For this UI, the explanation is a saliency map showing the important regions in the spectrogram (bottom).
More important regions are left colored, while less important ones are more transparent.
In this case, the model predicts that the diagnosis is AS, because the low frequencies during S1 and S2 were most important, followed by sporadic time regions in the murmur and one region near the apex with higher frequencies.
}
\label{fig:userstudy-spectrogram}
\vspace*{-0.25cm}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=12.5cm]{figures/fig-user-study-time-saliency.pdf}
\vspace*{-0.25cm}
\caption{
User interface for the AI diagnosis system with Time-saliency XAI used in the user study.
On clicking to view the explanation, a 1D saliency map is shown to indicate important time regions for the prediction. More red indicates more important.
In this case, the model thinks that the start of S1, S2, and some sporadic regions in the murmur (perhaps, including the apex) were important for predicting aortic stenosis (AS).
Strangely, the model also thinks that the end of the PCG is important.
}
\label{fig:userstudy-time}
\vspace{-1.0cm}
\end{figure}
\clearpage
\begin{enumerate}[label=\arabic*)]
\setcounter{enumi}{4}
\item[] \begin{itemize}
\begin{enumerate}[label=\roman*), start=2]
\item \textit{AI diagnosis} is revealed, and the participant is asked whether he/she agreed or disagreed, and why.
\item \textit{XAI explanation} is revealed, showing an explanation based on Condition. The participant interprets the explanation, describes helpful and unhelpful aspects, and provides suggestions for improvement.
\end{enumerate}
\end{itemize}
\item \textit{XAI ranking} where we ask the participant to judge XAI types by convincingness and explain why.
\item \textit{Debrief and conclusion.} We thank the participant for their time and feedback, and conclude with compensation.
\end{enumerate}
\subsection{Findings}
We recruited 7 medical students using snowball sampling.
They were five females and two males, aged 20--23.
All were in year 4 or 5 of their MBBS undergraduate degree.
It was difficult to recruit more due to their busy schedules.
The study took 30 minutes, and each participant was compensated with a \$10 gift card.
Participants completed 40 cases collectively.
We briefly describe quantitative results elicited from participant comments.
They were good at diagnosing independently (33/40 = 82.5\% correct),
generally agreed with the AI (34/40 = 85.0\% agreement) before seeing explanations,
and trusted murmur diagram explanations (12/14) more than saliency explanations (5/13 for Spectrogram-saliency, 8/13 for Time-saliency).
Since participants were thinking aloud and discussing with the experimenter, it was not meaningful to evaluate task completion times.
We performed a thematic analysis on the participant sessions and identified several key themes with regards to XAI usage, which we discuss next.
\subsubsection{Diagrammatic explanations were most domain aligned, helpful and trusted.}
After diagnosing multiple cases across all XAI types, all but one participant ranked Murmur-diagram XAI as the most convincing.
Commenting on an AS case, P2 mentioned that \textit{"when I see crescendo-decrescendo, I trust the AI diagnosis is correct, compared to if the [Time-saliency] interface shows straight line ... I can understand this without having to think ... when I see this [shape] outline given by the AI, then I realised that the [amplitude] waveform does depict crescendo-decrescendo better, but before I saw the red thing I would have looked at this and see that the volume is very level, I blocked out the fact that this is an up-slope/down-slope but I now clearly see its there."}
P4 noticed that \textit{"there is a linear murmur [between S1 and S2], which makes it pansystolic"} and agreed that the explanation \textit{"aligns with my understanding"}.
P6 remarked how the murmur shapes \textit{"aligns to what I learnt in school"}.
The exception was P3, who felt that \textit{"systolic murmurs are better differentiated by the spectrograms"}, and that some shape-based explanations \textit{"did not really conform to usual shapes"}, referring to an MS case where he \textit{"was expecting more of a rectangular [shape]"} but saw a decrescendo-uniform-crescendo shape instead.
\subsubsection{Unfamiliar and unconventional explanations are less interpretable and trustworthy}
Participants were unfamiliar with spectrograms initially, though many could eventually understand them.
P1 \textit{"didn't really look at the frequency much"} and felt that it was unconventional for diagnosis: \textit{"it kind of is different from what we will use in real life"}.
P5 \textit{"personally am not used to a spectrogram... so kind of... no I don't think it makes me trust the AI because I don't understand it myself"}.
In contrast, P7 \textit{"agreed with [the XAI], lower pitch meaning that its the sounds of regular valve closure, and murmur is higher pitch than regular S1 and S2"}. She remarked that the \textit{"underlying theory of the spectrogram explanation makes sense to me, but [I've] never seen or learnt the spectrogram explanation. However, quite intuitive after brief introduction."}
In contrast, participants found the simpler Time-saliency XAI intuitive.
P5 noticed the salient time regions and thought \textit{"that [its decision] was fair, since the AI pays attention to the S1, S2, and the click."}
However, participants were not taught to diagnose with this unconventional time-saliency, and struggled to interpret it; P7 felt that \textit{"this is more confusing to me, therefore I trust the AI less ... I didn't understand what it was saying just by reading important time spans."}
\subsubsection{Some benefits of rich technical XAI}
Some participants wanted detailed explanations.
P5 thought that \textit{"the spectrogram makes me trust the AI more [than Time-saliency]; it is really very intricate in differentiating between high or low pitch ... spectrogram can be quite unconventional but it has the most answer out of all, most details out of all 3 of them, but the extra details are useful."}
P4 liked the experimenter-provided \textit{"verbal explanation of spectrograms, since the more information can be obtained to increase the user's interpretation ability... if someone can give a clearer interpretation, the frequency information will be more valuable than the time-span one."} This prompts the need for relatable explanations~\cite{zhang2022towards}.
\subsubsection{Need for supplementary information}
Some participants used information beyond what the model processed.
P2 remarked that, with a real patient, \textit{"I would be confirming [the diagnosis] based on the position of stethoscope, anti-apex, is the apex the loudest part I'm hearing this sound. That's the first thing, time with pulse check if really pan-systolic, [since] other murmurs can be pan-diastolic."}
On examining an MS case, P3 diagnosed it as either AS or MS and wanted a differential diagnosis (with additional tests or observations) to eliminate which would be less likely.
\section{Discussion}
We discuss the implications, limitations, and generalization of our work on diagrammatization.
\subsection{Diagrammatization to support human cognition and user domain knowledge}
Despite the myriad of XAI techniques, many have neglected the domain knowledge of users,
thus leaving an interpretability gap.
Diagrammatization goes beyond supporting human-centric XAI at the cognitive level by tailoring explanations to support specific reasoning processes~\cite{wang2019designing}, cognitive load limitations~\cite{abdul2020cogam, lage2019evaluation}, uncertainty aversion~\cite{wang2021show}, preferences~\cite{lage2018human, ross2017right, erion2021improving}, or relatability~\cite{zhang2022towards}.
It also goes beyond social factors~\cite{ehsan2021expanding, liao2020questioning, veale2018fairness} and fitting contextual situations~\cite{lim2009assessing}.
Diagrammatization provides a basis to support \textit{user-centric XAI} that satisfies both human cognition and user domain knowledge.
This will allow users to interpret the AI explanations at a more useful, higher level, further fostering human-AI collaboration.
\rev{Our key proposal was for the AI explanation to be hypothesis-driven (with murmur shapes), rather than deferring to users to abductively infer their own hypotheses.
While we compared segment-based ($M_m$) vs. shape-based ($M$) models in our modeling study (Section \ref{sec:modeling-study}), we did not compare segment-based vs. shape-based explanations. That would have specifically evaluated user vs. AI abduction.
Instead, we focused on evaluating diagrammatic explanations against popular saliency map explanations to clearly show the latter's poor fit.
In addition to reducing the user's interpretability burden for abductive reasoning, this evaluation addresses the need to follow the diagrammatic conventions of the expert domain.
We note that segmentation is a common prediction task in AI, yet some application developers may not immediately consider it a form of explanation.
However, saliency maps are a specific form of image localization~\cite{selvaraju2017grad}, which also includes segmentation. Thus segmentation is a valid approach for explaining image predictions~\cite{tjoa2020survey}.
Our approach uses segmentation integrally for the AI prediction.
A similar argument can be made for shape-fitting hypotheses being merely fitted lines.
Yet, clinicians explain their diagnoses on PCGs by drawing simplified line diagrams describing murmur shapes~\cite{judge2015heart}, thus DiagramNet automates this explanatory process.
Our technical approach creates an intelligent AI to apply its knowledge of known murmur shapes to real data, thus performing abduction to the best murmur shape \textit{on its own}.
The shape fitting is not done \textit{post-hoc} after the AI has made its prediction, but rather explicitly encoded as part of its reasoning \textit{ante-hoc}.
Collectively, the murmur shape diagram explanations from DiagramNet mimic the clinician reasoning process to identify where the murmur is (segmentation), abductively infer the most likely murmur shape (hypothesis evaluation), and resolve the explanations to make a coherent diagnosis.
Thus, with diagrammatization, the AI can autonomously generate its hypotheses and evaluate them to derive its prediction.}
\subsection{Modeling and evaluating domain-relevant explanations with domain experts}
Developing concrete explanations for complex domains requires significant effort in formulation and evaluation.
We investigated diagrammatization for one application --- heart auscultation;
future work can validate it for other domains.
We conducted a small qualitative study due to challenges in recruiting domain experts. This is a perennial challenge when recruiting busy professionals with rare expertise, and will intensify as we develop more useful, domain-relevant explanations.
Nevertheless, we identified strengths and some weaknesses in our approach; a larger, summative study would have limited value despite high cost.
To model diagrammatic XAI, we formalized murmur shapes with amplitude, but omitted other concepts such as pitch, position, and radiation.
Yet, our model performance and convincingness are already superior.
Incorporating these features is left to future work for a more complete engineered solution.
\subsection{Generalizing diagrammatization to other domains}
Our approach for diagrammatization requires tailoring to specific applications.
This helps to solve practical problems more concretely.
We describe the general approach of diagrammatization to apply to other applications:
\begin{enumerate}[label=\arabic*)]
\item \textit{Study} the concepts and decision processes in the application domain.
\begin{itemize}
\item Identify the system of representation, its conventions for interpretation and rules for evaluation.
\item Identify the structured hypotheses for abductive reasoning.
\end{itemize}
\item \textit{Formalize} the representation and rules mathematically, so that we can compute on them.
\item \textit{Implement} the formal specifications in a predictive AI model, taking note to identify specific stages.
\item \textit{Evaluate} with domain experts to check consistency with the user mental model of the domain problem.
\end{enumerate}
Abduction is inference to the best explanation, which we have implemented as \textit{hypothesis fitting}. This goes beyond \textit{curve fitting} of line graphs, and can be implemented for other domains that reason with other representations.
We discuss another application to generalize diagrammatization.
Consider skin cancer diagnosis using computer vision and explaining with the ABCDE features.
Instead of merely rendering a saliency map or stating concept influences for XAI, we can draw explanatory diagrams.
\textit{A}symmetry can be explained by bisecting the image and showing whether each half differs from each other more than a threshold.
\textit{B}order smoothness can be shown by tracing the lesion outline, computing smoothness as the 2nd-order derivative of the traced curve, and comparing against a threshold.
\textit{C}olor can be assessed by highlighting parts of the lesion with different pigments and comparing to a threshold of contrast ratio.
\textit{D}iameter can be shown by drawing a bounding circle and diameter with length reading, and comparing against the 6mm threshold.
Thus, the model is more trustworthy, since it can demonstrate the same geometrical measurements as a medical expert.
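To make the asymmetry check concrete, the following is a minimal NumPy sketch (the threshold \texttt{alpha}, the choice of bisection axis, and all function names are illustrative assumptions, not part of an existing implementation):
\begin{verbatim}
import numpy as np

def asymmetry_score(mask):
    # Crop the binary lesion mask to its bounding box.
    ys, xs = np.nonzero(mask)
    sub = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(bool)
    # Reflect across the vertical bisecting axis of the bounding box.
    reflected = sub[:, ::-1]
    # Non-overlapping area between the original and reflected outlines,
    # normalized by the lesion area.
    return np.logical_xor(sub, reflected).sum() / sub.sum()

def explain_asymmetry(mask, alpha=0.2):  # alpha: illustrative threshold
    a = asymmetry_score(mask)
    return {"asymmetry": a, "asymmetrical": a > alpha}
\end{verbatim}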
\subsection{\rev{Generalizing DiagramNet to other applications}}
\rev{The modularity of DiagramNet helps in its generality, since each module has a distinct purpose as described in Section \ref{sec:DiagramNet}.}
Although implementing diagrammatization requires substantial formulation for each application, DiagramNet is generalizable to applications that use line diagrams.
This requires extracting a line representation from the input instance, formulating a parametric function for each hypothesis, segmenting the diagram to identify the region to fit, and fitting the best hypothesis to the instance data.
We discuss three examples summarized in Table \ref{table:discussion-alt-diagrams}:
\begin{enumerate}[label=\alph*)]
\item \textit{Electrocardiograms (ECG)} are another clinical diagram for cardiac diagnosis.
Clinicians diagnose atrial flutter by inferring a "sawtooth" pattern (Table \ref{table:discussion-alt-diagrams}a).
Like PCG, ECG is time series with high sampling rate,
but instead of extracting amplitude $\bm{a}$ from the sound wave, we directly use the ECG trace signal $\bm{x}$.
On segmenting the region of interest,
we can formulate the sawtooth pattern as a piecewise linear function similar to how we modeled murmur shapes.
Most of DiagramNet can be reused here \rev{(Steps 2, 5-7 in Fig. \ref{fig:model-DiagramNet})} and only input $\bm{x}$ \rev{(Step 1)} and hypotheses $\mathscr{F}_h(\bm{\theta})$ \rev{(Step 5)} need to be redefined.
\item \textit{Candlestick charts} are a time series diagram used to analyse stock price.
Unlike PCG, it represents time at a lower sampling rate (e.g., days, years).
Each candlestick represents low, opening, closing, and high prices for each time period.
Analysts look for chart patterns like "broadening top", "descending triangle", and "rising wedge" to anticipate how a stock would change~\cite{bulkowski2021encyclopedia} (Table \ref{table:discussion-alt-diagrams}b).
To explain an imminent breakdown, a "descending triangle" explanation could be fit to a segment $(\tau_1,\tau_L)$ with two lines $(\bm{x}_1,\bm{x}_2)$.
We can estimate the bottom line as the 10th percentile of the low prices, $x_1(t) = P(x_{\text{low}},10)$, and the hypotenuse line from a linear fit of the high prices, $x_2(t) = -wt + x_0$, where $w$ and $x_0$ are fit from data (see the sketch after this list).
Changes to DiagramNet are similar to those for ECG \rev{(only Steps 1 and 5 in Fig. \ref{fig:model-DiagramNet} need to be changed)}, but hypotheses are formulated with 2 linear functions instead.
\item \textit{Photographs} of skin lesions with ABCDE annotations are an image-diagram method to diagnose skin cancer.
This representation is very different from the audio time series of our work.
We consider the analysis of lesion \textit{A}symmetry (Table \ref{table:discussion-alt-diagrams}c):
1) extract the lesion outline via edge detection (e.g., Sobel filter),
2) reflect the outline across a bisecting axis, and
3) compute the non-overlapping area $a$ between the original and reflected outlines.
The lesion is asymmetrical if $a$ is larger than a threshold $\alpha$, thus explaining the malignant prediction.
Changes to DiagramNet are also modest, since it already models time as a 1D image. Here, the image is a 2D tensor, and the outline extraction and reflection can be implemented heuristically to extract the area $a$ \rev{(Step 1 in Fig. \ref{fig:model-DiagramNet})}. The hypothesis $a > \alpha$ is rule-based and does not need fitting to the instance \rev{(so Steps 4 and 5 can be omitted)}.
\end{enumerate}
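As a minimal sketch of the descending-triangle fit for example (b), assuming NumPy arrays \texttt{t}, \texttt{low}, and \texttt{high} over the segment $(\tau_1,\tau_L)$ (the function name and the goodness-of-fit choice are illustrative assumptions, not part of our implementation):
\begin{verbatim}
import numpy as np

def fit_descending_triangle(t, low, high):
    # Support line: 10th percentile of the low prices, x1(t) = P(x_low, 10).
    x1 = np.percentile(low, 10)
    # Resistance line: linear fit of the high prices. A descending triangle
    # yields a negative slope (the text writes x2(t) = -w*t + x0 with w > 0).
    slope, x0 = np.polyfit(t, high, deg=1)
    # Mean squared residual as a simple goodness-of-fit score.
    mse = np.mean((high - (slope * t + x0)) ** 2)
    return x1, (slope, x0), mse
\end{verbatim}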
\rev{Specific technical details for each application are left for future work.}
The aforementioned domains rely on custom diagrams for diagrammatic explanations.
However, if a domain uses basic off-the-shelf visualizations, then these can be used, subject to inherent domain constraints.
\begin{table}[t!]
\centering
\caption{
Generalizing DiagramNet to different applications
--- a) cardiac diagnosis, b) stock price prediction, c) skin cancer detection ---
i) with various diagrams
ii) of different data types to
iii) input base data,
iv) extract the diagram line representation
v) with various explanation hypotheses,
vi) to justify predictions.
Image credits:
ECG adapted from {\color[HTML]{0060df}\underline{\href{https://commons.wikimedia.org/wiki/File:Atrial_flutter34.svg}{\smash{Atrial\_flutter34}}}} by {\color[HTML]{0060df}\underline{\href{https://commons.wikimedia.org/wiki/User:Jmh649}{\smash{James Heilman, MD}}}} under the {\color[HTML]{0060df}\underline{\href{https://creativecommons.org/licenses/by-sa/3.0/deed.en}{CC BY-SA 3.0 license}}},
stock price data from {\color[HTML]{0060df}\underline{\href{https://commons.wikimedia.org/wiki/File:Gold_Price_(1968-2008).gif}{\smash{Gold Price (1968-2008)}}}} by {\color[HTML]{0060df}\underline{\href{https://commons.wikimedia.org/wiki/User:Emilfaro}{\smash{Emilfaro}}}}, and
{\color[HTML]{0060df}\underline{\href{https://visualsonline.cancer.gov/details.cfm?imageid=9186}{\smash{Melanoma}}}} from the National Cancer Institute.
}
\vspace{-0.05cm}
\label{table:discussion-alt-diagrams}
\begin{tabular}{llccc}
\hline
& & a) Cardiac diagnosis & b) Stock price prediction & c) Skin cancer detection \\ \cline{3-5} \addlinespace[0.1cm]
i. &
Diagram &
\multicolumn{1}{m{3.6cm}}{\includegraphics[width=3.6cm]{figures/fig-discussion-alt-diagram-ecg.pdf}} &
\multicolumn{1}{m{3.6cm}}{\includegraphics[width=3.6cm]{figures/fig-discussion-alt-diagram-stocks.pdf}} &
\multicolumn{1}{m{3.6cm}}{\includegraphics[width=3.6cm]{figures/fig-discussion-alt-diagram-melanoma.pdf}} \\
ii. & Data type & Time series (over msec) & Time series (over years) & Image \\ \addlinespace[0.2cm]
iii. & Base data & Electrocardiogram (ECG) & Candlestick chart & Photograph \\
iv. & Diagram line & ECG trace signal & Key prices over time & Lesion outline \\ \addlinespace[0.2cm]
v. & Explanation & Sawtooth wave & Descending triangle & Asymmetrical outline \\
vi. & Prediction & Atrial flutter & Breakdown imminent & Malignant tumor \\ \hline
\end{tabular}
\vspace{-0.2cm}
\end{table}
\subsection{\rev{Diagrammatization compared to Visualization}}
\rev{While diagrams can refer to drawings or visualizations, we have defined diagrammatization as comprising three aspects:}
\begin{quote}
\centering
\rev{Diagrammatization = {\Large\textcircled{\small{i}}} Peircean abduction + {\Large\textcircled{\small{ii}}} Domain conventions + {\Large\textcircled{\small{iii}}} Peircean diagrams}
\end{quote}
\rev{Each aspect has specific benefits for XAI technical development and user experience.
First, with diagrammatization, the AI performs Peircean abductive reasoning to generate and evaluate specific hypotheses.
This improves user trust, since the AI's reasoning is human-like.
This improves user experience by reducing the burden on users to have to do their own abduction.
Current use of visualizations in XAI is typically of convenient or basic charts and heatmaps that do not exploit domain hypotheses, thus requiring users to perform additional abduction.
There are also modeling benefits. Used in our ante-hoc approach, the hypotheses regularize and constrain the AI prediction, thus the AI's reasoning and explanation would be less spurious than current XAI~\cite{zhang2022debiased}, and its prediction performance improved.
Post-hoc visualization explanations do not change the original AI prediction, and hence do not improve the AI performance.}
\rev{Second, diagrammatic explanations should be constrained by diagram conventions in the target domain. Thus, domain experts would be familiar with them and can interpret them efficiently and effectively.
Current XAI visualizations typically use off-the-shelf charts and heatmaps that, while simple to read, are not necessarily relevant to domain experts who have been trained to use specific or sophisticated diagrams with implicit ontologies, conventions, and rules.
In our case, despite murmur shapes taking the rudimentary form of line charts, this work is the first to explain with shape-based murmur diagrams that are clinically relevant, since XAI developers over-rely on traditional charts and heatmaps that are better suited to data scientists.
With diagrammatization, we encourage XAI developers to consider how domain experts explain with their own conventions to develop more sophisticated, domain-relevant visualizations.}
\rev{Third, we follow the definition of diagrams by Peirce and other philosophers~\cite{peirce1976new, hoffmann2010diagrams} to encompass graphical (visualization), symbolic (math, equations), and sentential (verbalization) representations (see Fig. \ref{fig:concept-diagrammatization-venn-diagram}).
In this work, we have examined diagrammatization with the visualization of murmur diagrams.
Although we have not studied how to generate symbolic or text-based "diagrams", the framing of diagrammatization can support this future research.}
\subsection{Diagrammatization compared to Rationalization}
We have shown that diagrammatic explanations are useful, but other forms can be valuable too. We discuss this with a compelling verbal explanation.
In heart auscultation, a clinician may explain that a patient has aortic stenosis (AS) because \textit{"the aortic valve leaflets are abnormally stiff due to calcification, causing the valve to have difficulty opening and closing, producing a crescendo-decrescendo murmur sound"}.
This verbal explanation is comprehensive, and actually consists of multiple representation types. We discuss how each representation compares with diagrammatization.
\begin{enumerate}[label=\arabic*)]
\item The explanation is a \textit{rationalization}~\cite{ehsan2018rationalization} which goes beyond the direct evidence (PCG) of the observation, since the clinician did not directly observe the calcification of the aortic valve; that would require an echocardiogram.
Besides, automatic rationalization may be spurious or irrelevant since it depends on unbounded text.
\item It is only explains at the \textit{low-level} concepts from the physical concepts to murmur shape description; there remains a gap between the "crescendo-decrescendo" concept to the audio signal.
DiagramNet overlays its explanation (the murmur shape) on the audio representation (PCG) to explicitly show its hypothesis in context of the observation.
\item It is a \textit{concept-based} explanation that requires knowledge of valves, their properties and location, and the causal pathway from calcification to stiffness to opening/closing to murmur shape.
This requires modeling knowledge bases and causal networks, which is very costly to construct, and beyond the scope of our work.
\end{enumerate}
Since we focused on developing XAI for clinicians who already know how to extrapolate from murmur shape to disease, we omitted the low-level, concept-based, rationalization explanation in this work.
Nevertheless, a more useful deployed XAI system should combine diagrams and conceptual rationalization to explain deeply.
\subsection{Diagrammatization for confirmatory analysis}
Diagrammatization uses abductive reasoning to generate specific hypotheses and test them. It requires hypotheses to be mathematically formulated with defined goodness-of-fit evaluation metrics.
Diagrammatization is unsuitable for
1) \textit{exploratory analysis} with no hypotheses or with too many hypotheses, such as when data scientists debug models by looking for spurious effects rather than explicitly hypothesizing bugs.
Open-ended representations like feature attribution or saliency map would be more suitable here.
It is also unsuitable for
2) \textit{unbounded representations} like natural language explanations that acquire open-ended text from people without expectations on a finite set of explanations, so the number of hypotheses may also be unbounded.
However, categorizing the text responses into a discrete taxonomy would simplify identifying key hypotheses for abductive reasoning, and this could enable diagrammatization.
\section{Conclusion}
We presented diagrammatization as a general representation paradigm for XAI to support diagrammatic reasoning and provide abductive explanations \rev{to narrow the interpretability gap}.
We developed DiagramNet, a modular explainable deep neural network with multiple stages aligned with the Peircean abduction process, and trained it to predict cardiac disease from heart sounds
and generate murmur diagram explanations.
Our modeling evaluations found that DiagramNet not only had more faithful explanations but also better prediction performance than several baseline models.
Our qualitative user study found that clinicians prefer diagram-based explanations over saliency map explanations on spectrograms,
since the former align with their medical training and draw on explicit medical notions with which they are familiar.
This work gives insights into diagram-based, abductive explainable AI, and contributes a new basis towards user-centered XAI.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{figures/SFPull3.pdf}
\caption{Overview of our approach. Features extracted from the input images are used to construct a 4D correlation volume. We initialize the SE3 motion field, $\mathbf{T}$, to be the identity at every pixel. During each iteration, the update operator uses the current SE3 motion estimate to index from the correlation volume, using the correlation features and hidden state to produce estimates of pixel correspondence and rigid-motion embeddings. These estimates are plugged into Dense-SE3, a least-squares optimization layer which uses geometric constraints to produce an update to the SE3 field. After successive iterations we recover a dense SE3 field, which can be decomposed into rotational and translational components. The SE3 field can be projected onto the image to recover optical flow.}
\label{fig:RAFT3D}
\end{figure*}
Scene flow is the task of estimating pixelwise 3D motion between a pair of video frames\cite{vedula1999three}. Detailed 3D motion is requisite for many downstream applications including path planning, collision avoidance, virtual reality, and motion modeling. In this paper, we focus on stereo scene flow and RGB-D scene flow, which address stereo video and RGB-D video respectively.
Many scenes can be well approximated as a collection of rigidly moving objects. The motion of driving scenes, for example, can be modeled as a variable number of cars, buses, and trucks. The most successful scene flow approaches have exploited this structure by decomposing a scene into its rigidly moving components\cite{menze2015object,vogel20113d,vogel20153d,ma2019deep,behl2017bounding,jaimez2015motion,jaimez2015primal,kumar2017monocular}. This introduces a powerful prior which can be used to guide inference. While optical flow approaches typically assume piecewise smooth motion, a scene containing rigid objects will exhibit piecewise constant 3D motion fields (Fig. \ref{fig:RAFT3D}).
Recently, many works have proposed integrating deep learning into scene flow estimation pipelines. A common approach has been to use object detection\cite{behl2017bounding,cao2019learning} or segmentation \cite{behl2017bounding,ma2019deep,lv2018learning,ren2017cascaded} networks to decompose the scene into a collection of potentially rigidly moving objects. Once the scene has been segmented into its rigidly moving components, more traditional optimization can be used to fit a motion model to each of the objects. One limitation of this approach is that the networks require instance segmentations to be trained and cannot recover the motion of new unknown objects. Object detection and instance segmentation introduce non-differentiable components into the network, making end-to-end training difficult without bounding box or instance level supervision.
We introduce RAFT-3D, an end-to-end differentiable architecture which estimates pixelwise 3D motion from stereo or RGB-D video. RAFT-3D is built on top of RAFT~\cite{teed2020raft}, a state-of-the-art optical flow architecture that builds all-pairs correlation volumes and uses a recurrent unit to iteratively refine a 2D flow field. We retain the basic iterative structure of RAFT but introduce a number of novel designs.
The main innovation we introduce is rigid-motion embeddings, which are per-pixel vectors that represent a soft grouping of pixels into rigid objects. During inference, RAFT-3D iteratively updates the rigid-motion embeddings such that pixels with similar embeddings belong to the same rigid object and follow the same SE3 motion.
Integral to rigid-motion embeddings is \emph{Dense-SE3}, a differentiable layer that seeks to ensure that the embeddings are geometrically meaningful. Dense-SE3 iteratively updates a dense field of per-pixel SE3 motion by performing unrolled Gauss-Newton iterations such that the per-pixel SE3 motion is geometrically consistent with the current estimates of rigid-motion embeddings and pixel correspondence. Because of Dense-SE3, the rigid-motion embeddings can be indirectly supervised from only ground truth 3D scene flow, and our approach does not need any supervision of object boxes or masks.
Fig. \ref{fig:RAFT3D} provides an overview of our approach. RAFT-3D takes a pair of RGB-D images as input. It extracts features from the input images and builds a 4D correlation volume by computing the visual similarity between all pairs of pixels. RAFT-3D maintains and updates a dense field of pixelwise SE3 motion. During each iteration, it uses the current estimate of SE3 motion to index from the correlation volume. A recurrent GRU-based update operator takes the correlation features and produces an estimate of pixel correspondence, which is then used by Dense-SE3 to generate updates to the SE3 motion field.
RAFT-3D achieves state-of-the-art accuracy.
On FlyingThings3D, under the two-view evaluation~\cite{liu2019flownet3d}, RAFT-3D improves the best published accuracy ($\delta < 0.05$) from 34.3\% to 83.7\%. On KITTI, RAFT-3D achieves an error of 5.77, outperforming the best published method (6.31), despite using no object instance supervision.
\section{Related Work}
The task of reconstructing a 3D motion field from video is often referred to as estimating ``scene flow''.
\vspace{1mm} \noindent \textbf{Optical Flow:} Optical flow is the problem of estimating dense 2D pixel-level motion between a pair of frames. Early work formulated optical flow as an energy minimization problem, where the objective was a combination of a data term---encouraging the matching of visually similar image regions---and a regularization term---favoring piecewise smooth motion fields. Many early scene flow approaches evolved from this formulation, replacing piecewise \emph{smooth} flow priors with a piecewise \emph{constant} rotation/translation field prior\cite{vogel2013piecewise,menze2015object}. This greater degree of structure allowed scene flow methods to outperform approaches which treated optical flow or stereo separately\cite{vogel20113d}.
Recently, the problem of optical flow has been reformulated in the context of deep learning. Many works have demonstrated that a neural network can be directly trained to estimate optical flow between a pair of frames, and a large variety of network architectures have been proposed for the task \cite{flownet1,flownet2,pwcnet,ranjan2017optical,lu2020devon,yang2019volumetric,teed2020raft}. RAFT\cite{teed2020raft} is a recurrent network architecture for estimating optical flow. RAFT builds a 4D correlation volume by computing the visual similarity between all pairs of pixels; then, during inference, a recurrent update operator indexes from the correlation volume to produce a flow update. A unique feature of RAFT is that a single, high resolution, flow field is updated and maintained.
Our approach is based on the RAFT architecture, but instead of a flow field, we estimate a SE3 motion field, where a rigid body transformation is estimated for each pixel. When projected onto the image, our SE3 motion vectors give more accurate optical flow than RAFT.
\vspace{1mm} \noindent \textbf{Rectified Stereo:} Rectified stereo can be viewed as a 1-dimensional analog to optical flow, where the correspondence of each pixel in the left image is constrained to lie on a horizontal line spanning the right image. Like optical flow, traditional methods treated stereo as an energy minimization problem\cite{hirschmuller2005accurate,ranftl2012pushing} often exploiting planar information\cite{bleyer2011patchmatch}.
Recent deep learning approaches have borrowed many core concepts from conventional approaches such as the use of a 3D cost volume \cite{gcnet}, replacing hand-crafted features and similarity metrics with learned features, and cost volume filtering with a learned 3D CNN. Like optical flow, a variety of network architectures have been proposed \cite{gcnet,zhang2019ga,guo2019group,chang2018pyramid}. Here we use GA-Net\cite{zhang2019ga} to estimate depth for each left/right image pair.
\vspace{1mm} \noindent \textbf{Scene Flow:} Like optical flow and stereo, scene flow can be approached as an energy minimization problem. The objective is to recover a flow field such that (1) visually similar image regions are aligned and (2) the flow field maximizes some prior such as piecewise rigid motion and piecewise planar depth. Both variational optimization\cite{quiroga2014dense,jaimez2015motion,jaimez2015primal} and discrete optimization\cite{menze2015object,jaimez2015primal} approaches have been explored for inference. Our network is designed to mimic the behavior of an optimization algorithm. We maintain an estimate of the current motion field which is updated and refined with each iteration.
Jaimez et al.\cite{jaimez2015motion} proposed an alternating optimization approach for scene flow estimation from a pair of RGB-D images, iterating between grouping pixels into rigidly moving clusters and estimating the motion model for each cluster. Our method shares key ideas with this approach, namely the grouping of pixels into rigidly moving objects; however, we avoid a hard clustering by using rigid-motion embeddings, which softly and differentiably group pixels into rigid objects.
Recent works have leveraged the object detection and semantic segmentation ability of deep networks to improve scene flow accuracy\cite{ma2019deep,cao2019learning,ren2017cascaded,behl2017bounding,gordon2019depth}. In these works, an object detection or instance segmentation network is trained to identify potentially moving objects, such as cars or buses. While these approaches have been very effective for driving datasets such as KITTI where moving objects can be easily identified using semantics, they do not generalize well to novel objects. An additional limitation is that the detection and instance segmentation introduces non-differentiable components into the pipeline, requiring these components to be trained separately on ground truth annotation. Ma et al. \cite{ma2019deep} was able to train an instance segmentation network jointly with optical flow estimation by differentiating through Gauss-Newton updates; however, this required additional instance supervision and pre-training on Cityscapes\cite{cordts2016cityscapes}. On the other hand, our network outperforms these approaches without using object instance supervision.
Yang and Ramanan\cite{yang2020upgrading} take a unique approach and use a network to predict optical expansion, or the change in perceived object size. Combining optical expansion with optical flow gives normalized 3D scene flow. The scale ambiguity can be recovered using Lidar, stereo, or monocular depth estimation. This approach does not require instance segmentation, but also cannot directly enforce rigid motion priors.
Another line of work has focused on estimating 3D motion between a pair \cite{liu2019flownet3d,wang2020flownet3d++,gu2019hplflownet} or sequence\cite{liu2019meteornet,fan2019pointrnn} of point clouds. These approaches are well suited for Lidar data where the sensor produces sparse measurements. However, these works do not directly exploit scene rigidity. As we demonstrate in our experiments, reasoning about object level rigidity is critical for good accuracy.
\section{Approach}
We propose an iterative architecture for scene flow estimation from a pair of RGB-D images. Our network takes in two image/depth pairs, $(I_1, Z_1)$, $(I_2, Z_2)$, and outputs a dense transformation field $\mathbf{T} \in SE(3)^{H \times W}$ which assigns a rigid body transformation to each pixel. For stereo images, the depth estimates $Z_1$ and $Z_2$ are obtained using an off-the-shelf stereo network.
\subsection{Preliminaries}
We use the pinhole projection model and assume known camera intrinsics. We use an augmented projection function which maps a 3D point to its projected pixel coordinates, $(x,y)$, in addition to inverse depth $d = 1/Z$. Given a homogeneous 3D point $\mathbf{X} = (X, Y, Z, 1)$
\begin{equation}
(x, y, d) = \pi(\mathbf{X}) = \begin{pmatrix} f_x (X/Z) + c_x \\ f_y (Y/Z) + c_y \\ 1/Z \end{pmatrix}
\end{equation}
where $(f_x, f_y, c_x, c_y)$ are the camera intrinsics.
Given a dense depth map $Z \in \mathbb{R}_+^{H\times W}$, we can use the inverse projection function.
\begin{equation}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \pi^{-1}(x,y,d) = \frac{1}{d} \begin{pmatrix} (x-c_x)/f_x \\ (y-c_y)/f_y \\ 1 \\ d \end{pmatrix}
\end{equation}
which maps from pixel $(x, y, d)$ to the point $(X, Y, Z, 1)$, again with inverse depth $d = 1/Z$.
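For reference, the two projection functions transcribe directly into NumPy (array layout and function names are our own, not part of the released implementation):
\begin{verbatim}
import numpy as np

def project(X, intrinsics):
    # Augmented projection: homogeneous 3D point -> (x, y, inverse depth).
    fx, fy, cx, cy = intrinsics
    x = fx * X[..., 0] / X[..., 2] + cx
    y = fy * X[..., 1] / X[..., 2] + cy
    d = 1.0 / X[..., 2]
    return np.stack([x, y, d], axis=-1)

def inv_project(xyd, intrinsics):
    # Inverse projection: (x, y, inverse depth) -> homogeneous 3D point.
    fx, fy, cx, cy = intrinsics
    x, y, d = xyd[..., 0], xyd[..., 1], xyd[..., 2]
    return np.stack([(x - cx) / (fx * d), (y - cy) / (fy * d),
                     1.0 / d, np.ones_like(d)], axis=-1)
\end{verbatim}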
\vspace{1mm} \noindent \textbf{Mapping Between Images:} We use a dense transformation field, $\mathbf{T} \in SE(3)^{H \times W}$, to represent the 3D motion between a pair of frames. Using $\mathbf{T}$, we can construct a function which maps points in frame $I_1$ to $I_2$. Letting $\mathbf{x}_{i}=(x_{i},y_{i},d_{i})$ be the pixel coordinate at index $i$, the mapping
\begin{equation}
\mathbf{x}'_{i} = (x_i', y_i', d_i') = \pi(\mathbf{T}_{i} \cdot \mathbf{X}_{i}), \qquad \mathbf{X}_{i} = \pi^{-1}(\mathbf{x}_{i})
\label{eqn:mapping}
\end{equation}
can be used to find the correspondence of $\mathbf{x}_{i}$ in $I_2$.
A flow vector can be obtained by taking the difference $\mathbf{x}'_{i} - \mathbf{x}_{i}$. The first two components of the flow vector give us the standard optical flow. The last component provides the change in inverse depth between the pair of frames. The focus of this paper is to recover $\mathbf{T}$ given a pair of frames.
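Using the two functions sketched above, Eqn. \ref{eqn:mapping} and the induced flow take only a few lines (here we assume the SE3 field is stored as per-pixel $4\times4$ matrices purely for illustration; our implementation operates on group elements directly):
\begin{verbatim}
def induced_flow(T, Z1, intrinsics):
    # T: (H, W, 4, 4) per-pixel rigid motions; Z1: (H, W) depth of frame 1.
    H, W = Z1.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    x = np.stack([xs, ys, 1.0 / Z1], axis=-1)   # pixel coords + inverse depth
    X = inv_project(x, intrinsics)              # back-project to 3D
    Xp = np.einsum("hwij,hwj->hwi", T, X)       # apply per-pixel SE3 motion
    return project(Xp, intrinsics) - x          # (flow_x, flow_y, d' - d)
\end{verbatim}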
\vspace{1mm} \noindent \textbf{Jacobians:} For optimization purposes, we will need the Jacobian of the Eqn. \ref{eqn:mapping}. Using the chain rule, we can compute the Jacobian of Eqn. \ref{eqn:mapping} as the product of the projection Jacobian
\begin{equation}
\mathbf{J}_{\pi} = \frac{\partial \pi(\mathbf{X}')}{\partial \mathbf{X}'} =
\begin{pmatrix}
f_x d' & 0 & -f_x X' {d'}^2 \\
0 & f_y d' & -f_y Y' {d'}^2 \\
0 & 0 & -{d'}^2\end{pmatrix}
\end{equation}
and the transformation Jacobian
\begin{equation}
\mathbf{J}_{T} =
\left(\mathbf{I}_{3\times 3}, (\mathbf{X}')^\wedge \right), \ \ \ \mathbf{w}^\wedge = \begin{pmatrix} 0 & \text{-}w_3 & w_2 \\ w_3 & 0 & \text{-}w_1 \\ \text{-}w_2 & w_1 & 0 \end{pmatrix}
\end{equation}
using local coordinates defined by the retraction $\exp(\boldsymbol{\delta}^\wedge)\cdot\mathbf{T}$. This gives the Jacobian of Eqn. \ref{eqn:mapping} as $\mathbf{J} = \mathbf{J}_\pi \cdot \mathbf{J}_T \in \mathbb{R}^{3\times 6}$.
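A direct NumPy transcription of these two Jacobians, following the formulas above verbatim (the function name is ours):
\begin{verbatim}
def mapping_jacobian(Xp, intrinsics):
    # Jacobian J = J_pi @ J_T of Eqn. 3, evaluated at X' = T * X.
    fx, fy, cx, cy = intrinsics
    X, Y, Z = Xp[0], Xp[1], Xp[2]
    d = 1.0 / Z
    J_pi = np.array([[fx * d, 0.0, -fx * X * d**2],
                     [0.0, fy * d, -fy * Y * d**2],
                     [0.0, 0.0, -d**2]])
    hat = np.array([[0.0, -Z, Y],        # skew matrix (X')^
                    [Z, 0.0, -X],
                    [-Y, X, 0.0]])
    J_T = np.concatenate([np.eye(3), hat], axis=1)   # (I | (X')^)
    return J_pi @ J_T                                # shape (3, 6)
\end{verbatim}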
\vspace{1mm} \noindent \textbf{Optimization on Lie Manifolds:} The space of rigid-body transformations forms a Lie group, which is a smooth manifold and a group. In this paper, we use the Gauss-Newton algorithm to perform optimization steps over the space of dense SE3 fields.
Given a weighted least squares objective
\begin{equation}
E(\mathbf{x}) = \sum_i w_i \cdot (f_i(\mathbf{x}) - y_i)^2
\end{equation}
the Gauss-Newton algorithm linearizes the residual terms, and solves for the update
\begin{align}
&\mathbf{J}^T \ \diag(\mathbf{w}) \ \mathbf{J} \Delta \mathbf{x} = \ \mathbf{J}^T \diag(\mathbf{w}) \, \mathbf{r}(\mathbf{x}) \\
&r_i = f_i(\mathbf{x}) - y_i \qquad \mathbf{J}_i = \left. \frac{\partial f_i(\exp(\delta^\wedge) \mathbf{x})}{\partial \delta} \right|_{\delta=0}
\label{eqn:update}
\end{align}
The update is found by solving Eqn.~\ref{eqn:update} and applying the retraction $\mathbf{T}' = \exp(\Delta \mathbf{x}^\wedge) \cdot \mathbf{T}$. Eqn.~\ref{eqn:update} can be rewritten as the linear system
\begin{equation}
\mathbf{H} \Delta \mathbf{x} = \mathbf{b} \qquad
\mathbf{H} = \mathbf{J}^T \ \diag(\mathbf{w}) \ \mathbf{J}, \
\mathbf{b} = \mathbf{J}^T \diag(\mathbf{w}) \, \mathbf{r}(\mathbf{x})
\end{equation}
and $\mathbf{H}$ and $\mathbf{b}$ can be constructed without explicitly forming the Jacobian matrices
\begin{equation}
\mathbf{H} = \sum_i w_i \cdot \mathbf{J}_i^T \mathbf{J}_i, \qquad \mathbf{b} = \sum_i w_i \cdot \mathbf{J}_i^T r_i(\mathbf{x}).
\label{eqn:inplace}
\end{equation}
This fact is especially useful when solving objectives with millions of residual terms. In this setting, storing the full Jacobian matrix becomes impractical.
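A scalar sketch of Eqn. \ref{eqn:inplace}, accumulating the normal equations without materializing $\mathbf{J}$, treating each residual's $3$-vector reprojection error as one block (the small damping term is our addition for numerical stability and is not part of the equations above):
\begin{verbatim}
def gauss_newton_step(J_blocks, residuals, weights):
    # J_blocks: iterable of (3, 6) Jacobian rows J_i;
    # residuals: (3,) vectors r_i; weights: scalar confidences w_i.
    H, b = np.zeros((6, 6)), np.zeros(6)
    for J_i, r_i, w_i in zip(J_blocks, residuals, weights):
        H += w_i * J_i.T @ J_i    # accumulate H in place
        b += w_i * J_i.T @ r_i    # accumulate b in place
    # Solve H dx = b; dx is applied through the retraction exp(dx^) * T.
    return np.linalg.solve(H + 1e-6 * np.eye(6), b)
\end{verbatim}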
\subsection{Network Architecture}
Our network architecture is based on RAFT\cite{teed2020raft}. We construct a full 4D correlation volume by computing the visual similarity between all pairs of pixels in the two input images. During each iteration, the network uses the current estimate of the SE3 field to index from the correlation volume. Correlation features are then fed into a recurrent update operator which estimates a dense flow field. We provide an overview of the RAFT architecture here, but more details can be found in \cite{teed2020raft}.
\vspace{1mm} \noindent \textbf{Feature Extraction:}
We first extract features from the two input images. We use two separate feature extraction networks. The feature encoder, $f_\theta$, is applied to both images with shared weights. $f_\theta$ extracts a dense 128-dimensional feature vector at 1/8 resolution. It consists of 6 residual blocks, 2 at 1/2 resolution, 2 at 1/4 resolution, and 2 at 1/8 resolution. We provide more details of the network architectures in the appendix.
The context encoder extracts semantic and contextual information from the first image. Different from the original RAFT\cite{teed2020raft}, we use a pretrained ResNet50\cite{resnet} with a skip connection to extract context features at 1/8 resolution. The reason behind this change is that grouping pixels into rigidly moving regions requires a greater degree of semantic information and a larger receptive field. During training, we freeze the batch norm layers in the context encoder.
\vspace{1mm} \noindent \textbf{Computing Visual Similarity:} We construct a 4D correlation volume by computing the dot product between all-pairs of feature vectors between the input images
\begin{equation}
\mathbf{C}_{ijkh}(I_1, I_2) = \langle f_\theta(I_1)_{ij}, \ f_\theta(I_2)_{kh} \rangle \in \mathbb{R}^{H \times W \times H \times W}
\end{equation}
We then pool the last two dimensions of the correlation volume 3 times using average pooling with a $2\times 2$ kernel, resulting in a correlation pyramid $\{\mathbf{C}_1, \mathbf{C}_2, \mathbf{C}_3, \mathbf{C}_4\}$ with
\begin{equation}
\mathbf{C}_k \in \mathbb{R}^{H \times W \times H/2^{k-1} \times W/2^{k-1}}
\end{equation}
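In PyTorch, the volume and pyramid can be sketched as follows (tensor shapes are assumptions of this sketch; it mirrors the construction described above):
\begin{verbatim}
import torch
import torch.nn.functional as F

def correlation_pyramid(fmap1, fmap2, num_levels=4):
    # fmap1, fmap2: (B, C, H, W) feature maps from the shared encoder.
    B, C, H, W = fmap1.shape
    corr = torch.einsum("bchw,bcuv->bhwuv", fmap1, fmap2)
    corr = corr.reshape(B * H * W, 1, H, W)     # flatten first two dims
    pyramid = [corr]
    for _ in range(num_levels - 1):
        corr = F.avg_pool2d(corr, 2, stride=2)  # pool last two dims only
        pyramid.append(corr)
    return pyramid
\end{verbatim}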
\vspace{1mm} \noindent \textbf{Indexing the Correlation Pyramid: } Given a current estimate of correspondence $\mathbf{x}'=(u,v)$, we can index from the correlation volume to produce a set of correlation features. First we construct a neighborhood grid around $\mathbf{x}'$
\begin{equation}
\mathcal{N}_{\mathbf{x}'} = \{(u + d_u, v+d_v) \ | \ d_u, d_v \in \{-r, ..., r\} \ \}
\end{equation}
and then use the neighborhood to sample from the correlation volume using bilinear sampling. We note that the construction of and indexing from the correlation volume are performed in an identical manner to RAFT\cite{teed2020raft}.
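Continuing the sketch above, the lookup operator can be written with \texttt{grid\_sample} (the radius and shapes are assumptions of this sketch):
\begin{verbatim}
def lookup(pyramid, coords, radius=4):
    # coords: (B, H, W, 2) current correspondence estimates (u, v).
    B, H, W, _ = coords.shape
    d = torch.arange(-radius, radius + 1,
                     device=coords.device, dtype=coords.dtype)
    delta = torch.stack(torch.meshgrid(d, d, indexing="ij"), dim=-1)
    out = []
    for lvl, corr in enumerate(pyramid):
        centroid = coords.reshape(B * H * W, 1, 1, 2) / 2**lvl
        grid = centroid + delta.view(1, 2*radius+1, 2*radius+1, 2)
        h, w = corr.shape[-2:]   # normalize to [-1, 1] for grid_sample
        grid = 2 * grid / grid.new_tensor([w - 1, h - 1]) - 1
        samples = F.grid_sample(corr, grid, align_corners=True)
        out.append(samples.view(B, H, W, -1))
    return torch.cat(out, dim=-1)   # correlation features per pixel
\end{verbatim}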
\vspace{1mm} \noindent \textbf{Update Operator:} The update operator is a recurrent GRU unit which retrieves features from the correlation volume using the indexing operator and outputs a set of revisions. RAFT uses a series of 1x5 and 5x1 GRU units; we use a single 3x3 unit, but with kernels using dilation rates of 1 and 3. We provide more details on the architecture of the update operator in the appendix.
Using Eqn. \ref{eqn:mapping}, we can use the current estimate of $\mathbf{T}$ to estimate 2D correspondences $\mathbf{x}' = \pi(\mathbf{T} \cdot \pi^{-1}(\mathbf{x}))$. The following features are used as input to the GRU
\begin{itemize}
\item[--] Flow field: $\mathbf{x}' - \mathbf{x}$
\item[--] Twist field: $\log_{SE3} (\mathbf{T})$
\item[--] Depth residual: $\mathbf{d}' - \mathbf{\bar{d}}'$
\item[--] Correlation features: $L_\mathbf{C}(\mathbf{x}')$
\end{itemize}
In the depth residual term, the inverse depth $d'_i$ is obtained from the depth component of $\mathbf{x}_i'$, i.e.\@ the backprojected pixel $i$ expressed in the coordinate system of frame 2. The inverse depth $\bar{d}'_i$ is obtained by indexing the inverse depth map of frame 2 using the correspondence $\mathbf{x}_i'$ of pixel $i$.
If pixel $i$ is non-occluded, an accurate SE3 field $\mathbf{T}$ should result in a depth residual of 0. Each of the derived features is processed through 2 convolutional layers and then provided as input to the convolutional GRU.
The hidden state is then used to predict the inputs to the Dense-SE3 layer. We apply two convolutional layers to the hidden state to output a rigid-motion embedding map $\mathbf{V}$. We additionally predict a ``revision map'' $\mathbf{r}_x, \mathbf{r}_y, \mathbf{r}_z$ and corresponding confidence maps $\mathbf{w}_x, \mathbf{w}_y, \mathbf{w}_z \in [0,1]$. The revisions $\mathbf{r}_x$ and $\mathbf{r}_y$ correspond to corrections that should be made to the optical flow induced by the current SE3 field. In other words, the network is trying to get a new estimate of pixel correspondence, but is expressing it on top of the flow induced by the SE3 field.
The revision $\mathbf{r}_z$ gives the corrections that should be made to the inverse depth in frame 2 when that depth is used by Dense-SE3 to enforce geometric consistency. This accounts for noise in the input depth as well as occlusions. The embedding map and revision maps are taken as input by the Dense-SE3 layer to produce an update to the SE3 motion field.
\vspace{1mm} \noindent \textbf{SE3 Upsampling:} The SE3 motion field estimated by the network is at 1/8 of the input resolution. We use convex upsampling \cite{teed2020raft} to upsample the transformation field to the full input resolution. In RAFT\cite{teed2020raft}, the high resolution flow field was taken to be the convex combination of $3 \times 3$ grids at the lower resolution, with combination weights predicted by the network. However, the SE3 field $\mathbf{T}$ lies on a manifold and is not closed under linear combinations. Instead, we perform upsampling by first mapping $\mathbf{T}$ to the Lie algebra using the logarithm map, performing convex upsampling in the Lie algebra, and then mapping back to the manifold using the exponential map.
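Reusing the \texttt{torch} imports from the sketches above, the tangent-space step can be written as follows (\texttt{twist} $= \log_{SE3}(\mathbf{T})$ as a $(B, 6, H, W)$ tensor; the mask layout follows RAFT's convex upsampling and is an assumption of this sketch; the exponential map is applied afterwards):
\begin{verbatim}
def upsample_se3_tangent(twist, mask):
    # twist: (B, 6, H, W) log map of the SE3 field;
    # mask: (B, 9*64, H, W) predicted convex combination weights.
    B, C, H, W = twist.shape
    mask = mask.view(B, 1, 9, 8, 8, H, W).softmax(dim=2)
    up = F.unfold(twist, kernel_size=3, padding=1)   # 3x3 neighborhoods
    up = up.view(B, C, 9, 1, 1, H, W)
    up = (mask * up).sum(dim=2)                      # convex combination
    up = up.permute(0, 1, 4, 2, 5, 3)                # interleave 8x8 blocks
    return up.reshape(B, C, 8 * H, 8 * W)            # then apply exp map
\end{verbatim}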
\subsection{Dense-SE3 Layer}
The key ingredient to our approach is the Dense-SE3 layer. Each application of the update module produces a revision map $\mathbf{r}=(\mathbf{r}_x, \mathbf{r}_y, \mathbf{r}_z)$. The Dense-SE3 layer is a differentiable optimization layer which maps the revision map to a SE3 field update.
The rigid-motion embedding vectors are used to softly group pixels into rigid objects. Given two embedding vectors $\mathbf{v}_i$ and $\mathbf{v}_j$, we compute an affinity $a_{ij} \in [0,1]$ by taking the sigmoid of the negative L2 distance
\begin{equation}
a_{ij} = 2 \cdot \sigma(-||\mathbf{v}_i - \mathbf{v}_j||^2) \in [0,1]
\end{equation}
\noindent \textbf{Objective Function:} Using the affinity terms, we define an objective function based on the reprojection error
\begin{align}
E(\delta) &= \sum_{i \in \Omega} \sum_{j \in \mathcal{N}_i} a_{ij} e^2_{ij}(\delta_i) \\
e^2_{ij}(\delta_i) &= ||\mathbf{r}_j + \pi(\mathbf{T}_j \mathbf{X}_j) - \pi(e^{\delta_i} \mathbf{T}_i \mathbf{X}_j)||^2_{w_j}
\label{eqn:objective}
\end{align}
with $||\mathbf{x}||_\mathbf{w}^2 = \mathbf{x}^T \diag(\mathbf{w}) \mathbf{x}$. The objective states that for every pixel $i$, we want a transformation $\mathbf{T}_i$ which describes the motion of pixels in a neighborhood $j \in \mathcal{N}_i$. However, not every pixel $j \in \mathcal{N}_i$ belongs to the same rigidly moving object. That is the purpose of the embedding vectors: only pairs $(i,j)$ with similar embeddings contribute significantly to the objective function.
\vspace{1mm} \noindent \textbf{Efficient Optimization:} We apply a single Gauss-Newton update to Eqn. \ref{eqn:objective} to generate the next SE3 estimate. Since the Dense-SE3 layer is applied after each application of the update operator, 12 iterations of the update operator yield 12 Gauss-Newton updates.
The objective defined in Eqn. \ref{eqn:objective} can result in a very large optimization problem. We generally use a large neighborhood $\mathcal{N}_i$ in practice; in some experiments we take $\mathcal{N}_i$ to be the entire image. For the FlyingThings3D dataset, at $540\times960$ resolution, this results in \emph{200 million} equations and 50,000 variables (the Dense-SE3 layer operates at 1/8 the input resolution). Trying to store the full system would exceed available memory.
However, each term in Eqn. \ref{eqn:objective} only includes a single $\mathbf{T}_i$. This means that instead of solving a single optimization problem with $H \times W \times 6$ variables, we can instead solve a set of $H \times W$ problems, each with only $6$ variables. Furthermore, we can leverage Eqn. \ref{eqn:inplace} and build the linear system in place without explicitly constructing the Jacobian. When implemented directly in CUDA, a Gauss-Newton update of Eqn. \ref{eqn:objective} can be performed very quickly and is not a bottleneck in our approach.
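Schematically, the decomposition into $H \times W$ independent $6$-variable problems can be batched as follows (shapes and the damping term are illustrative; the actual layer accumulates $\mathbf{H}$ and $\mathbf{b}$ in place in CUDA without materializing the Jacobians):
\begin{verbatim}
def dense_se3_step(J, r, w, affinity):
    # J: (HW, N, 3, 6) Jacobians over the N neighbors of each pixel;
    # r: (HW, N, 3) residuals; w: (HW, N, 3) confidences;
    # affinity: (HW, N) soft rigid-motion grouping a_ij.
    weight = (affinity[..., None] * w)[..., None]           # (HW, N, 3, 1)
    H = (J.transpose(-1, -2) @ (weight * J)).sum(dim=1)     # (HW, 6, 6)
    b = (J.transpose(-1, -2) @ (weight * r[..., None])).sum(dim=1)
    I = torch.eye(6, device=J.device, dtype=J.dtype)
    dx = torch.linalg.solve(H + 1e-6 * I, b)                # batched solves
    return dx.squeeze(-1)   # applied through the retraction exp(dx^) * T_i
\end{verbatim}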
\begin{figure*}
\small \hspace{-8mm} image \hspace{28mm} flow \hspace{30mm} $\tau$ \hspace{30mm} $\phi$
\centering
\includegraphics[width=.8\linewidth]{figures/qual_figs.jpg}
\caption{Visualization of the predicted motion fields on FlyingThings3D (top) and KITTI (bottom). Our network outputs a dense SE3 motion field, which can be used to compute optical flow. We visualize the SE3 field as the twist field where $(\tau, \phi) = \log_{SE3}(\mathbf{T})$. Note that the twist fields are piecewise constant---pixels from the same rigid object are assigned the same SE3 motion.}
\label{fig:examples}
\end{figure*}
\subsection{bi-Laplacian Embedding Optimization}
Since our architecture operates primarily at high resolution, it can be difficult for the network to group pixels which span large objects. We implement a differentiable bi-Laplacian optimization layer in order to smooth embedding vectors within motion boundaries. Vogel et al. \cite{vogel2018learning} used a similar differentiable optimization layer to smooth optical flow within motion boundaries; however, they use iterative methods to solve the linear system while we use direct Cholesky factorization which allows us to reuse the factorization for each channel of the embedding vector.
Given an embedding map $\mathbf{V} \in \mathbb{R}^{H \times W \times C}$, we have the GRU predict additional edge weights $\mathbf{w}_x, \mathbf{w}_y \in \mathbb{R}_+^{H \times W}$ and define the objective
\begin{equation}
\mathbf{u}^* = \arg\min_\mathbf{u} \left\{ ||D_x \mathbf{u}||_{\mathbf{w}_x}^2 + ||D_y \mathbf{u}||_{\mathbf{w}_y}^2 + ||\mathbf{u} - \mathbf{v}||^2 \right\}
\label{eqn:bilaplacian}
\end{equation}
where $D_x$ and $D_y$ are linear finite difference operators, and $\mathbf{v}$ is the flattened feature map.
In other words, we want to solve for a new embedding map $\mathbf{u}$ which is smooth within motion boundaries and close to the original embedding map $\mathbf{v}$. At boundaries, the network can set the weights to 0 so that edges do not get smoothed over. Eqn.~\ref{eqn:bilaplacian} can be solved in closed form using sparse Cholesky decomposition, and we use the CHOLMOD library\cite{chen2008algorithm}. Using nested dissection~\cite{george1973nested}, factorization can be performed in $O((HW)^{1.5})$ time and backsubstitution can be performed in $O(C \cdot (HW)^{1.5})$ time. In the appendix, we derive the gradients of Eqn.~\ref{eqn:bilaplacian}. Since the optimization layer is differentiable, the inputs $\mathbf{w}_x$ and $\mathbf{w}_y$ do not require direct supervision.
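A minimal sketch of the forward solve is given below, assuming SciPy; it is our construction, not the code used in the paper. SciPy's sparse LU factorization (\texttt{factorized}) stands in for the CHOLMOD Cholesky solver, and sampling one weight per grid edge from $\mathbf{w}_x$ and $\mathbf{w}_y$ is an assumption.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import factorized

def smooth_embeddings(v, wx, wy):
    # v: (H, W, C) embeddings; wx, wy: (H, W) non-negative edge weights
    H, W, C = v.shape
    n = H * W
    dx = sp.kron(sp.eye(H), sp.diags([-1, 1], [0, 1], shape=(W - 1, W)))
    dy = sp.kron(sp.diags([-1, 1], [0, 1], shape=(H - 1, H)), sp.eye(W))
    Wx = sp.diags(wx[:, :-1].ravel())   # one weight per horizontal edge
    Wy = sp.diags(wy[:-1, :].ravel())   # one weight per vertical edge
    A = sp.eye(n) + dx.T @ Wx @ dx + dy.T @ Wy @ dy
    solve = factorized(A.tocsc())       # factor once, reuse per channel
    u = np.stack([solve(v[:, :, c].ravel()) for c in range(C)], axis=-1)
    return u.reshape(H, W, C)
\end{verbatim}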
\subsection{Supervision}
We supervise our network on a combination of ground truth optical flow and inverse depth change. Our network outputs a sequence of $\{\mathbf{T}_1, \mathbf{T}_2, \hdots, \mathbf{T}_K \}$. For each transformation, $\mathbf{T}_k$, we compute the induced optical flow and inverse depth change
\begin{equation}
\mathbf{f}_{est}^k = \pi(\mathbf{T}_k \cdot \pi^{-1}(\mathbf{x})) - \mathbf{x}
\end{equation}
where $\mathbf{x}$ is a dense coordinate grid in $I_1$. We compute the loss as a weighted sum over all estimates
\begin{equation}
\mathcal{L} = \sum_{k=1}^K \gamma^{K-k}||\mathbf{f}_{est}^k - \mathbf{f}_{gt} ||_1
\end{equation}
with $\gamma=0.9$. Note that no supervision is applied to the embedding vectors, and that rigid-motion embeddings are implicitly learned by differentiating through the dense $SE(3)$ update layer. We also apply an additional loss directly to the revisions predicted by the GRU with a weight of 0.2.
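A minimal sketch of this sequence loss (the reduction over pixels as a mean is our assumption; masking of invalid pixels is omitted):
\begin{verbatim}
import torch

def sequence_loss(f_preds, f_gt, gamma=0.9):
    # f_preds: list of K intermediate estimates; f_gt: ground truth
    K = len(f_preds)
    loss = 0.0
    for k, f in enumerate(f_preds, start=1):
        loss = loss + gamma ** (K - k) * (f - f_gt).abs().mean()
    return loss
\end{verbatim}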
\begin{table*}
\centering
\resizebox{.72\textwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Input} & \multicolumn{2}{c}{\underline{2D Metrics}} & \multicolumn{3}{c}{\underline{3D Metrics}} \\
& & $\delta_{2D}<$1px & EPE & $\delta_{3D}<.05$ & $\delta_{3D}<0.10$ & EPE \\
\midrule
RAFT \cite{teed2020raft} & RGB & 79.4\% & 3.53 & - & - & - \\
RAFT (2D flow backprojected) & RGB-D & 78.8\% & 3.42 & 50.6\% & 55.7\% & 5.442 \\
RAFT (2D flow + depth change) & RGB-D & 75.2\% & 3.66 & 33.9\% & 47.2\% & 1.218 \\
RAFT (3D flow) & RGB-D & 73.6\% & 4.42 & 36.2\% & 55.4\% & 0.266 \\
Ours & RGB-D & \textbf{86.4}\% & \textbf{2.46} & \textbf{87.8}\% & \textbf{91.5}\% & \textbf{0.062} \\
\bottomrule
\end{tabular}
}
\caption{Results on the FlyingThings3D dataset using the images from the FlowNet3D split. We evaluate on the full images (excluding pixels at infinity and extremely fast-moving regions with flow $>250$px).}
\label{table:FlyingThingsResults}
\end{table*}
\section{Experiments}
We evaluate our approach on a variety of real and synthetic datasets. For all experiments we use the AdamW optimizer\cite{loshchilov2017decoupled} with weight decay set to $1\times10^{-5}$ and unroll 12 iterations of the update operator. All components of the network are trained from scratch, with the exception of the context encoder which uses ImageNet~\cite{deng2009imagenet} pretrained weights.
Training RAFT-3D involves differentiating a computation graph which consists of both Euclidean tensors (e.g. network weights, feature activations) and Lie group elements (e.g. the SE3 transformation field). We use the LieTorch library\cite{teed2021tangent} to perform backpropagation in the tangent space of manifold elements in the computation graph.
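The following toy sketch illustrates the usage pattern (following the LieTorch documentation rather than our training code; the shapes and the loss are illustrative only, and older versions of the library may require a CUDA device):
\begin{verbatim}
import torch
from lietorch import SE3

delta = torch.zeros(4096, 6, requires_grad=True)  # tangent parameterization
T = SE3.exp(delta)                 # group elements on the SE3 manifold
X = torch.randn(4096, 3)           # back-projected 3D points
X1 = T.act(X)                      # apply per-pixel transformations
loss = X1.square().sum()
loss.backward()                    # delta.grad lives in the tangent space
\end{verbatim}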
\begin{table}[h]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{llccc}
\toprule
Method & Input & $\delta_{3D}<.05$ & $\delta_{3D}<0.10$ & EPE$_{3D}$ \\
\midrule
FlowNet3D \cite{liu2019flownet3d} & XYZ & 25.4\% & 57.9\% & 0.169 \\
FlowNet3D++\cite{wang2020flownet3d++} & RGB-D & 30.3\% & 63.4\% & \underline{0.137} \\
FLOT\cite{puy20flot} & XYZ & \underline{34.3}\% & \underline{64.3}\% & 0.156 \\
\midrule
Ours & RGB-D & \textbf{83.7}\% & \textbf{89.2}\% & \textbf{0.064} \\
\bottomrule
\end{tabular}
}
\caption{3D scene flow results on the FlyingThings3D dataset using the split proposed by Liu et al.~\cite{liu2019flownet3d}, where only non-occluded points with depth $<$35m are considered for evaluation. Our method outperforms existing point-based scene flow networks by a large margin.}
\label{table:FlyingThingsResults3D}
\end{table}
\begin{table*}[tb]
\centering
\resizebox{.85\textwidth}{!}{
\begin{tabular}{lccccccccccccc}
\toprule
&&\multicolumn{3}{c}{Disparity 1}&\multicolumn{3}{c}{Disparity 2}&\multicolumn{3}{c}{Optical Flow} &\multicolumn{3}{c}{Scene Flow}\\
Methods & Runtime &\emph{bg} &\emph{fg} &{all} &\emph{bg} &\emph{fg} &{all} &\emph{bg} &\emph{fg} &{all} &\emph{bg} &\emph{fg} &{all}\\
\midrule
OSF \cite{menze2015object} & 50 min & 4.54 & 12.03 & 5.79 & 5.45 & 19.41 & 7.77 & 5.62 & 18.92 & 7.83 & 7.01 & 26.34 & 10.23 \\
SSF \cite{ren2017cascaded} & 5 min & 3.55 & 8.75 & 4.42 & 4.94 & 17.48 & 7.02 & 5.63 & 14.71 & 7.14 & 7.18 & 24.58 & 10.07 \\
Sense \cite{jiang2019sense} & 0.31 s & 2.07 & 3.01 & 2.22 & 4.90 & 10.83 & 5.89 & 7.30 & 9.33 & 7.64 & 8.36 & 15.49 & 9.55 \\
DTF Sense \cite{schuster2020deep} & 0.76 s & 2.08 & 3.13 & 2.25 & 4.82 & 9.02 & 5.52 & 7.31 & 9.48 & 7.67 & 8.21 & 14.08 & 9.18 \\
PRSM* \cite{vogel20153d} & 5 min & 3.02 & 10.52 & 4.27 & 5.13 & 15.11 & 6.79 & 5.33 & 13.40 & 6.68 & 6.61 & 20.79 & 8.97 \\
OpticalExp \cite{yang2020upgrading} & 2.0 s & \textbf{1.48} & \textbf{3.46} & \textbf{1.81} & 3.39 & \textbf{8.54} & 4.25 & 5.83 & \textbf{8.66} & 6.30 & 7.06 & 13.44 & 8.12 \\
ISF \cite{behl2017bounding} & 10 min & 4.12 & 6.17 & 4.46 & 4.88 & 11.34 & 5.95 & 5.40 & 10.29 & 6.22 & 6.58 & 15.63 & 8.08 \\
ACOSF \cite{Cong2020ICPR} & 5 min & 2.79 & 7.56 & 3.58 & 3.82 & 12.74 & 5.31 & 4.56 & 12.00 & 5.79 & 5.61 & 19.38 & 7.90 \\
DRISF \cite{ma2019deep} & 0.75 s (2 GPUs) & 2.16 & 4.49 & 2.55 & 2.90 & 9.73 & 4.04 & 3.59 & 10.40 & 4.73 & 4.39 & 15.94 & 6.31 \\
\midrule
Ours & 2.0 s & \textbf{1.48} & \textbf{3.46} & \textbf{1.81} & \textbf{2.51} & 9.46 & \textbf{3.67} & \textbf{3.39} & 8.79 & \textbf{4.29} & \textbf{4.27} & \textbf{13.27} & \textbf{5.77} \\
\bottomrule
\end{tabular}
}
\caption{Results of the top performing methods on the KITTI leaderboard. Ours ranks first on the leaderboard among all published methods.}
\label{tab:kittiresults}
\end{table*}
\subsection{FlyingThings3D}
The FlyingThings3D dataset was introduced as part of the synthetic Scene Flow datasets by Mayer et al. \cite{mayer2016large}. The dataset consists of ShapeNet \cite{chang2015shapenet} shapes with randomized translations and rotations placed in a scene populated with background objects. While the dataset is not naturalistic, it offers a challenging combination of camera and object motion, each of which spans all 6 degrees of freedom.
We train our network for 200k iterations with a batch size of 4 and a crop size of [320, 720]. We perform spatial augmentation by random cropping and resizing, adjusting the camera intrinsics accordingly. We use an initial learning rate of $1\times10^{-4}$ and decay the learning rate linearly during training.
We evaluate our network using 2D and 3D end-point-error (EPE). 2D EPE is defined as the Euclidean distance between the ground truth optical flow and the predicted optical flow, which can be obtained from the 3D transformation field using Eqn. \ref{eqn:mapping}. 3D EPE is the Euclidean distance between the ground truth 3D scene flow and the predicted scene flow. We also report threshold metrics, which measure the fraction of pixels whose error lies within a given threshold.
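For reference, these metrics reduce to a few lines (a sketch; the handling of invalid pixels and the 250px cutoff described below are omitted):
\begin{verbatim}
import torch

def epe(pred, gt):                    # end-point error; last dim is 2 or 3
    return (pred - gt).norm(dim=-1)

def threshold_metric(pred, gt, tau):  # fraction of pixels with error < tau
    return (epe(pred, gt) < tau).float().mean()
\end{verbatim}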
In Tab. \ref{table:FlyingThingsResults3D} we compare to point cloud based scene flow methods\cite{liu2019flownet3d,wang2020flownet3d++,puy20flot} using the split proposed in FlowNet3D\cite{liu2019flownet3d} containing roughly 2000 test examples sampled from the FlyingThings3D test set. In this evaluation setup, only non-occluded pixels with depth $<$35 meters are used for evaluation. Our method improves the 3D $\delta<0.05$ accuracy from 34.3\% to 83.7\%.
In Tab. \ref{table:FlyingThingsResults} we compare to RAFT\cite{teed2020raft} and several baselines we implement to extend RAFT to predict 3D motion. All RAFT baselines use the same network architecture as our approach, including the pretrained ResNet-50. All baselines are provided with inverse depth as input, which is concatenated with the input images. We also experimented with providing depth directly as input, but found that inverse depth gives the best results.
RAFT (2D flow backprojected) uses the depth maps to backproject 2D motion into a 3D flow vector, but this only works for non-occluded pixels, which is the reason for the very large 3D EPE error. RAFT (2D flow + depth change) predicts 2D flow in addition to inverse depth change, which can be used to recover 3D flow fields. Finally, we also test a version of RAFT which predicts 3D motion fields directly, RAFT (3D flow). We find that our method outperforms all of these baselines by a large margin, particularly on the 3D metrics. This is because our network operates directly on the SE3 motion field, which offers a more structured representation than flow fields, and because our updates are analytically constrained, which the other baselines lack.
In this experiment, we evaluate over all pixels (excluding \emph{extremely} fast moving objects with flow $>$250 pixels). Since we decompose the scene into rigidly moving components, our method can estimate the motion of occluded regions as well. We provide qualitative results in Fig.~\ref{fig:examples}. These examples show that our network can segment the scene into rigidly moving regions, producing piecewise constant SE3 motion fields, even though no supervision is used on the embeddings.
\subsection{KITTI}
Using our model trained on FlyingThings3D, we finetune on KITTI for an additional 50k iterations with an initial learning rate of $5\times10^{-5}$. We use a crop size of [288, 960] and perform spatial and photometric augmentation. To estimate disparity, we use GA-Net\cite{zhang2019ga}, which provides the input depth maps for our method.
\begin{table*}
\centering
\resizebox{.68\textwidth}{!}{
\begin{tabular}{clccccc}
\toprule
\multirow{2}{*}{Experiment} & \multirow{2}{*}{Configuration} & \multicolumn{2}{c}{\underline{2D Metrics}} & \multicolumn{3}{c}{\underline{3D Metrics}} \\
& & $\delta_{2D}<$1px & EPE & $\delta_{3D}<.05$ & $\delta_{3D}<0.10$ & EPE \\
\midrule
\multirow{5}{*}{Iterations}
& 1 & 62.1 & 6.05 & 56.0 & 65.9 & 0.212 \\
& 3 & 82.8 & 2.95 & 80.5 & 85.7 & 0.098 \\
& 8 & 85.5 & 2.47 & 86.4 & 90.5 & 0.062 \\
& \underline{16} & \textbf{85.8} & \textbf{2.43} & \textbf{87.1} & \textbf{91.0} & \textbf{0.059} \\
& 32 & 85.7 & 2.50 & 87.0 & 90.9 & 0.061 \\
\midrule
\multirow{4}{*}{Neighborhood Radius (px) }
& 8 & 73.2 & 4.01 & 38.7 & 59.0 & 0.192 \\
& 64 & 83.8 & 2.52 & 78.1 & 86.6 & 0.078 \\
& \underline{256} & \textbf{85.8} & \textbf{2.43} & \textbf{87.1} & \textbf{91.0} & \textbf{0.059} \\
& Full Image & 83.3 & 2.91 & 83.2 & 88.1 & 0.078 \\
\midrule
\multirow{2}{*}{Revision Factors}
& Flow & \textbf{86.1} & \textbf{2.29} & 84.6 & 88.7 & 0.081 \\
& \underline{Flow + Inv. Depth} & 85.8 & 2.43 & \textbf{87.1} & \textbf{91.0} & \textbf{0.059} \\
\midrule
\multirow{2}{*}{bi-Laplacian Smoothing}
& No & 85.8 & \textbf{2.43} & 87.1 & 91.0 & \textbf{0.059} \\
& \underline{Yes} & \textbf{86.3} & 2.45 & \textbf{87.8} & \textbf{91.5} & 0.062\\
\bottomrule
\end{tabular}
}
\caption{Ablation experiments; details of the individual experiments are provided in Sec.~\ref{sec:ablations}.}
\label{table:ablations}
\end{table*}
We submit our method to the KITTI leaderboard and report results from our method and other top performing methods in Tab. \ref{tab:kittiresults}. Our approach outperforms all published methods. DRISF \cite{ma2019deep} is the next best performing approach; it combines PSMNet\cite{chang2018pyramid}, PWC-Net\cite{pwcnet}, and Mask-RCNN\cite{he2017mask}, where Mask-RCNN is pretrained on Cityscapes and fine-tuned on KITTI using bounding box and instance mask supervision. Our network outperforms DRISF despite training only on FlyingThings3D and KITTI and using no instance supervision.
\subsection{Ablations}
\label{sec:ablations}
We ablate various components of our model on the FlyingThings dataset and report results in Tab. \ref{table:ablations}. For all ablations, we use our network without bi-Laplacian optimization as the baseline architecture.
\vspace{1mm} \noindent \textbf{Iterations:} We evaluate the performance of our model as a function of the number of applications of the update operator. We find that more iterations give better performance up to about 16, after which we observe a slight degradation.
\vspace{1mm} \noindent \textbf{Neighborhood Radius:} The Dense-SE3 layer defines an objective in which all pairs of pixels within a specified radius $r$ contribute. Here, we train networks where $r$ is set to $\{8, 64, 256, \infty\}$. In the last case, all pairs of pixels in the image contribute to the objective. We find that $256$ gives better performance than smaller radii; however, using the full image gives worse performance. This is likely due to the fact that most rigid objects will be less than 512 pixels in diameter, and imposing a restriction on the radius is a useful prior.
\vspace{1mm} \noindent \textbf{Revision Factors:} The update operator produces a set of revisions which are used as input to the Dense-SE3 layer. Here we experiment with different revisions. In \emph{Flow} we only use the optical flow revisions $\mathbf{r}_x$ and $\mathbf{r}_y$. In \emph{Flow + Inv. Depth} we also include inverse depth revisions. We find that including inverse depth revisions leads to better performance on 3D metrics because it leverages depth consistency.
\begin{figure}
\centering
\small with bi-Laplacian \hspace{10mm} without bi-Laplacian
\includegraphics[width=.9\columnwidth]{figures/biLaplacian.jpg}
\caption{Impact of bi-Laplacian optimization layer on motion fields. This layer improves the ability of the network to aggregate embedding vectors within motion boundaries.}
\label{fig:gbap}
\vspace{-4mm}
\end{figure}
\vspace{1mm} \noindent \textbf{bi-Laplacian Optimization:} Here we test the impact of the bi-Laplacian optimization layer. Our pooling layer improves the accuracy of the threshold metrics, improving 1px accuracy from 85.8 to 86.3 and 3D accuracy from 87.1 to 87.8, while giving comparable average EPE. In Fig. \ref{fig:gbap} we see that the pooling layer produces qualitatively better results, particularly over large objects.
\vspace{1mm}
\noindent \textbf{Parameter Count and Timing: } RAFT-3D has 45M trainable parameters. The ResNet-50 backbone has 40M parameters, while the feature extractor and update operator make up the remaining 5M parameters.
We provide a breakdown of the inference time in Tab. \ref{table:timing}. Timing results are computed on $540\times960$ images with a GTX 1080 Ti GPU using 16 updates. Inference on $540\times960$ images requires 1.6 GB of GPU memory, which is mainly required to store the 4D correlation volume.
\begin{table}[h]
\centering
\resizebox{.6\columnwidth}{!}{
\begin{tabular}{ll}
\toprule
Component & Time (ms) \\
\midrule
Feature Extraction & 52ms\\
Cost Volume & 4ms \\
Update Operator (GRU) & 208ms (13ms/iter) \\
Gauss Newton Iteration & 120ms (7.5ms/iter) \\
SE3 Upsampling & 2ms \\
\midrule
Total & 386ms \\
\bottomrule
\end{tabular}
}
\caption{Forward pass timing for different components.}
\label{table:timing}
\end{table}
\vspace{-5mm}
\section{Conclusion}
We have introduced RAFT-3D, an end-to-end network for scene flow. RAFT-3D uses rigid-motion embeddings, which represent a soft grouping of pixels into rigidly moving objects. We demonstrate that these embeddings can be used to solve for dense and accurate 3D motion fields.
\vspace{2mm} \noindent \textbf{Acknowledgements:} This research is partially supported by the National Science Foundation under Grant IIS-1942981.
{\small
\bibliographystyle{ieee_fullname}
}
\appendix
\section{Network Architecture}
Details of the network architecture, including the feature encoders and the GRU-based update operator, are shown in Figure \ref{fig:architecture}.
\section{bi-Laplacian Optimization Layer Gradients}
This layer minimizes an objective function in the form
\begin{equation}
||D_x \mathbf{u}||_{\mathbf{w}_x}^2 + ||D_y \mathbf{u}||_{\mathbf{w}_y}^2 + ||\mathbf{u} - \mathbf{v}||^2
\end{equation}
where $D_x$ and $D_y$ are linear finite difference operators, and $\mathbf{v}$ is the flattened feature map.
First consider the single-channel case, $\mathbf{v}\in \mathbb{R}^{HW}$. Let $W_x = \diag(\mathbf{w}_x), W_y = \diag(\mathbf{w}_y) \in \mathbb{R}^{HW\times HW}$. We can solve for $\mathbf{u}^*$
\begin{equation}
(\mathbf{I} + D_x^T W_x D_x + D_y^T W_y D_y)\mathbf{u}^* = \mathbf{v}
\label{eqn:pooling}
\end{equation}
We perform sparse Cholesky factorization and backsubstitution to solve for $\mathbf{u}^*$ using the CHOLMOD library\cite{chen2008algorithm}.
\vspace{1mm} \noindent \textbf{Gradients:} In the backward pass, given the gradient $\frac{\partial L}{\partial\mathbf{u}^*}$, we need to find the gradients with respect to the boundary weights $\frac{\partial L}{\partial\mathbf{w}_x}$ and $\frac{\partial L}{\partial\mathbf{w}_y}$.
Given the linear system $\mathbf{H} \mathbf{u} = \mathbf{v}$, the gradients with respect to $\mathbf{H}$ and $\mathbf{v}$ can be found by solving the system in the backward direction \cite{amos2017optnet}
\begin{align}
\frac{\partial L}{\partial \mathbf{v}} &= \mathbf{H}^{-T} \frac{\partial L}{\partial\mathbf{u}^*} \\
\frac{\partial L}{\partial \mathbf{H}} &= -\mathbf{d}_v (\mathbf{u}^*)^T \\
\mathbf{d}_v &= \mathbf{H}^{-T} \frac{\partial L}{\partial\mathbf{u}^*}
\end{align}
Here the column vector $\mathbf{d}_v$ is defined for notational convenience. Since $\mathbf{H}$ is positive definite, $\mathbf{H}^{-T}=\mathbf{H}^{-1}$ so we can reuse the factorization from the forward pass.
To compute the gradients with respect to $\mathbf{w}_x$ and $\mathbf{w}_y$:
\begin{align}
\frac{\partial L}{\partial \mathbf{w}_x} &= \diag\left( D_x \frac{\partial L}{\partial \mathbf{H}} D_x^T \right) \nonumber \\
&= -\diag\left((D_x \mathbf{d}_v)(D_x \mathbf{u}^*)^T \right)
\end{align}
giving
\begin{equation}
\frac{\partial L}{\partial \mathbf{w}_x} = -(D_x \mathbf{u}^*) \odot (D_x \mathbf{d}_v)
\end{equation}
where $\odot$ is elementwise multiplication. Similarly
\begin{equation}
\frac{\partial L}{\partial \mathbf{w}_y} = -(D_y \mathbf{u}^*) \odot (D_y \mathbf{d}_v)
\end{equation}
\vspace{1mm} \noindent \textbf{Multiple Channels:} We can easily extend Eqn. \ref{eqn:pooling} to work with multiple channels. Since the matrix $\mathbf{H}$ does not depend on $\mathbf{v}$, it only needs to be factored once. We can solve Eqn. \ref{eqn:pooling} for all channels by reusing the factorization, treating $\mathbf{v}$ as an $HW \times C$ matrix. The gradient formulas can also be updated by summing the gradients over the channel dimension.
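A sketch of the resulting backward pass is given below, reusing the difference operators and the factorized solver from the forward-pass sketch (our construction; the minus signs follow the derivation above):
\begin{verbatim}
import numpy as np

def bilaplacian_backward(grad_u, u_star, solve, dx, dy):
    # grad_u, u_star: (n, C); solve: solver for H x = b (H is symmetric)
    C = grad_u.shape[1]
    d_v = np.stack([solve(grad_u[:, c]) for c in range(C)], axis=-1)
    dL_dv = d_v
    dL_dwx = -((dx @ u_star) * (dx @ d_v)).sum(axis=1)  # sum over channels
    dL_dwy = -((dy @ u_star) * (dy @ d_v)).sum(axis=1)
    return dL_dv, dL_dwx, dL_dwy
\end{verbatim}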
\author[S. Tari]{Sibel Tari\footnote{Work is supported in part by TUBITAK 108E015.}}
\index{skeleton|see{medial axis}}
\index{local symmetry|see{medial axis}}
\index{energy!non-local|see{non-local}}
\index{interaction!non-local|see{non-local}}
\index{PDE!curve evolution|see{curve evolution}}
\index{shape}
\index{interaction}
\index{shape!peripheral structure}
\index{shape!primitives}
\index{shape!boundary texture}
\index{shape!gross structure}
\index{shape!PDE based representation}
\index{parts of visual form}
\index{shape!parts}
\index{shape!parts!the least deformable}
\index{human visual system}
\index{features!region-based}
\index{features!boundary-based}
\index{non-local}
\index{medial axis}
\address{Middle East Technical University, Department of Computer Engineering,\\
Ankara, Turkey 06531, \\
stari@metu.edu.tr}
\begin{abstract}
Perception research provides strong evidence in favor of part based
representation of shapes in the human visual system. Despite
considerable differences among different theories in terms of how
part boundaries are found, there is substantial agreement that
the process depends on many local and global geometric factors.
This poses an important challenge from the computational point of
view. In the first part of the chapter, I
present a novel decomposition method by taking both local and global
interactions within the shape domain into account. At the top of the partitioning hierarchy, the
shape gets split into two parts capturing, respectively, the gross
structure and the peripheral structure. The gross structure may be
conceived as the least deformable part of the shape which remains
stable under visual transformations. The peripheral structure
includes limbs, protrusions, and boundary texture. Such a
separation is in accord with the behavior of the artists who start
with a gross shape and enrich it with details. The method is
particularly interesting from the computational point of view as it
does not resort to any geometric notions (e.g. curvature, convexity)
explicitly. In the second part of the chapter, I relate the new method to PDE
based shape representation schemes.
\end{abstract}
\section{Introduction}\label{sec1.1}
Perception research provides strong evidence in favor of part based
representation of shapes in the human visual system~\cite{BarenholtzFeldman}. Recent work using
single cell recordings in area V4 in the primate visual cortex
supports part based coding at intermediate levels~\cite{Pasu2002}. Many influential
shape representation theories either explicitly or implicitly assume
an organization in terms of constituent components. In
Binford~\cite{Binford71}; Marr and Nishihara~\cite{MarrNish}; and
Biederman~\cite{Biederman}, shape is represented via pre-defined
simple shapes (primitives) and their spatial layout. Medial axis or
local symmetry set (one of the influential ideas in shape
representation) is closely connected with the notion of parts.
A large number of\vadjust{\pagebreak} computational methods utilize the medial axis to infer part
structure~\cite{Zeng07,Mi07,Svensson02,Arcelli97,Aslan05,Rom}.
In a recent work, Super~\cite{Super07} presents a quite successful part based recognition scheme.
There are powerful theories accompanied by computational
implementations on what constitutes a good partition without
resorting to predefined shapes or categorical units,
e.g.~\cite{Cohen07,HofmannRichards,Latecki99,Renninger03,Siddiqi95,SiddiqiTresness,Wagemans99}.
Despite considerable differences among different theories in terms
of how part boundaries are found, there is substantial agreement
that the process depends on many geometric factors both at the
global and the local
levels~\cite{Cohen07,BarenholtzFeldman,SiddiqiTresness,Wagemans99,Burbeck95,Navon77}.
Indeed, part of the difficulty in devising computational mechanisms
for shape decomposition lies in the difficulty in integrating local
boundary concavities with non-local shape descriptions. Recent
works by Mi and DeCarlo~\cite{Mi07} and Zeng et al.~\cite{Zeng07}
address this challenging issue using contour curvature and local
symmetry axis simultaneously. In another recent work, Xu, Liu and
Tang~\cite{Xu09} combine effectiveness of both local and global
features for matching shapes. One of the successful recognition
schemes, the shape context by Belongie, Malik and
Puzicha~\cite{Belongie00}, is based on quantifying non-local
interactions among boundary points. Moreover, some of the successful
methods for partitioning images into pixel
groups~\cite{Ncut,NLMeans} are based on non-local relations.
\index{features!local}
\index{features!global}
\index{non-local}
The rest of the chapter is organized as follows. In Sec.~\ref{S1}, the new decomposition method is presented.
The new method takes both local and global interactions within the shape domain into account. At the top of the
partitioning hierarchy, the shape gets split into two parts capturing, respectively,
the gross structure and the peripheral structure. The
gross structure may be conceived as the least deformable part of the
shape which remains stable under visual transformations. The
peripheral structure includes limbs, protrusions, and
boundary texture. Such a separation is in accord with the experimental
studies suggesting that the global shape is processed before the
details~\cite{Navon77,Burbeck95}; and with the behavior of the
artists who start with a gross shape and enrich it with
details~\cite{Koenderink82}.
\index{shape!global}
\index{shape!PDE based representation}
\index{medial axis}
In Sec.~\ref{S2}, the new method, formulated in a discrete setting, is related to the
PDE based shape representation approach, with particular emphasis on a recent
skeleton based scheme which I had previously developed with my student Cagri Aslan~\cite{Aslan05,Aslan08,Aslantez}.
\section{The New Method}
\label{S1}
\index{energy!minimizer}
\index{energy!region-based}
\index{energy!boundary-based}
\index{emergent structures}
The basic idea is to create a field within the shape domain with emergent structures
capturing the parts automatically.
This field is
computed by minimizing an energy which captures both local and
global interactions, as well as both region based and boundary based
interactions, among the shape points.
Let us define a function $\omega$ on a discrete shape
domain $\Omega$ as the minimizer of an energy $E(\omega)$. Let this
energy be a sum of a region based energy $E_{Reg}(\omega)$ and a
boundary based energy $E_{Bdy}(\omega)$; and the region based energy
$E_{Reg}(\omega)$ is a sum of two energies which model the global
({\sl G}) and the local ({\sl L}) interactions within the shape
domain $\Omega$:
\begin{eqnarray}
E(\omega) &=& E_{Reg}(\omega) + w_{Bdy} E_{Bdy}(\omega) \nonumber \\
&=& E_{Reg}^G(\omega) + E_{Reg}^L(\omega) + w_{Bdy} E_{Bdy}(\omega) \label{eq:eq1}
\end{eqnarray}
Assuming that each of the three terms in (\ref{eq:eq1}) can be expressed as a sum of
energies defined at each pixel $i,j$, the following form is
obtained:
\begin{equation}
E(\omega)= \sum_{i,j\in \Omega}
E_{Reg}^G(\omega_{i,j}) + E_{Reg}^L(\omega_{i,j}) +w_{Bdy}
E_{Bdy}(\omega_{i,j}) \label{eq:energy}
\end{equation}
In the absence of any specific purpose or bias, equal importance can
be given to both the region and the boundary by setting $w_{Bdy}=2$.
For computational reasons, it is preferable to choose a quadratic
form for each energy. Let $E_{Reg}^G(\omega_{i,j})$ be:
\begin{equation}
E_{Reg}^G(\omega_{i,j}) = \frac{1}{\left| \Omega \right|}
\left(\sum_{k,l \in \Omega} \omega_{k,l} \right)^2 \label{eq:G}
\end{equation}
The minimizer $\omega_{i,j}$ of (\ref{eq:G}) satisfies:
\begin{equation}
\frac{1}{\left| \Omega \right|}\sum_{k,l \in \Omega} \omega_{k,l}=0
\label{eq:turevG}
\end{equation}
The condition satisfied by the minimizer of
$E_{Reg}^G(\omega_{i,j})$ is independent of the pixel location
$(i,j)$ and it explicitly states that the global average over the
shape domain should be zero.
It forces $\omega$ to attain both positive and negative values
within the shape domain $\Omega$. This behavior, when complemented with
the behavior induced by the other terms, will be shown to be quite
instrumental in obtaining a robust and parameter-free separation of
the gross structure and
the peripheral structure. %
The second term of the region based energy, $E_{Reg}^L$, has the
following form:
\begin{equation}
E_{Reg}^L(\omega_{i,j}) = - \left( \omega_{i+1,j} \cdot \omega_{i-1,j} +
\omega_{i,j+1} \cdot \omega_{i,j-1} \right)
\label{eq:L}
\end{equation}
Clearly, $E_{Reg}^L(\omega_{i,j})$ is
minimized when the values of the neighboring pixels are similar.
Thus the second term of the energy
imposes smoothness on $\omega$. The condition for the minimizer of $E_{Reg}^L$
is not as straightforward to calculate as that of $E_{Reg}^G$.
First, a local continuous approximation at location $i,j$ is
considered with the help of Taylor series. Second, the Gateaux
derivative is calculated. Third, the Gateaux derivative is
discretized and set to {\sl zero}, to obtain the condition for the
minimizer:
\begin{equation}
(-2+4)\,\omega_{i,j} - \omega_{i-1,j}
-\omega_{i+1,j}-\omega_{i,j-1}-\omega_{i,j+1} =0
\label{eq:turevL}
\end{equation}
\index{interaction!pairwise}
\index{shape!minima rule}
The boundary based energy $E_{Bdy}(\omega)$ is chosen as a measure
of pairwise interaction between two properly chosen boundary points
such that the pairs indicate parts. One motivation to consider
pairwise interaction between two boundary points comes from the
well-established minima rule~\cite{HofmannRichards,Cohen07} which is
used in many computational procedures for shape decomposition.
However, computationally, it is not an easy task to model pairwise
interactions among boundary points. These interactions are neither
local nor global. A simple alternative is constructed by exploiting
the connection between the concept of pairwise interaction among
boundary points and the shape skeleton~\cite{Blum73}. With the help
of this connection, $E_{Bdy}$ is expressed as an energy defined over
the entire shape domain.
\index{medial axis!grass-fire}
\index{medial axis!Blum}
\index{distance transform}
The connection can be explained with the help of the grass-fire
model by Blum~\cite{Blum73}. Assume that one initiates fire fronts
at time $t = 0$ along all the points of the shape boundary and lets
these fronts propagate toward the center of the shape at a uniform
speed. The locus of points where these fronts meet and extinguish
defines the shape skeleton. Each skeleton point is formed as a
result of interaction between two or more boundary points. During the course
of the propagation, the time $t$ may be thought of as a function
$t_{i,j}$ defined over the shape domain by setting the value to the
time when the propagating front passes through the pixel $(i,j)$.
The value of $t_{i,j}$ will be proportional to the minimum distance
from $(i,j)$ to the nearest boundary point. Skeleton points are
the ones which are equidistant from at least
two boundary points~\cite{Blum73}~(Fig.~\ref{fig:Blum}.)
\begin{figure}[!h]
\centering
\includegraphics[width=5cm]{Figs/2_3b1.png}
\vglue -4pt
\caption {Each skeleton point is formed as a result of interaction
between two or more boundary points. Skeleton points are the ones
which are equidistant from at least two boundary
points~\cite{Blum73}. } \label{fig:Blum}
\end{figure}
This insight enables the expression of $E_{Bdy}(\omega)$ as a
quadratic energy defined over the entire shape region $\Omega$ in
the following form:
\begin{equation}
E_{Bdy}(\omega_{i,j}) = \left( \omega_{i,j} - t_{i,j} \right)^2
\label{eq:B}
\end{equation}
It is straightforward to calculate the condition for the minimizer
of $E_{Bdy}(\omega_{i,j})$ given in (\ref{eq:B}) as:
\begin{equation}
\left( \omega_{i,j} - t_{i,j} \right) =0
\label{eq:turevB}
\end{equation}
In the absence of other terms, (\ref{eq:turevB}) states that the field $\omega$ should be
equal to the
distance transform defined over $\Omega$. Putting together all three
competing terms, (\ref{eq:turevG},\ref{eq:turevL},\ref{eq:turevB}),
the first order derivative w.r.t. each unknown $\omega_{i,j}$ takes
the following form:
\begin{eqnarray}
\frac{ \partial E}{\partial \omega_{i,j}}&=& \frac{1}{\left|\Omega\right|} \left( \sum_{k,l \in \Omega} \omega_{k,l} \right) + (w_{Bdy}-2+4)\,\omega_{i,j} - w_{Bdy}t_{i,j} \nonumber\\
&-& \omega_{i-1,j} -\omega_{i+1,j}-\omega_{i,j-1}-\omega_{i,j+1}
\label{eq:turev_all}
\end{eqnarray}
Setting the derivative equal to {\sl zero} yields that the minimizer
of $E(\omega)$ satisfies the following condition at each pixel $({i,j})$:
\pagebreak
\noindent
\begin{eqnarray}
w_{Bdy}t_{i,j} &=& (w_{Bdy}-2 +4)\omega_{i,j} -\omega_{i-1,j} -\omega_{i+1,j}
-\omega_{i,j-1}-\omega_{i,j+1} \nonumber \\
&+& \frac{1}{\left|\Omega\right|} \left( \sum_{k,l \in \Omega} \omega_{k,l}\right)
\label{eq:method_withw}
\end{eqnarray}
Recall that, in the absence of any specific purpose or bias, the
weight $w_{Bdy}$ is set to $2$ to give equal importance to both the
region and the boundary. Thus, $\omega_{i,j}$ is computed by
solving (\ref{eq:method}) given below, at all the pixels simultaneously,
assuming that the values are zero at the boundary pixels.
\begin{equation}
t_{i,j} = 4\omega_{i,j} -\omega_{i-1,j} -\omega_{i+1,j}-\omega_{i,j-1}-\omega_{i,j+1}
+ \frac{1}{\left|\Omega\right|} \left( \sum_{k,l\in \Omega} \omega_{k,l}\right)
\label{eq:method}
\end{equation}
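For concreteness, (\ref{eq:method}) can be solved with standard sparse linear algebra. The sketch below (not part of the original formulation) assembles the Dirichlet 5-point Laplacian on the mask, obtains $t_{i,j}$ from the Euclidean distance transform, and handles the rank-one global-average term with the Sherman--Morrison formula so that only the sparse Laplacian needs to be solved against:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from scipy.ndimage import distance_transform_edt

def omega_field(mask):
    # mask: (H, W) boolean array, True inside the shape Omega
    H, W = mask.shape
    t = distance_transform_edt(mask)
    idx = -np.ones((H, W), dtype=np.int64)
    ii, jj = np.nonzero(mask)
    n = ii.size
    idx[ii, jj] = np.arange(n)
    rows, cols, vals = [np.arange(n)], [np.arange(n)], [np.full(n, 4.0)]
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = ii + di, jj + dj
        ok = (ni >= 0) & (ni < H) & (nj >= 0) & (nj < W)
        valid = ok.copy()
        valid[ok] = mask[ni[ok], nj[ok]]   # neighbor must lie in the shape
        rows.append(np.arange(n)[valid])
        cols.append(idx[ni[valid], nj[valid]])
        vals.append(np.full(int(valid.sum()), -1.0))
    L = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n))          # Dirichlet 5-point Laplacian
    x0 = spsolve(L, t[ii, jj])               # L^{-1} t
    y = spsolve(L, np.full(n, 1.0 / n))      # L^{-1} (1/|Omega|) 1
    x = x0 - y * x0.sum() / (1.0 + y.sum())  # Sherman-Morrison correction
    omega = np.zeros((H, W))
    omega[ii, jj] = x
    return omega
\end{verbatim}
The set $\Omega^-$ is then simply the locus where the returned field is negative.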
The field $\omega$, computed by solving (\ref{eq:method}), is
depicted in Fig.~\ref{fig:w} for two sample shapes. It attains both
negative and positive values. This behavior is dictated by the
global region energy $E_{Reg}^G$ which explicitly states that the
global average over the shape domain should be zero.
In Fig.~\ref{fig:w} (a), the restriction of $\omega$ where it is
negative is displayed. This set is denoted by $\Omega^-$. It
captures the peripheral structure (protrusions, limbs, boundary
texture). The darkest blue denotes {\sl zero}; the darkest red
denotes
the lowest negative value.
The removed inner part is the part on which $\omega$ is positive;
and is denoted by $\Omega^+$. This blob-like part captures the
gross structure. The gross structure is the least deformable part of
the shape which remains stable under a variety of visual changes as
demonstrated in Fig.~\ref{fig:handgross} using eight different instances of a hand silhouette.
\begin{figure}[t]
\vglue -2pt
\centering
{\footnotesize
\begin{tabular}{cc}
\includegraphics[height=4cm]{Figs/imgdata_3_wn.png}&
\includegraphics[height=4cm]{Figs/imgdata_3_wabs.png}\\
\includegraphics[height=4cm]{Figs/objdata_4_wn.png}&
\includegraphics[height=4cm]{Figs/objdata_4_wabs.png}\\
(a) & (b)
\end{tabular}}
\caption {The field $\omega$ computed by
solving~(\ref{eq:method}).
For visualization purposes, the values are
normalized. (a) The restriction of $\omega$ to areas where its
values are negative. This part denotes the peripheral structure {\sl
i.e.} protrusions, limbs, and boundary texture. The removed inner
part on which the values are positive is the gross structure. (b)
The absolute value of $\omega$.} \label{fig:w}
\vglue 12pt
\centering
\begin{tabular}{cc}
\includegraphics[height=4.4cm]{Figs/gross_hand.png}&
\includegraphics[height=4.4cm]{Figs/gross_handaslan.png}
\end{tabular}
\caption {The areas where $\omega<0$ is shown in black. The gross
structure (inner white blob) may be conceived as the least
deformable part of the shape. It remains stable under a variety of
changes. } \label{fig:handgross}
\end{figure}
In Fig.~\ref{fig:w} (b), the absolute value of $\omega$ is displayed
on the entire shape domain. For visualization purposes, the
negative values and the positive values are normalized, separately,
to the $[0,1]$ interval. The darkest blue denotes {\sl zero}; the
darkest red denotes {\sl one}. Notice that various local maxima
capture intuitive parts such as the body, the head, the tail, and
the legs of the horse. These parts can be easily extracted by
considering a growth starting from each local maximum. Separate
growths from each pair of neighboring maxima meet at a saddle point.
\subsection{Experimental Results and Discussion}
The method is discussed via a set of highly illustrative
examples. These examples are silhouettes collected from various
sources~\cite{LEMS,Aslan05,Gorelick06,Zeng07}. Some of the original
silhouettes are modified by the author to obtain shapes with
holes, missing and/or extra parts.
\index{watershed}
In Figs.~(\ref{fig:occlusion}-\ref{fig:sample}), some decomposition
results are provided. These results are obtained by applying
Matlab's {\sl watershed} command to $\Omega^-$. (This is equivalent
to considering a growth starting from each local minimum of
$\omega$.)
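(A minimal open-source equivalent is sketched below, using scikit-image's watershed in place of Matlab's; seeding from the $3\times3$ local minima of $\omega$ is our choice.)
\begin{verbatim}
from scipy import ndimage
from skimage.segmentation import watershed

def peripheral_parts(omega, mask):
    # split Omega^- (omega < 0) into parts by a seeded watershed
    neg = (omega < 0) & mask
    seeds = (omega == ndimage.minimum_filter(omega, size=3)) & neg
    markers, _ = ndimage.label(seeds)
    return watershed(omega, markers=markers, mask=neg)
\end{verbatim}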
In Fig.~\ref{fig:occlusion}, the first column depicts the given
shape. The second column depicts the normalized absolute value of
$\omega$. The third column depicts the parts. Bright colors are used
for parts on which $\omega$ is negative; and gray is used for the
part on which $\omega$ is positive.
\index{shape!parts!semantically meaningful}
\index{shape!with holes}
\index{shape!occlusion}
The silhouette shown in the first row is a downsampled version of a
human silhouette~\cite{Gorelick06} from
its original resolution of $414 \times 459$ to $60 \times 60$. The
silhouettes on the remaining rows are drawn by the author to
introduce holes, missing portions, and occlusions. In each case, semantically meaningful parts (the torso,
the legs, the arms, the head) are captured. Even though the
formulation includes a global term, the local changes, no matter how
significant they are, do not affect the detected parts.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[height=3.5cm]{Figs/modif_2_s.png}&
\includegraphics[height=3.5cm]{Figs/modif_2_wabs.png}&
\includegraphics[height=3.5cm]{Figs/iccv_modified_2.png}\\
\includegraphics[height=3.5cm]{Figs/modif_3_s.png}&
\includegraphics[height=3.5cm]{Figs/modif_3_wabs.png}&
\includegraphics[height=3.5cm]{Figs/iccv_modified_3.png}\\
\includegraphics[height=3.5cm]{Figs/modif_8_s.png}&
\includegraphics[height=3.5cm]{Figs/modif_8_wabs.png}&
\includegraphics[height=3.5cm]{Figs/iccv_modified_8.png}\\
\includegraphics[height=3.5cm]{Figs/modif_5_s.png}&
\includegraphics[height=3.5cm]{Figs/modif_5_wabs.png}&
\includegraphics[height=3.5cm]{Figs/iccv_modified_5.png}\\
\end{tabular}
\caption {The method is robust with respect to occlusion, missing
data, extra objects, and holes. (a) Input silhouettes. (b) Absolute
value of $\omega$. (c) Parts extracted by applying Matlab's {\sl
watershed} command to $\omega$.} \label{fig:occlusion}
\vglue 5pt
\end{figure}
In Fig.~\ref{fig:multiple}, the applicability of the method when the
input consists of disconnected sets (multiple objects in a scene) is
demonstrated.
\vfill\pagebreak
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[height=6.5cm]{Figs/twopeople_sh_crop.png}
\includegraphics[height=6.5cm]{Figs/twopeople_seg_crop.png}
\end{tabular}
\caption{A scene with two silhouettes. The method is applicable to
disconnected sets.} \label{fig:multiple}
\vglue 20pt
\centering
{\footnotesize\begin{tabular}{cc}
\includegraphics[height=4.0cm]{Figs/tulip_hr_wabs.png}&
\includegraphics[height=4.0cm]{Figs/tulip_lr_wabs.png}\\[4pt]
\includegraphics[height=4.0cm]{Figs/tulip_hr_seg.png} &
\includegraphics[height=4.0cm]{Figs/tulip_lr_seg.png}\\[2pt]
(a) & (b)
\end{tabular}}
\caption {The method is robust with respect to resolution changes.
An artificial shape~\cite{Aslan05} on (a)~$220 \times 220$, (b) $60
\times 60$ lattices.} \label{fig:resolution}
\end{figure}
\vfill\pagebreak
\begin{figure}[!b]
\centering
\begin{tabular}{cccc}
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_4.png} &
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_25.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_5.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_26.png}
\\
\includegraphics[height=2.4cm]{Figs/iccv_imgdata_2.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_2.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_1.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_7.png}\\
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_17.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_18.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_19.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_8.png}\\
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_32.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_33.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_31.png}&
\includegraphics[height=2.4cm]{Figs/iccv_Aslandata_15.png}
\end{tabular}
\caption {Sample decompositions. Similar
shapes are partitioned similarly; and the parts are
compatible with intuition. } \label{fig:sample}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tabular}{ccc}
\includegraphics[height=3.2cm]{Figs/objdata_10_wabs.png}&
\includegraphics[height=3.2cm]{Figs/ekfeb25_5_wabs.png}&
\includegraphics[height=3.2cm]{Figs/ekfeb25_4_wabs.png}
\\
\includegraphics[height=3.2cm]{Figs/iccv_Aslandata_10.png}&
\includegraphics[height=3.2cm]{Figs/iccv_ek_5.png}&
\includegraphics[height=3.2cm]{Figs/iccv_ek_4.png}\\
\end{tabular}
\caption{Un-intuitive parts. See text. }
\label{fig:turtle}
\vglue 20pt
\centering
{\footnotesize
\begin{tabular}{cccc}
\includegraphics[height=2.5cm]{history/turtlehistory_8_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_7_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_6_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_5_wn.png}\\
(a)& (b) & (c)& (d)\\
\includegraphics[height=2.5cm]{history/turtlehistory_4_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_3_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_2_wn.png}&
\includegraphics[height=2.5cm]{history/turtlehistory_1_wn.png}\\
(e)& (f) & (g)& (h)
\end{tabular}}
\caption{Saliency of a part. In each figure, the restriction of
$\omega$ to locations where its value is less than a given
threshold is depicted. The threshold is increased gradually.
(a)-(h) $\omega <-4,-9,-14,-19,-24,-29,-34,-37$, respectively. }
\label{fig:turtle_history}
\end{figure}
In Fig.~\ref{fig:resolution}, the robustness of the
method with respect to changes in resolution is demonstrated. An artificial shape
from Aslan~\cite{Aslan05,Aslantez,Aslan08} is used in its original resolution in (a), and in a
reduced resolution in (b).
In Fig.~\ref{fig:sample}, the decomposition
results for a variety of shapes are provided. The decompositions are consistent; similar
shapes are partitioned similarly and the captured parts are
compatible with our intuition.
In some cases, as in Fig.~\ref{fig:turtle}, the decomposition
process starting from each and every local minimum ({\sl i.e.}
ignoring saliency) may create un-intuitive parts. Normalized
absolute value of $\omega$ for three sample turtle shapes is
depicted in the first row. In all of the three cases, one can easily
spot five local maxima in the peripheral part, and one local maximum
in the central part. However, the peripheral structure of the first
turtle shape is decomposed into seven pieces. For the second
turtle, there are six pieces corresponding to the head, the two
legs, the two arms, and the tail. For the third turtle, there are
five pieces corresponding to the head, the two legs, the two arms.
In the third turtle, due to the smoother transition between the two
legs, the tail part is missed. This produces inconsistencies among
silhouettes from the same category. However, inconsistent parts are
not as salient as the consistent parts. A clarifying illustration is
given in Fig.~\ref{fig:turtle_history} with the help of the first
turtle which is decomposed into eight pieces including the torso. A
growth process is simulated starting from
Fig.~\ref{fig:turtle_history}~(h) and ending at
Fig.~\ref{fig:turtle_history}~(a).
In each subfigure, the restriction of $\omega$ to the
locations where its value is less than a given threshold is
depicted. The threshold is increased gradually.
At the first threshold level in (h), only the head appears.
At the
third threshold level in (f), the arms appear. The head still continues
as an individual blob. At the fourth threshold level in (e), the two legs
appear. The five pieces remain separate.
At the next threshold level in (d) there are still five pieces.
However, notice that somewhere between (e)
and (d), the tail piece comes into existence and then gets combined with the
rightmost leg.
\index{saliency}
It is appropriate to say that the saliency of the tail is quite low compared
to the saliency of the other five pieces, due to its short life span as an individual entity.
At the next threshold level in (c), the two legs combine to form a single
piece corresponding to the lower body; and the two arms and the head combine to form
the upper body.
The separation of the peripheral structure into upper and lower body divides
the elliptical gross structure along the minor axis.
Finally in
(a), the upper and lower parts combine to form a single peripheral
structure.
In all of the previous examples, the gross structure is shown in
gray, as a single part. There may be shapes such that the gross
structure is composed of multiple parts, i.e. shapes with strong
necks. The parts of the butterfly shape in Fig.~\ref{fig:kelebek10}
(a) are shown in (b) and (c). In (b), the parts in the peripheral
structure are shown in bright colors; the gross structure is in
gray. In (c) the parts in the gross structure are shown in bright
colors; the peripheral structure is in gray. For reference, the
restriction of $\omega$ to the peripheral structure, i.e. where it is
negative, is shown in (d). As the neck which connects the two lobes
of the butterfly gets thinner, the gross structure may be split into
two disjoint sets. (See Fig.~\ref{fig:kelebek8}.)
\begin{figure}[b]
\centering
{\footnotesize\begin{tabular}{cccc}
\includegraphics[height=2.45cm]{Figs/ekfeb25_10_s.png}&
\includegraphics[height=2.45cm]{Figs/iccv_ek_10.png}&
\includegraphics[height=2.45cm]{Figs/iccv_ek_10_segp_wtimes01.png}&
\includegraphics[height=2.45cm]{Figs/ekfeb25_10_wn.png}\\
(a) & (b) & (c) & (d)
\end{tabular}}
\caption{A shape whose gross structure is composed of two blobs.
(a) A butterfly shape on a $60 \times 60$ lattice. (b-c) The parts.
(d) The restriction of $\omega$ to areas where the values are
negative.} \label{fig:kelebek10}
\vglue 18pt
\centering
{\footnotesize\begin{tabular}{cccc}
\includegraphics[height=2.45cm]{Figs/ekfeb25_8_s.png}&
\includegraphics[height=2.45cm]{Figs/kelebek_quant04_seg.png}&
\includegraphics[height=2.45cm]{Figs/iccv_ek_8_segp.png}&
\includegraphics[height=2.45cm]{Figs/ekfeb25_8_wn.png}\\
(a) & (b) & (c) & (d)
\end{tabular}}
\caption{When the neck gets thinner, the gross structure may split
into two disjoint sets.} \label{fig:kelebek8}
\end{figure}
\begin{figure}[t]
\centering
{\footnotesize \begin{tabular}{ccc}
\includegraphics[height=2.9cm]{Figs/imgdata_10_s.png}&
\includegraphics[height=2.9cm]{Figs/prickly_iccv07.png}&
\includegraphics[height=2.9cm]{Figs/prickly_isvc07.png}\\
(a)&(b) &(c)\\[6pt]
\includegraphics[height=2.9cm]{Figs/iccv_imgdata_10.png}&
\includegraphics[height=2.9cm]{Figs/prickly_watersheds_p.png}&
\includegraphics[height=2.9cm]{Figs/prickly_hist_thr3.png}\\
(d)& (e) & (f)
\end{tabular}}
\caption{A noisy shape composed of two blobs. (a) The prickly pear
on a $80 \times 80$ lattice. (b-c) The decompositions presented
in~\cite{Mi07} and \cite{Zeng07}, respectively. (d-e) The
decomposition by the new method. (f) The restriction of $\omega$ to where
$\omega <-3$. See text. [(b-c) taken from the original sources~\cite{Mi07} and \cite{Zeng07}]}
\label{fig:prickly}
\end{figure}
\subsection{Comparison to Recent Decomposition Methods by Mi and DeCarlo~\cite{Mi07} and Zeng
{\sl et al.}~\cite{Zeng07}}
In two recent papers, Mi and DeCarlo~\cite{Mi07}, and Zeng {\sl
et al.}~\cite{Zeng07} present decomposition methods which exploit
both the skeleton and the boundary curvature information. Neither
of the methods considers a fully global context. \index{context} It is worth
comparing the three methods using an illustrative example: {\sl the
prickly pear}, which is shown in Fig.~\ref{fig:prickly}(a).
\index{shape!prickly}
The decompositions obtained in Mi and DeCarlo~\cite{Mi07} and Zeng
{\sl et al.}~\cite{Zeng07} using local symmetry axis and contour
curvature are shown in (b) and (c), respectively. The new
decomposition (using a reduced $80 \times 80$ resolution) is shown
in (d) and (e). All of the three methods find two blobs. Similar to
the new decomposition, the decomposition by Mi and
DeCarlo~\cite{Mi07} separates the boundary texture (shown in gray
in (b)) from the main structure. On the other hand, the
decomposition by Zeng {\sl et al.}~\cite{Zeng07} does not separate
boundary texture from the main structure leading to the
interpretation of the shape as two prickly balls glued together.
\index{shape!parsing}
\index{context}
\index{medial axis}
As experimental studies on human subjects
demonstrate~\cite{Renninger03}, multiple (and mutually exclusive)
parses of a given shape are possible. Thus, in a purely bottom-up
process without considering a specific application or a context,
one cannot decide which partitioning scheme is the best.
I remark that the advantages of the new method are purely from the
computational point of view. It does not involve any parameters or
thresholds. It can work at very low resolutions as opposed to other
methods which involve the computation of curvature or local symmetry
axes, since their computation requires a certain minimum resolution.
\index{context}
Notice
that the restriction of the field $\omega$ to where the values are
less than the threshold $-3$, in (f), indicates the first
partitioning of the peripheral structure along the minor axis,
similar to the turtle case in Fig.~\ref{fig:turtle}.
This indication is consistent with the
result of the method of Zeng {\sl et al.}~\cite{Zeng07} which
computes the partition line by sequentially eliminating the boundary
detail using Discrete Curve Evolution~\cite{Latecki99}.
\index{curve evolution!discrete}
\index{features!local} \index{features!global} \index{non-local}
The new method is essentially a parameter-free method, under the assumption
that equal importance should be given to local and global terms.
However, the other two
methods do not use global features (note that non-local is not
necessarily global). Thus, for a comparative evaluation purpose, it
is worth trying to reduce the effect of the global term by imagining
a constant weight $c < 1$ in front of $E_{Reg}^G$
in~(\ref{eq:energy}). In Fig.~\ref{fig:prickly_param} (a-b), $c=0.5$.
One can notice the slight reduction of the peripheral region. The
global term is responsible for the balance between the negative
values and the positive values of $\omega$. As its importance
decreases, more pixels tend to attain positive values. In (c),
$c=0.125$. As $c$ decreases, the peripheral structure shrinks
further and the implied decomposition approaches to the one shown in
Fig.~\ref{fig:prickly} (c). In Fig.~\ref{fig:kelebek_param}, the
effect of reducing the importance of the global term is demonstrated
using butterfly shapes.
\begin{figure}[ht]
\centering
{\footnotesize \begin{tabular}{ccc}
\includegraphics[height=2.9cm]{Figs/prickly_j0dot5_seg.png}&
\includegraphics[height=2.9cm]{Figs/prickly_j0dot5_segp.png}&
\includegraphics[height=2.9cm]{Figs/prickly_j0dot125_seg.png}\\
(a)&(b) &(c)\\
\end{tabular}}
\caption{The effect of reducing the importance of the global term.
(a-b) $c=0.5$. (c) $c=0.125$. } \label{fig:prickly_param}
\vglue 18pt
\centering
\begin{tabular}{ccc}
\includegraphics[height=2.9cm]{Figs/kelebek1_025_clean.png}
\includegraphics[height=2.9cm]{Figs/kelebek1_025_segp.png}
\includegraphics[height=2.9cm]{Figs/kelebek2_025_segp.png}
\end{tabular}
\caption{The effect of reducing the importance of the global term.
$c=0.25$.} \label{fig:kelebek_param}
\end{figure}
\begin{figure}[t]
\vglue -4pt
\centering
{\footnotesize\begin{tabular}{ccc}
\includegraphics[height=2.6cm]{Figs/leaf1_j0dot025_seg.png}&
\includegraphics[height=3cm]{Figs/leaf_iccv07.png}&
\includegraphics[height=2.7cm]{Figs/lakamper_leaf.png}\\
(a)& (b)& (c)
\end{tabular}}
\caption{The effect of significantly reducing the importance of the
global term. (a) The decomposition with the new method, $c=0.025$.
(b-c) The decompositions presented by Mi and DeCarlo~\cite{Mi07} and
Zeng {\sl et al.}~\cite{Zeng07}, respectively. [(b-c) taken from
the original sources~\cite{Mi07,Zeng07}] } \label{fig:leaf}
\end{figure}
In Fig.~\ref{fig:leaf}, the importance of the global term is
significantly reduced by setting $c=0.025$. The decomposition result
using the new method is shown in (a). For reference, the decomposition
results by the previous methods are shown in (b-c).
\section{Connection to the methods of Tari,
Shah and Pien~\cite{Tari96,Tari97,Tari98},
Aslan and Tari~\cite{Aslan05,Aslantez,Aslan08}, and Gorelick {\sl et al.}}
\label{S2}
\index{PDE}
\index{PDE!Laplace operator}
\index{medial axis!Aslan and Tari}
\index{medial axis!Tari, Shah and Pien}
Recall that the field $\omega$ is the minimizer of the following energy :
\begin{equation*}
E(\omega) = \sum_{i,j\in \Omega} E_{Reg}^G(\omega_{i,j}) + E_{Reg}^L(\omega_{i,j}) + w_{Bdy} E_{Bdy}(\omega_{i,j})
\end{equation*}
Let us omit the term $E_{Reg}^G$ that models the global interaction among the
shape pixels to obtain a reduced energy:
\begin{equation}
\sum_{i,j\in \Omega} - \left( \omega_{i+1,j} \cdot \omega_{i-1,j} +
\omega_{i,j+1} \cdot \omega_{i,j-1} \right)
+ w_{Bdy} \left( \omega_{i,j} - t_{i,j} \right)^2
\label{eq:reducedE}
\end{equation}
By setting the first derivative of (\ref{eq:reducedE}) with respect
to $\omega_{i,j}$ equal to {\sl zero}, the condition satisfied by the
minimizer is obtained as:
\begin{equation}
(w_{Bdy}-2 )\omega_{i,j} - w_{Bdy} t_{i,j} +4 \omega_{i,j} - \omega_{i-1,j} -\omega_{i+1,j}-\omega_{i,j-1}-\omega_{i,j+1}=0
\end{equation}
Letting $(w_{Bdy}-2 ) =\alpha >0 $ gives
\begin{equation}
\left( 4 \omega_{i,j} - \omega_{i-1,j} -\omega_{i+1,j}-\omega_{i,j-1}-\omega_{i,j+1} \right)
- \alpha \omega_{i,j} = \left( \alpha +2 \right) t_{i,j}
\label{eq:reduced_cond}
\end{equation}
(\ref{eq:reduced_cond}) is clearly the discretization, using the central
difference approximation, of the PDE (\ref{eq:tsp_withf}) given
below:
\begin{eqnarray}
\label{eq:reduced} \left( \bigtriangleup - \alpha \right) w(x,y)
&=& f(x,y) \label{eq:tsp_withf} \\
{\mbox{with }} w({\mathbf x}) &=0& \mbox{ for } {\mathbf x}=(x,y)\in {\partial \Omega} \nonumber
\end{eqnarray}
where $ \bigtriangleup$ denotes the Laplace operator, and the right hand side inhomogeneity $f(x,y)$ is a scaled distance transform.
(\ref{eq:tsp_withf}) is defined on a planar shape domain $\Omega$
which is a connected, bounded, open domain of ${\mathbf R^2}$.
Interestingly, when the right hand side inhomogeneity
is replaced with a constant function $f(x,y)=1$ and
$\alpha$ is set to {\sl zero} (i.e. $w_{Bdy}=2$ as in the new method),
one obtains the Poisson equation which has been recently proposed by
Gorelick {\sl et al.}~\cite{Gorelick06} as a shape representation tool.
On the other hand, when $\alpha>0$ and $f(x,y)=-\alpha$, one
obtains the PDE:
\index{PDE!Poisson equation}
\begin{eqnarray}
\left( \bigtriangleup - \alpha \right) v &=&-\alpha \label{eq:tsp}
\\
{\mbox{with }} v({\mathbf x}) &=0& \mbox{ for } {\mathbf x}=(x,y)\in {\partial \Omega} \nonumber
\end{eqnarray}
which has been proposed earlier by Tari, Shah and
Pien~\cite{Tari96,Tari97,Tari98}. The qualitative behavior of
the $v$ function for small $\alpha$ is essentially the same with
that of the function obtained by solving the
Poisson~\cite{Gorelick06} equation. Both of them are essentially
weighted distance transforms~\cite{MaragosButt2000}
where the local steps between neighboring points are given different
costs. Tari, Shah and Pien~\cite{Tari96} have initially proposed $v$ function as a
linear and
a computationally efficient alternative
to the curve evolution scheme by Kimia, Tannenbaum and Zucker~\cite{Kimia95} by
showing that the successive level curves of $v$ mimic the motion of curves with
a curvature dependent speed in the direction of the inward normal~\cite{Tari96}.
\index{curve evolution} \index{curve evolution!efficient
alternative} \index{curve evolution!Kimia, Tannenbaum and Zucker} \index{distance transform}
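In the discrete setting, $v$ requires only a single sparse solve; a minimal sketch, reusing the Dirichlet 5-point Laplacian $L$ assembled in the sketch following (\ref{eq:method}):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def v_field(L, alpha):
    # discrete (Laplacian - alpha) v = -alpha with v = 0 on the boundary;
    # L is the positive definite 5-point matrix, so -L stands for the
    # discrete Laplace operator
    n = L.shape[0]
    return spsolve(L + alpha * sp.eye(n), np.full(n, alpha))
\end{verbatim}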
Furthermore, they
demonstrated that the gradient of $v$ along a level
curve approximates the curvature of level curves, thus, suggesting a
robust method for skeleton extraction by locating the extrema of the
gradient of $v$ along the level curves. The skeleton computation
method of Tari, Shah and Pien~\cite{Tari96,Tari97,Tari98} exploits
the connection among morphology, distance transforms and fronts
propagating with curvature dependent speeds. Such connections have
stimulated many interesting approaches in solving shape related
problems~\cite{MaragosButt2000}. The
importance of the Tari, Shah and Pien approach is that it is the first
attempt to unify segmentation and local symmetry computation into a
single formulation by exploiting the connection between
(\ref{eq:tsp}) and the Mumford and Shah~\cite{Mumford89}
segmentation functional (via its Ambrosio and
Tortorelli~\cite{Ambrosio90} approximation). It naturally extends to
shapes in arbitrary dimension~\cite{Tari98}. (In a related publication~\cite{Tari09}, the author
introduces an additive normalization term to (\ref{eq:tsp}) which
forces the solution to oscillate, yielding the same boundary texture
and gross structure separation. The proposed method is connected to
a variety of morphological ideas as well as to the method of Tari,
Shah and Pien in the variational calculus and PDE setting.) \index{segmentation}
\index{morphology} \index{Ambrosio and Tortorelli} \index{Mumford
and Shah} \index{level curve}
\index{diffusion}
\index{shape!topology}
\index{shape!evolution}
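As a rough illustration of the skeleton extraction idea described above, the Python sketch below flags pixels at which $|\nabla v|$ is extremal along the local level-curve (tangential) direction. The one-pixel sampling step, the bilinear interpolation, and the decision to flag tangential maxima are assumptions of this sketch rather than details fixed by \cite{Tari96,Tari97,Tari98}; in practice the flagged set would still be pruned and grouped.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def skeleton_candidates(v, inside, eps=1e-9):
    # Gradient in (row, col) coordinates and its magnitude.
    gy, gx = np.gradient(v)
    gmag = np.hypot(gx, gy)
    # Unit tangent to the level curve: the gradient rotated by 90 degrees.
    trow = -gx / (gmag + eps)
    tcol = gy / (gmag + eps)
    ys, xs = np.nonzero(inside)
    # Sample |grad v| one pixel ahead of and behind each point along
    # the tangent direction (bilinear interpolation).
    ahead = map_coordinates(gmag, [ys + trow[ys, xs], xs + tcol[ys, xs]],
                            order=1)
    behind = map_coordinates(gmag, [ys - trow[ys, xs], xs - tcol[ys, xs]],
                             order=1)
    here = gmag[ys, xs]
    ridge = np.zeros(v.shape, dtype=bool)
    ridge[ys, xs] = (here >= ahead) & (here >= behind)
    return ridge
\end{verbatim}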
In Fig.~\ref{fig:tsp_mri1}, the basic method of Tari, Shah and Pien
is illustrated using an example by C. Aslan~\cite{Aslantez,Aslan08}.
In the top row, the level curves of the $v$ function,
mimicking the behavior of fronts propagating with curvature
dependent speed, are shown. In each column, a different $\alpha$
value is used when computing $v$ via (\ref{eq:tsp}). As $\alpha$
decreases from left to right, the relative speed of the high
curvature points increases. Consequently, the inner level curves
lose their concavities and become smoother earlier. Thus the value
of $\alpha$ determines the level of smoothing (or {\sl diffusion}).
The arrows indicate the maxima and the saddle points. The topological
interpretation of the shape varies with $\alpha$. When, in the
first column, $\alpha=1/{4^2}$, the evolving shape boundary splits
into three curves, which shrink to three distinct
maxima separated by the two saddle points, indicating three parts.
In (b) and (c) $\alpha$ is reduced to $1/{8^2}$ and
$1/{16^2}$, respectively. The level curves lose their
concavities (which imply parts) earlier and evolving curves split
into two parts instead of three.
\begin{figure}
\centering
{\footnotesize \begin{tabular}{ccc}
\includegraphics[height=8cm]{eps_pami08/Fig5a.png} &
\includegraphics[height=8cm]{eps_pami08/Fig5b.png} &
\includegraphics[height=8cm]{eps_pami08/Fig5c.png} \\
(a) & (b) & (c)
\end{tabular}}
\caption{ Tari, Shah and Pien method using three different $\alpha$
values. (a) $\alpha=1/{4^2}$ (b) $\alpha=1/{8^2}$ (c)
$\alpha=1/{16^2}$. The top row (level curves of $v$) illustrates
different topological interpretations by varying
$\alpha$. The arrows indicate the maxima and the saddle points. The
bottom row displays the skeletons computed from the respective $v$
functions. [Figures by C. Aslan~\cite{Aslantez,Aslan08}].}
\label{fig:tsp_mri1}
\end{figure}
\begin{figure}
\centering
{\footnotesize\begin{tabular}{cc}
\includegraphics[height=4.3cm]{eps_pami08/Fig6aa.png} &
\includegraphics[height=4.3cm]{eps_pami08/Fig6bb.png} \\
(a) & (b) \\
\end{tabular}}
\caption{ Un-intuitive skeleton branches (red color) in Tari, Shah
and Pien~\cite{Tari96,Tari97}. (a) $\alpha=1/{8^2}$ (b)
$\alpha=1/{16^2}$. See text for discussion. [Figures by C.
Aslan~\cite{Aslantez,Aslan08}].} \label{fig:tsp_mri2}
\vglue 6pt
\end{figure}
Skeleton branches computed from the respective $v$ functions
(displayed at the bottom row) typically track the evolution of the
indentations and protrusions of the shape. However, some of them exhibit a
pathological behavior that frequently occurs in the Tari, Shah and
Pien method when a limb is close to a neck. Notice that the
skeletons contain branches that do not correspond to any protrusion
or indentation of the shape. Such branches are marked with red color
in Fig.~\ref{fig:tsp_mri2}. Aslan and
Tari~\cite{Aslan05,Aslantez,Aslan08} claim that the reason for this
pathology is {\sl insufficient diffusion}. \index{diffusion}
\index{shape!topological change}
As seen in Fig.~\ref{fig:tsp_mri2}, increasing the amount of
diffusion by decreasing $\alpha$ makes such branches disappear. In
(a), the computation stopped while the shape was transforming from a
shape with three major blobs to a shape with two major blobs. The
circular branch colored red is due to the interaction of the
center of part two and the neck between parts one and two. As
shown in (b), increasing the amount of diffusion by decreasing
$\alpha$ makes this branch disappear since the topological change
is complete. This time, the shape is between the state with two
blobs (parts one and two together and part three) and the state with
one blob. The red branch is due to the interaction of the center
point of part
three and the neck between parts two and three.
Thus, as a remedy, they propose to increase the diffusion by
gradually decreasing
$\alpha$ so that almost every shape is forcefully
interpreted as a single blob ignoring the part structure.
Following this ad-hoc modification, the new $v$ function has been successfully applied in shape matching applications~\cite{Aslantez,Aslan08,Baseski08,Erdem09}.
However, the method cannot be applied to shapes which cannot be
reduced to a single blob. Such cases include:
\begin{itemize}
\item{shapes with holes;}
\item{thin and long shapes with constant width; }
\item{shapes with more than one equally prominent part.}
\end{itemize}
The strategy adopted~\cite{Aslan05,Aslantez,Aslan08} for shapes with
two equally prominent parts (dumbbell-like shapes) is to retain
their dumbbell-like character. This ad-hoc solution introduces a
representational instability as the width of the neck that separates
the two prominent parts changes.
\begin{figure}[ht]
\vglue 6pt
\centering
\begin{tabular}{ccc}
\includegraphics[height=4.5cm]{dumbell_levels/dumbell3ax_lsc.png}&
\includegraphics[height=4.5cm]{dumbell_levels/dumbell1ax_lsc.png}&
\includegraphics[height=4.5cm]{dumbell_levels/dumbell2ax_lsc.png}
\end{tabular}
\caption{The level curves of $\omega$. The inner black level curve
is the zero-level curve. In all of the three cases, $\Omega^+$
denotes the gross structure.} \label{fig:dumbell_lc}
\end{figure}
\vfill\pagebreak
\begin{figure}[t]
\centering
\begin{tabular}{ccccc}
\includegraphics[height=3.6cm]{Figs/iccv_Aslandata_22.png}&
\includegraphics[height=3.6cm]{Figs/iccv_Aslandata_20.png}&
\includegraphics[height=3.6cm]{Figs/iccv_Aslandata_21.png}
\end{tabular}
\caption {Parts of the peripheral structure for dumbbell-like shapes
of varying neck thickness. } \label{fig:dumbell_parts}
\end{figure}
In Fig.~\ref{fig:dumbell_lc}, the level
curves of $\omega$ are shown for three dumbbell-like shapes with varying
neck thickness. In all of the three cases, $\Omega^+$ denotes the
gross structure. Instead of relying on a single point to be the
shape center as in the method of Aslan and Tari~\cite{Aslan05,Aslantez,Aslan08},
the new method takes a different approach. Robustness
is obtained by replacing a point estimate for the center with an
interval estimate. Parts of the peripheral structure are depicted in
Fig.~\ref{fig:dumbell_parts}.
\begin{figure}[b]
\centering
{\footnotesize\begin{tabular}{cc}
\includegraphics[height=2.5cm]{epsler_kedi/kedi1_K.png} &
\includegraphics[height=2.5cm]{epsler_kedi/kedi1_L2axes.png} \\
(a) & (b) \\[4pt]
\includegraphics[height=2.5cm]{epsler_kedi/obj9_1skel.png}&
\includegraphics[height=2.5cm]{epsler_kedi/obj9_1partsEmre.png}\\
(c) & (d)
\end{tabular}}
\caption{(a) Skeleton points detected with the method of Tari, Shah
and Pien~\cite{Tari97} using the modified
function~\cite{Aslan05,Aslantez,Aslan08}. (b) After pruning and
grouping skeleton points. (c) Disconnecting the major skeleton
branch~\cite{Aslan08}. (d) Parts obtained from disconnected
branches. [Unpublished result by the author's former student E. Baseski.] }
\label{fig:Emrekedi}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[height=2.5cm]{Figs/objdata_27_wabs-NEW.png} &
\includegraphics[height=2.5cm]{epsler_kedi/cat_wsold-NEW.png}\\
\end{tabular}
\caption{(a) $|\omega|$. (b) Parts.} \label{fig:kediparts}
\end{figure}
\index{medial axis!disconnected skeleton}
The method of Aslan and Tari implicitly
codes the part structure via disconnected skeleton branches.
Fig.~\ref{fig:Emrekedi} demonstrates the possibility of extracting
parts from these disconnected branches. In (a), the skeleton points
detected with the method of Tari, Shah and Pien~\cite{Tari97} using
the modified function~\cite{Aslan05,Aslantez,Aslan08} are shown. In
(b), the result of the pruning and grouping procedure is shown. In (c),
the final representation (called the disconnected Aslan
skeleton~\cite{Aslan05,Aslan08,Aslantez} to distinguish it from the
Tari, Shah and Pien skeleton~\cite{Tari96,Tari97}) is depicted. In
(d), parts are extracted, for each disconnected branch, by fitting a
spline that passes through the disconnection point and the nearest
indentations. The parts captured by the new method are presented in
Fig.~\ref{fig:kediparts} for the same cat shape (reduced
resolution) for comparison.
|
2,869,038,154,401 | arxiv | \section{Introduction}
In recent years, there has been significant interest in studying estimation and control problems for quantum systems \cite{WM1,YK1,YK2,NY,JNP,NJP,GGY,MP1,MP2,YNJP,GJ,GJN,IRP1}. Linear quantum systems are an important class of quantum systems and have been of particular interest in this context \cite{WM1,NY,JNP,NJP,GGY,GJN,WM2,GZ,WD,NJD,HM,SSM,IRP3}. Such linear quantum systems have been useful in describing quantum optical devices such as linear quantum amplifiers \cite{GZ}, finite bandwidth squeezers \cite{GZ} and optical cavities \cite{WM2,BR}. Coherent feedback control has also been studied extensively in recent years for linear quantum systems \cite{JNP,NJP,MP1,MP2,HM,WM3,SL,GW,IRP3}. The authors have previously explored a related coherent-classical estimation problem \cite{IRP2,RPH}, where the estimator consists of a classical part, which produces the desired final estimate, and a quantum part, which may also involve coherent feedback. In that work, optimal classical estimation for linear quantum systems was studied using a complex Kalman filter.
Ref. \cite{MJ} considered a quantum observer, that is, a purely quantum system, which produces a quantum estimate of a variable for the quantum plant. By contrast, we here consider classical estimation for linear quantum systems, where the estimator is a classical system that yields a classical estimate of a variable for the quantum plant. A robust quantum observer for uncertain quantum systems was constructed in Ref. \cite{NY}. On the other hand, here we build a robust complex classical estimator for uncertain linear quantum systems. A robust classical $H_\infty$ estimator for uncertain linear systems was presented in Ref. \cite{FDX}. Here, we develop a more general, complex-valued $H_\infty$ filter for robust classical estimation of uncertain linear quantum systems. The solution to the $H_\infty$ estimation problem is obtained by means of two algebraic Riccati equations, upon converting the estimation problem to a scaled $H_\infty$ control problem.
The paper is structured as follows. We introduce the class of linear quantum systems considered in this paper in Section \ref{sec:lqs}. The complex Kalman filter is discussed in Section \ref{sec:kalman} for optimal classical estimation of linear quantum systems. Section \ref{sec:robust} then considers the robust $H_\infty$ estimation problem for uncertain linear quantum systems and presents our main result. Illustrative examples of our robust estimator for two different scenarios are provided in Sections \ref{sec:num_ex1} and \ref{sec:num_ex2}. Finally, Section \ref{sec:conc} concludes the paper with relevant remarks and possible future work.
\section{Linear Quantum Systems}\label{sec:lqs}
The class of linear quantum systems we consider here is described by the quantum stochastic differential equations (QSDEs) \cite{IRP2}:
\begin{equation}\label{eq:lqs_1}
\begin{split}
\left[\begin{array}{c}
da(t)\\
da(t)^{\#}
\end{array}\right] &= A \left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right] dt + B \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{\#}
\end{array}\right],\\
\left[\begin{array}{c}
d\mathcal{Y}(t)\\
d\mathcal{Y}(t)^{\#}
\end{array}\right] &= C \left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right] dt + D \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{\#}
\end{array}\right],
\end{split}
\end{equation}
where
\begin{equation}\label{eq:lqs_2}
\begin{split}
A &= \Delta(A_1,A_2), \qquad B = \Delta(B_1,B_2),\\
C &= \Delta(C_1,C_2), \qquad D = \Delta(D_1,D_2).
\end{split}
\end{equation}
Here, $a(t) = [a_1(t) \hdots a_n(t)]^T$ is a vector of annihilation operators. The adjoint of the operator $a_i$ is called a creation operator, denoted by $a_i^{*}$. The vector $\mathcal{A}(t) = [\mathcal{A}_1(t) \hdots \mathcal{A}_m(t)]^T$ represents a collection of external independent quantum field operators and the vector $\mathcal{Y}$ represents the corresponding vector of output field operators. Also, the notation $\Delta(A_1,A_2)$ denotes the matrix $\left[\begin{array}{cc} A_1 & A_2\\
A_2^{\#} & A_1^{\#}
\end{array}\right]$. Here, $A_1$, $A_2 \in \mathbb{C}^{n \times n}$, $B_1$, $B_2 \in \mathbb{C}^{n \times m}$, $C_1$, $C_2 \in \mathbb{C}^{m \times n}$, and $D_1$, $D_2 \in \mathbb{C}^{m \times m}$. Moreover, $^{\#}$ denotes the adjoint of a vector of operators or the complex conjugate of a complex matrix. Furthermore, $^\dagger$ denotes the adjoint transpose of a vector of operators or the complex conjugate transpose of a complex matrix.
\begin{definition}
\cite{IRP2} A complex linear quantum system of the form (\ref{eq:lqs_1}), (\ref{eq:lqs_2}) is said to be physically realizable if there exists a complex commutation matrix $\Theta = \Theta^\dagger$, a complex Hamiltonian matrix $M = M^\dagger$, and a coupling matrix $N$ such that
\begin{equation}\label{eq:theta}
\Theta = TJT^\dagger,
\end{equation}
where $J = \left[\begin{array}{cc}
I & 0\\
0 & -I
\end{array}\right]$, $T = \Delta(T_1,T_2)$ is non-singular, $M$ and $N$ are of the form
\begin{equation}
M = \Delta(M_1,M_2), \qquad N = \Delta(N_1,N_2),
\end{equation}
and
\begin{equation}
\begin{split}
A &= -\iota\Theta M - \frac{1}{2}\Theta N^\dagger JN,\\
B &= -\Theta N^\dagger J,\\
C &= N,\\
D &= I.
\end{split}
\end{equation}
\end{definition}
Here, $\iota = \sqrt{-1}$ is the imaginary unit, and the commutation matrix $\Theta$ satisfies the following commutation relation:
\begin{equation}\label{eq:comm_rel1}
\begin{split}
&\left[\left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right], \left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right]^\dagger\right]\\
&= \left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right]\left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right]^\dagger - \left(\left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right]^\# \left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right]^T\right)^T\\
&= \Theta .
\end{split}
\end{equation}
One can verify that $\Theta$ is a $2n \times 2n$ matrix, the elements of which are as follows, given $i,j = 1 \hdots n$:
\begin{align*}
\Theta_{ij} &= [a_i,a_j^{*}],\\
\Theta_{(n+i)(n+j)} &= [a_i^{*},a_j],\\
\Theta_{i(n+j)} &= [a_i,a_j],\\
\Theta_{(n+i)j} &= [a_i^{*},a_j^{*}],\\
\Theta_{ii} &= 1,\\
\Theta_{(n+i)(n+i)} &= -1,\\
\Theta_{i(n+i)} &= \Theta_{(n+i)i} = 0.
\end{align*}
The annihilation and creation operators can be used to construct the number operator $\mathbf{N} = a^\dagger a$ (not to be confused with the coupling matrix $N$ above), the eigenstates of which form the orthonormal number (or Fock) states \cite{BR}:
\begin{equation*}
\mathbf{N}|q\rangle = a^\dagger a|q\rangle = q|q\rangle , \quad q = 0,1,2,\hdots
\end{equation*}
In particular, the state $|0\rangle$ is called the vacuum state \cite{BR}:
\begin{equation*}
a|0\rangle = 0.
\end{equation*}
The annihilation and creation operators have the properties of lowering and raising the number of a state respectively \cite{BR}:
\begin{align*}
a|q\rangle &= \sqrt{q}|q-1\rangle,\\
a^\dagger |q\rangle &= \sqrt{q+1}|q+1\rangle.
\end{align*}
\begin{theorem}\label{thm:phys_rlz}
\cite{IRP2} The linear quantum system (\ref{eq:lqs_1}), (\ref{eq:lqs_2}) is physically realizable if and only if there exists a complex matrix $\Theta = \Theta^\dagger$ such that $\Theta$ is of the form in (\ref{eq:theta}), and
\begin{equation}\label{eq:phys_rlz}
\begin{split}
A\Theta + \Theta A^\dagger + BJB^\dagger &= 0,\\
B &= -\Theta C^\dagger J,\\
D &= I.
\end{split}
\end{equation}
\end{theorem}
If the system (\ref{eq:lqs_1}) is physically realizable, then the matrices $M$ and $N$ define a complex open harmonic oscillator with a Hamiltonian operator
\[ \mathbf{H} = \frac{1}{2} \left[\begin{array}{cc}
a^\dagger & a^T
\end{array}\right] M \left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right],\]
and a coupling operator
\[ \mathbf{L} = \left[\begin{array}{cc}
N_1 & N_2
\end{array}\right] \left[\begin{array}{c}
a\\
a^{\#}
\end{array}\right].\]
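We note in passing that the conditions of Theorem \ref{thm:phys_rlz} are straightforward to check numerically for given system matrices. A minimal Python sketch (our illustration; the commutation matrix $\Theta$ is assumed supplied, e.g. $\Theta = J$ for a canonical system with $T = I$) returns the residual norms of the three conditions in (\ref{eq:phys_rlz}), all of which should vanish:
\begin{verbatim}
import numpy as np

def realizability_residuals(A, B, C, D, Theta):
    # J = diag(I_m, -I_m), where 2m is the number of field channels.
    m2 = C.shape[0]
    J = np.diag(np.r_[np.ones(m2 // 2), -np.ones(m2 // 2)])
    r1 = A @ Theta + Theta @ A.conj().T + B @ J @ B.conj().T
    r2 = B + Theta @ C.conj().T @ J
    r3 = D - np.eye(m2)
    return [np.linalg.norm(r) for r in (r1, r2, r3)]
\end{verbatim}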
\section{Kalman Filter}\label{sec:kalman}
The schematic diagram of a classical estimation scheme is provided in Fig. \ref{fig:cls_scm}. We consider a quantum plant, which is a quantum system of the form (\ref{eq:lqs_1}), (\ref{eq:lqs_2}), defined as follows:
\begin{equation}\label{eq:plant}
\begin{split}
\left[\begin{array}{c}
da(t)\\
da(t)^{\#}
\end{array}\right] &= A \left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right] dt + B \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{\#}
\end{array}\right],\\
\left[\begin{array}{c}
d\mathcal{Y}(t)\\
d\mathcal{Y}(t)^{\#}
\end{array}\right] &= C \left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right] dt + D \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{\#}
\end{array}\right],\\
z &= L\left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right].
\end{split}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{F2.eps}
\caption{Schematic diagram of classical estimation for a quantum plant.}
\label{fig:cls_scm}
\end{figure}
Here, $z$ denotes a scalar operator on the underlying Hilbert space and represents the quantity to be estimated. Also, $\mathcal{Y}(t)$ represents the vector of output fields of the plant, and $\mathcal{A}(t)$ represents a vector of quantum disturbances acting on the plant.
A quadrature of each component of the vector $\mathcal{Y}(t)$ is measured using homodyne detection to produce a corresponding classical signal $y_i$:
\begin{equation}\label{eq:class_hd}
\begin{split}
dy_1 &= \frac{e^{-\iota\theta_1}}{\sqrt{2}}d\mathcal{Y}_1 + \frac{e^{\iota\theta_1}}{\sqrt{2}}d\mathcal{Y}_1^{*},\\
&\vdots\\
dy_m &= \frac{e^{-\iota\theta_m}}{\sqrt{2}}d\mathcal{Y}_m + \frac{e^{\iota\theta_m}}{\sqrt{2}}d\mathcal{Y}_m^{*}.
\end{split}
\end{equation}
Here, the angles $\theta_1,\hdots,\theta_m$ determine the quadrature measured by each homodyne detector. The vector of classical signals $y = [y_1 \hdots y_m]^T$ is then used as the input to a classical estimator defined as follows:
\begin{equation}\label{eq:class_estimator}
\begin{split}
dx_e &= A_ex_edt + K_edy,\\
\hat{z} &= L_ex_e.
\end{split}
\end{equation}
For the sake of comparison, we will first consider the optimal estimation problem for quantum linear systems; see also \cite{NY,BHJ}. The optimal classical estimator is given by the standard (complex) Kalman filter defined for the system (\ref{eq:plant}), (\ref{eq:class_hd}). This optimal classical estimator is obtained from the solution to the algebraic Riccati equation:
\begin{equation}\label{eq:class_riccati}
\begin{split}
AP &+ PA^\dagger + BB^\dagger - (B + PC^\dagger)S^\dagger S(B + PC^\dagger)^\dagger =0,
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
S &= \left[\begin{array}{cc}
S_1 & S_2
\end{array}\right],\\
S_1 &= \left[\begin{array}{cccc}
\frac{e^{-\iota\theta_1}}{\sqrt{2}} & 0 & \hdots & 0\\
0 & \frac{e^{-\iota\theta_2}}{\sqrt{2}} & \hdots & 0\\
& & \ddots & \\
& & & \frac{e^{-\iota\theta_m}}{\sqrt{2}}
\end{array}\right],\\
S_2 &= \left[\begin{array}{cccc}
\frac{e^{\iota\theta_1}}{\sqrt{2}} & 0 & \hdots & 0\\
0 & \frac{e^{\iota\theta_2}}{\sqrt{2}} & \hdots & 0\\
& & \ddots & \\
& & & \frac{e^{\iota\theta_m}}{\sqrt{2}}
\end{array}\right].
\end{split}
\end{equation}
Here we have assumed that the quantum disturbance $\mathcal{A}$ is purely canonical, i.e. $d\mathcal{A}d\mathcal{A}^\dagger = Idt$ and hence $D=I$.
Then, the corresponding optimal classical estimator (\ref{eq:class_estimator}) is defined by the equations:
\begin{equation}\label{eq:cls_sys_est}
\begin{split}
A_e &= A - K_eSC,\\
K_e &= (B + PC^\dagger)S^\dagger,\\
L_e &= L.
\end{split}
\end{equation}
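A numerical sketch of this construction is given below. The mapping of (\ref{eq:class_riccati}) onto SciPy's continuous-time algebraic Riccati equation solver (in its dual, filtering form with cross term $BS^\dagger$), and the assumption that the installed SciPy version accepts complex-valued matrices, are conventions of this sketch.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def complex_kalman_filter(A, B, C, L, S):
    # Filtering ARE:  A P + P A^H + B B^H
    #     - (P (SC)^H + B S^H)(S C P + S B^H) = 0,
    # i.e. SciPy's CARE with a = A^H, b = (SC)^H, s = B S^H, r = I.
    m = S.shape[0]
    Cm = S @ C
    P = solve_continuous_are(A.conj().T, Cm.conj().T,
                             B @ B.conj().T, np.eye(m),
                             s=B @ S.conj().T)
    K_e = (B + P @ C.conj().T) @ S.conj().T   # Kalman gain
    A_e = A - K_e @ Cm                        # estimator dynamics
    return A_e, K_e, L                        # L_e = L
\end{verbatim}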
\section{Robust $H_\infty$ Filter}\label{sec:robust}
Corresponding to the system described by (\ref{eq:plant}), (\ref{eq:class_hd}), we define our uncertain system modelled as follows:
\begin{equation}\label{eq:uncertain1}
\begin{split}
(\Sigma_1): \dot{x}(t) &= [A+\Delta A(t)]x(t) + [B+\Delta B(t)]w(t),\\
z(t) &= Lx(t),\\
y'(t) &= S[C+\Delta C(t)]x(t) + SDw(t),
\end{split}
\end{equation}
where $x(t) := \left[\begin{array}{c}
a(t)\\
a(t)^{\#}
\end{array}\right]$ is the state, $w(t)$ is the disturbance input, $z(t)$ is a linear combination of the state variables to be estimated, $y'(t)$ is the measured output, $L \in \mathbb{C}^{p \times 2n}$, $SC \in \mathbb{C}^{m \times 2n}$, $SD \in \mathbb{C}^{m \times 2m}$, and $\Delta A(\cdot)$, $\Delta B(\cdot)$ and $\Delta C(\cdot)$ denote the time-varying parameter uncertainties. These uncertainties have the following structure
\begin{equation}\label{eq:unc_pars}
\begin{split}
\left[\begin{array}{c}
\Delta A(t)\\
\Delta C(t)
\end{array}\right] &= \left[\begin{array}{c}
H_1\\
H_3
\end{array}\right]F_1(t)E,\\
\Delta B(t) &= H_2F_2(t)G,
\end{split}
\end{equation}
where $H_1$, $H_2$, $H_3$, $E$ and $G$ are known complex constant matrices with appropriate dimensions, and the unknown matrix functions $F_1(\cdot)$ and $F_2(\cdot)$ satisfy the following:
\begin{equation}\label{eq:unc_constraint}
\begin{split}
F_1^\dagger(t)F_1(t) &\leq I, \qquad \forall t,\\
F_2^\dagger(t)F_2(t) &\leq I, \qquad \forall t.
\end{split}
\end{equation}
Note that for the system (\ref{eq:uncertain1}) to be physically realizable, the following constraints are required to be satisfied by the uncertainties:
\begin{equation}\label{eq:phys_rlz_unc}
\begin{split}
\Delta A\Theta + \Theta\Delta A^\dagger + BJ\Delta B^\dagger + \Delta BJB^\dagger + \Delta BJ\Delta B^\dagger &= 0,\\
\Delta B &= -\Theta\Delta C^\dagger J.
\end{split}
\end{equation}
The robust $H_\infty$ estimation problem for the uncertain system (\ref{eq:uncertain1}) can be converted into a scaled $H_\infty$ control problem, as described in Ref. \cite{FDX}, by introducing the following parameterized linear time-invariant system corresponding to system (\ref{eq:uncertain1}):
\begin{equation}\label{eq:uncertain2}
\begin{split}
(\Sigma_2): \dot{x}(t) &= Ax(t) + \left[\begin{array}{ccc}
B & \frac{\gamma}{\epsilon_1}H_1 & \frac{\gamma}{\epsilon_2}H_2
\end{array}\right]\tilde{w}(t),\\
\tilde{z}(t) &= \left[\begin{array}{c}
\epsilon_1E\\
0\\
L
\end{array}\right]x(t) + \left[\begin{array}{ccc}
0 & 0 & 0\\
\epsilon_2G & 0 & 0\\
0 & 0 & 0
\end{array}\right]\tilde{w}(t) + \left[\begin{array}{c}
0\\
0\\
-I
\end{array}\right]u(t),\\
y'(t) &= SCx(t) + \left[\begin{array}{ccc}
SD & \frac{\gamma}{\epsilon_1}SH_3 & 0
\end{array}\right]\tilde{w}(t).
\end{split}
\end{equation}
Here, $u(t)$ is the control input, $\tilde{z}(t)$ is the controlled output, $\epsilon_1$, $\epsilon_2 > 0$ are suitably chosen scaling parameters and $\gamma > 0$ is the desired level of disturbance attenuation for the robust $H_\infty$ estimation problem. We also have the augmented disturbance
\[ \tilde{w}(t) := \left[\begin{array}{c}
w(t)\\
\frac{\epsilon_1}{\gamma}\eta(t)\\
\frac{\epsilon_2}{\gamma}\xi(t)
\end{array}\right],\]
where
\[ \eta(t) := F_1(t)Ex(t),\]
and
\[ \xi(t) := F_2(t)Gw(t).\]
The following assumptions are made for the system (\ref{eq:uncertain2}):
\begin{assumption}
\begin{itemize}
\item[]
\item[A1.] The system matrix $A$ is stable.
\item[A2.] $\epsilon_2^2G^\dagger G < I$.
\item[A3.] $\left[\begin{array}{cc}
SD & SH_3
\end{array}\right]$ is of full row rank.
\item[A4.] rank $\left[\begin{array}{cc}
A-\iota\omega I & B\\
C & D
\end{array}\right] = 2n+2m$, \qquad $\forall\omega\geq 0$.
\end{itemize}
\end{assumption}
\begin{remark}
The assumption A1 is required so that the $H_\infty$ norm for the combined plant-estimator system is finite. The remaining assumptions are technical assumptions arising in $H_\infty$ control theory which are required in order to obtain a solution using Riccati equations.
\end{remark}
A complete solution to the robust $H_\infty$ estimation problem is then provided below.
\begin{theorem}\label{thm:h_infinity}
Consider the robust $H_\infty$ estimation problem for the uncertain system (\ref{eq:uncertain1}) converted to a scaled $H_\infty$ control problem for the system (\ref{eq:uncertain2}), satisfying the assumptions A1 to A4. Given a prescribed level of disturbance attenuation $\gamma > 0$, the robust $H_\infty$ estimation problem for the uncertain system (\ref{eq:uncertain1}) is solvable if for some $\epsilon_1$, $\epsilon_2 > 0$, the following conditions are satisfied:
(a) There exists a stabilising solution $X = X^\dagger \geq 0$ to the algebraic Riccati equation:
\begin{equation}\label{eq:robust_riccati1}
\overline{A}^\dagger X+X\overline{A}+X(\gamma^{-2} \overline{B}_1\overline{B}_1^\dagger)X + \overline{C}_1^\dagger (I-\overline{D}_{12}\overline{E}_1^{-1}\overline{D}_{12}^\dagger)\overline{C}_1 = 0.
\end{equation}
(b) There exists a stabilising solution $Y = Y^\dagger \geq 0$ to the algebraic Riccati equation:
\begin{equation}\label{eq:robust_riccati2}
\begin{split}
\overline{A}Y&+Y\overline{A}^\dagger +Y\overline{C}_1^\dagger\overline{C}_1Y+ \gamma^{-2}\overline{B}_1\overline{B}_1^\dagger \\ &-(\gamma^{-1}\overline{B}_1\overline{D}_{21}^\dagger+\gamma Y\overline{C}_2^\dagger) \overline{S}^\dagger\overline{E}_2^{-1}\overline{S}(\gamma^{-1}\overline{B}_1\overline{D}_{21}^\dagger+\gamma Y\overline{C}_2^\dagger)^\dagger = 0.
\end{split}
\end{equation}
(c) $I-\gamma^{-2}XY > 0$.
Here, we have
\vspace*{-2mm}
\begin{equation}\label{eq:final_parameters1}
\begin{split}
\overline{A} &= A,\\
\overline{B}_1 &= \left[\begin{array}{ccc}
B(I-\epsilon_2^2G^\dagger G)^{-1/2} & \frac{\gamma}{\epsilon_1}H_1 & \frac{\gamma}{\epsilon_2}H_2
\end{array}\right],\\
\overline{C}_1 &= \left[\begin{array}{c}
\epsilon_1E\\
0\\
L
\end{array}\right],\\
\overline{D}_{12} &= \left[\begin{array}{c}
0\\
0\\
-I
\end{array}\right],\\
\overline{C}_2 &= C,\\
\overline{D}_{21} &= \left[\begin{array}{ccc}
D(I-\epsilon_2^2G^\dagger G)^{-1/2} & \frac{\gamma}{\epsilon_1}H_3 & 0
\end{array}\right],\\
\overline{S} &= S,
\end{split}
\end{equation}
and
\begin{equation}\label{eq:final_parameters3}
\begin{split}
\overline{E}_1 &= \overline{D}_{12}^\dagger\overline{D}_{12} = I,\\
\overline{E}_2 &= \overline{S}\overline{D}_{21}\overline{D}_{21}^\dagger \overline{S}^\dagger = SD(I-\epsilon_2^2G^\dagger G)^{-1}D^\dagger S^\dagger + \frac{\gamma^2}{\epsilon_1^2}SH_3H_3^\dagger S^\dagger .
\end{split}
\end{equation}
When conditions (a)-(c) are satisfied, a suitable estimator is given by:
\begin{equation}\label{eq:robust_estimator}
\begin{split}
\dot{\hat{x}}(t) &= A_K\hat{x}(t) + B_Ky'(t),\\
\hat{z}(t) &= C_K\hat{x}(t),
\end{split}
\end{equation}
where
\vspace*{-2mm}
\begin{equation}
\begin{split}
A_K &= \overline{A} - B_K\overline{S}\overline{C}_2 + \gamma^{-2}(\overline{B}_1-B_K\overline{S}\overline{D}_{21})\overline{B}_1^\dagger X,\\
B_K &= \gamma^2(I-YX)^{-1}(Y\overline{C}_2^\dagger\overline{S}^\dagger + \gamma^{-2} \overline{B}_1\overline{D}_{21}^\dagger\overline{S}^\dagger)\overline{E}_2^{-1},\\
C_K &= -\overline{E}_1^{-1}\overline{D}_{12}^\dagger\overline{C}_1.
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
The system (\ref{eq:uncertain2}) is of the following form:
\begin{equation}\label{eq:uncertain3}
\begin{split}
(\Sigma_2): \dot{x}(t) &= Ax(t) + B_1\tilde{w}(t) + B_2u(t),\\
\tilde{z}(t) &= C_1x(t) + D_{11}\tilde{w}(t) + D_{12}u(t),\\
y'(t) &= SC_2x(t) + SD_{21}\tilde{w}(t) + SD_{22}u(t),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
B_1 &= \left[\begin{array}{ccc}
B & \frac{\gamma}{\epsilon_1}H_1 & \frac{\gamma}{\epsilon_2}H_2
\end{array}\right],\\
B_2 &= 0,\\
C_1 &= \left[\begin{array}{c}
\epsilon_1E\\
0\\
L
\end{array}\right],\\
D_{11} &= \left[\begin{array}{ccc}
0 & 0 & 0\\
\epsilon_2G & 0 & 0\\
0 & 0 & 0
\end{array}\right],\\
D_{12} &= \left[\begin{array}{c}
0\\
0\\
-I
\end{array}\right],\\
C_2 &= C,\\
D_{21} &= \left[\begin{array}{ccc}
D & \frac{\gamma}{\epsilon_1}H_3 & 0
\end{array}\right],\\
D_{22} &= 0.
\end{split}
\end{equation}
We will use the results from Ref. \cite{PAJ} to solve the above $H_\infty$ control problem. However, Ref. \cite{PAJ} requires the matrix $D_{11}$ to be zero. The system (\ref{eq:uncertain2}) with non-zero $D_{11}$ can be converted to an equivalent system having $D_{11} = 0$ using the \emph{loop-shifting} technique, as outlined in Ref. \cite{ZDG}.
We define
\begin{equation}
\tilde{D}_{11} := \left[\begin{array}{cc}
D_{1111} & D_{1112}\\
D_{1121} & D_{1122}+D_\infty
\end{array}\right],
\end{equation}
where we let $D_\infty = -D_{1122}-D_{1121}(I - D_{1111}^\dagger D_{1111})^{-1}D_{1111}^\dagger D_{1112}$. Here, for our system (\ref{eq:uncertain2}), we take $D_{1111} = \left[\begin{array}{cc}
0 & 0\\
\epsilon_2G & 0\\
\end{array}\right]$, $D_{1121} = \left[\begin{array}{cc}
0 & 0\\
\end{array}\right]$, $D_{1112} = \left[\begin{array}{c}
0\\
0
\end{array}\right]$, and $D_{1122} = 0$. Note that $\left|\left|\tilde{D}_{11}\right|\right|<1$ follows from Assumption A2. Then, we get $D_\infty = 0$. Thus, we have $\tilde{D}_{11} = D_{11}$.
Hence, the $H_\infty$ control problem in (\ref{eq:uncertain3}) takes the following form:
\begin{equation}\label{eq:uncertain4}
\begin{split}
(\Sigma_3): \dot{x}(t) &= \tilde{A}x(t) + \tilde{B}_1\tilde{w}(t) + \tilde{B}_2u(t),\\
\tilde{z}(t) &= \tilde{C}_1x(t) + \tilde{D}_{11}\tilde{w}(t) + \tilde{D}_{12}u(t),\\
y'(t) &= \tilde{S}\tilde{C}_2x(t) + \tilde{S}\tilde{D}_{21}\tilde{w}(t) + \tilde{S}\tilde{D}_{22}u(t),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\tilde{A} &= A+B_2D_\infty C_2 = A,\\
\tilde{B}_1 &= B_1+B_2D_\infty D_{21} = B_1,\\
\tilde{B}_2 &= B_2 = 0,\\
\tilde{C}_1 &= C_1 + D_{12}D_\infty C_2 = C_1,\\
\tilde{D}_{12} &= D_{12},\\
\tilde{C}_2 &= C_2,\\
\tilde{D}_{21} &= D_{21},\\
\tilde{D}_{22} &= D_{22} = 0,\\
\tilde{S} &= S.
\end{split}
\end{equation}
The $H_\infty$ control problem equivalent to the above system (\ref{eq:uncertain4}) is \cite{ZDG}:
\begin{equation}\label{eq:uncertain5}
\begin{split}
(\Sigma_4): \dot{x}(t) &= \overline{A}x(t) + \overline{B}_1\tilde{w}(t) + \overline{B}_2u(t),\\
\tilde{z}(t) &= \overline{C}_1x(t) + \overline{D}_{11}\tilde{w}(t) + \overline{D}_{12}u(t),\\
y'(t) &= \overline{S}\overline{C}_2x(t) + \overline{S}\overline{D}_{21}\tilde{w}(t) + \overline{S}\overline{D}_{22}u(t),
\end{split}
\end{equation}
Here,
\begin{equation}\label{eq:final_parameters4}
\begin{split}
\overline{A} &= \tilde{A}+\tilde{B}_1R_1^{-1}\tilde{D}_{11}^\dagger \tilde{C}_1,\\
\overline{B}_1 &= \tilde{B}_1R_1^{-1/2},\\
\overline{B}_2 &= \tilde{B}_2 + \tilde{B}_1R_1^{-1}\tilde{D}_{11}^\dagger\tilde{D}_{12},\\
\overline{C}_1 &= \tilde{R}_1^{-1/2}\tilde{C}_1,\\
\overline{D}_{11} &= 0,\\
\overline{D}_{12} &= \tilde{R}_1^{-1/2}\tilde{D}_{12},\\
\overline{C}_2 &= \tilde{C}_2 + \tilde{D}_{21}R_1^{-1}\tilde{D}_{11}^\dagger \tilde{C}_1,\\
\overline{D}_{21} &= \tilde{D}_{21}R_1^{-1/2},\\
\overline{D}_{22} &= \tilde{D}_{21}R_1^{-1}\tilde{D}_{11}^\dagger\tilde{D}_{12},\\
\overline{S} &= S,
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
R_1 &= I - \tilde{D}_{11}^\dagger\tilde{D}_{11} = \left[\begin{array}{ccc}
I-\epsilon_2^2G^\dagger G & 0 & 0\\
0 & I & 0\\
0 & 0 & I
\end{array}\right],\\
\tilde{R}_1 &= I - \tilde{D}_{11}\tilde{D}_{11}^\dagger = \left[\begin{array}{ccc}
I & 0 & 0\\
0 & I-\epsilon_2^2GG^\dagger & 0\\
0 & 0 & I
\end{array}\right].
\end{split}
\end{equation}
One can verify that (\ref{eq:final_parameters4}) yields (\ref{eq:final_parameters1}), $\overline{B}_2 = 0$ and $\overline{D}_{22} = 0$. Note that from assumptions A1 to A4 and (\ref{eq:final_parameters3}), it follows that we have:
\begin{itemize}
\item $\overline{E}_1 > 0$.
\item $\overline{E}_2 > 0$.
\item rank $\left[\begin{array}{cc}
\overline{A}-\iota\omega I & \overline{B}_2\\
\overline{C}_1 & \overline{D}_{12}
\end{array}\right] = 2n + p$ for all $\omega \geq 0$.
\item rank $\left[\begin{array}{cc}
\overline{A}-\iota\omega I & \overline{B}_1\\
\overline{C}_2 & \overline{D}_{21}
\end{array}\right] = 2n + m$ for all $\omega \geq 0$.
\end{itemize}
Hence, the $H_\infty$ control problem for the system (\ref{eq:uncertain5}) can be solved using the results in Ref. \cite{PAJ}. However, Ref. \cite{PAJ} assumed $\gamma = 1$. The results from that paper may be generalised for different values of $\gamma > 0$, simply by scaling the coefficients of the disturbance $\tilde{w}(t)$ in (\ref{eq:uncertain5}), viz. $\overline{B}_1$ and $\overline{D}_{21}$ (note $\overline{D}_{11} = 0$), by $\gamma^{-1}$. Note that this also has an effect on $\overline{E}_2$, which is scaled by $\gamma^{-2}$. This yields conditions (a)-(c) of the theorem, which are required to be satisfied by the system (\ref{eq:uncertain5}), such that a suitable estimator is given by (\ref{eq:robust_estimator}).
\end{proof}
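To indicate how the theorem might be implemented numerically, the sketch below solves the two Riccati equations via the stable invariant subspace of the associated Hamiltonian matrices. The ordered-Schur construction and the completion-of-square rearrangement of (\ref{eq:robust_riccati2}) are standard devices, but their use here is our own illustration rather than part of the above proof, and the estimator formulas are transcribed directly from the theorem.
\begin{verbatim}
import numpy as np
from scipy.linalg import schur

def care_stabilizing(A, M, Q):
    # Stabilizing solution of A^H X + X A - X M X + Q = 0, from the
    # stable subspace of the Hamiltonian; M may be indefinite here.
    n = A.shape[0]
    H = np.block([[A, -M], [-Q, -A.conj().T]])
    _, U, _ = schur(H, output="complex", sort="lhp")
    return U[n:, :n] @ np.linalg.inv(U[:n, :n])

def robust_hinf_estimator(Ab, B1, C1, D12, C2, D21, S, gamma):
    # Matrix names mirror the bar'd symbols of the theorem.
    n = Ab.shape[0]
    E1 = D12.conj().T @ D12
    E2 = S @ D21 @ D21.conj().T @ S.conj().T
    # Condition (a).
    Q1 = C1.conj().T @ (np.eye(C1.shape[0])
         - D12 @ np.linalg.inv(E1) @ D12.conj().T) @ C1
    X = care_stabilizing(Ab, -(B1 @ B1.conj().T) / gamma**2, Q1)
    # Condition (b), after completing the square with F = S^H E2^{-1} S.
    F = S.conj().T @ np.linalg.inv(E2) @ S
    At = Ab - B1 @ D21.conj().T @ F @ C2
    Mt = gamma**2 * C2.conj().T @ F @ C2 - C1.conj().T @ C1
    Qt = (B1 @ (np.eye(D21.shape[1]) - D21.conj().T @ F @ D21)
          @ B1.conj().T) / gamma**2
    Y = care_stabilizing(At.conj().T, Mt, Qt)
    # Condition (c).
    if np.min(np.linalg.eigvals(np.eye(n) - X @ Y / gamma**2).real) <= 0:
        raise ValueError("condition (c) fails for this gamma")
    # Estimator matrices, as stated in the theorem.
    BK = gamma**2 * np.linalg.solve(
        np.eye(n) - Y @ X,
        (Y @ C2.conj().T @ S.conj().T
         + B1 @ D21.conj().T @ S.conj().T / gamma**2) @ np.linalg.inv(E2))
    CK = -np.linalg.inv(E1) @ D12.conj().T @ C1
    AK = (Ab - BK @ S @ C2
          + (B1 - BK @ S @ D21) @ B1.conj().T @ X / gamma**2)
    return AK, BK, CK
\end{verbatim}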
The transfer function of the robust estimator (\ref{eq:robust_estimator}) can be obtained to be:
\begin{equation}\label{eq:rob_filter_tf}
G_K(s) := \frac{\hat{z}(s)}{y'(s)} = C_K(sI-A_K)^{-1}B_K.
\end{equation}
The estimation error is given as:
\begin{equation}
e(t) := \hat{z}(t)-z(t) = C_K\hat{x}(t)-Lx(t).
\end{equation}
Then, the disturbance-to-error transfer function may be obtained as:
\begin{equation}\label{eq:error_spectrum}
\tilde{G}_{we}(s) := \frac{e(s)}{w(s)} = \left[\begin{array}{cc}
-L & C_K
\end{array}\right]\left(sI - \left[\begin{array}{cc}
A+\Delta A & 0\\
B_KS(C+\Delta C) & A_K
\end{array}\right]\right)^{-1}\left[\begin{array}{c}
B+\Delta B\\
B_KSD
\end{array}\right].
\end{equation}
We are interested in the disturbance $\mathcal{A}$ to error $e$ transfer function, which is simply the first component of the matrix transfer function $\tilde{G}_{we}(s)$. The other component is the disturbance $\mathcal{A}^{\#}$ to error $e$ transfer function, which we shall ignore.
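For later reference, the disturbance-to-error frequency responses compared in the examples below can be generated by a direct sweep of (\ref{eq:error_spectrum}); a minimal sketch (the frequency grid and the perturbation matrices are whatever the example at hand supplies) is:
\begin{verbatim}
import numpy as np

def error_gain(w_grid, A, dA, B, dB, C, dC, D, S, AK, BK, CK, L):
    # |G_we(i w)| for the first (A -> e) disturbance channel.
    n, nk = A.shape[0], AK.shape[0]
    Acl = np.block([[A + dA, np.zeros((n, nk))],
                    [BK @ S @ (C + dC), AK]])
    Bcl = np.vstack([B + dB, BK @ S @ D])
    Ccl = np.hstack([-L, CK])
    gains = []
    for w in w_grid:
        G = Ccl @ np.linalg.solve(1j * w * np.eye(n + nk) - Acl, Bcl)
        gains.append(abs(G[0, 0]))
    return np.array(gains)
\end{verbatim}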
\begin{remark}
Note that the requirement that the plant be a physically realizable quantum system \emph{restricts} the class of plants under consideration, when compared with the case where the plant is a classical system as in Ref. \cite{FDX}, owing to the conditions in (\ref{eq:phys_rlz}) (and also (\ref{eq:phys_rlz_unc})) that must be satisfied by the system matrices of the (uncertain) quantum plant.
\end{remark}
\section{Numerical Example 1}\label{sec:num_ex1}
An example of a linear quantum system from quantum optics is a linearized dynamic squeezer. This corresponds to an optical cavity with a non-linear active medium inside. Let us consider the quantum plant to be a linearized dynamic squeezer, described by the QSDEs \cite{RPH}:
\begin{equation}\label{eq:sqz_plant}
\begin{split}
\left[\begin{array}{c}
da(t)\\
da(t)^{*}
\end{array}\right] &= \left[\begin{array}{cc}
-\frac{\beta}{2} & -\chi\\
-\chi^{*} & -\frac{\beta}{2}
\end{array}\right] \left[\begin{array}{c}
a(t)\\
a(t)^{*}
\end{array}\right] dt - \sqrt{\kappa} \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{*}
\end{array}\right],\\
\left[\begin{array}{c}
d\mathcal{Y}(t)\\
d\mathcal{Y}(t)^{*}
\end{array}\right] &= \sqrt{\kappa} \left[\begin{array}{c}
a(t)\\
a(t)^{*}
\end{array}\right] dt + \left[\begin{array}{c}
d\mathcal{A}(t)\\
d\mathcal{A}(t)^{*}
\end{array}\right],\\
z(t) &= \left[\begin{array}{cc}
0.1 & -0.1
\end{array}\right] \left[\begin{array}{c}
a(t)\\
a(t)^{*}
\end{array}\right],
\end{split}
\end{equation}
where $\beta > 0$ is the overall cavity loss, $\kappa > 0$ determines the loss arising from the cavity mirrors, $\chi \in \mathbb{C}$ quantifies the size of the non-linearity of the active medium, and $a$ is a single annihilation operator of the cavity mode.
Here, we choose $\beta = 4$, $\kappa = 4$, and $\chi = -0.5$. Then, the above quantum system is physically realizable, since we have $\beta = \kappa$. Moreover, we fix the homodyne detection angle at $90^{\circ}$. We thus have the following:
\begin{equation}\label{eq:sqz_plant_matrices}
\begin{split}
A &= \left[\begin{array}{cc}
-2 & 0.5\\
0.5 & -2
\end{array}\right], \,
B = \left[\begin{array}{cc}
-2 & 0\\
0 & -2
\end{array}\right], \,
C = \left[\begin{array}{cc}
2 & 0\\
0 & 2
\end{array}\right],\\
D &= \left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right] = I, \,
L = \left[\begin{array}{cc}
0.1 & -0.1
\end{array}\right], \,
S = \left[\begin{array}{cc}
\frac{e^{-\iota 90^{\circ}}}{\sqrt{2}} & \frac{e^{\iota 90^{\circ}}}{\sqrt{2}}
\end{array}\right].
\end{split}
\end{equation}
We introduce uncertainty in the parameter $\alpha := \sqrt{\kappa}$ as follows: $\alpha \rightarrow \alpha+\mu\delta\alpha$, where $|\delta| \leq 1$ is an uncertain parameter and $\mu \in [0,1)$ determines the level of uncertainty. Then, we will have the following:
\begin{equation}\label{eq:sqz_plant_unc1}
\begin{split}
\Delta A &= \left[\begin{array}{cc}
-\mu\delta\alpha^2-\frac{\mu^2\delta^2\alpha^2}{2} & 0\\
0 & -\mu\delta\alpha^2-\frac{\mu^2\delta^2\alpha^2}{2}
\end{array}\right],\\
\Delta B &= \left[\begin{array}{cc}
-\mu\delta\alpha & 0\\
0 & -\mu\delta\alpha
\end{array}\right],\\
\Delta C &= \left[\begin{array}{cc}
\mu\delta\alpha & 0\\
0 & \mu\delta\alpha
\end{array}\right].
\end{split}
\end{equation}
Then, we define the relevant matrices in (\ref{eq:unc_pars}) as follows:
\begin{equation}\label{eq:sqz_plant_unc2}
\begin{split}
F_1(t) &= \left[\begin{array}{cccc}
\delta & 0 & 0 & 0\\
0 & \delta & 0 & 0\\
0 & 0 & \delta^2 & 0\\
0 & 0 & 0 & \delta^2
\end{array}\right],\\
F_2(t) &= \left[\begin{array}{cc}
\delta & 0\\
0 & \delta
\end{array}\right],\\
E &= \left[\begin{array}{cc}
-\frac{1}{2} & 0\\
0 & -\frac{1}{2}\\
-\frac{1}{2} & 0\\
0 & -\frac{1}{2}
\end{array}\right],\\
G &= \left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right],\\
H_1 &= \left[\begin{array}{cccc}
2\mu\alpha^2 & 0 & \mu^2\alpha^2 & 0\\
0 & 2\mu\alpha^2 & 0 & \mu^2\alpha^2
\end{array}\right],\\
H_2 &= \left[\begin{array}{cc}
-\mu\alpha & 0\\
0 & -\mu\alpha
\end{array}\right],\\
H_3 &= \left[\begin{array}{cccc}
-2\mu\alpha & 0 & 0 & 0\\
0 & -2\mu\alpha & 0 & 0
\end{array}\right].
\end{split}
\end{equation}
One can then verify that we have $\Delta A = H_1F_1(t)E$, $\Delta B = H_2F_2(t)G$ and $\Delta C = H_3F_1(t)E$, as required in (\ref{eq:unc_pars}). We set the uncertainty level to $\mu = 0.1$. Moreover, in our simulations, we choose a fixed value of $\delta = 1$.
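This check is mechanical; in NumPy it might read as follows, using the values $\alpha = \sqrt{\kappa} = 2$, $\mu = 0.1$ and $\delta = 1$ of this example (the code is an illustration only):
\begin{verbatim}
import numpy as np

mu, delta, alpha = 0.1, 1.0, 2.0      # alpha = sqrt(kappa) = 2 here
I2 = np.eye(2)
dA = -(mu*delta*alpha**2 + mu**2*delta**2*alpha**2/2) * I2
dB = -mu*delta*alpha * I2
dC = mu*delta*alpha * I2
F1 = np.diag([delta, delta, delta**2, delta**2])
F2 = delta * I2
E = -0.5 * np.vstack([I2, I2])
G = I2
H1 = np.hstack([2*mu*alpha**2 * I2, mu**2*alpha**2 * I2])
H2 = -mu*alpha * I2
H3 = np.hstack([-2*mu*alpha * I2, np.zeros((2, 2))])
assert np.allclose(dA, H1 @ F1 @ E)
assert np.allclose(dB, H2 @ F2 @ G)
assert np.allclose(dC, H3 @ F1 @ E)
\end{verbatim}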
We now solve the associated $H_\infty$ estimation problem using the Riccati equation approach described in the previous section. We choose the desired disturbance attenuation level to be $\gamma = 0.65$. Then, the scaling parameters are suitably chosen to be $\epsilon_1 = 0.2$ and $\epsilon_2 = 0.6$. A robust $H_\infty$ estimator is obtained as in (\ref{eq:robust_estimator}) with the following parameters:
\begin{equation}
\begin{split}
A_K &= \left[\begin{array}{cc}
0.1905 & -1.4676\\
-1.4676 & 0.1905
\end{array}\right],\\
B_K &= \left[\begin{array}{c}
-1.4717\iota\\
1.4717\iota
\end{array}\right],\\
C_K &= \left[\begin{array}{cc}
0.1 & -0.1
\end{array}\right].
\end{split}
\end{equation}
The transfer function (\ref{eq:rob_filter_tf}) of the estimator is obtained to be:
\begin{equation}
G_K(s) = \frac{-0.2943\iota s-0.3759\iota}{s^2-0.381s-2.118}.
\end{equation}
Fig. \ref{fig:errors_spectra1} shows the disturbance to error transfer function (\ref{eq:error_spectrum}) of the above robust filter. For comparison, it also shows those of the optimal $H_\infty$ filter and the Kalman filter for the uncertain system (\ref{eq:uncertain1}) with $\delta = 1$, which corresponds to the uncertain parameter taking on its maximum value. The optimal Kalman and $H_\infty$ filters are built for the nominal system of (\ref{eq:uncertain1}).
\begin{figure}[!b]
\centering
\includegraphics[width=\textwidth]{robust_filter.eps}
\caption{Example 1: Disturbance to estimation error transfer functions for various estimators.}
\label{fig:errors_spectra1}
\end{figure}
Clearly, the robust $H_\infty$ filter provides better disturbance attenuation and performance than the standard optimal $H_\infty$ filter or the Kalman filter for this value of the uncertain parameter $\delta$.
\section{Numerical Example 2}\label{sec:num_ex2}
In this section, we consider another numerical example involving the dynamic squeezer plant (\ref{eq:sqz_plant}), (\ref{eq:sqz_plant_matrices}). Here, we do not consider uncertainty in $\beta$ or $\kappa$. Instead, we introduce uncertainty in the squeezing parameter $\chi$ as follows: $\chi \rightarrow \chi + \mu\delta\chi$, where again $|\delta| \leq 1$ is an uncertain parameter and $0 \leq \mu < 1$ is the extent of uncertainty. Then, we will have the following:
\begin{equation}\label{eq:sqz_plant_unc3}
\begin{split}
\Delta A &= \left[\begin{array}{cc}
0 & -\mu\delta\chi\\
-\mu\delta\chi & 0
\end{array}\right],\\
\Delta B &= \left[\begin{array}{cc}
0 & 0\\
0 & 0
\end{array}\right],\\
\Delta C &= \left[\begin{array}{cc}
0 & 0\\
0 & 0
\end{array}\right].
\end{split}
\end{equation}
Note that the results from Ref. \cite{FDX} suffice to construct the robust estimator in this scenario. When $\Delta B = 0$, as is the case here, $\epsilon_2$ has no impact on the estimator.
Here, the relevant matrices may be defined as follows:
\begin{equation}\label{eq:sqz_plant_unc4}
\begin{split}
F_1(t) &= \left[\begin{array}{cc}
\delta & 0\\
0 & \delta\\
\end{array}\right],\\
F_2(t) &= 0,\\
E &= \left[\begin{array}{cc}
\chi & 0\\
0 & \chi\\
\end{array}\right],\\
G &= 0,\\
H_1 &= \left[\begin{array}{cc}
0 & -\mu\\
-\mu & 0
\end{array}\right],\\
H_2 &= 0,\\
H_3 &= \left[\begin{array}{cc}
0 & 0\\
0 & 0
\end{array}\right].
\end{split}
\end{equation}
We fix the uncertainty level at $\mu = 0.1$. We then solve the associated $H_\infty$ estimation problem using our Riccati equation approach. We again choose the desired disturbance attenuation level to be $\gamma = 0.65$. The scaling parameters are chosen to be $\epsilon_1 = 0.7$ and $\epsilon_2 = 1$. A robust $H_\infty$ estimator is obtained as in (\ref{eq:robust_estimator}) with the following parameters:
\begin{equation}
\begin{split}
A_K &= \left[\begin{array}{cc}
0.3231 & -1.3660\\
-1.3660 & 0.3231
\end{array}\right],\\
B_K &= \left[\begin{array}{c}
-1.4852\iota\\
1.4852\iota
\end{array}\right],\\
C_K &= \left[\begin{array}{cc}
0.1 & -0.1
\end{array}\right].
\end{split}
\end{equation}
The transfer function (\ref{eq:rob_filter_tf}) of the estimator is obtained to be:
\begin{equation}
G_K(s) = \frac{-0.297\iota s-0.3098\iota}{s^2-0.6461s-1.762}.
\end{equation}
The disturbance to error transfer function (\ref{eq:error_spectrum}) of the robust filter in this example is shown in Fig. \ref{fig:errors_spectra_chi}. We also plot those of the optimal $H_\infty$ filter and the Kalman filter for the uncertain system (\ref{eq:uncertain1}) with $\delta = 1$ here for comparison. The optimal $H_\infty$ and Kalman filters are constructed for the nominal system of (\ref{eq:uncertain1}).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{robust_filter_chi.eps}
\caption{Example 2: Disturbance to estimation error transfer functions of various estimators.}
\label{fig:errors_spectra_chi}
\end{figure}
Clearly, the robust $H_\infty$ filter again provides better disturbance attenuation and performance than the standard optimal $H_\infty$ filter or the Kalman filter for this value of the uncertain parameter $\delta$.
\section{Conclusion}\label{sec:conc}
This paper considered the problem of robust $H_\infty$ estimation for uncertain linear quantum systems. The estimator is a classical filter that produces a classical estimate of a variable of the quantum plant. The $H_\infty$ estimation problem is solved by converting it to a scaled $H_\infty$ control problem. The solution is obtained in the form of two complex algebraic Riccati equations. We have illustrated the results obtained by means of some numerical examples involving dynamic optical squeezers. As part of future work, the robust $H_\infty$ estimator constructed here could be applied in studying robust coherent-classical estimation. For example, it might be interesting to explore if and when a robust $H_\infty$ coherent-classical estimator (with and/or without coherent feedback) can provide better estimation precision than the robust $H_\infty$ purely-classical estimator considered in this paper. Such a comparison of optimal estimators was presented in Ref. \cite{RPH1}.
\ack The first author would like to thank Dr. Obaid Ur Rehman for useful discussion related to this work.
|
2,869,038,154,402 | arxiv | \section{Introduction}
Over the last few years there has been increased experimental
interest in double-pion production near threshold in several
hadronic reactions. These include studies in
pion--proton~\cite{CB} and proton--proton collisions~\cite{Heinz},
as well as in the \mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}~\cite{MOMO,Andersson,Heinz2} and
\mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}~\cite{Pia1,Pia2,Pia3} reactions. For excess energies $Q$
below about 100$\:$MeV one sees no sign of the low mass $s$-wave
$\pi\pi$ enhancement, known as the ABC effect~\cite{ABC}, and the
maxima in the invariant mass distributions tend instead to be pushed
to the highest possible values.
Due in part to an isospin filter effect, the most spectacular
manifestation of the ABC is to be found in the case of \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\
for $Q\approx 200\!-\!300\:$MeV~\cite{Ban73}. These cross section
data, as well as those representing the deuteron analysing
powers~\cite{SPESIII}, can be well understood within a model where
there are two independent pion productions, through the $pn\to
d\pi^0$ reaction, with a final state interaction between the two
deuterons to yield the observed $\alpha$--particle~\cite{Anders}.
Since the $pn\to d\pi^0$ amplitudes are dominated by $p$--wave
production, driven by the $\Delta$ isobar, this leads to much
structure in the predictions. Although such double--$\Delta$
effects are generally observed in the medium energy
data~\cite{Ban73,Anders}, there is little evidence of them nearer
to threshold~\cite{Pia1,Pia2,Chapman}. Furthermore, the cross
sections measured at low energies are over an order of magnitude
higher than the predictions of models behaving like the square of
$p$-wave production, where the amplitudes must be proportional to
$Q$.
In an alternative approach to the \mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}\ reaction, the low
energy cross sections have been discussed in terms of a two--step
model, where a pion is produced through a $pp\to d\pi^+$ reaction
on the proton in the deuteron, with a further pion being created
in a secondary $\pi^+n\to p\pi^0\pi^0$ reaction~\cite{FGW}. There
are, of course, other contributions related to this through
isospin invariance. Semi-phenomenological models of the $\pi^+n\to
n\pi^0\pi^0$ amplitudes show strong $s$--wave production, behaving
rather like a contact term, plus another contribution involving
the decay chain $N^*(1440)\to \Delta(1232)\,\pi \to
N\pi\pi$~\cite{Oset}. The $s$--wave term is sufficient, in the
two--step model, to lead to reasonable agreement with the
available data on the \mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}\ total cross section. Moreover,
combined with $p$--waves required by the decay chain, it
reproduces the shift of the mass spectrum away from the ABC region
towards that of higher missing masses~\cite{MOMO,Andersson}. It is
therefore reasonable to ask whether a similar approach could not
be usefully tried for the low energy \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ reaction.
The two--step model with an intermediate
$pd\to\,^{3}\textrm{He}\,\pi^0$ step has in fact been applied
successfully to the production of $\eta$--mesons in the \mbox{$dd\to\,^{4}\textrm{He}\,\eta$}\
reaction near threshold~\cite{FW2}, where it reproduces reasonably
well the magnitude of the total cross
section~\cite{Frascaria,Willis}. The approach is here extended in
section~\ref{sec2} to describe the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ reaction, using the
same $\pi N\to\pi\pi N$ amplitudes as those that worked for
\mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}. The other element that is crucial for the evaluation of
this model is the cluster decomposition of the $\alpha$--particle
in terms of $^3$He$\,n/\,^3$H$\,p$ constituents. This is discussed
in section~\ref{sec3}, where we rely on the work of the Argonne
group~\cite{VMC}. The results presented in section~\ref{Results}
show that the model is capable of describing the shape of the
$\pi\pi$ effective mass distribution, without the oscillatory
structure predicted by the double--$\Delta$ model~\cite{Anders}.
However, the total cross section estimates fall over an order of
magnitude below the experimental results found at low
energies~\cite{Pia1,Pia2,Chapman}. These data have low statistics
and limited acceptance, though they will be supplemented by more
precise results expected soon from CELSIUS~\cite{Pia3}. Since
neither this nor the double--$\Delta$ model gets even close to the
observed production rates, alternative approaches are necessary.
\section{The reaction model}
\label{sec2}
The two--step model for the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ amplitudes, in terms of those
for $pd\to\,^3\textrm{H}\,\pi^+$ and $\pi^+n\to (\pi\pi)^0p$, is
depicted in Fig.~\ref{diagram}. Contributions involving
intermediate $^3$He and $\pi^0/\pi^-$ are all related to the
results for this diagram through isospin invariance. Due to the
identical nature of the incident deuterons, there is a similar set
of diagrams where the initial production takes place on the upper
deuteron.\vspace{-2mm}
\begin{figure}[htb]
\begin{center}
\centerline{\epsfxsize=8cm{\epsfbox{fig.eps}}}
\end{center}
\vspace{-0.2cm} \caption{\label{diagram} Two--step model for the
production of $\pi^+\pi^-$ and $\pi^0\pi^0$ pairs through the
\mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ reaction. There are also contributions related to this by
isospin in addition to the terms arising from the interchange of
the two deuterons.}
\end{figure}
The cross section corresponding to such a diagram has been
evaluated for the $dd\to\alpha\,\eta$ reaction~\cite{FW2} and we
follow closely the techniques used there. The unpolarised \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\
differential cross section is expressed in terms of the Lorentz
invariant matrix element $\mathcal{M}$ through
\begin{eqnarray}
\mbox{{\rm d}}\sigma&=&\frac{p_{\alpha}}{ 144p_dW^2} \frac{1}{(2\pi)^4}
\sum_{\rm spins} \mid \mathcal{M}\mid^2\, k_{\pi\pi}\,
\mbox{{\rm d}}{}m_{\pi\pi}\,
\mbox{{\rm d}}\Omega_{\alpha}\,\frac{\mbox{{\rm d}}\Omega_{\pi\pi}}{4\pi}\:\cdot
\label{Start-cross}\nonumber\\
\end{eqnarray}
Here $p_d$ and $p_{\alpha}$ are the initial and final momenta in
the overall cm system where the total energy is $W$. The angles
$\Omega_{\alpha}$ are also in the total cm system, whereas the
$\pi\pi$ relative momentum $k_{\pi\pi}$ and its angles
$\Omega_{\pi\pi}$ are evaluated in the dipion rest frame, where
the total energy is $m_{\pi\pi}$.
The matrix element of Fig.~\ref{diagram} involves the integration
of the pion propagator between the two production vertices over
the two Fermi momenta $\bmath{k}$ and $\bmath{q}$. If initially
we neglect the deuteron $D$--state and the Lorentz boost of the
wave functions, this can be written as
\begin{eqnarray}
\nonumber \mathcal{M}&=&\sqrt{\frac{2}{3 m_p^2}}\
\frac{1}{(2\pi)^3} \int{\mbox{{\rm d}}^3k }\,{\mbox{{\rm d}}^3q}\,
\frac{m_n}{E_n(\bmath{p}_n)}\frac{m_t}{E_t(\bmath{p}_t)}\\
&&\times\frac{i}{q_{\pi}^2-m_{\pi}^2+i\epsilon}\tilde{\mathcal{M}}_N\:,
\label{full}
\end{eqnarray}
where the particle masses are denoted by $m_i$. The reduced
nuclear matrix element is
\begin{eqnarray}
\tilde{\mathcal{M}}_N&=&\textrm{Tr}\left[
\frac{-1}{\sqrt{2}}\,\bmath{\sigma}\cdot\bmath{\epsilon}_d
\left\{-\mathcal{A}\,\hat{\bmath{p}}_d\cdot
\bmath{\epsilon}_{d\,'} -i\mathcal{B}\,\hat{\bmath{p}}_d\cdot
(\bmath{\epsilon}_{d\,'}
\times\bmath{\sigma})\right\}\right.\nonumber\\
&&\left.\times\ \frac{-1}{\sqrt{2}}\,a(m_{\pi\pi},Q)\,
\bmath{\sigma}\cdot\bmath{p}_{\pi}\right]
\tilde{\varphi}_d(\bmath{q}) \,
\tilde{\psi}^{\dagger}_{\alpha}\,(\bmath{k})\:, \label{reduced}
\end{eqnarray}
where the $(\bmath{\epsilon}_{d},\bmath{\epsilon}_{d\,'})$ are the
polarisation vectors of the two incident deuterons and the
kinematics are defined as in the figure. The $S$-state
momentum--space wave functions for the deuteron and the
triton--proton configuration of the $\alpha$--particle are denoted
by $\tilde{\varphi}_d(\bmath{q})$ and
$\tilde{\psi}_{\alpha}\,(\bmath{k})$ respectively.
In the forward and backward (cm) directions, only two terms are
needed to describe the spin structure of the
$dp\to\,^{3}\textrm{He}\,\pi^0$ amplitude. Using two--component
spinors to denote the $^3$He ($u_h$) and proton ($u_p$), this
reads
\begin{equation}
\mathcal{M}(dp\to\,^{3}\textrm{He}\,\pi^0) =
u^{\,\dagger}_h\left[\mathcal{A}\,\hat{\bmath{p}}_d\cdot
\bmath{\epsilon}_d +i\mathcal{B}\,\hat{\bmath{p}}_d\cdot
(\bmath{\epsilon}_d\times\bmath{\sigma}) \right]u_p\:,
\end{equation}
where $\bmath{p}_d$ and $\bmath{p}_{\pi}$ are the momenta of the
incident deuteron and produced pion respectively. In our
normalisation, the unpolarised differential cross section and
deuteron tensor analysing power $t_{20}$ are given in terms of the
two dimensionless spin amplitudes $\mathcal{A}$ and $\mathcal{B}$
by
\begin{eqnarray}
\nonumber \frac{\mbox{{\rm d}}\sigma}{\mbox{{\rm d}}\Omega} &=&\frac{1}{3(8 \pi
W)^2}\frac{p_{\pi}}{p_d}\Big[\mid\mathcal{A}\mid^2
+2\mid\mathcal{B}\mid^2 \Big]\\
t_{20}&=&\sqrt{2}\left[\frac{\mid\mathcal{B}\mid^2
-\mid\mathcal{A}\mid^2}{\mid\mathcal{A}\mid^2
+2\mid\mathcal{B}\mid^2}\right]\,,
\end{eqnarray}
and these observables have been well measured in collinear
kinematics at Saturne~\cite{Kerboul}.
For deuteron kinetic energies of interest here, the backward
($\theta_{p\pi}=180^{\circ}$) values of $t_{20}$ are strongly
negative, so that $|\mathcal{A}|\gg|\mathcal{B}|$. In the 0.5 --
0.8$\:$GeV range the results may be represented by
\begin{eqnarray}
\nonumber%
|\mathcal{A}|^2&\approx&
-565.6+2318.7T_d-2869.9T_d^2+1122.9T_d^3\hspace{10mm}\\
|\mathcal{B}|^2&\approx&
-197.9+1144.9T_d-2113.0T_d^2+1261.8T_d^3\:,
\end{eqnarray}
where the deuteron kinetic energy $T_d$ is measured in GeV.
The spin structure of the $\pi^-p\to \pi^0\pi^0n$ amplitude is
unique near threshold:
\begin{equation}
\label{e0} M(\pi^-p\to \pi^0\pi^0n)= a(m_{\pi\pi},Q)\,
u_n^{\dagger} \bmath{\sigma}\cdot\bmath{p}_{p}\,u_{p}\:.
\end{equation}
In terms of the amplitude $a$, the unpolarised differential cross
section is
\begin{equation}
\label{e1} \textrm{d}\sigma(\pi^-p\to \pi^0\pi^0n) =
\frac{1}{64\pi^3}\,\frac{p_p\,p_n}{W_{\pi
N}^2}\,|a(m_{\pi\pi},Q)|^2\, k_{\pi\pi}\,\textrm{d}m_{\pi\pi}\:.
\end{equation}
Here $p_p$ and $p_n$ are respectively the initial and final
nucleon momenta, $W_{\pi N}$ the cm energy in the $\pi N$ system,
and $Q=W_{\pi N}-2m_{\pi}-m_N$ the excess energy above the
two--pion threshold.
The low energy data in different isospin channels are well
described by the Valencia model~\cite{Oset} and this allows one to
project out the $I=0$ combination required as input in
equation~(\ref{reduced}). The results can be parameterised as:
\begin{eqnarray}
\nonumber
\lefteqn{\hspace{-5mm}\frac{1}{64\pi^3}\,|a(m_{\pi\pi},Q)|^2 =
(1.092-0.0211Q+0.00015Q^2)}&&\\ \nonumber
&&+(4.18+0.0075Q-0.00098Q^2)\,x\\
\label{e7}
&&+(47.65-0.935Q+0.00743Q^2)\,x^2\:\mu\textrm{b/MeV}^2\:,
\end{eqnarray}
where $x=(m_{\pi\pi}-2m_{\pi})/m_{\pi}$.
Due to small recoil corrections, this parameterisation should be
used at an excess energy of $Q'$, where
\begin{eqnarray}\nonumber
Q'\approx xm_{\pi}+(Q-xm_{\pi})(1+2m_{\pi}/m_{\alpha})/(1+2m_{\pi}/m_p).\\
\end{eqnarray}
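The parameterisation (\ref{e7}), together with this recoil-shifted argument, is simple to evaluate; a sketch is given below, where the masses (in MeV) are approximate values inserted purely for illustration:
\begin{verbatim}
import numpy as np

M_PI, M_P, M_ALPHA = 139.57, 938.27, 3727.38   # MeV (approximate)

def a2_over_64pi3(m_pipi, Q):
    # |a(m_pipi, Q')|^2 / (64 pi^3) in microbarn/MeV^2, eq. (e7),
    # with the recoil correction applied to the excess energy Q.
    x = (m_pipi - 2.0 * M_PI) / M_PI
    Qp = (x * M_PI + (Q - x * M_PI)
          * (1.0 + 2.0 * M_PI / M_ALPHA) / (1.0 + 2.0 * M_PI / M_P))
    return ((1.092 - 0.0211 * Qp + 0.00015 * Qp**2)
            + (4.18 + 0.0075 * Qp - 0.00098 * Qp**2) * x
            + (47.65 - 0.935 * Qp + 0.00743 * Qp**2) * x**2)
\end{verbatim}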
Since large Fermi momenta are not required in the two--step model,
the $dp\to\,^{3}\textrm{He}\,\pi^0$ and $\pi N\to \pi\pi N$
amplitudes can be taken outside the integration in
eq.~(\ref{full}) with the values pertaining to zero Fermi momenta.
Considering only the positive energy pion pole, to first order in
$\bmath{k}$ and $\bmath{q}$ one is left with a difference between
the external and internal energies of
\begin{equation}
\Delta E=E_{\rm ext}-E_{\rm int}= \Delta
E_{0}+\bmath{k}\cdot\bmath{W}+\bmath{q}\cdot\bmath{V}\:,
\end{equation}
where
\begin{equation}
\Delta E_0 = E_{\pi}^0 -E_{\pi}\:,
\end{equation}
with
\begin{equation}
E_{\pi}^0=2E_d-E_t-E_n -E_{\pi}\:.
\end{equation}
Here ($E_{\pi},\,E_d,\,E_t,\,E_n$) are the pion, deuteron, triton,
and nucleon total energies, evaluated respectively at momenta
$-\fmn{3}{4} \bmath{p}_{\alpha}-\fmn{1}{2}\bmath{p}_d$,
$\bmath{p}_d$, $\fmn{3}{4}\bmath{p}_{\alpha}$, and
$\fmn{1}{2}\bmath{p}_d$. \vspace{1mm}
The relativistic relative velocity vectors $\bmath{V}$ and
$\bmath{W}$ depend only upon external kinematic variables:
\begin{eqnarray}
\nonumber \bmath{V} &=& \bmath{v}_{\pi}(-\fmn{3}{4}
\bmath{p}_{\alpha}-\fmn{1}{2}\bmath{p}_d)
- \bmath{v}_{n}(\fmn{1}{2}\bmath{p}_d)\\[1ex]
&=&-\frac{3}{4E_{\pi}}
\,\bmath{p}_{\alpha}-\frac{1}{2}\left[\frac{1}{E_{\pi}}
+\frac{1}{E_{n}}\right]\bmath{p}_d\:,
\nonumber \\[1ex] \nonumber
\bmath{W}&=&-\bmath{v}_{\pi}(-\fmn{3}{4} \bmath{p}_{\alpha}
-\fmn{1}{2}\bmath{p}_d)
+ \bmath{v}_{t}(\fmn{3}{4}\bmath{p}_{\alpha})\\[1ex]
&=&\frac{3}{4}\left[\frac{1}{E_{\pi}}+\frac{1}{E_t}\right]
\bmath{p}_{\alpha}+\frac{1}{2E_{\pi}} \,\bmath{p}_d\:.
\label{VandW}
\end{eqnarray}
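The reduction of the six--dimensional loop integral below rests upon
the standard exponential representation of the energy denominator,
\begin{equation}
\frac{1}{\Delta E_0+\bmath{k}\cdot\bmath{W}+\bmath{q}\cdot\bmath{V}+i\epsilon}
= -i\int_0^{\infty}\mbox{{\rm d}} t\,
\exp\big[\,it\,(\Delta E_0+\bmath{k}\cdot\bmath{W}
+\bmath{q}\cdot\bmath{V}+i\epsilon)\big]\:,
\end{equation}
after which the $\bmath{k}$ and $\bmath{q}$ integrations become
Fourier transforms of the momentum--space wave functions.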
The resulting form factor
\begin{eqnarray}
\nonumber%
\lefteqn{\mathcal{S}(\bmath{V},\bmath{W},\Delta
E_0)=}\\%
&&\hspace{-5mm}-i\int{\mbox{{\rm d}}^3k} \,{\mbox{{\rm d}}^3q}\,\frac{1}{\Delta
E_0+\bmath{k}\cdot\bmath{W} + \bmath{q}\cdot\bmath{V}
+i\epsilon}\,\tilde{\psi}^{*}(\bmath{k})\,
\tilde{\varphi}(\bmath{q})\nonumber\\
&=&(2\pi)^3\int_0^{\infty}\mbox{{\rm d}} t \,e^{it\Delta E_0} \;
\psi^{*}(-t\bmath{W})\,\varphi(t\bmath{V})\:, \label{FF0}
\end{eqnarray}
then involves a one--dimensional integration over wave functions
in configuration space. In terms of this form factor the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\
differential cross section becomes:
\begin{eqnarray}
\nonumber \mbox{{\rm d}}\sigma&=&\frac{N_{\alpha}}{48\,(2\pi)^{10}}
\frac{p_{\alpha}p_d}{[m_pW(E_{\pi}+E_{\pi}^0)]^2}\,
|a(m_{\pi\pi},Q)|^2\,\times\\
&& \left|\mathcal{S}(\bmath{V},\bmath{W},\Delta E_0)+ (\bmath{p}_d
\Leftrightarrow -\bmath{p}_d)\right|^2\times \nonumber\\
&&\vphantom{\int}
\left\{\mid\mathcal{A}\mid^2+2\mid\mathcal{B}\mid^2\right\}\,
k_{\pi\pi}\,\mbox{{\rm d}}{}m_{\pi\pi} \mbox{{\rm d}}\Omega_{\alpha}\:,
\label{simple}
\end{eqnarray}
where $N_{\alpha}$ is the normalisation of the $^4$He wave
function and the extra form--factor contribution coming from the
interchange of the two incident deuterons is indicated. All
isospin factors have been included, but it must be stressed that
in eq.~(\ref{simple}) $\mathcal{A}$ and $\mathcal{B}$ refer to the
$dp\to\,^{3}\textrm{He}\,\pi^0$ and $\pi^-p\to \pi^0\pi^0n$ charge
states respectively. Isospin invariance dictates that $\pi^+\pi^-$
production in \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ should be a factor of two larger than
$\pi^0\pi^0$, but this simple rule is significantly modified near
threshold by phase--space factors arising from the pion mass
difference.
Two further refinements need to be implemented in
eq.~(\ref{simple}) before comparing its predictions with
experiment. Although the final $\alpha$-particle is slow in the cm
system, relativistic corrections cannot be neglected for the
incident deuterons. These can be included by boosting
$V_{\parallel}$, the longitudinal component of $\bmath{V}$,
\emph{i.e.}\ by taking as argument of the deuteron wave
function~\cite{FW2}
\begin{equation}
\bmath{V}'=(\bmath{V}_{\!\perp},\,E_d V_{\parallel}/m_d)\:.
\label{V-prime-def}
\end{equation}
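For orientation (our estimate, with $m_d\approx 1.876\:$GeV), in the
overall centre--of--mass frame the boost factor is
$E_d/m_d=\sqrt{1+T_d/(2m_d)}\approx 1.07$ at $T_d=0.57\:$GeV, so that
the longitudinal argument of the deuteron wave function is stretched
by about 7\%.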
Secondly, the effects of the deuteron $D$--state have to be
considered and this can be accomplished by introducing two form
factors:
\begin{eqnarray}
\nonumber%
\lefteqn{\mathcal{S}_{S,D}({V'},{W},\Delta E_0)}\\
&&=2\pi^2\!\!\int_0^{\infty}\mbox{{\rm d}} t \,e^{it\Delta E_0} \,
\Psi^{*}(-t{W})\,\Phi_{S,D}(t{V'})\:,\label{FF1}
\end{eqnarray}
where $\Phi_{S,D}(r)$ are the deuteron $S$-- and $D$--state
configuration space wave functions normalised by
\begin{equation}
\int_{0}^{\infty}r^2\,\left\{\Phi_S(r)^2+\Phi_D(r)^2\right\}\,\mbox{{\rm d}}
r =1\:.
\end{equation}
The $S$-- and $D$--state form factors enter in different
combinations for the $\mathcal{A}$ and $\mathcal{B}$ amplitudes
and, after making kinematic approximations in respect of the
$D$--state combined with the Lorentz boost, one finds
\begin{eqnarray}
\nonumber \lefteqn{\mbox{{\rm d}}\sigma=\frac{N_{\alpha}}{48\,(2\pi)^{10}}
\frac{p_{\alpha}p_d}{[m_pW(E_{\pi}+E_{\pi}^0)]^2}\,
|a(m_{\pi\pi},Q)|^2\,k_{\pi\pi}}\\
&& \times
\left\{\mid\mathcal{A}\mid^2\left|\mathcal{S}_S({V'},{W},\Delta
E_0)-\sqrt{2}\,\mathcal{S}_D({V'},{W},\Delta E_0)
\right|^2\right. \nonumber\\
&& \nonumber\left.
+2\mid\mathcal{B}\mid^2\left|\mathcal{S}_S({V'},{W},\Delta
E_0)+\frac{1}{\sqrt{2}}\,\mathcal{S}_D({V'},{W},\Delta
E_0)\right|^2 \right\}\,\\
&&\hspace{5cm}\times\,\mbox{{\rm d}}{}m_{\pi\pi}\, \mbox{{\rm d}}\Omega_{\alpha}\:,
\label{complex}
\end{eqnarray}
where, as in eq.~(\ref{simple}), it is assumed that contributions
from form factors resulting from the interchange $\bmath{p}_d
\Leftrightarrow -\bmath{p}_d$ have been included.
\newpage
\section{The $\mathbf{^4}$He wave function}
\label{sec3}%
Over the last few years there has been remarkable progress in
\emph{ab initio} calculations of the structure of light nuclei
using variational Monte Carlo techniques~\cite{VMC}. Starting from
realistic nucleon--nucleon potentials, it has been possible to
identify various cluster sub-structures in nuclei as heavy as
$^9$Be. The results for the unnormalised
$^4\textrm{He}\,$:$\,^3\textrm{H}\,p$ overlap function in
configuration space are shown in Fig.~\ref{ANL}, where the error
bars arise from the sampling procedure.\vspace{-10mm}
\begin{figure}[htb]
\begin{center}
\centerline{\epsfxsize=8cm{\epsfbox{pandar.eps}}}
\end{center}
\vspace{-1cm} \caption{\label{ANL} Unnormalised
$^4\textrm{He}\,$:$\,^3\textrm{H}\,p$ overlap function as a
function of the $^3$H--$p$ separation distance. For the purposes
of presentation, this has been multiplied by
$r\,\textrm{e}^{\alpha r}$, where the charge average
$\alpha=0.854$~fm$^{-1}$. The results of Ref.~\cite{VMC} have been
parameterised as in eq.~(\ref{param}).}
\end{figure}
The overlap function has been parameterised by
\begin{equation}
\label{param} \psi(r)=\sqrt{N_{\!\alpha}}\,\frac{1}{r}
\sum_{n=1}^{6}a_n\,\textrm{e}^{-n\alpha r}\:,
\end{equation}
where $\alpha=0.854$~fm$^{-1}$ represents the average for the
$^3$H$\,p$ and $^3$He$\,n$ configurations. To ensure good
behaviour at the origin, the final parameter is fixed by
$a_6=-\sum_{n=1}^{5}a_n$, while the other values are sequentially
$5.1525$, $-2.8414$, $-45.1886$, $110.7401$, and $-100.3994$. The
normalisation has been chosen such that
\begin{equation}
\int_{0}^{\infty}r^2\,\left[\psi(r)\right]^2\,\mbox{{\rm d}} r =
N_{\alpha}\:.
\end{equation}
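With the coefficients quoted above, the constraint on the final
parameter gives $a_6=32.5368$ (our evaluation of the sum).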
In the spirit of our approach here to pion production, where only
these cluster contributions are considered, it is appropriate to
assume that the $p\,^3$H and $n\,^3$He components saturate the
wave function and take $N_{\alpha}=4$ rather than the reduced
spectroscopic factor obtained in ref.~\cite{VMC}.
\section{Results and Conclusions}
\label{Results}
In Fig.~\ref{piafig} we show the prediction of the shape of the
missing--mass distribution for inclusive two--pion production at
an excess energy of $Q=29\:$MeV with respect to the $2\pi^0$
threshold. Though the general form is in good agreement with the
experimental data~\cite{Pia1,Pia2}, the results are too low by
almost a factor of twenty! The peak of the distribution is
predicted to be a little to the right of that corresponding to
pure phase space, which is also shown. Such a feature was clearly
observed for the \mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}\ reaction at low energies~\cite{FGW}, but
the limited statistics in the $dd$ case prevents us from drawing
firm conclusions here. Estimates in the double--$\Delta$
model~\cite{Anders}, which agreed convincingly with the data in
the resonance region, were even poorer compared to the
near--threshold data. Apart from being a similar factor of twenty
too low, this model also predicted significant structure in the
mass distribution which is absent from the experimental
data.\vspace{-5mm}
\begin{figure}[htb]
\begin{center}
\centerline{\epsfxsize=8cm{\epsfbox{piafig.eps}}}
\end{center}
\vspace{-1cm} \caption{\label{piafig} Missing--mass distribution
for the $dd\to\,^{4\,}\textrm{He}\,X$ reaction measured at
570$\:$MeV~\cite{Pia1}. The chain curve corresponds to
$\pi^0\pi^0$ production within the two--step model whereas the
solid one represents the sum of this and $\pi^+\pi^-$ production.
The predictions are normalised to the integrated cross section by
multiplying by a factor of 17.6. The dotted and broken curves are
the similar predictions from phase space, again normalised to the
total rate.}
\end{figure}
The discrepancy is similar for the other low energy
data~\cite{Chapman}, though here the acceptance was small and
assumptions had to be made in order to extract a total cross
section. In Fig.~\ref{sigtotpia} we show the estimates of the
total cross sections for the production of charged and neutral
pions within the two--step model.
\begin{figure}[htb]
\begin{center}
\centerline{\epsfxsize=8cm{\epsfbox{sigtotpia.eps}}}
\end{center}
\vspace{-1cm} \caption{\label{sigtotpia} Total cross section for
the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ reaction. The experimental data from Ref.~\cite{Pia1}
(star) and Ref.~\cite{Chapman} (circle) are compared to the
predictions of the two--step model scaled by a factor of $17.6$.
The chain curve corresponds to $\pi^0\pi^0$ production, the broken
to $\pi^+\pi^-$, and the solid to their sum. }
\end{figure}
The central problem for any model that attempts to describe the
\mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ cross section at low energies is that the production of
isoscalar pion pairs is very similar in deuteron--deuteron and
proton--deuteron collisions. Thus at
$Q=29\:$MeV~\cite{Andersson,Pia1},
\begin{equation}
\frac{\sigma_{\rm tot}(\mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$})}{\sigma_{\rm tot}(\mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$})}\approx
\frac{40\:\textrm{nb}}{60\:\textrm{nb}}=\frac{2}{3}\:\cdot
\end{equation}
On the other hand, the production of the $\eta$ meson is much
weaker in the $dd$ case, with the ratio of the squares of the
amplitudes being~\cite{Frascaria,Willis,Berger2,Mayer}
\begin{equation}
\frac{\left|f(\mbox{$dd\to\,^{4}\textrm{He}\,\eta$})\right|^2}{\left|f(\mbox{$pd\to\,^{3}\textrm{He}\,\eta$})\right|^2} \approx
\frac{1}{50}\,,
\end{equation}
though perhaps this would be increased by a factor of two if
corrections were made for the effects of the $\eta$--nucleus
final--state interaction. Since the low energy \mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}, \mbox{$pd\to\,^{3}\textrm{He}\,\eta$},
and \mbox{$dd\to\,^{4}\textrm{He}\,\eta$}\ cross sections are all successfully described by the
two--step model, a factor of ten undershoot in the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ case
is not too surprising. The crude comparison made here does not
take into account fully the spin--parity considerations and the
prediction would have been increased by more than a factor of two
if the sign of the deuteron $D$--wave were reversed in
eq.~(\ref{complex}).
Given that neither the two--step nor the double--$\Delta$ model
seems capable of describing the magnitude of the \mbox{$dd\to\,^{4}\textrm{He}\,\pi\pi$}\ cross
section near threshold, one must seek alternative explanations or
modifications to the existing mechanisms. Other diagrams, such as
that of the impulse approximation where the process is driven by
\mbox{$pd\to\,^{3}\textrm{He}\,\pi\pi$}\ with a spectator nucleon, give very small cross sections
due to the large momentum transfer. We have not included any
specific $\pi\pi$ final--state interaction, but the $s$--wave
scattering lengths are relatively small~\cite{Batley} and, in any
case, the effects are implicitly included through the use of
empirical $\pi N\to \pi\pi N$ amplitudes~\cite{Oset}.
The interaction of the low energy pions with the final $^4$He
nucleus might enhance the cross section since it is known that the
$p$--wave pion--nucleus interaction is attractive near
threshold~\cite{Ericson}. However, the effect will steadily
diminish with energy and eventually change sign at the resonance.
Crude estimates indicate that any effects due to such final--state
interactions are likely to be less than 50\%, even very close to
threshold, and so they are very unlikely to provide the
explanation of the defect.
Now, although we have normalised the $^4$He wave function as if it
consisted purely of $p\,^3$H/$n\,^3$He pairs, in reality the $^3$H
in such a nucleus is on average smaller than the physical triton.
Nevertheless we have taken the amplitudes for
$pd\to\,^{3}\textrm{He}\,\pi^0$ from the measured data. The same
criticism can be levelled at the double--$\Delta$ model, where the
$pp\to d\pi^+$ input amplitude would really be required for a
\emph{small} final deuteron. If there were major
corrections due to such effects they would be likely to be present
at all energies and hence destroy the excellent agreement with
data achieved at higher energies~\cite{Anders}. Further
inspiration is therefore clearly needed to resolve this dilemma.
\begin{acknowledgement}
This work has been much influenced by long--standing discussions
with Pia Th\"orngren, which have been beneficial to both sides.
One of the authors (CW) is appreciative of the hospitality shown
to him by Uppsala University. Support from the EtaNet programme of
the EU is gratefully acknowledged.
\end{acknowledgement}
\section{Introduction}
A quantum integrable model is a vector space $V$
and an algebra $\B$ of commuting linear operators on $V$,
called the Bethe algebra of Hamiltonians.
The problem is to find eigenvectors and eigenvalues.
If the vector space is a space of functions, then the Hamiltonians are differential or difference operators.
We say that a quantum integrable model can be geometrized,
if there is a topological space (a scheme) $X$ with an algebra
$\O_X$ of functions on $X$, an isomorphism of vector spaces $\psi :\O_X\to V$, an isomorphism
of algebras $\tau:\O_X\to\B$ such that
\bea
\psi(fg) = \tau(f)\,\psi(g),\qquad \forall f,g\in\O_X.
\eea
These objects $\O_X, \psi,\tau$ identify the $\B$-module $V$ with the regular representation of the algebra
$\O_X$ of functions.
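In particular, setting $g=1$ in this relation gives
\bea
\psi(f)\, =\, \tau(f)\,\psi(1)\,,\qquad \forall f\in\O_X\,,
\eea
so the vector $\psi(1)\in V$ is cyclic for $\B$ and, together with
$\tau$, determines the map $\psi$ completely.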
If a quantum integrable model $(V,\B)$ is geometrized, then the eigenvectors
of $\B$ in $V$ are identified with delta-functions of points of $X$ and the eigenvalues of an eigenvector
in $ V$ correspond to evaluations of functions on $X$ at the corresponding point of $X$.
\smallskip
Our motivation to geometrize the Bethe algebras came from the examples
considered in \cite{MTV3, MTV5}, where the algebra of Hamiltonians acting on a subspace
of a tensor product of $\gln$-modules was identified with the
algebra of functions on the intersection of suitable Schubert cycles in a
Grassmannian. That identification gave an unexpected relation between the representation
theory and Schubert calculus.
\smallskip
The examples in \cite{MTV3, MTV5} are related to models with a finite-dimensional vector space $V$.
How to proceed in the infinite-dimensional case of commuting differential operators
is not clear yet. In this paper we discuss an example.
In our infinite-dimensional space $V$
we distinguish a family of finite-dimensional subspaces $E[\mu]$, $\mu\in\C$, each of which is invariant
with respect to the algebra $\B$ of commuting differential operators. We geometrize each of the pairs
$(E[\mu], \B\big\vert_{E[\mu]})$, thus constructing a family of topological spaces $X[\mu]$, $\mu\in\C$.
We observe that natural interrelations between the subspaces $E[\mu]$ correspond to natural interrelations
between the topological spaces $X[\mu]$. For example, the Weyl involution
$V\to V$, available in our case, identifies $E[\mu]$ and
$E[-\mu]$. We show that this identification corresponds to a natural
isomorphism $X[\mu]\to X[-\mu]$.
\smallskip
Representation theory provides a source of commuting differential or difference operators. In this
paper we discuss the construction due to
V.\,Rubtsov, A.\,Silantyev, D.\,Talalaev, \cite{RST}. That quantum integrable model
is called the {\it quantum dynamical Gaudin model}. We study the $\slt$ trigonometric version
of the quantum dynamical Gaudin model, while in \cite{RST} the $\gln$ elliptic version
was considered.
\smallskip
Consider the Lie algebra $\slt$ and its Cartan subalgebra $\h\subset \slt$,
$\dim \h = 1$.
For $s=1,\dots,n$, let $V_{m_s}$ be the irreducible $\slt$-module of dimension $m_s+1$.
Let $V=\otimes_{s=1}^nV_{m_s}$,
\bea
V[0] = \{ v\in\ V\ | \ hv=0, \ \ \forall h\in\h\}\,,
\eea
the zero weight subspace. The space $V[0]$ is nontrivial if $M=\sum_{s=1}^nm_s$ is even.
Let
$\Fun$ be the
space of $V[0]$-valued functions on $\h$.
Fix a subset $z=\{z_1,\dots,z_n\}\subset \C^\times$.
Having these data, Rubtsov, Silantyev, and Talalaev construct a family of commuting differential operators acting on $\Fun$.
First, one constructs a $2\times 2$-matrix
$\left[
\begin{matrix}
x \der_x & 0
\\
0 & x \der_x
\end{matrix}
\right]+ L(x)\,
=(\delta_{ij}\,x\der_x + L_{ij}(x))$,
where $x$ is a parameter,
$\der_x =\frac{\der}{\der x}$,
and $L_{ij}(x)$ are differential operators on $\Fun$
depending on $x$.
Let
$\mc C = \on{cdet}\big[\delta_{ij}\,x\der_x + L_{ij}(x)\big],$
where {\it cdet} is the column determinant of the matrix with non-commuting entries,
$\on{cdet}
\left[\begin{array}{cc}
a & b\\
c & d
\end{array}\right] = ad-cb.$
The operator $\mc C$ can be rewritten in the form
\bea
\der_x^2 + C_1(x)\der_x+C_2(x) ,
\eea
where $C_1(x), C_2(x)$ are differential operators on $\Fun$, whose coefficients are rational functions of $x$.
For any $a,b\in \C -\{z_1,\dots,z_n\}$ and $i,j=1,2$ the operators $C_i(a)$, $C_j(b)$ commute.
The space $\Fun$ with the algebra
$\B$ generated by these commuting differential operators
is called the {\it quantum dynamical Gaudin model}.
\smallskip
We show that the algebra $\B$ is generated by the trigonometric KZB operators $H_0$,
$H_1(z)$, \dots, $H_n(z)$, see \cite{FW, JV}.
The KZB operator $H_0$ is also known as the
trigonometric Hamiltonian operator of the quantum two-particle
Calogero-Moser system with spin space $V$. The operator $H_0$
is the second order differential operator independent of $z$.
\smallskip
For any $\mu\notin\Z$ we define the subspace $E[\mu]\subset \Fun$ as
the space of meromorphic
eigenfunctions of $H_0$ with eigenvalue $\pi \sqrt{-1}\, \frac{\mu^2}2$ and prescribed poles. The subspaces
$E[\mu]$ were introduced in \cite{FV2} and studied in \cite{JV}. We have $\dim E[\mu]=\dim V[0]$.
The Bethe algebra $\B$ preserves each of $E[\mu]$.
\smallskip
The $\slt$ Weyl involution acts on $\Fun$. The Bethe algebra $\B$ is Weyl group invariant.
The Weyl involution induces an isomorphism $E[\mu]\to E[-\mu]$, which is called in \cite{FV2}
the scattering matrix. The scattering matrix
$E[\mu]\to E[-\mu]$ is an isomorphism of $\B$-modules.
\smallskip
The geometrization procedure is based on the following observation. Let $\psi\in E[\mu]$ be an eigenvector of $\B$,
\bea
C_i(x) \psi = E_i(x, \psi) \psi, \qquad i=1,2,
\eea
where $E_i(x,\psi)$ are scalar eigenvalue functions of $x$. We assign to $\psi$ the scalar differential operator
\bea
\mc E_\psi = \der_x^2 + E_1(x,\psi)\der_x+E_2(x,\psi).
\eea
We show that the kernel of $\mc E_\psi$ is spanned by
two quasi-polynomials
$x^{-\mu/2} f(x), x^{\mu/2} g(x)$, where $f(x),g(x)$ are monic polynomials of degree $M/2$, with the property
that the Wronskian
of the two quasi-polynomials is
\bean
\label{f1}
\Wr(x^{-\mu/2} f(x), x^{\mu/2} g(x))\, = \,\frac {\mu}x\,\prod_{s=1}^n (x-z_s)^{m_s}.
\eean
This fact suggests that the space $X[\mu]$ geometrizing $(E[\mu], \B\big\vert_{E[\mu]})$
is the space of pairs $(x^{-\mu/2} f(x)$, $x^{\mu/2} g(x))$ of quasi-polynomials
with Wronskian given by \eqref{f1}.
\smallskip
In this paper we show that this is indeed so.
We also show that
the scattering matrix isomorphism
$E[\mu]\to E[-\mu]$ corresponds to the natural isomorphism $X[\mu]\to X[-\mu]$ defined by the transposition of
the quasi-polynomials.
\smallskip
The main message of this paper is the deep relation between the quantum dynamical Gaudin model $(\B, \Fun)$ and
the spaces of pairs of quasi-polynomials.
\smallskip
It would be interesting to develop the elliptic version of this correspondence. In the elliptic version the pairs of quasi-polynomials
are replaced with pairs of theta-polynomials, see \cite{ThV}, but the elliptic KZB operator
$H_0$ does depend on $z$ and does not have apparent analogs of the
subspaces $E[\mu]$.
\medskip
The paper is organized as follows. In Section \ref{sec 2} we define the
$\slt$ quantum dynamical Gaudin model.
In Section \ref{sec 3} we discuss properties of the spaces $E[\mu]$.
In Section \ref{sec 4} we introduce the quantum trigonometric Gaudin model
$(V[\nu],\B(z,\mu,V[\nu]))$ on a weight subspace $V[\nu]\subset V$ and show that
the quantum dynamical Gaudin model $(E[\mu], \B\big\vert_{E[\mu]})$
is isomorphic to the quantum trigonometric Gaudin model
$(V[0],\B(z,\mu,V[0]))$ on the zero weight subspace. In Section \ref{sec 5} we describe
the Bethe ansatz for the quantum trigonometric Gaudin model.
In Sections \ref{sec 6} and \ref{sec 7} we describe the kernel of the operator $\mc E_\psi$.
In Sections \ref{sec 8} - \ref{sec 11} we develop the geometrization procedure. The constructions
of Sections \ref{sec 9} - \ref{sec 11} are parallel to the geometrization constructions in
\cite{MTV3, MTV2}.
\medskip
The authors thank V.\,Tarasov for useful discussions.
\section{Quantum dynamical Gaudin model}
\label{sec 2}
\subsection{$\glt$ $RST$-operator}
\label{ssec RST}
\label{ssec not}
Consider the complex Lie algebra $\glt$ with standard basis $e_{11}$, $e_{12}$, $e_{21}$, $e_{22}$.
Denote by $\h$ the Cartan subalgebra of $\glt$ with basis $e_{11},e_{22}$ and elements
$\la_1 e_{11}+\la_2 e_{22}$.
Denote
\bea
\la :=\la_1-\la_2.
\eea
Let $z=\{z_1,\dots,z_n\}\subset \C^\times$ be a set of nonzero pairwise distinct numbers.
\smallskip
Let $V^1,\dots,V^n$ be $\glt$-modules and $V=\otimes_{k=1}^nV^k$.
Let $V = \oplus_{\nu\in\h^*} V[\nu]$
be the weight decomposition, where $ V[\nu] = \{v\in V\ |\ e_{jj}v = \nu (e_{jj})v \ \on{for}\ j=1,2 \}$.
In particular,
\bea
V[0] = \{v\in V\ |\ e_{11}v = e_{22}v = 0\}.
\eea
For $g\in \glt$, denote $g^{(s)} = 1 \otimes \cdots \otimes g \otimes \cdots \otimes 1\in \End(V)$, with $g$ in the $s$-th factor.
An element $e_{jk}$ acts on $V$ by $e_{jk}^{(1)}+ \dots + e_{jk}^{(n)}$.
\smallskip
Let $u$ be a variable. Denote
\bea
x= e^{-2\pi \sqrt{-1}u}.
\eea
Let $\der_u = \frac{\der }{\der u}$, \, $\der_x = \frac{\der }{\der x}$,\, $\der_{\la_j} = \frac{\der }{\der \la_j}$ and so on.
\smallskip
Introduce a $2\times 2$-matrix $\L$,
\bean
\label{matrixL0}
\phantom{aaa}
\\
\notag
\begin{bmatrix} \L_{11} &\L_{12}
\\
\L_{21} &\L_{22}
\end{bmatrix}
=
\begin{bmatrix}
\pi
\sqrt{-1}
\sum_{s=1}^n \frac{z_s+x}{z_s-x} \,e_{11}^{(s)}
+ \pi \cot (\pi \la) \, e_{22} \; & \;
\pi \sqrt{-1} \sum_{s=1}^n \frac{z_s+x}{z_s-x} \,e_{21}^{(s)} - \pi \cot (\pi \la) e_{21}
\\
\pi \sqrt{-1} \sum_{s=1}^n \frac{z_s+x}{z_s-x} \,e_{12}^{(s)} + \pi \cot (\pi \la) e_{12} \;
& \; \pi \sqrt{-1} \sum_{s=1}^n \frac{z_s+x}{z_s-x} \,e_{22}^{(s)} - \pi \cot (\pi \la) e_{11}
\\
\end{bmatrix},
\eean
\vsk.2>
\noindent
The entries of $\L$ are $\End(V)$-valued trigonometric functions of $u$ and $\la$.
\smallskip
The {\it universal dynamical differential operator} (or the {\it $RST$-operator})
is defined by the formula
\bean
\label{trigRST}
\mc C = \cdet (\delta_{jk}\partial_u - \delta_{jk} \, \partial_{\la_{j}} + \L_{jk}),
\eean
where for a $2\times 2$-matrix $A = (a_{jk})$ with noncommuting entries the column determinant
is defined by the formula
\bea
\cdet A = a_{11}a_{22} - a_{21}a_{12}\,.
\eea
Write the $RST$-operator in the form
\bean
\label{D}
\mc C = \partial_u^2 \,+ \,C_1(x)\, \partial_u \,+ \,C_2(x),
\eean
where $C_1(x)$ and $C_2(x)$ are functions in $x$ with
values in the space of linear differential operators in variables $\la_1,\la_2$ with coefficients in $\End(V)$.
\begin{thm}
[\cite{RST}]
\label{thm RST}
Fix $z=\{z_1,\dots,z_n\}\subset \C^\times$.
Then for any $a \in \C-\{z_1,\dots,z_n\}$ the operators $C_1(a), C_2(a)$,
restricted to $V[0]$-valued functions of $\la_1,\la_2$,
define
linear differential operators in $\la_1, \la_2$ with coefficients in $\End(V[0])$. Moreover, for any
$a,b \in \C -\{z_1,\dots,z_n\}$,
the differential operators $C_j(a)$, $C_k(b)$, $j,k=1,2$, acting on the space of $V[0]$-valued functions of $\la_1,\la_2$
commute:
\bean
\label{cC}
[C_j(a), C_k(b)] = 0, \qquad j,k=1,2.
\eean
\end{thm}
\smallskip
The elliptic version of the $RST$-operator for $\gln$ was introduced by V.\,Rubtsov, A.\,Silantyev, D.\,Talalaev in \cite{RST}.
The elliptic version of the $\gln$ $RST$-operator, in particular for the case of $N=2$,
was discussed in \cite{ThV}. The $RST$-operator,
defined in \eqref{D}, is the trigonometric
degeneration of the elliptic $\glt$ $RST$-operator.
\subsection{Dynamical Bethe algebra of $\Fun$}
In this paper, we are interested in the $\slt$ version of the $RST$-operator.
\smallskip
The Lie algebra $\slt$ is a Lie subalgebra of $\glt$. We have $\glt=\slt\oplus \C(e_{11}+e_{22})$,
where $e_{11}+e_{22}$ is a central element.
Let $V^1,\dots,V^n$ be $\slt$-modules, thought of as $\glt$-modules,
where the central element $e_{11}+e_{22}$ acts by zero.
Let $V=\ox_{k=1}^nV^k$ be the tensor product of the $\slt$-modules.
\smallskip
In this paper {\it we consider only such tensor products.}
\smallskip
We consider the Cartan subalgebra of $\slt$ consisting of elements $\la_1e_{11}+\la_2e_{22}$
with $\la_1+\la_2=0$.
We identify the algebra of functions
on the Cartan subalgebra of $\slt$ with the algebra of functions in
the variable
\bea
\la=\la_1-\la_2\,,
\eea
since the elements
$\la_1e_{11}+\la_2e_{22}$
with $\la_1+\la_2=0$ are uniquely determined by the difference of coordinates.
Denote by $\Fun$ the space of $V[0]$-valued meromorphic functions on the Cartan subalgebra of $\slt$.
In other words, $\on{Fun}_{\slt}\!\!V[0]$ is the space of $V[0]$-valued meromorphic functions in
one variable $\la$.
\smallskip
Each coefficient $C_1(x), C_2(x)$ of the $RST$-operator, defines a differential operator
acting on $\Fun$. From now on {\it we consider the coefficients $C_1(x)$, $C_2(x)$ as
a family of commuting differential operators on $\Fun$}, depending on the parameter $x$.
\smallskip
The commutative algebra of differential operators on $\Fun$ generated by the identity operator
and the operators $\{ C_j(a)\ |\ j=1,2, \ a\in \C -\{z_1,\dots,z_n \}\}$
is called the {\it dynamical Bethe algebra} of $\Fun$. The dynamical Bethe algebra depends on the choice of
the numbers $\{z_1,\dots,z_n\}$.
\subsection{Tensor product of $\slt$-modules}
\label{sec TP}
Given $m\in \Z_{\geq 0}$, denote by $V_{m}$ the
irreducible $\slt$-module with highest weight $m$.
It has a basis $v^m_0,\dots,v^m_m$ such that
\bean
\label{V_m}
\phantom{aaaaa}
(e_{11}-e_{22})v^m_k=(m-2k)v^m_k,
\quad
e_{21}v^m_k=(k+1)v^m_{k+1},
\quad
e_{12}v^m_k=(m-k+1)v^m_{k-1}.
\eean
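Here $v^m_{-1}=v^m_{m+1}=0$. For example, in $V_1$ these formulas
give $e_{21}v^1_0=v^1_1$, $e_{12}v^1_1=v^1_0$, and
$e_{12}v^1_0=e_{21}v^1_1=0$, so that $V_1$ is the standard
two--dimensional representation.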
\smallskip
From now on our tensor product $V$ is of the form
\bean
\label{ten V}
V\,=\, \ox_{s=1}^n V_{m_s}\,, \qquad m_s\in\Z_{> 0}\,.
\eean
We have the weight decomposition $V =\oplus_{\nu\in\Z}V[\nu]$ consisting of weight subspaces
\bean
\label{wdec}
V[\nu] \,=\, \{v\in V\ |\ (e_{11}-e_{22}) v = \nu v\,\}\,.
\eean
If $V[\nu]$ is nonzero, then
\bean
\label{mu m}
\nu = \sum_{s=1}^n m_s - 2k,
\eean
for some nonnegative integer $k$.
The dimension of $V[0]$ is positive if the sum $\sum_{s=1}^n m_s$ is even.
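For example, if $V=V_1\ox V_1$, then $M=2$ and $V[0]$ is
two--dimensional, spanned by $v^1_0\ox v^1_1$ and $v^1_1\ox v^1_0$.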
\subsection{Operator $\mc A(\mu)$}
\label{sec WG}
The $\slt$ Weyl group $W$ consists of two elements: identity and involution $\si$.
The projective action of $W$ on $V_m$ is given by the formula
\bea
\si: v^m_k \mapsto (-1)^kv^m_{m-k}
\eea
for any $k$. We have $\si^2=(-1)^m$. The Weyl group $W$ acts on the tensor product $V$ diagonally.
\smallskip
Following \cite{TV}, introduce
\bea
p(\mu) \,=\, \sum_{k=0}^\infty \,e_{21}^k e_{12}^k\,\frac1{k!}\,
\prod_{j=0}^{k-1}\,\frac 1{\mu +e_{22}-e_{11}-j}\,,
\qquad \mu\in \C.
\eea
The series $p(\mu)$ acts on $V_m$, since only a finite number of terms acts nontrivially.
The formula for the action of $p(\mu)$ on a basis vector $v^m_k$
becomes more symmetric if $\mu$ is replaced by $\mu+\frac {\nu}2-1$, where
$\nu=m-2k$ is the weight of $v^m_k$,
\bean
\label{ppro}
p\Big(\mu+\frac {\nu}2-1\Big) v^m_k\,=\,\prod_{j=0}^{k-1}\, \frac{\mu +\frac m2-j}{\mu -\frac m2 + j} \,v^m_k\,,
\eean
see \cite[Section 2.5]{TV}.
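For example, on the zero weight vector $v^2_1\in V_2$, where $m=2$,
$k=1$, $\nu=0$, formula \eqref{ppro} gives
\bea
p(\mu-1)\,v^2_1\,=\,\frac{\mu+1}{\mu-1}\,v^2_1\,.
\eea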
\smallskip
The series $p(\mu)$ acts on the tensor product $V$ in the standard way.
Introduce the operator
\bean
\label{mc A}
\mc A(\mu) \,:\,V\,\to\, V,
\quad
v \ \mapsto\ \si p(\mu) \,v\,.
\eean
The operator $\mc A(\mu)$ is a meromorphic function of $\mu$. For any $\nu$, we have
$\mc A(\mu) V[\nu] \subset V[-\nu]$, and
$\lim_{\mu\to \infty} \mc A(\mu) = \si$\,. The operator $\mc A(\mu)$ may be considered as a deformation of the Weyl group
operator $\si$.
\begin{lem}
\label{lem v iso}
For $V= \ox_{s=1}^n V_{m_s}$ as in \eqref{ten V}, denote $M=\sum_{s=1}^n m_s$.
Assume that $\mu \notin \frac M2+\Z$.
Then for any $\nu$ the operator
\bean
\label{A mu}
\mc A\Big(\mu +\frac {\nu}2-1\Big)\Big\vert_{V[\nu]}\,:\,V[\nu]\,\to\,V[-\nu]
\eean
is an isomorphism of vector spaces. The composition of the operator
$\mc A\Big(\mu +\frac {\nu}2-1\Big)\Big\vert_{V[\nu]}$ and the operator
\bean
\label{A-mu}
\mc A\Big(-\mu -\frac {\nu}2+1\Big)\Big\vert_{V[-\nu]}\,:\,V[-\nu]\,\to\,V[\nu]
\eean
is the scalar operator on $V[\nu]$ of multiplication by $(-1)^M \frac{\mu -\nu/2}{\mu +\nu/2}$\,.
\end{lem}
\begin{proof}
The $\slt$ irreducible decomposition $V=\oplus _m V_m$ of the tensor product $V$
has the highest weights $m$ of the form $m=M-2k$ for $k\in \Z_{\geq 0}$, only.
Now \eqref{A mu} is an isomorphism by formula \eqref{ppro}.
The statement on the composition is \cite[Theorem 10]{TV}.
\end{proof}
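Note that for $\nu=0$ the composition scalar reduces to $(-1)^M$,
consistent with the limit $\mu\to\infty$, in which both operators in
the composition tend to $\si$ and $\si^2=(-1)^M$ on $V$.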
\begin{rem}
The operator $\mc A(\mu)$ is the (only) nontrivial element of the $\slt$ dynamical Weyl group of $V$, see definitions in
\cite{EV}.
\end{rem}
\subsection{KZB operators}
Introduce the following elements of $\glt\otimes\glt$\,,
\bea
&&
\Om_{12} = e_{12}\otimes e_{21}, \phantom{aaaaaaaa} \qquad \Om_{21} = e_{21}\otimes e_{12},
\\
&&
\Om_0=e_{11}\otimes e_{11} + e_{22}\otimes e_{22}, \qquad
\Om = \Om_0 + \Om_{12} + \Om_{21}.
\eea
The {\emph{KZB operators}} $H_0, H_1(z), \dots,H_n(z)$
are the following differential operators in variables $\la_1,\la_2$ acting on the space $\Fun$\,,
\bean
\label{tKZB}
&&
\\
\notag
H_0
&=&
\frac{1}{4\pi \sqrt{-1}} (\partial_{\la_1}^2 + \partial_{\la_2}^2) + \frac{\pi \sqrt{-1}}{4}
\sum_{s,t=1}^n \left[ \frac{1}{2} \Om^{(s,t)}_0 + \frac{1}{\sin^2(\pi\la)}
\left(\Om_{12}^{(s,t)} + \Om_{21}^{(s,t)} \right) \right],
\\
\notag
H_s(z)
&=&
- (e_{11}^{(s)} \partial_{\la_1} + e_{22}^{(s)} \partial_{\la_2}) +
\sum_{t:\, t \ne s} \left[ \pi \sqrt{-1} \frac{z_t+ z_s}{z_t-z_s} \Om^{(s,t)} - \pi \cot (\pi \la)
\left( \Om_{12}^{(s,t)} - \Om_{21}^{(s,t)} \right) \right],
\eean
cf. formulas in Section 3.4 of \cite{JV}. The elliptic KZB operators were introduced in \cite{FW}.
In \eqref{tKZB} we consider the trigonometric degeneration of the elliptic KZB operators.
\smallskip
By \cite{FW} the operators $H_0, H_1(z), \dots, H_n(z)$ commute and
$\sum_{s=1}^n H_s(z) = 0$.
\begin{rem}
The differential operator $H_0$ is the {\it Hamiltonian operator of the trigonometric quantum two-particle
system with spin space} $V$.
\end{rem}
\subsection{Coefficients $C_1(x)$, $C_2(x)$}
\begin{lem}
We have
\bea
C_1(x) = \L_{11}(x)+\L_{22}(x) - \partial_{\la_1} - \partial_{\la_2}.
\eea
Hence the coefficient $C_1(x)$ acts by zero on $\Fun$.
\qed
\end{lem}
\begin{cor} The $RST$-operator \eqref{D} has the form
\bean
\label{mc C}
\mc C=\der_u^2+C_2(x)
\eean
as an operator on $\Fun$.
\end{cor}
\begin{thm} [\cite{ThV}]
\label{S_2(X)}
We have
\bean
\label{C2}
&&
\\
&&
\notag
C_2(x) = -2\pi \sqrt{-1} H_0 - \sum_{s=1}^n
\left[ 2\pi \sqrt{-1} \frac{H_s(z)}{1-x/z_s} +
4\pi^2 \Big(-\frac{c_2^{(s)}}{1-x/z_s} + \frac{c_2^{(s)}}{(1-x/z_s)^2}\Big) \right],
\eean
where $c_2 = e_{11}e_{22} - e_{12}e_{21} + e_{11}$ is a central element of $\glt$.
\end{thm}
\begin{proof}
This is the trigonometric degeneration of the elliptic version of this theorem, see
\cite[Theorem 4.9]{ThV}.
\end{proof}
\begin{cor} The dynamical Bethe algebra of $\Fun$ is generated by the identity operator and
the KZB operators $H_0, H_1(z), \dots, H_n(z)$.
\end{cor}
The commutativity of the KZB operators and formulas \eqref{mc C},
\eqref{C2} imply the commutativity $[C_2(a),C_2(b)]=0$ independently of Theorem
\ref{thm RST}.
\subsection{Weyl group invariance}
\label{sec Wgi}
The Weyl group acts on $V[0]$
as explained in Section \ref{sec WG}.
Hence the Weyl group acts on $\Fun$ by the formula
\bean
\label{siF}
\si : \psi(\la) \mapsto \si (\psi (-\la)), \qquad
\psi \in\Fun\,.
\eean
\smallskip
\noindent
This action extends to a Weyl group action on
$\End(\Fun)$, where for
$T \in \End(\Fun)$ the operator $\si(T)$ is defined as the composition
$\si T\si^{-1}$ in $\End(\Fun)$.
\begin{lem} [\cite{ThV}]
\label{weyl inv}
For any $a\in \C-\{z_1,\dots,z_n\}$ the operator $C_2(a) \in \End(\Fun)$ is Weyl group invariant.
\end{lem}
\begin{proof}
By \cite{FW} the KZB operators $H_0,H_1(z),\dots,H_n(z)$ are Weyl group invariant.
The lemma follows from formula \eqref{C2}.
\end{proof}
\section{Eigenfunctions of $H_0$}
\label{sec 3}
\subsection{Trigonometric Gaudin operators}
The trigonometric $r$-matrix is defined by
\bean
\label{r-matrix}
r(x) = \frac{\Om_+ x + \Om_-}{x-1},
\eean
where
$\Om_+ = \frac12 \Om_0 + \Om_{12}$,\ $\Om_- = \frac12 \Om_0 + \Om_{21}$.
For $\mu \in \C$ the trigonometric Gaudin operators acting on $V$ are defined as
\bean
\label{trig Gaudin}
\K_s( z,\mu) = \frac{\mu}{2}\,(e_{11}-e_{22})^{(s)} + \sum_{t:\, t \ne s} r^{(s,t)} (z_s/ z_t), \qquad s=1,\dots,n.
\eean
Each operator $\K_s(z,\mu)$ preserves each of the weight subspaces $V[\nu]$
and
\bea
[\K_s(z,\mu), \K_t( z,\mu)] = 0
\eea
for all $s,t$, see \cite{Ch, EFK}.
\subsection{Dynamical Bethe algebra of $E[\mu]$}
\label{sec BE}
Let
\bea
\La=e^{-2\pi \sqrt{-1} \la}\,, \qquad\on{where}\ \la=\la_1-\la_2\,.
\eea
Let $\A$ be the algebra of functions in $\la$, which can be represented
as meromorphic functions of $\La$ with poles only at the set $\{\La=1\}$.
\smallskip
For $\mu\in\C$ introduce the $\A$-module $\A[\mu]$
of functions of the form
$e^{\pi \sqrt{-1} \mu\la}f,$ where $f\in\A$. This module
is preserved by derivatives with respect to $\la_1,\la_2$. Therefore the KZB operator $H_0$ preserves the space
$\A[\mu]\otimes V[0]$. Any $\psi \in \A[\mu]\otimes V[0]$
has the form
\bea
\psi(\la) = e^{\pi \sqrt{-1} \mu\la}\sum_{k=0}^\infty \La^k \psi^k, \qquad \psi^k\in V[0].
\eea
\begin{thm} [\cite{FV2}]
\label{prop}
Let $\mu \not\in\Z_{>0}$.
Then for any nonzero $v \in V[0]$, there exists a unique $\psi \in \A[\mu] \ox V[0]$ such that
\bea
H_0\, \psi\, = \,\epsilon \,\psi\,,
\eea
for some $\epsilon\in\C$ and $\psi^0=v$. Moreover,
$\epsilon = \,\pi \sqrt{-1}\, \frac{\mu^2}2$\,.
\end{thm}
Cf. \cite{JV}. This function $\psi$ is denoted by $\psi_v$.
\smallskip
We denote by $E[\mu]$ the vector space of functions $\psi\in \A[\mu] \ox V[0]$ such that $H_0\,\psi =\,\pi \sqrt{-1}
\, \frac{\mu^2}2\,\psi$. For more information on this space see \cite[Section 9]{JV}.
\begin{cor}
For $\mu \notin \Z_{>0}$, the map
\bean
\label{is VE}
V[0]\to E[\mu], \qquad v \mapsto \psi_v,
\eean
is an isomorphism.
\end{cor}
\begin{thm} [\cite{JV}]
\label{H_s eigen}
Let $\mu \notin \Z_{>0}$. Then for $s=1,\dots,n$, the KZB operators $H_s(z)$ preserve the space $E[\mu]$.
Moreover, for any $v \in V[0]$ we have
\bea
H_s(z) \psi_v = \psi_w,
\eea
where $w=-2\pi \sqrt{-1} \, \K_s (z,\mu)\, v$.
\end{thm}
\begin{thm}
\label{B20 thm}
Let $\mu \notin \Z_{>0}$, $V= \ox_{s=1}^n V_{m_s}$\,, and $v\in V[0]$. Then
\bea
C_2(x)\,\psi_v\,=\,\psi_w\,,
\eea
where
\bean
\label{B20}
\phantom{aaa}
w\,=\, (2\pi \sqrt{-1})^2 \bigg[-\frac{\mu^2}{4} + \sum_{s=1}^n \Big[\frac{m_s(m_s+2)/4+\K_s(z,\mu)}{1-x/ z_s}
- \; \frac{m_s(m_s+2)/4}{(1-x/z_s)^2} \Big] \bigg] v\,.
\eean
\end{thm}
\begin{proof}
One computes the action of $C_2(x)$ on $\psi_v$ using Theorem \ref{S_2(X)}. The computation is based on
Theorem \ref{H_s eigen} and the fact that $c_2$ acts on $V_{m_s}$ as multiplication by
$-\frac{m_s(m_s+2)}{4}$.
\end{proof}
By Theorem \ref{H_s eigen} the subspace $E[\mu]\subset \Fun$ is invariant with respect to the
action of the dynamical Bethe algebra. The restriction
of the dynamical Bethe algebra to $E[\mu]$ is called the
{\it dynamical Bethe algebra} of $E[\mu]$ and denoted by
$\B(z; E[\mu])$.
\vsk.2>
Notice that $E[\mu]$
is a finite-dimensional vector space of dimension $\dim V[0]$. The space $E[\mu]$
does not depend on $ z$,
since the KZB operator $H_0$ does not depend on $ z$.
The algebra $\B(z; E[\mu])$ is generated by the identity operator and
the KZB operators $H_1(z), \dots, H_n(z)$ and does depend on $ z$.
\subsection{Two-particle scattering matrix}
\begin{thm}
[{\cite[Lemma 6.2]{FV2}}]
\label{thm x=A}
For $\mu\notin\Z$, the action \eqref{siF} of the Weyl group involution $\si$ on $\Fun$ identifies
the spaces $E[\mu]$ and $E[-\mu]$. More precisely, for any $v\in V[0]$ we have
\bean
\label{vAw}
\si (\psi_v^\mu(-\la)) \,=\, \psi_{\mc A(\mu-1)v}^{-\mu}(\la),
\eean
where $\psi_v^\mu(\la)$ is the element of $E[\mu]$ with initial term $v$ and
$\psi_{\mc A(\mu-1)v}^{-\mu}(\la)$ is the element of $E[-\mu]$ with
initial term $\mc A(\mu-1)v$. Here $\mc A(\mu-1):V[0]\to V[0]$ is the vector isomorphism, defined in \eqref{mc A}.
\end{thm}
\begin{proof}
Formula \eqref{vAw} is proved in the example next to Lemma 6.2 in \cite{FV2}.
\end{proof}
\section{Quantum trigonometric Gaudin model}
\label{sec 4}
\subsection{Universal differential operator}
Let $V= \ox_{s=1}^n V_{m_s}$.
Introduce a $2\times 2$-matrix
\bean
\label{matM}
\mc M = \begin{bmatrix} \mc M_{11} \; & \; \mc M_{12} \\
\mc M_{21} \; & \; \mc M_{22} \\
\end{bmatrix} = -2 \pi \sqrt{-1} \sum_{s=1}^n r^{(0,s)}(x/z_s),
\eean
where $r(x)$ is the trigonometric $r$-matrix defined in \eqref{r-matrix}.
More explicitly,
\bea
\M = \begin{bmatrix}
2\pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{11}^{(s)} - \pi \sqrt{-1} e_{11} \; &
\; 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{21}^{(s)} - 2 \pi \sqrt{-1} e_{21} \\
2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{12}^{(s)} \; &
\; 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{22}^{(s)} - \pi \sqrt{-1} e_{22} \\
\end{bmatrix}.
\eea
The {\it universal (trigonometric) differential operator} for $V$ with parameter $\mu\in\C$
is defined by the formula
\bean
\label{Dcdet}
\D = \cdet \begin{bmatrix}
\der_u -\pi \sqrt{-1} \mu + \M_{11} \; & \; \M_{12}
\\
\M_{21} \; & \; \der_u + \pi \sqrt{-1} \mu + \M_{22}
\\
\end{bmatrix}.
\eean
Write the operator $\D$ in the form
\bea
\D = \der_u^2 +D_1(x)\der_u + D_2(x),
\eea
where $D_1(x)$, $D_2(x)$ are $\End(V)$-valued functions of $x$. It is clear that
$\D$ commutes with the action on $V$ of the Cartan subalgebra of $\slt$. In particular, this means that
$D_1(x)$, $D_2(x)$ preserve the weight decomposition of $V$.
\subsection{Coefficients and Gaudin operators}
\label{sec ctG}
\begin{thm}
\label{Dthm}
We have $D_1(x)=0$ and $(2\pi \sqrt{-1})^{-2} D_2(x)$ equals
\bean
\label{DD2}
\phantom{aaaqa}
-\frac{\mu^2+\mu (e_{11} - e_{22}) - e_{11} e_{22}}4
+ \sum_{s=1}^n \Big[ \frac{m_s(m_s+2)/4 + \K_s ( z,\mu)}{1-x/z_s} - \frac{m_s(m_s+2)/4}{(1-x/z_s)^2} \Big] .
\eean
\end{thm}
\begin{proof}
The proof is by straightforward calculation. We have
\bea
\D
&=&\Big(\der_u - \pi \sqrt{-1} \mu + 2\pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{11}^{(s)} - \pi \sqrt{-1} e_{11}\Big)
\\
&
\times
&
\Big(\der_u + \pi \sqrt{-1} \mu + 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{22}^{(s)} - \pi \sqrt{-1} e_{22}\Big)
\\
&-&
\Big( 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{12}^{(s)}\Big)
\Big( 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{1}{1-x/ z_s} e_{21}^{(s)} - 2 \pi \sqrt{-1} e_{21} \Big) .
\eea
Then
\bea
D_1(x) = 2 \pi \sqrt{-1} \sum_{s=1}^n \frac{e_{11}^{(s)}+e_{22}^{(s)}}{1-x/ z_s} - \pi \sqrt{-1} (e_{11} + e_{22}) = 0.
\eea
Since $x=e^{-2 \pi \sqrt{-1} u}$ and $\der_u = -2 \pi \sqrt{-1} \,x \der_x$, the expression
$(2\pi \sqrt{-1} )^{-2}D_2(x)$ equals
\bean
\label{D/}
&&
-\frac{\mu^2}{4} - \sum_{s=1}^n \frac{e_{22}^{(s)}}{(1-x/z_s)^2} + \sum_{s=1}^n \frac{e_{22}^{(s)}}{1-x/z_s}
+ \frac{\mu}{2} \sum_{s=1}^n \frac{e_{11}^{(s)}-e_{22}^{(s)}}{1-x/z_s}
- \frac{\mu}{4}(e_{11}-e_{22})
\\
\notag
&&
+ \sum_{s=1}^n \frac{e_{11}^{(s)} e_{22}^{(s)}}{(1-x/z_s)^2}
+ \sum_{s=1}^n \Big( \sum_{t: \, t\ne s} \frac{e_{11}^{(s)} e_{22}^{(t)} + e_{11}^{(t)} e_{22}^{(s)}}{1- z_s/ z_t} \Big)
\frac{1}{1-x/z_s} - \sum_{s=1}^n \frac{e_{11}^{(s)} e_{22}^{(s)}}{1-x/z_s}
\\
\notag
&&
- \sum_{s=1}^n \Big( \sum_{t: \, t\ne s} e_{22}^{(t)} \Big) \frac{e_{11}^{(s)}}{1-x/z_s}
+ \frac14 e_{11} e_{22} - \sum_{s=1}^n \frac{e_{12}^{(s)} e_{21}^{(s)}}{(1-x/z_s)^2} \phantom{aaaaaaaaaa}
\\
\notag
&&
- \sum_{s=1}^n \Big( \sum_{t: \, t\ne s} \frac{e_{12}^{(s)}e_{21}^{(t)} + e_{12}^{(t)} e_{21}^{(s)}}{1- z_s/ z_t} \Big)
\frac{1}{1-x/z_s} +
\sum_{s=1}^n \frac{e_{12}^{(s)} e_{21}^{(s)}}{1-x/z_s} + \sum_{s=1}^n \Big( \sum_{t: \, t\ne s} e_{21}^{(t)} \Big) \frac{e_{12}^{(s)}}{1-x/z_s} .
\eean
The constant term in \eqref{D/} equals $-\frac{\mu^2+\mu (e_{11} - e_{22}) - e_{11} e_{22}}4$.
For $s=1,\dots,n$, the coefficient of $\frac{1}{1-x/z_s}$ in \eqref{D/} equals
\bea
&&
-c_2^{(s)}\, +\, \frac{\mu}{2}\, (e_{11}-e_{22})^{(s)} \,+
\,\sum_{t: \, t \ne s} \frac{e_{12}^{(s)} e_{21}^{(t)} z_s
+ e_{12}^{(t)} e_{21}^{(s)} z_t}{ z_s - z_t} + e_{22}^{(s)} \big( e_{22} - e_{22}^{(s)} \big)
\\
&&
\phantom{aaa}
= \ -c_2^{(s)} \,+ \,\K_s( z,\mu) \,= \,m_s(m_s+2)/4\, +\, \K_s(z, \mu).
\eea
The coefficient of $\frac{1}{(1-x/z_s)^2}$ in \eqref{D/} equals
\bea
-e_{22}^{(s)} + e_{11}^{(s)}e_{22}^{(s)} - e_{12}^{(s)}e_{21}^{(s)} = (e_{11} e_{22} - e_{12}e_{21} + e_{11})^{(s)} = c_2^{(s)},
\eea
where the first equality uses that the central element $e_{11}+e_{22}$ acts by zero on each $V_{m_s}$, so that $-e_{22}^{(s)}=e_{11}^{(s)}$.
The theorem is proved.
\end{proof}
\begin{lem}
\label{D2 comm}
For any $a,b\in \C-\{z_1,\dots,z_n\}$ the operators
$D_2(a), D_2(b)\in \End(V)$ commute.
They also commute with the $\glt$ Cartan subalgebra.
\end{lem}
\begin{proof}
It is clear that the trigonometric Gaudin operators $\K_s(z,\mu)$ commute with the
$\glt$ Cartan subalgebra. Now the lemma follows from the commutativity of
trigonometric Gaudin operators.
\end{proof}
\begin{cor}
\label{cor D2 mu}
Choose a weight subspace $V[\nu]$ of $V$. Then
$(2\pi \sqrt{-1})^{-2} D_2(x)$ restricted to $V[\nu]$ equals
\bean
\label{D2}
-\, \frac{(\mu+\nu/2)^2}{4} + \sum_{s=1}^n \Big[ \frac{m_s(m_s+2)/4 + \K_s ( z,\mu)}{1-x/z_s} - \frac{m_s(m_s+2)/4}{(1-x/z_s)^2} \Big] .
\eean
\qed
\end{cor}
The commutative algebra of operators on $V[\nu]$ generated by the identity operator
and the operators $\{ D_2(a)\ |\ a\in \C -\{z_1,\dots,z_n \}\}$
is called the {\it Bethe algebra} of $V[\nu]$ with parameter $\mu$ and denoted by
$\B(z;\mu; V[\nu])$. The Bethe algebra $\B(z;\mu; V[\nu])$
is generated by the identity operator and the trigonometric Gaudin operators
$\K_1 ( z,\mu), \dots,\K_n ( z,\mu)$.
\smallskip
The pair $(V[\nu], \B(z;\mu; V[\nu]))$ is called the {\it trigonometric Gaudin model on $V[\nu]$}.
\begin{cor}
\label{iso B}
If $\mu\notin \Z_{>0}$, the isomorphism $V[0]\to E[\mu]$ in \eqref{is VE} induces an isomorphism
$\B(z;\mu;V[0])\to \B(z;E[\mu])$ between the Bethe algebra of $V[0]$ with parameter $\mu$ and the
dynamical Bethe algebra of the space $E[\mu]$.
\end{cor}
\begin{proof}
The corollary is proved by comparing formulas \eqref{B20} and \eqref{D2}.
\end{proof}
\subsection{Gaudin operators and Weyl group}
\begin{lem}
[{\cite[Lemma 18]{TV}}, cf. {\cite[Lemma 5.5]{MV2}}]
\label{lem AK}
For any weight subspace $V[\nu]$, any $v\in V[\nu]$, $s=1,\dots,n$, we have
\bean
\label{AK}
\mc A\Big(\mu +\frac {\nu}2-1\Big) \K_s(z,\mu) v\,=\,
\K_s(z, - \mu)\mc A\Big(\mu +\frac {\nu}2-1\Big) v.
\eean
\end{lem}
\begin{thm}
\label{thm isom mu}
For $V= \ox_{s=1}^n V_{m_s}$ as in \eqref{ten V}, denote $M=\sum_{s=1}^n m_s$.
Assume that $\mu \notin \frac M2+\Z$.
Then for any $\nu$ the isomorphism of vector spaces
\bean
\label{A mu n}
\mc A\Big(\mu +\frac {\nu}2-1\Big)\Big\vert_{V[\nu]}\,:\,V[\nu]\,\to\,V[-\nu]
\eean
induces an isomorphism of Bethe algebras
\bean
\label{B iso}
\phantom{aaa}
\B(z;\mu; V[\nu]) \to \B(z;-\mu; V[-\nu]), \quad
T\mapsto\mc A\Big(\mu +\frac {\nu}2-1\Big) T
\mc A\Big(\mu +\frac {\nu}2-1\Big)^{-1}.
\eean
\end{thm}
\begin{proof}
The theorem is a corollary of Lemmas \ref{lem v iso} and \ref{lem AK}.
\end{proof}
\subsection{Commutative diagram}
Assume that $\mu\notin\Z$ and $M$ is even. Then $V[0]$ is a nonzero weight subspace.
\smallskip
Consider the $\B(z;\mu;V[0])$-module $V[0]$ and $\B(z;-\mu;V[0])$-module $V[0]$.
Consider the $\B(z; E[\mu])$-module $E[\mu]$ and $\B(z; E[-\mu])$-module $E[-\mu]$.
Consider the diagram relating these modules
\bean
\label{comD}
\begin{tikzcd}
(\B(z;\mu;V[0]),\, V[0]) \arrow[r, ] \arrow[d, ] & (\B(z;-\mu;V[0]), \,V[0]) \arrow[d, ]
\\
(\B(z; E[\mu]),\, E[\mu]) \arrow[r, ] & (\B(z; E[-\mu]),\, E[-\mu])
\end{tikzcd}\ \ .
\eean
\noindent
Here the map
$(\B(z;\mu;V[0]),\, V[0]) \to (\B(z;-\mu;V[0]), \,V[0])$ is the module isomorphism of Theorem \ref{thm isom mu}.
The map
$(\B(z; E[\mu]),\, E[\mu]) \to (\B(z; E[-\mu]),\, E[-\mu])$ is the module isomorphism induced by the action
of the Weyl involution $\si$
and the fact that the $RST$-operator is Weyl group invariant, see Lemma \ref{weyl inv}.
The maps
$(\B(z;\mu;V[0]),\, V[0])\to (\B(z; E[\mu]),\, E[\mu])$ and
$(\B(z;-\mu;V[0]),\, V[0])\to (\B(z; E[-\mu]),\, E[-\mu])$
are the module isomorphisms of Corollary \ref{iso B}.
\begin{thm}
\label{thm tra}
Diagram \eqref{comD} is commutative.
\end{thm}
\begin{proof}
The theorem follows from Theorems \ref{H_s eigen}, \ref{thm x=A}, \ref{thm isom mu}.
\end{proof}
\section{Bethe ansatz}
\label{sec 5}
\subsection{Bethe ansatz equations for triple $(z;\mu;V[\nu])$}
\label{sec BAE}
Let
$V= \ox_{s=1}^n V_{m_s}$, as in \eqref{ten V},
and $M=\sum_{s=1}^n m_s$. Let $V[\nu]$ be a nonzero
weight subspace of $V$.
Then $\nu = M-2m$ for some nonnegative integer $m$.
\smallskip
Let $z=\{z_1,\dots,z_n\}\subset \C^\times$ be a set of nonzero pairwise distinct numbers,
as in Section \ref{ssec not}. Let $\mu\in\C$.
\smallskip
Introduce the {\emph{master function}} of the variables $t=(t_1,\dots,t_m), \mu, z$,
\bea
\Phi(t,z,\mu) \,
&=&
\, \Big(1-\mu + \frac{\nu}{2} \Big) \sum_{i=1}^m \,\ln t_i \,+
\,\sum_{s=1}^n \,\frac{m_s}{4}\, (2\mu + m_s - \nu) \,\ln z_s
\\
& + &
2 \sum_{1 \leqslant i < j \leqslant m} \ln(t_i - t_j)
- \sum_{i=1}^m \sum_{s=1}^n m_s \ln(t_i - z_s) +
\sum_{1 \le s < r \leqslant n} \frac{m_s m_r}{2} \ln(z_s - z_r).
\eea
The {\emph{Bethe ansatz equations}} are the critical point equations for
the master function $\Phi(t,z,\mu)$ with respect to the variables $t_1,\dots,t_m$,
\bean
\label{tr.BAE}
\frac{1-\mu+\nu/2}{t_i} \,+ \,\sum_{j \ne i} \frac{2}{t_i-t_j}\, -\, \sum_{s=1}^n \frac{m_s}{t_i-z_s}\, = \,0,
\qquad i=1,\dots,m.
\eean
The master function $\Phi(t,z,\mu)$ is the trigonometric degeneration
of the elliptic master function considered in Section 5 of \cite{ThV}, see also \cite{FV1, MaV}.
\smallskip
The symmetric group $S_m$ acts on the critical set. If $(t_1^0,\dots, t^0_m; z;\mu)$
is a solution of the Bethe ansatz equations, then for any $\rho\in S_m$,
the point $(t_{\rho(1)}^0,\dots, t^0_{\rho(m)};z;\mu)$ is also a solution.
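For example, in the simplest nontrivial case $n=1$, $m_1=2$, $m=1$
(so that $\nu=0$), system \eqref{tr.BAE} reduces to the single
equation
\bea
\frac{1-\mu}{t_1}\,-\,\frac{2}{t_1-z_1}\,=\,0\,,
\eea
whose unique solution for generic $\mu$ is
$t^0_1=(\mu-1)\,z_1/(\mu+1)$, in agreement with $\dim V_2[0]=1$.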
\subsection{Bethe vectors}
Define
\bea
\mc{C} &=& \{\ell = (\ell_1,\dots,\ell_n)\in\Z_{\geq 0}\ |\ \ell_s\leq m_s, \, \ell_1+\dots+\ell_n=m\},
\\
\om_{\ell} (t,z)
&=&
\Sym \; \Big[
\prod_{s=1}^n \prod_{i=\ell_1+\dots+\ell_{s-1}+1}^{\ell_1+\dots+\ell_s} \frac{1}{t_i-z_s}\Big],
\eea
where $\Sym f(t_1,\dots,t_m) = \sum_{\rho \in S_m} f(t_{\rho(1)}, \dots,t_{\rho(m)})$.
Introduce the {\it weight function}
\bean
\label{wght_f}
\om(t,z) \,=\, \sum_{\ell \in \mc{C}} \,\om_{\ell} (t,z)\,
v^{m_1}_{\ell_1}\otimes \dots\otimes v^{m_n}_{\ell_n}\,,
\eean
see Section \ref{sec TP}. This weight function appears in \cite{MV2}; see also \cite{JV, MaV, SV}.
\smallskip
Notice that the weight function is a symmetric function of the variables $t_1,\dots,t_m$.
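For example, for $m=1$,
\bea
\om(t_1,z)\,=\,\sum_{s=1}^n\,\frac{1}{t_1-z_s}\;
v^{m_1}_0\ox\dots\ox v^{m_s}_1\ox\dots\ox v^{m_n}_0\,.
\eea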
\smallskip
If $(t^0\!\,;z;\mu)$ is a solution of the Bethe ansatz equations \eqref{tr.BAE},
then the vector $\om(t^0\!, z)$ is called the {\it Bethe vector}.
\begin{thm}
[\cite{MTV6,V}]
\label{thm Bnon}
Let $(t^0\!; z; \mu)$ be a solution of the Bethe ansatz equations \eqref{tr.BAE}.
Then the Bethe vector $\om(t^0\!, z)$ is nonzero.
\end{thm}
\begin{thm} [\cite{FV1, JV}, cf. \cite{RV}]
\label{eigenv}
Let $(t^0 ; z; \mu)$ be a solution of the Bethe ansatz equations \eqref{tr.BAE}.
Then the Bethe vector $\om(t^0\!, z)$ is an eigenvector of the trigonometric Gaudin operators,
\bea
\K_s(z,\mu) \, \om(t^0\!,z) \,=\, z_s\,\frac{\der \Phi}{\der z_s}(t^0\!, z,\mu) \, \om(t^0\!, z)\,,
\qquad s=1,\dots,n.
\eea
\end{thm}
Denote
\bean
\label{k_s}
\phantom{aaa}
k_s(t^0\!,z,\mu)
&=&
z_s\,\frac{\der \Phi}{\der z_s}(t^0\!,z,\mu)
\\
\notag
&=&
\frac{m_s}{2} \Big[ (\mu - \nu/2 + m_s/2)\, +\,
\sum_{p: \,p \ne s} m_p \,\frac{z_s}{z_s-z_p}
\, + \,
2 \sum_{i=1}^m \frac{z_s}{t_i^0-z_s} \Big].
\eean
\subsection{Bethe vectors and coefficient $D_2(x)$}
\begin{lem}
\label{hatB2 lem}
If $(t^0\!\,; z; \mu)$ is a solution of the Bethe ansatz equations
\eqref{tr.BAE}, then the Bethe vector $\om(t^0\!,z)$ is an eigenvector of
all operators of the Bethe algebra $\B(z;\mu; V[\nu])$. In particular, the
operator $D_2(x)$ acts on $\om(t^0\!,z)$ by multiplication by the scalar
\bean
\label{sc D2}
\phantom{aaaa}
(2\pi \sqrt{-1})^2\Big[- \frac{(\mu+\nu/2)^2}{4} + \sum_{s=1}^n \Big[ \frac{m_s(m_s+2)/4 + k_s (t^0\!, z,\mu)}{1-x/z_s}
- \frac{m_s(m_s+2)/4}{(1-x/z_s)^2} \Big]\Big] .
\eean
\end{lem}
\begin{proof}
The lemma follows from Theorem \ref{eigenv} and Corollary \ref{cor D2 mu}.
\end{proof}
For a solution $(t^0\!; z; \mu)$ of the Bethe ansatz equations
\eqref{tr.BAE}, we introduce the {\it fundamental differential operator}
\bean
\label{hatD}
\mc E_{\,t^0\!,z,\mu} = \der_u^2 + E_2(x, t^0\!,z,\mu),
\eean
where the function $E_2(x, t^0\!,z,\mu)$ is given by formula \eqref{sc D2}.
\subsection{Basis of Bethe vectors}
The Bethe ansatz is a method for constructing eigenvectors of commuting operators,
see Lemma \ref{hatB2 lem} for an example. The standard problem
is to determine whether the Bethe ansatz produces a basis of eigenvectors of the vector
space on which the commuting operators act.
In the case of Lemma \ref{hatB2 lem} the answer is positive.
\begin{lem}
\label{basis}
Let $\mu\notin \frac\nu 2+\Z_{>0}$. Then for generic $z=\{z_1,\dots,z_n\}\subset \C^\times$,
the set of solutions $(t^0;z;\mu)$
of system \eqref{tr.BAE}
of the Bethe ansatz equations
is such that
the corresponding Bethe vectors $\om(t^0\!,z,\mu)$ form a basis of the space $V[\nu]$.
\end{lem}
\begin{proof}
Here the word generic means
that the subset of all acceptable sets $\{z_1,\dots,z_n\}$ forms a Zariski open subset in the space
of all sets $\{z_1,\dots,z_n\}$. The proof of the lemma is standard. It is a modification of \cite[Theorem 8]{ScV},
cf. \cite[Section 4.4]{MV1}, \cite[Section 5.4]{MV2}, \cite[Section 10.6]{MTV1}.
\end{proof}
\section{Function $w(x)$ in the kernel of $\mc E_{\,t^0;z;\mu}$}
\label{sec 6}
Let $(t^0;z;\mu)$ be a solution of system \eqref{tr.BAE} of Bethe ansatz equations, where
$t^0=(t^0_1,\dots,t^0_m)$. Define
\bean
\label{Y&u}
y(x) \,=\, \prod_{i=1}^m (x- t^0_i), \qquad
w(x) \,=\,y(x)\, x^{\frac{\nu/2-\mu}{2}} \,\prod_{s=1}^n (x-z_s)^{-m_s/2}\,.
\eean
\begin{thm}
\label{thm Dform}
We have
\bean
\label{Dform}
\mc E_{\,t^0;z;\mu}\, =\, \big( \der_u + (\ln w)' \big) \big( \der_u - (\ln w)' \big)\,,
\eean
where $' = \der/\der u$. In other words,
\bean
\label{hatB2form}
E_2(x, t^0\!,z,\mu) = - (\ln w)'' - ((\ln w)')^2.
\eean
\end{thm}
\begin{rem}
For $\nu=0$ this statement is the trigonometric degeneration of its elliptic version \cite[Theorem 5.3]{ThV}.
\end{rem}
\begin{proof} Recall that $\partial_u\, =\, -2\pi \sqrt{-1} \,x \partial_x$. We have
\bea
(\ln w)'
&=&
-2\pi \sqrt{-1} \,\Big[- \frac{\nu/2+\mu }{2} + \sum_{i=1}^m \frac{t^0_i}{x-t^0_i}
- \frac12 \sum_{s=1}^n \frac{z_s m_s}{x- z_s} \Big],
\\
(\ln w)''
&=&
(2 \pi \sqrt{-1} )^2 \Big[ - \sum_{i=1}^m \frac{t^0_i}{x-t^0_i}
- \sum_{i=1}^m \frac{(t^0_i)^2}{(x-t^0_i)^2}
+ \frac12 \sum_{s=1}^n \frac{z_s m_s}{x-z_s}
+ \frac12 \sum_{s=1}^n \frac{z_s^2 m_s}{(x-z_s)^2} \Big].
\eea
Hence, $(2\pi \sqrt{-1})^{-2}(- (\ln w)'' - ((\ln w)')^2)$ equals
\bea
&& \sum_{i=1}^m \frac{t^0_i}{x-t^0_i}
+ \sum_{i=1}^m \frac{(t^0_i)^2}{(x-t^0_i)^2}
- \frac12 \sum_{s=1}^n \frac{z_s m_s}{x-z_s}
- \frac12 \sum_{s=1}^n \frac{z_s^2 m_s}{(x-z_s)^2}
\\
&&
\phantom{aa}
- \frac14 (\mu+\nu/2)^2 - \sum_{i=1}^m \frac{(t_i^0)^2}{(x-t^0_i)^2}
- 2 \sum_{i=1}^m \sum_{j:\,j \ne i} \frac{t^0_it^0_j}{t^0_i - t^0_j} \frac{1}{x-t^0_i}
\\
&&
\phantom{aa}
- \frac14 \sum_{s=1}^n \frac{z_s^2 m_s^2}{(x-z_s)^2}
- \frac 12\sum_{s=1}^n \sum_{p:\,p \ne s} \frac{z_s z_pm_s m_p}{z_s-z_p}
\frac{1}{x-z_s} + (\mu+\nu/2) \sum_{i=1}^m \frac{t^0_i}{x-t^0_i}
\\
&&
\phantom{aa}
-\frac12\,(\mu+\nu/2)\sum_{s=1}^n \frac{z_s m_s}{x-z_s}
+ \sum_{i=1}^m \sum_{s=1}^n \frac{t^0_i z_s m_s}{t^0_i-z_s} \frac{1}{x-t^0_i} -
\sum_{i=1}^m \sum_{s=1}^n \frac{t^0_i z_s m_s}{t^0_i-z_s} \frac{1}{x-z_s} .
\eea
In the expression above for each $i=1,\dots,m$ the coefficient of $\frac{1}{(x-t^0_i)^2}$
equals zero. The coefficient of $\frac{1}{x-t^0_i}$ equals
\bea
&&
(\mu+\nu/2+1)t^0_i -
\sum_{j:\,j \ne i} \frac{2t^0_it^0_j}{t^0_i-t^0_j}
+ \sum_{s=1}^n \frac{t^0_i z_sm_s}{t^0_i-z_s}
\\
&&
\phantom{aa}
= t^0_i
\Big[ \mu+\nu/2+1 + 2 \sum_{j:\,j \ne i} \frac{t^0_j-t^0_i+t^0_i}{t^0_j-t^0_i}
- \sum_{s=1}^n \frac{(z_s - t^0_i+t^0_i)m_s}{z_s-t^0_i} \Big]
\\
&&
\phantom{aa}
= t^0_i \Big[ \mu+\nu/2+1 + 2(m-1) + \sum_{j:\,j \ne i} \frac{2t^0_i}
{t^0_j-t^0_i} - \sum_{s=1}^n m_s - \sum_{s=1}^n \frac{t^0_i m_s}{z_s-t^0_i} \Big]
\\
&&
\phantom{aa}
= -(t^0_i)^2 \Big[ \frac{1-\mu+\nu/2}{t^0_i} +
\sum_{j:\,j \ne i} \frac{2}{t^0_i-t^0_j} -
\sum_{s=1}^n \frac{m_s}{t^0_i-z_s} \Big] = 0,
\eea
where the last equality follows from
the Bethe ansatz equations \eqref{tr.BAE}. For each $s=1,\dots,n$
the coefficient of $\frac{1}{(1-x/z_s)^2}$ equals
$-m_s(m_s+2)/4$. The coefficient of $\frac{1}{1-x/z_s}$ equals
\bea
&&
\frac12 m_s + \frac12 m_s \sum_{p:\,p \ne s} \frac{z_p m_p}{z_s-z_p}
+ \frac12 (\mu+\nu/2) m_s + \sum_{i=1}^m \frac{t^0_i m_s}{t^0_i-z_s}
\\
&&
\phantom{aa}
= \frac{m_s}{2} \Big[ 1+ \mu+ \nu/2 - \sum_{p:\,p \ne s}
\frac{(z_p-z_s+z_s)m_p}{z_p-z_s} + 2 \sum_{i=1}^m \frac{t^0_i-z_s+z_s}{t^0_i-z_s} \Big]
\\
&&
\phantom{aa}
= \frac{m_s}{2} \Big[ 1+ \mu+ \nu/2 - \sum_{p \ne s} m_p - \sum_{p:\,p \ne s}
\frac{z_s m_p}{z_p-z_s} +
2\sum_{i=1}^m1 + 2 \sum_{i=1}^m \frac{z_s}{t^0_i-z_s} \Big]
\\
&&
\phantom{aa}
= \frac{m_s}{2} \Big[ (1+ \mu - \nu/2 + m_s) + \sum_{p:\,p \ne s}
\frac{z_s m_p}{z_s-z_p} + 2 \sum_{i=1}^m \frac{z_s}{t^0_i-z_s} \Big]
\\
&&
\phantom{aa}
= m_s(m_s+2)/4 + \frac{m_s}{2} \Big[ (\mu - \nu/2 + m_s/2) + \sum_{p:\,p \ne s}
m_p \frac{z_s}{z_s-z_p} + 2 \sum_{i=1}^m \frac{z_s}{t^0_i-z_s} \Big]
\\
&&
\phantom{aa}
= m_s(m_s+2)/4 + k_s(t^0,z,\mu),
\eea
where $k_s(t^0,z,\mu)$ are defined in \eqref{k_s}. Hence, $ E_2 = - (\ln w)'' - ((\ln w)')^2$.
\end{proof}
\begin{cor}
The function $w(x)$ lies in the kernel of $\mc E_{\,t^0;z;\mu}$.
\end{cor}
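Indeed, the right factor in the factorisation \eqref{Dform}
annihilates $w(x)$:
\bea
\big(\der_u-(\ln w)'\big)\,w\,=\,w'-\frac{w'}{w}\,w\,=\,0\,.
\eea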
\section{Function $\tilde w(x)$ in the kernel of $\mc E_{\,t^0;z;\mu}$}
\label{sec 7}
\subsection{Wronskian}
\label{ssec Wr}
The {\emph{Wronskian}} of two functions $f(a)$ and $g(a)$ is
\bean
\label{Wr}
\Wr_a (f,g) = f \frac{d g}{d a} - \frac{d f}{d a} g.
\eean
We have
\bean
\label{h^2}
\Wr_a(hf, hg) = h^2 \,\Wr_a(f,g)
\eean
for any function $h(a)$.
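Indeed, writing $'$ for $d/da$, the Leibniz rule gives
\bea
\Wr_a(hf,hg)\,=\,hf\,(h'g+hg')-(h'f+hf')\,hg\,
=\,h^2\,(fg'-f'g)\,=\,h^2\,\Wr_a(f,g)\,.
\eea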
\subsection{Wronskian and Bethe ansatz equations}
\begin{lem}
\label{lem WBA}
The following two statements hold:
\begin{enumerate}
\item[(i)]
Let $\mu\notin \frac\nu2+\Z_{\geq 0}$. Let $(t^0;z;\mu)$ be a solution
of the Bethe ansatz equations \eqref{tr.BAE} and
$y(x)= \prod_{i=1}^m(x-t^0_i)$. Then there exists a unique
monic polynomial $\tilde y(x)$ of degree $M-m$, such that
\bean
\label{Wr.eqn22}
\Wr_x (y(x),x^{\mu-\nu/2} \tilde y(x)) \,=\,\const\, x^{\mu-\nu/2-1} \prod_{s=1}^n (x-z_s)^{m_s},
\eean
where $\const$ is a nonzero constant.
\item[(ii)]
Let $\mu\ne \frac\nu2$. Assume that $y(x)= \prod_{i=1}^m(x-t^0_i)$ is a polynomial
with distinct roots
such that $y(z_s)\ne 0$, $s=1,\dots,n$. Assume that
there exists a polynomial $\tilde y(x)$ such that
equation \eqref{Wr.eqn22} holds. Then $(t^0_1,\dots,t^0_m;z;\mu)$ is a solution of the Bethe ansatz equations \eqref{tr.BAE}.
\end{enumerate}
\end{lem}
\begin{proof}
This lemma is a reformulation of Theorem 3.2 and Corollary 3.3 in \cite{MV2}.
\end{proof}
\subsection{Function $\tilde w(x)$} \label{ssec utild}
Recall that we have a solution $(t^0; z; \mu)$ of the Bethe ansatz equations, the differential
operator $\mc E_{\,t^0\!,z,\mu}$ and the function
$w(x) \,=\,y(x)\, x^{\frac{\nu/2-\mu}{2}} \,\prod_{s=1}^n (x-z_s)^{-m_s/2}$, where
$y(x) =\prod_{i=1}^m (x- t^0_i)$.
\begin{thm}
\label{quasipthm}
Let $\mu\notin \frac\nu2+\Z_{\geq 0}$. Then there exists a unique
monic polynomial $\tilde y(x)$ of degree $M-m$, such that the function
\bean
\label{utild}
\tilde w(x) \,= \,\tilde y(x)\, x^{\frac{\mu-\nu/2}{2} }\, \prod_{s=1}^n (x-z_s)^{-m_s/2}
\eean
lies in the kernel of $\mc E_{\,t^0; z;\mu}$. The functions $w(x), \tilde w(x)$ span the kernel of
$\mc E_{\,t^0 ; z; \mu}$.
\end{thm}
\begin{proof}
The differential operator $\mc E_{\,t^0; z; \mu}$ introduced in \eqref{hatD} has no first order term.
Hence the kernel of $\mc E_{\,t^0; z; \mu}$ consists of the functions $\tilde w(x)$ satisfying the equation
\bean
\label{WSo}
\Wr_u (w(x), \tilde w(x)) = \const\,.
\eean
By Lemma \ref{lem WBA}, there exists a unique monic polynomial $\tilde y(x)$ of degree $M-m$, such that
equation \eqref{Wr.eqn22} holds.
Dividing both sides of \eqref{Wr.eqn22} by $x^{\mu-\nu/2} \prod_{s=1}^n (x-z_s)^{m_s}$ we obtain
\bean
\label{Wr.eqn2}
\phantom{aaa}
\Wr_x\! \Big( y(x)\, x^{\frac{\nu/2-\mu}{2}} \prod_{s=1}^n (x-z_s)^{-m_s/2},\,
\tilde y(x) \,x^{\frac{\mu-\nu/2}{2} } \prod_{s=1}^n (x-z_s)^{-m_s/2} \Big) = \,\const\,x^{-1}.
\eean
Recall that $x=e^{-2\pi \sqrt{-1} u}$, hence $\der_u = -2\pi \sqrt{-1}\, x \der_x$. This implies equation \eqref{WSo}.
The theorem is proved.
\end{proof}
\subsection{Bethe ansatz equations for triples $(z;\mu; V[\nu])$ and $(z; -\mu; V[-\nu])$}
\begin{lem}
\label{lem yty}
Let $\mu\notin \frac\nu2+\Z_{\geq 0}$.
Let $(t^0; z; \mu)$ be a solution of the Bethe ansatz equations \eqref{tr.BAE} assigned to
the triple $(z; \mu; V[\nu])$ in Section \ref{sec BAE}. Let
\bean
\label{roty}
\tilde y(x) \,=\, \prod_{i=1}^{M-m}(x-\tilde t^{\,0}_i)
\eean
be the polynomials assigned to $(t^0; z; \mu)$ in Theorem \ref{quasipthm}. If
$\tilde y(x)$ has distinct roots and $\tilde y(z_s)\ne 0$ for $s=1,\dots,n$,
then
$(\tilde t^{\,0}_1,\dots,\tilde t^{\,0}_{M-m}; z; -\mu)$ is a solution of the Bethe ansatz equations
\eqref{tr.BAE} assigned to the triple $(z; -\mu; V[-\nu])$.
\end{lem}
\begin{proof}
Equation \eqref{Wr.eqn22} can be rewritten as
\bean
\label{Wr2}
\Wr_x (x^{-\mu+\nu/2}y(x), \tilde y(x)) \,=\,\const\, x^{-\mu+\nu/2-1} \prod_{s=1}^n (x-z_s)^{m_s}.
\eean
Now the lemma follows from the equalities $-\nu = -M+2m=M-2(M-m)$
and Lemma \ref{lem WBA}.
\end{proof}
\begin{thm}
[{\cite[Theorem 5.7]{MV2}}]
\label{thm WB}
Under assumptions of Lemma \ref{lem yty} consider
the Bethe vectors $\om(t^0\,,z, \mu) \in V[\nu]$ and
$\om(\tilde t^{\,0}\!,z,-\mu) \in V[\nu]$. Then
\bean
\label{BVA}
\mc A\Big(\mu +\frac {\nu}2-1\Big)
\om(t^0\,,z,\mu) =\, \const\,
\om(\tilde t^{\,0}\!,z,-\mu),
\eean
where $\const$ is a nonzero constant.
\end{thm}
\begin{cor}
\label{cor =eig}
Under assumptions of Lemma \ref{lem yty}, for $s=1,\dots,n$,
the eigenvalue of $\K_s(z,\mu)$ on $\om(t^0\!,z,\mu)$ equals
the eigenvalue of $\K_s(z,-\mu)$ on $\om(\tilde t^{\,0}\!,z,\mu)$.
\end{cor}
\begin{proof}
The corollary follows from Lemma \ref{lem AK} and Theorem \ref{thm isom mu}.
\end{proof}
\section{Conjugates of $\D$ and $\mc E_{\,t^0; z; \mu}$}
\label{sec 8}
\subsection{Conjugate of $\D$}
Recall the universal differential operator $\D=\der_u^2+D_2(x)$ introduced in \eqref{Dcdet}, where
the coefficient $D_2(x)$ is determined by formula \eqref{DD2}.
We introduce the operator
\bean
\label{Dconj}
\D^{c} = \frac{1}{(2\pi \sqrt{-1}\,x)^2}\,\prod_{s=1}^n (x-z_s)^{m_s/2} \cdot \D \cdot \prod_{s=1}^n (x-z_s)^{-m_s/2},
\eean
where the superscript $^c$ stays for the word ``conjugated''.
\begin{thm}
\label{Dconjthm}
We have
\bean
\label{Dconjform}
\phantom{aaaaaa}
\D^{c}
&=&
\der_x^2 + \Big[ \frac{1}{x} - \sum_{s=1}^n \frac{m_s}{x- z_s} \Big] \der_x
- \frac{1}{x} \sum_{s=1}^n \frac{m_s/2}{x- z_s} + \sum_{s=1}^n \frac{m_s(m_s+2)/4}{(x- z_s)^2}
\\
&+&
\sum_{s \ne p} \frac{m_s m_p/4}{(x- z_s)(x- z_p)}
-\frac{\mu^2+\mu (e_{11} - e_{22}) - e_{11} e_{22}}{4x^2}
\notag
\\
&-&
\frac{1}{x^2} \,\sum_{s=1}^n \left[ z_s \frac{m_s(m_s+2)/4 +
\K_s(z,\mu)}{x- z_s} + z_s^2 \frac{m_s(m_s+2)/4}{(x- z_s)^2}
\right].
\nonumber
\eean
\end{thm}
\begin{proof}
Recall that $x = e^{-2 \pi \sqrt{-1} u}$,
$\partial_u = -2 \pi \sqrt{-1} \,x \partial_x, \; \partial_u^2 = (2\pi \sqrt{-1})^2 ( x \partial_x + x^2 \partial_x^2 )$.
Denote \\
$f = \prod_{s=1}^n (x-z_s)^{-m_s/2}$. We have
$f' = -\sum_{s=1}^n \frac{m_s/2}{x-z_s} \,f\,$,
\bea
f'' = \Big( \sum_{s=1}^n \frac{m_s^2/4}{(x-z_s)^2} +
\sum_{s \ne p} \frac{m_s m_p / 4}{(x-z_s)(x-z_p)} + \sum_{s=1}^n \frac{m_s/2}{(x-z_s)^2} \Big) f,
\eea
where $' = \der/\der x$. Therefore,
\bea
\D^{c}
& =& \frac{1}{x^2} f^{-1} \Big[x^2 \der_x^2 + x\der_x + \frac{1}{(2\pi \sqrt{-1})^2} D_2(x) \Big] f
\\
&=&
\frac{1}{x^2} f^{-1} \Big[ x^2 \big( f \der_x^2 + 2 f' \der_x + f'' \big)
+ x \big(f \der_x + f' \big) + \frac{1}{(2\pi\sqrt{-1} )^2} D_2(x) f \Big]
\\
&=&
\der_x^2 + \Big[ 2 f^{-1} f' + \frac{1}{x} \Big] \der_x + \Big[ f^{-1} f'' + \frac{1}{x} f^{-1} f' +
\frac{1}{x^2} \frac{1}{(2\pi \sqrt{-1})^2} D_2(x) \Big],
\eea
which gives the right-hand side of formula \eqref{Dconjform}.
\end{proof}
\subsection{Conjugate of $\mc E_{\,t^0;z;\mu}$\,} Similarly to the conjugation of $\D$ we conjugate
$\mc E_{\,t^0\!,z,\mu}$ and consider the differential operator
\bean
\label{conj Dt}
\mc E_{\,t^0; z; \mu}^c = \frac{1}{(2\pi \sqrt{-1}\,x)^2}\,\prod_{s=1}^n (x-z_s)^{m_s/2} \cdot
\mc E_{\,t^0; z; \mu} \cdot \prod_{s=1}^n (x-z_s)^{-m_s/2}.
\eean
\begin{lem}
\label{kerDconj}
The kernel of $\mc E_{\,t^0; z; \mu}^c$ is spanned by quasi-polynomials
\bean
\label{qp PQ}
x^{\frac{\nu/2-\mu}{2}} y(x),
\qquad
x^{\frac{\mu-\nu/2}{2} }\tilde y(x),
\eean
where $y(x)$ is the monic polynomial of degree $m$, defined in \eqref{Y&u}, and $\tilde y(x)$ is the monic polynomial of degree
$M-m$, defined in Theorem \ref{quasipthm}.
\qed
\end{lem}
\begin{lem}
\label{lem e=e}
Under assumptions of Lemma \ref{lem yty}, let
$(t^0; z; \mu)$ be a solution of the Bethe ansatz equations \eqref{tr.BAE} assigned to
the triple $(z; \mu; V[\nu])$. Assume that
the numbers $\tilde t^{\,0}_1,\dots,\tilde t^{\,0}_{M-m}$ defined in Lemma \ref{lem yty}
are such that
$(\tilde t^{\,0}_1,\dots,\tilde t^{\,0}_{M-m}; z; -\mu)$ is a solution of the Bethe ansatz equations
\eqref{tr.BAE} assigned to the triple $(z; -\mu; V[-\nu])$. Then
\bean
\label{e=e}
\mc E_{\,\tilde t^{\,0}; z; -\mu}^c
=
\mc E_{\,t^0; z; \mu}^c\,.
\eean
\qed
\end{lem}
\section{Space of $V$-valued functions of $z_1,\dots,z_n$}
\label{sec 9}
\subsection{Space $V_1^{\ox n}[\nu]$}
\label{sec V1}
Recall the two-dimensional irreducible $\slt$-module $V_1$
with basis
$v^1_0\,,\,v^1_1$\,, see \eqref{V_m}.
In the rest of the paper we assume that $V$ is the tensor power of $V_1$,
\bean
\label{V1n}
V\,=\, V_1^{\ox n}\,, \qquad\on{where} \ \ n>1.
\eean
The space $V$ has a basis of vectors
\bea
v_I=v_{i_1}^1\ox\dots\ox v_{i_n}^1\,,
\eea
labeled by partitions $I=(I_1,I_2)$ of $\{1,\dots, n\}$, where
$\,i_j =0$ if $j\in I_1$, and $\,i_j =1$ if $j\in I_2$.
We have the weight decomposition $V=\oplus_{m=0}^n V[n-2m]$, where
$V[n-2m]$
is of dimension $\binom{n}{m}$ and
has the basis $\{v_I\ |\ I=(I_1,I_2), \ |I_1|=m, \ |I_2|=n-m\}$.
\smallskip
We use notations \
$\nu=n-2m$, \ $\ell = n-m$,\ and hence\ $ m+\ell=n$.
\subsection{Space $\V^{S}$}
Let $z=(z_1,\dots,z_n)$ be variables. The symmetric group
$S_n$ acts on the algebra $\C[z_1,\dots,z_n]$ by permuting the variables. Let $\si_s(z)$,
$s=1,\dots,n$, be the $s$-th elementary symmetric polynomial in $z_1,\dots, z_n$.
The algebra of
symmetric polynomials $\C[z_1,\dots, z_n]^S$ is a free polynomial algebra with generators
$\si_1(z),\dots,\si_n(z)$.
\smallskip
Let $\V$ be the space of polynomials in $z_1,\dots,z_n$ with coefficients
in $V_1^{\ox n}$,
\bea
\V = V_1^{\ox n}\ox \C[z_1,\dots,z_n].
\eea
The symmetric group $S_n$ acts on $\V$ by permuting the factors of $V_1^{\ox n}$
and the variables $z_1,\dots,z_n$ simultaneously,
\bea
\rho(v_1\ox\dots\ox v_n\ox p(z_1,\dots,z_n))=
v_{(\rho^{-1})(1)}\ox\dots\ox v_{(\rho^{-1})(n)}\ox p(z_{\rho(1)}, \dots, z_{\rho(n)}),\quad \rho\in S_n.
\eea
We denote by $\V^S$ the subspace of $S_n$-invariants in $\V$.
\begin{lem}
[\cite{MTV3}]
The space $\V^S$ is a free $\C[z_1,\dots, z_n]^S$-module of rank $2^n$.
\end{lem}
Consider the grading on $\C[z_1,\dots,z_n]$
such that $\deg z_s = 1$ for all $s = 1,\dots, n$.
We define a
grading on $\V$ by setting $\deg(v \ox p) = \deg p$ for any $v \in V_1^{\ox n}$ and
$p\in\C[z_1,\dots,z_n]$. The
grading on $\V$ induces a grading on $\End(\V)$.
\smallskip
The Lie algebras $\slt\subset\glt$ naturally act on $\V^S$. We have the weight decomposition
\bea
\V^S=\oplus_{m=0}^n \V^S[n-2m],
\qquad
\V^S[n-2m] = (V[n-2m]\ox\C[z_1,\dots,z_n])^{S}\,.
\eea
\smallskip
Let $M$ be a $\Z_{>0}$-graded space with finite-dimensional homogeneous components. Let
$M_j\subset M$ be the homogeneous component of degree $j$. The formal power series in a
variable $\al$,
$\ch_M(\al) =\sum_{j=0}^\infty (\dim M_j)\, \al^j,$\,
is called the graded character of $M$.
\begin{lem}
[\cite{MTV2}]
\label{lem frV}
The space $\V^S[n-2m]$ is a free $\C[z_1,\dots, z_n]^S$-module of rank $\binom{n}{m}$
and
\bean
\label{ch V}
\ch_{\V^S[n-2m]}(\al) \,=\, \prod_{i=1}^m \frac 1{1-\al^i} \cdot \prod_{i=1}^{n-m} \frac 1{1-\al^i}\,.
\eean
\end{lem}
\subsection{Bethe algebra of $\V^S[\nu]$}
Recall the differential operator $\D^c$ introduced in \eqref{Dconjform}
for $V=\oplus_{s=1}^nV_{m_s}$ and depending on parameter $\mu\in \C$.
For $V=V_1^{\ox n}$ the operator $\D^c$ takes the form
\bean
\label{Dz}
\mc F= \der_x^2 + F_1(x)\der_x + F_2(x),
\eean
where
\bean
\label{B}
\phantom{aaa}
F_1(x)
&=&
\frac{1}{x} - \sum_{s=1}^n \frac{1}{x- z_s}\, ,
\\
\notag
F_2(x)
&=&
- \frac{1}{x} \, \sum_{s=1}^n \frac{1/2}{x- z_s} + \sum_{s=1}^n \frac{3/4}{(x- z_s)^2}
+ \sum_{s \ne p} \frac{1/4}{(x- z_s)(x- z_p)}
\\
&-&
\frac{\mu^2+\mu (e_{11} - e_{22}) - e_{11} e_{22}}{4x^2}
-
\frac{1}{x^2} \,\sum_{s=1}^n \left[ z_s \frac{3/4 +
\K_s(z,\mu)}{x- z_s} + z_s^2 \frac{3/4}{(x- z_s)^2}
\right].
\notag
\eean
In formula \eqref{Dconjform} we had $\{z_1,\dots,z_n\}$ being
a subset of $ \C^\times$. From now on we consider
$z_1,\dots,z_n$ as independent variables.
\smallskip
The operator $\mc F$ in formula \eqref{Dz} with variables $z_1,\dots,z_n$
is called the {\it universal differential operator} for $\V^S$ with parameter $\mu\in\C$.
\begin{lem}
[{cf. {\cite[Section 2.7]{MTV3}}}]
\label{lem LG}
The Laurent expansions of $F_1(x)$ and $F_2(x)$ at infinity have the form
\bean
\label{Fij}
F_1(x) = \sum_{j=1}^\infty F_{1j} x^{-j}\,,
\qquad
F_2 (x) = \sum_{j=2}^\infty F_{2j} x^{-j}\,,
\eean
where
\bea
F_{11}
&=&
1 - n,
\qquad
F_{1j} = - \sum_{s=1}^n z_s^{j-1} \quad \on{for} \; j\geq 2 .
\eea
For any $j\geq 2$, the element $F_{2j}$ is a homogeneous polynomial
in $z_1,\dots,z_n$ of degree $j-2$ with coefficients in
$\End(V)$. The element $F_{2j}$ preserves the weight decomposition of
$\V$. Each of the elements $F_{1j}$, $j\geq 1$, $F_{2j}$, $j\geq 2$, defines an endomorphism
of the $\C[z_1,\dots,z_n]^{S}$-module $\V^S$.
\end{lem}
\begin{proof}
The proof follows from straightforward calculations.
\end{proof}
\begin{lem}
The elements $F_{1j}$, $j\geq 1$, $F_{2j}$, $j\geq 2$, considered as endomorphisms of
the $\C[z_1,\dots,z_n]^{S}$-module $\V^S$, commute.
\end{lem}
\begin{proof}
The commutativity follows from the commutativity of the trigonometric Gaudin operators
in formula \eqref{Dconjform}.
\end{proof}
For a weight subspace $\V^S[\nu]$, $\nu = n-2m$, $\ell = n-m$,
consider the commutative subalgebra
$\B(\mu; m;\ell)$
of the algebra of endomorphisms of the $\C[z_1,\dots,z_n]^{S}$-module $\V^S[\nu]$,
generated by the elements $F_{1j}$, $j\geq 1$, $F_{2j}$, $j\geq 2$.
The subalgebra $\B(\mu; m;\ell)$
is called the {\it Bethe algebra} of $\V^S[\nu]$ with parameter $\mu\in \C$.
\begin{lem}
\label{lem z in B}
The Bethe algebra $\B(\mu; m;\ell)$ contains the subalgebra of
operators of multiplication by
elements of $\C[z_1,\dots,z_n]^S$.
\end{lem}
\begin{proof}
The subalgebra of operators of multiplication by elements of
$\C[z_1,\dots,z_n]^S$ is generated by
the elements $F_{1j}$, $j\geq 1$, see Lemma \ref{lem LG}.
\end{proof}
Lemma \ref{lem z in B} makes the Bethe algebra $\B(\mu; m;\ell)$
a $\C[z_1,\dots,z_n]^S$-module.
\subsection{Weyl group invariance}
For a weight subspace $V[\nu]=V_1^{\ox n}[\nu]$ recall the operator
$\mc A\big(\mu+ \nu/2-1\big) : V[\nu] \to V[-\nu]$, defined in \eqref{mc A}.
It is an isomorphism of vector spaces, if $\mu \notin \frac n2 + \Z$.
That operator induces an isomorphism of $\C[z_1,\dots,z_n]^S$-modules,
\bean
\label{mc Amu}
\mc A\big(\mu+ \nu/2-1\big) : \V^S[\nu] \to \V^S[-\nu].
\eean
\vsk.2>
\begin{lem}
\label{lem BB iso}
Let $\mu \notin \frac n2 + \Z$. Let $F_{ij}(\mu, m,\ell)$ be the generators
of $\B(\mu;m;\ell)$, defined in \eqref{Fij}, and
$F_{ij}(-\mu,\ell, m)$ the generators
of $\B(-\mu;\ell;m)$. Then
\bean
\label{Fij iso}
F_{ij}(-\mu,\ell, m) \,=\,
\mc A\Big(\mu +\frac {\nu}2-1\Big) F_{ij}(\mu, m,\ell)\,
\mc A\Big(\mu +\frac {\nu}2-1\Big)^{-1}
\eean
for all $i,j$.
The map
\bean
\label{mu-mu}
\B(\mu;m;\ell) \to \B(\mu;m;\ell), \quad F_{ij}(\mu; m; \ell)
\mapsto
F_{ij}(-\mu,\ell, m),
\eean
\smallskip
\noindent
is an isomorphism of algebras and of $\C[z_1,\dots,z_n]^S$-modules.
The maps in \eqref{mc Amu} and \eqref{mu-mu} define an isomorphism
between the $\B(\mu;m;\ell)$-module $\V^S[\nu]$
and the $\B(-\mu;\ell;m)$-module $\V^S[-\nu]$.
\end{lem}
\begin{proof}
The lemma follows from Lemma \ref{lem AK}.
\end{proof}
\subsection{Generic fibers of $\V^S[\nu]$}
Given $a=(a_1,\dots,a_n)\in \C^n$, denote by $I_a\subset \C[z_1,\dots, z_n]$ the ideal
generated by the polynomials $\si_s(z)-a_s$, $s=1,\dots,n$. Define
\bean
\label{ide}
I_a\V^S[\nu] \,:=\, \V^S \cap (V[\nu]\ox I_a).
\eean
Assume that $a$ is such that the polynomial $x^n + \sum_{s=1}^n(-1)^s a_sx^{n-s}$ has distinct
nonzero roots $b_1,\dots,b_n$.
\begin{lem}
[{\cite[Lemma 2.13]{MTV3}}]
\label{lem fib}
The quotient $\V^S[\nu]/ I_a\V^S[\nu]$ is a finite-dimensional complex vector space canonically isomorphic
to $V[\nu]$. Under this isomorphism
the Bethe algebra $\B(\mu; m;\ell)$ induces a commutative algebra of operators on
$V[\nu]$. That commutative algebra of operators is canonically isomorphic to the Bethe algebra
$\B(b_1,\dots,b_n; \mu; V[\nu])$ introduced in Section \ref{sec ctG}.
\end{lem}
\section{Functions on pairs of quasi-polynomials}
\label{sec 10}
\subsection{Space of pairs of quasi-polynomials}
Let $m,\ell,n$ be positive integers, $m+\ell=n$. Denote
$\nu=n-2m$, cf. Section \ref{sec V1}.
Let
\bea
\zeta\,\in \,\C - \frac 12\,\Z\,.
\eea
Let $\Om(\zeta,m,\ell)$ be the affine $n$-dimensional space
with coordinates $p_i$, $i=1,\dots,m$, $q_j$, $j=1,\dots,\ell$.
Introduce the generating functions
\bean
\label{pq}
p(x) &=& x^{-\zeta}\,(x^m + p_1x^{m-1} + \dots + p_m),
\\
\notag
q(x) &=& x^{\zeta}\,(x^\ell + q_1x^{\ell-1} + \dots + q_\ell).
\eean
We identify points $U$ of $ \Om(\zeta, m,\ell)$ with two-dimensional complex vector spaces
generated by quasi-polynomials
\bean
\label{pqU}
p(x,U) &=& x^{-\zeta}\,(x^m + p_1(U)x^{m-1} + \dots + p_m(U)),
\\
\notag
q(x,U) &=& x^{\zeta}\,(x^\ell + q_1(U)x^{\ell-1} + \dots + q_\ell(U)).
\eean
Denote by $\O(\zeta,m,\ell)$
the algebra of regular functions on $\Om(\zeta,m,\ell)$,
\bea
\O(\zeta,m,\ell) = \C[p_1,\dots,p_m, q_1,\dots,q_\ell].
\eea
Define the grading on
$\O(\zeta,m,\ell)$ by $\deg p_i=\deg q_i=i$ for
all $i$.
\begin{lem}
\label{lem grO}
The graded character of the algebra $\O(\zeta, m,\ell)$ equals
\bean
\ch_{\O(\zeta, m,\ell)} (\al) = \prod_{i=1}^m \frac{1}{1-\al^i} \cdot \prod_{j=1}^\ell \frac{1}{1-\al^j}.
\eean
\qed
\end{lem}
\subsection{Wronski map}
Let $p(x), q(x)$ be the generating functions in \eqref{pq}. We have
\bean
\label{Wpq}
\Wr_x(p,q) = \frac {2\zeta + \ell-m}x\,
\Big(x^n + \sum_{s=1}^n\,(-1)^s\,\Si_s \,x^{n-s}\Big),
\eean
where $\Si_1,\dots,\Si_n$ are elements of $\O(\zeta, m,\ell)$.
Notice that
$2\zeta + \ell-m = 2\zeta +\nu\,\notin\Z$ according to our assumptions.
The elements $\Si_1,\dots,\Si_n$ are homogeneous with
$\deg \Si_s = s$.
\smallskip
Define the {\it Wronski map}
\bea
\Wr\, :\, \Om(\zeta, m,\ell) \to \C^n, \quad
U \mapsto
(\Si_1(U), \dots, \Si_n(U)).
\eea
\begin{lem}
\label{lem pdeg}
For $\zeta\in \C-\frac 12\Z$, \,the Wronski map is a map of positive degree.
\end{lem}
\begin{proof}
The proof is a slight modification of the proof of \cite[Proposition 3.1]{MTV4}.
\end{proof}
Let $\O^S \subset \Oz$ be the subalgebra generated by $\Si_1,\dots,\Si_n$.
Let $\si_1,\dots,\si_n$ be coordinates on $\C^n$, which is the image of the Wronski
map. Introduce the grading on $\C[\si_1,\dots,\si_n]$ by $\deg \si_s=s$ for all $s$.
The Wronski map induces the isomorphism
$\C[\si_1,\dots,\si_n]\to \O^S$, $\si_s \mapsto \Si_s$, of graded algebras, see Lemma
\ref{lem pdeg}.
This isomorphism makes $\Oz$ a $\C[\si_1,\dots,\si_n]$-module.
\subsection{Another realization of $\O(\zeta, m,\ell)$}
Define the differential operator $\mc G$ by
\bean
\label{DO1}
\mc G = \frac{1}{\Wr_x(p,q)} \,
\on{rdet}\begin{bmatrix} p & p' & p''
\\
q & q' & q''
\\
1 & \der_x & \der^2_x \end{bmatrix},
\eean
where $\on{rdet}$ is the row determinant.
We have
\bean
\label{DO2}
\mc G = \der_x^2 + G_1(x) \der_x + G_2(x),
\eean
\medskip
\noindent
cf. \cite{MTV3}.
It is a differential operator in variable
$x$ and \ $ G_1(x)$, $G_2(x)$ are rational functions in $x$ with coefficients in $\O(\zeta, m,\ell)$.
\smallskip
Notice that
\bean
\label{G1}
G_1 \,=\,-\,\frac{(\Wr_x(p,q))'}{\Wr_x(p,q)}\,.
\eean
\vsk.2>
\begin{lem}
[{cf. {\cite[Section 2.7]{MTV3}}}]
\label{lem LGG}
The Laurent expansions of $G_1(x)$ and $G_2(x)$ at infinity have the form
\bean
\label{LaG}
G_i (x) = \sum_{j=i}^\infty G_{ij} x^{-j}, \qquad i=1,2,
\eean
where for
any $i,j$, the element $G_{ij}$ is a homogeneous element of $\O(\zeta, m,\ell)$
of degree $j-i$.
\end{lem}
\begin{proof}
The proof is by straightforward calculation.
\end{proof}
\begin{lem}
[{\cite[Lemma 3.4]{MTV3}}, {\cite[Lemma 4.3]{MTV2}}]
\label{O_thm}
Let $\zeta\in \C-\frac 12\Z$.
Then the elements $G_{ij}$, $i=1,2$, $j\geq i$,
generate the algebra $\O(\zeta, m,\ell)$.
\qed
\end{lem}
\subsection{Fibers of Wronski map}
Given $a=(a_1,\dots,a_n)\in \C^n$, denote by $J_a\subset \O(\zeta,m,\ell)$ the ideal
generated by the elements $\Si_s-a_s$, $s=1,\dots,n$. Define
\bean
\label{ide}
\O_a(\zeta,m,\ell) \,:=\, \O(\zeta,m,\ell)\big/ J_a .
\eean
\noindent
The algebra $\O_a(\zeta,m,\ell)$ is the algebra of functions on the fiber $\Wr^{-1}(a)$ of the Wronski map.
\smallskip
Let
\bean
\label{ab}
x^n+\sum_{s=1}^n\,(-1)^{n-s}\,a_s\,x^{n-s} = \prod_{s=1}^n(x-b_s)
\eean
for some $b_s\in\C$. Let $U=\langle p(x,U), q(x,U)\rangle$ be a point of
$\Om(\zeta,m,\ell)$ and
\bea
p(x,U)=x^{-\zeta}\prod_{i=1}^m(x-t^0_i),
\qquad
q(x,U)=x^{\zeta}\prod_{i=1}^\ell(x-\tilde t^{\,0}_i),
\eea
for some $t_i^0, \tilde t^{\,0}_i\in\C$.
\begin{lem}
\label{lem gen f}
Let $\zeta\in \C-\frac 12\Z$. Then there exists
a Zariski open subset $X\subset \C^n$ such that for any $a\in X$
all the numbers $b_1,\dots,b_n$ are nonzero and distinct.
Moreover, for any point $U\in\Wr^{-1}(a)$ all the numbers
$b_1,\dots,b_n$, $t^0_1,\dots,t^0_m$, $\tilde t^{\,0}_1,\dots,\tilde t^{\,0}_\ell$ are distinct.
\qed
\end{lem}
\begin{lem}
\label{cor gen f}
If $a\in X$ and $U\in \Wr^{-1}(a)$, then
$(t^0_1,\dots,t^0_m; b_1,\dots,b_n; 2\zeta + \nu/2)$ is a solution
of the Bethe ansatz equations \eqref{tr.BAE} assigned to the triple
$(b_1,\dots,b_n; 2\zeta + \nu/2; V[\nu])$, and
$(\tilde t^{\,0}_1,\dots,\tilde t^{\,0}_\ell; b_1,\dots,b_n; - 2\zeta - \nu/2)$ is a solution
of the Bethe ansatz equations \eqref{tr.BAE} assigned to the triple
$(b_1,\dots,b_n; -2\zeta - \nu/2; V[-\nu])$.
\end{lem}
\begin{proof}
We have
\bea
\Wr_x(p(x,U),q(x,U))= \frac {2\zeta + \ell-m}x\,
\Big(x^n + \sum_{s=1}^n\,(-1)^s\,a_s \,x^{n-s}\Big).
\eea
Now the lemma follows from Lemmas \ref{lem gen f}, \ref{lem WBA}.
\end{proof}
For $U\in \Om(\zeta, m,\ell)$ denote by $\mc G_U$ the monic differential operator
with kernel $U$,
\bean
\label{GU}
\mc G_U = \der_x^2 + F_{1;U}(x)\der_x + F_{2,U}(x).
\eean
The operator $\mc G_U$ is
obtained from the operator $\mc G$ by evaluating the generating functions $p,q$
at the point $U$.
\begin{lem}
\label{lem e=g}
Let $a\in X$ and $U\in \Wr^{-1}(a)$. Let $(t^0; b; 2\zeta + \nu/2)$ be the solution of the Bethe ansatz equations
described in Lemma \ref{cor gen f}. Let $\mc E_{t^0;\, b;\, 2\zeta + \nu/2}^c$
be the differential operator defined in \eqref{conj Dt}.
Then
\bea
\mc E_{t^0;\, b; \,2\zeta + \nu/2}^c = \mc G_U\,.
\eea
\end{lem}
\begin{proof}
The lemma follows from Lemma \ref{kerDconj}.
\end{proof}
\section{Isomorphisms}
\label{sec 11}
In Section \ref{sec 9} we introduced the $\B(\mu, m,\ell)$-module $\V^S[\nu]$, where
$\mu\in \C$, \ $\nu = n-2m$, $m+\ell=n$.
In Section \ref{sec 10} we discussed the properties of
the algebra $\Oz$ under the assumption that
$\zeta\in \C-\frac 12\Z$.
\smallskip
We consider $\Oz$ as the $\Oz$-module with action defined by
multiplication.
\smallskip
In this section we construct an isomorphism between the $\B(\mu, m,\ell)$-module $\V^S[\nu]$
and the $\Oz$-module $\Oz$ under the assumption that
\bean
\label{ass}
\zeta = \frac\mu 2 - \frac \nu 4\quad \on{and} \quad \zeta\in \C-\frac 12\Z\,,
\eean
where the last inclusion can be reformulated as
\bean
\label{assm}
\mu\notin\ \frac n2 + \Z\,,
\eean
cf. the
assumptions on $\mu$ and $\zeta$ in Theorems \ref{thm isom mu}, \ref{quasipthm},
Lemmas \ref{lem WBA}, \ref{lem yty} and Section \ref{sec 10}.
\smallskip
The construction of the isomorphism is similar to the constructions in \cite{MTV3, MTV2}.
\subsection{Isomorphism of algebras}
Consider the map
\bea
\tau : \Oz \to \B(\mu, m,\ell), \quad G_{ij}\mapsto F_{ij}.
\eea
${}$
\begin{thm}
[cf. {\cite[Theorem 5.3]{MTV3}}, {\cite[Theorem 6.3]{MTV2}}]
\label{thm isoa}
Under the assumptions \eqref{ass}
the map $\tau$ is a well-defined
isomorphism of graded algebras.
\end{thm}
\begin{proof}
Let a polynomial $R(G_{ij})$ in generators $G_{ij}$ be equal to zero in
$\Oz$. Let us prove that the
corresponding polynomial $R(F_{ij})$ is equal to zero in $\Bm$.
Indeed, $R(F_{ij})$ is a polynomial
in $z_1,\dots,z_n$ with values in $\End(V[\nu])$. By Lemmas \ref{lem gen f} - \ref{lem e=g},
\ref{basis}, for generic $b_1,\dots,b_n$ the
value of the polynomial $R(F_{ij})$ at
$z_1 = b_1,\dots, z_n=b_n$ equals zero. Hence, the polynomial $R(F_{ij})$ equals zero identically and
the map $\tau$ is a well-defined defined homomorphism of algebras.
The elements $G_{ij}$, $F_{ij}$ are of the same degree. Hence $\tau$ is a graded homomorphism.
Let a polynomial $R(G_{ij})$ in generators $G_{ij}$ be a nonzero element of
$\Oz$. Then the value of
$R(G_{ij})$ at a generic point $U \in \Om(\zeta, m,\ell)$ is not equal to zero by Lemma \ref{lem e=g}.
Then the polynomial $R(F_{ij})$ is not identically equal to zero. Therefore, the map $\tau$ is injective.
Since the elements $F_{ij}$ generate the algebra $\Bm$, the map $\tau$ is surjective.
\end{proof}
The algebra $\C[z_1,\dots,z_n]^S$
is embedded into the algebra $\Bm$ as the subalgebra of operators
of multiplication by symmetric polynomials.
The
algebra $\C[z_1,\dots,z_n]^S$ is embedded into the algebra $\Oz$, the elementary symmetric polynomials
$\si_1(z),\dots,\si_n(z)$ being mapped to the elements
$\Si_1,\dots,\Si_n$. These
embeddings give the algebras $\Bm$ and $\Oz$ the structure of $\C[z_1,\dots,z_n]^S$-modules.
\begin{lem}
[{\cite[Lemma 6.4]{MTV3}}]
\label{lem iso m}
Under assumptions \eqref{ass} the map $\tau$ is an isomorphism of $\C[z_1,\dots,z_n]^S$-modules.
\end{lem}
\begin{proof}
The lemma follows from formulas \eqref{Wr.eqn2}, \eqref{G1}.
\end{proof}
\subsection{Isomorphism of modules}
\label{sec imo}
The subspace of $\V^S[\nu]$ of all elements of degree $0$ is of dimension one and is generated by the vector
\bea
v_+ = \sum_{I=(I_1,I_2),\, |I_1|=m, |I_2|=\ell} v_I\,.
\eea
The subspace of $\Oz$ of all elements of degree $0$ is of dimension one and is generated by the element $1$.
Define the $\C[z_1,\dots,z_n]^S$-linear map
\bean
\label{ups}
\phi : \Oz \to \V^S[\nu], \quad G\mapsto \tau(G)\,v_+\,.
\eean
\medskip
\begin{thm}
[{\cite[Theorem 6.7]{MTV3}}]
\label{thm ups}
Under assumptions \eqref{ass}, the map $\phi$ is a graded isomorphism of graded $\C[z_1,\dots,z_n]^S$-modules.
The maps $\tau$ and $\phi$ intertwine the action of multiplication
operators on $\Oz$ and the action of the Bethe algebra $\Bm$ on $\V^S[\nu]$, that is, for any
$f,g \in\Oz$, we have
\bean
\label{inter}
\phi(fg) = \tau(f)\,\phi(g).
\eean
${}$
\noindent
In other words, the maps $\tau$ and $\phi$ define an isomorphism between the $\Oz$-module $\Oz$ and the
$\Bm$-module $\V^S[\nu]$.
\end{thm}
\begin{proof}
First we show that the map $\phi$ is injective. Indeed,
the algebra $\Oz$ is a free polynomial algebra containing the subalgebra $\C[z_1,\dots,z_n]^S$.
The quotient algebra $\Oz/\C[z_1,\dots,z_n]^S$ is finite-dimensional by Lemma \ref{lem pdeg}.
The kernel of $\phi$ is a proper ideal $\mc I$ in $\Oz$. Then $\tau(\mc I)$ is an ideal in $\Bm$.
Any proper ideal in $\Bm$ has zero intersection with $\C[z_1,\dots,z_n]^S$. Hence
$\mc I$ has zero intersection with $\C[z_1,\dots,z_n]^S$ and therefore is
the zero ideal. The injectivity is proved.
The map $\phi$ is graded.
The graded characters of $\V^S[\nu]$ and $\Oz$ are equal by Lemmas \ref{lem frV} and \ref{lem grO}.
Hence $\phi$ is an isomorphism.
\end{proof}
\begin{cor}
\label{lem isofi}
Assume that $a=(a_1,\dots,a_n)\in \C^n$ is such that the polynomial
$x^n + \sum_{s=1}^n(-1)^s a_sx^{n-s}$ has distinct roots
$b_1,\dots,b_n$. Then under assumptions \eqref{ass}, the isomorphisms
$\tau$, $\phi$ induce the isomorphism of the $\B(b_1,\dots,b_n;\mu; V[\nu])$-module
$V[\nu]$ and the $\O_a(\zeta;m,\ell)$-module $\O_a(\zeta;m,\ell)$, where
$\O_a(\zeta;m,\ell)$ is the algebra of functions on the fiber $\Wr^{-1}(a)$
of the Wronski map, see \eqref{ide}.
\end{cor}
\begin{proof}
The corollary follows from Lemma \ref{lem fib} and Theorems \ref{thm isoa}, \ref{thm ups}.
\end{proof}
\begin{cor}
\label{cor deg}
The degree of the Wronski map $\Wr$ equals $\dim V[\nu] = \binom{n}{m}$.
\qed
\end{cor}
\subsection{Dynamical Bethe algebra and quasi-polynomials}
The space $V = V_1^{\ox n}$ has a nontrivial zero weight subspace if $n$ is even.
Let $n=2m$. For the zero weight subspace $V[0]$,
we have $\nu=0, \,m=\ell$, and
assumptions \eqref{ass} take the form
\bean
\label{asss}
\zeta = \frac\mu 2 \quad \on{and} \quad
\mu\notin\ \Z\,.
\eean
\smallskip
Let $a=(a_1,\dots,a_n)\in \C^n$ be such that the polynomial
$x^n + \sum_{s=1}^n(-1)^s a_sx^{n-s}$ has distinct nonzero roots
$b_1,\dots,b_n$.
Consider the functional space $E[\mu]$ as the module
over the dynamical Bethe algebra $\B(b_1,\dots,b_n;E[\mu])$, see
Section \ref{sec BE}. Consider the
$\O_a(\zeta;m,m)$-module $\O_a(\zeta;m,m)$, where
$\O_a(\zeta;m,m)$ is the algebra of functions on the fiber $\Wr^{-1}(a)$
of the Wronski map.
\begin{cor}
\label{lem isofi}
Under assumptions \eqref{asss}, the isomorphisms
$\tau$, $\phi$ and the isomorphism $V[0]\to E[\mu]$ in Corollary \ref{iso B}
induce the isomorphism of the $\B(b_1,\dots,b_n;E[\mu])$-module $E[\mu]$ and
the $\O_a(\zeta;m,m)$-module $\O_a(\zeta;m,m)$.
\qed
\end{cor}
\subsection{Weyl involution and transposition of quasi-polynomials}
Consider the
\\
$\Bm$-module $\V^S[\nu]$ and $\B(-\mu, \ell,m)$-module
$\V^S[-\nu]$. Consider the $\Oz$-module $\Oz$ and
$\mc O(-\zeta, \ell, m)$-module $\mc O(-\zeta, \ell, m)$.
Under assumptions \eqref{ass}, consider the diagram,
\bean
\label{comd}
\begin{tikzcd}
(\Bm,\, \V^S[\nu]) \arrow[r, ] \arrow[d, ] & (\B(-\mu, \ell,m),\,\V^S[-\nu]) \arrow[d, ]
\\
(\Oz,\, \Oz) \arrow[r, ] & (\mc O(-\zeta, \ell, m), \,\mc O(-\zeta, \ell, m)
)
\end{tikzcd}\ \ .
\eean
Here $\V^S[\nu] \to \Oz$ and
$\V^S[-\nu] \to \mc O(-\zeta, \ell, m)$ are the module isomorphisms of Theorem \ref{thm ups}.
The map $ \V^S[\nu] \to \V^S[-\nu]$ is the module isomorphism of Lemma \ref{lem BB iso}.
The map $\Oz \to \mc O(-\zeta, \ell, m)$ is the module isomorphism defined by the transposition
of the quasi-polynomials $p,q$.
\begin{thm}
\label{thm tra}
The diagram \eqref{comd} is commutative.
\end{thm}
\begin{proof}
The theorem follows from Lemma \ref{e=e}.
\end{proof}
The commutativity of diagram \eqref{comd} implies the commutativity of the
diagram of fibers over a generic point $a\in\C^n$,
\bean
\label{cofd}
\begin{tikzcd}
(\B(b_1,\dots,b_n;\mu, V[\nu]), \, V[\nu]) \arrow[r, ] \arrow[d, ] &
(\B(b_1,\dots,b_n;-\mu, V[-\nu]), \, V[-\nu]) \arrow[d, ]
\\
(\O_a(\zeta, m,\ell),\,\O_a(\zeta, m,\ell)) \arrow[r, ] & (\mc O_a(-\zeta, \ell, m),\,\O_a(-\zeta,\ell,m))
\end{tikzcd} \ \ ,
\eean
see notations in Section \ref{sec imo}.
Combining commutative diagrams \eqref{cofd} and \eqref{comD} we obtain the commutative
diagram
\bean
\label{cofd}
\begin{tikzcd}
(\B(z; E[\mu]),\, E[\mu]) \arrow[r, ]\arrow[d,] & (\B(z; E[-\mu]),\, E[-\mu])\arrow[d, ]
\\
(\O_a(\zeta, m,m),\,\O_a(\zeta, m,m)) \arrow[r, ] & (\mc O_a(-\zeta, m, m),\,\O_a(\zeta, m,m))
\end{tikzcd} \ \ ,
\eean
which holds if $n=2m$ is even and $\mu\notin\Z$. The diagram
identifies the Weyl involution $E[\mu]\to E[-\mu]$ in the functional spaces
of eigenfunctions of the KZB operator $H_0$
with the isomorphism $\O_a(\zeta, m,m)\to \O_a(-\zeta, m,m)$
induced by the transposition of quasi-polynomials.
\bigskip
|
2,869,038,154,404 | arxiv | \section{Introduction}
Variational estimation of the partition function has been one of the
standard technic in statistical mechanics. For a
two-dimensional (2D) classical lattice model defined by a transfer matrix
$T$, the variational partition function per row is written as
\begin{equation}
\lambda = \frac{\langle V | \, T \, | V \rangle}{\langle V | V \rangle} \, ,
\end{equation}
where $| V \rangle$ represents the trial state and $\langle V |$ is its
conjugate; $\lambda$ is maximized when $\langle V |$ and $| V \rangle$
coincide with the left and the right eigenvectors of $T$, respectively. In
1941 Klamers and Wannier\cite{Krm,Kik} investigated the Ising model,
assuming that $| V \rangle$ is well approximated by a product of
matrices
\begin{equation}
V(\ldots, i, j, k, l,\ldots)
= \ldots F^{ij}_{~} F^{jk}_{~} F^{kl}_{~} \ldots \, ,
\end{equation}
where $i, j, k, l$, etc., are the Ising variables, and $F^{ij}_{~}$ is a
symmetric 2 by 2 matrix. The approximation is more accurate than
both the mean-field and the Bethe
approximations.~\cite{Bethe} Baxter improved the trial state by
introducing additional degrees of freedom.~\cite{Bx1,Bx2,Bx3} His
variational state is defined as
\begin{equation}
V(\ldots, i, j, k, l,\ldots)
= \sum_{\ldots, a, b, c, d, \ldots}
\ldots F^{ij}_{ab} F^{jk}_{bc} F^{kl}_{cd} \ldots \, ,
\end{equation}
where $a, b, c, d$, etc., are $2^n_{~}$-state group spin variables. The
tensor $F^{ij}_{ab}$ contains $4 \cdot 2^{2n}_{~}$ elements, and
therefore it is not easy to optimize $F^{ij}_{ab}$ --- adjust the elements
--- so that $\lambda$ is maximized. He solved the optimization problem
by introducing the corner transfer matrix (CTM), and by solving
self-consistent equations for the tensors.~\cite{Bx3} In 1985
Nightingale and Bl\"ote used Baxter's tensor product as a initial state in
the projector Monte Carlo simulation for the Haldane system.~\cite{NB}
Baxter suggested an outline of generalizing his CTM method to 3D
systems,~\cite{Bx3} however, the project has not been completed.
Similar variational formulations have been applied to one-dimensional
(1D) quantum systems, especially for $S = 1$ spin chains. The variational
ground state $| \Psi \rangle$ is given by a modified tensor product
\begin{equation}
\Psi (\ldots, i, j, k, l, \ldots) = \sum_{\ldots, a, b, c, d, e, \ldots}
\ldots A^i_{ab} A^j_{bc} A^k_{cd} A^l_{de} \ldots \, ,
\end{equation}
where the subscripts $a, b, c, d, e,$ etc., are $m$-state group spin
variables. Affleck, Lieb, Kennedy, and Tasaki (AKLT) showed that the
ground-state of a bilinear-biquadratic $S = 1$ spin chain is exactly
expressed by the tensor product with $m = 2$.~\cite{AKLT} The
variational formulation has been generalized by Fannes et. al.
for the arbitrary large $m$.~\cite{Fannes1,Fannes2,Fannes3} Now such
ground state is called `finitely correlated state'~\cite{Fannes2} or
`matrix product state'.~\cite{Zitt1} Quite recently Niggemann et.
al.~\cite{Zitt2} showed that the ground state of a 2D quantum systems
can be exactly written in terms of a two-dimensional tensor product.
Although $| \Psi \rangle$ in Eq.(1.3) does not look like $| V \rangle$ in
Eq.(1.4), they are essentially the same. We can transform $| V \rangle$
into $| \Psi \rangle$ by obtaining $A^i_{ab}$ from $F^{ij}_{ab}$ through a
kind of duality transformation;~\cite{Takas} the opposite is also possible.
The application of both $| V \rangle$ in Eq.(1.3) and $| \Psi \rangle$ in
Eq.(1.4) are limited to translationally invariant (or homogeneous)
systems. In 1992 White established a more flexible numerical variational
method from the view point of the real-space renormalization group
(RG).~\cite{Wh1,Wh2} Since his numerical algorithm is written in terms
of the density matrix, the algorithm is called `density matrix
renormalization group' (DMRG). White's variational state is written in a
position dependent tensor product~\cite{Ostlund1,Ostlund2}
\begin{equation}
\Phi (\ldots, i, j, k, l, \ldots) = \sum_{\ldots, a, b, c, d, e, \ldots}
\ldots A^i_{ab} B^j_{bc} C^k_{cd} D^l_{de} \ldots \, ,
\end{equation}
where $A^i_{ab}$ is not always equal to $B^i_{ab}$, etc. This
inhomogeneous property in $| \Phi \rangle$ makes DMRG possible to treat
open boundary systems~\cite{Huse} and random systems.~\cite{Hida} Now
the DMRG is widely used for both quantum~\cite{Escorial} and
classical~\cite{Ni,Carlon1,Carlon2} problems. Quite recently Dukelsky et.
al. analyzed the correspondence (and a small discrepancy) between DMRG
and the variational formula in
Eq.(1.4).~\cite{RecGerman0,RecGerman1,RecGerman2}
The decomposition of the trial state into the tensor product tells us how
to treat lattice models when we try to obtain the partition function. The
essential point is to break-up the system into smaller pieces --- like the
local tensor $F^{ij}_{ab}$ in Eq.(1.3) or $A^i_{ab}$ in Eq.(1.4) --- and
reconstruct the original system by multiplying them. According to
this idea, the authors combine DMRG and Baxter's method of CTM, and
proposed the corner transfer matrix renromalization group (CTMRG)
method.~\cite{CTMRG1,CTMRG2} It has been shown that CTMRG is
efficient for determinations of critical indices~\cite{CTMRG2} or latent
heats.~\cite{q5}
The purpose of this paper is to generalize the algorithm of CTMRG to
three-dimensional (3D) classical systems. We focus on the RG algorithm
rather than its practical use. In the next section, we briefly review the
outline of CTMRG. The key point is that the RG transformation is obtained
through the diagonalization of the density matrix. In \S 3 we define the
density matrix for a 3D vertex model, and in \S 4 we explain the way to
obtain the RG transformation. The numerical algorithm is shown in \S 5.
A trial application with $m = 2$ is shown for the 3D Ising Model.
Conclusions are summarized in \S 6.
\section{Formulation in Two Dimension}
\begin{figure}
\figureheight{6cm}
\caption{
Square cluster of a symmetric vertex model; the shown system is the
example with linear dimension $2N = 6$. The cross marks $\times$ show
the boundary spins.
}
\label{fig:1}
\end{figure}
The aim of CTMRG is to obtain variational partition
functions of 2D classical models. Let us consider a square cluster of
a 16-vertex model (Fig.1) as an example of 2D systems. We
impose the fixed boundary condition, where the boundary spins shown by
the cross marks point the same direction. In order to
simplify the following discussion, we assign a symmetric Boltzmann
weight $W^{~}_{ijkl} = W^{~}_{jkli} = W^{~}_{ilkj}$ to each
vertex,~\cite{Simple} where $i,j,k$ and $l$ denote two-state spins (=
Ising spins or arrows) on the bonds. (Fig.2(a))
\begin{figure}
\figureheight{6cm}
\caption{
Boltzmann weight and transfer matrices. The dots represent
spin variables inside the square cluster shown in Fig.1, and the cross
marks represent the boundary spins.
(a) Vertex weight $W^{~}_{ijkl}$.
(b) Half-row transfer matrix $P^{~i}_{ab}$.
(c) Corner transfer matrix $C^{~}_{ab}$.
}
\label{fig:2}
\end{figure}
We employ two kinds of transfer matrices in order to express the
partition function $Z_{2N}^{~}$ of the square cluster with linear
dimension $2N$. One is the half-row transfer matrix (HRTM). Figure 2(b)
shows the HRTM $P^{~i}_{ab}$ with length $N = 3$, where the subscripts
$a = (a_1^{~}, a_2^{~}, \ldots, a_N^{~})$ and $b = (b_1^{~}, b_2^{~}, \ldots,
b_N^{~})$, respectively, represent row spins --- in-line spins --- on the
left and the right sides of the HRTM. We think of $P^{~i}_{ab}$ as a matrix
labeled by the superscript $i$. The other is the Baxter's
corner transfer matrix (CTM),~\cite{Bx1,Bx2,Bx3} that represents
Boltzmann weight of a quadrant of the square. Figure 2(c) shows the CTM
$C^{~}_{ab}$ with linear dimension $N = 3$. The partition function
$Z_{2N}^{~}$ is then expressed as
\begin{equation}
Z_{2N}^{~} = {\rm Tr} \, \rho = {\rm Tr} \, C^4_{~} \, ,
\end{equation}
where $\rho^{~}_{ab} \equiv \left( C^4_{~} \right)^{~}_{ab}$ is the density
matrix. From the symmetry of the vertex weight $W^{~}_{ijkl}$, the
matrices $P^{~i}_{ab}$, $C^{~}_{ab}$, and $\rho^{~}_{ab}$ are invariant
under the permutation of subscripts.
\begin{figure}
\figureheight{6cm}
\caption{
Extensions of (a) the HRTM (Eq.(2.2)), and (b) the CTM. (Eq.(2.3))
}
\label{fig:3}
\end{figure}
There are recursive relations between $W$, $P$, and $C$.
We can increase the length of HRTM by joining a vertex
\begin{equation}
{P}^{~i}_{{\bar a}{\bar b}} =
\sum_k W^{~}_{ijkl} P^{~k}_{ab} \, ,
\end{equation}
where the extended row-spins are defined as
${\bar a} = (a, l) = (a_1^{~}, a_2^{~}, \ldots, a_N^{~}, l)$ and
${\bar b} = (b, j) = (b_1^{~}, b_2^{~}, \ldots, b_N^{~}, j)$. (Fig.3(a))
In the same manner, the area of CTM can be extended by joining two HRTMs
and a vertex to the CTM
\begin{equation}
C^{~}_{{\bar a}{\bar b}} =
\sum_{cd~kj} W^{~}_{ijkl} P^{~j}_{db} P^{~k}_{ac} C^{~}_{cd} \, ,
\end{equation}
where the extended row-spins ${\bar a}$ and ${\bar b}$ are defined as
${\bar a} = (a, l) = (a_1^{~}, a_2^{~}, \ldots, a_N^{~}, l)$ and
${\bar b} = (b, i) = (b_1^{~}, b_2^{~}, \ldots, a_N^{~}, i)$. (Fig.3(b))
In this way, we can construct HRTM and CTM with arbitrary size $N$ by
repeating the extension Eqs.(2.2) and (2.3).
It should be noted that the matrix dimension of both $C^{~}_{ab}$ and
$P^{~i}_{ab}$ increases very rapidly with their linear size $N$. The fact
prevents us to store the matrix elements of $C^{~}_{ab}$ and $P^{~i}_{ab}$
when we numerically calculate the partition function $Z_{2N}^{~}$.
This difficulty can be overcomed by compressing CTM and HRTM into
smaller matrices via the density matrix algorithm,~\cite{Wh1,Wh2} where
the RG transformation is obtained through the diagonalization of
the density matrix $\rho^{~}_{ab} \equiv {(C^4_{~})}_{ab}^{~}$.
Let us consider the eigenvalue equation for the density matrix
\begin{equation}
\sum_{b} \rho^{~}_{ab} A^{\alpha}_{b} =
\lambda_{\alpha}^{~} A^{\alpha}_{a} \, ,
\end{equation}
where $\lambda_{\alpha}^{~}$ is the eigenvalue in decreasing order
$\lambda_{1} \geq \lambda_{2} \geq \ldots \geq 0$, and
${\bf A}^{\alpha}_{~} = ( A^{\alpha}_{1}, A^{\alpha}_{2}, \ldots )^T_{~}$ is
the corresponding eigenvector that satisfies the orthogonal relation
\begin{equation}
\left( {\bf A}^{\alpha}_{~}, \, {\bf A}^{\beta}_{~} \right) =
\sum_{a} A^{\alpha}_{a} A^{\beta}_{a} = \delta^{\alpha \beta}_{~} \, .
\end{equation}
It has been known that $\lambda_{\alpha}$ rapidly approaches to zero
with respect to $\alpha$,~\cite{Bx3,Wh2} and that we can neglect tiny
eigenvalues from the view point of numerical calculation. We consider only
$m$ numbers of dominant eigenvalues in the following; the greek indices
run from $1$ to $m$. The number $m$ is determined so that
$\sum_{\alpha=1}^m \lambda_{\alpha}$ becomes a good lower bound for
the partition function $Z_{2N}^{~} = {\rm Tr} \, \rho$.
Equation (2.4) shows that for a sufficiently large $m$ the
density matrix $\rho$ can be well approximated as
\begin{equation}
\rho^{~}_{ab} \sim
\sum_{\alpha=1}^m A^{\alpha}_{a} A^{\alpha}_{b}
\lambda_{\alpha} \, .
\end{equation}
The above approximation shows that the $m$-dimensional diagonal matrix
\begin{equation}
{\tilde \rho}^{~}_{\alpha \beta} =
\sum^{~}_{a b} A^{\alpha}_{a} A^{\beta}_b {\rho}^{~}_{a b} =
\delta_{\alpha \beta} \lambda_{\beta}
\end{equation}
contains the relevant information of $\rho$; we can regard ${\tilde \rho}$
as the renormalized density matrix. This is the heart of the density
matrix algorithm: {\it the RG transformation is defined by
the matrix $A = \left( {\bf A}^{1}_{~}, {\bf A}^{2}_{~}, \ldots, {\bf
A}^{m}_{~} \right)$ which is obtained through the diagonalization of the
density matrix.}
As we have applied the RG transformation to the density matrix
$\rho$, we can renormalize the CTM by applying the matrix $A$ as
\begin{equation}
{\tilde C}^{~}_{\alpha \beta} =
\sum^{~}_{a b} A^{\alpha}_{a} A^{\beta}_b C^{~}_{a b} \, .
\end{equation}
Since $C^{~}_{a b}$ and $\rho^{~}_{a b}$ have the common
eigenvectors --- remember that $\rho = C^4_{~}$ --- the renormalized
CTM is an $m$-dimensional diagonal matrix
\begin{equation}
{\tilde C} = {\rm diag}(\omega_1, \omega_2, \ldots, \omega_m) \, ,
\end{equation}
where $\omega_{\alpha}$ is the eigenvalue of the CTM that satisfies
$\lambda_{\alpha} = \omega_{\alpha}^4$.
In the same manner, we obtain the renormalized HRTM
\begin{equation}
{\tilde P}^{~i}_{\alpha \beta} = \sum^{~}_{a b}
A^{\alpha}_{a} A^{\beta}_{b} P^{~i}_{ab} \, .
\end{equation}
In this case ${\tilde P}^{~i}_{\alpha \beta}$ is not diagonal
with respect to $\alpha$ and $\beta$; {\it the RG transformation is not
always diagonalization.}
We can extend the linear size of CTM and HRTM using Eqs.(2.2) and (2.3),
and we can reduce their matrix dimension by the RG transformation in
Eqs.(2.7) and (2.8). By repeating the extension and the renormalization, we
can obtain the renormalized density matrix ${\tilde \rho}$ and the
approximate partition function ${\tilde Z}_{2N}^{~} = {\rm Tr} \, {\tilde
\rho}$ for arbitrary system size $N$. This is the outline of the CTMRG.
\section{Density matrix in Three Dimension}
In order to generalize the density matrix algorithm to 3D systems, we
first construct the density matrix in three dimension. As an example of 3D
systems, we consider a 64-vertex model, that is defined by a
Boltzmann weight $W^{~}_{ijklmn}$. (Fig.4(a)) In order to simplify the
following discussion, we consider the case where $W^{~}_{ijklmn}$ is
invariant under the permutations of the two-state spins $i, j, k, l, m$ and
$n$.~\cite{enough} As we have considered a square cluster in
two-dimension, (Fig.1) we consider a cube with linear dimension $2N$,
where the boundary spins (on the surface of the cube) are fixed to the
same direction. According to the variational formulation shown in \S 1,
we first decompose the cube into several parts shown in Fig.4(b)-(d).
\begin{figure}
\figureheight{6cm}
\caption{
Parts of the cubic cluster with linear dimension $2N$:
(a) Vertex weight $W^{~}_{ijklmn}$.
(b) The tensor $P^{~i}_{abcd}$.
(c) The tensor $S^{XY}_{ab}$.
(d) Corner Tensor $C_{~}^{XYZ}$.
The cross marks $\times$ represent the boundary spins.
}
\label{fig:4}
\end{figure}
The tensor $P^{~i}_{abcd}$ shown in Fig.4(b) is a kind of
three-dimensional HRTM. The superscript $i$ represents the two-state
spin at the top. The spin at the bottom is fixed, because it is at the
boundary of the system. The subscript $a$ represents the group of in-line
spins $a = (a_1^{~}, a_2^{~}, \ldots, a_N^{~})$; $b$, $c$, and $d$ are
defined in the same way. From the symmetry of the vertex weight,
$P^{~i}_{abcd}$ is invariant under the permutations of subscripts.
The tensor $S^{XY}_{a \, b}$ shown in Fig.4(c) does not have its 2D
analogue; it is an array of vertices.
The subscripts $a$ and $b$ represent in-line spins;
other two sides are the boundary of the cube. The
superscript $X$ represents an $N$ by $N$ array of spins
on the square surface
\begin{equation}
X = \left(
\begin{array}{cccc}
x_{11}^{~} & x_{12}^{~} & \ldots & x_{1N}^{~} \\
x_{21}^{~} & x_{22}^{~} & ~ & \vdots \\
\vdots & ~ & \ddots & \vdots \\
x_{N1}^{~} & \cdots & \cdots & x_{NN}^{~}
\end{array}
\right) \, ,
\end{equation}
where $x_{NN}^{~}$ is closest to the center of the cube, and $Y$ is the
spin array on the other surface; $x_{ij}^{~}$ and $y_{ij}^{~}$ are
connected to the same vertex at the position $\{ij\}$. The tensor is
invariant under the permutation of $X$ and $Y$ ($S^{XY}_{a \, b} = S^{YX}_{a
\, b}$), but is not invariant for $a$ and $b$ ($S^{XY}_{a \, b} \neq S^{XY}_{b
\, a}$); $S^{XY}_{a \, b}$ is equal to $S^{ZW}_{b \, a}$ where $Z = X^T_{~}$
and $W = Y^T_{~}$.
Figure 4(d) shows the corner tensor $C^{XYZ}_{~}$, which is a kind
of three-dimensional CTM.~\cite{Bx3} The superscripts are defined in the
same way as Eq.(3.1). (The boundary spins on the surfaces of the original
cube are fixed.) It should be noted that $C^{XYZ}_{~}$ is not
equal to $C^{WYZ}_{~}$ where $W = X^T_{~}$, because each surface has
its own orientation.
\begin{figure}
\figureheight{6cm}
\caption{
Extensions of (a) $P$ in Eq.(3.2), (b) $S$ in Eq.(3.3), and (c) $C$
in Eq.(3.5).
}
\label{fig:5}
\end{figure}
Following the formulation in two-dimension, let us consider the
size extension of $P$, $S$, and $C$. The length of $P$ can be
increased by joining a vertex (Fig.5(a))
\begin{equation}
P^{~i}_{{\bar a}{\bar b}{\bar c}{\bar d}} =
\sum_n W^{~}_{ijklmn} \, P^{~n}_{abcd} \, ,
\end{equation}
where the extended in-line spins are defined as
${\bar a} = (a, j) = (a_1^{~}, a_2^{~}, \ldots, a_N^{~}, j)$,
${\bar b} = (b, k) = (b_1^{~}, b_2^{~}, \ldots, b_N^{~}, k)$,
${\bar c} = (c, l) = (c_1^{~}, c_2^{~}, \ldots, c_N^{~}, l)$, and
${\bar d} = (d, m) = (d_1^{~}, d_2^{~}, \ldots, d_N^{~}, m)$.
The linear size of $S$ can be increased by joining two $P$
and a vertex (Fig.5(b))
\begin{equation}
S^{{\bar X}{\bar Y}}_{{\bar a} \, {\bar b}} =
\sum_{ln} \sum_{ce} W^{~}_{ijklmn} \,
P^{~n}_{abcd} \, P^{~l}_{efgh} \, S^{XY}_{c \, e}
\end{equation}
where the extended in-line spins are defined as
${\bar a} = (a, j) = (a_1^{~}, a_2^{~}, \ldots, a_N^{~}, j)$, and
${\bar b} = (g, i) = (g_1^{~}, g_2^{~}, \ldots, g_N^{~}, i)$.
The extended spin array ${\bar X}$ is defined as
\begin{equation}
{\bar X} = \left(
\begin{array}{ccc|c}
x_{11}^{~} & \ldots & x_{1N}^{~} & f_{1}^{~} \\
\vdots & \ddots & \vdots & \vdots \\
x_{N1}^{~} & \ldots & x_{NN}^{~} & f_{N}^{~} \\ \cline{1-4}
b_{1}^{~} & \ldots & b_{N}^{~} & k
\end{array}
\right) \, ,
\end{equation}
and ${\bar Y}$ is defined in the same way from the indices $m$, $d$, $h$
and $Y$. The linear size of the corner tensor $C$ can be increased by
joining three $P$, three $S$, and a vertex (Fig.5(c))
\begin{equation}
C^{{\bar X}{\bar Y}{\bar Z}}_{~} =
\sum_{lmn} \sum_{cd \, eh \, qr} \sum_{TUV}
W^{~}_{ijklmn} \, P^{~n}_{abcd} \, P^{~l}_{efgh} \, P^{~m}_{opqr}
S^{XT}_{q \, d} \, S^{YU}_{c \, e} \, S^{ZV}_{h \, r} \,
C^{TUV}_{~} \, .
\end{equation}
The extended superscripts ${\bar X}$, ${\bar Y}$, and ${\bar Z}$ are
defined in the same way as Eq.(3.4). In equation (3.5) we have to take
care of the orientation of the surfaces $T$, $U$, and $V$.
\begin{figure}
\figureheight{6cm}
\caption{
The density matrix $Q$ in Eq.(3.8) is obtained by joining two corner
tensors (Eq.(3.6)) to obtain the tensor $D$, and then joining
four of them.(Eq.(3.8))
}
\label{fig:6}
\end{figure}
Now we can express the partition function ${\Xi}_{2N}^{~}$ of the cube
with linear size $2N$ using the corner tensors. We first join two
corner tensors (Fig.5) to obtain a symmetric matrix
\begin{equation}
D^{(XU)(ZV)}_{~} = \sum_Y C^{XYZ}_{~} \, C^{UYV}_{\rm m} \, ,
\end{equation}
where we regard the pair $(ZV)$ as the column index of $D$, and $(XU)$ as
the row index. The tensor $C^{~}_{\rm m}$ is the mirror image of $C$:
$C^{UYV}_{\rm m} \equiv C^{{U^T_{~}}Y{V^T_{~}}}_{~}$.
The partition function ${\Xi}_{2N}^{~}$ is then expressed as
\begin{equation}
{\Xi}_{2N}^{~} = {\rm Tr} \, D^4_{~} = \sum_{XU} Q^{(XU)(XU)}_{~} \, ,
\end{equation}
where the matrix $Q$ is the forth power of $D$ (Fig.6)
\begin{equation}
Q^{(XU)(ZV)}_{~} = \sum_{(AB)(CD)(EF)}
D^{(XU)(AB)}_{~} \, D^{(AB)(CD)}_{~} \,
D^{(CD)(EF)}_{~} \, D^{(EF)(ZV)}_{~} \, .
\end{equation}
The matrix $Q$ can be seen as a density matrix for the cube,
because ${\rm Tr} \, Q$ is the partition function
${\Xi}_{2N}^{~}$. By contracting two superscripts of $Q$, we obtain a
density submatrix
\begin{equation}
\rho^{XZ}_{~} = \sum_U Q^{(XU)(ZU)}_{~} \, ,
\end{equation}
which will be used for the RG transformation for the spin array.
Let us consider a density submatrix $\rho^{{\bar X}{\bar Z}}_{~}$ for the
extended cube with size $2(N + 1)$, where ${\bar X}$ is the extended
spin array Eq.(3.4); for a while we label the elements of ${\bar
Z}$ as
\begin{equation}
{\bar Z} = \left(
\begin{array}{ccc|c}
{x'}_{11}^{~} & \ldots & {x'}_{1N}^{~} & {f'}_{1}^{~} \\
\vdots & \ddots & \vdots & \vdots \\
{x'}_{N1}^{~} & \ldots & {x'}_{NN}^{~} & {f'}_{N}^{~} \\ \cline{1-4}
{b'}_{1}^{~} & \ldots & {b'}_{N}^{~} & {k'}
\end{array}
\right) \,
\end{equation}
in order to define another density submatrix.
By tracing out $N$ by $N + 1$ variables of the extended density matrix
$\rho^{{\bar X}{\bar Z}}_{~}$
\begin{equation}
\rho_{{\bar f}{\bar g}}^{~}
= \sum_{b_i^{~} = {b'}_i^{~}~~x_{ij}^{~} = {x'}_{ij}^{~}}
\rho^{{\bar X}{\bar Z}}_{~} \, ,
\end{equation}
where
${\bar f} = (f_1^{~}, \ldots, f_N^{~}, k)$ and
${\bar g} = ({f'}_1^{~}, \ldots, {f'}_N^{~}, k')$, we obtain another density
submatrix for the extended in-line spins. In the same way, we obtain
$\rho_{fg}^{~}$ for the in-line spins of length $N$ ---
$f = (x_{1N}^{~}, \ldots, x_{NN}^{~})$ and
$g = ({x'}_{1N}^{~}, \ldots, {x'}_{NN}^{~})$ ---
by tracing out $N-1$ by $N$ variables of $\rho^{XZ}_{~}$ in Eq.(3.9).
\section{RG Algorithm in Three Dimension}
As we have done in \S 2, we obtain RG transformations by way of the
diagonalizations of the density submatrices. We first consider the
eigenvalue relation
\begin{equation}
\sum_Z^{~} \rho^{XZ}_{~} U^Z_{\Psi} = \Lambda_{\Psi} U^X_{\Psi} \, ,
\end{equation}
where we assume the decreasing order for $\Lambda_{\Psi}$. We keep
first $m'$ eigenvalues, ($\Psi = 1, \ldots, m'$) and neglect the rest of
relatively small ones. We then obtain the RG transformation matrix
$U^X_{\Psi}$, that maps the spin array $X$ to an $m'$-state block spin
$\Psi$. For example, the corner tensor $C^{XYZ}_{~}$ is renormalized as
(Fig.7(a))
\begin{equation}
{\tilde C}^{\Psi \Phi \Theta}_{~} = \sum_{XYZ}^{~}
U_{\Psi}^X U_{\Phi}^Y U_{\Theta}^Z C^{XYZ}_{~} \, .
\end{equation}
It should be noted that under the transpose of the spin array $X
\rightarrow X^T_{~}$ the matrix $U^X_{\Psi}$ transforms as
$\pm U^X_{\Psi}$ according to the parity of the block spin $\Psi$.
Let us consider another eigenvalue relation
\begin{equation}
\sum_g^{~} \rho_{fg}^{~} A_g^{\psi} =
\lambda_{~}^{\psi} A_f^{\psi}
\end{equation}
for the density submatrix $\rho_{fg}^{~}$, where $f$ and $g$ are in-line
spins. We assume
the decreasing order for $\lambda^{\psi}_{~}$ as before, and we keep $m$
numbers of large eigenvalues. ($\psi = 1, \ldots, m$) The matrix
$A_f^{\psi}$ then represent the RG transformation for the in-line spin $f$.
For example, $P^{~i}_{abcd}$ is renormalized as (Fig.7(b))
\begin{equation}
{\tilde P}^{~i}_{\alpha \beta \gamma \delta} = \sum_{abcd}^{~}
P^{~i}_{abcd} A^{\alpha}_a A^{\beta}_b A^{\gamma}_c A^{\delta}_d \, .
\end{equation}
By using both $U^X_{\Psi}$ and $A_f^{\psi}$, we can renormalize
$S_{ab}^{XY}$ as (Fig.7(c))
\begin{equation}
{\tilde S}_{\alpha \beta}^{\Psi \Phi} = \sum_{a b X Y}
S_{ab}^{XY} A_a^{\alpha} A_b^{\beta} U_{\Psi}^X U_{\Phi}^Y \, .
\end{equation}
As a result of RG transformations, the tensors $P^{~i}_{abcd}$,
$S_{ab}^{XY}$, and $C_{~}^{XYZ}$ are approximated as
\begin{eqnarray}
P^{~i}_{abcd} & \sim & \sum_{\alpha \beta \gamma \delta = 1}^m
A^{\alpha}_{a} A^{\beta}_{b} A^{\gamma}_{c} A^{\delta}_{d}
{\tilde P}^{~i}_{\alpha \beta \gamma \delta} \\
S^{XY}_{a \, b} & \sim & \sum_{\alpha \beta = 1}^m
\sum_{\Psi \Phi =1}^{m'}
A^{\alpha}_{a} A^{\beta}_{b} U^{X}_{\Psi} U^{Y}_{\Phi}
{\tilde S}^{\Psi \Phi}_{\alpha \, \beta} \\
C^{XYZ}_{~} & \sim & \sum_{\Psi \Phi \Theta = 1}^{m'}
U^{X}_{\Psi} U^{Y}_{\Phi} U^{Z}_{\Theta}
{\tilde C}^{\Psi \Phi \Theta}_{~} \, .
\end{eqnarray}
For the models that have unique ground-state spin configuration,
the above approximations become exact when $T = 0$ and $T = \infty$ even
for $m = m' = 1$.
\begin{figure}
\figureheight{6cm}
\caption{
The renormalized tensors
(a) ${\tilde P}^{~i}_{\alpha \beta \gamma \delta}$ in Eq.(4.4),
(b) ${\tilde S}_{\alpha \beta}^{\Psi \Phi}$ in Eq.(4.5), and
(c) ${\tilde C}^{\Psi \Phi \Theta}_{~}$ in Eq.(4.2).
The greek letters $\alpha$, $\beta$, $\gamma$, and $\delta$ denote
$m$-state renormalized in-line spins, and the capital ones $\Psi$, $\Phi$,
and $\Theta$ denote $m'$-state renormalized spin arrays.
}
\label{fig:7}
\end{figure}
Now we can directly generalize the algorithm of CTMRG to 3D lattice
models. The algorithm consists of the extensions for
$P^{~i}_{abcd}$ (Eq.(3.2)), $S^{XY}_{ab}$ (Eq.3.3)), and
$C^{XYZ}_{~}$ (Eq.(3.5)), and the RG transformations Eqs.(4.2),(4.4) and
(4.5). The procedure of the renormalization group is as follows:
\begin{itemize}
\item[(1)] Start from $N = 1$, where all the tensors can be expressed by
the Boltzmann weight $W_{ijklmn}^{~}$:
$P^{~i}_{abcd} = W_{iabcd \times}^{~}$,
$S^{XY}_{ab} = W_{aXbY \times \times}^{~}$, and
$C^{XYZ}_{~} = W_{ZXY \times \times \times}^{~}$, where the mark
`$\times$' represents the fixed boundary spin.
\item[(2)] Join the tensors $W_{ijklmn}^{~}$, $P^{~i}_{abcd}$,
$S^{XY}_{ab}$, and $C^{XYZ}_{~}$ using Eqs.(3.2), (3.3), and (3.5),
respectively, and obtain the extended ones $P^{~i}_{{\bar a}{\bar b}{\bar
c}{\bar d}}$, $S^{{\bar X}{\bar Y}}_{{\bar a}{\bar b}}$, and $C^{{\bar X}{\bar
Y}{\bar Z}}_{~}$. (Increment $N$ by one.)
\item[(3)] Using the extended corner tensor $C^{{\bar X}{\bar Y}{\bar
Z}}_{~}$ in Eq.(3.5), calculate the density matrix $\rho^{{\bar X}{\bar
Z}}_{~}$ via Eq.(3.9) and its submatrix $\rho^{~}_{{\bar f}{\bar g}}$ in
Eq.(3.11).
\item[(4)] Obtain the RG transformation matrix $U_{\Psi}^{\bar X}$ and
$A_{\bar a}^{\alpha}$ using Eqs.(4.1) and (4.3), respectively. We keep
$m'$ states for $\Psi$, and $m$ states for $\alpha$.
\item[(5)] Apply the RG transformations to the extended tensors to
obtain ${\tilde P}^{~i}_{\alpha \beta \gamma \delta}$ (Eq.(4.4)),
${\tilde S}_{\alpha \beta}^{\Psi \Phi}$ (Eq.(4.5)), and
${\tilde C}^{\Psi \Phi \Theta}_{~}$ (Eq.(4.2)).
\item[(6)] Goto the step (2) and repeat the procedures (2)-(5)
for the renormalized tensors ${\tilde P}$, ${\tilde S}$, and ${\tilde C}$.
\end{itemize}
Every time we extend the tensors in the step (2) the system size ---
the linear dimension of the cube --- increases by 2. After the step (3) we
can obtain the lower bound of the partition function by taking the trace of
the density submatrix $\Xi_{2(N+1)}^{~} = {\rm Tr} \, \rho^{{\bar X}{\bar
Z}}_{~} = {\rm Tr} \, \rho^{~}_{{\bar f}{\bar g}}$. We stop the iteration
when the partition function per vertex converges. Since the extended spin
array ${\bar X}$ of the density matrix $\rho^{{\bar X}{\bar Z}}_{~}$
contains the original (unrenormalized) spin variable, we can directly
calculate the local energy and the order parameter.~\cite{CTMRG2}
Let us apply the above algorithm to the 3D Ising model. The model
is equivalent to the 64-vertex model whose vertex weight is
given by
\begin{equation}
W_{ijklmn}^{~} = \sum_{\sigma = \pm 1}^{~}
U_{\sigma i}^{~} U_{\sigma j}^{~} U_{\sigma k}^{~}
U_{\sigma l}^{~}U_{\sigma m}^{~}U_{\sigma n}^{~} \, ,
\end{equation}
where $U_{\sigma i}^{~}$ is unity when $\sigma = i$, and is
$e^K_{~} + \sqrt{e^{2K}_{~}-1}$ when $\sigma \ne i$. The parameter $K$
denotes the inverse temperature $J / k_{\rm B} T$.
For this model the initial conditions for step (1) are slightly modified as
\begin{eqnarray}
P^{~i}_{abcd} & = & U_{\times i}^{~}
U_{\times a}^{~} U_{\times b}^{~}
U_{\times c}^{~} U_{\times d}^{~} \nonumber\\
S^{XY}_{ab} & = &
U_{\times X}^{~} U_{\times Y}^{~}
U_{\times a}^{~} U_{\times b}^{~} \nonumber\\
C^{XYZ}_{~} & = & U_{\times X}^{~} U_{\times Y}^{~} U_{\times Z}^{~} \, ,
\end{eqnarray}
where `$\times$' represents the boundary Ising spin. (The modification is
nothing but a change of normalization.) We impose the ferromagnetic
boundary condition $\times = 1$. As a trial calculation we keep only two
states for both in-line spins ($m = 2$) and spin arrays ($m' = 2$); when
$m' = 2$ the parity of the renormalized spin array $\Psi$ in Eq.(4.1) is
always even. Figure 8 shows the calculated spontaneous magnetization.
Because of the smallness of $m$ and $m'$, the transition temperature
$T_{\rm c}^{~}$ is overestimated; this feature is
common to the Kramers-Wannier approximation.~\cite{Krm}
\begin{figure}
\figureheight{6cm}
\caption{
Calculated spontaneous magnetization of the 3D Ising model when $m =
m' = 2$. The arrow shows the true $T_{\rm c}^{~}$.
}
\label{fig:8}
\end{figure}
Compared to the CTMRG algorithm for 2D classical systems, the above RG
algorithm for 3D systems requires much more computational time. The
reason is that after the step (2) the extended in-line spin ${\bar f}$
becomes $2 m$-state, and the extended spin array ${\bar X}$ becomes $2
m^2 m'$-state; in order to obtain $\rho^{{\bar X}{\bar Z}}_{~}$ in the step
(3) we have to create a matrix $D^{({\bar X}{\bar U})({\bar Z}{\bar
V})}_{~}$ by Eq.(3.6), whose dimension is $4 m^4 {m'}^2$. For the simplest
(non-trivial) case $m = m' = 2$ the dimension is already 256.
\section{Conclusion and discussion}
We have explained a way of generalizing the RG algorithm of
CTMRG~\cite{CTMRG1,CTMRG2} to 3D classical models, focusing on the
construction of the density matrix from eight corner tensors. The RG
transformations are obtained through the diagonalizations of the density
matrices. As far as we know, this is the first generalization of the
infinite-system density matrix algorithm~\cite{Wh1,Wh2} to 3D classical
systems.
From the computational viewpoint, the calculation in 3D is far
heavier than that of CTMRG in 2D; we have to improve the numerical
algorithm in 3D for realistic use. What we have done is to approximate the
eigenstate of a transfer matrix in 3D as a two-dimensional product of the
renormalized tensor ${\tilde P}$ (Eq.(4.4)); the most important process is
to improve the tensor elements in ${\tilde P}$ so that the variational
partition function is maximized. The improvement of the tensor product
state for 1D quantum systems proposed by Dukelsky et
al.~\cite{RecGerman1,RecGerman2}, whose algorithm does not
explicitly require the density matrix, may be of use to reduce the numerical
effort in three dimensions.
The authors would like to express their sincere thanks to Y.~Akutsu
and M.~Kikuchi for valuable discussions. T.~N. is grateful to G.~Sierra for
discussions on the matrix product state. K.~O. is supported by the JSPS
Research Fellowships for Young Scientists. The present work is partially
supported by a Grant-in-Aid from the Ministry of Education, Science and Culture
of Japan. Most of the numerical calculations were performed on the NEC SX-4
at the computer center of Osaka University.
\section{Introduction}
Description logics have been designed as
knowledge representation formalisms that have good computational
properties \cite{BaaderCalvaneseMcGuinnessNardiPatelSchneider03}.
Correspondingly, there has been a lot of research into the computational
complexity of reasoning problems for different description logics.
This research has, however, relied entirely on the framework of classical
complexity theory
(see, e.g.,~\citealt{BaaderCalvaneseMcGuinnessNardiPatelSchneider03};~%
\citealt{BaaderHorrocksSattler08}).
The more fine-grained and multi-dimensional
framework of parameterized complexity theory
has hardly been applied to study the complexity of reasoning problems
for description logics.
Only a few works used the framework of parameterized complexity
to study description logic problems
\cite{BienvenuKikotKontchakovPodolskiiRyzhikovZakharyaschev17,%
BienvenuKikotKontchakovRyzhikovZakharyaschev17,%
CeylanPenaloza14,KikotKontchakovZakharyaschev11,Motik12,%
SimancikMotikHorrocks14,SimancikMotikKroetzsch11}.
Moreover, these works all use the framework in a traditional way,
focusing purely on one commonly used notion of tractability
(namely that of fixed-parameter tractability).
Parameterized complexity is designed to address the downside of
classical complexity theory that it is largely ignorant of structural properties
of problem inputs that can potentially be exploited algorithmically.
It does so by distinguishing a problem parameter~$k$, in addition to the
input size~$n$, and measuring running times in terms of both of these.
The parameter~$k$ can be used to measure various types of structure
that are present in the problem input.
Parameterized complexity theory has grown into a large
and thriving research community over the last few decades
(see, e.g.,~\citealt{BodlaenderDowneyFominMarx12};
\citealt{Downey12};
\citealt{DowneyFellows13}).
Most results and techniques in parameterized complexity
theory revolve around the notion of \emph{fixed-parameter tractability}---%
a relaxation of polynomial-time solvability
based on running times of the
form~$f(k) \cdot n^{O(1)}$, for some computable
function~$f$ (possibly exponential or worse).
Due to the fact that reasoning problems related to
description logics are typically of high complexity
(e.g., complete for classes like \text{\normalfont PSPACE}{} and \text{\normalfont EXPTIME}{}),
it is unsurprising that one would
need very restrictive parameters to obtain fixed-parameter tractability results
for such problems.
It has been proposed recently that the investigation of
problems that are of higher complexity
can also benefit from the parameterized complexity point of view
\cite{DeHaan16,DeHaanSzeider14b,DeHaanSzeider14,DeHaanSzeider16,%
DeHaanSzeider17}---using tools and methods
that go beyond the traditional focus on fixed-parameter tractability as the only notion of a positive result.
In this paper, we show how the complexity study of description logic
problems can benefit from using the framework of parameterized complexity
and all the tools and methods that it offers.
We do so using three case studies:
(1)~parameterized results for
concept satisfiability for \ALC{} with respect to nearly acyclic TBoxes,
(2)~parameterized results for
concept satisfiability for fragments of \ALC{} that are close
to \ALE{}, \ALU{} and \AL{}, respectively, and
(3)~parameterized results
addressing the notion of data complexity
for instance checking and conjunctive query entailment for \ELI{}.
The complexity results that we obtain
are summarized in Tables~\ref{table:alc-concept-sat-tboxes},%
~\ref{table:alc-concept-sat} and~\ref{table:data-complexity}---%
at the end of the sections where we present the case studies.
\paragraph{Outline.}
We begin by giving an overview of the theory of parameterized complexity---%
including commonly used (and more traditional) concepts and tools,
as well as more progressive notions.
Then we present our three case studies in three separate sections,
before sketching directions for future research and concluding.
\section{Parameterized Complexity Theory}
We begin by introducing relevant concepts
from the theory of parameterized complexity.
For more details, we refer to textbooks on the
topic~\cite{DowneyFellows13,FlumGrohe06}.
We introduce both concepts that are used commonly in
parameterized complexity analyses in the literature
and less commonly used concepts, that play a role
in this paper.
\paragraph{FPT and XP.}
The core notion in parameterized complexity is that
of fixed-parameter tractability, which is a relaxation of the
traditional notion of polynomial-time solvability.
Fixed-parameter tractability is a property of parameterized problems.
A \emph{parameterized problem~$Q$} is a subset of~$\Sigma^{*} \times \mathbb{N}$,
for some finite alphabet~$\Sigma$.
An instance of a parameterized problem is a pair~$(x,k)$
where~$x$ is the main part of the instance,
and~$k$ is the parameter.
Intuitively, the parameter captures some type of structure
of the instance that could
potentially be exploited algorithmically---%
the smaller the value of the parameter~$k$, the more structure
there is in the instance.
(When considering multiple parameters,
we take their sum as a single parameter.)
A parameterized problem is \emph{fixed-parameter tractable}
if instances~$(x,k)$ of the problem can be solved by a deterministic algorithm
that runs in time~$f(k)\Card{x}^{O(1)}$,
where~$f$ is a computable function of~$k$.
Algorithms running within such time bounds are called \emph{fpt-algorithms}.
\text{\normalfont FPT}{} denotes the class of all parameterized problems
that are fixed-parameter tractable.
Intuitively, the idea behind fixed-parameter tractability is that whenever the
parameter value~$k$ is small, the overall running time is reasonably small---%
assuming that the constant hidden behind~$O(1)$ is small.
In fact, for every fixed parameter value~$k$, the running time of an fpt-algorithm
is polynomial (where the order of the polynomial is constant).
A related parameterized complexity class is \text{\normalfont XP}{},
which consists of all parameterized problems for which instances~$(x,k)$
can be solved in time~$n^{f(k)}$, for some computable function~$f$.
Algorithms running within such time bounds are called \emph{xp-algorithms}.
That is, a parameterized problem~$Q$ is in \text{\normalfont XP}{} if there is an algorithm
that solves~$Q$ in polynomial time for each fixed value~$k$ of the parameter---%
where the order of the polynomial may grow with~$k$.
It holds that~$\text{\normalfont FPT} \subsetneq \text{\normalfont XP}{}$.
Intuitively, if a parameterized problem is in~$\text{\normalfont XP}{} \setminus \text{\normalfont FPT}{}$,
it is not likely to be efficiently solvable in practice.
Suppose, for example, that a problem is solvable in time~$n^{k}$
in the worst case.
Then already for~$n = 100$ and~$k = 10$, this amounts to~$100^{10} = 10^{20}$
steps, which is far beyond what is feasible in practice (see, e.g.,~\citealt{Downey12}).
\paragraph{Completeness Theory.}
Parameterized complexity also offers a \emph{completeness theory},
similar to the theory of \text{\normalfont NP}{}-completeness,
that provides a way to obtain evidence that
a parameterized problem is not fixed-parameter tractable.
Hardness for parameterized complexity classes is based on fpt-reductions,
which are many-one reductions where the parameter of one problem
maps into the parameter for the other.
More specifically, a parameterized problem~$Q$ is fpt-reducible to another
parameterized problem~$Q'$
if there is a mapping~$R$
that maps instances of~$Q$ to instances of~$Q'$ such that
(i)~$(I,k) \in Q$ if and only if~$R(I,k) = (I',k') \in Q'$,
(ii)~$k' \leq g(k)$ for a computable function~$g$, and
(iii)~$R$ can be computed in time~$f(k)\Card{I}^c$
for a computable function~$f$ and a constant~$c$.
A problem~$Q$ is \emph{hard} for a parameterized complexity
class~$\mtext{K}$
if every problem~$Q' \in \mtext{K}$ can be
fpt-reduced to~$Q$.
A problem~$Q$ is \emph{complete} for a parameterized
complexity class~$\mtext{K}$
if~$Q \in \mtext{K}$
and~$Q$ is $\mtext{K}$-hard.
Central to the completeness theory are the classes~$\W{1}
\subseteq \W{2} \subseteq \dotsc \subseteq \W{P} \subseteq \text{\normalfont XP}$
of the Weft hierarchy.
We will not define the classes~\W{\ensuremath{t}} in detail
(for details, see, e.g., \citealt{FlumGrohe06}).
It suffices to note that it is widely believed that~$\W{1} \neq \text{\normalfont FPT}$.%
\footnote{In fact, it holds that~$\W{1} \neq \text{\normalfont FPT}$, assuming that
$n$-variable 3SAT cannot be solved in subexponential time,
that is, in time~$2^{o(n)}$
\cite{ChenChorFellowsHuangJuedesKanjXia05,%
ChenKanj12,DowneyFellows13}.}
Thus, showing that a problem~$Q$ is \W{1}-hard gives evidence
that~$Q$ is not fpt-time solvable.
An example of a \W{1}-complete parameterized problem
is \mtext{\textsc{Clique}}{} \cite{DowneyFellows95,DowneyFellows13}.
Instances for this problem consist of~$(G,k)$,
where~$G = (V,E)$ is an undirected graph, and~$k \in \mathbb{N}$.
The parameter is~$k$, and the question is to decide
whether~$G$ contains a clique of size~$k$.
\paragraph{Para-K.}
For each classical complexity class~$\mtext{K}$,
we can construct a parameterized analogue~$\text{\normalfont para-}{\mtext{K}}$
\cite{FlumGrohe03}.
Let~$\mtext{K}$ be a classical complexity class, e.g., \text{\normalfont NP}{}.
The parameterized complexity
class~$\text{\normalfont para-}{\mtext{K}}$ is then defined as the class of all parameterized
problems~$Q \subseteq \Sigma^{*} \times \mathbb{N}{}$
for which there exist a computable function~$f : \mathbb{N}{} \rightarrow \Sigma^{*}$
and a problem~$Q' \subseteq \Sigma^{*} \times \Sigma^{*}$
in~$\mtext{K}$, such
that for all instances~$(x,k) \in \Sigma^{*} \times \mathbb{N}{}$
it holds that~$(x,k) \in Q$ if and only if~$(x,f(k)) \in Q'$.
Intuitively, the class~$\text{\normalfont para-}{\mtext{K}}$ consists of all problems that are
in~$\mtext{K}$ after a precomputation that only involves the parameter.
A common example of such parameterized analogues of classical
complexity classes is the parameterized complexity class \text{\normalfont para-}{\text{\normalfont NP}}.
Another example is~$\text{\normalfont para-}{\P} = \text{\normalfont FPT}$.
If (the unparameterized variant of) a parameterized problem~$Q$
is in the class~$\mtext{K}$, then~$Q \in \text{\normalfont para-}{\mtext{K}}$.
Also, if~$Q$ is already $\mtext{K}$-hard for a finite set of
parameter values, then~$Q$ is $\text{\normalfont para-}{\mtext{K}}$-hard
\cite{FlumGrohe03}.
Using the classes \text{\normalfont para-}{\mtext{K}} and the notion of fpt-reductions,
one can also provide evidence that certain parameterized problems
are not fixed-parameter tractable.
If a \text{\normalfont para-}{\mtext{K}}-hard parameterized problem is fixed-parameter
tractable, then~$\mtext{K} = \P$.
For example, a \text{\normalfont para-}{\text{\normalfont NP}}-hard parameterized problem is not
fixed-parameter tractable, unless~$\P = \text{\normalfont NP}$.
\paragraph{Para-NP and para-co-NP.}
The classes \text{\normalfont para-}{\text{\normalfont NP}} and \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} are parameterized analogues
of the classes \text{\normalfont NP}{} and \text{\normalfont co-}{\text{\normalfont NP}}.
The class \text{\normalfont para-}{\text{\normalfont NP}} can alternatively be defined as the class of parameterized
problems that are solvable in fpt-time by a non-deterministic
algorithm~\cite{FlumGrohe03}.
Similarly, \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} can be defined using fpt-algorithms using
universal nondeterminism---%
i.e., nondeterministic fpt-algorithms that reject the input if at least
one sequence of nondeterministic choices leads the algorithm to reject.
It holds that~$\W{1} \subseteq \W{2} \subseteq \dotsm \subseteq \W{P}
\subseteq \text{\normalfont para-}{\text{\normalfont NP}}$.
Another alternative definition of the class \text{\normalfont para-}{\text{\normalfont NP}}---that can
be motivated by the amazing practical performance of SAT solving
algorithms (see, e.g.,~\citealt{BiereHeuleMaarenWalsh09})---%
is using the following parameterized variant of the
propositional satisfiability problem
\cite{DeHaan16,DeHaanSzeider14b,DeHaanSzeider14,%
DeHaanSzeider17}.
Let~$\ensuremath{\mtext{\sc SAT}}_{1} = \SB (\varphi,1) \SM \varphi \in \ensuremath{\mtext{\sc SAT}} \SE$ be the
problem SAT with a constant parameter~$k=1$.
The class \text{\normalfont para-}{\text{\normalfont NP}} consists of all problems that can be fpt-reduced
to~$\ensuremath{\mtext{\sc SAT}}_{1}$.
In other words, \text{\normalfont para-}{\text{\normalfont NP}} can be seen as the class of all parameterized
problems that can be solved by (1)~a fixed-parameter tractable encoding
into SAT, and (2)~using a SAT solving algorithm to then decide the problem.
The class \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} can be characterized in a similar way,
using UNSAT instead of SAT.
Consequently, problems in \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} can also be solved using the
combination of an fpt-time encoding and a SAT solving algorithm.
\paragraph{Para-PSPACE.}
The class \text{\normalfont para-}{\text{\normalfont PSPACE}}
can alternatively be defined as the class
of all parameterized problems~$Q$
for which there exists a (deterministic or nondeterministic) algorithm
deciding whether~$(x,k) \in Q$
using space~$f(k) |x|^{O(1)}$, for some computable function~$f$.
It holds that~$\text{\normalfont para-}{\text{\normalfont NP}} \cup \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} \subseteq \text{\normalfont para-}{\text{\normalfont PSPACE}}$.
Another alternative characterization of \text{\normalfont para-}{\text{\normalfont PSPACE}} is using
a parameterized variant of \ensuremath{\mtext{\sc TQBF}}{}---the problem of deciding
whether a given quantified Boolean formula is true.
Let~$\ensuremath{\mtext{\sc TQBF}}_{1} = \SB (\varphi,1) \SM \varphi \in \ensuremath{\mtext{\sc TQBF}} \SE$ be the
problem \ensuremath{\mtext{\sc TQBF}}{} with a constant parameter~$k=1$.
The class \text{\normalfont para-}{\text{\normalfont PSPACE}} consists of all problems that can be fpt-reduced
to~$\ensuremath{\mtext{\sc TQBF}}_{1}$.
In other words, \text{\normalfont para-}{\text{\normalfont PSPACE}} can be seen as the class of all parameterized
problems that can be solved by (1)~an fpt-time encoding
into TQBF, and (2)~using a TQBF solver to then decide the problem
(see, e.g.,~\citealt{BiereHeuleMaarenWalsh09}).
Yet another characterization of \text{\normalfont para-}{\text{\normalfont PSPACE}} uses alternating Turing
machines (ATMs).
An ATM is a nondeterministic Turing machine where the states are partitioned
into existential and universal states (see, e.g.,~\citealt{FlumGrohe06}, Appendix~A.1).
A configuration of the ATM with an existential state is accepting if at least one
successor configuration is accepting, and a configuration with a universal state
is accepting if all successor configurations are accepting.
Intuitively, an ATM can alternate between existential and universal nondeterminism.
The class \text{\normalfont para-}{\text{\normalfont PSPACE}} consists of all parameterized problems that can be
decided by an ATM in fixed-parameter tractable time.
\paragraph{Para-EXPTIME.}
The class \text{\normalfont para-}{\text{\normalfont EXPTIME}}
can be defined as the class
of all parameterized problems~$Q$
for which there exists a deterministic algorithm deciding whether~$(x,k) \in Q$
in time~$f(k) 2^{|x|^{O(1)}}$, for some computable function~$f$.
It holds that~$\text{\normalfont para-}{\text{\normalfont PSPACE}} \subseteq \text{\normalfont para-}{\text{\normalfont EXPTIME}}$
and that~$\text{\normalfont XP} \subseteq \text{\normalfont para-}{\text{\normalfont EXPTIME}}$.
For an overview of all parameterized complexity classes
that feature in this paper---and their relation---%
see Figure~\ref{fig:parameterized-landscape}.
\begin{figure}[htp!]
\begin{center}
\begin{tikzpicture}
%
\node[] (fpt) at (0,-0.15) {$\text{\normalfont FPT}$};
\node[] (w1) at (-1.25,0.5) {$\W{1}$};
\node[] (cow1) at (1.25,0.5) {$\text{\normalfont co-}{\W{1}}$};
\node[] (paranp) at (-2.5,1.25) {$\text{\normalfont para-}{\text{\normalfont NP}}$};
\node[] (paraconp) at (2.5,1.25) {$\text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}$};
\node[] (xp) at (-1.25,1.25) {\phantom{p}\!\!\!\!$\text{\normalfont XP}{}$};
\node[] (parapspace) at (0,2.2) {$\text{\normalfont para-}{\text{\normalfont PSPACE}}$};
\node[] (paraexptime) at (0,3.1) {$\text{\normalfont para-}{\text{\normalfont EXPTIME}}$};
%
\draw[->] (fpt) -- (w1);
\draw[->] (fpt) -- (cow1);
\draw[->] (w1) -- (paranp);
\draw[->] (cow1) -- (paraconp);
\draw[->] (paranp) -- (parapspace);
\draw[->] (paraconp) -- (parapspace);
\draw[->] (w1) -- (xp);
\draw[->] (cow1) -- (xp);
\draw[->] (xp) edge[bend left=35] (paraexptime);
\draw[->] (parapspace) -- (paraexptime);
%
\end{tikzpicture}
\end{center}
\vspace{-10pt}
\caption{An overview of the landscape of parameterized
complexity classes that play a role in this paper.}
\label{fig:parameterized-landscape}
\end{figure}
\section{Case Study 1: Concept Satisfiability for \ALC{} with respect to Nearly Acyclic TBoxes}
\label{sec:alc-almost-acyclic}
In this section, we provide our first case study to illustrate how
parameterized complexity can be used to obtain a more detailed
image of the computational complexity of description logic reasoning.
In particular, we consider the problem of concept satisfiability
for the description logic \ALC{} with respect to general TBoxes.
This problem is \text{\normalfont EXPTIME}{}-complete in general.
We consider two parameters for this problem.
One of these parameters does not help to reduce the complexity
of the problem---that is,
for this parameter the problem is \text{\normalfont para-}{\text{\normalfont EXPTIME}}-complete.
The other parameter does help to reduce the complexity
of the problem---that is,
for this parameter the problem is \text{\normalfont para-}{\text{\normalfont PSPACE}}-complete.
We begin by revisiting the description logic \ALC{},
the problem of concept satisfiability with respect
to acyclic and general TBoxes,
and classical complexity results for this problem.
We then discuss our parameterized complexity results,
and how to interpret these results.
\subsection{The Description Logic \ALC{}}
Let~$N_C$,~$N_R$ and~$N_O$ be
sets of
\emph{atomic concepts},
\emph{roles}, and \emph{individuals},
respectively. The triple~$(N_C,N_R,N_O)$ is called the \emph{signature}.
(We will often omit the signature if this is clear from the context.)
Concepts~$C$ are defined by the following grammar
in Backus-Naur form, for~$R \in N_R$ and~$A \in N_C$:
\[ C := A\ |\ \top\ |\ \bot\ |\ \neg C\ |\ C \sqcap C\ |\ C \sqcup C\ |\ \exists R. C\ |\ \forall R. C. \]
An \emph{interpretation~$\mathcal{I} = (\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})$}
over a signature~$(N_C,N_R,N_O)$ consists of
a non-empty set~$\Delta^{\mathcal{I}}$ called the \emph{domain},
and an interpretation function~$\cdot^{\mathcal{I}}$ that maps
(1)~every individual~$a \in N_O$ to an
element~$a^{\mathcal{I}} \in \Delta^{\mathcal{I}}$,
(2)~every concept~$C$ to a subset of~$\Delta^{\mathcal{I}}$,
and (3)~every role~$R \in N_R$ to a subset
of~$\Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$,
such that:\\[3pt]
\begin{minipage}{0.40\linewidth}
\begin{itemize}
\item $\top^{\mathcal{I}} = \Delta^{\mathcal{I}}$; $\bot^{\mathcal{I}} = \emptyset$;
\item $(\neg C)^{\mathcal{I}} = \Delta^{\mathcal{I}} \setminus C^{\mathcal{I}}$;
\end{itemize}
\end{minipage}
\begin{minipage}{0.58\linewidth}
\begin{itemize}
\item $(C_1 \sqcap C_2)^{\mathcal{I}} = (C_1)^{\mathcal{I}} \cap (C_2)^{\mathcal{I}}$;
\item $(C_1 \sqcup C_2)^{\mathcal{I}} = (C_1)^{\mathcal{I}} \cup (C_2)^{\mathcal{I}}$;
\end{itemize}
\end{minipage}
\begin{itemize}
\item $(\exists R. C)^{\mathcal{I}} = \SB x \in \Delta^{\mathcal{I}} \SM$
there exists some~$y \in C^{\mathcal{I}}$ such that $(x,y) \in R^{\mathcal{I}}$ $\SE$; and
\item $(\forall R. C)^{\mathcal{I}} = \SB x \in \Delta^{\mathcal{I}} \SM$
for each~$y$ such that $(x,y) \in R^{\mathcal{I}}$
it holds that~$y \in C^{\mathcal{I}}$ $\SE$.
\end{itemize}
A \emph{general concept inclusion (GCI)} is a statement of the
form~$C \sqsubseteq D$, where~$C,D$ are concepts.
We write~$\mathcal{I} \models C \sqsubseteq D$
(and say that~$\mathcal{I}$ satisfies~$C \sqsubseteq D$)
if~$C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$.
A \emph{(general) TBox~$\mathcal{T}$} is a finite set of GCIs.
A \emph{concept definition} is a statement of the
form~$A \equiv C$, where~$A \in N_C$ is an atomic concept,
and~$C$ is a concept.
We write~$\mathcal{I} \models A \equiv C$
(and say that~$\mathcal{I}$ satisfies~$A \equiv C$) if~$A^{\mathcal{I}} = C^{\mathcal{I}}$.
An \emph{acyclic TBox~$\mathcal{T}$} is a finite set of concept definitions
such that (1)~$\mathcal{T}$ does not contain two different concept
definitions~$A \equiv C_1$ and~$A \equiv C_2$ for any~$A \in N_C$,
and (2)~$\mathcal{T}$ contains no (direct or indirect) cyclic definitions---%
that is, the graph~$G_{\mathcal{T}}$ with vertex set~$N_C$ that contains
an edge~$(A,B)$ if and only if~$\mathcal{T}$ contains a concept
definition~$A \equiv C$ where~$B$ occurs in~$C$ is acyclic.
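Condition (2) can be tested in linear time by a depth-first search
on~$G_{\mathcal{T}}$. The following sketch (Python, with a hypothetical
encoding of the TBox as a dictionary mapping each defined atomic concept
to the set of atomic concepts occurring in its definition) illustrates the check:
\begin{verbatim}
def is_acyclic_tbox(definitions):
    # definitions: dict mapping each defined atomic concept A to the
    # set of atomic concepts occurring in the concept C of A = C.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {A: WHITE for A in definitions}

    def dfs(A):
        color[A] = GRAY
        for B in definitions[A]:            # edge (A, B) of G_T
            if color.get(B) == GRAY:
                return False                # back edge: cyclic definition
            if B in definitions and color[B] == WHITE and not dfs(B):
                return False
        color[A] = BLACK
        return True

    return all(color[A] != WHITE or dfs(A) for A in definitions)
\end{verbatim}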
An interpretation~$\mathcal{I}$ satisfies a (general or acyclic) TBox~$\mathcal{T}$
if~$\mathcal{I}$ satisfies all GCIs or concept definitions in~$\mathcal{T}$.
A \emph{concept assertion} is a statement of the form~$C(a)$,
where~$a \in N_O$ and~$C$ is a concept.
A \emph{role assertion} is a statement of the form~$R(a,b)$,
where~$a,b \in N_O$ and~$R \in N_R$.
We write~$\mathcal{I} \models C(a)$ (and say that~$\mathcal{I}$ satisfies~$C(a)$)
if~$a^{\mathcal{I}} \in C^{\mathcal{I}}$.
Moreover, we write~$\mathcal{I} \models R(a,b)$ (and say that~$\mathcal{I}$ satisfies~$R(a,b)$)
if~$(a^{\mathcal{I}},b^{\mathcal{I}}) \in R^{\mathcal{I}}$.
An \emph{ABox~$\AAA$} is a finite set of concept and role assertions.
\subsection{Classical Complexity Results}
An important reasoning problem for description logics is the problem
of \emph{concept satisfiability}.
In this decision problem, the input consists of a concept~$C$
and a TBox~$\mathcal{T}$, and the question is whether~$C$ is satisfiable
with respect to~$\mathcal{T}$---%
that is, whether there exists an interpretation~$\mathcal{I}$ such that~$\mathcal{I} \models \mathcal{T}$
and~$C^{\mathcal{I}} \neq \emptyset$.
The problem of concept satisfiability is \text{\normalfont PSPACE}{}-complete, both for the case
where~$\mathcal{T}$ is empty and for the case where~$\mathcal{T}$ is an acyclic TBox.
For the case where~$\mathcal{T}$ is a general TBox, the problem
is \text{\normalfont EXPTIME}{}-complete.
\begin{proposition}[\citealt{DoniniMassacci00};~\citealt{Schild91}]
\label{prop:alc-exptime}
Concept satisfiability for the logic \ALC{} with respect to general TBoxes
is \text{\normalfont EXPTIME}{}-complete.
\end{proposition}
\begin{proposition}[\citealt{BaaderLutzMilicicSattlerWolter05};~\citealt{SchmidtSchaussSmolka91}]
\label{prop:alc-pspace-acyclic}
Concept satisfiability for the logic \ALC{} with respect to acyclic TBoxes
is \text{\normalfont PSPACE}{}-complete.
\end{proposition}
\subsection{Parameterized Complexity Results}
We consider a parameterized variant of the problem of concept satisfiability
for \ALC{} where the parameter captures the distance towards acyclicity for
the given TBox.
That is, for this parameterized problem, the input consists of a concept~$C$,
an acyclic TBox~$\mathcal{T}_1$, and a general TBox~$\mathcal{T}_2$.
The parameter is~$k = |\mathcal{T}_2|$, and the question is whether~$C$ is
satisfiable with respect to~$\mathcal{T}_1 \cup \mathcal{T}_2$---%
that is, whether there exists an interpretation~$\mathcal{I}$ such that~$\mathcal{I} \models \mathcal{T}_1$,~%
$\mathcal{I} \models \mathcal{T}_2$, and~$C^{\mathcal{I}} \neq \emptyset$.
Parameterizing by the size of~$\mathcal{T}_2$ does not offer an improvement
in the complexity of the problem---that is, this parameter leads to
\text{\normalfont para-}{\text{\normalfont EXPTIME}}-completeness.
\begin{theorem}
\label{thm:alc-para-exptime}
Concept satisfiability for \ALC{} with respect to both an acyclic TBox~$\mathcal{T}_1$
and a general TBox~$\mathcal{T}_2$ is \text{\normalfont para-}{\text{\normalfont EXPTIME}}-complete
when parameterized by~$|\mathcal{T}_2|$.
\end{theorem}
\begin{proof}
Membership in \text{\normalfont para-}{\text{\normalfont EXPTIME}} follows from the fact that the unparameterized
version of the problem is in \text{\normalfont EXPTIME}{}.
To show \text{\normalfont para-}{\text{\normalfont EXPTIME}}-hardness, it suffices to show that the problem is already
\text{\normalfont EXPTIME}{}-hard for a constant value of the parameter \cite{FlumGrohe03}.
We do so by giving a reduction from the problem of concept satisfiability
for \ALC{} with respect to general TBoxes.
Let~$C$ be a concept and let~$\mathcal{T}$ be a general TBox.
Moreover, let~$\mathcal{T} = \SBs C_1 \sqsubseteq D_1,\dotsc,C_m \sqsubseteq D_m \SEs$.
We construct an acyclic TBox~$\mathcal{T}_1$
and a general TBox~$\mathcal{T}_2$ such that~$C$ is satisfiable with respect to~$\mathcal{T}$
if and only if it is satisfiable with respect to~$\mathcal{T}_1 \cup \mathcal{T}_2$.
Let~$A$ be a fresh atomic concept.
We let~$\mathcal{T}_1 = \SBs A \equiv \bigsqcap_{i=1}^{m} (\neg C_i \sqcup D_i) \SEs$,
and we let~$\mathcal{T}_2 = \SBs \top \sqsubseteq A \SEs$.
It is straightforward to verify that~$C$ is satisfiable with respect to~$\mathcal{T}$
if and only if it is satisfiable with respect to~$\mathcal{T}_1 \cup \mathcal{T}_2$.
Moreover,~$|\mathcal{T}_2|$ is constant.
From this, we can conclude that the problem is \text{\normalfont para-}{\text{\normalfont EXPTIME}}-hard.
\end{proof}
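As a concrete illustration of this construction, for the general
TBox~$\mathcal{T} = \SBs A \sqsubseteq \exists R. B, \, B \sqsubseteq A \SEs$
the reduction produces~$\mathcal{T}_1 = \SBs A' \equiv (\neg A \sqcup \exists R. B)
\sqcap (\neg B \sqcup A) \SEs$
and~$\mathcal{T}_2 = \SBs \top \sqsubseteq A' \SEs$,
where~$A'$ is the fresh atomic concept.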
Intuitively, restricting only the size of the general TBox~$\mathcal{T}_2$
does not restrict the problem, as we can encode a general TBox of arbitrary
size in the acyclic TBox (together with a small general TBox).
If we restrict the number of concepts impacted by the general TBox, however,
we do get an improvement in the complexity of the problem.
Let~$\mathcal{T}_1$ be an acyclic TBox and let~$\mathcal{T}_2$ be a general TBox.
We define the set of \emph{concepts impacted by~$\mathcal{T}_2$ (w.r.t.~$\mathcal{T}_1$)}
as the smallest set~$I$ of concepts that is closed under (syntactic)
subconcepts and that satisfies that
(A)~whenever~$C \sqsubseteq D \in \mathcal{T}_2$,
then~$C,D \in I$, and
(B)~whenever~$A \in I$ and~$A \equiv C \in \mathcal{T}_1$,
then~$C \in I$.
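This closure can be computed by a straightforward fixed-point iteration.
The following sketch (Python; the helper \texttt{subconcepts} is hypothetical
and is assumed to return all syntactic subconcepts of a concept, with concepts
represented as hashable objects) illustrates the computation:
\begin{verbatim}
def impacted_concepts(T1, T2):
    # T1: dict mapping defined atomic concepts A to their definitions C;
    # T2: iterable of pairs (C, D), one for each GCI C <= D in T2.
    # subconcepts(C) is a hypothetical helper returning all syntactic
    # subconcepts of C (including C itself).
    I, queue = set(), [E for pair in T2 for E in pair]  # condition (A)
    while queue:
        C = queue.pop()
        if C in I:
            continue
        I.add(C)
        queue.extend(subconcepts(C))   # closure under subconcepts
        if C in T1:                    # condition (B): A = C in T1
            queue.append(T1[C])
    return I
\end{verbatim}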
If we parameterize the problem of concept satisfiability with respect to both
an acyclic TBox~$\mathcal{T}_1$ and a general TBox~$\mathcal{T}_2$ by the number
of concepts impacted by~$\mathcal{T}_2$, the complexity of the problem jumps
down to \text{\normalfont para-}{\text{\normalfont PSPACE}}.
\begin{theorem}
\label{thm:alc-para-pspace}
Concept satisfiability for \ALC{} with respect to both an acyclic TBox~$\mathcal{T}_1$
and a general TBox~$\mathcal{T}_2$ is \text{\normalfont para-}{\text{\normalfont PSPACE}}-complete when
parameterized by the number~$k$ of concepts
that are impacted by~$\mathcal{T}_2$ (w.r.t.~$\mathcal{T}_1$).
\end{theorem}
\begin{proof}
Hardness for \text{\normalfont para-}{\text{\normalfont PSPACE}} follows directly from the fact that the problem
is already \text{\normalfont PSPACE}{}-hard when~$\mathcal{T}_2$ is empty
(Proposition~\ref{prop:alc-pspace-acyclic})---and thus the number of concepts
impacted by~$\mathcal{T}_2$ is~$0$.
We show membership in \text{\normalfont para-}{\text{\normalfont PSPACE}} by exhibiting a
nondeterministic algorithm to solve the problem that runs in
space~$f(k) \cdot n^{O(1)}$, for some computable function~$f$.
Let~$C$ be a concept, let~$\mathcal{T}_1$ be an acyclic TBox,
and let~$\mathcal{T}_2$ be a general TBox.
We may assume without loss of generality that all concepts
occurring in~$\mathcal{T}_1$ are in negation normal form---%
that is, negations occur only directly in front of atomic concepts.
If this were not the case, we could straightforwardly
transform~$\mathcal{T}_1$ to a TBox that does have this property
in polynomial time, by introducing new atomic concepts~$A'$
for any negated concept~$\neg A$.
The algorithm that we use is the usual tableau
algorithm (with static blocking) for \ALC{}---see, e.g.,~\cite{BaaderSattler01}.
That is, it aims to construct a tree that can be used to construct
an interpretation satisfying~$C$,~$\mathcal{T}_1$ and~$\mathcal{T}_2$.
For each node in the tree, it first exhaustively applies the rules
for the~$\sqcup$ and~$\sqcap$ operators,
the rules for the concept definitions in~$\mathcal{T}_1$,
and the rules for the GCIs in~$\mathcal{T}_2$
(for each~$C \sqsubseteq D \in \mathcal{T}_2$ adding the
concept~$\neg C \sqcup D$ to each node),
before applying the rules for the~$\exists$ and~$\forall$ operators.
Moreover, it applies all rules exhaustively to one node of the tree
before moving to another node.
Additionally, the algorithm uses the following usual
blocking condition (subset blocking):
the rule for the~$\exists$ operator cannot be
applied to a node~$x$ that has a predecessor~$y$ in the tree
that is labelled with all concepts that~$x$ is labelled with
(and possibly more).
It is straightforward to verify that this tableau algorithm correctly
decides the problem.
We argue that this algorithm requires space~$2^{k} \cdot n^{O(1)}$,
where~$k$ is the number of concepts impacted by~$\mathcal{T}_2$
and~$n$ denotes the input size.
It is straightforward to verify that there is a polynomial~$p$ such
that each node in the tree constructed by
the tableau algorithm that is more than~$p(n)$ steps
away from the root of the tree is only labelled with concepts that
are impacted by~$\mathcal{T}_2$.
Since there are only~$k$ concepts that are impacted by~$\mathcal{T}_2$,
we know that in each branch of the tree, the blocking condition
applies at depth at most~$2^k \cdot p(n)$, and thus that each
branch is of length at most~$2^k \cdot p(n)$.
From this, it follows that this algorithm requires
space~$2^k \cdot n^{O(1)}$,
and thus that the problem is in \text{\normalfont para-}{\text{\normalfont PSPACE}}.
\end{proof}
\subsection{Interpretation of the Results}
The results in this section are summarized
in Table~\ref{table:alc-concept-sat-tboxes}.
The parameterized results of Theorems~\ref{thm:alc-para-exptime}
and~\ref{thm:alc-para-pspace} show that parameterized complexity theory
can make a distinction between the complexity
of the two variants of the problem that classical complexity
theory is blind to.
Classically, both variants are \text{\normalfont EXPTIME}{}-complete,
but one parameter can be used to get a polynomial-space
algorithm (up to an additional factor of~$2^k$ in the space bound, for parameter value~$k$),
whereas the other parameter requires exponential space,
no matter what additional~$f(k)$ factor is allowed.
The \text{\normalfont para-}{\text{\normalfont PSPACE}} result
of Theorem~\ref{thm:alc-para-pspace} also
yields an algorithm
solving the problem using (1)~an fpt-time encoding into the
problem TQBF, and then (2)~using a TQBF solver
to decide the problem
(see, e.g.,~\citealt{BiereHeuleMaarenWalsh09}).
\begin{table}[!htb]
\centering
\begin{small}
\begin{tabular}{@{\ \ } p{3.0cm} @{\quad}|@{\quad}p{4.3cm} @{\ \ } } \toprule
\multirow{2}{*}{\textit{parameter}} & \textit{complexity of \ALC{} concept} \\
& \textit{satisfiability w.r.t.~$\mathcal{T}_1$ and~$\mathcal{T}_2$} \\
\midrule
\hspace{0.5pt}-- & \text{\normalfont EXPTIME}{}-c \hfill (Proposition~\ref{prop:alc-exptime}) \\[3pt]
$|\mathcal{T}_2|$ & \text{\normalfont para-}{\text{\normalfont EXPTIME}}-c \hfill (Theorem~\ref{thm:alc-para-exptime}) \\[3pt]
\# of concepts impacted\newline by~$\mathcal{T}_2$ (w.r.t.~$\mathcal{T}_1$) & \text{\normalfont para-}{\text{\normalfont PSPACE}}-c \hfill (Theorem~\ref{thm:alc-para-pspace}) \\[10pt]
\bottomrule
\end{tabular}
\end{small}
\caption{The parameterized complexity of \ALC{} concept satisfiability
w.r.t. both an acyclic TBox~$\mathcal{T}_1$ and
a general TBox~$\mathcal{T}_2$, for different parameters.}
%
\label{table:alc-concept-sat-tboxes}
\end{table}
\section{Case Study 2: Concept Satisfiability for \ALC{}, \ALE{}, \ALU{} and \AL{}}
In this section, we provide our second case study to illustrate how
parameterized complexity can be used to obtain a more detailed
image of the computational complexity of description logic reasoning.
In particular, we consider the problem of concept satisfiability
for the description logic \ALC{}.
This problem is \text{\normalfont PSPACE}{}-complete in general.
We consider several parameters that measure the distance
to the logics \ALE{} and \ALU{}.
The logics \ALE{} and \ALU{} are obtained from \ALC{} by
disallowing concept union and full existential quantification,
respectively.
The parameters that we consider both help to reduce the complexity
of the problem.
One parameter renders the problem \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-complete.
The other parameter renders the problem \text{\normalfont para-}{\text{\normalfont NP}}-complete.
The combination of both parameters renders the problem
fixed-parameter tractable.
We begin by revisiting the description logics \ALE{} and \ALU{}
(and their intersection \AL{}),
and classical complexity results for the problem of concept
satisfiability for these logics.
We then discuss our parameterized complexity results,
and how to interpret these results.
\subsection{The Description Logics \ALE{}, \ALU{} and \AL{}}
In order to obtain the description logics \ALE{}, \ALU{} and \AL{},
we consider a (syntactic) variant of the logic \ALC{} where all concepts
are in \emph{negation normal form}.
That is, negations only occur immediately followed by
atomic concepts.
Put differently, we consider concepts~$C$ that are
defined
as follows, for~$R \in N_R$ and~$A \in N_C$:
\[ C := A\ |\ \neg A\ |\ \top\ |\ \bot\ |\ C \sqcap C\ |\ C \sqcup C\ |\ \exists R. C\ |\ \forall R. C. \]
One can transform any \ALC{} concept into
negation normal form in linear time
(see, e.g.,~\citealt{BaaderHorrocksSattler08}).
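For illustration, this transformation can be sketched as follows (Python,
over a hypothetical tuple-based encoding of concepts; this encoding is ours
and is not taken from the cited literature):
\begin{verbatim}
def nnf(C, negated=False):
    # Concepts as tuples: ('atom', A), ('not', C), ('and', C, D),
    # ('or', C, D), ('exists', R, C), ('forall', R, C),
    # ('top',), ('bot',).
    op = C[0]
    if op == 'not':
        return nnf(C[1], not negated)      # push the negation inward
    if op == 'atom':
        return ('not', C) if negated else C
    if op == 'top':
        return ('bot',) if negated else C
    if op == 'bot':
        return ('top',) if negated else C
    if op in ('and', 'or'):
        dual = 'or' if (op == 'and') == negated else 'and'
        # de Morgan: the operator flips exactly when negated is True
        return (dual, nnf(C[1], negated), nnf(C[2], negated))
    if op in ('exists', 'forall'):
        dual = 'forall' if (op == 'exists') == negated else 'exists'
        # duality of the quantifiers under negation
        return (dual, C[1], nnf(C[2], negated))
\end{verbatim}
Each subconcept is visited once, so the transformation runs in linear time,
as stated above.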
The semantics of this variant of \ALC{} is defined exactly as
described in the previous section.
Throughout this section, we will only consider this variant of \ALC{}.
The description logic \ALE{} is obtained from the logic \ALC{}
by forbidding any occurrence of the operator~$\sqcup$.
The description logic \ALU{} is obtained from the logic \ALC{}
by requiring that for every occurrence~$\exists R. C$ of the
existential quantifier it holds that~$C = \top$; that is,
only limited existential quantification~$\exists R. \top$ is allowed.
The description logic \AL{} contains those concepts
that are concepts in both \ALE{} and \ALU{}---that is,
\AL{} is the intersection of \ALE{} and \ALU{}.
Thus, the logics \ALE{} and \ALU{} are obtained from \ALC{}
by means of two orthogonal restrictions:
disallowing concept union
and replacing full existential quantification by limited existential quantification,
respectively.
The logic \AL{} is obtained from \ALC{}
by using both of these
restrictions.
\subsection{Classical Complexity Results}
In this section, we consider the problem
of \emph{concept satisfiability} with respect to empty TBoxes.
In this decision problem, the input consists of a concept~$C$,
and the question is whether~$C$ is satisfiable---%
that is, whether there exists an interpretation~$\mathcal{I}$
such that~$C^{\mathcal{I}} \neq \emptyset$.
This problem is \text{\normalfont PSPACE}{}-complete for \ALC{},
\text{\normalfont co-}{\text{\normalfont NP}}-complete for \ALE{}, \text{\normalfont NP}{}-complete for \ALU{},
and polynomial-time solvable for \AL{}.
\begin{proposition}[\citealt{SchmidtSchaussSmolka91}]
\label{prop:alc-pspace}
Concept satisfiability for the logic \ALC{} is \text{\normalfont PSPACE}{}-complete.
\end{proposition}
\begin{proposition}[\citealt{DoniniHollunderLenzeriniSpaccamelaNardiNutt02}]
\label{prop:ale-conp}
Concept satisfiability for the logic \ALE{}
is \text{\normalfont co-}{\text{\normalfont NP}}-complete.
\end{proposition}
\begin{proposition}[\citealt{DoniniLenzeriniNardiNutt97}]
\label{prop:alu-np}
Concept satisfiability for the logic \ALU{}
is \text{\normalfont NP}-complete.
\end{proposition}
\begin{proposition}[\citealt{SchmidtSchaussSmolka91}]
\label{prop:al-p}
Concept satisfiability for the logic \AL{}
is polynomial-time solvable.
\end{proposition}
\begin{table*}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{p{1.75cm} p{12.4cm}}
\toprule
%
\multicolumn{2}{l}{\textbf{The $\sqcap$-rule}} \\
\textit{Condition} & $\AAA$ contains~$(C_1 \sqcap C_2)(x)$,
but it does not contain both~$C_1(x)$ and~$C_2(x)$. \\
\textit{Action} & $\AAA' = \AAA \cup \SBs C_1(x), C_2(x) \SEs$. \\
\midrule
%
\multicolumn{2}{l}{\textbf{The $\sqcup$-rule}} \\
\textit{Condition} & $\AAA$ contains~$(C_1 \sqcup C_2)(x)$,
but neither~$C_1(x)$ nor~$C_2(x)$. \\
\textit{Action} & $\AAA' = \AAA \cup \SBs C_1(x) \SEs$,
$\AAA'' = \AAA \cup \SBs C_2(x) \SEs$. \\
\midrule
%
%
\multicolumn{2}{l}{\textbf{The $\exists$-rule}} \\
\textit{Condition} & $\AAA$ contains~$(\exists R.C)(x)$,
but there is no individual~$z$ such
that~$C(z)$ and~$R(x,z)$ are in~$\AAA$. \\
\textit{Action} & $\AAA' = \big(\AAA \setminus \SB (\exists R'. C')(x') \in \AAA \SM x' = x,
(\exists R'.C') \neq (\exists R.C) \SE\big) \cup \SBs C(y), R(x,y) \SEs$, \\
& where~$y$ is an arbitrary individual not occurring
in~$\AAA$. \\
\midrule
%
\multicolumn{2}{l}{\textbf{The $\forall$-rule}} \\
\textit{Condition} & $\AAA$ contains~$(\forall R.C)(x)$
and~$R(x,y)$,
but it does not contain~$C(y)$. \\
\textit{Action} & $\AAA' = \AAA \cup \SBs C(y) \SEs$. \\
\midrule
%
\multicolumn{2}{l}{\textbf{The $\bot$-rule}} \\
\textit{Condition} & $\AAA$ contains~$A(x)$
and~$(\neg A)(x)$,
but it does not contain~$\bot$. \\
\textit{Action} & $\AAA' = \AAA \cup \SBs \bot \SEs$. \\
%
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vspace{-5pt}
\caption{Transformation rules of the tableau algorithm for \ALC{} concept satisfiability.}
\label{table:tableau-rules}
\vspace{-5pt}
\end{table*}
\subsection{Parameterized Complexity Results}
In order to conveniently describe the parameterized complexity results
that we will establish in this section, we firstly describe an algorithm
for deciding concept satisfiability for \ALC{} in polynomial space
(see, e.g.,
\citealt{BaaderCalvaneseMcGuinnessNardiPatelSchneider03},
Chapter 2).
To use this algorithm to prove the parameterized
complexity results in this section, we describe a variant of the
algorithm that can be implemented by a polynomial-time
alternating Turing machine---i.e., a nondeterministic Turing machine
that can alternate between existential and universal nondeterminism.
The algorithm uses ABoxes~$\AAA$ as data structures, and works
by extending these ABoxes by means of several transformation
rules.
These rules are described in Table~\ref{table:tableau-rules}---%
however, not all rules are applied in the same fashion.
The $\sqcap$-rule, the $\forall$-rule and the $\bot$-rule
are used as deterministic rules, and are applied greedily
whenever they apply.
The $\sqcup$-rule and the $\exists$-rule are nondeterministic rules,
but are used in a different fashion.
The $\sqcup$-rule transforms an ABox~$\AAA$ into one of two
different ABoxes~$\AAA'$ or~$\AAA''$ nondeterministically.
The $\sqcup$-rule is implemented using existential nondeterminism---i.e.,
the algorithm succeeds if at least one of the choices~$\AAA'$ or~$\AAA''$
ultimately leads to the algorithm accepting.
(For more details on existential and universal nondeterminism
and alternating Turing machines, see, e.g.,~\citealt{FlumGrohe06},
Appendix~A.1.)
The $\exists$-rule, on the other hand, transforms an ABox~$\AAA$ into
a unique next ABox~$\AAA'$, but it is a nonmonotonic rule that can
be applied in several ways---the condition can be instantiated in different
ways, and not all of these instantiations remain possible after the rule
has been applied.
The $\exists$-rule is implemented using universal nondeterminism---i.e.,
the algorithm succeeds if all ways of instantiating the condition of
the $\exists$-rule (and applying the rule accordingly) ultimately
lead to the algorithm accepting.
The tableau algorithm works as follows.
Let~$C_0$ be an \ALC{} concept for which we want to decide
satisfiability.
We construct an initial ABox~$\AAA_0 = \SBs C_0(x_0) \SEs$,
where~$x_0 \in N_{O}$ is an arbitrary individual.
We proceed in two alternating phases:~(I) and~(II)---%
starting with phase~(I).
In phase~(I), we apply the deterministic rules (the $\sqcap$-rule,
the $\forall$-rule and the $\bot$-rule) and
the nondeterministic $\sqcup$-rule exhaustively,
until none of these rules is applicable anymore.
For the $\sqcup$-rule we use existential nondeterminism to
choose which of~$\AAA'$ and~$\AAA''$ to use.
When none of these rules is applicable anymore,
we proceed to phase~(II).
In phase~(II), we apply the $\exists$-rule once, using
universal nondeterminism to choose how to instantiate the
condition (and we apply the rule accordingly).
Then, we go back to phase~(I).
Throughout the execution of the algorithm, there is always
a single current ABox~$\AAA$.
Whenever it holds that~$\bot \in \AAA$, the algorithm rejects.
If at some point no rule is applicable anymore---that is,
if at some point we are in phase~(II) and the $\exists$-rule
is not applicable---%
the algorithm accepts.
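Before relating this procedure to the standard tableau algorithm, we summarize
its two-phase control flow schematically (Python-style pseudocode;
\texttt{existential\_choice} and \texttt{universal\_choice} are hypothetical
primitives standing for the two kinds of nondeterminism of the alternating
machine, and the remaining routines are placeholders for the rules of
Table~\ref{table:tableau-rules}):
\begin{verbatim}
# Schematic sketch only: all routines are hypothetical placeholders.
A = {C0(x0)}                            # initial ABox
while True:
    # Phase (I): saturate with the deterministic rules and the OR-rule.
    while phase_one_rule_applies(A):
        if union_rule_applies(A):
            A = existential_choice(apply_union_rule(A))  # A' or A''
        else:
            A = apply_deterministic_rules(A)
    if bottom in A:
        reject()
    # Phase (II): one universally chosen application of the E-rule.
    if not exists_rule_applies(A):
        accept()                        # no rule applicable
    A = universal_choice(apply_exists_rule(A))
\end{verbatim}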
This algorithm essentially works the same way as known
tableau algorithms for \ALC{} concept satisfiability
(see, e.g.,
\citealt{BaaderCalvaneseMcGuinnessNardiPatelSchneider03},
Chapter 2).
The only difference is that in the algorithm described above
the implementation of the $\sqcup$-rule using existential
nondeterminism and the implementation of the $\exists$-rule
using universal nondeterminism is built in.
In the literature, descriptions of tableau algorithms typically leave
freedom in how the search tree is traversed.
One can think of the algorithm described above as traversing
a search tree that is generated by the different (existential and
universal) nondeterministic choices that are made in the execution
of the algorithm.
This search tree is equivalent to the search tree of the usual tableau
algorithm for \ALC{} concept satisfiability.
Thus, we get that the algorithm is correct.
In fact, this algorithm is a reformulation
of the standard algorithm known from the literature
\cite{BaaderCalvaneseMcGuinnessNardiPatelSchneider03}.
\begin{proposition}[{\citealt{BaaderCalvaneseMcGuinnessNardiPatelSchneider03}}]
The tableau algorithm described above
for an alternating polynomial-time Turing machine
correctly decides concept satisfiability for \ALC{}.
\end{proposition}
We will now consider several parameterized variants
of the problem of concept satisfiability for \ALC{}.
These parameters, in a sense, measure the distance of
an \ALC{} concept to the logics \ALE{}, \ALU{} and \AL{},
respectively.
We will make use of the tableau algorithm described above
to establish upper bounds on the complexity of these problems.
Lower bounds follow directly from
Propositions~\ref{prop:ale-conp}--\ref{prop:al-p}.
We begin with the parameterized variant of \ALC{} concept
satisfiability where the parameter measures the distance to \ALE{}.
\begin{theorem}
\label{thm:alc-para-conp}
Concept satisfiability for the logic \ALC{},
parameterized by the number of occurrences
of the union operator~$\sqcup$ in~$C$,
is \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-complete.
\end{theorem}
\begin{proof}
Hardness for \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} follows from the fact
that \ALE{} concept satisfiability is \text{\normalfont co-}{\text{\normalfont NP}}-complete
(Proposition~\ref{prop:ale-conp}).
Any \ALE{} concept is an \ALC{} concept with zero occurrences
of the union operator~$\sqcup$.
Therefore, the problem of \ALC{} concept satisfiability
parameterized by the number~$k$ of occurrences
of the union operator~$\sqcup$ in~$C$
is already \text{\normalfont co-}{\text{\normalfont NP}}-hard for the parameter value~$k = 0$.
From this, it follows that the parameterized problem
is \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-hard \cite{FlumGrohe03}.
To show that the parameterized problem is also contained
in \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}, we describe an algorithm that can be implemented
by an alternating Turing machine that only makes use of universal
nondeterminism and that runs in fixed-parameter tractable time.
This algorithm is similar to the tableau algorithm for \ALC{}
described above, with the only difference that the $\sqcup$-rule is now
not implemented using existential nondeterminism.
Instead, we deterministically iterate over all possible choices that can
be made in executions of the $\sqcup$-rule.
That is, whenever the $\sqcup$-rule is applied, resulting in two possible
next ABoxes~$\AAA'$ and~$\AAA''$, we firstly continue the algorithm
with~$\AAA'$, and if the continuation of the algorithm with~$\AAA'$ failed,
we then continue the algorithm with~$\AAA''$ instead.
Let~$k$ be the number of occurrences of the union operator~$\sqcup$
in~$C$. For each occurrence, the $\sqcup$-rule is applied at most once.
Therefore, the total number of possible choices resulting from executions
of the $\sqcup$-rule is at most~$2^k$.
Therefore, this modification of the algorithm can be implemented by
an alternating Turing machine that only uses universal nondeterminism
and that runs in time~$2^k \cdot |C|^{O(1)}$.
In other words, the problem is in \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}},
and thus is \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-complete.
\end{proof}
\begin{theorem}
\label{thm:alc-para-np}
Concept satisfiability for the logic \ALC{},
parameterized by the number~$k$ of occurrences
of full existential quantification~$\exists R. C$ (i.e., with~$C \neq \top$) in~$C$,
is \text{\normalfont para-}{\text{\normalfont NP}}-complete.
\end{theorem}
\begin{proof}
Hardness for \text{\normalfont para-}{\text{\normalfont NP}} follows from the fact
that \ALU{} concept satisfiability is \text{\normalfont NP}-complete
(Proposition~\ref{prop:alu-np}).
Any \ALU{} concept is an \ALC{} concept with zero occurrences
of full existential quantification~$\exists R. C$ (with~$C \neq \top$).
Therefore, the problem of \ALC{} concept satisfiability
parameterized by the number of occurrences
of full existential quantification~$\exists R. C$ in~$C$
is already \text{\normalfont NP}-hard for the parameter value~$k = 0$.
From this, it follows that the parameterized problem
is \text{\normalfont para-}{\text{\normalfont NP}}-hard \cite{FlumGrohe03}.
To show membership in \text{\normalfont para-}{\text{\normalfont NP}}, we modify the tableau
algorithm for \ALC{}, similarly to the way we did in the proof
of Theorem~\ref{thm:alc-para-conp}.
In particular, we describe an algorithm that can be implemented
by an alternating Turing machine that only makes use of existential
nondeterminism and that runs in fixed-parameter tractable time.
We do so by executing the $\exists$-rule deterministically, instead
of using universal nondeterminism.
That is, instead of using universal nondeterminism to choose which
instantiation of the condition of the $\exists$-rule to use, we iterate
over all possibilities deterministically.
Let~$k$ be the number of occurrences of full existential
quantification~$\exists R. C$ in~$C$.
At each point, there are at most~$k$ different ways of instantiating
the $\exists$-rule.
Moreover, after the $\exists$-rule has been applied at most~$k$
times, it is not applicable anymore.
Therefore, the total number of possible choices to iterate over
is at most~$k^k$.
Therefore, this modification of the algorithm can be implemented by
an alternating Turing machine that only uses existential nondeterminism
and that runs in time~$k^k \cdot |C|^{O(1)}$.
In other words, the problem is in \text{\normalfont para-}{\text{\normalfont NP}},
and thus is \text{\normalfont para-}{\text{\normalfont NP}}-complete.
\end{proof}
\begin{theorem}
\label{thm:alc-fpt}
Concept satisfiability for the logic \ALC{},
parameterized by both (i)~the number of occurrences
of the union operator~$\sqcup$ in~$C$
and (ii)~the number of occurrences
of full existential quantification~$\exists R. C$ in~$C$,
is fixed-parameter tractable.
\end{theorem}
\begin{proof}[Proof (sketch)]
We can modify the alternating polynomial-time
tableau algorithm for \ALC{} concept satisfiability
to work in deterministic fpt-time
by implementing both the $\sqcup$-rule and
the $\exists$-rule deterministically, iterating sequentially
over all possible choices that can be made for these rules.
That is, we combine the ideas behind the proofs of
Theorems~\ref{thm:alc-para-conp} and~\ref{thm:alc-para-np}.
We omit the details of this fpt-time algorithm.
\end{proof}
\subsection{Interpretation of the Results}
The results in this section are summarized
in Table~\ref{table:alc-concept-sat}.
Similarly as for the first case study,
the results for the second case study
show that parameterized complexity theory
can make distinctions that classical complexity theory does not see.
The problems studied in Theorems~\ref{thm:alc-para-conp},%
~\ref{thm:alc-para-np} and~\ref{thm:alc-fpt} are all \text{\normalfont PSPACE}{}-complete
classically, yet from a parameterized point of view their
complexity goes down to \text{\normalfont para-}{\text{\normalfont NP}}, \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} and \text{\normalfont FPT}{}.
The \text{\normalfont para-}{\text{\normalfont NP}}- and \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-completeness results
of Theorems~\ref{thm:alc-para-conp} and~\ref{thm:alc-para-np}
also yield algorithms that (1)~firstly use an fpt-encoding
to an instance of SAT and (2)~then use a SAT solver to decide
the problem (see, e.g.,~\citealt{BiereHeuleMaarenWalsh09}).
\begin{table}[!h]
\centering
\begin{small}
\begin{tabular}{@{\ \ } p{3.5cm} @{\quad}|@{\quad}p{3.8cm} @{\ \ } } \toprule
\multirow{2}{*}{\textit{parameter}} & \textit{complexity of \ALC{}} \\
& \textit{concept satisfiability} \\
\midrule
\hspace{0.5pt}-- & \text{\normalfont PSPACE}{}-c \hfill (Proposition~\ref{prop:alc-pspace}) \\[3pt]
\# of occurrences of~$\sqcup$ & \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}}-c \hfill (Theorem~\ref{thm:alc-para-conp}) \\[3pt]
\# of occurrences of~$\exists$ & \text{\normalfont para-}{\text{\normalfont NP}}-c \hfill (Theorem~\ref{thm:alc-para-np}) \\[3pt]
\# of occurrences of~$\sqcup$ and~$\exists$ & FPT \hfill (Theorem~\ref{thm:alc-fpt}) \\[3pt]
\bottomrule
\end{tabular}
\end{small}
\caption{The parameterized complexity of \ALC{} concept satisfiability
(with no TBoxes) for different parameters.}
%
\label{table:alc-concept-sat}
\end{table}
\section{Case Study 3: A Parameterized Complexity View on Data Complexity}
In this section, we provide our third case study illustrating
the use of parameterized complexity for the analysis
of description logic reasoning.
This third case study is about refining the complexity analysis for
cases where one part of the input is much smaller than another part.
Typically, these cases occur when there is a small TBox and a small
query, but a large database of facts (in the form of an ABox).
What is often done is that the size of the TBox and the query are seen
as fixed constants---and the complexity results are grouped under the
name of ``data complexity.''
In this section, we will look at two concrete polynomial-time data complexity
results for the description logic \ELI{}.
Even though the data complexity view gives the same
outlook on the complexity of these problems,
we will use the viewpoint of parameterized complexity theory to argue
that these two problems in fact have a different complexity.
One of these problems is more efficiently solvable than the other.
We chose the example of \ELI{} to illustrate our point
because it is technically straightforward.
More intricate fixed-parameter tractability results
for conjunctive query answering in description logics
have been obtained in the literature
\cite{BienvenuKikotKontchakovPodolskiiRyzhikovZakharyaschev17,%
BienvenuKikotKontchakovRyzhikovZakharyaschev17,%
KikotKontchakovZakharyaschev11}.
We begin by reviewing the description logic \ELI{},
and the two reasoning problems for this logic that we will
look at (instance checking and conjunctive query entailment).
We will review the classical complexity results for these two problems,
including the data complexity results.
We will then use results from the literature to
give a parameterized complexity analysis for these two
problems, and argue why the parameterized complexity perspective gives
a more accurate view on the complexity of these problems.
\subsection{The Description Logic \ELI{}}
To define the logic \ELI{}, we first consider the logic \EL{}.
The description logic \EL{} is obtained from the logic \ALC{} by forbidding
any use of the negation operator ($\neg$),
the empty concept ($\bot$),
the union operator ($\sqcup$),
and universal quantification ($\forall R. C$).
The description logic \ELI{} is obtained from the logic \EL{} by
introducing \emph{inverse roles}.
That is, \ELI{} concepts are defined by the following grammar
in Backus-Naur form, for~$R \in N_R$ and~$A \in N_C$:
\[ C := A\ |\ \top\ |\ C \sqcap C\ |\ \exists R. C\ |\ \exists R^{-}. C. \]
Interpretations~$\mathcal{I} = (\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})$ for \ELI{}
are defined as interpretations for \EL{}
with the following addition:
\begin{itemize}
\item $(\exists R^{-}. C)^{\mathcal{I}} = \SB x \in \Delta^{\mathcal{I}} \SM$
there exists some~$y \in C^{\mathcal{I}}$ such that $(y,x) \in R^{\mathcal{I}}$ $\SE$.
\end{itemize}
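To make these semantics concrete, the following is a minimal Python
sketch (our illustration, not part of any \ELI{} reasoner) that
evaluates the two existential constructors over a finite
interpretation, with concept extensions given as sets and role
extensions as sets of pairs; conjunction then corresponds simply to
set intersection.
\begin{verbatim}
# Minimal sketch: evaluating ELI constructors over a
# finite interpretation (sets and sets of pairs).
def ext_exists(R, C_ext):
    # (exists R. C): all x with an R-successor in C
    return {x for (x, y) in R if y in C_ext}

def ext_exists_inv(R, C_ext):
    # (exists R^-. C): all x with an R-predecessor in C
    return {x for (y, x) in R if y in C_ext}

# Toy interpretation: domain {1, 2, 3}, A = {1},
# R = {(1, 2), (2, 3)}.
A = {1}
R = {(1, 2), (2, 3)}
print(ext_exists(R, {2, 3}))   # {1, 2}
print(ext_exists_inv(R, A))    # {2}
\end{verbatim}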
\subsection{Classical Complexity Results}
We consider two reasoning problems for the logic \ELI{}.
The first problem that we consider is the problem of
\emph{instance checking}.
In this problem, the input consists of an ABox~$\AAA$,
a (general) TBox~$\mathcal{T}$, an individual name~$a$ and a concept~$C$,
and the question is whether~$\AAA,\mathcal{T} \models C(a)$---%
that is, whether for each interpretation~$\mathcal{I}$ such that~$\mathcal{I} \models \AAA$
and~$\mathcal{I} \models \mathcal{T}$ it holds that~$\mathcal{I} \models C(a)$.
The second problem that we consider is the problem
of \emph{conjunctive query entailment} (which can be seen as a
generalization of the problem of instance checking).
A \emph{conjunctive query} is a set~$q$ of atoms
of the form~$C(v)$ and~$R(u,v)$,
where~$C$ is a concept, where~$R \in N_R$,
and where~$u,v$ are \emph{variables}.
Let~$\Var{q}$ denote the set of variables occurring in~$q$.
Let~$\mathcal{I} = (\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})$ be an interpretation
and let~$\pi$ be a mapping from~$\Var{q}$ to~$\Delta^{\mathcal{I}}$.
We write~$\mathcal{I} \models^{\pi} C(v)$ if~$\pi(v) \in C^{\mathcal{I}}$,
we write~$\mathcal{I} \models^{\pi} R(u,v)$ if~$(\pi(u),\pi(v)) \in R^{\mathcal{I}}$,
we write~$\mathcal{I} \models^{\pi} q$ if~$\mathcal{I} \models^{\pi} \alpha$
for all~$\alpha \in q$, and we
write~$\mathcal{I} \models q$ if~$\mathcal{I} \models^{\pi} q$
for some~$\pi : \Var{q} \rightarrow \Delta^{\mathcal{I}}$.
For any ABox~$\AAA$ and TBox~$\mathcal{T}$, we write~$\AAA,\mathcal{T} \models q$
if~$\mathcal{I} \models q$ for each interpretation~$\mathcal{I}$ such
that~$\mathcal{I} \models \AAA$ and~$\mathcal{I} \models \mathcal{T}$.
In the problem of conjunctive query entailment,
the input consists of a (general) TBox~$\mathcal{T}$,
an ABox~$\AAA$, and a conjunctive query~$q$,
and the question is to decide whether~$\AAA,\mathcal{T} \models q$.
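The semantics of conjunctive queries can likewise be made concrete by
a small brute-force checker. The following Python sketch (our
illustration; the query encoding is invented for the example) decides
whether~$\mathcal{I} \models q$ for a finite interpretation by
enumerating all mappings~$\pi$, which already makes visible
the~$|\Delta^{\mathcal{I}}|^{|\Var{q}|}$ behaviour that resurfaces in
the parameterized analysis below.
\begin{verbatim}
# Minimal sketch: deciding I |= q by enumerating all
# mappings pi from Var(q) to the domain.
from itertools import product

def models(domain, concepts, roles, q):
    # q: atoms ('C', name, v) or ('R', name, u, v)
    vs = sorted({v for a in q for v in a[2:]})
    for image in product(sorted(domain), repeat=len(vs)):
        pi = dict(zip(vs, image))
        if all(pi[a[2]] in concepts[a[1]] if a[0] == 'C'
               else (pi[a[2]], pi[a[3]]) in roles[a[1]]
               for a in q):
            return True
    return False

q = {('C', 'Student', 'v'), ('R', 'supervises', 'u', 'v')}
print(models({1, 2}, {'Student': {2}},
             {'supervises': {(1, 2)}}, q))   # True
\end{verbatim}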
Both the problem of instance checking and the problem
of conjunctive query entailment for \ELI{}
are \text{\normalfont EXPTIME}{}-complete in general.
\begin{proposition}[\citealt{BaaderBrandtLutz05};~\citeyear{BaaderBrandtLutz08}]
\label{prop:eli-ic-exp}
Instance checking for \ELI{} is \text{\normalfont EXPTIME}{}-complete.
\end{proposition}
\begin{corollary}[\citealt{BaaderBrandtLutz05};~\citeyear{BaaderBrandtLutz08}]
\label{cor:eli-cqe-exp}
Conjunctive query entailment for \ELI{} is \text{\normalfont EXPTIME}{}-complete.
\end{corollary}
The results of Proposition~\ref{prop:eli-ic-exp}
and Corollary~\ref{cor:eli-cqe-exp} are typically called ``combined complexity'' results---%
meaning that all elements of the problem statement are given as
inputs for the problem.
To study how the complexity of these problems increases when the size of
the ABox~$\AAA$ grows---and when the size of the TBox~$\mathcal{T}$ and the
size of the query~$q$ remain the same---often different variants of the
problems are studied.
In these variants, the TBox~$\mathcal{T}$ and the query~$q$ are fixed
(and thus not part of the problem input), and only the ABox~$\AAA$ is given
as problem input.
That is, there is a variant of the problem for each choice of~$\mathcal{T}$ and~$q$.
The computational complexity of these problem variants is typically
called the ``data complexity'' of the problem.
From a data complexity perspective,
the problems of instance checking and conjunctive query entailment
for the logic \ELI{} are both polynomial-time solvable.
In other words, from a data complexity point of view
these problems are of the same complexity.
\begin{claim}[\citealt{Krisnadhi07}]
\label{claim:eli-ic-p}
Instance checking for \ELI{} is polynomial-time solvable
regarding data complexity.
\end{claim}
\begin{claim}[\citealt{KrisnadhiLutz07}]
\label{claim:eli-cqe-p}
Conjunctive query entailment for \ELI{} is polynomial-time solvable
regarding data complexity.
\end{claim}
\subsection{Parameterized Complexity Results}
We will argue that the computational complexity of the problems
of instance checking and conjunctive query entailment for \ELI{}---%
when only the ABox~$\AAA$ grows in size---%
is of a vastly different nature.
We will do so by using the parameterized complexity methodology.
Concretely, we will take (the size of) the TBox~$\mathcal{T}$
and (for the case of conjunctive query entailment) the query~$q$
as parameters, and observe that the parameterized
complexity of these two problems is different.
We begin by observing that the algorithm
witnessing polynomial-time data complexity
for the problem of instance checking for \ELI{}
corresponds to an fpt-algorithm for the problem
when parameterized by the size of the TBox~$\mathcal{T}$.
\begin{observation}
\label{obs:eli-ic-fpt}
Instance checking for \ELI{} is fixed-parameter tractable
when parameterized by~$|\mathcal{T}|$.
\end{observation}
\begin{proof}
The algorithm to solve the problem of instance checking for \ELI{}
described by Krisnadhi~\shortcite[Proposition~4.3]{Krisnadhi07}
runs in time~$2^{|\mathcal{T}|^{O(1)}} \cdot |\AAA|^{O(1)}$.
\end{proof}
The polynomial-time data complexity algorithm for the problem
of conjunctive query entailment, on the other hand,
does not translate to an fpt-algorithm,
but to an xp-algorithm instead---%
when the parameter is (the sum of) the size of the
TBox~$\mathcal{T}$ and the size of the query~$q$.
\begin{observation}
\label{obs:eli-cqe-xp}
Conjunctive query entailment for \ELI{} is in \text{\normalfont XP}{}
when parameterized by~$|\mathcal{T}|$ and~$|q|$.
\end{observation}
\begin{proof}
The algorithm to solve the problem of conjunctive query entailment for \ELI{}
described by Krisnadhi and Lutz~\shortcite[Theorem~4]{KrisnadhiLutz07}
runs in time~$(|\AAA| + |\mathcal{T}|)^{|q|^{O(1)}}$.
\end{proof}
For this parameter, the problem of conjunctive
query entailment for \ELI{} is in fact \W{1}-hard---%
and thus not fixed-parameter tractable,
assuming the widely believed conjecture
that~$\text{\normalfont FPT} \neq \W{1}$.
This follows immediately from the
\W{1}-hardness of conjunctive query answering
over databases when parameterized by the
size of the query
\cite[Theorem~1]{PapadimitriouYannakakis99}.
\begin{corollary}[{\citealt{PapadimitriouYannakakis99}}]
\label{cor:eli-cqe-w1-hard}
Conjunctive query entailment for \ELI{} is \W{1}-hard
when parameterized by~$|\mathcal{T}|$ and~$|q|$.
\end{corollary}
\begin{table}[!b]
\centering
\begin{small}
\begin{tabular}{p{2.4cm} @{\ \ \ }|@{\ \ \ } p{2.1cm} @{\ \ \ }|@{\ \ \ } p{2.4cm}} \toprule
& \textit{instance\newline checking}
& \textit{conjunctive\newline query entailm.} \\
\midrule\midrule
combined\newline complexity &
\text{\normalfont EXPTIME}{}-c \newline \phantom{a}\hfill (Prop~\ref{prop:eli-ic-exp}) &
\text{\normalfont EXPTIME}{}-c \newline \phantom{a}\hfill (Cor~\ref{cor:eli-cqe-exp}) \\[5pt]
\midrule
data complexity &
in \P{} \hfill (Claim~\ref{claim:eli-ic-p}) &
in \P{} \hfill (Claim~\ref{claim:eli-cqe-p}) \\[3pt]
\midrule
combined\newline complexity with\newline parameter~$|\mathcal{T}|+|q|$ &
in FPT \newline \phantom{a}\hfill (Obs~\ref{obs:eli-ic-fpt}) &
in XP \hfill (Obs~\ref{obs:eli-cqe-xp}) \newline
\W{1}-h \hfill (Cor~\ref{cor:eli-cqe-w1-hard}) \\[13pt]
\bottomrule
\end{tabular}
\end{small}
\caption{(Parameterized) complexity results for instance checking and
conjunctive query entailment for \ELI{}.}
%
\label{table:data-complexity}
\end{table}
\subsection{Interpretation of the Results}
The results in this section are summarized
in Table~\ref{table:data-complexity}.
Observation~\ref{obs:eli-ic-fpt}
and Corollary~\ref{cor:eli-cqe-w1-hard} show that parameterized
complexity can give a more accurate view on data complexity
results than classical complexity theory.
From a classical complexity perspective, the data complexity
variants of both problems are polynomial-time solvable,
whereas the parameterized data complexity variants of the
problems differ in complexity.
Both problems are solvable in polynomial
time when only the ABox~$\AAA$ grows in size.
However, for instance checking the order of the polynomial is constant
(Observation~\ref{obs:eli-ic-fpt}), whereas for conjunctive query entailment
the order of the polynomial grows with the size of the query~$q$
(Corollary~\ref{cor:eli-cqe-w1-hard}, assuming~$\text{\normalfont FPT} \neq \W{1}$).
This is a difference with enormous effects on the practicality of
algorithms solving these problems
(see, e.g.,~\citealt{Downey12}).
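As a back-of-the-envelope illustration (our numbers, chosen only to
make the contrast vivid): for~$|\mathcal{T}| = |q| = 20$ and an ABox
with~$|\AAA| = 10^6$ facts, a running time of the
shape~$2^{|\mathcal{T}|} \cdot |\AAA|$ amounts to
roughly~$2^{20} \cdot 10^6 \approx 10^{12}$ basic steps, whereas a
running time of the shape~$|\AAA|^{|q|}$ amounts
to~$(10^6)^{20} = 10^{120}$ steps---far beyond anything that is
computationally feasible.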
\section{Directions for Future Research}
The results in this paper are merely an illustrative exposition
of the type of parameterized complexity results
that are possible for description logic reasoning problems when
using less commonly studied concepts
(e.g., the classes \text{\normalfont para-}{\text{\normalfont NP}}, \text{\normalfont para-}{\text{\normalfont co-}{\text{\normalfont NP}}} and \text{\normalfont para-}{\text{\normalfont PSPACE}}).
We hope that this paper sparks a structured investigation of the
parameterized complexity of different reasoning problems for the
wide range of description logics that have been studied.
For this, it would be interesting to consider a large assortment of
different parameters that could reasonably be expected to have
small values in applications.
It would also be interesting to investigate to what extent, say,
\text{\normalfont para-}{\text{\normalfont NP}}-membership results can be used to develop practical
algorithms based on the combination of fpt-time encodings into SAT
and SAT solving algorithms.
\section{Conclusion}
We showed how the complexity study of description logic
problems can benefit from using the framework of parameterized complexity
and all the tools and methods that it offers.
We did so using three case studies.
The first addressed the problem of
concept satisfiability for \ALC{} with respect to nearly acyclic TBoxes.
The second was about the problem of
concept satisfiability for fragments of \ALC{} that are close
to \ALE{}, \ALU{} and \AL{}, respectively.
The third case study concerned
a parameterized complexity view on the notion of data complexity
for instance checking and conjunctive query entailment for \ELI{}.
Moreover, we sketched some directions for future research,
applying (progressive notions from) parameterized complexity theory
to the study of description logic reasoning problems.
\subsubsection{Acknowledgments.}
This work was supported by the Austrian Science Fund (FWF),
project~J4047.
\DeclareRobustCommand{\DE}[3]{#3}
\bibliographystyle{aaai}
|
2,869,038,154,406 | arxiv | \section{INTRODUCTION}
The utility of the Polyakov loop ${\cal P}$ makes
finite temperature lattice gauge theory a natural place to study
confinement. A key issue is the mechanism which drives the Polyakov
loop expectation value to zero in the confined phase.
In section 2, it is shown that a large external field coupled to
$F_{\mu\nu}$ restores confinement above the deconfinement temperature,
explicitly demonstrating a connection between the chromomagnetic field
and confinement. In section 3, analytical results for $SU(2)$ give a
form for the coupling between ${\cal P}$ and $F_{\mu\nu}$.
\section{RESTORATION OF CONFINEMENT ABOVE $T_C$ BY AN APPLIED EXTERNAL FIELD}
An external field $J_{\mu\nu}$ can be coupled to the gauge field
$F_{\mu\nu}$ by adding to the lattice action a term of the form
\begin{equation}
\sum_{x} \sum_{\mu > \nu} \sum_{a} {1 \over N}
\left[ J_{\mu\nu}^a(x) F_{\mu\nu}^a(x) \right]
\end{equation}
where the sums are over lattice sites, directions in space-time and
directions in the Lie algebra of the gauge group.
For the lattice form of the field strength $F_{\mu\nu}$, the simplest definition
is used:
\begin{equation}
F_{\mu\nu}(x) = {1 \over 2 i} \left[ U_{\mu\nu}(x) - U_{\mu\nu}^{+}(x)
\right].
\end{equation}
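As a concrete illustration of this definition (a minimal numpy sketch
for a single $SU(2)$ plaquette; this is our illustration, not the
simulation code used for the results below), the antihermitian
combination reproduces the expected field strength:
\begin{verbatim}
# Minimal sketch: lattice F = (U - U^+)/(2i) for one
# sample SU(2) plaquette matrix U.
import numpy as np
from scipy.linalg import expm

tau3 = np.array([[1, 0], [0, -1]], dtype=complex)
U = expm(1j * 0.3 * tau3)        # sample plaquette
F = (U - U.conj().T) / 2j
print(np.allclose(F, np.sin(0.3) * tau3))   # True
\end{verbatim}
A source coupling of the form introduced above is then proportional
to~$Tr \left[ J F \right]$ for a matrix-valued source~$J$ in the same basis.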
For lattice simulations, it is most convenient to not fix the gauge,
as would be necessary in the continuum. As a consequence, the partition
function $Z$ satisfies
\begin{equation}
Z\left[ g(x) J_{\mu\nu}(x) g^{+}(x) \right] = Z\left[ J_{\mu\nu}(x) \right].
\end{equation}
For the case we consider where $J$ is non-zero in a single hyperplane,
this property of $Z$ under local gauge transformations implies that
$Z$ depends only on the eigenvalues of $J$.
Figure 1 shows the results of simulations of
$SU(3)$ lattice gauge theory on a $16^3 \times 4 $ lattice
at $\beta = 6.0$,
plotting the Polyakov loop $P$ against the sources $J_{12}^3$ and $J_{34}^3$.
The superscript $3$ indicates that the source is in the
$\lambda_3$ direction in the gauge group, while the subscript
indicates sources coupled to $F_{\mu\nu}$ in the $12$ and
$34$ direction, thus coupling to the real chromomagnetic field and
the imaginary chromoelectric field, respectively.
With $J=0$, $\beta=6.0$ at $N_t = 4$ is well into the deconfined
phase of finite temperature $SU(3)$.
The phase transition to the confined phase for
sufficiently large $J$ is obvious in the figure.
Examination of time series and histograms indicates that the
transition observed in Figure 1 is most likely first order.
A comparison of $J_{12}^3$ and $J_{12}^8$ shows no evidence for
asymmetry in the gauge group: $\lambda_3$ and $\lambda_8$ appear
equivalent.
A small difference is seen between $J_{12}^3$
and $J_{34}^3$,
consistent with the more direct connection of the temporal plaquettes
to Polyakov loops via the link elements $U_4$.
\vspace{-0.3in}
\begin{figure}[htb]
\epsfxsize=75mm \epsfbox{pl3.eps}
\vspace{-0.3in}
\caption{$Tr_F ( {\cal P} )$ versus source $J$.}
\label{fig:fig1}
\end{figure}
\vspace{-0.3in}
The restoration of symmetry by a sufficiently strong external field
coupled to the chromomagnetic field is reminiscent of the restoration
of the normal phase from the superconducting phase by a strong
magnetic field.
It is natural from this point of view to consider
the system in a dimensionally reduced form, in which the Polyakov
loop plays the role of a Higgs field in the adjoint representation,
coupled to a three-dimensional gauge field.
A complementary approach is to view the effect of the external
field as changing the effective coupling constant. For the simple case
considered here, it is possible to find an alternative form for the
action by explicitly integrating over all local gauge transformations.
The leading term in $J_{\mu\nu}$ is proportional to
$ Tr \left[ J_{\mu\nu}^2 \right] Tr \left[ F_{\mu\nu}^2 \right] $,
with a negative sign, indicating a reduction in the effective value
of the gauge coupling $\beta$.
\section{QUARK CONFINEMENT BY CONSTANT FIELDS}
One plausible scenario for confinement
is that the coupling between the local
gauge field $F_{\mu\nu}$ and the adjoint Polyakov loop produces an effective
action which leads to two different phases \cite{MeOgGluon}.
In the
low temperature phase, there is a magnetic condensate and the Polyakov loop
indicates confinement; in the high temperature phase the magnetic condensate
vanishes, the Polyakov loop indicates deconfinement, and
the contribution of the thermal gauge boson gas dominates
the free energy.
This occurs because finite temperature effects
naturally couple the local gauge field to
Polyakov loops\cite{MeOgI,MeOgII,MeOgIII,MeOgIV}.
For $SU(N)$,
the fundamental and adjoint representation Polyakov loops are related by
\begin{eqnarray}
Tr_A ( {\cal P} ) = | Tr_F ( {\cal P} ) |^2 - 1.
\end{eqnarray}
Clearly, when $Tr_A ( {\cal P} )$ assumes its minimum value of $-1$,
$Tr_F ( {\cal P} )$ assumes the value $0$.
Thus, one way to produce confinement in the low temperature phase is
for the free energy to be minimized by minimizing the expected value
of the trace of the adjoint Polyakov loop.
The case of a constant background chromomagnetic field in $SU(2)$
illustrates this possibility.
The color magnetic field
and the Polyakov loop are taken
to be simultaneously diagonal, and
the color magnetic field $H$ points in the $x_3$ direction.
The Polyakov loop is specified by a constant $A_0$ field,
given in the adjoint representation by
$A_0 = \phi \tau_3 / {2 \beta}$.
The trace of the Polyakov loop is then given by
$Tr_F ( {\cal P} ) = 2 \cos (\phi / 2 )$
in the fundamental representation and by
$Tr_A ( {\cal P} ) = 1 + 2 \cos (\phi )$
in the adjoint representation.
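(As a check against the $SU(N)$ relation above:
$|Tr_F ( {\cal P} )|^2 - 1 = 4 \cos^2 (\phi / 2) - 1
= 1 + 2 \cos (\phi) = Tr_A ( {\cal P} )$.)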
We take the external magnetic field to have the form
$A_2 = H x_1 \tau_3 / 2$
which gives rise to a chromomagnetic field
$ F_{12} = H \tau_3 / 2 $.
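In continuum notation this is a one-line check: the commutator term
of the non-abelian field strength vanishes here because~$A_1 = 0$ and
all potentials point in the single colour direction $\tau_3$, leaving
\[ F_{12} = \partial_1 A_2 - \partial_2 A_1
= \partial_1 \left( H x_1 {\tau_3 \over 2} \right)
= H \, {\tau_3 \over 2}. \]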
The one-loop contribution to the free energy
has the usual form \cite{NiOl,NiSa,vanBaal} of a sum over the logarithms
of modes. The crucial contributions to this mode sum comes from
modes of the form
\begin{eqnarray}
\left[ \left( \omega_n - {\phi \over \beta} \right)^2 +
2 H \left( m + {1 \over 2} \pm 1 \right) + k_3^2 \right]
\label{e2.1}
\end{eqnarray}
where the $\omega_n = 2 \pi n / \beta$ are the usual Matsubara frequencies, and
where the terms $2 H (m + 1/2 \pm 1)$ are the allowed Landau levels of the
gauge field in a background chromomagnetic field.
When $ \phi = 0$,
the $n = 0$ and $m = 0$ modes give rise to tachyonic modes
for $k_3$ sufficiently small;
these in turn give rise to an imaginary part in the free energy \cite{NiOl}.
These same modes will give a strictly real
factor to the determinant provided
\begin{eqnarray}
\beta \sqrt{H} < \phi < 2 \pi - \beta \sqrt{H}.
\label{e2.3}
\end{eqnarray}
The renormalized effective potential has real component
\begin{eqnarray}
V_R &=& {11 H^2 \over 48 {\pi}^2 } \,
\ln \left( {H \over {\mu}_0^2} \right)
- { 2 \pi^2 \over 90 \beta^4 }
- { H^{3/2} \over {\pi}^2 \beta} \, \times
\nonumber \\
& & \sum_{n = 1}^{\infty} \,
{\cos (n \phi) \over n}
\bigg[ K_1 (n \beta \sqrt{H} )
- {\pi \over 2} Y_1 (n \beta \sqrt{H} )
\nonumber \\
& & \quad + 2 \sum_{m = 0}^{\infty} \,
\sqrt{2 m + 3}\, K_1 [n \beta \sqrt{(2 m + 3)H}\, ]
\bigg]
\label{e2.11}
\end{eqnarray}
and imaginary component
\begin{eqnarray}
V_I = - { H^2 \over 8 \pi} - {H^{3/2} \over 2 \pi \beta} \,
\sum_{n = 1}^{\infty} \, {\cos (n \phi ) \over n} \,
J_1 ( n \beta \sqrt{H} ).
\label{e2.12}
\end{eqnarray}
At low
temperatures, minimization of $Re(V)$ leads to $\phi=\pi$ being
preferred. This is shown in Figure 2, which plots the real part of the
effective potential versus $H/\mu^2$ at $T/\mu = 0.25$
for both $\phi=0$ and $\phi=\pi$.
Unfortunately,
examination of $V_I$ shows that the lowest minimum is not stable,
lying just to the right of the stable region, so that this
background field configuration remains unstable.
The confining solution $(\phi = \pi)$
is preferred over the perturbative vacuum
only for sufficiently low temperatures.
The tachyonic mode is responsible for $\phi=\pi$ being favored at low
temperature, and appears in the expression for $V_R$ as the $Y_1$ term.
Analysis shows that, at one loop for an arbitrary abelian background field,
tachyonic modes must occur when $\phi=0$ if $\phi=\pi$ is to
be favored at low temperature. If there are no tachyonic modes,
$\phi=0$ is always favored.
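The qualitative behaviour shown in Figure 2 can be probed by a short
numerical evaluation of the real part of the effective potential
above. The following Python sketch is our illustration only
(units~${\mu}_0 = 1$, both sums truncated at finite order); it is not
the code used to produce the figure:
\begin{verbatim}
# Minimal sketch: evaluate V_R with mu_0 = 1,
# truncating both sums at finite order.
import numpy as np
from scipy.special import k1, y1

def V_R(H, T, phi, n_max=40, m_max=40):
    b = 1.0 / T
    v = (11 * H**2 / (48 * np.pi**2)) * np.log(H) \
        - 2 * np.pi**2 / (90 * b**4)
    for n in range(1, n_max + 1):
        x = n * b * np.sqrt(H)
        br = k1(x) - 0.5 * np.pi * y1(x)
        br += 2 * sum(np.sqrt(2*m + 3)
                      * k1(n * b * np.sqrt((2*m + 3) * H))
                      for m in range(m_max + 1))
        v -= H**1.5 / (np.pi**2 * b) * np.cos(n * phi) / n * br
    return v

for H in (0.5, 1.0, 2.0):   # compare phi = 0 vs phi = pi
    print(H, V_R(H, 0.25, 0.0), V_R(H, 0.25, np.pi))
\end{verbatim}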
\begin{figure}[htb]
\epsfxsize=75mm \epsfbox{v-low-t.eps}
\vspace{-0.3in}
\caption{$V/\mu^4$ at $T/\mu = 0.25$ for $\phi = 0$ and $\phi = \pi$.}
\label{fig:fig2}
\end{figure}
\section{CONCLUSIONS}
There is strong evidence from simulations and perturbation theory for the
relevance of the chromomagnetic field in confinement, but we are still
far from building a realistic model of the QCD vacuum.
Directions for future research along the
lines discussed here include
simulations and analytical work
to determine the effect of a quenched
random field $J$ and
to explore the behavior of instantons, monopoles and vortices
in an external field.
|
2,869,038,154,407 | arxiv | \section{Introduction}
Multimodality research is an emerging field of study which examines how communication builds on appropriate combinations of multiple modes of expression, such as natural language, illustrations, drawings, photography, gestures, layout and many more. Modern multimodality theory has developed a battery of theoretical concepts to support more strongly \emph{empirical} analysis of such complex communicative situations and artefacts. Several core concepts, including semiotic mode \cite{bateman2011,kress2014}, medium \cite{batemanetal2017,bateman2017} and genre \cite{bateman2008,hiippala2015a}, theorise how individual modes of expression are structured and what precisely enables them to combine and co-operate with each other. Although diagrams are often acknowledged to draw on multiple modes of expression \cite{tversky2017}, they have rarely been approached from the perspective of multimodality research. In this article, we bring the state-of-the-art in multimodality research to bear on diagrams and introduce what we tentatively term the \emph{diagrammatic mode}. By doing so, we seek to complement previous discussions of diagram syntax and semantics \cite{purchase2014} by introducing a multimodal, discourse-oriented perspective to diagrams research.
To exemplify the proposed approach, we discuss two recently published multimodal diagram corpora building on the same data but differing in terms of the analytical frameworks used and how their annotations are created. We argue that adopting a multimodal, corpus-based approach to diagrams has several benefits, chief among them being a deeper empirically-supported understanding of diagrammatic representations and their variation in context, which also has practical implications for the computational modelling of diagrams.
\section{A multimodal perspective on diagrams}
The notion of multimodality is not always understood in the same way across the diverse fields of study where the concept has been picked up -- those fields include, among others, text linguistics, spoken language and gesture research, conversation analysis, and human-computer interaction. More recently, Bateman, Wildfeuer and Hiippala \cite{batemanetal2017} have proposed a generalised framework for multimodality that extends beyond previous approaches by offering a common set of concepts and an explicit methodology for supporting empirical research regardless of the `modes' and materials involved. This broadly linguistically-inspired, semiotically-oriented approach is the framework we adopt here; it is with respect to this orientation that the central concepts of semiotic mode, medium, and genre mentioned above receive a formal definition. The result is a general foundation capable of addressing all forms of multimodal representation, including diagrammatic representations. Our general orientation is then to focus particularly on how we make and exchange meanings multimodally, drawing directly on the framework the general orientation provides.
Within this framework, the core concept of semiotic mode is defined graphically as shown on the left-hand side of Figure \ref{fig:semiotic_mode}. This sets out the three distinct `semiotic strata' that are always needed for a fully developed semiotic mode to operate \cite{bateman2011,batemanetal2017}. Starting from the lower portion of the inner circle, the model requires all semiotic modes to work with respect to a specified \emph{materiality} which a community of users regularly `manipulates' in order to leave traces for communicative purposes; second, these traces are organised (paradigmatically and syntagmatically) to form \emph{expressive resources} that characterise those material distinctions specifically pertinent for the semiotic mode at issue; and finally, those expressive resources are mobilised in the service of communication by a corresponding \emph{discourse semantics}, whose operation we show in a moment. This general model places no restrictions on the kinds of materiality that may be employed; for current purposes, however, we focus on materialities exhibiting a 2D spatial extent.
\begin{figure}
\includegraphics[width=1\textwidth]{semiotic_mode.pdf}
\caption{A theoretical model of a semiotic mode and a sketch of the fundamentals for a diagrammatic mode}
\label{fig:semiotic_mode}
\end{figure}
Building on this scheme, we set out on the right-hand side of the figure an initial characterisation of the specific properties of the diagrammatic mode. The 2D materiality of the diagrammatic mode not only allows the creation of spatial organisations in the form of layout, but is also a prerequisite for realising many of the further expressive resources commonly mobilised in diagrams, such as written language and arrows, lines, glyphs and other diagrammatic elements, which also inherently require (at least) a 2D material substrate. An example of the corresponding expressive resources typical of the diagrammatic mode is offered by the ``meaningful graphic forms'' identified by Tversky et al. \cite[\p{222}]{tverskyetal2000}, such as circles, blobs and lines. These can also be readily combined into larger syntagmatic organisations in diagrams such as route maps, as Tversky et al. illustrate \cite[\p{223}]{tverskyetal2000}. In fact, theoretically, the diagrammatic mode can draw on any expressive resource capable of being realised on a materiality with a 2D spatial extent, although in practice these choices are constrained by what the diagram attempts to communicate and the sociohistorical development of specific multimodal \emph{genres} by particular communities of practice \cite{bateman2008,hiippala2015a}. Finally, it is the task of the third semiotic stratum of discourse semantics to make the use of expressive resources interpretable in context.
Embedding expressive resources into the discourse organisations captured by a discourse semantics is crucial to our treatment. This addition formally captures how (and why) fundamental graphic forms, such as those identified by Tversky et al. \cite{tverskyetal2000}, may receive different interpretations in different contexts of use. Put differently, the purpose of discourse semantics is to identify candidate interpretations which are then resolved \emph{dynamically} against the context in which the expressive resources appear, typically applying defeasible abductive principles as characterised for language by, for example, Asher and Lascarides \cite{lascaridesasher2003}. The notion of discourse semantics defined by Bateman, Wildfeuer and Hiippala extends these mechanisms to apply to all forms of expression so that, as Bateman observes, ``discourse semantic rules control when and how world knowledge is considered in the interpretation process'' \cite[\p{22}]{bateman2011}.
Principles of this kind can readily be observed in diagrams and their interpretations.
Some combinations of expressive resources, such as written labels and lines that pick out a part of an illustration, allow their meaning to be recovered from their immediate context without extensive world knowledge -- this means that their discourse semantic treatment requires relatively little reference to contextual information. Conversely, using arrows and lines to represent processes in the real world naturally demands the viewer to relate whatever is being represented to that world knowledge \cite{alikhanistone2018}. Again, the principal formal difference lies in the range of constraints specified in the discourse semantics. The contribution of discourse semantics is also not limited to guiding the interpretation of \emph{local} discourse relations that hold between two or more diagram elements, because such local interpretations are also always evaluated within the context provided by the \emph{global} discourse organisation, which may as a consequence already nudge a viewer towards particular candidate interpretations rather than others.
The diagrammatic mode now appears sufficiently stable to have outgrown specific materialities since diagrammatic expressive resources can generally be recognised whenever they appear.
For example, one usually recognises a diagram when encountered in a newspaper, scientific publication, school textbook or some other medium purely on the basis of the kinds of material regularities present.
Media also act as `incubators' for new mode combinations \cite[\p{124}]{batemanetal2017}. School textbooks, for example, constitute a medium which regularly combines the diagrammatic mode with other modes of expression to support learning \cite{guo2004,roerhrich2016}. And aerial photography often draws on the diagrammatic mode in the form of labels and lines to support the interpretation of photographic images \cite[\p{280}]{batemanetal2017}. In all such cases, it is the well-developed discourse semantics of the diagrammatic mode that allows it to
`latch' on to other semiotic modes. Underlying properties of the medium that arise from its materiality can also foster new mode combinations that involve the diagrammatic mode; for example, when the materiality allows manipulation, such as the capability for \emph{interactivity} in screen-based media, we begin to find interactive data visualisations on digital media \cite{hiippala2019}. Finally, the combination of materiality, expressive forms and discourse interpretations provides a robust foundation for further considerations of diagrammatic reasoning as well. Since in general a semiotic mode may draw on any kind of material regularity, this readily includes semiotic systems relying substantially on iconicity, while the discourse description provides mechanisms akin to metaphor construction. Although we cannot go into detail here, the relation between multimodality theory and Peircean views of semiosis, including iconicity, is discussed at some length by Bateman \cite{bateman2018}.
To summarise, diagrams are shaped by both the medium they occur in and the genre they participate in. This means that, ideally, when building multimodal corpora for diagrams research, both medium and genre should be accounted for. In reality, however, multimodal corpora that strongly anchor diagrams to their context of occurrence while simultaneously providing a rich description of their multimodal structure remain non-existent. With this point in mind, we now turn to discuss two recent diagram corpora and their description of the diagrammatic mode from the perspective of multimodality.
\section{Multimodal diagram corpora}
In this section, we introduce two interrelated diagram corpora, AI2D \cite{kembhavietal2016} and AI2D-RST \cite{hiippalaetal2019-ai2d}, which build on one another; AI2D-RST covers a subset of AI2D.
\subsection{The Allen Institute for Artificial Intelligence Diagrams dataset} \label{sec:ai2d}
The Allen Institute for Artificial Intelligence Diagrams dataset (AI2D) was developed to support research on computational processing of diagrams \cite{kembhavietal2016}. AI2D contains a total of 4903 diagrams that represent topics in elementary school natural sciences, ranging from life and carbon cycles to human physiology and food webs, to name just a few of the 17 categories in the dataset. Because the diagram images were scraped from the web using school textbook chapter headings as search terms, the corpus covers a wide range of diagrams created by producers with various degrees of expertise with the diagrammatic mode, such as students, teachers and professional graphic designers. As the diagrams have been removed from their original context during scraping, little may be said about the medium they originated in. For this reason, it may be suggested that AI2D \emph{approximates} how diagrams are used in learning materials realised using various media.
AI2D models four types of diagram elements: text, blobs (graphic elements), arrows and arrowheads. Although these elements cover the main expressive resources mobilised in these diagrams, no further distinctions are made between visual expressive resources, such as drawings, illustrations and photographs, for instance. Each diagram in the dataset is nevertheless provided with several layers of description. First of all, instances of the four diagram element types were segmented from the original diagram image by crowd-sourced workers on Amazon Mechanical Turk\footnote{https://www.mturk.com}. The elements identified during this \emph{layout segmentation} provide a foundation for a \emph{Diagram Parse Graph} (DPG), which represents the diagram elements as nodes, whereas the edges define their semantic relations, which are described using ten relation definitions drawn from the framework proposed by Engelhardt \cite{engelhardt2002}.
\begin{figure}[p]
\centering
\includegraphics[width=0.68\textwidth]{4210.png}
\includegraphics[width=0.68\textwidth]{segmentation_4210.png} \\[0.5cm]
\includegraphics[width=0.68\textwidth]{graph_4210.png}
\caption{Original diagram image (top), layout segmentation (middle) and Diagram Parse Graph (bottom) for diagram \#4210 in AI2D. In the layout segmentation, the original image has been converted into grayscale to highlight the crowd-sourced layout segmentation. Each layout segment is coloured according to diagram element type (blue: text; red: blob; arrow: green; arrowhead: orange) and assigned a unique identifier. These colours and identifiers are carried over to the Diagram Parse Graph.}
\label{fig:ai2d_example}
\end{figure}
Figure \ref{fig:ai2d_example} shows as an example the treatment given to a diagram originally scraped from the web, diagram \#4210 in AI2D. In the middle of the figure we see its crowd-sourced layout segmentation and, below that, its corresponding DPG. The original diagram represents a rock cycle, that is, transitions between different types of rock, using a combination of an illustration (a cross-section) whose parts are described using written language. These parts set up the stages of the rock cycle, which are then related to one another using arrows.
For the formation of the AI2D corpus, annotators were instructed to identify units and relationships. As the resulting layout segmentation image in the middle of the figure shows, text blocks and arrowheads were segmented using rectangular bounding boxes, whereas more complex shapes for arrows and various types of graphics were segmented using polygons. The layout segmentation illustrates well how crowd-sourced annotators tend to segment diagrams to quite uneven degrees of detail. Here the entire cross-section is assigned to a single blob (B0), although a more accurate description would be to segment separate parts of the cross-section, such as magma and various layers of rock. We will see shortly how such omissions readily compromise the accurate description of semantic relations in the DPG.
The edges in the DPG carry semantic relations such as \textsc{arrowHeadTail} between arrow A2 and arrowhead H2 in the upper part of the diagram, which act as a connector in an \textsc{interObjectLinkage} relation between text blocks T1 (`Magma flows to surface ...') and T2 (`Weathering and erosion') (see the layout segmentation and DPG in Figure \ref{fig:ai2d_example}). As these relations exemplify, the relations drawn from Engelhardt \cite{engelhardt2002} are intended to cover local relations that hold between diagram elements positioned close to each other or connected using arrows or lines \cite[\p{239}]{kembhavietal2016}, but neglect the relations needed to describe the global organisation of the diagram, that is, relations between units that are made up of multiple elements.
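For concreteness, a graph of this kind is straightforward to encode
with standard tooling. The following Python sketch (our illustration
using the networkx library; it mimics, but is not, the AI2D
distribution format) encodes the fragment just discussed:
\begin{verbatim}
# Minimal sketch: a DPG fragment with typed nodes and
# semantic relations on the edges.
import networkx as nx

dpg = nx.DiGraph()
for node, kind in [('T1', 'text'), ('T2', 'text'),
                   ('A2', 'arrow'), ('H2', 'arrowhead')]:
    dpg.add_node(node, kind=kind)

dpg.add_edge('A2', 'H2', relation='arrowHeadTail')
dpg.add_edge('T1', 'T2', relation='interObjectLinkage',
             connector='A2')
print(nx.get_edge_attributes(dpg, 'relation'))
\end{verbatim}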
Crowd-sourcing coherent graph-based descriptions of diagrams is certainly a challenging task, which may partly explain why isolated nodes and multiple connected components are commonly found in AI2D DPGs. This is also exemplified in the DPG in Figure \ref{fig:ai2d_example}, in which the diagrammatic representation is used to describe a rock cycle, but this cyclic nature is not reflected by the structure of the DPG, although the AI2D annotation schema does in principle provide the relation definitions necessary for describing this process, such as \textsc{interObjectLinkage} and \textsc{intraObjectRegionLabel} \cite[\p{239}]{kembhavietal2016}.
This problem emerges from insufficient detail in the layout segmentation. The crowd-sourced annotators were not instructed to decompose cross-sections or other visual expressive resources capable of demarcating meaningful regions. The blob B0, which covers the entire cross-section, is as a consequence not segmented into its component parts -- the stages of the rock cycle with labels such as `Magma' (T5) and `Metamorphic rock forms from heat and pressure' (T8) -- which pick out particular regions of the cross-section through visual \emph{containment} \cite[\p{47}]{engelhardt2002} and set up the stages of the cycle. Because the cross-section (B0) constitutes a single unit, an otherwise applicable relation such as \textsc{intraObjectRegionLabel} cannot be used to pick out the corresponding region, because the regions are not available in the inventory of elements. As such, the description is not sufficiently detailed to represent a cyclic structure.
The challenges related to decomposing diagrammatic representations described here relate to the well-known problem of identifying `units' in any visually-based semiotic mode. Bateman and Wildfeuer \cite{batemanwildfeuer2014b} consider this issue in the medium of comics and argue for a discourse-based approach to identifying analytical units, whereby the discourse organisation of some larger unit (e.g. a panel in a comic or an entire diagram) may determine which elements are to be picked up for interpretation in a given context. In other words, their discourse semantics simultaneously supports \emph{decomposing} larger units into their component parts and \emph{resolving} their potential interrelations, always with the goal of maximising \emph{discourse coherence} \cite[\p{377}]{batemanwildfeuer2014b}. This suggests that for visual media, such as diagrams, it will often be more effective not to operate with a pre-defined inventory of elements (i.e., defining units bottom-up), but instead to allow the inventory of relevant elements to change dynamically as interpretations are made and updated (top-down). This is precisely the mechanism that discourse semantics supports. In the next section, we show how this approach can be used for a more effective design of a multimodal corpus of diagrams.
\subsection{AI2D-RST -- a multimodally-motivated annotation schema}
One formalism that has frequently been applied to the description of discourse semantics in multimodality research is Rhetorical Structure Theory (RST), which was developed as a theory of text organisation and coherence in the 1980s \cite{mannthompson1988}. Originally, RST attempted to describe why well-formed texts appear coherent, or why individual parts of a text appear to contribute towards a common communicative goal \cite{taboadamann2006a}. As a part of an extension to multimodal discourse, RST has been used to describe multimodal discourse structures in various media \cite{bateman2008,taboadahabel2013,hiippala2015a}. Most recently, RST has been applied to diagrams in the AI2D dataset as a part of an alternative annotation schema that seeks to provide a more multimodally-informed description of diagrammatic representations \cite{hiippalaetal2019-ai2d}.
This dataset, called AI2D-RST, covers 1000 diagrams from the AI2D corpus, annotated using a new schema by experts trained in the use of the schema \cite{hiippalaetal2019-ai2d}. The development of AI2D-RST was motivated by the observation that the AI2D annotation schema introduced above conflates descriptions of different types of multimodal structure \cite{hiippalaorekhova2018}, such as implicit semantic relations and explicit connections signalled using arrows and lines, into a single DPG. These can be pulled apart multimodally to better understand how these structures contribute to diagrammatic representations.
For this reason, AI2D-RST represents each diagram using three distinct graphs corresponding to three distinct, but mutually complementary, layers of annotation: grouping, connectivity and discourse structure. Figure \ref{fig:ai2d-rst_example} shows examples of all three graphs for the diagram introduced in Figure \ref{fig:ai2d_example}. To begin with, the grouping layer (top right) organises diagram elements that are likely to be perceived as belonging together into visual perceptual groups, which are loosely based on Gestalt properties \cite{ware2012}. The resulting organisation is represented using a hierarchical tree graph. Grouping nodes with the prefix `G' are added to the graph as parents to nodes that are grouped together during annotation. The grouping nodes can be picked up in subsequent annotation layers to refer to a group of diagram elements and thereby serve as a foundation for the description of both the connectivity and discourse structure layers.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{segmentation_4210.png}
\includegraphics[width=0.5\textwidth]{grouping_4210.png}
\includegraphics[width=0.5\textwidth]{connectivity_4210.png}
\includegraphics[width=0.5\textwidth]{rst_4210.png}
\caption{The original crowd-sourced layout segmentation from AI2D (top left) and AI2D-RST grouping (top right), connectivity (bottom left; with two subgraphs) and discourse structure (bottom right) graphs for diagram \#4210. Note that unlike AI2D, AI2D-RST does not model arrowheads, which is why they are absent from the graphs. This information can be retrieved from the original AI2D annotation if needed.}
\label{fig:ai2d-rst_example}
\end{figure}
The connectivity layer (bottom left) is represented using a cyclic graph whose edges represent visually explicit connections signalled using arrows and lines in the diagram. As the connectivity graph in Figure \ref{fig:ai2d-rst_example} shows, these cover explicit connections only. The diagram is thus revealed as leaving several gaps in its characterisation of the rock cycle, namely between the stages represented using text blocks T7 (`Magma cools beneath surface ...') and T1 (`Magma flows to surface ...'), and between T2 (`Weathering and erosion') and T3 (`Transport'). It is consequently left to the viewer to fill in such connections during discourse interpretation. These connections are explicitly not included in the description of connectivity in order to capture discrepancies between explicit visual signals, such as arrows and lines, and implicit meanings that may then only be recovered from the discourse structure.
In AI2D-RST, such implicit discourse relations are handled by the third layer, that of discourse structure, which uses Rhetorical Structure Theory (RST) \cite{mannthompson1988,taboadamann2006a} to describe semantic relations between diagram elements. The relations defined by RST are intended to capture the communicative intentions of the designer, as judged by an analyst, and are added to the discourse structure graph as nodes prefixed with the letter `R' as shown in the graph bottom right in Figure \ref{fig:ai2d-rst_example}; the edges of the graph describe which role an element takes in the discourse relation, namely nucleus (`n') or satellite (`s'). The notion of \emph{nuclearity} is a key criterion in definitions of semantic relations in RST. Following the original RST definitions, AI2D-RST represents the discourse structure layer using a tree graph: if a diagram element is picked up as a part of multiple rhetorical relations, a duplicate node is added to the graph to preserve the tree structure.
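The same tooling can represent the discourse structure layer. The
sketch below (our illustration, loosely following the graph in
Figure \ref{fig:ai2d-rst_example}; it is not the AI2D-RST file
format) adds relation nodes prefixed with `R' and labels each edge
with the nuclearity role of the participant:
\begin{verbatim}
# Minimal sketch: an RST-style discourse graph with
# relation nodes and nucleus/satellite edge labels.
import networkx as nx

rst = nx.DiGraph()
rst.add_node('R1', relation='identification')
rst.add_node('R7', relation='cyclic sequence')
rst.add_edge('R1', 'A0', role='n')  # identified arrow
rst.add_edge('R1', 'T5', role='s')  # identifying label
rst.add_edge('R7', 'R1', role='n')  # relations may nest
# Tree constraint: an element taking part in several
# relations is duplicated rather than shared.
rst.add_node('T5-copy', copy_of='T5')
\end{verbatim}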
In Figure \ref{fig:ai2d-rst_example}, the specific rhetorical relations in the bottom right graph include \textsc{identification} (R1--R6), \textsc{cyclic sequence} (R7) and \textsc{background} (R8). Since AI2D-RST still builds on the inventory of diagram elements provided by the original layout segmentation in AI2D, this requires some compromises in the RST analysis. Here the original annotator of the diagram had concluded that most text instances serve to identify what the arrows stand for, namely stages of the rock cycle. The image showing the cross-section (B0), in turn, is placed in a \textsc{background} relation to the \textsc{cyclic sequence} relation. The definition of a \textsc{background} relation \cite{mannthompson1988} states that the satellite (B0) increases the ability to understand the nucleus (R7), which is the top-level relation assigned to the diagram's representation of the entire cycle.
This is however a very crude description of the discourse structure of the diagram in Figure \ref{fig:ai2d-rst_example} because B0 is actually providing far more information. This information is crucial for understanding what the diagram is attempting to communicate \emph{but we cannot know that such a decomposition is necessary without considering the rhetorical discourse organisation of the diagram as a whole}. For example, if we instead take a hypothetical illustration of a volcano accompanied by the text `Volcano', this would \emph{not} require decomposition because the most plausible relation between the text and the illustration in this case is simply that of identification.
This demonstrates why the decomposition of diagrams should instead be pursued from a top-down direction, emphasising the discourse structure \cite{batemanwildfeuer2014b}. Without prioritising the analysis of discourse structure, it is difficult to know which aspects of the diagrammatic mode are being drawn on and which elements should be included in the description of discourse structure. A cross-section such as the one shown in Figure \ref{fig:ai2d_example} is, in fact, very likely to use \emph{illustration} or other semiotic resources capable of representing and demarcating meaningful regions in 2D layout space \cite{richards2017}. This possibility makes the question of whether the capability is actually being drawn on pertinent and, if the capability is used, raises further the issue of the extent to which the illustration must be decomposed so as to achieve the inventory of elements needed for making appropriate inferences about the discourse structure.
\subsection{Next step: adding discourse-driven decomposition to AI2D-RST}
Having concluded that analytical problems arising from the original layout segmentation are being propagated from AI2D to AI2D-RST, in this section we propose an alternative, discourse-driven layout segmentation that relies more fully on the modelling distinctions provided by our adopted definition of semiotic modes. Figure \ref{fig:decomp} shows a decomposition motivated by discourse structure for our example diagram \#4210 picking out relevant parts of the cross-section. In contrast to the crowd-sourced segmentation in Figure \ref{fig:ai2d_example}, here the cross-section has been decomposed with the goal of maximising the coherence of discourse structure, which involves making available all the elements needed for such a representation of the diagram and its communicative intentions using the AI2D-RST annotation schema.
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{decomp.pdf}
\caption{Decomposing example diagram \#4210 into its component parts}
\label{fig:decomp}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{grouping_4210_mod.png}
\includegraphics[width=0.5\textwidth]{connectivity_4210_mod.png}
\caption{The grouping (left) and connectivity (right) graphs for diagram \#4210, based on the decomposition shown in Figure \ref{fig:decomp}. The dashed lines in the connectivity graph indicate edges in the grouping graph. Note that the grouping node identifiers prefixed with `G' are not carried over from the grouping to the connectivity graph in this visualisation, as the grouping node identifiers are aliases used in the annotation tool and are replaced with randomly generated, unique identifiers in the corpus.}
\label{fig:decomp_g}
\end{figure}
This is illustrated in Figure \ref{fig:decomp_g}, which applies the AI2D-RST annotation schema to the diagram elements identified during decomposition in Figure \ref{fig:decomp}. When provided with a sufficient inventory of diagram elements, the grouping graph reflects key structural properties of the diagram far more accurately. The grouping graph (left) contains two subgraphs, whose root nodes G10 and I0 correspond to the cross-section and cycle, respectively. Keeping in mind that the grouping graph seeks to capture visual groupings, this already provides a strong cue for two visually distinct configurations. The AI2D-RST annotation schema refers to such structural configurations as \emph{macro-groups}, which constitute established configurations of the diagrammatic mode that may be flexibly combined in diagrams. To summarise, the grouping graph then already pulls these macro-groups apart and provides a foundation for their further analysis. In a moment, we shall see how these macro-groups are integrated in the discourse structure graph.
The connectivity graph in Figure \ref{fig:decomp_g} reveals that the diagram makes perhaps surprisingly limited use of arrows and lines as an expressive resource despite the intention that the diagram represent a cycle. The diagram does use arrows to set up connections between some individual elements and their groups, but the connectivity graph does not exhibit a cyclic structure. Some arrows, such as A2, have clear sources (T1; `Magma flows to surface ...') and targets (T2; `Weathering and erosion'), whereas other arrows, such as A4, do not. This seems to encourage two alternative frames of interpretation for arrows \cite{alikhanistone2018}: some clearly signal transitions between stages (A2, A3), whereas others indicate the overall direction of the cycle (A4, A0).
The disconnections in the connectivity graph raise a crucial question: how does an interpretation involving a cyclic structure emerge, if it is not clearly signalled using arrows? The answer to this question may be found in the discourse structure of the graph as a whole, which relies largely on \emph{written} language as an expressive resource. This allows the diagram to describe stages of the rock cycle explicitly using clausal structures, e.g. ``Metamorphic rock forms from heat and pressure'', but does not express the relationships diagrammatically using arrows. The verbal descriptions are instead frequently placed in relation with specific regions of the cross-section, as shown in the discourse structure graph in Figure \ref{fig:decomp_g_2}.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{decomp.pdf}
\includegraphics[width=0.5\textwidth]{rst_4210_mod.png}
\caption{Discourse-driven layout segmentation and discourse structure graph for diagram \#4210}
\label{fig:decomp_g_2}
\end{figure}
Figure \ref{fig:decomp_g_2} illustrates how the cross-section and the cycle, which form separate subgraphs in the grouping graph in Figure \ref{fig:decomp_g}, are tightly integrated in the discourse structure graph, which captures their \emph{joint} contribution towards a shared communicative goal. The specific rhetorical relations in Figure \ref{fig:decomp_g_2} and criteria for their application, based loosely on Bateman \cite[\p{149--162}]{bateman2008}, are given in Table \ref{tab:rels}. Note that these criteria are presented in an abbreviated form, whereas those actually defined in RST are stricter. Beginning from the top of the table, several \textsc{identification} relations are used to name regions (R1) and arrows (R6, R3). In relation R3, identification is extended to both arrows A0 and A1, which are joined together using the \textsc{joint} relation R2. \textsc{elaboration} relations R4--R5 and R7--R9, which assign descriptions to specific regions of the cross-section, explain most of the phenomena depicted in the diagram.
\begin{table}
\caption{Rhetorical relations in the discourse structure graph in Figure \ref{fig:decomp_g}}
\begin{tabular}{p{2cm} p{2.6cm} p{3.6cm} p{3.4cm}}
\hline\noalign{\smallskip}
Identifier(s) & Relation & Nucleus & Satellite \\
\hline\noalign{\smallskip}
R1, R3, R6 & \textsc{identification} & Identified & Identifier \\
R2 & \textsc{joint} & No constraints & No constraints \\
R4--5, R7--9 & \textsc{elaboration} & Basic information & Additional information \\
R10 & \textsc{disjunction} & Two or more alternatives & -- \\
R11 & \textsc{cyclic sequence} & Repeated steps & -- \\
\hline
\end{tabular}
\label{tab:rels}
\end{table}
All of these descriptions contribute towards an interpretation involving a cycle, which requires not only world knowledge, but is also supported using cohesive ties between lexical elements, such as the nouns `magma' and `rock' and the verb `to form'. The cycle itself is represented by the \textsc{cyclic sequence} relation R11, which joins together the individual descriptions, which form its steps. Because the cycle also includes two possible alternatives, that is, whether magma cools below or above ground to form rocks, this is captured by the \textsc{disjunction} relation R10.
This analysis has attempted to illustrate some of the methodological benefits of adopting a discourse-driven approach to unpacking the structure of diagrammatic representations and the applicability of RST to describe their semantics. In the following section, we conclude by briefly discussing some of the principal implications of our analysis for diagram research more generally.
\section{Discussion}
The analysis above suggests that a multimodal perspective can yield valuable insights into diagrammatic representations, but only when the characteristics of the diagrammatic mode are accounted for appropriately. Instead of building pre-defined inventories of diagrammatic elements, which are rapidly exhausted when faced with data that do not fall neatly into the categories defined, one should invest in mapping the expressive resources available to the diagrammatic mode and describing the kinds of discourse structures they participate in. Which expressive resources are actually encountered, however, should not be assumed beforehand, but always be treated as an open question to be answered through empirical research.
We can also consider the potential contribution of multimodality research to diagrams research in a broader context. For instance, the recent framework proposed by Engelhardt and Richards seeks to define ``universal building blocks of all types of diagrams and information graphics'' \cite[\p{201}]{engelhardtrichards2018}. This includes signs present in the diagram, or its \emph{graphic components}, their participation in \emph{graphic structures} and their \emph{meaning}, but excludes ``context-related aspects'' related to diagram use \cite[\p{203}]{engelhardtrichards2018}. A multimodal perspective is inherently geared towards addressing all of the aforementioned aspects of diagrammatic representations, spanning from form to contextually-motivated use.
The context in which the diagrammatic mode is used and to what effect strongly constrains which expressive resources and which of their capabilities are mobilised for signification. More specifically, just like any other semiotic mode, the diagrammatic mode is subject to constraints arising from \emph{genre}, that is, staged, socially-motivated use of semiotic modes for achieving specific communicative goals \cite{bateman2008}. This has been aptly exemplified in recent research on graphical abstracts in scientific articles, which use the diagrammatic mode to summarise article content \cite{hullmanbach2018,hendgesflorek2019}. In plain words, what the diagrams are used for influences their graphic components and relations, and multimodality research provides the concepts needed to discuss this variation.
Multimodality research can also contribute towards a deeper understanding of \emph{signification} in diagrams, as this is precisely what expressive resources do as part of the diagrammatic mode. As our analysis shows, diagrams that represent cycles do not necessarily need to draw on arrows for this purpose: the diagrammatic mode provides alternatives, such as written language, whose structural features (here: cohesive ties) may be used to cue a discourse semantic interpretation involving cyclicity. This allows a fine-grained decomposition of the proposed building blocks of diagrammatic representations \cite{hullmanbach2018,engelhardtrichards2018}. Conversely, multimodality research is likely to benefit from the concepts developed in diagrams research for producing systematic descriptions of expressive resources. This will, however, require a significant effort in triangulating what has been done previously in multimodality and diagrams research, and aligning their theoretical concepts as necessary \cite{bateman2017}.
Finally, our findings also carry implications for the computational modelling of diagrams. In particular, problems with the AI2D annotation \cite{kembhavietal2016} underline the need for domain expertise in describing the diagrammatic mode in order to achieve a description that respects its specific features. When applied to diagrams, computer vision tasks such as instance-level semantic segmentation and visual question answering must acknowledge particular characteristics of the diagrammatic mode. They should not be based simply on assumptions concerning how such tasks are defined for processing pictorial representations, since pictures constitute a quite different family of semiotic modes with rather different properties. Particularly important here is the issue of the appropriate level of semantic segmentation, that is, to what extent the mode in question needs to be decomposed into its components. Developing appropriate descriptions of the diagrammatic mode for computational modelling is therefore a task that needs to involve research communities working on both diagrams and multimodality.
\section{Conclusion}
In this paper, we have introduced a multimodal perspective on diagrammatic representations, and presented a description of the diagrammatic mode, exemplifying the proposed approach using two recent multimodal diagram corpora. Multimodal analysis involves decomposing diagrammatic representations into their component parts, and we have argued for the need for a decomposition driven by discourse structure -- that is, what the diagrammatic representations attempt to communicate and how their organisations explicitly guide readers to candidate interpretations. Capturing segmentations of this kind explicitly in appropriately designed corpora ensures that the necessary diagrammatic elements are available for further analysis. We suggest that given the widespread use of diagrams and their variation in different domains, an extensive programme of corpus-driven research of the kind we have proposed is now essential for developing an empirically-motivated account of diagrams and the rich internal workings of the diagrammatic mode.
\bibliographystyle{splncs04}
\section{Introduction}
Given a graph $G$, let $\calM(G)$ denote the set of perfect matchings
in $G$. A classical theorem of Petersen \cite{petersen1891} states
that every cubic bridgeless graph has at least one perfect matching,
i.e.\ $\calM(G)\neq \emptyset$. Indeed, it can be proven that
any edge in a cubic bridgeless graph is contained in some perfect
matching~\cite{plesnik72}, which implies that $|\calM(G)|\geq 3$.
In the 1970s, Lov\'asz and Plummer conjectured that the number of
perfect matchings of a cubic bridgeless graph $G$ should grow
exponentially with its order (see~\cite[Conjecture
8.1.8]{lovaszp86}). It is a simple exercise to prove that $G$
contains at most $2^{|V(G)|}$ perfect matchings, so we can state the
conjecture as follows:
\begin{lpc}
There exists a universal constant $\epsilon >0$ such that for any
cubic bridgeless graph $G$, $$2^{\epsilon |V(G)|} \leq |\calM(G)|
\leq 2^{|V(G)|}.$$
\end{lpc}
The problem of computing $|\calM(G)|$ is connected to problems in
molecular chemistry and statistical physics (see e.g.~\cite[Section
8.7]{lovaszp86}). In general graphs, this problem is $\sharp
P$-complete \cite{valiant79}. Thus we are interested in finding good
bounds on the number of perfect matchings for various classes of
graphs such as the bounds in the conjecture above.
For bipartite graphs, $|\calM(G)|$ is precisely the permanent of the
graph biadjacency matrix. Voorhoeve proved the conjecture for cubic
bipartite graphs in 1979 \cite{voorhoeve79}; Schrijver later extended
this result to all regular bipartite graphs \cite{schrijver98}. We
refer the reader to \cite{laurents10} for an exposition of this
connection and of an elegant proof of Gurvits generalizing Schrijver's
result. For {\em fullerene graphs}, a class of planar cubic graphs for
which the conjecture relates to molecular stability and aromaticity of
fullerene molecules, the problem was settled by Kardo\v{s}, Kr\'al',
Mi\v{s}kuf and Sereni \cite{kardoskms09}. Chudnovsky and Seymour
recently proved the conjecture for all cubic bridgeless planar graphs
\cite{chudnovskys11}.
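As a concrete instance of this bipartite connection, the biadjacency
matrix of $K_{3,3}$ is the all-ones $3\times 3$ matrix, whose permanent
is $3!=6$; accordingly, $K_{3,3}$ has exactly six perfect matchings.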
The general case has until now remained open. Edmonds, Lov\'asz and
Pulleyblank \cite{edmondslp82} proved that any cubic bridgeless $G$
contains at least $\frac 14{|V(G)|}+2$ perfect matchings (see also
\cite{naddef82}); this bound was later improved to $\frac 12|V(G)|$
\cite{kralss09} and then $\frac34|V(G)|-10$ \cite{esperetkss10}. The
order of the lower bound was not improved until Esperet, Kardo\v{s},
and Kr\'al' proved a superlinear bound in 2009 \cite{esperetkk11}.
The first bound, proved in 1982, is a direct consequence of a lower
bound on the dimension of the perfect matching polytope, while the
more recent bounds combine polyhedral arguments with analysis of
brick and brace decompositions.
In this paper we solve the general case. To avoid technical
difficulties when contracting sets of vertices, we henceforth allow
graphs to have multiple edges, but not loops. Let $m(G)$ denote
$|\calM(G)|$, and let $m^{\star}(G)$ denote the minimum, over all
edges $e\in E(G)$, of the number of perfect matchings containing $e$.
Our result is the following:
\begin{thm}\label{t:Main}
For every cubic
bridgeless graph $G$ we have $m(G) \geq 2^{|V(G)|/\ceps}$.
\end{thm}
We actually prove that at least one of two sufficient conditions applies:
\begin{thm}\label{t:Main2}
For every cubic
bridgeless graph $G$, at least one of the following holds:
\begin{itemize}
\item[$\Sone$] $m^{\star}(G) \geq 2^{|V(G)|/\ceps},$ or
\item[$\Stwo$] there exist $M,M' \in \calM(G)$ such that $M \triangle
M'$ has at least $|V(G)|/\ceps$ components.
\end{itemize}
\end{thm}
To see that Theorem~\ref{t:Main2} implies Theorem~\ref{t:Main}, we can
clearly assume that $\Stwo$ holds since $m^{\star}(G) \leq m(G)$.
Choose $M,M' \in \calM(G)$ such that the set $\calC$ of components of
$M \triangle M'$ has cardinality at least $|V(G)|/\ceps$, and note that
each of these components is an even cycle alternating between $M$ and
$M'$. Thus for any subset $\calC' \subseteq \calC$, we can construct
a perfect matching $M_{\calC'}$ from $M$ by flipping the edges on the
cycles in $\calC'$, i.e.\ $M_{\calC'} = M \triangle \bigcup_{C \in
\calC'} C$. The $2^{|\calC|}$ perfect matchings $M_{\calC'}$ are
distinct, implying Theorem \ref{t:Main}.
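To spell out the distinctness, note that the map $\calC' \mapsto
M_{\calC'}$ is injective: if $\calC' \neq \calC''$, then any cycle $C
\in \calC' \triangle \calC''$ satisfies $M_{\calC'} \cap E(C) \neq
M_{\calC''} \cap E(C)$. Hence $m(G) \geq 2^{|\calC|} \geq
2^{|V(G)|/\ceps}$.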
We cannot discard either of the sufficient conditions $\Sone$ or
$\Stwo$ in the statement of Theorem \ref{t:Main2}. To see that
$\Stwo$ cannot be omitted, consider the graph depicted in
Figure~\ref{f:uni} and observe that each of the four bold edges is
contained in a unique perfect matching. To see that $\Sone$ cannot be
omitted, it is enough to note that there exist cubic graphs with girth
logarithmic in their size (see~\cite{imrich84} for a construction).
Such graphs cannot have linearly many disjoint cycles, so condition
$\Stwo$ does not hold.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{uni}
\caption{A cubic bridgeless graph $G$ with
    $m^{\star}(G)=1$.} \label{f:uni}
\end{figure}
\subsection{Definitions and notation}
\
For a graph $G$ and a set $X \subseteq V(G)$, $G|X$ denotes the
subgraph of $G$ induced by $X$. For a set $X \subseteq V(G)$, let
$\delta(X)$ denote the set of edges with exactly one endpoint in $X$,
and let $E_X$ denote the set of edges with at least one endpoint in
$X$, i.e.\ $E_X = E(G|X)\cup \delta(X)$. The set $C=\delta(X)$ is
called an \emph{edge-cut}, or a \emph{$k$-edge-cut}, where $k=|C|$,
and $X$ and $V(G)\setminus X$ are the \emph{sides} of $C$. A
$k$-edge-cut is said to be {\em even} (resp.\ {\em odd}) if $k$ is
even (resp.\ odd). Observe that the parity of an edge-cut $\delta(X)$
in a cubic graph is precisely that of $|X|$. An edge-cut $\delta(X)$
is \emph{cyclic} if both $G|X$ and $G|(V(G) \setminus X)$ contain a
cycle. Observe that every 2-edge-cut in a cubic graph is cyclic. If
$G$ contains no edge-cut (resp.\ cyclic edge-cut) of size less than
$k$, we say that $G$ is \emph{$k$-edge-connected} (resp.\
\emph{cyclically $k$-edge-connected}).
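To illustrate these notions, $K_4$ has no cyclic edge-cut at all
(every edge-cut has a side with at most two vertices, which contains
no cycle), so it is vacuously cyclically $k$-edge-connected for every
$k$, whereas the triangular prism has a cyclic $3$-edge-cut, namely
the three edges joining its two triangles.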
Observe that the number of perfect matchings of a graph is the product
of the number of perfect matchings of its connected components. Hence,
in order to prove Theorem~\ref{t:Main}, we restrict ourselves to
connected graphs for the remainder of this paper (this means, for
example, that we can consider the terms \emph{2-edge-connected} and
\emph{bridgeless} to be interchangeable, and the sides of a cut are
well-defined).
\subsection{Constants}
\
Let $x:=\log(\tfrac43)/\log(2)$.
The following constants appear throughout the
paper:
$$\alpha:=\tfrac{x}{314}, \qquad \beta_1 :=\tfrac{154x}{314},
\qquad \beta_2:=\tfrac{74x}{314}, \qquad \gamma:=\tfrac{312x}{314}.$$
We avoid using the numerical values of these constants for the sake of
clarity. Throughout the paper we make use of the following
inequalities, which can be routinely verified:
\makeatletter
\let\temptag\tagform@
\def\tagform@#1{\maketag@@@{\bf(\ignorespaces#1\unskip\@@italiccorr)}}
\begin{align}
0 < \alpha \leq \beta_2 \leq \beta_1,\label{e:const1}\\
1/\ceps \leq \frac{\alpha}{9\beta_1+3},\label{e:const2}\\
\beta_2+6\alpha \leq \beta_1,\label{e:const3}\\
74\alpha \leq \beta_2,\label{e:const4}\\
146 \alpha \leq \beta_1,\label{e:const5}\\
\beta_2+ 80\alpha \leq \beta_1,\label{e:const6}\\
6\alpha + \gamma \leq \log(6)/\log(2),\label{e:const7}\\
\gamma + 2\beta_1 +7\alpha -\beta_2 \leq 1,\label{e:const8}\\
6\alpha+2\beta_1 \leq \log(\tfrac{4}{3})/\log(2),\label{e:const9}\\
2\beta_1+4\alpha \leq \gamma.\label{e:const10}
\end{align}
\let\tagform@\temptag
\makeatother
The integer $\ceps$ is chosen to be minimum such that the system of
inequalities above has a solution. Inequalities
{\bf (\ref{e:const4})}, {\bf (\ref{e:const6})}, {\bf (\ref{e:const9})}, and
{\bf (\ref{e:const10})} are tight.
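A routine computation makes the tight cases explicit:
$74\alpha=\tfrac{74x}{314}=\beta_2$,
$\beta_2+80\alpha=\tfrac{154x}{314}=\beta_1$,
$6\alpha+2\beta_1=\tfrac{314x}{314}=x=\log(\tfrac43)/\log(2)$, and
$2\beta_1+4\alpha=\tfrac{312x}{314}=\gamma$, so these four
inequalities hold with equality. Moreover, only {\bf
(\ref{e:const2})} involves $\ceps$; since
$9\beta_1+3=\tfrac{1386x+942}{314}$, it rewrites as
$$\ceps \ \geq\ \frac{9\beta_1+3}{\alpha}\ =\ 1386+\frac{942}{x}\
\approx\ 3655.68,$$ so the minimum admissible value is $\ceps=3656$.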
\section{The proof of Theorem \ref{t:Main2}}
In this section we sketch the proof of Theorem \ref{t:Main2},
postponing the proofs of two main lemmas until later sections. Our
general approach to Theorem \ref{t:Main2} is to reduce on cyclic
2-edge-cuts and cyclic 3-edge-cuts and prove inductively that either
$\Sone$ or $\Stwo$ holds. Dealing with $\Sone$ is relatively
straightforward -- perfect matchings containing a given edge behave
well with reductions on a cut, which is our main motivation for
considering $m^{\star}(G)$. To deal with $\Stwo$, we do not directly
construct perfect matchings $M$ and $M'$ for which $M\triangle M'$ has
many components. Instead, we define a special type of vertex set in
which a given random perfect matching is very likely to admit an
alternating cycle. We call these sets {\em burls} and we call a set
of disjoint burls a {\em foliage} -- a large foliage will guarantee
the existence of two perfect matchings with many components in their
symmetric difference.
\subsection{Burls, twigs, and foliage weight}
\
Consider a subset $X \subseteq V(G)$. Let $\calM(G,X)$ denote the
family of subsets $M$ of $E_X$ (the edges with at least one endpoint
in $X$) such that every vertex of $X$ is incident with exactly one
edge of $M$. Note that some elements of $\calM(G,X)$ might not be
matchings in $G$ (if two edges of $\delta(X)$ share a vertex from
$V(G)\setminus X$). However, for any $M \in \calM(G,X)$, $M \cap
E(G|X)$ is a matching.
A probability distribution $\bM$ on $\calM(G,X)$ is \emph{balanced} if
for any edge $e \in E_X$, $\Pr[e\in \bM] = \tfrac13$. It follows from
Edmonds' characterization of the perfect matching
polytope~\cite{edmonds65} that if $G$ is cubic and bridgeless, there
exists a balanced probability distribution on
$\calM(G,V(G))=\calM(G)$. For any $X \subseteq V(G)$, the restriction
of this distribution to $E_X$ yields a balanced probability
distribution on $\calM(G,X)$. The following easy fact will be used
several times throughout the proof:
\begin{cla}\label{cl:balanced}
Let $G$ be a cubic bridgeless graph and consider $Y \subseteq X
\subseteq V(G)$ such that $C=\delta(Y)$ is a 3-edge-cut in $G$. For
any balanced probability distribution $\bM$ on $\calM(G,X)$, and any
$M\in \calM(G,X)$ such that $\Pr[\bM =M]>0$, we have $|M\cap C|=1$.
\end{cla}
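One way to verify this claim: since $G$ is cubic,
$3|Y|=2|E(G|Y)|+3$, so $|Y|$ is odd, and counting the edges of $M$ at
the vertices of $Y$ gives $|M\cap C|\equiv |Y| \pmod 2$, hence $|M\cap
C|\in\{1,3\}$ for every $M \in \calM(G,X)$. Since $\bM$ is balanced,
$$\Ex\big[|\bM\cap C|\big]=\sum_{e\in C}\Pr[e\in \bM]=1,$$ which
forces $\Pr\big[|\bM\cap C|=3\big]=0$.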
Given some $M \in \calM(G,X)$, a cycle of $G|X$ is
\emph{$M$-alternating} if it has even length and half of its edges are
in $M$ (it alternates edges in $M$ and edges not in $M$). Let
$a(G,X,M)$ denote the maximum number of disjoint $M$-alternating
cycles in $G|X$ (equivalently, the maximum number of components of $M
\triangle M'$, for $M' \in \calM(G,X)$).
We define a \emph{burl} as a vertex set $X \subseteq V(G)$ such that
for any balanced probability distribution $\bM$ on $\calM(G,X)$,
$\Ex[a(G,X,\bM)] \ge \tfrac13$. Note that if $X$ is a burl, any set $Y
\supset X$ is also a burl, since any balanced probability distribution
on $\calM(G,Y)$ induces a balanced probability distribution on
$\calM(G,X)$. We would like to insist on the fact that we consider the
whole set $\calM(G,X)$, and not only $\{M \cap E_X, M \in
\calM(G)\}$. This way, being a burl is really a local property of $X$
and is completely independent of the structure of $G|(V(G) \setminus
X)$. This aspect of burls will be fundamental in the proof of
Theorem~\ref{t:Main2}.
A collection of disjoint vertex sets $\{X_1,\ldots,X_k\}$ is a {\em
foliage} if each $X_i$ is a burl. Assume that $G$ contains such a
collection of disjoint sets, and consider a balanced probability
distribution $\bM$ on $\calM(G,V(G))=\calM(G)$. This distribution
induces balanced probability distributions $\bM_{X_i}$ on
$\calM(G,X_i)$, for each $1\le i \le k$. By definition of a burl, we
have $\Ex[a(G,X_i,\bM_{X_i})] \ge \tfrac13$ for each $1\le i \le
k$. By linearity of expectation, the maximum number of disjoint
alternating cycles of $\bM$ is then expected to be at least $k/3$. We
get the following key fact as a consequence:
\begin{cor}\label{c:FoliageApplied}
If a cubic bridgeless graph $G$ contains a foliage $\calX$, then
there exist perfect matchings $M,M' \in \calM(G)$ such that $M
\triangle M'$ has at least $|\calX|/3$ components.
\end{cor}
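To make the derivation explicit: since the sets $X_i$ are pairwise
disjoint, alternating cycles found in distinct subgraphs $G|X_i$ are
disjoint, so $a(G,V(G),\bM)\geq \sum_{i=1}^{k}a(G,X_i,\bM_{X_i})$,
and by linearity of expectation
$$\Ex\Big[\sum_{i=1}^{k}a(G,X_i,\bM_{X_i})\Big]\ =\
\sum_{i=1}^{k}\Ex\big[a(G,X_i,\bM_{X_i})\big]\ \geq\ \frac k3.$$
Hence some $M\in\calM(G)$ admits disjoint $M$-alternating cycles
$C_1,\ldots,C_m$ with $m\geq k/3$, and $M':=M\triangle(C_1\cup\cdots
\cup C_m)$ is a perfect matching for which $M\triangle M'$ has exactly
$m$ components.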
We now introduce a special class of burls. Let $G$ be a cubic
bridgeless graph and let $X \subseteq V(G)$. We say that $X$ is a
\emph{$2$-twig} if $|\delta(X)|=2$, and $X$ is a \emph{$3$-twig} if
$|\delta(X)|=3$ and $|X| \geq 5$ (that is, $X$ is neither a triangle,
nor a single vertex). A \emph{twig} in $G$ is a $2$- or $3$-twig.
Before we prove that every twig is a burl, we need a simple lemma.
\begin{lem}\label{l:smallcount}
Let $G$ be a cubic bridgeless graph. Then
\begin{enumerate}
\item $m(G-e) \geq 2$ for every $e \in E(G)$, and
\item $m(G) \geq 4$ if $|V(G)|\geq 6$. In particular, for any $v\in
V(G)$ there is an $e\in \delta(\{v\})$ contained in at least two
perfect matchings.
\end{enumerate}
\end{lem}
\begin{proof}
The first item follows from the classical result mentioned in the
introduction: every edge of a cubic bridgeless graph is contained in
a perfect matching. The second is implied by the bound $m(G)\geq
\tfrac 14|V(G)|+2$ from~\cite{edmondslp82}.
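To spell out the second item: for $|V(G)|\geq 6$ this bound gives
$m(G)\geq \tfrac{6}{4}+2=3.5$, hence $m(G)\geq 4$ since $m(G)$ is an
integer. For the final assertion, every perfect matching contains
exactly one of the three edges of $\delta(\{v\})$, so by the
pigeonhole principle some $e\in\delta(\{v\})$ lies in at least
$\lceil 4/3\rceil=2$ perfect matchings.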
\end{proof}
\begin{lem}\label{l:twigs}
Every twig $X$ in a cubic bridgeless graph $G$ is a burl.
\end{lem}
\begin{proof}
Let $\bM$ be a balanced probability distribution on $\calM(G,X)$.
If $X$ is a $2$-twig, let $H$ be obtained from $G|X$ by adding an
edge $e$ joining the two vertices incident with $\delta(X)$. Then
$H$ is cubic and bridgeless. By applying Lemma~\ref{l:smallcount}(1)
to $H$, we see that $G|X$ contains at least one $M$-alternating
cycle for every $M \in \calM(G,X)$ such that $M \cap
\delta(X)=\emptyset$. Note that since $H$ is cubic, $|X|$ is even,
and thus $M$ either contains the two edges of $\delta(X)$, or none
of them. Since $\bM$ is balanced, $\Pr[\bM \cap \delta(X)=
\emptyset] \geq 1-1/3=2/3$. Hence $\Ex[a(G,X,\bM)] \ge \tfrac23$ and
we conclude that $X$ is a burl.
Suppose now that $X$ is a $3$-twig. Let
$\delta(X)=\{e_1,e_2,e_3\}$. Let $H$ be obtained from $G$ by
identifying all the vertices in $V(G)-X$ (removing loops but
preserving multiple edges). We apply Lemma~\ref{l:smallcount}(2) to
$H$, which is again cubic and bridgeless. It follows that for some
$1 \leq i \leq 3$, the edge $e_i$ is in at least two perfect
matchings of $H$. Therefore $G|X$ contains at least one
$M$-alternating cycle for every $M \in \calM(G,X)$ such that $M \cap
\delta(X)= \{e_i\}$. By Claim~\ref{cl:balanced}, $\Pr[\bM \cap
\delta(X)= \{e_i\}]=\Pr[e_i \in \bM]=1/3$. It implies that
$\Ex[a(G,X,\bM)] \ge \tfrac13$ and thus $X$ is a burl.
\end{proof}
The \emph{weight} of a foliage $\calX$ containing $k$ twigs is defined
as $\fw(\calX) := \beta_1 k + \beta_2 (|\calX|-k)$, that is each twig
has weight $\beta_1$ and each non-twig burl has weight $\beta_2$. Let
$\fw(G)$ denote the maximum weight of a foliage in a graph $G$.
\subsection{Reducing on small edge-cuts}
\
We now describe how we reduce on 2-edge-cuts and 3-edge-cuts, and
consider how these operations affect $m^{\star}(G)$ and foliages. Let
$C$ be a $3$-edge-cut in a cubic bridgeless graph $G$. The two graphs
$G_1$ and $G_2$ obtained from $G$ by identifying all vertices on one
of the sides of the edge-cut (removing loops but preserving multiple
edges) are referred to as \emph{$C$-contractions} of $G$ and the
vertices in $G_1$ and $G_2$ created by this identification are called
\emph{new}.
We need a similar definition for $2$-edge-cuts. Let $C=\{e,e'\}$ be a
$2$-edge-cut in a cubic bridgeless graph $G$. The two
\emph{$C$-contractions} $G_1$ and $G_2$ are now obtained from $G$ by
deleting all vertices on one of the sides of $C$ and adding an edge joining
the remaining ends of $e$ and $e'$. The resulting edge is now called
\emph{new}.
In both cases we say that $G_1$ and $G_2$ \emph{are obtained from $G$
by a cut-contraction}. The next lemma provides some useful
properties of cut-contractions.
\begin{lem}\label{l:Contraction}
Let $G$ be a graph, let $C$ be a $2$- or a $3$-edge-cut in $G$, and
let $G_1$ and $G_2$ be the two $C$-contractions. Then
\begin{enumerate}
\item $G_1$ and $G_2$ are cubic bridgeless graphs,
\item $m^{\star}(G) \geq m^{\star}(G_1)\,m^{\star}(G_2)$, and
\item For $i=1,2$ let $\calX_i$ be a foliage in $G_i$ such that for
every $X \in \calX_i$, if $|C|=3$ then $X$ does not contain the
new vertex, and if $|C|=2$ then $E(G_i|X)$ does not contain the
new edge. Then $\calX_1 \cup \calX_2$ is a foliage in $G$. In
particular, we have $\fw(G) \geq \fw(G_1)+\fw(G_2)-2\beta_1$.
\end{enumerate}
\end{lem}
\begin{proof}\mbox{}\smallskip
\begin{enumerate}
\item This can be confirmed routinely.
\item Consider first the case of the contraction of a 2-edge-cut
$C=\delta(X)$ in $G$. Let $e$ be an edge with both ends in
$X=V(G_1)$. Every perfect matching of $G_1$ containing $e$ combines
either with $m^{\star}(G_2)$ perfect matchings of $G_2$ containing
the new edge of $G_2$, or with $2m^{\star}(G_2)$ perfect matchings
of $G_2$ avoiding the new edge of $G_2$. If $e$ lies in $C$, note
that perfect matchings of $G_1$ and $G_2$ containing the new edges
can be combined into perfect matchings of $G$ containing $C$. Hence,
$e$ is in at least $m^{\star}(G_1)\,m^{\star}(G_2)$ perfect matchings
of $G$.
Now consider a 3-edge-cut $C=\delta(X)$. If $e$ has both ends in $X
\subset V(G_1)$, each perfect matching of $G_1$ containing $e$ contains
exactly one edge $f$ of $C$ and combines with every perfect matching
of $G_2$ containing $f$. If $e$ is in $C$, perfect matchings containing $e$ in $G_1$ and
$G_2$ can also be combined into perfect matchings of $G$. In any
case, $e$ is in at least $m^{\star}(G_1)\,m^{\star}(G_2)$ perfect
matchings of $G$.
\item In this case the elements of $\calX_1 \cup \calX_2$ are disjoint
subsets of $V(G)$. Consider some $X \in \calX_1 \cup \calX_2$, and
assume without loss of generality that $X \in \calX_1$. Since $X$
does not contain the new vertex of $G_1$ (if $|C|=3$), or the new
edge (if $|C|=2$), each balanced probability distribution on
$\calM(G,X)$ is also a balanced probability distribution on
$\calM(G_1,X)$, so $\calX_1 \cup \calX_2$ is a foliage in $G$. Since
$\beta_1 \geq \beta_2$, this implies $\fw(G) \geq
\fw(G_1)+\fw(G_2)-2\beta_1$. \qedhere
\end{enumerate}
\end{proof}
It is not generally advantageous to reduce on a $3$-edge-cut arising
from a triangle, unless this reduction leads to a chain of similar
reductions. Thus we wish to get rid of certain triangles from the
outset. We say that a triangle sharing precisely one edge with a
cycle of length three or four in a graph $G$ is \emph{relevant}, and
otherwise it is \emph{irrelevant}. A graph $G$ is \emph{pruned} if it
contains no irrelevant triangles. The following easy lemma shows that
we can prune a bridgeless cubic graph by repeated cut-contraction
without losing too many vertices.
\begin{lem}\label{l:ContractTriangles}
Let $G$ be a cubic bridgeless graph, and let $k$ be the size of
maximum collection of vertex-disjoint irrelevant triangles in
$G$. Then one can obtain a pruned cubic bridgeless graph $G'$ from
$G$ with $|V(G')| \geq |V(G)|-2k$ by repeatedly contracting
irrelevant triangles.
\end{lem}
\begin{proof}
We proceed by induction on $k$. Let a graph $G''$ be obtained from
$G$ by contracting an irrelevant triangle $T$. The graph $G''$ is
cubic and bridgeless by Lemma~\ref{l:Contraction}(1). Since $T$ is
irrelevant in $G$, the unique vertex of $G''$ obtained by
contracting $T$ is not in a triangle in $G''$. Therefore if
$\mathcal{T}$ is a collection of vertex-disjoint irrelevant
triangles in $G''$ then $\mathcal{T} \cup \{T\}$ is such a
collection in $G$. (After the contraction of an irrelevant triangle,
triangles that were previously irrelevant might become relevant, but
the converse is not possible.) It follows that $|\mathcal{T}| \leq
k-1$. By applying the induction hypothesis to $G''$, we see that the
lemma holds for $G$.
\end{proof}
\begin{cor}\label{c:Prune}
Let $G$ be a cubic bridgeless graph. Then we can obtain a cubic
bridgeless pruned graph $G'$ from $G$ with $|V(G')| \geq |V(G)|/3$
by repeatedly contracting irrelevant triangles.
\end{cor}
We wish to restrict our attention to pruned graphs, so we must make
sure that the function $m^{\star}(G)$ and the maximum size of a
foliage do not increase when we contract a triangle.
\begin{lem}\label{l:Triangle}
Let $G'$ be obtained from a graph $G$ by contracting a
triangle. Then $m^{\star}(G') \leq m^{\star}(G)$ and the maximum
size of a foliage in $G'$ is at most the maximum size of a foliage
in $G$.
\end{lem}
\begin{proof}
Let $xyz$ be the contracted triangle, and let $e_x$, $e_y$, and
$e_z$ be the edges incident with $x$, $y$, $z$ and not contained in
the triangle in $G$. Let $t$ be the vertex of $G'$ corresponding to
the contraction of $xyz$. Every perfect matching $M'$ of $G'$ has a
canonical extension $M$ in $G$: assume without loss of generality
that $e_x$ is the unique edge of $M'$ incident to $t$. Then $M$
consists of the union of $M'$ and $yz$. Observe that perfect
matchings in $G$ containing $yz$ necessarily contain $e_x$, so every
edge of $G$ is contained in at least $m^{\star}(G')$ perfect
matchings.
Now consider a burl $X'$ in $G'$ containing $t$. We show that $X=X'
\cup \{x,y,z\} \setminus t$ is a burl in $G$. Let $\bM$ be a
balanced probability distribution on $\calM(G,X)$. By
Claim~\ref{cl:balanced} and the remark above, we can associate a
balanced probability distribution $\bM'$ on $\calM(G',X')$ to $\bM$
such that $\Ex[a(G,X,\bM)] =\Ex[a(G',X',\bM')]$. Since $X'$ is a
burl in $G'$, this expectation is at least $\tfrac13$ and $X$ is a
burl in $G$.
Since a burl avoiding $t$ in $G'$ is also a burl in $G$, it follows
from the analysis above that the maximum size of a foliage cannot
increase when we contract a triangle.
\end{proof}
\subsection{Proving Theorem \ref{t:Main2}}
\
We say that $G$ \emph{has a core} if we can obtain a cyclically
4-edge-connected graph $G'$ with $|V(G')|\geq 6$ by applying a
(possibly empty) sequence of cut-contractions to $G$ (recall that this
notion was defined in the previous subsection).
We will deduce Theorem \ref{t:Main2} from the next two lemmas. This
essentially splits the proof into two cases based on whether or not
$G$ has a core.
\begin{lem}\label{l:Klee}
Let $G$ be a pruned cubic bridgeless graph. Let $Z \subseteq V(G)$
be such that $|Z| \geq 2$ and $|\delta(Z)| = 2$, or $|Z| \geq 4$ and
$|\delta(Z)| = 3$. Suppose that the $\delta(Z)$-contraction $G'$ of
$G$ with $Z \subseteq V(G')$ has no core. Then there exists a
foliage $\calX$ in $G$ with $\bigcup_{X \in \calX}X \subseteq Z$
and $$\fw(\calX) \geq \alpha|Z| + \beta_2.$$
\end{lem}
By applying Lemma~\ref{l:Klee} to a cubic graph $G$ without a core and
$Z=V(G) \setminus \{v\}$ for some $v \in V(G)$, we obtain the following.
\begin{cor}\label{c:Klee}
Let $G$ be a pruned cubic bridgeless graph without a core.
Then $$\fw(G) \geq \alpha (|V(G)|-1) + \beta_2.$$
\end{cor}
On the other hand, if $G$ has a core, we will prove that either $\fw(G)$
is linear in the size of $G$ or every edge of $G$ is contained in an
exponential number of perfect matchings.
\begin{lem}\label{l:MainInduction} Let $G$ be a pruned cubic
bridgeless graph. If $G$ has a core then
$$m^{\star}(G) \geq 2^{\alpha|V(G)|-\fw(G)+\gamma}.$$
\end{lem}
We finish this section by deriving Theorem~\ref{t:Main2} from
Lemmas~\ref{l:Klee} and~\ref{l:MainInduction}.
\begin{proof}[Proof of Theorem~\ref{t:Main2}] Let $\epsilon
:=1/\ceps$. By Corollary~\ref{c:Prune} there exists a pruned cubic
bridgeless graph $G'$ with $|V(G')| \geq |V(G)|/3$ obtained from $G$
by repeatedly contracting irrelevant triangles. Suppose first that
$G'$ has a core. By Corollary~\ref{c:Prune} and Lemmas~\ref{l:Triangle}
and~\ref{l:MainInduction}, condition $\Sone$ holds as long as
$\epsilon|V(G)| \leq \alpha|V(G)|/3-\fw(G')$. Therefore we assume
$\fw(G') \geq (\tfrac \alpha 3-\epsilon)|V(G)|$. It follows from the
definition of $\fw(G')$ that $G'$ has a foliage containing at least
$(\tfrac \alpha 3-\epsilon)|V(G)|/\beta_1$ burls. If $G'$ has no
core then by Corollary~\ref{c:Klee} and the fact that $\alpha \leq
\beta_2$, $\fw(G')\ge \alpha (|V(G')|-1)+\beta_2 \ge \alpha|V(G')|$,
so $G'$ contains a foliage of size at least $\alpha|V(G')|/\beta_1
\ge \alpha|V(G)|/3\beta_1$. In both cases condition $\Stwo$ holds
by Corollary~\ref{c:FoliageApplied} and Lemma~\ref{l:Triangle},
since Equation {\bf (\ref{e:const2})} tells us that $3\epsilon \leq
{(\tfrac \alpha3 -\epsilon)}/\beta_1$.
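Indeed, {\bf (\ref{e:const2})} rewrites as
$\epsilon(9\beta_1+3)\leq\alpha$, i.e.\ $9\epsilon\beta_1 \leq
\alpha-3\epsilon$, and dividing by $3\beta_1$ gives precisely
$3\epsilon \leq (\tfrac\alpha3-\epsilon)/\beta_1$.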
\end{proof}
\section{Cut decompositions}\label{s:CutDecompose}
In this section we study cut decompositions of cubic bridgeless
graphs. We mostly follow notation from~\cite{chudnovskys11}, however we
consider $2$- and $3$-edge-cuts simultaneously. Cut decompositions play a
crucial role in the proof of Lemma~\ref{l:Klee} in the next section.
Let $G$ be a graph. A \emph{non-trivial cut-decomposition} of $G$ is a
pair $(T, \phi)$ such that:
\begin{itemize}
\item $T$ is a tree with $E(T) \neq \emptyset$,
\item $\phi : V(G) \to V (T)$ is a map, and
\item $|\phi^{-1}(t)|+\deg_T(t) \geq 3$ for each $t \in V(T)$.
\end{itemize}
For an edge $f$ of $T$, let $T_1$, $T_2$ be the two components of $T
\setminus f$, and for $i = 1,2$ let $X_i = \phi^{-1}(T_i)$. Thus
$(X_1,X_2)$ is a partition of $V(G)$ that induces an edge-cut
denoted by $\phi^{-1}(f)$. If $|\phi^{-1}(f)| \in \{2,3\}$ for each $f
\in E(T)$ we call $(T, \phi)$ a \emph{small-cut-decomposition} of $G$.
Let $(T, \phi)$ be a small-cut-decomposition of a $2$-edge-connected
cubic graph $G$, and let $T_0$ be a subtree of $T$ such that
$\phi^{-1}(V(T_0)) \neq \emptyset$. Let $T_1,\ldots,T_s$ be the
components of $T \setminus V (T_0)$, and for $1 \leq i \leq s$ let
$f_i$ be the unique edge of $T$ with an end in $V(T_0)$ and an end in
$V(T_i)$. For $0 \leq i \leq s$, let $X_i = \phi^{-1}(V(T_i))$. Thus
$X_0,X_1, \ldots ,X_s$ form a partition of $V(G)$.
Let $G'$ be the graph obtained from $G$ as follows. Set $G_0=G$. For
$i=1,\ldots,s$, take $G_{i-1}$ and let $G_i$ be the
$(\phi^{-1}(f_i))$-contraction containing $X_0$. Now let $G'$ denote
$G_{s}$.
Note that $G'$ is cubic. We call $G'$ \emph{the hub of $G$ at $T_0$}
(with respect to $(T, \phi)$). If $t_0 \in V (T)$ and $\phi^{-1}(t_0)
\ne \emptyset$, by the \emph{hub of $G$ at $t_0$} we mean the hub of
$G$ at $T_0$, where $T_0$ is the subtree of $T$ with vertex set
$\{t_0\}$.
Let $\calY$ be a collection of disjoint subsets of $V(G)$. We say that
a small-cut-decomposition $(T,\phi)$ of $G$ \emph{refines} $\calY$ if
for every $Y \in \calY$ there exists a leaf $v \in V(T)$ such that $Y
= \phi^{-1}(v)$. Collections of subsets of $V(G)$ that can be refined
by a small-cut-decomposition are characterized in the following easy
lemma.
\begin{lem}\label{l:RefinableCollections} Let $G$ be a cubic
bridgeless graph. Let $\calY$ be a collection of disjoint subsets
of $V(G)$. Then there exists a small-cut-decomposition refining
$\calY$ if $|Y| \geq 2$ and $|\delta(Y)| \in \{2,3\}$ for every $Y
\in \calY$, and either
\begin{enumerate}
\item $\calY =\emptyset$ and $G$ is not cyclically $4$-edge-connected, or
\item $\calY=\{Y\}$, and $|V(G) \setminus Y| >1$, or
\item $|\calY| \geq 2$.
\end{enumerate}
\end{lem}
\begin{proof}
We only consider the case $|\calY|\geq 3$, as the other cases are
routine. Take $T$ to be a tree on $|\calY|+1$ vertices with
$|\calY|$ leaves $\{v_Y \: | \: Y \in \calY \}$ and a non-leaf
vertex $v_0$. The map $\phi$ is defined by $\phi(u)=v_Y$, if $u \in
Y$ for some $Y \in \calY$, and $\phi(u)=v_0$, otherwise. Clearly,
$(T,\phi)$ refines $\calY$ and is a small-cut-decomposition of $G$.
\end{proof}
We say that $(T,\phi)$ is \emph{$\calY$-maximum} if it refines $\calY$
and $|V(T)|$ is maximum among all small-cut decompositions of $G$
refining $\calY$. The following lemma describes the structure of
$\calY$-maximum decompositions. It is a variation of Lemma 4.1 and
Claim 1 of Lemma 5.3 in~\cite{chudnovskys11}.
\begin{lem}\label{l:CutDecompose} Let $G$ be a cubic bridgeless
graph. Let $\calY$ be a collection of disjoint subsets of $V(G)$ and
let $(T,\phi)$ be a $\calY$-maximum small-cut-decomposition of
$G$. Then for every $t \in V(T)$ either $\phi^{-1}(t)=\emptyset$, or
$\phi^{-1}(t)\in \calY$, or the hub of $G$ at $t$ is cyclically
$4$-edge-connected.
\end{lem}
\begin{proof}
Fix $t \in V(T)$ with $\phi^{-1}(t) \neq \emptyset$ and
$\phi^{-1}(t) \not \in \calY$. Let $f_1,\ldots, f_k$ be the edges
of $T$ incident with $t$, and let $T_1,\ldots, T_k$ be the
components of $T \setminus \{t\}$, where $f_i$ is incident with a
vertex $t_i$ of $T_i$ for $1 \leq i \leq k$. Let $X_0$ =
$\phi^{-1}(t)$, and for $1 \leq i \leq k$ let $X_i =
\phi^{-1}(V(T_i))$. Let $G'$ be the hub of $G$ at $t$, and let $G''$
be the graph obtained from $G'$ by subdividing precisely once every
new edge $e$ corresponding to the cut-contraction of a cut $C$ with
$|C|=2$. The vertex on the subdivided edge $e$ is called \emph{the
new vertex corresponding to the cut-contraction of $C$}, by
analogy with the new vertex corresponding to the cut-contraction of
a cyclic 3-edge-cut.
Note that $G'$ is cyclically 4-edge-connected if and only if $G''$
is cyclically 4-edge-connected. Suppose for the sake of
contradiction that $C=\delta(Z)$ is a cyclic edge-cut in $G''$ with
$|C|\leq 3$. Then $|C|\in \{2,3\}$ by Lemma~\ref{l:Contraction}(1),
as $G''$ is a subdivision of $G'$ and $G'$ can be obtained from $G$
by repeated cut-contractions. Let $T'$ be obtained from $T$ by
splitting $t$ into two vertices $t'$ and $t''$, so that $t_i$ is
incident to $t'$ if and only if the new vertex of $G''$
corresponding to the cut-contraction of $\phi^{-1}(f_i)$ is in
$Z$. Let $\phi'(t')=X_0 \cap Z$, $\phi'(t'')=X_0 \setminus Z$, and
$\phi'(s)=\phi(s)$ for every $s \in V(T') \setminus \{t',t''\}$.
We claim that $(T',\phi')$ is a small-cut-decomposition of $G$
contradicting the choice of $T$. It is only necessary to verify that
$|\phi^{-1}(s)|+\deg_{T'}(s)\geq 3$ for $s \in \{t',t''\}$. We have
$|\phi^{-1}(t')|+\deg_{T'}(t')-1=|Z \cap V(G'')| \geq 2$ as $C$ is a
cyclic edge-cut in $G''$. It follows that
$|\phi^{-1}(t')|+\deg_{T'}(t') \geq 3$ and the same holds for $t''$
by symmetry.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{twigs}
\caption{Isomorphism classes of subgraphs induced by elementary
twigs.} \label{f:twigs}
\end{figure}
We finish this section by describing a collection $\calY$ to which we
will be applying Lemma~\ref{l:CutDecompose} in the sequel. In a cubic
bridgeless graph $G$, the union of the vertex set of a relevant triangle
with the vertex set of a cycle of length at most four sharing an edge
with it is called \emph{a simple twig}. Note that simple twigs
corresponding to distinct relevant triangles can intersect, but one
can routinely verify that each simple twig intersects a simple twig
corresponding to at most one other relevant triangle. \emph{An
elementary twig} is either a simple twig that intersects no simple
twig corresponding to a relevant triangle not contained in it, or the
union of two intersecting simple twigs corresponding to distinct
relevant triangles. An elementary twig is indeed a twig, unless it
constitutes the vertex set of the entire graph. Figure~\ref{f:twigs}
shows all possible elementary twigs. The next corollary follows
immediately from the observations above and
Lemmas~\ref{l:RefinableCollections} and~\ref{l:CutDecompose}.
\begin{cor}\label{c:ElemTwigs}
Let $G$ be a cubic bridgeless graph that is not cyclically
$4$-edge-connected with $|V(G)| \geq 8$. Then there exists a
collection $\calY$ of pairwise disjoint
elementary twigs in $G$ such that every relevant triangle in $G$ is
contained in an element of $\calY$.
Further, there exists a $\calY$-maximum
small-cut-decomposition $(T, \phi)$ of $G$ and for every $t \in
V(T)$ either $\phi^{-1}(t)=\emptyset$, or $\phi^{-1}(t)$ is an
elementary twig, or the hub of $G$ at $t$ is cyclically
$4$-edge-connected.
\end{cor}
\section{Proof of Lemma~\ref{l:Klee}}\label{s:Klee}
The proof of Lemma~\ref{l:Klee} is based on our ability to find burls
locally in the graph. The following lemma is a typical example.
\begin{lem}\label{l:4Burl}
Let $G$ be a cubic bridgeless graph and let $X \subseteq V(G)$ be
such that $|\delta(X)|=4$ and $m(G|X) \geq 2$. Then $X$ is a burl.
\end{lem}
\begin{proof}
Note that if $M \in \calM(G,X)$ contains no edges of $\delta(X)$
then $G|X$ contains an $M$-alternating cycle. Let $\bM$ be a balanced
probability distribution on $\calM(G,X)$. As $|M \cap \delta(X)|$ is
even for every $M \in \calM(G,X)$ (its parity is that of $|X|$, and
$|X|$ is even since $3|X|=2|E(G|X)|+4$), we have $$\tfrac 43=\Ex\left[|\bM
\cap \delta(X)|\right] \geq 2\Pr[\bM \cap \delta(X) \neq
\emptyset].$$ Therefore $\Pr[\bM \cap \delta(X) = \emptyset] \geq
1/3$. Hence, $\Ex[a(G,X,\bM)] \ge \tfrac13$ and so $X$ is a burl.
\end{proof}
The proof of Lemma~\ref{l:Klee} relies on a precise study of the
structure of small-cut trees for graphs with no core. The following
two lemmas indicate that long paths in such trees necessarily contain
some burls.
\begin{lem}\label{l:3Burl} Let $(T,\phi)$ be a small-cut-decomposition
of a cubic bridgeless graph $G$, and let $P$ be a path in $T$ with
$|V(P)|= 10$. If we have
\begin{itemize}
\item $\deg_T(t)=2$ for every $t \in V(P)$,
\item the hub of $G$ at $t$ is isomorphic to $K_4$ for every $t \in
V(P)$, and
\item $|\phi^{-1}(f)|=3$ for every edge $f \in E(T)$ incident to a
vertex in $V(P)$,
\end{itemize}
then $\phi^{-1}(P)$ is a burl.
\end{lem}
\begin{proof}
Let $P'=v_{-1}v_0\ldots v_{9}v_{10}$ be a path in $T$ such that
$P=v_0\ldots v_{9}$. Let $f_i=v_{i-1}v_{i}$ and let
$C_i=\{e^i_1,e^i_2,e^i_3\}=\phi^{-1}(f_i)$, $0 \le i \le 10$. Let
$X:=\phi^{-1}(V(P))$. It is easy to see that $\phi^{-1}(v_i)$
contains precisely two vertices joined by an edge, $0\le i \le 9$.
We assume without loss of generality that $G|X$ contains no cycles of
length $4$, as otherwise the lemma holds by Lemma~\ref{l:4Burl}. Let
$A$ be the set of ends of edges in $C_{0}$ outside of $X$, and let $B$
be the set of ends of edges in $C_{10}$ outside of $X$. Observe that
$E_X$ consists of $3$ internally vertex-disjoint paths from $A$ to
$B$, as well as one edge in $G|\phi^{-1}(\{v_i\})$ for each $0\leq i\leq
9$. Let $R_1$, $R_2$ and $R_3$ be these three paths from $A$ to $B$,
and let $u_j$ and $v_j$ be the ends of $R_j$ in $A$ and $B$,
respectively, for $j=1,2,3$. For $0 \leq i \leq 9$, we have
$\phi^{-1}(v_i)=\{x_i,y_i\}$ so that $x_i \in V(R_j), y_i \in
V(R_{j'})$ for some $\{j,j'\} \subseteq \{1,2,3\}$ with $j\ne j'$;
let the \emph{index} $\sigma_i$ of $v_i$ be defined as
$\{j,j'\}$. Since there is no 4-cycle in $X$, $\sigma_i\ne
\sigma_{i-1}$ for $1\le i\le 9$. Let the \emph{type} $\psi_i$ of
$v_i$ (for $1\le i \le 8$) be defined as 0 if
$\sigma_{i-1}=\sigma_{i+1}$, otherwise let $\psi_i=1$.
Let $i,j,k$ be integers such that $0\le i < j< k\le 10$
which will be determined later. Let
$X_1:=\phi^{-1}(\{v_i,\dots,v_{j-1}\})$,
$X_2:=\phi^{-1}(\{v_j,\dots,v_{k-1}\})$, $X_0=X_1\cup X_2$.
Let $\bM$ be a balanced probability distribution on $\calM(G,X)$,
let $Z_0$ ($Z_1$, $Z_2$) be the maximum number of disjoint
$\bM$-alternating cycles in $G|X_0$ ($G|X_1$, $G|X_2$,
respectively). Let $Y_\ell=|\bM\cap C_\ell|$, for every $\ell$, and
let $Y=Y_i+Y_j+Y_k$. Since $\bM$ is balanced, we have $\Ex(Y)=3$;
moreover, $Y_i\equiv Y_j \equiv Y_k \pmod{2}$. Therefore,
$\Pr(Y=1)=0$; $Y=3$ if and only if $Y_i=Y_j=Y_k=1$; and $ Y=2$ if
and only if $\{Y_i,Y_j,Y_k\}=\{2,0,0\}$.
Assume that $i,j,k$ fulfill the following conditions:
\begin{enumerate}
\item $\Pr(Z_1=0\,|\,Y_i=Y_j=0)=0$, $\Pr(Z_2=0\,|\,Y_j=Y_k=0)=0$, and
$\Pr(Z_0=0\,|\,Y_i=Y_k=0)=0$;
\item for at least one of the cuts $C_i$, $C_j$ or $C_k$, say $C_t$,
there exists an edge $e\in C_t$ such that for at least one of the
two corresponding graphs among $G|X_0$, $G|X_1$,
$G|X_2$, say $G|X_s$, there is an alternating cycle in
$G|X_s$ for any element of $\calM(G,X)$ containing $e$, provided
$Y_i=Y_j=Y_k=1$.
\end{enumerate}
First, we derive $\Ex(Z_0)\ge \frac13$ from these assumptions, then we
prove the existence of such a triple $i,j,k$.
Observe that the first condition yields $\Ex(Z_0\,|\,Y=0)\ge 2$ and
$\Ex(Z_0\,|\,Y=2)\ge 1$. Since $\Ex(Y)=3$, we have $3\cdot
\Pr(Y=0)+\Pr(Y=2) \ge \Pr(Y\ge 4)$. This gives $\Pr(Y\ne 3)\le 4\cdot
\Pr(Y=0) + 2\cdot \Pr(Y=2)$, and hence $\Ex(Z_0\,|\,Y\ne 3)\ge
\frac12$. Let $C_t=\{e_1,e_2,e_3\}$, where $e=e_1$. Let
$p_i=\Pr[\bM\cap C_t=\{e_i\} \wedge Y=3]$, $i=1,2,3$. Clearly
$p_1+p_2+p_3=\Pr(Y=3)$. On the other hand, since $\bM$ is balanced,
$\frac13-p_1 \le \frac13-p_2+\frac13-p_3$ (all elements of
$\calM(G,X)$ containing $e_1$ together with some other edge from $C_t$
contain $e_2$ or $e_3$). Hence, $p_1\ge \frac12\cdot
(\Pr(Y=3)-\frac13)$.
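To spell out the last step: substituting
$p_2+p_3=\Pr(Y{=}3)-p_1$ into this inequality gives
$$\tfrac13-p_1\ \leq\ \tfrac23-(p_2+p_3)\ =\ \tfrac23-\Pr(Y{=}3)+p_1,$$
and rearranging yields $2p_1\geq \Pr(Y{=}3)-\tfrac13$.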
Altogether, in this case
$$
\gathered \Ex(Z_0)=\Ex(Z_0\,|\,Y\ne 3)\cdot \Pr(Y\ne
3)+\Ex(Z_0\,|\,Y=3)\cdot \Pr(Y=3)\ge \\\ge \tfrac12\cdot
\left(1-\Pr(Y=3)\right)+\tfrac12\cdot \left(\Pr(Y=3)-\tfrac13\right) =
\tfrac13.
\endgathered
$$
Now we prove that there is always $i,j,k$ such that both conditions
(1) and (2) are satisfied. Note that (1) is satisfied if $j-i=3$ and
$\psi_{i+1}=1$, or if $j-i\ge 4$; the same holds for $j$ and $k$ (to
observe this, see Figure \ref{fig:tbd}).
\begin{figure}
\centerline{
\includegraphics{tbd.1}
\hfil
\includegraphics{tbd.2}
}
\caption{If there are three consecutive pairs of pairwise distinct
indices contained in $G|X$, there is always an $M$-alternating cycle
for any $M\in \calM(G,X)$ using just the vertical edges (left); the
same is true for any four consecutive pairs (right). }
\label{fig:tbd}
\end{figure}
Consider the sequence $\Psi$ of 8 types occurring in $X$.
Suppose $\Psi$ contains $1111$ as a subsequence, say
$\psi_2=\psi_3=\psi_4=\psi_5=1$. Then for $i=1$, $j=4$, $k=7$ the
condition (1) clearly holds. The condition (2) holds for $G|X_0$ in
this case, see Figure \ref{fig:psi}, left.
One can prove that the following subsequences
are feasible as well by drawing triples of figures (we omit the details):
00011, 01011, 100000, 100010, 100100, 101000, 101010,
100110, 110110, 1000010, 1001010, 1010010, 00000000.
\begin{figure}
\centerline{
\begin{tabular}{ccc}
\includegraphics{psi2.3}
&
\includegraphics{psi1.1}
&
\includegraphics{psi3.1}
\\
\includegraphics{psi2.2}
&
\includegraphics{psi1.2}
&
\includegraphics{psi3.2}
\\
\includegraphics{psi2.1}
&
\includegraphics{psi1.3}
&
\includegraphics{psi3.3}
\end{tabular}
}
\caption{If $\Psi$ contains $1111$ (left), $111$ (center) or $0001$
(right) as a subsequence, then for each perfect matching containing
the bottom-leftmost edge such that $Y=2$ there is an alternating
cycle. Observe that there is always one case out of three which is
not possible. }
\label{fig:psi}
\end{figure}
Similarly, if $\psi_1=\psi_2=\psi_3=1$, then we may pick $i=0$, $j=5$,
and $k=9$ (or even $k=8$ if $\psi_6=1$). In this case the condition
(2) holds for $G|X_1$, see Figure \ref{fig:psi}, center. An analogous
argument applies to $0001$. It follows that $111**\,1$ and
$111****$ are feasible as well, and so are
$0001**\,1$ and $0001****$.
It remains to prove that $\Psi$ always contains at least one feasible
subsequence, which is a routine case analysis.
\end{proof}
\begin{lem}\label{l:2Burl} Let $(T,\phi)$ be a small-cut-decomposition
of a cubic bridgeless graph $G$. Let $t_1,t_2 \in V(T)$ be a pair of
adjacent vertices of degree $2$. Suppose that $|\phi^{-1}(f)|=2$ for
every edge $f \in E(T)$ incident to $t_1$ or $t_2$. Then
$\phi^{-1}(\{t_1,t_2\})$ is a burl.
\end{lem}
\begin{proof}
Let $t_0t_1t_2t_3$ be a subpath of $T$ and let
$C_i=\phi^{-1}(t_{i-1}t_{i})$ for $i=1,2,3$ be an edge-cut of size
$2$. Assume that both $G|\phi^{-1}(t_1)$ and $G|\phi^{-1}(t_2)$
have at most one perfect matching. By Lemma~\ref{l:4Burl} it
suffices to show that $G|\phi^{-1}(\{t_1,t_2\})$ has at least two
perfect matchings. As the hub $G_1$ of $G$ at $t_1$ is cubic and
bridgeless, it contains at least two perfect matchings avoiding any
given edge, by Lemma~\ref{l:smallcount}(1). Let $e_1, e_2 \in E(G_1)$ be the edges in $E(G_1)-E(G)$
corresponding to $C_1$- and $C_2$-contraction, respectively. By
assumption, at most one perfect matching of $G_1$ avoids both $e_1$
and $e_2$. It follows that either two perfect matchings of $G_1$
avoid $e_1$ and contain $e_2$, or one avoids $e_1$ and $e_2$ and one
avoids $e_1$ and contains $e_2$. Let $G_2$ be the hub of $G$ at
$t_2$. The symmetric statement holds for $G_2$. In any case, the
perfect matchings in $G_1$ and $G_2$ can be combined to obtain at
least two perfect matchings of $G|\phi^{-1}(\{t_1,t_2\})$.
\end{proof}
From the definition of a small-cut-decomposition, we immediately get
the following corollary:
\begin{cor}\label{c:2Burl}
Let $(T,\phi)$ be a small-cut-decomposition of a cubic bridgeless
graph $G$, and let $P$ be a path in $T$ in which every vertex has
degree $2$. Suppose there exist three edges $f_1$, $f_2$, $f_3$ of
$T$ incident to vertices of $P$ such that
$|\phi^{-1}(f_1)|=|\phi^{-1}(f_2)|=|\phi^{-1}(f_3)|=2$. Then
$\phi^{-1}(P)$ is a burl.
\end{cor}
Let $B_3$ denote the cubic graph consisting of two vertices joined by
three parallel edges. Lemmas~\ref{l:3Burl} and~\ref{l:2Burl} imply
the following.
\begin{cor}\label{c:TreeBranch}
Let $(T,\phi)$ be a small-cut-decomposition of a cubic bridgeless
graph $G$ and let $P$ be a path in $T$ with $|V(P)|=32$. If for
every $t \in V(P)$, $\deg_T(t)=2$ and the hub of $G$ at $t$ is
isomorphic to $K_4$ or $B_3$, then $\phi^{-1}(P)$ is a burl.
\end{cor}
\begin{proof}
If at least three edges incident to vertices in $V(P)$ correspond to
edge-cuts of size $2$ in $G$ then the corollary holds by
Corollary~\ref{c:2Burl}. Otherwise, since there are $33$ edges of
$T$ incident to vertices of $P$, there must be 11 consecutive edges
incident to vertices in $P$ corresponding to edge-cuts of size $3$.
In this case, the result follows from Lemma~\ref{l:3Burl}.
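To justify the count: the at most two edges corresponding to
$2$-edge-cuts split the remaining (at least $31$) edges into at most
three blocks of consecutive edges, the largest of which contains at
least $\lceil 31/3\rceil=11$ edges. Eleven consecutive such edges are
precisely the edges incident to a subpath of $P$ on $10$ vertices,
and along this subpath every hub is isomorphic to $K_4$ (a hub
isomorphic to $B_3$ would force an incident $2$-edge-cut), so
Lemma~\ref{l:3Burl} indeed applies.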
\end{proof}
\vskip 10pt
\begin{proof}[Proof of Lemma~\ref{l:Klee}]
We proceed by induction on $|Z|$. If $|Z| \leq 6$ then $Z$ is a
twig. In this case the lemma holds since $\beta_1 \geq \beta_2 +
6\alpha$ by {\bf (\ref{e:const3})}. We assume for the remainder of
the proof that $|Z| \geq 7$. It follows that $G'$ is not cyclically
$4$-edge-connected, as $G'$ has no core. Therefore
Corollary~\ref{c:ElemTwigs} is applicable to $G'$. Let $\calY$ be a
collection of disjoint elementary twigs in $G'$ such that every
relevant triangle in $G'$ is contained in an element of $\calY$, and
let $(T,\phi)$ be a $\calY$-maximum small-cut decomposition of
$G'$. By Corollary~\ref{c:ElemTwigs}, the hub at every $t \in V(T)$
with $|\phi^{-1}(t)|\neq \emptyset$ is either an elementary twig, in
which case $t$ is a leaf of $T$, or is cyclically
$4$-edge-connected, in which case it is isomorphic to either $K_4$
or $B_3$.
In the calculations below we will make use of the following claim: If
$\deg_T(t)=2$ for some $t \in V(T)$, then $|\phi^{-1}(t)| \leq 2$.
If this is not the case, the hub at $t$ is isomorphic to $K_4$, and
at least three of its vertices must be vertices of $G$. It follows
that there is an edge $f\in E(T)$ incident to $t$ for which
$|\phi^{-1}(f)|=2$. Let $v \in \phi^{-1}(t)$ be a vertex incident to
an edge in $\phi^{-1}(f)$. Then $C=\phi^{-1}(f) \triangle \delta(v)$
is a 3-edge-cut in $G$. As in the proof of
Lemma~\ref{l:CutDecompose} we can split $t$ into two vertices
$t',t''$ with $(\phi')^{-1}(t')=\{v\}$ and $(\phi')^{-1}(t'')=
\phi^{-1}(t)\setminus \{v\}$. We now have $(\phi')^{-1}(t't'')=C$ and the
new small-cut-decomposition contradicts the maximality of
$(T,\phi)$. This completes the proof of the claim.
Let $t_0 \in V(T)$ be such that $\phi^{-1}(t_0)$ contains the new
vertex or one of the ends of the new edge in $G'$. Since $G$ is
pruned, $G'$ contains at most one irrelevant triangle, and if such a
triangle exists, at least one of its vertices lies in
$\phi^{-1}(t_0)$. As a consequence, for any leaf $t\ne t_0$ of $T$,
$\phi^{-1}(t)$ is a twig. Let $t^{*} \in V(T) \setminus \{t_0\}$ be
such that $\deg_{T}(t^{*})\geq 3$ and, subject to this condition,
the component of $T \setminus \{t^{*}\}$ containing $t_0$ is
maximal. If $\deg_{T}(t) \leq 2$ for every $t \in V(T) \setminus
\{t_0\}$, we take $t^{*}=t_0$ instead.
Let $T_1,\ldots,T_k$ be all the components of $T \setminus
\{t^{*}\}$ not containing $t_0$. By the choice of $t^{*}$, each
$T_i$ is a path. If $|V(T_i)| \geq 33$ for some $1 \leq i \leq k$
then let $T'$ be the subtree of $T_i$ containing a leaf of $T$ and
exactly $32$ other vertices. Let $f$ be the unique edge in
$\delta(T')$.
Let $H$ (resp.~$H'$) be the $\phi^{-1}(f)$-contraction of $G$ (resp.~$G'$)
containing $V(G')\setminus \phi^{-1}(T')$, and let $Z'$ consist of
$V(H') \cap Z$ together with the new vertex created by $\phi^{-1}(f)$-contraction (if it exists). If
$H$ is not pruned then it contains a unique irrelevant triangle
and we contract it, obtaining a pruned graph.
By the induction
hypothesis, either $|Z'| \leq 6$ or we can find a foliage $\calX'$
in $Z'$ with $\fw(\calX')\geq \alpha(|Z'|-2) + \beta_2$. If $|Z'|
\leq 6$ let $\calX' := \emptyset$.
Let $t'$ be a vertex of $T'$ which is not a leaf in $T$. Since
$\deg_T(t')=2$, $|\phi^{-1}(t')|\neq \emptyset$. Therefore
$\phi^{-1}(t')$ is isomorphic to $B_3$ or $K_4$ and we can apply
Corollary~\ref{c:TreeBranch}. This implies that $\phi^{-1}(T')$
contains an elementary twig and a burl that are vertex-disjoint,
where the elementary twig is the preimage of the leaf. Further, we
have $|\phi^{-1}(T')|\leq 8+ 2\cdot32 = 72$, since an elementary twig
has size at most $8$ and the preimage of every non-leaf vertex of
$T'$ has size at most $2$ by the claim above. By
Lemma~\ref{l:Contraction}(3), we can obtain a foliage $\calX$ in $Z$
by adding the twig and the burl to $\calX'$ and possibly removing a
burl (which can be a twig) containing the new element of $H'$
created by $\phi^{-1}(f)$-contraction. It follows that if $|Z'| \geq
7$ then
$$\fw(\calX)\geq \alpha(|Z'|-2) + 2\beta_2 \geq (\alpha|Z|+\beta_2) -
74\alpha +\beta_2 \geq \alpha|Z|+\beta_2,$$ by {\bf
(\ref{e:const4})}, as desired. If $|Z'| \leq 6$ then $|Z| \leq 78$
and
$$\fw(\calX)\geq \beta_1 + \beta_2 \geq 78\alpha +\beta_2 \ge
\alpha|Z| +\beta_2,$$ by {\bf (\ref{e:const5})}.
It remains to consider the case when $|V(T_i)| \leq 32$ for every $1
\leq i \leq k$. Suppose first that $t^{*} \neq t_0$ and that
$|\phi^{-1}(T_0)| \geq 7$, where $T_0$ denotes the component of $T
\setminus t^{*}$ containing $t_0$. Let $f_0$ be the edge incident to
$t^*$ and a vertex of $T_0$. We form the graphs $H$, $H'$ and a set $Z'$
by a $\phi^{-1}(f_0)$-contraction as in the previous case, and
possibly contract a single irrelevant triangle. As before, we find a
foliage $\calX'$ in $Z'$ with $\fw(\calX')\geq \alpha(|Z'|-2) +
\beta_2$. Note that $\phi^{-1}(T_i)$ contains a twig for every $1\leq
i\leq k$. By Lemma~\ref{l:Contraction}(3), we now obtain a foliage
$\calX$ in $Z$ from $\calX'$, adding $k \geq 2$ twigs and possibly
removing one burl (which can be a twig) from $\calX'$. We have
$|\phi^{-1}(T_i)|\leq 8+31 \cdot 2=70$ for every $1 \leq i \leq k$,
and $|\phi^{-1}(t^{*})| \leq 4$. Therefore $|Z| \leq |Z'|+70k+4$. It
follows from {\bf (\ref{e:const5})} that
$$\fw(\calX) \geq \alpha(|Z'|-2) + \beta_2 + (k-1)\beta_1 \geq \alpha|Z|
+\beta_2 -76\alpha +(k-1)(\beta_1-70\alpha) \geq \alpha|Z|+\beta_2.$$
Now we can assume $t^*=t_0$ or $|\phi^{-1}(T_0)|\leq 6$. First
suppose $t^* \neq t_0$ but $|\phi^{-1}(T_0)|\leq 6$. Then again
$|\phi^{-1}(t^*)|\leq 4$, so we have $|Z|\leq 70k+10$. Let $\calX$ be
the foliage consisting of twigs in $T_1,\ldots,T_k$. Thus by {\bf
(\ref{e:const6})}, we have
\begin{equation*}\fw(\calX)=k\beta_1 \geq (\alpha|Z|+\beta_2) +
k(\beta_1-70\alpha)-10\alpha -\beta_2 \geq
\alpha|Z|+\beta_2.\end{equation*}
Finally we can assume $t^{*} = t_0$. Then $|\phi^{-1}(t^*)|\leq 4$,
unless $k=1$ and $\phi^{-1}(t^*)$ is an elementary twig. In either
case, $|Z|\leq 70k+8$ and the equation above applies.
\end{proof}
\section{Proof of Lemma~\ref{l:MainInduction}}\label{s:MainInduction}
The following lemma is a direct consequence of a theorem of Kotzig,
stating that any graph with a unique perfect matching contains a bridge
(see~\cite{esperetkss10}).
\begin{lem}
\label{l:prop-double}
Every edge of a cyclically $4$-edge-connected cubic graph with at
least six vertices is contained in at least two perfect matchings.
\end{lem}
Let $G$ be a cubic graph. For a path $v_1v_2v_3v_4$, the graph
obtained from $G$ by {\em splitting along the path $v_1v_2v_3v_4$} is
the cubic graph $G'$ defined as follows: remove the vertices $v_2$
and $v_3$ and add the edges $v_1v_4$ and $v'_1v'_4$ where $v'_1$ is
the neighbor of $v_2$ different from $v_1$ and $v_3$ and $v'_4$ is the
neighbor of $v_3$ different from $v_2$ and $v_4$. The idea of this
construction (and its application to the problem of counting perfect
matchings) originally appeared in~\cite{voorhoeve79}. We say that a
perfect matching $M$ of $G$ is a \emph{canonical extension} of a
perfect matching $M'$ of $G'$ if $M \triangle M' \subseteq E(G)
\triangle E(G')$, i.e. $M$ and $M'$ agree on the edges shared by $G$
and $G'$.
\begin{lem}
\label{l:splitting}
Let $G$ be a cyclically 4-edge-connected cubic graph with $|V(G)| \geq
6$. If $G'$ is the graph obtained from $G$ by splitting along some
path $v_1v_2v_3v_4$, then
\begin{enumerate}
\item $G'$ is cubic and bridgeless;
\item $G'$ contains at most $2$ irrelevant triangles;
\item $\fw(G) \geq \fw(G')-2\beta_1$;
\item Every perfect matching $M'$ of $G'$ avoiding the edge $v_1v_4$
has a canonical extension in $G$.
\end{enumerate}
\end{lem}
\begin{proof}\mbox{}\smallskip
\begin{enumerate}
\item The statement is a consequence of an easy lemma
in~\cite{esperetkk11}, stating that the cyclic edge-connectivity can
drop by at most two after a splitting.
\item Since $G$ is cyclically 4-edge-connected and has at least six
vertices, it does not contain any triangle. The only way an
irrelevant triangle can appear in $G'$ is that $v_1$ and $v_4$ (or
$v_1'$ and $v_4'$) have precisely one common neighbor (if they have
two common neighbors, the two arising triangles share the new edge
$v_1v_4$ or $v_1'v_4'$ and hence are relevant).
\item At most two burls from a foliage of $G'$ contain $\{v_1,v_4\}$
or $\{v_1',v_4'\}$. Therefore, a foliage of $G$ can be obtained from
any foliage of $G'$ by removing at most two burls (observe that this
is precisely here that we use the fact that being a burl is a local
property, independent of the rest of the graph).
\item The canonical extension is obtained (uniquely) from $M' \cap
E(G)$ by adding either $v_2v_3$ if $v'_1v'_4 \not\in M'$ or
$\{v'_1v_2,v_3v_4'\}$ if $v'_1v'_4 \in M'$.\qedhere
\end{enumerate}
\end{proof}
\vskip 10pt
\begin{proof}[Proof of Lemma~\ref{l:MainInduction}]
We proceed by induction on $|V(G)|$. The base case $|V(G)|=6$
holds by Lemma~\ref{l:prop-double} and {\bf (\ref{e:const7})}.
For the induction step, consider first the case that $G$ is
cyclically 4-edge-connected. Fix an edge $e=uv \in E(G)$. Our goal
is to show that $e$ is contained in at least
$2^{\alpha|V(G)|-\fw(G)+\gamma}$ perfect matchings.
Let $w \neq u$ be a neighbor of $v$ and let $w_1$ and $w_2$ be the
two other neighbors of $w$. Let $x_i,y_i$ be the neighbors of $w_i$
distinct from $w$ for $i=1,2$. Let $G_1,\ldots,G_4$ be the graphs
obtained from $G$ by splitting along the paths $vww_1x_1$,
$vww_1y_1$, $vww_2x_2$ and $vww_2y_2$. Let $G'_i$ be obtained from
$G_i$ by contracting irrelevant triangles for $i=1,\ldots,4$. By
Lemma~\ref{l:splitting}(2) we have $|V(G'_i)| \geq |V(G)|-6$.
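In detail, each splitting deletes exactly the two internal vertices
of its path, so $|V(G_i)|=|V(G)|-2$, and at most two irrelevant
triangles are contracted when passing from $G_i$ to $G'_i$, each
contraction removing two further vertices.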
Suppose first that one of the resulting graphs, without loss of
generality $G'_1$, does not have a core. By Corollary~\ref{c:Klee},
Lemma~\ref{l:Triangle} and Lemma~\ref{l:splitting}, we have $$\alpha
|V(G)| \leq \alpha (|V(G'_1)| + 6) \leq \fw(G'_1)+ 7\alpha -\beta_2
\leq \fw(G_1)+ 7\alpha -\beta_2 \leq \fw(G)+ 2\beta_1 +7\alpha
-\beta_2.$$ Therefore $$\alpha|V(G)|-\fw(G)+\gamma \leq \gamma +
2\beta_1 +7\alpha -\beta_2 \leq 1$$ by {\bf (\ref{e:const8})} and the
lemma follows from Lemma~\ref{l:prop-double}.
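In detail, $G$ itself is cyclically $4$-edge-connected with at least
six vertices in the present case, so Lemma~\ref{l:prop-double} yields
$m^{\star}(G) \geq 2 = 2^{1} \geq 2^{\alpha|V(G)|-\fw(G)+\gamma}$.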
We now assume that all four graphs $G'_1,\ldots,G'_4$ have a
core. By Lemma~\ref{l:splitting}(4), every perfect matching
containing $e$ in $G_i$ canonically extends to a perfect matching
containing $e$ in $G$. Let $S$ be the sum of the number of perfect
matchings of $G_i$ containing $e$, for $i\in\{1,2,3,4\}$. By
induction hypothesis and Lemmas~\ref{l:Triangle}
and~\ref{l:splitting}, $S \ge 4 \cdot
2^{\alpha(|V(G)|-6)-\fw(G)-2\beta_1+\gamma}$. On the other hand, a
perfect matching $M$ of $G$ containing $e$ is the canonical
extension of a perfect matching containing $e$ in precisely three of
the graphs $G_i$, $i\in\{1,2,3,4\}$. For instance if $w_1y_1,ww_2
\in M$, then $G_2$ is the only graph (among the four) that does not
have a perfect matching $M'$ that canonically extends to $M$ (see
Figure~\ref{f:voo}). As a consequence, there are precisely $S/3$
perfect matchings containing $e$ in $G$. Therefore,
$$m^{\star}(G) \ge \tfrac{4}{3}\cdot
2^{\alpha(|V(G)|-6)-\fw(G)-2\beta_1+\gamma} \geq
2^{\alpha|V(G)|-\fw(G)+\gamma},$$ by {\bf (\ref{e:const9})}, as
desired.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{voo}
\caption{Perfect matchings in only three of the $G_i$'s canonically
extend to a given perfect matching of $G$ containing $e$.} \label{f:voo}
\end{figure}
It remains to consider the case when $G$ contains a cyclic edge-cut
$C$ of size at most $3$. Suppose first that for some such edge-cut $C$,
both $C$-contractions $H_1$ and $H_2$ have a core. Then, by
Lemma~\ref{l:Contraction}(3), $\fw(G) \geq \fw(H_1)+\fw(H_2)-2\beta_1$
and, by induction hypothesis, applied to $H_1$ and $H_2$ (after
possibly contracting one irrelevant triangle in each) and
Lemma~\ref{l:Contraction},
$$m^{\star}(G) \geq m^{\star}(H_1)m^{\star}(H_2) \geq
2^{\alpha|V(G)|-4\alpha-\fw(G)-2\beta_1 +2\gamma} \geq
2^{\alpha|V(G)|-\fw(G)+\gamma},$$ by {\bf (\ref{e:const10})}, as desired.
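To verify the middle exponent, write $H'_i$ for the graph to which
the induction hypothesis is applied, that is, $H_i$ with possibly one
irrelevant triangle contracted. By Lemma~\ref{l:Triangle},
$m^{\star}(H_i)\geq m^{\star}(H'_i)$ and $\fw(H'_i)\leq \fw(H_i)$;
moreover $|V(H'_1)|+|V(H'_2)| \geq |V(H_1)|+|V(H_2)|-4 \geq
|V(G)|-4$, since the two $C$-contractions have $|V(G)|+2$ vertices in
total if $|C|=3$ and $|V(G)|$ vertices if $|C|=2$. Combined with
$\fw(H_1)+\fw(H_2) \leq \fw(G)+2\beta_1$, this gives the middle
exponent, and the final inequality amounts to $4\alpha+2\beta_1 \leq
\gamma$, which is {\bf (\ref{e:const10})}.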
Finally, if for every cyclic edge-cut $C$ of size at most $3$ only one
$C$-contraction has a core, we apply Corollary~\ref{c:ElemTwigs} to
$G$. Let $(T,\phi)$ be the resulting small-cut-decomposition of $G$.
There exists a unique vertex $t \in V(T)$ such that the hub $H$ of $G$
at $t$ is cyclically $4$-edge-connected with $|V(H)|\geq 6$. Let
$T_1,\ldots, T_k$ be the components of $T - t$ and let $Z_i =
\phi^{-1}(V(T_i))$. We apply Lemma~\ref{l:Klee} to
$Z_1,\ldots,Z_k$. Note that Lemma~\ref{l:Klee} is indeed applicable,
as $G$ is pruned, and therefore every triangle in $G$ belongs to an
elementary twig. Consequently, no edge-cut corresponding to an edge of $(T,
\phi)$ separates exactly $3$ vertices of $G$.
Let $\calX_1,\calX_2,\ldots,\calX_k$ be the foliages satisfying the
lemma. Let $\calX_0$ be a maximum-weight foliage in $H$ avoiding the new
vertices and edges created by the contraction of the edge-cuts
$\delta(Z_1),\ldots,\delta(Z_k)$. Then $\fw(\calX_0) \geq
\fw(H)-k\beta_2$, as $H$ contains no twigs (it is cyclically
4-edge-connected). Since $\calX_0 \cup \calX_1 \cup \ldots \cup \calX_k$
is a foliage in $G$ we have
$$\fw(G) \geq \fw(H)-k\beta_2 + \sum_{i=1}^{k}\fw(\calX_i) \geq \fw(H) +
\alpha \sum_{i=1}^{k}|Z_i|,$$ by the choice of
$\calX_1,\ldots,\calX_k$. It remains to observe that
$$m^{\star}(G) \geq m^{\star}(H) \geq 2^{\alpha|V(H)|-\fw(H)+\gamma}
\geq 2^{\alpha(|V(G)|-\sum_{i=1}^{k}|Z_i|)-\fw(H)+\gamma} \geq
2^{\alpha|V(G)|-\fw(G)+\gamma},$$ by the above.
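Here the first inequality follows by applying
Lemma~\ref{l:Contraction}(2) along the sequence of cut-contractions
producing $H$ from $G$, each discarded factor being at least $1$, and
the second is the induction hypothesis applied to $H$, which is
triangle-free (hence pruned) and has fewer vertices than $G$.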
\end{proof}
\section{Concluding remarks}
\subsection{Improving the bound.} We expect that the bound in
Theorem~\ref{t:Main} can be improved at the expense of more careful
case analysis. In particular, it is possible to improve the bound on
the length of the path in Corollary~\ref{c:TreeBranch}. We have chosen
not to do so in an attempt to keep the argument as straightforward as
possible.
In~\cite{cyganps09} it is shown that for some constant $c>0$ and every
integer $n $ there exists a cubic bridgeless graph on at least $n$
vertices with at most $c2^{n/17.285}$ perfect matchings.
\subsection{Number of perfect matchings in $k$-regular graphs.}
In~\cite[Conjecture 8.1.8]{lovaszp86} the following generalization of
the conjecture considered in this paper is stated. A graph is said to
be \emph{matching-covered} if every edge of it belongs to a perfect
matching.
\begin{conj}\label{conj:kregular}
For $k \geq 3$ there exist constants $c_1(k), c_2(k) > 0$ such that
every $k$-regular matching-covered graph contains at least $c_2(k)
c_1(k)^{|V(G)|}$ perfect matchings. Furthermore, $c_1(k) \to \infty$
as $k \to \infty$.
\end{conj}
While our proof does not seem to extend to a proof of this
conjecture, the following weaker statement can be deduced from
Theorem~\ref{t:Main}. We are grateful to Paul Seymour for suggesting
the idea of the following proof.
\begin{thm}\label{t:kregular}
Let $G$ be a $k$-regular $(k-1)$-edge-connected graph on $n$
vertices for some $k \geq 4$. Then
$$\log_2{m(G)} \geq (1-\tfrac1k)(1-\tfrac2k) \tfrac{n}{\ceps}.$$
\end{thm}
\begin{proof}
It follows by Edmonds' characterization of the perfect matching
polytope~\cite{edmonds65} that there exists a probability
distribution $\bM$ on $\calM(G)$ such that for every edge $e\in
E(G)$, $\Pr[e\in \bM]=\tfrac1k$.
We choose a triple of perfect matchings of $G$ as follows. Let $M_1
\in \calM(G)$ be arbitrary. We have $$\Ex[|\bM \cap
M_1|]=\frac{n}{2k}.$$ Therefore we can choose $M_2 \in \calM(G)$ so
that $|M_2 \cap M_1|\leq \frac{n}{2k}$. Let $Z \subseteq V(G)$ be
the set of vertices not incident with an edge of $M_1 \cap
M_2$. Then $|Z| \geq (1-\tfrac1k)\, n$. For each $v \in Z$ we
have $$\Pr[\bM \cap \delta(\{v\}) \cap (M_1 \cup M_2) =
\emptyset]=1-\tfrac2k.$$ Therefore the expected number of vertices
whose three incident edges are in $\bM$, $M_1$ and $M_2$
respectively, is at least $(1-\tfrac1k)(1-\tfrac2k)\,n$. It follows
that we can choose $M_3 \in \calM(G)$ so that the subgraph $G'$ of
$G$ with $E(G')=M_1 \cup M_2 \cup M_3$ has at least
$(1-\tfrac1k)(1-\tfrac2k)\,n$ vertices of degree three. Note that
$G'$ is by definition matching-covered. It follows that the only
bridges in $G'$ are edges joining pairs of vertices of degree
one. Let $G''$ be obtained from $G'$ by deleting vertices of degree
one and replacing every maximal path whose internal vertices all
have degree two by a single edge. The graph $G''$ is cubic and
bridgeless and therefore by Theorem~\ref{t:Main} we have
$$\log_2{m(G)}> \log_2{m(G')} \geq \log_2{m(G'')} \geq
\tfrac1{\ceps}|V(G'')| \geq (1-\tfrac1k)(1-\tfrac2k)
\tfrac{n}{\ceps},$$ as desired.\end{proof}
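For the reader's convenience, the two expectation computations used in the proof above can be spelled out as follows; this is only a restatement via linearity of expectation, using $|M_1|=n/2$ and $\Pr[e\in \bM]=\tfrac1k$ as stated in the proof:
$$\Ex[|\bM \cap M_1|]=\sum_{e\in M_1}\Pr[e\in \bM]=\frac{n}{2}\cdot\frac{1}{k}=\frac{n}{2k},$$
and
$$\sum_{v\in Z}\Pr[\bM \cap \delta(\{v\}) \cap (M_1 \cup M_2)=\emptyset]
=|Z|\Big(1-\frac{2}{k}\Big)\ge \Big(1-\frac1k\Big)\Big(1-\frac2k\Big)n.$$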
\bibliographystyle{plain}
\section{Introduction}
We work in a Green domain $\Omega$ of ${\mathbb R}^n$, that is, an arbitrary regular domain of ${\mathbb R}^n$ if $n\ge 3$, or a domain with non-polar complement if $n=2$. Recall that the Green kernel (or function) of $\Omega$ is a symmetric function $G$ defined on $\Omega\times \Omega$ with values in $]0,+\infty[$ having the following properties:

1. $G$ is l.s.c. on $\Omega\times\Omega$ and continuous off the diagonal of $\Omega\times \Omega$.

2. For every $y\in \Omega$, the function $G(.,y)$ is a potential, harmonic in $\Omega\setminus \{y\}$.

Any other function $G'$ on $\Omega\times \Omega$ with values in $]0,+\infty[$ possessing properties 1. and 2. is of the form $G'(.,y)=\varphi(y)G(.,y)$ for every $y\in \Omega$, where $\varphi$ is a finite continuous function $>0$ on $\Omega$. Moreover, since $G$ is symmetric, the function $G'$ is symmetric if and only if $\varphi$ is constant. If $\Omega={\mathbb R}^n$, $n\ge 3$, then $G$ is given by $G(x,y)=\frac{1}{||x-y||^{n-2}}$, up to multiplication by a constant $>0$.

The notion of a Green function or kernel was extended to the general setting of Brelot harmonic spaces satisfying the axiom of uniqueness by R.-M. Herv\'e in \cite{He}.
Let $U$ be a fine domain of $\Omega$, that is, a domain in the sense of the fine topology on $\Omega$. Recall that the fine topology, introduced by Cartan in 1940, is the coarsest topology on $\Omega$ making all superharmonic functions on $\Omega$ continuous (for more details on this topology we refer to \cite[Chapter 1]{F1} and \cite{DO}). For every $y\in U$, we denote by $G_U(.,y)$ the function defined on $U\setminus \{y\}$ by $G_U(.,y)=G(.,y)-\widehat R_{G(.,y)}^{\complement U}$. This function is finely superharmonic and $\ge 0$ in $U\setminus \{y\}$, and the point $y$ is polar, so it extends by fine continuity to $U$ as a finely superharmonic function on $U$, still denoted by $G_U(.,y)$. The function $(x,y)\mapsto G_U(x,y)$ defined on $U\times U$ is called a fine Green kernel of $U$. By \cite[Th\'eor\`eme, p. 203]{F0}, for every $y\in U$ the function $G_U(.,y)$ is a fine potential in $U$, and every fine potential in $U$ which is finely harmonic in $U\setminus \{y\}$ is of the form $\alpha(y)G_U(.,y)$, where $\alpha(y)$ is a constant $>0$ depending only on $U$ and $y$.
One may then ask whether the fine Green kernel $G_U$ of $U$ is regular, in the sense that $G_U$ is finely l.s.c. on $U\times U$ and finely continuous off the diagonal of $U\times U$. In the classical case of a Euclidean domain of ${\mathbb R}^n$, property 2. of a Green kernel of $\Omega$ is a consequence of the Harnack principle for positive harmonic functions in $\Omega$. The Harnack principle, however, does not hold for finely harmonic functions in a fine domain; nevertheless, these functions possess a closely related convergence property, and it is this property that will allow us to answer the question raised above. More precisely, we establish a convergence property for sequences and families of uniformly finely locally bounded finely harmonic functions, and thanks to this property, together with a comparison between the natural topology and the fine topology of $U$, we show that the kernel $G_U$ is regular.

The results of this work remain valid in the general setting of a $\cal P$-harmonic space with countable base, in Brelot's axiomatic theory, satisfying axiom (D) and the axiom of uniqueness, and in which the fine topology is coarser than the adjoint fine topology; for such spaces we refer to \cite{He}. We have restricted ourselves to a Green domain of ${\mathbb R}^n$ for reasons of simplicity only.
{\bf Notations and definitions}:
Throughout this work we place ourselves in a regular domain $\Omega$ of ${\mathbb R}^n$, with non-polar complement if $n=2$, and we consider a fine domain $U$ of $\Omega$, that is, a domain in the sense of the fine topology of $\Omega$. The Green kernel of $\Omega$ is simply denoted by $G$. We use the word fine (finely) to distinguish notions relative to the fine topology from those relative to the Euclidean topology of $U$, that is, the topology induced on $U$ by the usual topology of ${\mathbb R}^n$. For every subset $A$ of $U$, we denote by $\overline A$ the closure of $A$ in $\Omega$ in the Euclidean topology and by $\tilde A$ the closure of $A$ in $\Omega$ in the fine topology. We write ${\rm f}$-$\lim$ and ${\rm f}$-$\liminf$ for the limit and the lower limit in the fine topology. We also denote by ${\cal S}(U)$ the convex cone of finely superharmonic functions $\ge 0$ in $U$ in the sense of \cite{F1}. If $f$ is a function on $U$ with values in $\overline {\mathbb R}$, we denote by $\widehat f$ the finely l.s.c. regularization of $f$, that is, the greatest finely l.s.c. minorant of $f$. The fine Green kernel of $U$ is denoted by $G_U$; for more details on this kernel we refer to \cite{F0}.
\section{A convergence property for families and sequences of finely harmonic functions}
By \cite[Th\'eor\`eme 3.1, p. 107]{EK1}, there exists an absolutely continuous resolvent $(V_\lambda)$ of Borel kernels on $U$ whose cone of excessive functions that are finite $(V_\lambda)$-a.e. is the cone $\cal S(U)$. It then follows from \cite[Theorem 4.4.6, p. 136]{BBC} that $\cal S(U)$ is a standard $H$-cone of functions. We equip it with the natural topology \cite[Section 4.5, p. 141]{BBC}. Recall that this topology is induced on $\cal S(U)$ by that of a locally convex vector space $E$ in which $\cal S(U)$ is a salient convex cone, and that, for this topology, if a filter $\cal F$ on $\cal S(U)$ converges, then $\lim_{\cal F}=\sup_{M\in \cal F}{\widehat {\inf}}_{u\in M}u$ (\cite[Theorem 4.5.2]{BBC}). Moreover, for this topology $\cal S(U)$ is locally compact and admits a compact base \cite[Corollaire 2, p. 110]{EK1}.
\begin{lemma}\label{lemma2.1}
Let $x_0\in U$. Then there exists a fine neighbourhood $V\subset U$ of $x_0$, compact in the Euclidean topology, such that the restriction to $V$ of every function $u\in \cal S(U)$ is l.s.c. (in the Euclidean topology).
\end{lemma}
{\it Proof.} Since $\cal S(U)$ is a standard $H$-cone of functions, by \cite[Definition, p. 104 and Theorem 4.4.6]{BBC} there exists a sequence $(s_n)$ of functions of $\cal S(U)$ such that every function $s\in \cal S(U)$ is the upper envelope of a subsequence of $(s_n)$. Let $x_0\in U$; by \cite[Lemma, p. 114]{F5} there exists a fine neighbourhood $V$ of $x_0$, compact in the Euclidean topology, such that the restriction to $V$ of every function $s_n$ is continuous in the Euclidean topology. It follows that the restriction to $V$ of every function $s\in \cal S(U)$ is l.s.c. in the Euclidean topology.
\begin{theorem}\label{thm2.2}
Let $(h_i)_{i\in I}$ be a family of finely harmonic functions $\ge 0$, uniformly finely locally bounded in $U$, and let $\cal F$ be a filter on $I$. Suppose that the family $(h_i)$ converges along $\cal F$ to a function $h\in \cal S(U)$ in the topology of $\cal S(U)$. Then $h$ is finely harmonic in $U$ and the family $(h_i)$ converges along $\cal F$ to $h$, uniformly finely locally in $U$.
\end{theorem}
{\it Proof.} Working finely locally and adding a constant if necessary, we may assume that $0\le h_i\le c$ in $U$ for some constant $c>0$ and every $i\in I$. We have $\lim_{i,\cal F}h_i=\sup_{M\in \cal F}{\widehat {\inf_{i\in M}}}h_i=h\in \cal S(U)$ by \cite[Theorem 4.5.2]{BBC}. On the other hand, $c=c-h_i+h_i$ and $\lim_{i,\cal F}h_i=h$ in $\cal S(U)$, hence $\lim_{i,\cal F}(c-h_i)=c-h$ in $\cal S(U)$. Let $x_0\in U$. By the preceding lemma there exists a fine neighbourhood $V$ of $x_0$, compact in the Euclidean topology, on which the functions $\widehat{\inf_{i\in M}}h_i$, $M\in \cal F$, are l.s.c. in the Euclidean topology and $h$ is continuous in the Euclidean topology. Let $\epsilon >0$. Since $V$ is compact, we can find $M_0\in \cal F$ such that $h-\widehat{\inf_{i\in M}}h_i\le \epsilon$ on $V$ for every $M\in \cal F$ containing $M_0$. We deduce that for every $M\in \cal F$ containing $M_0$ we have $h-h_i\le \epsilon$ on $V$ for every $i\in M$. Applying the same procedure to the functions $c-h_i$, $i\in I$, we can find a fine neighbourhood $W$ of $x_0$, compact in the Euclidean topology and contained in $V$, and a set $M_1\in \cal F$, such that $h_i-h<\epsilon$ on $W$ for every $M\supset M_1$ and every $i\in M$. We then have $|h-h_i|<\epsilon$ on $W$ for every $M\in \cal F$ containing $M_0\cup M_1$ and every $i\in M$. Thus the family $(h_i)$ converges along $\cal F$ to $h$ uniformly on $W$.
\begin{cor}\label{cor2.3}
Let $I$ be an upward directed ordered set and let $(h_i)_{i\in I}$ be an increasing net of finely harmonic functions in $U$. If $(h_i)$ is uniformly finely locally bounded in $U$, then $h=\sup_ih_i$ is finely harmonic, and $(h_i)$ converges finely locally uniformly to $h$ along the filter of sections of $I$.
\end{cor}
{\it Proof.} It is clear that $h=\sup_{i\in I}h_i$ is finely harmonic in $U$. Let $\cal F$ be the filter of sections of $I$. Then $(h_i)$ converges along $\cal F$ to the function $h=\sup_ih_i\in \cal S(U)$. The corollary now follows immediately from the preceding theorem.
\begin{cor}\label{cor2.4} Let $U$ be a finely open subset of $\Omega$ and let $(h_n)$ be a uniformly finely locally bounded (i.e. in the sense of the fine topology) sequence of finely harmonic functions in $U$. Then one can extract a subsequence $(h_{n_k})$ which converges uniformly finely locally to a finely harmonic function $h$ in $U$.
\end{cor}
{\it Proof.} Working finely locally and adding a constant to the functions $h_n$ if necessary, we may assume that there exists a constant $c>0$ such that $0\le h_n\le c$ for every integer $n$. We can then extract from $(h_n)$ a subsequence $(h_{n_k})$ which converges, in the sense of the natural topology of $\cal S(U)$, to a finely superharmonic function $h\ge 0$. It now suffices to apply Theorem \ref{thm2.2} to the family $(h_{n_k})_k$ and the filter of neighbourhoods of $+\infty$ in ${\mathbb N}$.
\section{Regularity of the fine Green kernel}
Recall that by \cite[Corollaire 2, p. 110]{EK1} the cone $\cal S(U)$, equipped with the natural topology, admits a compact base. Let $B$ be a compact base of $\cal S(U)$ and let $\Phi$ be a positive continuous linear form on $\cal S(U)$ defining $B$, i.e. $B=\{s\in \cal S(U): \Phi(s)=1\}$. For every $y\in U$, set $P_y=\frac{G_U(.,y)}{\Phi(G_U(.,y))}$. The map $\varphi: U\longrightarrow B$ defined by $\varphi(y)=P_y$ is injective, and we identify $U$ with its image under $\varphi$. The topology induced on $U$ by that of $B$ will be called the natural topology of $U$. This topology does not depend on the base $B$: indeed, if $B_1$ and $B_2$ are two compact bases of $\cal S(U)$, then $B_1$ and $B_2$ are homeomorphic, so the topologies induced on $U$ by those of $B_1$ and $B_2$ coincide.

We first compare the natural topology of $U$ with the fine topology. The following proposition and its two corollaries were proved in \cite{EKF1}.
\begin{prop}\label{prop3.1} {\rm (}\cite[Corollary 3.15]{EKF1}{\rm )}
The function $U\ni y\mapsto P_y\in {\cal S}(U)$ is finely continuous on $U$.
\end{prop}
\begin{proof}
Let $x_0,z\in U$ with $x_0\neq z$, and let $V$ be a fine neighbourhood of $z$ such that $x_0\notin {\overline V}$. By the Harnack principle, given $\epsilon>0$ we can find an open neighbourhood $W$ of $x_0$ such that
$$(1-\epsilon)\inf_{y\in V}G(x_0,y)\le \inf_{y\in V} G(x,y)\le (1+\epsilon)\inf_{y\in V}G(x_0,y)$$
for every $x\in W$. We deduce that
$$(1-\epsilon)\inf_{y\in V}G(x_0,y)\le \widehat\inf_{y\in V} G(x_0,y)\le (1+\epsilon)\inf_{y\in V}G(x_0,y).$$
Since $x_0$ and $V$ are arbitrary, it follows that ${\rm f}$-$\lim{\widehat \inf}_{y\to z} \ G(.,y)= G(.,z)$ in $U\setminus \{z\}$, hence everywhere. Let $\cal U$ be an ultrafilter on $U$ finer than the filter of fine neighbourhoods of $z$. We have, in ${\cal S}(U)$,
$$G(.,z)|U=\lim_{y,\cal U}G(.,y)|U=\lim_{y,\cal U}G_U(.,y)+\lim_{y,\cal U}\widehat R_{G(.,y)}^{\complement U}|U.$$
On the other hand, for every $x\in U$,
$$(\lim_{y,\cal U}\widehat R_{G(.,y)}^{\complement U})(x)=(\sup_{M\in {\cal U}}\widehat \inf_{y\in M}\widehat R_{G(.,y)}^{\complement U})(x)
\le \lim_{y,\cal U}\widehat R_{G(.,y)}^{\complement U}(x)= \widehat R_{G(.,z)}^{\complement U}(x)$$
since the function $y\mapsto \widehat R_{G(.,y)}^{\complement U}(x)=
\widehat R_{G(.,x)}^{\complement U}(y)$ is finely continuous on $U$. We deduce that $s=\lim_{y,\cal U} G_U(.,y)\ge G_U(.,z)>0$. We also have $s\le \lim_{y,\cal U}G(.,y)|U=G(.,z)|U$, hence $s\in \cal S(U)$. Moreover $G(.,y)|U=G_U(.,y)+\widehat R_{G(.,y)}^{\complement U}|U$, whence, passing to the limit along $\cal U$,
$$G(.,z)|U=s+\lim{_\cal U}\widehat R_{G(.,y)}^{\complement U}|U\ge s+\widehat R_{G(.,z)}^{\complement U}|U,$$
which proves that $s$ is a fine potential, finely harmonic in $U\setminus \{z\}$, hence of the form $\alpha G_U(.,z)$ for some $\alpha \in ]0,1]$ by \cite[Theorem, p. 203]{F0}. It follows that
$$\lim_{y,\cal U}P_y=\lim_{y,\cal U}\frac{G_U(.,y)}{\Phi(G_U(.,y))}=\frac{s}{\Phi(s)}=P_z.$$
We finally deduce that ${\rm f}$-$\lim_{y\to z}P_y=P_z$, which proves that the function $U\ni y\mapsto P_y$ is finely continuous and completes the proof.
\end{proof}
\begin{cor}\label{cor3.2} {\rm (}\cite[Corollary 3.15]{EKF1}{\rm )}
The natural topology of $U$ is coarser than the fine topology of $U$.
\end{cor}
The following proposition is just part (a) of \cite[Lemma 3.14]{EKF1}.
\begin{prop}\label{prop3.4} The function $g$ defined on $U$ by $g(y)=\Phi(G_U(.,y))$ is finely continuous on $U$.
\end{prop}
{\it Proof.} Let $z\in U$ and let $\cal U$ be an ultrafilter finer than the filter of fine neighbourhoods of $z$. We have $\lim\widehat{\inf}_{\cal U}G_U(.,y)\le \liminf_{\cal U} G(.,y) \le G(.,z)$. Let $V$ be a finely open set such that $V\subset {\overline V}\subset U$ and $z\notin {\overline V}$. Clearly the family $(G_U(.,y))_{y\in \complement V}$ is uniformly finely locally bounded in $V$, so by Theorem \ref{thm2.2} this family converges uniformly finely locally to $G_U(.,z)$ in $V$, since ${\rm f}$-$\lim_{y\to z}G_U(x,y)=G_U(x,z)$ for every $x\in U$. We deduce that $\lim_{y,\cal U}G_U(.,y)=G_U(.,z)$ in $U\setminus \{z\}$, hence everywhere, since $\{z\}$ is polar. Therefore $\lim_{\cal U} \Phi(G_U(.,y))=\Phi(G_U(.,z))$. It follows that ${\rm f}$-$\lim_{y\to z} g(y)={\rm f}$-$\lim_{y\to z}\Phi(G_U(.,y))=\Phi(G_U(.,z))=g(z)$. Since $z$ is arbitrary, we conclude that $g$ is finely continuous on $U$.
\begin{lemma}\label{lemma3.4}
Let $\alpha >0$. Then the set $A=\{y\in U: \Phi(G_U(.,y))\ge \alpha\}$ is compact in the Euclidean topology.
\end{lemma}
{\it Proof.} For every $y\in A$ we have $\Phi(G(.,y)|U)=\Phi(G_U(.,y))+\Phi(\widehat R_{G(.,y)}^{\complement U}|U) \ge \Phi(G_U(.,y))\ge \alpha$, hence $A\subset \{y\in \Omega: \Phi(G(.,y)|U)\ge \alpha\}$. Now $\liminf_{y\to z}G(.,y)=0$ for every $z\in \partial \Omega$ because $\Omega$ is regular, so $A$ is relatively compact in $\Omega$. Let $(y_n)$ be a sequence of points of $A$ converging to some $y\in \overline\Omega$; then $y\in \Omega$ by the above. On the other hand, $G(.,y_n)|U=G_U(.,y_n)+\widehat R_{G(.,y_n)}^{\complement U}|U$ for every $n$, and we can extract from $(y_n)$ a subsequence $(y_{n_k})$ such that the sequences $(G_U(.,y_{n_k}))$ and $(\widehat R_{G(.,y_{n_k})}^{\complement U}|U)$ converge in $\cal S(U)$. Passing to the limit as $k\to +\infty$, we deduce that $G(.,y)|U=\lim_{k\to +\infty}G_U(.,y_{n_k})+\lim_{k\to +\infty}\widehat R_{G(.,y_{n_k})}^{\complement U}|U$, so $\lim_{k\to +\infty}G_U(.,y_{n_k})$ is finely harmonic $\ge 0$ in $U\setminus\{y\}$, since $G(.,y)|U$ is finely harmonic in $U\setminus\{y\}$. Moreover, $\liminf R_{G(.,y_{n_k})}^{\complement U}|U\ge \widehat R_{G(.,y)}^{\complement U}|U$ and, after regularization, $\lim_{k\to \infty} \widehat R_{G(.,y_{n_k})}^{\complement U}|U\ge \widehat R_{G(.,y)}^{\complement U}|U$; consequently, $\lim_{k\to +\infty}G_U(.,y_{n_k})\le G_U(.,y)$. Hence the function $\lim_{k\to +\infty}G_U(.,y_{n_k})$ is a fine potential in $U$, finely harmonic in $U\setminus \{y\}$. It then follows from \cite[Theorem, p. 203]{F0} that $\lim_{k\to +\infty}G_U(.,y_{n_k})=\gamma G_U(.,y)$ for some $\gamma \in [0,1]$. We deduce that $\Phi(G_U(.,y))\ge \gamma \Phi(G_U(.,y))=\lim_{k\to +\infty} \Phi(G_U(.,y_{n_k}))\ge \alpha$, which proves that $y\in A$. Hence $A$ is compact in the Euclidean topology.

With the above notations, define the function $G_1$ on $U\times U$ by $G_1(x,y)=P_y(x)$.
\begin{theorem}\label{thm3.5}
The function $G_1$ is l.s.c. on $U\times U$ and continuous on $(U\times U)\setminus D$, where $U$ is equipped with the fine topology, $U\times U$ with the corresponding product topology, and where $D$ is the diagonal of $U\times U$.
\end{theorem}
{\it Proof.} The function $G_1$ is l.s.c. on $U\times U$ by Corollary \ref{cor3.2} and \cite[Proposition 3.2, iii)]{EKF1}. Let $(x_0,y_0)\in U\times U$ with $x_0\ne y_0$, and let $\alpha>0$ be such that $\alpha<\Phi(G_U(.,y_0))$. Set $U_\alpha=\{y\in U: \Phi(G_U(.,y))>\alpha\}$. Then $U_{\alpha}$ is finely open by Proposition \ref{prop3.4}, and $y_0\in U_{\alpha}$. We can find a fine neighbourhood $W$ of $y_0$, compact in the Euclidean topology, such that $W\subset U_\alpha$, and a Euclidean-open neighbourhood $V_1$ of $x_0$ such that $V_1\cap W=\emptyset$. The functions $G(.,y)$, $y\in W$, are harmonic in $\Omega\setminus W$ and the function $G$ is continuous at the point $(x_0,y_0)$; by the Harnack property we can therefore find an open neighbourhood $V$ (in $\Omega$) of $x_0$ such that $V\subset V_1$ and on which the functions $G(.,y)$, $y\in W$, are bounded above by a common constant $C>0$. On the other hand, $G_1(.,y)=\frac{G_U(.,y)}{\Phi(G_U(.,y))}\le \frac{G(.,y)}{\alpha}$ for every $y\in W$, hence $G_1(.,y)\le \frac{C}{\alpha}$ on $V$ for every $y\in W$. Let $\cal V$ be the filter of fine neighbourhoods of $y_0$. By Proposition \ref{prop3.1} we have $\lim_{\cal V}G_1(.,y)=G_1(.,y_0)$ in $\cal S(U)$. By Theorem \ref{thm2.2} and the above, $\lim_{\cal V}G_1(.,y)=G_1(.,y_0)$ uniformly on a fine neighbourhood $V_2$ of $x_0$ contained in $U\setminus W$. We deduce that, for a given $\epsilon >0$,
$$|G_1(x,y)-G_1(x_0,y_0)|\le |G_1(x,y)-G_1(x,y_0)|+|G_1(x,y_0)-G_1(x_0,y_0)|\le \epsilon$$
on the product $U_1\times U_2$ of a fine neighbourhood of $x_0$ and a fine neighbourhood of $y_0$. This proves that the function $G_1$ is continuous at $(x_0,y_0)$.
\begin{cor}\label{cor3.6}
The function $G_U$ is l.s.c. on $U\times U$ and continuous on $(U\times U)\setminus D$, where $U$ is equipped with the fine topology, $U\times U$ with the corresponding product topology, and where $D$ is the diagonal of $U\times U$.
\end{cor}
{\it Proof.} Indeed, $G_U(x,y)=\Phi(G_U(.,y))G_1(x,y)$ for every $(x,y)\in U\times U$. The function $(x,y)\mapsto \Phi(G_U(.,y))$ is continuous on $U^2$ by Proposition \ref{prop3.4}. Hence $G_U$ has the required properties by Theorem \ref{thm3.5}.
\begin{remark}\label{remark5.7}
The fine Green kernel $G_U$ of $U$ is finely continuous on $U\times U$ regarded as a finely open subset of ${\mathbb R}^{2n}$. Indeed, the function $G_U$ is separately finely superharmonic $\ge 0$; it then follows from \cite[Th\'eor\`eme 4.5]{EK2} that $G_U$ is finely superharmonic on $U^2$, hence finely continuous on $U^2$.
\end{remark}
\begin{thebibliography}{99}
\bibitem{AG} Armitage, D. H., Gardiner, S. J.: \textit{Classical Potential Theory}, Springer, London, 2001.
\bibitem{BBC} Boboc, N., Bucur, Gh., Cornea, A.: \textit{Order and Convexity in Potential Theory: H-cones}, Lecture Notes in Math. 853, Springer-Verlag, 1981.
\bibitem{CC} Constantinescu, C., Cornea, A.: \textit{Potential Theory on Harmonic Spaces}, Springer-Verlag, Heidelberg, 1972.
\bibitem{DM} Dellacherie, C., Meyer, P. A.: \textit{Probabilit\'es et Potentiel}, Chap. XII--XVI, Hermann, Paris, 1987.
\bibitem{DO} Doob, J. L.: \textit{Classical Potential Theory and its Probabilistic Counterpart}, Springer-Verlag, Berlin, 2001.
\bibitem{EK1} El Kadiri, M.: \textit{Sur la d\'ecomposition de Riesz et la repr\'esentation int\'egrale des fonctions finement surharmoniques}, Positivity 4 (2000), no. 2, 105--114.
\bibitem{EK2} El Kadiri, M.: \textit{Fonctions s\'epar\'ement finement surharmoniques}, Positivity 7 (2003), no. 3, 245--256.
\bibitem{EKF1} El Kadiri, M., Fuglede, B.: \textit{Martin boundary of a fine domain and a Fatou-Na\"im-Doob theorem for finely superharmonic functions}, arXiv:1403.0857.
\bibitem{EKF2} El Kadiri, M., Fuglede, B.: \textit{Sweeping at the Martin boundary of a fine domain}, arXiv:1409.7098.
\bibitem{F1} Fuglede, B.: \textit{Finely Harmonic Functions}, Lecture Notes in Math. 289, Springer-Verlag, 1972.
\bibitem{F2} Fuglede, B.: \textit{Localization in fine potential theory and uniform approximation by subharmonic functions}, J. Funct. Anal. 49 (1982), 52--72.
\bibitem{F0} Fuglede, B.: \textit{Sur la fonction de Green pour un domaine fin}, Ann. Inst. Fourier \textbf{25}, 3--4 (1975), 201--206.
\bibitem{F3} Fuglede, B.: \textit{Repr\'esentation int\'egrale des potentiels fins}, C. R. Acad. Sci. Paris 300, S\'erie I (1985), 129--132.
\bibitem{F5} Fuglede, B.: \textit{Finely harmonic mappings and finely holomorphic functions}, Ann. Acad. Sci. Fenn. Ser. A.I. Math. 2 (1976), 113--127.
\bibitem{GH} Gardiner, S. J., Hansen, W.: \textit{The Riesz decomposition of finely superharmonic functions}, Adv. Math. 214 (2007), no. 1, 417--436.
\bibitem{He} Herv\'e, R.-M.: \textit{Recherches axiomatiques sur la th\'eorie des fonctions surharmoniques et du potentiel}, Ann. Inst. Fourier 12 (1962), 415--571.
\bibitem{M} Mokobodzki, G.: \textit{Repr\'esentation int\'egrale des fonctions surharmoniques au moyen des r\'eduites}, Ann. Inst. Fourier 15 (1965), 103--112.
\end{thebibliography}
\end{document}
\section{Introduction}
The Sun is a magnetically active star, filling the interplanetary space with a stream of charged particles called the solar wind \citep[see e.g. a recent review by][]{Verscharen2019}. The solar wind properties are far from being homogeneous, with strong variations in temperature, density, or interplanetary magnetic field observed in connection with various phenomena of solar activity. The main drivers of strong disturbances of the solar wind are coronal mass ejections (CMEs), the fast--slow solar wind interaction on the borders of corotating interaction regions, and fast-wind outflows from coronal holes. Solar-wind disturbances may ultimately interact with Earth's magnetosphere, thereby triggering geomagnetic activity.
As first proposed by \citet{Dungey1961}, the dynamic pressure exerted by the solar wind on the magnetosphere can trigger magnetic reconnection, opening dayside dipolar geomagnetic field lines. The solar wind then transports this magnetic field to the nightside, forming a long tail behind the Earth. This transfer of magnetic flux and the resulting reconfiguration of the magnetosphere eventually leads to nightside magnetic reconnection, returning flux to the dayside in various phenomenological response modes that depend on the disturbance level \citep{Dungey1961, Kepko2015}. However, a common characteristic of all such response modes is the formation of a current wedge system \citep{Kepko2015, McPherron&Chu17:ssr}. A fraction of the tail current along geomagnetic field lines is then temporarily diverted through the ionosphere, allowing a closure of the current wedge and causing perturbations in the auroral zone and at middle latitudes \citep{McPherron&Chu17:ssr}.
Both substorms and geomagnetic storms give rise to a current wedge, plasma sheet inward convection by inductive electric fields, and energetic particle injections \citep{Ganushkina2017, Kepko2015, McPherron&Chu17:ssr, Thomsen2004}. However, the current wedge generally has a more limited temporal extent during substorms than during storms, which frequently last for days \citep{Ganushkina2017, Kepko2015}. Substorms are one of the key dynamical processes occurring during storms, but isolated substorms also occur outside storms \citep{Viljanen2006, Turnbull2009}. During storms (mainly caused by strong interactions between CMEs and the magnetosphere), a stronger buildup of the inner ring current (a westward current of ions roughly $\sim2-4$ Earth radii above the equator) is provided by a deeper inward transport of charged particles from the plasma sheet, leading to a significant and prolonged decrease of the geomagnetic field \citep{Ganushkina2017}.
All these ionospheric and magnetospheric currents, and the related field-aligned currents, can cause important geomagnetic field variations during periods of rapidly evolving solar wind dynamic pressure \citep{Gonzalez94, Lakhina2016, McPherron&Chu17:ssr, Kappenman2005, tsurutani2009brief}. This realization has led to the traditional concept of disturbed days: days of smooth and regular geomagnetic field variations have been called {\it quiet days}, whereas days of stronger and irregular variations have been called {\it disturbed days} \citep{ChapmanBartels1940}.
Geomagnetically induced currents (GICs) in the ground are due to strong variations $dH/dt$ of the horizontal component $H$ of the geomagnetic field over typical time scales of $\sim 10-1000$ seconds during disturbed days \citep{Carter2015, Kappenman2003, Kataoka2008, Pokhrel2018,Zhang2016}. Substorms generally produce the largest $dH/dt$ at high and mid-latitudes during periods of fast solar wind and have caused many of the major GIC impacts during large storms -- e.g., the Quebec voltage collapse on 13 March 1989 was triggered by a first substorm, while two later substorms tripped out transformers in the UK \citep{Boteler2019}. $dH/dt$ was found to be twice smaller in general during non-storm substorms than during storm-related substorms, possibly due to an additional input from ring current variations during storms \citep{Viljanen2006, Turnbull2009}. Other important sources of $dH/dt$ during geomagnetic storms include sudden commencements (the shock compression of the magnetosphere when a fast CME impacts the magnetosphere at the start of a storm, leading to an increase of Chapman-Ferraro currents at the dayside magnetopause; e.g., see \citealt{Kikuchi2001}) and rapid variations of the ring current, through its role in the generation of Region 2 field-aligned currents \citep{Ganushkina2017}. Sudden commencements have a large $dH/dt$ because of their shock-like nature, while rapid increases of ring current energy density following large scale injection or inward convection of energetic charged particles coming from the plasma sheet can also produce large $dH/dt$ \citep{Kappenman2003, Kappenman2005, Kataoka2008}.
GICs propagate through conducting regions in the ground and water, but also through grounded conductors. The presence of GICs in the electric power grid can cause various kinds of damage. GICs are quasi-DC currents that can lead to half-cycle saturation and drive a transformer response into a non-linear regime. This poses a risk for transformers by producing high pulses of magnetizing current, local heating (and vibration) within the transformer \citep{Gaunt2014}, and the generation of AC harmonics that propagate out into the power network, where they can disrupt the operation of various devices \citep{Kappenman2007, Molinski2002}. In particular, the propagation of harmonics in the power grid during half-cycle saturation can distort the electrical current waveform, eventually triggering a detrimental reaction of protective relays connecting power lines, or leading to a disruption of other devices attached to these lines.
GICs identified by fast variations of the geomagnetic field have been linked with various power grid failures \citep{schrijver2013disturbances}, eventually leading to power grid disruptions \citep{Kappenman2007, pirjola2000geomagnetically, Pulkkinen2017, schrijver2013disturbances}. Although high latitude regions are more at risk from GICs, middle and low latitude regions may also be impacted by significant GICs \citep{bailey2017, Carter2015, gaunt2007, Lotz2017, Marshall2012, Torta2012, Tozzi2019, Wang2015, Watari2009, Zhang2016, Zois2013}.
A first study of anomalies in the Czech power grid as a function of geomagnetic activity (defined by the $K$ index computed from the measurements of the Earth's magnetic field at a local magnetometer station near Budkov -- e.g., see \citealt{Mayaud1980,McPherron&Chu17:ssr}) has already identified some statistically significant increases of the rate of anomalies around month-long periods of higher geomagnetic activity than nearby periods of lower activity \citep{Vybostokova2018}. Nevertheless, the relationship between geomagnetic events and anomalies still remained somewhat loose.
Accordingly, the main goal of the present paper is to better ascertain the existence of a tight relationship between power grid anomalies and geomagnetic storms, on the basis of the same data set. We shall discuss the physical mechanisms by which GICs may cause anomalies in power lines and transformers, and show that our statistical results are suggestive of a causal relationship based on those mechanisms. We shall also address the important and unanswered question of the time delay between moderate to large geomagnetic storms with minimum $Dst<-40$ nT \citep{Gonzalez94} and the actual occurrences of anomalies. For that purpose, we shall use Superposed Epoch Analysis to investigate the relative occurrence of GIC effects in the Czech power grid during disturbed days as compared with quiet days. Such disturbed days will be categorized using different time-integrated parameters of geomagnetic activity, related to the magnitude of temporal variations of the horizontal component of the geomagnetic field, which can induce detrimental currents in power lines.
\section{Data sets}
In this study, we searched for a causal relation between two types of time series. The first series describes the daily anomaly rates in the Czech electric power-distribution grid, and the second serves as a proxy of disturbed days for the estimation of geomagnetically induced currents.
\subsection{Logs of Anomalies}
The Czech Republic is a mid-latitude country (around $\sim 50^\circ$ geographic latitude and $\sim 45^\circ$ corrected geomagnetic latitude), where the effects of solar/geomagnetic activity on ground-based infrastructures are expected to be moderate at most. The modelled amplitudes of GICs during the Halloween storms in late October 2003 reached 1-minute peaks of about 60~A\footnote{Smi\v{c}kov\'a, A., Geomagnetically Induced Currents in the Czech Power Grid, BSc. thesis (supervisor \v{S}vanda, M.), Faculty of Electrical Engineering, Czech Technical University, 2019, available online \url{http://hdl.handle.net/10467/84988}.}. The country is elongated in the east--west direction (about 500~km long), whereas in the south--north direction it is about 280~km from border to border. The spine of the electric power network is operated by the national operator \v{C}EPS, a.s., which maintains the very-high-voltage (400~kV and 220~kV) transmission network and connects the Czech Republic with neighbouring countries. \v{C}EPS also maintains the key transformers and electrical substations in the transmission network. The area of the state is then split into three regions, where the electricity distribution is under the responsibility of the distribution operators. The southern part is maintained by E.ON Distribuce, a.s., the northern part by \v{C}EZ Distribuce, a.s., and the capital city of Prague is maintained by PREdistribuce, a.s. All three distributors maintain not only very-high-voltage (110 kV) and high-voltage (22 kV) power lines, but also connect the consumers via the low-voltage (400 V) electric power transmission network.
All four above-mentioned power companies have agreed to provide us their maintenance logs. The datasets used in this study are exactly the same datasets already used in the study by \cite{Vybostokova2018}. Thus, we refer the reader to section 3.2 of this previous paper for a more detailed description of the datasets. By mutual non-disclosure agreement with the data providers, the datasets were anonymised (by removing the information about the power-company name, and also by changing the calendar date to a day number) and must be presented as such. The total time span is 12 years, but the span of individual maintenance logs provided by the operators is shorter, varying between 6 to 10 years.
We only briefly recall that the obtained logs were cleaned of events that were obviously not related to variations of geomagnetic activity. From these logs, we kept only the dates when the events occurred and did not consider any other details. These inhomogeneous datasets (the log entries were provided by different individuals with varying levels of detail and quality of the event description) were split into twelve subsets D1--D12, which were investigated separately. Each sub-dataset was selected so that it contained only events occurring on devices of a similar type and/or with the same voltage level, recorded by the same operating company. The dataset descriptions are briefly summarised in Table~\ref{tab:datasets}.
\begin{table}[ht]
\caption{Datasets analysed in this study. This is a reduced version of Table~1 in \cite{Vybostokova2018}.}
\label{tab:datasets}
\centering
\begin{tabular}{l|lll}
{\bf Dataset} & {\bf Voltage level} & {\bf Type} & {\bf Span}\\
{\bf ID} & {\bf } & {\bf } & {\bf } \\
\hline
D1 & very high voltage & equipment: transformers, & 9 years\\
& & electrical substations& \\
D2 & high voltage & equipment & 6 years \\
D3 & very high voltage & equipment & 6 years\\
D4 & high and low voltage & power lines & 7 years\\
D5 & high and low voltage & equipment and power lines & 7 years\\
D6 & high and low voltage & equipment & 7 years \\
D7 & very high voltage & power lines & 10 years \\
D8 & high voltage & transformers & 10 years \\
D9 & very high voltage & transformers& 10 years \\
D10 & very high and high voltage & electrical substations& 10 years \\
D11 & very high voltage & power lines& 10 years \\
D12 & high voltage & power lines & 10 years \\
\end{tabular}
\end{table}
\subsection{Geomagnetic Indices and Parameters used for GIC Estimation}
Various parameters have been considered to estimate the effects of geomagnetic activity on power grids \citep{schrijver2013disturbances}. GICs are due to strong variations $dH/dt$ over typical time scales of $\sim 10-1000$ seconds \citep{Kappenman2003}. There are two sources of such large $dH/dt$ at low and middle latitudes: (i) sudden impulses (SI), caused by the shock preceding a fast CME and also called sudden commencements (SC) when they are followed by a storm, and (ii) the growth and decay of the ring current during a magnetic storm. Substorm-related disturbances are mostly limited to high and middle latitudes, whereas disturbances caused by ring current changes generally affect mainly middle and low latitudes.
Statistically, periods of stronger cumulative effects of GICs in a power grid are therefore expected to correspond to {\it disturbed days} of elevated geomagnetic activity \citep{ChapmanBartels1940}. In the present study, we shall use various cumulative (time-integrated) parameters based on different magnetic indices to categorize such disturbed days, and we shall investigate the relative occurrence of GIC effects during such disturbed days as compared with quiet days.
An appropriate quantity to estimate GICs at low latitudes is $d(\textit{SYM-H})/dt$, which directly provides a (longitudinally averaged) measure of the 1-minute $dH/dt$ due to ring current variations that drive GICs there \citep{Carter2015, Kappenman2003, Zhang2016}. Indeed, the $\textit{SYM-H}$ index is essentially similar to the hourly $Dst$ storm time index, but measured on 1-minute time scales -- that is, it provides the disturbance $\Delta H = H - H_{\rm quiet}$ of the horizontal component of the magnetic field as compared to its quiet-time level, longitudinally averaged based on ground magnetometer measurements at different low latitude magnetometer stations \citep{Mayaud1980}.
Several studies have demonstrated the existence of significant correlations between GICs or electric grid failures and times of large $d(\textit{SYM-H})/dt$ at low to middle latitudes during geomagnetic storms, although $d(\textit{SYM-H})/dt$ is often inappropriate during strong substorms \citep{Carter2015, Wang2015, Zhang2016}. \cite{Carter2015} have further shown that the actual $dH/dt$ at middle latitudes due to SI/SCs can be a factor $\sim 2-3$ larger on the dayside than $d(\textit{SYM-H})/dt$, potentially allowing GIC effects even during geomagnetic events with relatively small $d(\textit{SYM-H})/dt$. We checked that $dH/dt$ at the Czech magnetometer station of Budkov can also be sometimes $>2-3$ times larger than $d(\textit{SYM-H})/dt$ during SI/SCs. \cite{Viljanen2014} have noticed the presence of a European region of low underground conductivity stretching from France through Czech Republic to Hungary that could favor significant GICs at middle latitudes. \cite{Gil2019} have shown the presence of GICs during a few selected storms in Poland, while \cite{Tozzi2019} have found that non-negligible GICs could exist even down to northern Italy. \cite{Wang2015} have further emphasized that cumulative GICs in a nuclear plant transformer during a long-duration geomagnetic event could sometimes be more harmful than short events, due to the longer cumulated time of transformer heating.
Accordingly, we consider here the $Int(d(\textit{SYM-H})/dt)$ parameter to categorize disturbed days of expected significant GIC impacts on power grids. $Int(d(\textit{SYM-H})/dt)$ is calculated over each day, as the sum of all 1-minute $\vert d(\textit{SYM-H})/dt\vert$ values (in nT/min) obtained during times when $\textit{SYM-H}$ remains smaller than some threshold. The selected threshold (varying from $-50$ nT to $-25$ nT) should ensure that only geomagnetic storm periods are considered \citep{Gonzalez94}. This $Int(d(\textit{SYM-H})/dt)$ parameter allows, in principle, to take into account the immediate effects on power grids caused by large individual $\vert dH/dt\vert$ due to ring current variations, as well as the more delayed, cumulative effects potentially caused by prolonged periods of moderate to significant $\vert dH/dt\vert$ levels \citep{Carter2015, Wang2015, Zhang2016} -- although large individual $\vert dH/dt\vert$ during strong substorms will need other indices such as $AE$ or $ap$ to take them into account (see below).
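For concreteness, the daily computation just described can be sketched in a few lines of Python (a minimal illustration, assuming a 1-minute, timestamp-indexed pandas series of $\textit{SYM-H}$ values; the function name and data layout are our own conventions, not part of any standard index-processing package):
\begin{verbatim}
import pandas as pd

def int_dsymh_dt(symh: pd.Series, threshold: float = -40.0) -> pd.Series:
    """Daily Int(d(SYM-H)/dt): sum of 1-min |d(SYM-H)/dt| (nT/min)
    over the minutes when SYM-H stays below the storm threshold."""
    dsymh = symh.diff().abs()                     # 1-min |d(SYM-H)/dt|
    contrib = dsymh.where(symh < threshold, 0.0)  # keep storm-level minutes
    return contrib.resample("1D").sum()           # integrate over each day
\end{verbatim}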
Other works have suggested that the mean or cumulative $Dst$ during storm main phase should be good indicators of long duration GICs, because larger and steeper decreases of $Dst$ correspond to stronger disturbances that should generally lead to larger $dH/dt$ at the relevant shorter time scales of $\sim 10-1000$ seconds \citep{Balan2014, Balan2016, Lotz2017}. Using observations in South Africa (at middle corrected geomagnetic latitudes $\sim 36^\circ-42^\circ$ not much lower than in the Czech Republic), \cite{Lotz2017} have demonstrated the existence of a linear relationship between the sum of induced electric fields recorded in the ground during geomagnetic storms and the integral of $\textit{SYM-H}$ (or $Dst$) values, suggesting that the cumulative $\textit{SYM-H}$ or $Dst$ could be used as good proxies for cumulated induced electric fields at middle corrected geomagnetic latitudes (although ring current effects are likely more important for GICs in South Africa than in the Czech Republic, where a more balanced mixture of ring current and substorm effects is present). They also noted that some effects might be present as long as $\textit{SYM-H}$ remained below $-20$ nT.
Therefore, we also consider the $IntDst$ parameter to categorize disturbed days of expected significant GICs in the Czech Republic \cite[e.g., see][]{Mourenas2018}. $IntDst$ (in nT$\cdot$hr) is calculated as a sum of hourly $\vert Dst\vert$ values. This summation starts when $Dst$ first becomes smaller than a threshold (taken between $-50$ nT and $-25$ nT as before) chosen to ensure that only storm periods are considered, and this summation ends when $Dst$ reaches its minimum value over the next 24 hours. Each $IntDst$ value is then assigned to the starting day of a given summation, with all integration periods strictly separated by construction. As a result, $IntDst$ is generally measured during storm main phase, where the effects on GICs are likely stronger \citep{Balan2014, Balan2016}, to provide a complementary metric to the $Int(d(\textit{SYM-H})/dt)$ metric calculated over each whole day without any consideration of storm phase.
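A corresponding sketch for $IntDst$ is given below; it implements the start/stop rules described above, assuming an hourly, timestamp-indexed $Dst$ series (again, the names and data layout are our own assumptions):
\begin{verbatim}
import numpy as np
import pandas as pd

def int_dst(dst: pd.Series, threshold: float = -40.0) -> pd.Series:
    """IntDst (nT*hr) per event, assigned to the day when summation starts."""
    events, i = {}, 0
    while i < len(dst):
        if dst.iloc[i] < threshold:
            window = dst.iloc[i:i + 24]            # next 24 hours
            j = i + int(np.argmin(window.values))  # index of the Dst minimum
            day = dst.index[i].normalize()
            events[day] = events.get(day, 0.0) \
                + float(dst.iloc[i:j + 1].abs().sum())
            i = j + 1               # integration periods stay disjoint
        else:
            i += 1
    return pd.Series(events, dtype=float)
\end{verbatim}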
While ring current variations during storms can be quantified by $Dst$ and $\textit{SYM-H}$ indices, the magnetic indices that provide a measure of magnetospheric and ionospheric current variations observed during strong substorms are $AE$, $AL$, $Kp$, or $ap$ \citep{Kamide1996, Mayaud1980, Mourenas2020, Thomsen2004}. The $ap$ index (as its logarithmic equivalent $Kp$) provides a global measure of the range of magnetic field variations at middle latitudes over 3-hour time scales, obtained by averaging measurements from different mid-latitude magnetometer stations spread in longitude \citep{Mayaud1980, Thomsen2004}. In contrast, the range indices $AE$ and $AL$ are measured at higher magnetic latitudes $>60^\circ$ inside the auroral region \citep{Mayaud1980, Kamide&Rostoker04}, and $AE$ saturates at high geomagnetic activity $am>150$ (with $am$ a mid-latitude index similar to $ap$) because the auroral oval then expands equatorward of the magnetometer stations measuring it \citep{Lockwood2019, Thomsen2004}. Therefore, $ap$ is probably more appropriate than $AE$ for quantifying the strength of time-integrated geomagnetic disturbances at middle (sub-auroral) geomagnetic latitudes \citep{Thomsen2004, Mourenas2020}.
Although $ap$ cannot provide an accurate ranking or quantification of the maximum $dH/dt$ values reached during the most disturbed events due to its intrinsic saturation at $Kp=9$ and its coarse 3-hour time resolution, it may still provide rough estimates during less extreme events with $Kp\sim3-7$ \citep{Kappenman2005}. Therefore, it is worth examining whether some time-integrated measure of $ap$ could still be used to simply categorize disturbed/quiet days of expected stronger occurrence/absence of GIC effects at middle latitudes, during a large series of medium (most frequent) to strong (more rare) time-integrated $ap$ events spread over 6 to 10 years.
Accordingly, we shall consider in section~\ref{sect:intAP} a third parameter of geomagnetic activity, $IntAp$, corresponding to the daily maximum level of the integral of 3-hourly $ap$ values over a continuously active period of $ap\geq 15$ nT \citep{Mourenas2019, Mourenas2020}. This should allow to categorize disturbed days that include contributions to GICs from both (storm-time) ring current variations and strong substorms, usefully complementing the $Int(d(\textit{SYM-H})/dt)$ and $IntDst$ parameters. Indeed, $IntAp$ provides a rough estimate of the effects at middle latitudes of significant time-integrated $dH/dt$ disturbances due to substorms, which often do not reach the low latitudes where $\textit{SYM-H}$ and $Dst$ are measured.
In addition, we shall consider a fourth parameter, called $IntAE$, which is based on the high-latitude $AE$ auroral electrojet index \cite{Mayaud1980}. $IntAE$ is the daily maximum level of the integral of $AE$ calculated over the same period of continuously high $ap\geq15$ nT as $IntAp$ (generally corresponding to $AE>200$ nT), to ensure that the corresponding substorm-related magnetic disturbances effectively reach middle latitudes \citep{Mourenas2019, Mourenas2020}. $IntAE$ provides a measure of cumulative substorm-related disturbances, corresponding to continuous periods of auroral current variations roughly similar to High-Intensity Long-Duration Continuous $AE$ Activity (HILDCAA) events \citep{Tsurutani06}.
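Both substorm-related parameters can be sketched together, assuming 3-hourly $ap$ values and $AE$ values averaged onto the same 3-hour grid (a simplification on our part, since $AE$ is produced at a higher cadence; here each active period's totals are assigned to its starting day):
\begin{verbatim}
import pandas as pd

def int_ap_ae(ap: pd.Series, ae3h: pd.Series, ap_min: float = 15.0):
    """IntAp and IntAE (nT*hr) over maximal runs of continuously
    high ap >= ap_min, one record per active period."""
    active = ap >= ap_min
    run_id = (active != active.shift()).cumsum()[active]  # label runs
    records = []
    for _, idx in run_id.groupby(run_id).groups.items():
        records.append({"start_day": idx[0].normalize(),
                        "IntAp": float(ap.loc[idx].sum() * 3.0),
                        "IntAE": float(ae3h.loc[idx].sum() * 3.0)})
    return pd.DataFrame(records)
\end{verbatim}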
\begin{figure}
\centering
\resizebox{0.9\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure1.pdf}}}
\caption{Upper panel: $\textit{SYM-H}$, $Dst$, $Ap$, and $AE$ indices during the 13-15 February 2011 geomagnetic event. Bottom panel: corresponding $Int(d(\textit{SYM-H})/dt)$ (in nT), and $IntDst$, $IntAp$, and $IntAE$ (in nT$\cdot$hr) cumulative parameters, calculated using thresholds $\textit{SYM-H}\leq-30$ nT, $Dst\leq-30$~nT, or $ap\geq15$ nT. }
\label{fig:2011-02-13}
\end{figure}
These four cumulative metrics of disturbed days are displayed in Figure \ref{fig:2011-02-13} together with 1-min $\textit{SYM-H}$ and $AE$, hourly $Dst$, and 3-hourly $ap$, during a moderate geomagnetic storm on 14-15 February 2011 that reached a minimum $\textit{SYM-H}=-49$ nT and a minimum $Dst=-40$ nT on 14 February, with strong substorms (identified by peaks in $AE$ and $ap$) during storm sudden commencement and main phase, and with a very weak secondary minimum of $Dst$ reaching $-30$ nT on 15 February at 17 UT during a burst of $AE$ activity.
\section{Methods}
In the present follow-up study to the work by \cite{Vybostokova2018}, we search for a tighter relationship between power grid anomalies and geomagnetic storms, based on the same datasets of anomalies in the Czech power grid. We also address the important and as yet unanswered question of the time delay between geomagnetic events and the occurrences of anomalies.
Our working hypothesis is that {\it disturbed days} of high geomagnetic activity should cause an increase in daily rates of anomalies in the power distribution network as compared with {\it quiet days}. Accordingly, the daily anomaly rates should sharply peak within a few days (with some delay) after such disturbed days, and then decrease back to normal levels. This corresponds to a rapid response to GICs induced by substorms and storms, as observed for a few selected events -- e.g., see \cite{Gil2019, Wang2015}.
Unfortunately, in a mid-latitude country such as the Czech Republic, the effects of geomagnetic activity are expected to be weak. Consequently, an investigation of individual, moderate geomagnetic events is not expected to reveal a significant increase of anomalies, because such anomalies induced by geomagnetic activity (via GICs) will generally remain hidden among many other anomalies caused by various other effects. It is therefore imperative in our statistical analysis to find a way to reduce the importance of anomalies caused by other effects. Note that our data series cover 6 to 10 years, each subset providing records of anomaly rates occurring during many separated disturbed days of high geomagnetic activity. Therefore, a feasible approach is to average over all these different events. The corresponding methodology is the \emph{Superposed Epoch Analysis}, widely used in astrophysics.
A Superposed Epoch Analysis \citep[SEA;][]{Chree1913} is a statistical technique used to reveal either periodicities within a time sequence, or to find a correlation between two time series. In the latter case, the method proceeds in several steps.
\begin{enumerate}
\item In the reference time series, occurrences of the repeated events are defined as key times (or epochs).
\item Subsets are extracted from the second time series within some range around each key time.
\item Subsets from each time series are superposed, synchronized at the same key time (Day 0), and averaged, allowing inter-comparisons.
\end{enumerate}
This methodology is known to efficiently enhance the ``signal'' (related variations in both series) with respect to ``noise'' (unrelated variations in both series), because the noise adds up incoherently, whereas the signal is reinforced by the superposition.
Thus, we performed the SEA of geomagnetic activity defined by $Int(d(\textit{SYM-H})/dt)$ or $IntDst$ parameters. A range of event thresholds $\textit{SYM-H}$ (or $Dst$) $<-25$ nT to $-50$ nT was considered, to keep only periods corresponding to weak to large geomagnetic storms \citep{Gonzalez94} and to allow for the determination of the best thresholds on event strength. Other days were assigned a zero level of $Int(d(\textit{SYM-H})/dt)$ or $IntDst$. An important further requirement was that the 5-day period immediately preceding the start of a geomagnetic storm (Day 0 in the SEA) contained a zero level of the considered geomagnetic activity parameter (that is, all such quiet days must have $IntDst=0$ or $Int(d(\textit{SYM-H})/dt)=0$). This rather strict constraint should allow a better quantification of the effect of geomagnetic storms on the power grid during {\it disturbed days} as compared with {\it quiet days}, at the expense of a slight reduction of the number of considered events. In a second step, we analyzed these SEAs in more detail to determine as accurately as possible the time delay (after the start of a storm) that corresponds to the statistically most significant increase of anomalies, for each type of power grid equipment.
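The epoch selection and superposition can be sketched as follows (a simplified illustration of the SEA steps and of the 5-quiet-day requirement; the $\pm 10$-day window width is chosen here for illustration only):
\begin{verbatim}
import numpy as np
import pandas as pd

def superposed_epochs(activity: pd.Series, anomalies: pd.Series,
                      quiet_days: int = 5, window: int = 10) -> np.ndarray:
    """Stack daily anomaly counts around key days (Day 0) of nonzero
    activity; a key day must follow `quiet_days` days of zero activity."""
    epochs = []
    for k in range(max(quiet_days, window), len(activity) - window):
        if activity.iloc[k] > 0 \
                and (activity.iloc[k - quiet_days:k] == 0).all():
            epochs.append(anomalies.iloc[k - window:k + window + 1]
                          .to_numpy())
    return np.vstack(epochs)  # row-average to obtain the SEA profile
\end{verbatim}
Here \texttt{activity} is the daily $IntDst$ or $Int(d(\textit{SYM-H})/dt)$ series (zero on quiet days) and \texttt{anomalies} holds the daily anomaly counts of one maintenance-log subset on the same date index.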
\section{Results of Superposed Epoch Analysis}
A Superposed Epoch Analysis was performed based on $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ parameters, considering successively thresholds $Dst<-25$ nT, $-30$ nT, $-40$ nT, and $-50$ nT, or $\textit{SYM-H}<-25$ nT, $-30$ nT, $-40$ nT, and $-50$ nT, to explore the dependence of power grid anomalies on the minimum strength of geomagnetic storms. The number of epochs considered in the SEAs of each reference series are given in Table~\ref{tab:epochs}.
\begin{table}[]
\caption{The number of epochs considered in SEAs for various reference series. }
\centering
\begin{tabular}{ccl}
{\bfseries Reference series} & {\bfseries Threshold} & {\bfseries \# of epochs}\\
\hline
$IntDst$ & $-50$~nT & 138 \\
$IntDst$ & $-40$~nT & 172 \\
$IntDst$ & $-30$~nT & 221 \\
$IntDst$ & $-25$~nT & 222 \\
$Int(d(\textit{SYM-H})/dt)$ & $-50$~nT & 154 \\
$Int(d(\textit{SYM-H})/dt)$ & $-40$~nT & 191 \\
$Int(d(\textit{SYM-H})/dt)$ & $-30$~nT & 218 \\
$Int(d(\textit{SYM-H})/dt)$ & $-25$~nT & 231
\end{tabular}
\label{tab:epochs}
\end{table}
\begin{figure}
\centering
\resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure2a.pdf}}}
\resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure2b.pdf}}}
\caption{Plots of epoch-superposed daily numbers of anomalies in the D12 series, considering $IntDst$ (in nT$\cdot$hr, left) and $Int(d(\textit{SYM-H})/dt)$ (in nT, right) for different upper thresholds on $Dst$ and $\textit{SYM-H}$. Solid lines indicate the superposed anomaly rates (upper row) or geomagnetic activity in the reference time series (lower row) during Days~$-1$ to $+5$ from the epoch (Day~0), whereas dashed lines show the same quantities for the remaining days. Error bars show the one-standard-deviation half-widths.}
\label{fig:SEA4}
\end{figure}
The SEAs obtained for $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ both show a clear peak of geomagnetic activity at Day 0 and a sharp decrease on Day~1 for $IntDst$ or on Day~2 for $Int(d(\textit{SYM-H})/dt)$. The later decrease for $Int(d(\textit{SYM-H})/dt)$ is due to the presence of significant $d(\textit{SYM-H})/dt$ variations during the recovery phase of many storms stretching over at least 2 consecutive days, whereas $IntDst$ is generally calculated only during storm main phase. Fig. \ref{fig:SEA4} shows the SEAs obtained for the D12 series (power lines). Similar trends are found for other datasets concerning power lines. All the figures corresponding to the different series D1 to D12 are available in the online supplement as Figs.~\ref{fig:D1seas}-\ref{fig:D12seas}.
\subsection{Storm Effects: 5-day Periods After/Before Day 0}
\label{sect:5daysafterbefore}
Next, we compared the period of 5 {\it disturbed days} immediately following Day 0 (the day of peak storm activity) with the 5-day period immediately preceding Day 0 -- a preceding period of {\it quiet days} especially selected to have zero $IntDst$ or $Int(d(\textit{SYM-H})/dt)$ levels. This allows to directly check the impact of {\it disturbed days} of geomagnetic storms on power grid anomalies, as compared with {\it quiet days}. For the two time intervals, we summed the total number of registered anomalies in the superposed series for each data subset and computed the statistical significance of the differences using the standard binomial statistical test. We tested the null hypothesis that the number of anomalies recorded over quiet days is not different from the number of anomalies recorded over disturbed days, that is, the null hypothesis that the probability of recording anomalies is the same during quiet and disturbed days. Should the resulting $p$-value be smaller than the selected statistical threshold (usually 0.05 for single-bin tests), we reject the null hypothesis, thereby saying that the recorded differences are indeed statistically significant.
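As an illustration, the binomial test can be carried out with \texttt{scipy.stats.binomtest} (a sketch; the example numbers are the D11 counts for the $IntDst<-40$~nT threshold in Table~\ref{tab:pvalues}):
\begin{verbatim}
from scipy.stats import binomtest

def quiet_vs_disturbed(n_before: int, n_after: int) -> float:
    """Two-sided test of equal anomaly probability in the two windows:
    under the null, each anomaly falls in the disturbed (post-Day-0)
    window with probability 1/2."""
    return binomtest(n_after, n_before + n_after, p=0.5).pvalue

print(quiet_vs_disturbed(302, 387))  # ~0.001, i.e. p < 0.01 for D11
\end{verbatim}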
\begin{table}[]
\caption{Comparison of the number of power grid anomalies in the 5-day period prior to Day~0 ($N_{-}$) and in the 5-day period after Day~0 ($N_{+}$), together with the $p$-values quantifying the statistical significance of the differences. These values are given for the different reference series involved in the SEAs, with varying thresholds. }
\centering
$IntDst$
\begin{tabular}{l|lll|lll|lll|lll}
{\bfseries ID} & \multicolumn{3}{c|}{$<-25$~nT} & \multicolumn{3}{c|}{$<-30$~nT} & \multicolumn{3}{c|}{$<-40$~nT} & \multicolumn{3}{c}{$<-50$~nT}\\
& $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$\\
\hline
D1 & 60 & 59 & 1.0 & 54 & 52 & 0.92 & 35 & 33 & 0.90 & 29 & 36 & 0.46\\
D2 & 100 & 115 & 0.34 & 109 & 137 & 0.08 & 94 & 112 & 0.24 & 82 & 94 & 0.41\\
D3 & 17 & 17 & 1.0 & 20 & 23 & 0.76 & 16 & 22 & 0.42 & 18 & 12 & 0.36\\
D4 & 58 & 38 & 0.05 & 52 & 43 & 0.41 & 45 & 46 & 1.0 & 38 & 40 & 0.91 \\
D5 & 86 & 75 & 0.43 & 91 & 84 & 0.65 & 83 & 82 & 1.0 & 71 & 68 & 0.87\\
D6 & 30 & 36 & 0.54 & 40 & 39 & 1.0 & 38& 37 & 1.0 & 34& 31& 0.80\\
D7 & 134 & 132 & 0.95 & 143 & 137 & 0.77 & 115 & 120 & 0.79 & 98 & 105 & 0.67\\
D8 & 968 & 955 & 0.78 & 892 & 922 & 0.50 & 710 & 760 & 0.20 & 562 & 586 & 0.50\\
D9 & 105 & 102 & 0.89 & 95 & 112 & 0.27 & 70 & 67 & 0.86 & 44 & 53 & 0.42\\
D10 & 14292 & 14338 & 0.79 & 13245 & 13477 & 0.16 & 10791 & 11047 & 0.08 & 8601 & 8764 & 0.22\\
D11 & 415 & 494 & 0.01 & 403 & 476 & 0.02 & 302 & 387 & $<0.01$ & 247 & 297 & 0.04\\
D12 & 11366 & 12118 & $<0.01$ & 10787 & 11748 & $<0.01$ & 8965 & 9421 & $<0.01$ & 7242 & 7606& $<0.01$
\end{tabular}
\vskip5mm
$Int(d(\textit{SYM-H})/dt)$
\begin{tabular}{l|lll|lll|lll|lll}
{\bfseries ID} & \multicolumn{3}{c|}{$<-25$~nT} & \multicolumn{3}{c|}{$<-30$~nT} & \multicolumn{3}{c|}{$<-40$~nT} & \multicolumn{3}{c}{$<-50$~nT}\\
& $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$ & $N_{-}$ & $N_{+}$ & $p$\\
\hline
D1 & 59 & 56 & 0.85 & 59 & 58 & 1.0 & 43 & 47 & 0.75 & 32 & 37 & 0.63\\
D2 & 98 & 98 & 1.0 & 104 & 110 & 0.73 & 101 & 121 & 0.20 & 93 & 107 & 0.36\\
D3 & 20 & 15 & 0.50 & 20 & 16 & 0.62 & 15 & 20 & 0.50 & 17 & 18 & 1.0 \\
D4 & 53 & 36 & 0.09 & 51 & 37 & 0.17 & 43 & 45 & 0.92 & 46 & 49 & 0.84\\
D5 & 79 & 66 & 0.32 & 83 & 70 & 0.33 & 80 & 78 & 0.94 & 83 & 77 & 0.69\\
D6 & 29 & 28 & 1.0 & 35 & 31 & 0.71 & 38 & 33 & 0.64 & 38 & 29 & 0.33\\
D7 & 115 &118 & 0.90 & 137 & 127 & 0.58 & 116 & 122 & 0.75 & 119 & 118 & 1.0\\
D8 & 964 & 936 & 0.54 & 1005 & 964 & 0.37 & 784 & 790 & 0.90 & 635 & 667 & 0.39\\
D9 & 98 & 101 & 0.89 & 107 & 102 & 0.78 & 80 & 93 & 0.36 & 58 & 74 & 0.19\\
D10 & 14220 & 14061 & 0.35 & 14594 & 14518 & 0.66 & 11951 & 11877 & 0.64 & 9702 & 9854 & 0.28\\
D11 & 408 & 450 & 0.16 & 420 & 473 & 0.08 & 334 & 415 & $<0.01$ & 300 & 323 & 0.38\\
D12 & 11273 & 11798 & $<0.01$ & 11675 & 12305 & $<0.01$ & 9669 & 10162 & $<0.01$ & 8385 & 8714 & 0.01
\end{tabular}
\label{tab:pvalues}
\end{table}
The results, summarized in Table~\ref{tab:pvalues}, reveal a clear increase of anomalies during the period of 5 {\it disturbed days} following Day 0 as compared with the period of 5 {\it quiet days} preceding Day 0, for the two series D11 and D12 corresponding to power lines. The number of anomalies increases by 5\% for D12 and by 30\% for D11, with corresponding $p$-values always statistically significant ($<0.05$), for thresholds $<-30$ nT or $<-40$ nT -- except for $Int(d(\textit{SYM-H})/dt)$ and D11 for a threshold $<-30$ nT. Lower or higher thresholds usually lead to less statistically significant increases of anomalies, although not always -- e.g. for D11 and $IntDst$, the $<-25$ nT threshold gives a higher statistical significance. This means that moderate events with minimum $Dst$ or $\textit{SYM-H}$ near $-40$ nT often have a statistically detectable impact on anomaly rates, whereas weaker events do not. The same thresholds also lead to the highest peaks of anomalies after Day 0 in many other series. Finally, for D11 and D12, the $<-40$ nT thresholds lead to the smallest $p$-values ($<0.01$) for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$, as well as to the smallest $p$-values $<0.1-0.2$ for D8 and D10 when considering $IntDst$, and to the smallest or second smallest $p$-values $<0.2-0.36$ for D2 and D9 when considering $Int(d(\textit{SYM-H})/dt)$. Therefore, the thresholds $\textit{SYM-H}<-40$ nT and $Dst<-40$ nT are probably the most appropriate to detect statistically significant increases of anomalies related to geomagnetic storms.
The weaker significance of results for higher thresholds $<-25$ nT agrees with previous observations from \cite{Lotz2017} that weaker events have little effect on induced electric fields. However, moderate $Dst$ or $\textit{SYM-H}$ geomagnetic disturbances in the range $-40$ nT to $-50$~nT are found to still have some impact on power lines. The weaker significance of results for lower thresholds $<-50$ nT is likely due to a combination of two different effects: (i) storms start slightly later when using a threshold $<-50$ nT than for higher thresholds $<-40$ nT or $<-30$ nT, meaning that the 5-day period preceding Day 0 can actually contain significant $dH/dt$ geomagnetic activity leading to some anomalies, and (ii) the $<-50$ nT threshold corresponds to a 30\% to 40\% smaller number of events than the $<-30$ nT threshold, decreasing the sensitivity of the SEA to a potential slight increase of anomalies due to storms.
A detailed inspection of the SEAs of D12 lends further credence to the impact of geomagnetic storms on power lines. Indeed, for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$, the peaks of anomalies in the few days following Day 0 reach the highest daily levels of anomalies of the whole 21-day SEAs for $<-30$ nT to $<-50$ nT thresholds, the main increases of anomalies occurring from Day $+0$ to Day $+3$. For D11 and thresholds $<-30$ nT to $<-40$ nT, the 4-day period following Day 0 also has the highest number of anomalies of the whole 21-day SEA, while the 5-day interval preceding Day 0 has the lowest average number of anomalies of the whole SEA.
\subsection{Storm Effects: 3-day Periods Before/After Day 0 with Time Lags}
\label{sect:3days}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure3a.pdf}
\includegraphics[width=\textwidth]{figure3b.pdf}
\caption{(left) Maps for D12 of increases (or decreases) of the number of anomalies as a function of the middle day of the first (abscissa) and second (ordinate) considered 3-day periods. (right) Maps of the corresponding $p$-values. The upper row is computed for the $IntDst$ reference series, whereas the lower row corresponds to the $Int(d(\textit{SYM-H})/dt)$ reference series. The $p$-values are evaluated only if there is an increase of anomaly rates in the second 3-day period as compared to the first 3-day period. Note the logarithmic scale of the plotted $p$-values: $p=0.0055$ (the adopted level of statistical significance for individual bins) corresponds to $\log p=-2.26$. Statistically significant bins are indicated by white dots. Blank bins are indicated by the white colour. }
\label{fig:pvalues12}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure4a.pdf}
\includegraphics[width=\textwidth]{figure4b.pdf}
\caption{(left) Maps for D8 of increases (or decreases) of the number of anomalies as a function of the middle day of the first (abscissa) and second (ordinate) considered 3-day periods. (right) Maps of the corresponding $p$-values. The upper row is computed for the $IntDst$ reference series, whereas the lower row corresponds to the $Int(d(\textit{SYM-H})/dt)$ reference series. The $p$-values are evaluated only if there is an increase of anomaly rates in the second 3-day period as compared to the first 3-day period. Note the logarithmic scale of the plotted $p$-values: $p=0.0055$ (the adopted level of statistical significance for individual bins) corresponds to $\log p=-2.26$. Statistically significant bins are highlighted by white dots. Blank bins are indicated by the white colour.}
\label{fig:pvalues8}
\end{figure}
Next, we examined in more detail the SEAs performed on the $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ parameters for the thresholds $Dst<-40$ nT and $\textit{SYM-H}<-40$ nT. We considered two shorter 3-day periods, located before and after Day 0. We varied the time lag between them and calculated (as before for the 5-day periods) the statistical significance of the difference in anomaly rates between these two periods. Considering shorter 3-day periods should help to determine more precisely the (statistically most significant) time delay between the start of a geomagnetic storm and the related increase of the number of anomalies.
Fig. \ref{fig:pvalues12} for D12, Fig. \ref{fig:pvalues8} for D8, and Figs.~\ref{fig:D1ps}--\ref{fig:D12ps} in the online supplement for all other datasets, show two-dimensional maps of the increases (or decreases) of the number of anomalies as a function of the middle day of the first and second 3-day periods, together with maps of the corresponding $p$-values computed only for increases.
Let us examine these maps of $p$-values. For consistency with the procedure of estimation of the statistical significance adopted in Section~\ref{sect:5daysafterbefore}, we need to compare the number of anomalies over the same 5-day periods after and before Day 0. Accordingly, we must only consider the bins (representing 3-day periods) comprised between Days $-4$ and $-2$ (actually covering Days $-5$ to $-1$) for the period before Day 0, and the bins comprised between Days $+2$ and $+4$ (actually covering Days $+1$ to $+5$) for the period following Day 0. There are $3\times 3 = 9$ such bins. Finding only one bin with a $p$-value $\sim 0.05$ (corresponding to a 5\% probability of obtaining an increase of anomalies by chance) among 9 bins is no longer as statistically significant as before. Therefore, an individual bin (representing 3-day periods) is hereafter required to have a smaller $p$-value $\leq 0.05/9= 0.0055$ to be considered statistically significant.
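The corresponding scan over pairs of 3-day windows, with the corrected single-bin threshold, can be sketched as follows (reusing the hypothetical \texttt{anomalies} array and \texttt{epochs} list from the SEA sketch above; the actual analysis scans all lags shown in the maps, not only the $3\times 3$ square considered here).
\begin{verbatim}
import itertools
from scipy.stats import binomtest

def window_total(series, epochs, center, half=1):
    # superposed total over the 3-day window centred on `center`
    # (in days relative to Day 0)
    return sum(int(series[d+center-half : d+center+half+1].sum())
               for d in epochs)

alpha = 0.05 / 9                       # corrected single-bin threshold
for before, after in itertools.product([-4, -3, -2], [2, 3, 4]):
    n1 = window_total(anomalies, epochs, before)
    n2 = window_total(anomalies, epochs, after)
    if n2 > n1:                        # p evaluated only for increases
        p = binomtest(n2, n1 + n2, p=0.5).pvalue
        print(before, after, p, p <= alpha)
\end{verbatim}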
In the case of the D12 dataset (power lines), there are six bins with $p$-values $< 0.0055$ for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ in the considered square of $3\times 3$ bins centered on $(-3,+3)$ in Fig.~\ref{fig:pvalues12}, corresponding to a statistically significant increase of anomalies. A significant increase of anomalies is already observed over final 3-day periods centered on Day $+1$, as compared with initial 3-day periods centered on Days $-3$ and $-2$, indicating an immediate effect of geomagnetic storms on power lines.
In the case of D8 (transformers), however, the three bins corresponding to increases of anomalies with the smallest $p$-values are found in Fig. \ref{fig:pvalues8} for final 3-day periods centered on Days $+3$ to $+4$, as compared with initial 3-day periods centered on Days $-1$ to $0$. Therefore, there is a clear time delay of $\sim 2$--$3$ days between a variation of $IntDst$ or $Int(d(\textit{SYM-H})/dt)$ and the corresponding variation of the number of anomalies in the D8 dataset. In such a situation, it is more appropriate to consider for D8 the square of $3\times 3 =9$ bins centered on $(-1,+3)$ in Fig. \ref{fig:pvalues8}. Inside this domain, one bin has a $p$-value $=0.0045 < 0.0055$ for $IntDst$ in Fig. \ref{fig:pvalues8}, indicating a statistically significant {\it delayed} increase of anomalies for D8.
Overall, the results displayed in Figs. \ref{fig:pvalues12}-\ref{fig:pvalues8} and in Figs.~\ref{fig:D1ps}--\ref{fig:D12ps} therefore confirm the preceding results obtained for 5-day periods, but they further allow us to determine the optimal time delays before a statistically significant increase of anomalies in different power grid equipment.
Most strikingly, a statistically highly significant increase of anomalies is found for D11--D12 (power lines) for both $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ only $\sim 0$--$1$ day after Day 0, as compared with all the preceding 3-day periods without storm activity (i.e., with $IntDst=0$ or $Int(d(\textit{SYM-H})/dt)=0$). Some less significant increases are also found for D4 (power lines, as D11--D12) for $IntDst$. Such results imply an immediate effect of geomagnetic storms on power lines, already on Days 0 to $+1$. This looks quite plausible, because any effect of GICs on power lines (due to harmonics-related current waveform distortion leading to a detrimental reaction of protective relays or other devices connected to these lines) is likely to occur almost immediately.
Furthermore, Fig. \ref{fig:pvalues8} reveals the presence of a statistically significant {\it delayed} increase of anomalies for D8 (high voltage transformers) following geomagnetic storms when considering $IntDst$ (an increase is also present for $Int(d(\textit{SYM-H})/dt)$ but somewhat less significant), with a delay of $\sim 3$ days after Day 0. This strongly suggests the presence of some delayed effects of storm-time geomagnetic activity on transformers (note also that the lowest rates of anomalies are observed here on Days $-2$ to $0$, similarly corresponding to a delayed effect of the previous days of zero storm activity). Transformers may indeed be affected by GICs but still continue to operate for a while -- typically for a few days -- before actual problems ultimately show up and are registered in logs \citep[e.g.,][]{Wang2015}.
\subsection{Ring and Auroral Currents Effects: $IntAp$ parameter}
\label{sect:intAP}
Since both ring current variations during storms and other (mainly auroral) current variations during strong substorms may produce significant GICs, we further performed similar SEAs for the $IntAp$ parameter, which (despite its own limitations, see Section 2.2 and \cite{Kappenman2005}) is expected to roughly take into account the effects of both kinds of disturbances -- whereas $IntDst$ and $Int(d(\textit{SYM-H})/dt)$ only correspond to storm periods. However, due to the relatively low threshold $ap\geq 15$ (equivalent to $Kp\geq 3$) of the integration used to calculate daily $IntAp$ levels, this new data series contained many more events (notably, many isolated substorms, sometimes outside of storms) than the previous $IntDst$ (storm) data set. As a result, requiring as before a 5-day period prior to events with $IntAp=0$ led to only a weak $IntAp$ maximum on Day 0, with a preceding $IntAp$ peak on Days $-10$ to $-5$ of comparable magnitude. Therefore, we changed our selection procedure, to consider only events with a peak $IntAp>1000$ nT$\cdot$hr and such that no similar event was present in the preceding 5 days.
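A minimal sketch of this selection rule is given below; ``no similar event in the preceding 5 days'' is implemented here, as a simplifying assumption, as no earlier day above the same $IntAp$ threshold.
\begin{verbatim}
def select_intap_epochs(int_ap, threshold=1000.0, lookback=5):
    # int_ap: hypothetical daily IntAp series (in nT*hr)
    epochs = []
    for d in range(lookback, len(int_ap)):
        if int_ap[d] > threshold and \
           max(int_ap[d-lookback:d]) <= threshold:
            epochs.append(d)
    return epochs
\end{verbatim}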
The resulting SEAs displayed in Fig.~\ref{fig:INTAP} show that this new selection procedure produces a large peak $IntAp\sim 1400$ nT$\cdot$hr on Day 0 in the SEAs, with much lower levels on all 10 previous days, especially between Days $-6$ and $-2$. The daily number of anomalies is found to increase by a statistically very significant amount during the 5-day period following Day 0 as compared to the 5-day period preceding Day 0, for series D11 and D12 in Fig. \ref{fig:INTAP}, with corresponding $p$-values of 0.03 and 0.007, respectively. There is a remarkable simultaneity between the peak of $IntAp$ and the peak of anomalies in the two SEAs, with at most one day of delay. Moreover, such peaks of daily anomalies on Days 0 or $+1$ are consistently larger than all other daily values in the full 21-day SEAs. Such results therefore demonstrate the likely presence of nearly immediate effects of both storm-related and substorm-related geomagnetic disturbances on GICs and power lines (D11--D12) in the Czech power network. This is certainly due to the major impact of strong substorms on GICs, both during and outside geomagnetic storms.
\begin{figure}
\centering
\resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure5a.pdf}}}
\resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure5b.pdf}}}
\caption{Epoch-superposed daily numbers of anomalies in the D11 (left) and D12 (right) series, considering the $IntAp$ parameter. Solid lines indicate mean superposed daily rates of anomalies (upper row) or geomagnetic activity $IntAp$ (in nT$\cdot$hr) in the reference time series (bottom row) during Days $-1$ to $+5$ from the epoch (Day 0), whereas dashed lines show the same quantities for the remaining days. Error bars show the one-standard-deviation half-widths.}
\label{fig:INTAP}
\end{figure}
There are also detectable increases of daily anomalies between 5-day periods before/after Day 0 for D8 (transformers, with a delay of $\sim 2$ days) and for D4 (power lines, immediate), but they are not statistically significant, with $p$-values $\simeq 0.25$ (see SEAs for all D1 to D12 series provided in Figs.~\ref{fig:D1intAP}--\ref{fig:D12intAP} in the online supplement).
In addition, there is a statistically significant increase of anomalies for D10 (high and very high voltage electrical substations) with a $p$-value of 0.006, with a first peak of anomalies at Day $+1$ but a much more delayed, higher peak on Days $+4$ and $+5$. While power lines react immediately to GICs, high and very high voltage electrical substations, which comprise busbars, capacitors, or transformers, may indeed be affected but still continue to operate without registered problems until the cumulative damage reaches a sufficient level. A time lag of 3--5 days does not seem wholly unrealistic in this respect \citep{Kappenman2007, Wang2015}.
It is worth noting that our previous analysis based on $IntDst$ did not show a statistically significant impact of storms for D10 (although the smallest $p$-value reached 0.08 in Table \ref{tab:pvalues}), contrary to the present analysis based on $IntAp$. This suggests that prolonged 2--3 day periods of repeated non-storm-time substorms or solar wind sudden impulses (SIs), taken into account by $IntAp$ but not by $IntDst$, could have a noticeable effect on some electrical substations.
\subsection{Auroral Current Effects: $IntAE$ parameter}
Next, we performed similar SEAs for the $IntAE$ parameter that provides a measure of cumulated high-latitude auroral current variations. An increased hourly auroral electrojet index $AE>150-250$ nT is one of the dominant manifestations of substorms, and many substorm studies rely on $AE$ to estimate the intensity of substorms, although $AE$ is not a specific measure of substorms \citep{Kamide1996, Tsurutani2004}. We compared the period of 5 {\it disturbed days} (with daily $IntAE>150$ nT$\cdot$hr) immediately following Day 0 (the day of peak $IntAE$) with the 5-day period immediately preceding Day 0 -- a preceding period of {\it nearly quiet days} (with daily $IntAE<30$ nT$\cdot$hr) especially selected to have such nearly zero $IntAE$ levels. This way, we can check the impact of {\it disturbed days} of strong $AE$ activity (often corresponding to substorms, occurring both during and outside storms) on power grid anomalies, as compared with {\it quiet days}. We also tried as before to consider shorter 3-day periods to help determine the best time lags between increases of anomalies and Day 0.
\begin{figure}
\centering
\resizebox{0.49\textwidth}{!}{\rotatebox{-90}{\includegraphics{figure6.pdf}}}
\caption{Epoch-superposed daily numbers of anomalies in the D11 series, considering the $IntAE$ parameter. Solid lines indicate superposed anomaly rates (upper panel) or $IntAE$ (in nT$\cdot$hr) in the reference time series (lower panel) during Days~$-1$ to $+5$ from the epoch (Day~0), whereas dashed lines show the same quantities for the remaining days. Error bars show one-standard-deviation half-widths. }
\label{fig:IntAE}
\end{figure}
All the corresponding plots are given in Figs.~\ref{fig:D1intAE}--\ref{fig:D12intAE} in the online supplement. In general, these results mostly agree with the $IntAp$ results. However, they are somewhat less statistically significant than the results obtained with all the preceding metrics, except for the D11--D12 (power lines) series. For D11, we find a statistically significant 15\% increase in the total number of anomalies after/before Day 0, with a $p$-value of 0.034 (see Fig. \ref{fig:IntAE}), while for D12 (power lines) the increase of anomalies is only 2.6\%, with a marginal $p=0.055$.
An important point is that these results based on $IntAE$ confirm the impact on power lines of auroral electrojet disturbances, often related to substorms. Nevertheless, these results also suggest that the $IntDst$, $IntAp$, or $Int(d(\textit{SYM-H})/dt)$ metrics may be slightly more appropriate than $IntAE$ for categorizing disturbed days leading to GIC effects at middle latitudes in the Czech power grid. This could stem from the fact that the stations measuring $AE$ lie at higher latitudes than those used for the mid-latitude $ap$ index: $IntAE$ may either take into account weak substorms that do not actually strongly affect middle latitudes, or it may underestimate mid-latitude disturbances produced by large substorms \citep{Lockwood2019, Thomsen2004}. Alternatively, there could be some significant impacts of ring current variations on GICs at mid-latitudes, not taken into account in $IntAE$.
\section{Discussion}
In the SEAs, increases of the number of anomalies by $\approx 5$--$10$\% were often observed during the 5 most disturbed days as compared with the preceding 5 consecutive quiet days. However, it is important to note that such increases of anomalies were present during only the 5 most disturbed days among the 21-day total duration of each SEA. It is also unclear whether there was any statistically significant increase of anomalies caused by the much weaker geomagnetic activity present during other days that did not fulfill the criteria for our SEA analysis. It is thus difficult to obtain a credible estimate of the total fraction of anomalies that could be directly related to geomagnetic effects. In our previous study \citep{Vybostokova2018}, the corresponding total number of anomalies attributable to variations of geomagnetic activity was also estimated as 1--4\%. Such values are consistent with results from a previous study of the impact of solar activity on the US electric power transmission network in 1992--2010, which showed that $\sim$ 4\% of the corresponding anomalies were likely attributable to strong geomagnetic activity and GICs \citep{schrijver2013disturbances}.
We also considered different parameter series, namely cumulative $IntDst$, $IntAp$, and $IntAE$ parameters integrated over the preceding 5 or 10 days, to evaluate the effects of a longer exposure to GICs on power-grid devices. The corresponding superposed epoch analysis did not yield statistically significant results. Without a proper event selection procedure and without an integration limit, the SEAs were dominated by weak events, during which the effects were probably weak and did not emerge from the average rates of anomalies due to causes other than geomagnetic activity. SEAs were further performed separately for weak, moderate, and strong events, but this did not significantly improve the results. The most promising results in terms of magnitude of the increase of anomalies during stronger activity were for D8, D10, and D12 for $IntDst$ (with lags of 1--3 days), and D8 and D11 for $IntAE$.
Based on our analysis, it turns out that geomagnetic disturbances mostly affected the datasets registering anomalies on power lines. It is interesting to note that most of the power lines in D7, D11, and D12 have distances between grounding points of the order of tens of kilometers. We also found significant delayed effects in the D8 dataset of high-voltage transformers. Although significant effects were observed in D4 during strong storms (see Fig.~S40), the distances between grounding points are of the order of hundreds of meters in this case, that is, much shorter than for the other power-line datasets. The topology of the network in D4 is also far more complex than in the other power-line datasets. It is unlikely that GICs induced in the D4 network could be responsible for the observed increase of the anomaly rate after Day 0 in the corresponding SEA. Nevertheless, some detrimental currents could have entered the D4 network from nearby connected networks of other power companies and caused operational anomalies during strong events.
\section{Conclusions}
As noted by \cite{schrijver2013disturbances}, the selection of an appropriate geomagnetic parameter is very important when searching for correlations between anomalies recorded in human infrastructures and variations of geomagnetic activity. Here, we have presented results obtained by considering four different and complementary parameters of cumulative geomagnetic activity, namely the different storm-time $Int(d(\textit{SYM-H})/dt)$ and $IntDst$ low-latitude metrics tracking mainly ring current variations, the high-latitude $IntAE$ metric mainly tracking auroral current variations, and the mid-latitude $IntAp$ metric tracking both ring and auroral current variations -- all of which were integrated over geomagnetically disturbed periods. This allowed us to compare the cumulated number of anomalies observed in the Czech power grid during the corresponding disturbed days of high geomagnetic activity with the number of anomalies recorded during quiet days.
At the considered middle geomagnetic latitudes, our statistical analysis of $\sim10$ years of data has shown that space-weather-related events affected mostly long power lines (D11, D12), probably due to a distortion of the electrical current waveform that eventually triggered a detrimental reaction of protective relays or disrupted other connected devices. However, significant and slightly more delayed (by $\sim 1$--$2$ days) effects were also observed in high-voltage transformers.
Both substorm-related disturbances and magnetic storms were found to have statistically significant impacts on the power grid network, since the four considered measures of disturbed days ($IntDst$, $Int(d(\textit{SYM-H})/dt)$, $IntAp$, and $IntAE$) led to more or less similar results -- although $IntAE$ was slightly less efficient. In addition, we found that considering moderate thresholds (neither too large nor too small) on time-integrated geomagnetic activity quantified by $IntDst$, $Int(d(\textit{SYM-H})/dt)$, or $IntAp$, produced the most statistically significant increases in anomaly rates, suggesting a non-negligible impact of moderate disturbances. These results are therefore consistent with a major impact of substorms, either inside or outside storms, on GICs at middle latitudes, together with a possible additional impact of ring current variations during storms.
It is worth noting that our study showed that in the 5-day period following the commencement of geomagnetic activity there is an approximately 5--10\% increase in the recorded power-line and transformer anomalies in the Czech power grid, probably related to geomagnetic activity and GICs. Such values are consistent with previous results concerning the US power grid \citep{schrijver2013disturbances}.
\cite{schrijver2014assessing} further found that for the US network, the 5\% stormiest days were apparently the most dangerous, with a 20\% increase of grid-related anomalies as compared to quiet periods. We similarly found that the days with a minimum $Dst<-50$ nT (roughly representing the $\approx 8$\% stormiest days, see \citealt{Gonzalez94}) probably had the strongest impact on the Czech power grid, leading to immediate or slightly delayed $\sim 5$--$20$\% increases of anomalies as compared to quiet periods.
\begin{acknowledgements}
M.\v{S}. was supported by the institute research project RVO:67985815. We are grateful to the power grid data providers for giving us the opportunity to exploit their logs of anomalies, namely to P.~Spurn\'y (\v{C}EPS), J.~Bro\v{z} and J.~Bu\v{r}i\v{c} (\v{C}EZ Distribuce), R.~Hanu\v{s} (PREdistribuce), and D.~Mezera and R.~B\'il\'y (E.ON Distribuce). The maintenance logs are considered strictly private by the power companies and are provided under non-disclosure agreements. We gratefully acknowledge the World Data Center in Kyoto and the Space Physics Data Facility (SPDF) at NASA Goddard Space Flight Center for the OMNI data at \url{http://omniweb.gsfc.nasa.gov} of the $Dst$ and $\textit{SYM-H}$ geomagnetic indices used in this paper. {\bf Author contributions:} DM designed the study and provided processed geomagnetic data, K\v{Z} and TV wrote the processing code as parts of their student projects under the supervision of M\v{S}. M\v{S} performed the analysis. DM and M\v{S} interpreted the data and wrote the manuscript. All authors contributed to the final version of the paper.
\end{acknowledgements}
\section{Introduction}
\label{s_intro}
While the free action of open string field theory \cite{NNW} is local
in the sense that it involves not more than two derivatives of the
string field, it is well known that the interaction term of Witten's
string field theory \cite{Witten:1985cc} contains infinitely many
derivatives. Theories with more than two, but finitely many,
derivatives \cite{Ostrogradski,Eliezer:1989cr} suffer either from an
unbounded Hamiltonian or from ghosts\footnote{See, however,
\cite{Bender:2007wu} for an example of quantization of a theory with
four derivatives, with no ghost and bounded (but non-hermitian)
Hamiltonian.}, but these instabilities do not necessarily survive in
the limit of infinitely many derivatives. Perhaps the simplest way to
see this is that the propagator in a theory with finitely many
derivatives is the inverse of a polynomial and therefore has poles,
some of them ghosts. In the limit of infinite number of derivatives,
however, the propagator becomes the inverse of a function that might
have only one zero, corresponding to a regular excitation. It might
even have no zero at all, like in $p$-adic string theory where the
propagator is an exponential (furthermore, this exponential propagator
renders all loop diagrams finite \cite{Minahan:2001pd}).
In cubic open string field theory, the form of the nonlocality is
universal, the higher derivatives of any field $\phi(x)$ always
appearing in the interaction as
\begin{equation}
\tilde{\phi}(x) \equiv K^{\Box} \phi(x),
\label{tilde}
\end{equation}
where $K = \frac{3 \sqrt{3}}{4}$, and we use the signature $\eta_{\mu
\nu} = {\rm diag}(-1,1,\ldots,1)$. In this case one can see
the nonlocality explicitly because $\tilde{\phi}(x)$ is a smearing of
$\phi(x)$ as can be seen from the convolution formula
\cite{Brekke:1988dg}
\begin{equation}
e^{\beta \partial_x^2} \phi(x) =
\frac{1}{2 \sqrt{\pi \beta}} \int_{-\infty}^\infty
e^{-\frac{1}{4 \beta} (x-y)^2} \phi(y) dy, \qquad \beta >0.
\label{convolution}
\end{equation}
For a homogeneous time-dependent problem, one would take
$\tilde{\phi}(t)$ as the fundamental field and write $\phi(t) =
e^{\log(K) \partial_t^2} \tilde{\phi}(t)$, which can then be written
as a convolution as in Eq.~(\ref{convolution}). A consequence is that
the equation of motion of a homogeneous time-dependent string field
involves the string field not only at time $t$ but at {\em all} times,
both in the past and in the future of $t$. One can treat this kind of
equation either as a convolution equation using
Eq.~(\ref{convolution}), or as a differential equation of infinite
order. It must be understood, however, that such a differential
equation cannot be seen as a limit of finite-order differential
equation; in particular the initial value problem becomes different
when we have infinitely many derivatives (see \cite{Barnaby:2007ve}
for a rigorous discussion, and \cite{Calcagni:2007ef} which contains
some similar results).
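As a quick numerical check of Eq.~(\ref{convolution}), one can compare both sides on a plane wave, for which $e^{\beta \partial_x^2} \cos(kx) = e^{-\beta k^2} \cos(kx)$; the sample values of $\beta$, $k$, and $x$ in the following Python sketch are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, k, x = 0.25, 2.0, 0.7
phi = lambda y: np.cos(k * y)

# right-hand side of the convolution formula: Gaussian smearing of phi
kernel = lambda y: np.exp(-(x - y)**2/(4*beta)) / (2*np.sqrt(np.pi*beta))
smeared, _ = quad(lambda y: kernel(y)*phi(y), -np.inf, np.inf)

# left-hand side: exp(beta d^2/dx^2) cos(kx) = exp(-beta k^2) cos(kx)
exact = np.exp(-beta*k**2) * np.cos(k*x)
print(smeared, exact)   # the two agree to quadrature accuracy
\end{verbatim}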
A time-dependent equation of motion with infinitely many derivatives
which is of particular physical interest, is the equation describing
the decay of an unstable D-brane. A well-known problem is that, on the
one hand, a boundary conformal field theory (BCFT) analysis shows that
one should expect a monotonic decay of the tachyon down its potential
\cite{Sen:2002nu} and that the energy of the D-brane is converted into
very massive closed strings at rest, behaving like dust
\cite{Sen:2002in} (tachyon matter). On the other hand, numerical
solutions of string field theory \cite{Moeller:2002vx,Fujita:2003ex}
show a completely different behavior. Namely, the tachyon does reach
the non-perturbative vacuum, but it then continues further and starts
oscillating around it with diverging amplitude. Although the tachyon
can climb arbitrarily high up the potential, the energy conservation
is not violated because the kinetic energy can be negative. In fact,
it may seem that energy conservation obviously discards a monotonically
rolling tachyon that would stop at the local minimum of its potential.
However, this is not totally trivial because one could imagine that
the energy of the D-brane is somehow stored in the very high-order
derivatives of the tachyon, still allowing for a monotonic rolling.
But this is actually ruled out \cite{Yang:2002nm,Joukovskaya:2008cr}
because one can write the expression of the energy in an integral form
which makes it clear that, in fact, a monotonically rolling tachyon
cannot conserve energy. It is now believed that these ever-growing
oscillations are not catastrophic after all. For one thing the string
field is not a gauge-invariant observable; but more concretely, it was
shown in \cite{Coletti:2005zj}, that a field redefinition mapping the
cubic SFT action to the boundary SFT action, would also map the
oscillating solution to a well-behaved solution. More recently, it was
shown \cite{Kiermaier:2008qu} that the closed string boundary state
obtained from the rolling tachyon solution, coincides with the BCFT
boundary state.
It is interesting to investigate how this wild rolling changes if we
somehow couple the closed strings sector to the open string SFT
action. The most consistent way to do this would be to consider
open-closed string field theory \cite{Zwiebach:1997fe}; but solving
the equations of motion of the purely closed sector
\cite{Zwiebach:1992ie} involves a much higher level of difficulty
\cite{Belopolsky:1994bj,Moeller:2004yy,Yang:2005ep,Yang:2005iua,Yang:2005rx,
Moeller:2006cv,Moeller:2006cw,Moeller:2007mu,Moeller:2008tb}. A
somewhat more manageable approach would be to consider a fixed closed
string background, but the SFT action becomes in general
non-polynomial in a generic closed background \cite{Zwiebach:1990qj},
hence also hard to solve. What can be done, however, is to minimally
couple gravity to the open SFT action. It has been shown, for
instance, that minimally coupling an open superstring tachyon to a FRW
metric tames the wild oscillations of the tachyon; and convergent
rolling tachyon solutions were found numerically
\cite{Joukovskaya:2008cr}. In \cite{Hel-Sch}, which was the motivation
for the present work, Hellerman and Schnabl considered open SFT in a
linear dilaton background. They chose a {\em light-like} dilaton
gradient and a string field depending only on the light-cone time
$x^{+}$. This is physically motivated because a bubble of true vacuum is
expected to expand at the speed of light \cite{Coleman:1977py}. If the
radius of the bubble is large enough, we can focus on one small patch
and approximate it by a plane, and we choose the light-like
coordinate $x^+$ (which we call light-cone time) to be orthogonal to
this plane. Moreover, with this ansatz important simplifications
occur. In particular, using the fact that $e^{X^+}$ is an exactly
marginal operator, Hellerman and Schnabl were able to use the results
of \cite{Schnabl:2007az,Kiermaier:2007ba} in order to prove that the
rolling tachyon asymptotes to the tachyon vacuum \cite{Schnabl:2005gv}
at large light-cone time. On a more explicit footing, Hellerman and
Schnabl also considered the SFT action truncated at level zero in
Siegel gauge (i.e. keeping only the tachyon). Here the light-cone
simplification manifests itself by changing the nature of the
non-locality. The non-locality of Eq.~(\ref{tilde}), which by virtue
of Eq.~(\ref{convolution}), involves the tachyon field at all times,
becomes simply the tachyon at some {\em retarded light-cone time}
$\phi(x^+-\gamma)$. The equation of motion for the tachyon is then
\begin{equation}
\phi'(x^{+}) - \phi(x^{+}) = - K^3 \, \phi(x^{+}-\gamma)^2.
\label{TachEOMlev0}
\end{equation}
A numerical solution to this equation was worked out by Hellerman and
Schnabl. Their method, however, did not allow them to go very far in
light-cone time, but far enough to see convincingly that the tachyon
reaches the vacuum after oscillating around it with a decreasing
amplitude. In \cite{Barnaby}, Barnaby et al. considered the initial
value problem and the stability of light-like rolling in $p$-adic
string theory and in SFT at level zero. In particular, they were able
to numerically solve Eq.~(\ref{TachEOMlev0}) for a much larger
light-cone time interval. Their numerical method is based on the
diffusion equation
\cite{Calcagni:2007wy,Calcagni:2009tx,Calcagni:2009jb}. Although this
method can in principle be generalized to higher levels, it is hard to
do so in practice.
The motivation for our present paper, was to investigate further the
light-cone rolling in Siegel gauge by considering higher-level fields.
We will consider levels $(2,4)$, $(2,6)$, and $(4,8)$, where the notation
$(L,M)$ means that we are keeping fields up to level $L$ and
interactions up to total level $M$. Our results are three-fold.
Firstly, we realized that equations of the type (\ref{TachEOMlev0})
are known in the mathematics literature as {\em delay differential
equations} (abbreviated DDEs). The most widely used method for
numerically solving DDEs is the {\em method of steps}. Using this
method, we show that solving Eq.~(\ref{TachEOMlev0}) numerically
becomes surprisingly easy. Moreover, the generalisation to higher
levels is straightforward.
Secondly, we show that when we include higher-level fields, the nice
picture of the string field gently oscillating around the vacuum with
decreasing amplitude takes a serious hit. Indeed, we show that
already at level two, the tensor fields which must be included in our
analysis bring derivatives into the equations of motion in such a way
that these become a so-called \textit{system of higher-order neutral
DDEs} with one positive delay. Such DDEs cannot in general be
solved with the method of steps. We have to do some simplifications
before obtaining numerical solutions. What we can do, however, is to
look at the equations of motion close to the vacuum. We will find that
the latter effectively contain several delays. This would pose no
further conceptual difficulty if all delays were positive, but we show
that we obtain {\em negative delays} as well. In other words, the
equations of motion effectively involve the fields at some past
light-cone times, but also at some {\em future} light-cone times. We
show that if we have both negative and positive delays, there exist
{\em growing oscillation} modes around the vacuum. This suggests that
the string field may not converge to the non-perturbative vacuum. This
seems to be in contradiction with the analytic solution obtained in
\cite{Hel-Sch}, and also with our numerical solution obtained by
expanding the fields in exponential series (this provides in principle
a very accurate solution but only up to a limited light-cone
time). But the two pictures can be reconciled if we notice that the
initial conditions for the analytic solution (which are the same as
those for the exponential series solution) are very special. The
diverging modes might not be excited for this particular solution, but
our results imply that a small change in the initial conditions can
render the rolling non-convergent.
Thirdly, by studying the equations of motion near the vacuum at level
four, we show that there are more delays at this level, and that they
are more spread out, both towards the past and towards the future. We are
led to conjecture that in the large-level limit, one recovers a
discrete version of the non-locality (\ref{tilde}). If the dilaton
gradient is subsequently sent to zero, the delays become more and more
closely spaced, and we recover (\ref{tilde}). We conclude, in
particular, that the nice simplification of the non-locality occurring
at level zero is only accidental.
This paper is structured as follows: In the next section, we calculate
the action and derive the equations of motion at level two. We give a
short review of DDEs in Appendix \ref{s_dde} and the complete results
are given in Appendices \ref{Lagrangian24} and \ref{EoMs24}. We solve
the level-zero equation with the method of steps, and we attempt to do
the same at level two. We show what problems we face and what can be
done to get some information on the rolling at this level. In Section
\ref{s_oscillations}, we study the linearized equations of motion near
the vacuum. We show that negative delays appear at level two and four
and conclude the general form of non-locality in the linear dilaton
background. In order to do so, we need to calculate the determinants
of polynomial matrices. A naive approach fails if the matrices are too
large, so we explain a little-known method for calculating such
determinants in Appendix \ref{polydet_s}. In Section
\ref{s_discussion}, we discuss further the consequences of our
results. Finally, we offer a review of linear dilaton CFT in
Appendix \ref{rev lin Dil}.
\section{Light-like dynamics of the vacuum transition in Siegel gauge}
\label{s_level2}
This section is divided into several paragraphs. First, we show how to
derive the equations of motion of open string field theory in a linear
dilaton background with level truncation. We present more details of the
calculation, on the one hand to introduce the notation, and on the other hand
because such details are too often omitted. After that, we briefly describe how
we solved the resulting delay differential equations on the computer.
To that end we explain how astonishingly natural it is in our setup to
choose the infinitely many initial conditions required for producing a unique solution.
We test our machinery in the simplest possible case of level zero, and observe
excellent agreement with the literature \cite{Hel-Sch,Barnaby}. We then go beyond
level zero and explain our results at level two and four, which
can be summarized as follows:
The individual modes of the string field are initially in the perturbative vacuum,
then, driven by the tachyon, they grow steeply. Finally they oscillate around
their respective vacuum expectation values with decaying amplitudes.
\paragraph{Derivation}
If we write the string field in terms of vertex operators as
$|\Psi\rangle = \Psi(0) |0\rangle$, the action of Witten's open string field theory reads
\begin{equation}
S = - \frac{1}{g^2} \, \left( \frac{1}{2} \langle \Psi , Q \Psi \rangle +
\frac{1}{3} \langle f_1 \circ \Psi(0) \, f_2 \circ \Psi(0) \, f_3 \circ \Psi(0) \rangle
\right),
\end{equation}
where the functions $f_i$ are the conformal transformations mapping
each string (semi-disk) to the common interaction upper-half plane.
We have derived the action and equations of motion with two independent methods.
First we calculated the action by hand, calculating explicitly the
conformal transformations $f_i \circ \Psi(0)$, and then the
CFT correlators
\footnote{For a detailed derivation of the correlator in the
linear dilaton background, cf. \cite{Ho:2007ar}.}
\footnote{Note that we use the complex derivative $\partial_{z}$ instead of the real derivative.}
:
\begin{eqnarray}
\corr{\prod_{i=1}^{n}\ordB e^{ik_{i}\cdot X\left(z_{i}\right)}\ordB\prod_{j=1}^{p}\partial_{z'_{j}}X^{\mu_{j}}\left(z'_{j}\right)} & = & \left(2\pi\right)^{D}\delta^{D}\left(\sum_{i}k_{i}\right)\prod_{i,j=1,\,i<j}^{n}\left|z_{i}-z_{j}\right|^{2\alpha'k_{i}\cdot k_{j}}\nonumber \\
& & \cdot\corr{\prod_{j=1}^{p}\left[v^{\mu_{j}}\left(z'_{j}\right)+q^{\mu_{j}}\left(z'_{j}\right)\right]}.\label{eq:corr n exp. and delX fields}\end{eqnarray}
The new objects $v,q$ serve as a tool to quickly work out the combinatorics
of the contractions - just expand the product into a polynomial in
$v,q$ and observe the following rules:
\begin{enumerate}
\item \emph{Replace $v^{\mu}\left(z\right)=-i\alpha'\sum_{i=1}^{n}\frac{k_{i}^{\mu}}{z-z_{i}}.$
\label{rule v}}
\item \emph{Contract products of two $q$s using $-\alpha'\eta^{\mu\nu}/2\left(z-z'\right)^{-2}.$\label{rule q^2}}
\item \emph{Remove all terms with an odd number of $q$s. \label{rule odd q}}
\item Note that the general expression diverges if $z=z'$ . For correlators
of normal ordered products (e.g. $\ordB\partial X^{\mu}\, e^{ik\cdot X}\ordB\times\ordB\dots\ordB$)
these terms precisely cancel, providing a non-singular result. In
this case we can further simplify with the additional rule\\
\emph{Remove all terms in $v$ with $z=z_{i}$ and all $q$-products
at the same point, $z=z'$. \label{rule remove at same point}}
\end{enumerate}
The dilaton background enters explicitly in two ways:
\begin{enumerate}
\item the delta function is updated to include the breaking of the translation
invariance by the linear dilaton background \[
\delta^{D}\left(\sum_{i}k_{i}\right)\to\delta^{D}\left(\sum_{i}k_{i}+iV\right),\]
where the delta function of a complex argument is formally defined
by the following integral representation\begin{equation}
\delta^{D}\left(\sum_{i}k_{i}+iV\right)\equiv\frac{1}{\left(2\pi\right)^{D}}\int\mbox{d}^{D}x\,\, e^{i\, x\cdot\sum k_{i}-V\cdot x}.\label{eq:delta function with dilaton}\end{equation}
\item through the modified conformal transformation law \[
X^{\mu}\left(z,\bar{z}\right)\to f\circ X^{\mu}\left(z,\bar{z}\right)=X^{\mu}\left(f\left(z\right),f\left(\bar{z}\right)\right)+\frac{\alpha'}{2}V^{\mu}\log\left|f'\left(z\right)\right|^{2}\]
needed when mapping the string field vertices to the interaction worldsheet.
\end{enumerate}
In order to make sure that our results are correct, we redid the same
calculation with the method of conservation laws. Luckily, the
conservation laws for an anomalous vector (like $\partial X^\mu$ in a
linear dilaton background) were already worked out by Rastelli and
Zwiebach in \cite{Rastelli:2000iu}. This method has the advantage of
being easy to implement on a computer. In our case we wrote
a \texttt{mathematica} program. To our satisfaction both methods
agreed entirely. Since the calculations become rather cumbersome at
higher levels to do by hand, we relied on our code for the equations
of motion at levels (2,6) and (4,8).
After explaining our notation, we will quickly see that at level two, the
string field written out in its mode expansion can be reduced to
just eight spacetime fields. We follow the conventions by \cite{Kos-Sam}
(except that we call $\beta$ their $\beta_1$) and write
the string field truncated to level two. Working in Siegel gauge,
we can eliminate those terms containing a $c_{0}$-ghost mode, and
due to the twist symmetry of the action, we can consistently
set all terms at odd levels to zero. This leaves us with the following expression for the string field
at level two:
\begin{equation}
\state\Psi=\biggl\{\phi+\frac{i}{\sqrt{2}}B_{\mu}\alpha_{-2}^{\mu}+\frac{1}{\sqrt{2}}B_{\mu\nu}\alpha_{-1}^{\mu}\alpha_{-1}^{\nu}\,+\beta b_{-1}c_{-1}\biggr\} c_{1}\state0.\label{eq:SF at level two}\end{equation}
Since we work in $D=26$ spacetime dimensions, expr. (\ref{eq:SF at level two}) contains
379 spacetime fields. But in the case of light-like tachyon rolling,
we can drastically reduce the number of fields needed in our calculation.
Working in the light-cone frame
we can split the dimensions into light-like and ordinary components
\[
\mu=\left(0,1,2,\dots D-1\right)\to\left(+,-,2,3,\dots D-1\right)\equiv\left(+,-,i\right).\]
We assume the linear dilaton gradient to be light-like, $V^{2}=0$. By rotational
symmetry we can choose a coordinate system where $V=\left(V^{+},0,\dots,0\right)$.
Furthermore we consider spacetime fields $\phi,\beta,B_{\mu},B_{\mu\nu}$
in expr. (\ref{eq:SF at level two}) that depend only on the first lightcone
coordinate $x^{+}$. As detailed in appendix \ref{Lagrangian24}, we can then focus
on the following eight (of 379) fields to compute the action:
\begin{equation}
\left\{ \phi,\, B^{+},\, B^{-},\, B^{++},\, B^{+-},\, B^{--},\, F,\,\beta \right\}. \label{eq:eight fields}\end{equation}
Note that $\phi$ is the tachyon field and $F$ is the scalar field
associated with the contribution of $B^{ij},~ i,j=2\dots25$ to the
trace of $B^{\mu\nu}$ by
\begin{equation}
\mathrm{Tr}\ B^{\mu\nu}\equiv-2B^{+-}+F. \label{eq:def F}
\end{equation}
The resulting action is presented in appendix \ref{Lagrangian24},
both in Lorentz covariant form and explicitly using (\ref{eq:eight fields}).
To check its correctness, one can take the limit of vanishing dilaton
gradient, $V\to0$ and compare to the action found in \cite{Kos-Sam}. Both
expressions agree as desired.
With the action computed, we can proceed to deriving the equations
of motion in a notationally compact manner. For a Lagrangian ${\mathcal L}\left(\phi,\partial\phi,\partial^{2}\phi,\dots\right)$ containing
arbitrary orders of field derivatives $\partial^{n}\phi$, the Euler-Lagrange
equation is
\[
0=\frac{\partial\mathcal{L}}{\partial\phi}-\partial_{\mu_{1}}\frac{\partial\mathcal{L}}{\partial\left[\partial_{\mu_{1}}\phi\right]}+\partial_{\mu_{1}}\partial_{\mu_{2}}\frac{\partial\mathcal{L}}{\partial\left[\partial_{\mu_{1}}\partial_{\mu_{2}}\phi\right]}-\dots\]
In this notation the derivatives are \emph{not} symmetrized, their
order matters:\[
\frac{\partial\left[\partial_{\mu_{1}}\partial_{\mu_{2}}\dots\partial_{\mu_{k}}\phi\right]}{\partial\left[\partial_{\nu_{1}}\partial_{\nu_{2}}\dots\partial_{\nu_{k}}\phi\right]}=\delta_{\mu_{1}}^{\nu_{1}}\delta_{\mu_{2}}^{\nu_{2}}\dots\delta_{\mu_{k}}^{\nu_{k}}.\]
This is just a matter of more convenient bookkeeping as we sum over
all combinations of indices. In a compact notation we can define the
differential operator $\mathcal{D}^{\phi}$ which returns the
equation of motion for the field $\phi$ when applied to the Lagrangian $\mathcal{L}$ depending
on $\phi$ and possibly other fields to any order in the field derivatives:
\begin{eqnarray*}
\mathcal{D}^{\phi} & \equiv & \sum_{k=0}^{\infty} \left(-1\right)^{k}\partial_{\nu_{1}}
\partial_{\nu_{2}}\dots\partial_{\nu_{k}}\frac{\partial}{\partial\left[\partial_{\nu_{1}}\partial_{\nu_{2}}
\dots\partial_{\nu_{k}}\phi\right]}\\
\Rightarrow & \mathcal{D}^{\phi}\mathcal{L} & \stackrel{!}{=}0.\end{eqnarray*}
Let us see how to apply this in a concrete example: take a generic
interaction term from $\mathcal{L}$, e.g.
$\tilde{A}(x^{+})(\partial^{\mu_1} \ldots \partial^{\mu_l}
\tilde{B}(x^{+}))
(\partial_{\mu_1} \ldots \partial_{\mu_l}
\tilde{\phi}(x^{+}))e^{V^{+}x^{-}}$
and compute its contribution to the equation of motion. When deriving the equation of motion for
the tachyon field $\phi$ we apply $\mathcal{D}^{\phi}$. We
use $\Box=-2\partial_{+}\partial_{-}+\partial_{i}\partial^{i}=-2\partial_{+}\partial_{-}$
because $\mathcal{L}$ is independent of the $x_{i}$-coordinate.
Fields with a tilde are defined by\[
\tilde{\phi}\left(x^{+}\right)=K^{\alpha'\Box}\phi\left(x^{+}\right)=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\alpha'\log K\right)^{n}\left(-2\partial_{+}\partial_{-}\right)^{n}\phi.\]
Now derive the equation of motion:
\begin{eqnarray*}
\mathcal{D}^{\phi}\left\{\partial_{\mu_{1}}\dots\partial_{\mu_{l}}\tilde{\phi}\right\}
& = & \sum_{k=0}^{\infty}\left(-1\right)^{k}\partial_{\nu_{1}}\dots\partial_{\nu_{k}}\frac{\partial\left[\partial_{\mu_{1}}\dots\partial_{\mu_{l}}\tilde{\phi}\right]}{\partial\left[\partial_{\nu_{1}}\partial_{\nu_{2}}\dots\partial_{\nu_{k}}\phi\right]}\\
& = & \sum_{k,n=0}^{\infty}\left(-1\right)^{k}\partial_{\nu_{1}}\dots\partial_{\nu_{k}}\frac{1}{n!}\left(-2\alpha'\log K\right)^{n}\delta_{\mu_{1}}^{\nu_{1}}\dots\delta_{\mu_{l}}^{\nu_{l}}\dots\delta_{+}^{\nu_{k-1}}\delta_{-}^{\nu_{k}}\delta_{2n+l,k}\\
& = & \sum_{n=0}^{\infty}\left(-1\right)^{2n+l}\frac{1}{n!}\left(-2\alpha'\log K\right)^{n}\partial_{\mu_{1}}\dots\partial_{\mu_{l}}\left(\partial_{+}\partial_{-}\right)^{n}\\
& = & \left(-1\right)^{l}\partial_{\mu_{1}}\dots\partial_{\mu_{l}}e^{-2\alpha'\log\left(K\right)\partial_{+}\partial_{-}}.\end{eqnarray*}
Applying $\mathcal{D}^{\phi}$ to the whole term, it becomes apparent that it acts essentially
as a translation operator when $\partial_{-} \to V^{+}$:
\begin{eqnarray*}
&& \mathcal{D}^{\phi}\left\{\tilde{A}(x^{+})
\left(\partial^{\mu_1} \ldots \partial^{\mu_l}\tilde{B}(x^{+})\right)
\left(\partial_{\mu_1} \ldots \partial_{\mu_l}\tilde{\phi}(x^{+})\right)
e^{V^{+}x^{-}}\right\} \\
& = & \left(-1\right)^{l}\partial_{\mu_{1}}\dots\partial_{\mu_{l}}
e^{-2\alpha'\log\left(K\right)\partial_{+}\partial_{-}}\left\{\tilde{A}(x^{+})
\left(\partial^{\mu_1} \ldots \partial^{\mu_l}\tilde{B}(x^{+})\right)
e^{V^{+}x^{-}} \right\} \\
& = & \left(-1\right)^{l}\partial_{\mu_{1}}\dots\partial_{\mu_{l}}
e^{-2\alpha'V^{+}\log\left(K\right)\partial_{+}}\left\{ A(x^{+})\left(\partial^{\mu_1} \ldots \partial^{\mu_l}B(x^{+})\right)e^{V^{+}x^{-}}\right\} \\
& = &\left(-1\right)^{l}\partial_{\mu_{1}}\dots\partial_{\mu_{l}} \left\{ A\left(x^{+}-2\alpha'V^{+}\log\left(K\right)\right)\left(\partial^{\mu_1} \ldots \partial^{\mu_l}B\left(x^{+}-2\alpha'V^{+}\log\left(K\right)\right)\right)e^{V^{+}x^{-}} \right\}\end{eqnarray*}
where finally the tilde was eliminated because
$\Box A(x^+) = 0$.
The translation operator simply shifts the argument $x^{+}$ by $-2\alpha'V^{+}\log\left(K\right)$.
This is a generic feature in \textit{every} interaction term, we therefore
define the symbol \[
y^{+}\equiv x^{+}-2\alpha'V^{+}\log\left(K\right)\]
for the shifted point to abbreviate the notation. The reason for
introducing $\mathcal{D}^{\phi}$ is that it simplifies the calculation
significantly. As an example, consider the \textit{chain rule}
\[\mathcal{D}^{\phi}\left\{ \tilde{A}\left(x^{+}\right){\tilde{\phi}}^{2}
\left(x^{+}\right)e^{V^{+}x^{-}}\right\} = 2 A\left(y^{+}\right) \phi\left(y^{+}\right)e^{V^{+}x^{-}}.\]
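The translation-operator identity $e^{a\partial_x}f(x)=f(x+a)$, which underlies this reduction to shifted arguments, is easy to check symbolically; the test function and truncation order in the following Python sketch are arbitrary choices.
\begin{verbatim}
import sympy as sp

x, a = sp.symbols('x a')
f = sp.cos(x) * sp.exp(-x**2/4)        # arbitrary smooth test function

# truncated exponential of the derivative: sum_n a^n/n! d^n f/dx^n
N = 12
shifted = sum(a**n/sp.factorial(n) * sp.diff(f, x, n) for n in range(N))

vals = {x: sp.Rational(1, 3), a: sp.Rational(1, 5)}
print(sp.N(shifted.subs(vals)))        # matches f(x+a) below
print(sp.N(f.subs(x, x + a).subs(vals)))
\end{verbatim}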
The set of eight equations of motion contains derivatives of up to
fourth order, each equation is quite lengthy with one notable
exception: the equation of motion for $B^{--}$ is short and can be
solved easily:\[
\mathcal{D}^{B^{--}}\mathcal{L}_{total}=B^{++}(x^{+})+{B^{++}}'(x^{+})+\frac{8B^{++}(y^{+})\phi(y^{+})}{\sqrt{3}}\stackrel{!}{=}0.\]
The equation is linear in $B^{++}$, hence we set $B^{++}\equiv0$ to
obtain a solution. As a matter of fact this still admits non-trivial
solutions for the other seven fields. This observation was also made
in \cite{Erler}, in the related context of OSFT using the lightcone
basis for the modes of the string
field.
This is a peculiarity of level (2,4). Indeed, already at level (2,6)
one cannot consistently set $B^{++}$ to zero anymore.
Setting $B^{++}$ to zero at level (2,4) reduces
the length of the equations of motion by about one third and the total differential
order from 21 to 17. The resulting set of equations of motion is presented in appendix
\ref{EoMs24}. The seven equations contain a total of 144 terms. For
reference we list the seven remaining fields\begin{equation}
\left\{ \phi,\, B^{+-},\, B^{--},\, F,\,\beta,\, B^{+},\, B^{-}\right\}. \label{eq:seven fields}\end{equation}
Let $\Psi_{j},\, j=1\dots7$,
denote one of the above fields, e.g. $\Psi_{1}\left(x^{+}\right)=\phi\left(x^{+}\right)$
is the tachyon field. Then the generic form of an equation of motion is
\footnote{To simplify the notation, we set $V^{+} = \alpha' \equiv 1$.}:
\begin{equation}
0=\Psi_{j}\left(x^{+}\right)+\left(-1\right)^{\delta_{j,1}}\partial\Psi_{j}\left(x^{+}\right)+
\sum_{i,k=1}^{7}\sum_{n,m=0}^{3}a_{nm}^{ik}\partial^{n}\Psi_{i}\left(y^{+}\right)\partial^{m}\Psi_{k}\left(y^{+}\right).
\label{eq: generic EoMs}\end{equation}
The first two terms come from the kinetic term in the action. They
have the same sign except for the tachyon and are evaluated at position
$x^{+}$. All derivatives are understood with respect to $x^{+}$. All
terms in the sum arise from the cubic interaction. They are evaluated
at $y^{+}$ and contain derivatives of up to the third order. In fact
many of the real-valued coefficients $a_{nm}^{ik}$ are zero.
\paragraph{Solving the Equations of Motion}
Given the general structure of the equations of motion
(\ref{eq: generic EoMs}),
let us now focus on solving them at level $\left(2,4\right)$.
There are derivatives with respect to $x^{+}$ at both points: $x^{+}$ (max: 1st order)
and $y^{+}=x^{+}-\gamma$ (max: 3rd order). The total
differential order of the system of equations is 17. Because of the
non-locality (fields at $x^{+}$ and at $y^{+}$) these are
not ordinary differential equations (ODEs). The key observation
is that, mathematically speaking, we have a system of coupled \emph{delay
differential equations}
of the neutral type (NDDE) with one constant delay $\gamma$. The
theory of delay differential equations (DDE) has been developed to
a great extent in the 20th century, as this type of differential equation
arises in a large array of disciplines: population dynamics, machine
control theory, neutron diffusion, spreading of diseases, retarded
propagation in classical electrodynamics, etc. We give a short introduction
to DDEs, highlighting the differences to ODEs, in appendix \ref{s_dde}.
Further useful references are \cite{Bellman-Cooke,Driver,Bellen}.
In order to solve DDEs we use the \emph{method of steps}, explained
further in Appendix \ref{s_dde}. The basic idea is to reduce the problem
of computing the solution over a full interval to subintervals where
the DDE reduces to an ODE which is solved with standard methods. In
order to obtain a unique solution, it is however not sufficient to
give an initial condition at one point as in the ODE case: initial
conditions over a finite interval (of the length of the delay $\gamma$)
have to be specified. These conditions are called the \emph{initial
data}. They are essentially uniquely determined when we require that
the tachyon condensation starts in the perturbative vacuum.
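To illustrate the method on a standard textbook example (not specific to
our system): for $f'(t)=f(t-1)$ with initial data $f(t)=1$ on $[-1,0]$,
the delayed argument on $[0,1]$ lies in the initial interval, so the DDE
reduces there to the ODE $f'(t)=1$, giving $f(t)=1+t$; on $[1,2]$ one
then solves $f'(t)=t$, giving $f(t)=2+(t^{2}-1)/2$; and so on, the
solution gaining one order of differentiability at each step.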
There are other methods. For instance, Barnaby et al. \cite{Barnaby}
transformed the level-zero DDE into an equivalent diffusion-like local
partial differential equation problem with suitable boundary
conditions, solved that with standard codes and finally transformed
back to obtain the DDE solution. For the system of equations at level
(2,4), their method seems cumbersome to apply. In contrast, the method
of steps is powerful enough to easily extend to the case of several
unknowns. For more details on diffusion methods, we refer the reader to
\cite{Calcagni:2007wy,Calcagni:2009tx,Calcagni:2009jb}.

We have seven coupled equations for seven fields with
derivatives up to the third order. The numerical stability and
convergence properties of the method of steps have been studied
carefully in the past decades, cf. \cite{Bellen} for an extensive
review of numerical methods. The problem can now be considered a
standard one, much like solving a system of ODEs numerically. This
implies that the user has to carefully check the consistency of the
numerical solution, preferably by independent methods. Much of the
effort in this section is aimed in that direction.
On the computer we need dimensionless numbers. From now on we work in
units where $\alpha'\equiv 1$; for convenience we also choose $V^{+}\equiv1$.
\paragraph{Choosing initial data}
Let us now see how to supply initial data for each field to obtain
a unique solution to the equations of motion. A priori we could choose any initial
data, but physical reasons nearly completely fix them.
Consider as an example the tachyon field $\phi$. In principle we
would like to obtain the solution $\phi\left(x^{+}\right)\ \forall\ x^{+}\in\mathbb{R}$
from the equations of motion. On the computer we can only obtain a solution on some
finite interval $\left[x^{+}_\text{min},x^{+}_\text{max}\right]$. We then need to fix
the tachyon on $[x^{+}_\text{min}-\gamma,x^{+}_\text{min}]$, with the delay $\gamma=2\log K$.
To find constraints recall that we are interested in the tachyon condensation
solution: the solution should initially be in the perturbative vacuum
(string field $\state\Psi=0$) and in the end arrive at the non-perturbative
vacuum. Hence at sufficiently small $x^{+}_\text{min}$ the absolute value
of the tachyon and all other six fields $\Psi_{j}\left(x^{+}\right)$
from \eqref{eq:seven fields} should be small\[
\left|\phi\left(x^{+}_\text{min}\right)\right|\ll1, \qquad
\left|\Psi_{j}\left(x^{+}_\text{min}\right)\right|\ll1\,.\]
As we take $x^{+}_\text{min}\to-\infty$ we can neglect all interaction terms
$\sim\Psi_{i}\Psi_{j}$ in the equations of motion \eqref{eq: generic EoMs} and
obtain dramatically simplified equations. In other words, we linearize
the equations of motion around zero, $\Psi_{j}=0+\delta\Psi_{j}$ , and neglect
all terms quadratic in the small quantity $\delta\Psi_{j}$. The
resulting equations are\begin{equation}
0=\delta\Psi_{j}\left(x^{+}\right)+\left(-1\right)^{\delta_{j,1}}
\partial\delta\Psi_{j}\left(x^{+}\right).\label{eq:linearized Eom}\end{equation}
These are first order \emph{ordinary} differential equations
which can be solved analytically; we just need
to fix the initial condition. Starting in the unstable vacuum means
all fields vanish at negative infinity,
\[
\Psi_{j}\left(-\infty\right)=0,\quad\forall j.\]
The linearized tachyon equation of motion has the solution
\begin{align}
& \delta\phi\left(x^{+}\right)=a_{\phi}e^{x^{+}}\label{eq:tach initial data}\\
& 0 =\delta\phi\left(-\infty\right)\quad \Rightarrow \quad a_{\phi}\mbox{ arbitrary}.\nonumber \end{align}
The solution grows exponentially, reflecting the instability of the
perturbative vacuum. The initial condition does not fix the constant
$a_{\phi}$. This is the only free parameter in choosing the initial
data as we will see shortly. In short, there are just three cases, \[
a_{\phi}=\begin{cases}
\mbox{positive}\\
0\\
\mbox{negative}\end{cases}\]
that give qualitatively different solutions to the full equations of motion. From equation (\ref{eq:tach initial data}) $a_{\phi}$ can be chosen to have any real
value. If we set it to zero
this corresponds to the static solution in the perturbative vacuum
that we are \emph{not} interested in.
For $a_{\phi}\ne 0$ we obtain interesting solutions.
By shifting the origin in the $x^{+}$
direction, we can fix the absolute value
\begin{equation}
|a_{\phi}|\equiv1\label{eq: 1st tachyon coefficient is one}\end{equation}
without loss of generality. It turns out that for $a_{\phi}<0$
the solutions diverge. This can be intuitively explained as
``rolling down the wrong side of the hill'': the effective tachyon potential
is presented schematically in Fig.~\ref{fig: EffTachyonPot}.
\begin{figure}[!ht]
\begin{center}
\input{TachyonPotential.pstex_t}
\caption{\footnotesize{Effective tachyon potential. Depending on the sign
of the coefficient the tachyon either rolls off to infinity or to the non-perturbative
vacuum.}}
\label{fig: EffTachyonPot}
\end{center}
\end{figure}
The linearized equations of motion for the other fields have the solution
\begin{eqnarray*}
&&\delta\Psi_{j}\left(x^{+}\right)=a_{j}e^{-x^{+}}\\
&& 0 = \delta\Psi_{j}\left(-\infty\right)\quad \Rightarrow \quad a_{j}=0.\end{eqnarray*}
In conclusion the initial data are as follows: the tachyon rises exponentially
with a prefactor of choice; all other fields vanish. This confirms
that the tachyon drives the condensation process. In formulas,\[
\delta\Psi_{j}\left(x^{+}\right)=\delta_{j,1}a_{\phi}e^{x^{+}},\qquad x^{+}_\text{min}-\gamma\le x^{+}\le x^{+}_\text{min}.\]
We require that the solutions of the equations of motion be analytic;
thus we can express the analytic initial data as
\[\delta\Psi_{j}\left(x^{+}\right)=\sum_{n=0}^{\infty} c_{n}^{j} \left(x^{+}\right)^{n}.\]
With our choice for the tachyon $\delta\Psi_{1}=a_{\phi}e^{x^{+}}$,
we have fixed every coefficient, $c_{n}^{1}=\frac{a_{\phi}}{n!}$.
Conversely for the other fields we find $c_{n}^{j}=0$.
We conclude:
the solution of the linearized equation requires only one initial condition
($a_\phi$ for the tachyon); it then fixes the countably many initial conditions
required for finding a unique solution of the full non-linear DDE.
Note however that in general a Lagrangian with derivatives of all orders
does \textit{not} require supplying countably many initial conditions for
a unique solution, cf. \cite{Barnaby:2007ve} for a review of the
initial value problem for linear equations.
\paragraph{Programming details}
Several codes for NDDEs implementing the method of steps are available.
We choose \texttt{mathematica} in version 7, as it
allows us to manipulate the equations symbolically, in addition to
its capability of solving systems of NDDEs with the single command
\texttt{NDSolve}.%
\footnote{This feature was not available in previous versions.}
\texttt{mathematica} also allows us to do the numerics with arbitrary
precision, which we made use of, as the built-in machine precision
was not quite satisfactory.
We want to warn the reader that in the case of NDDEs with higher order
derivatives at delayed positions \texttt{mathematica}
quickly returns results without any warning or error message.
However, upon plugging the supposed solutions into the
equations of motion we realized that they do {\em not} satisfy the
equations. As always when using numerical results, checking is crucial.
In those cases where \texttt{mathematica} yields correct results we
used the parameters and options listed in Table~\ref{NDSolve}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Method & AccuracyGoal & WorkingPrecision & MaxSteps\tabularnewline
\hline
\hline
Adams & 15 & 20 & 50000\tabularnewline
\hline
\end{tabular}
\caption{\footnotesize{Standard options used with \texttt{mathematica} 7's \texttt{NDSolve} to numerically
solve the equations of motion\label{NDSolve}.}}
\end{center}
\end{table}
\paragraph{Warm up at level zero}
As a basic consistency check for our numerical method we run the
simplest example: the level zero truncation to the tachyon only. We
can compare the results to Barnaby et al. \cite{Barnaby} (diffusion
problem) and Hellerman and Schnabl \cite{Hel-Sch} (exponential series
solution), who each solved the same problem with a different method.
The equation of motion \cite{Hel-Sch} for the tachyon at level zero
is \begin{equation}
0=\phi'\left(x^{+}\right)-\phi\left(x^{+}\right)+K^{3}\phi^{2}\left(y^{+}\right).\label{eq:hellerman
EoM no alpha,V}\end{equation} We used the initial data
$\phi\left(x^{+}\right)=1\cdot e^{x^{+}}$, Eq.~\eqref{eq:tach initial
data}, on an initial interval $[-25-2 \log K,-25]$. In practice
this is close enough to the perturbative vacuum, as $
\phi\left(-25\right)=e^{-25} \ll 1$. Moreover, the number of intervals
between $x^+=-25$ and $x^+=0$ is $\frac{25}{2 \log K} \approx 47.8$. This
tells us (see Appendix \ref{s_dde}) that the numerical solution will be
differentiable at least $47$ times for $x^+>0$; we can therefore
expect that it will be a very good approximation to the analytic
solution. The solution is depicted in Fig.~\ref{fig:level0}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{HellermanSchnabl}\caption{\footnotesize{
Level zero tachyon condensation calculated with the method of steps.
From the old vacuum $\phi=0$ we jump to the new vacuum with
exponentially damped oscillations. The vev is indicated by the
dashed line.}}
\label{fig:level0}
\end{center}
\end{figure}
It is in excellent agreement with solutions
from \cite{Hel-Sch,Barnaby}. The asymptotic behavior
for large $x^{+}$ is known; it is of the form $\exp\times\cos$. The
frequency and decay rate can be determined by a simple fit; the values
agree perfectly with those noted by Hellerman/Schnabl from linearization
at large $x^{+}$. Note that these authors could compute the solution
of \eqref{eq:hellerman EoM no alpha,V} only as far as $x^{+}_\text{max}=7$
because of computing time limitations: the computational complexity
grows exponentially with $x^{+}$ for their method. Even though we plot
the solution only up to $x^{+}_\text{max}=25$ we compute it up to $x^{+}_\text{max}=100$
and beyond without any difficulty. The numerical solution ($\sim$0.1s)
actually takes less computing time than plotting the result ($\sim$1s)
on a modern computer. This confirms that the method of steps works
both fast and accurately.
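For readers who wish to reproduce this warm-up, here is a minimal
Python sketch of the method of steps for
Eq.~\eqref{eq:hellerman EoM no alpha,V}, with a simple first-order Euler
update in place of the Adams integrator of Table~\ref{NDSolve}. The
value $K=3\sqrt{3}/4$ is an assumption on our part, consistent with the
interval count $25/(2\log K)\approx47.8$ quoted above.
\begin{verbatim}
import numpy as np

# Level-zero equation of motion (retarded DDE):
#   phi'(x) = phi(x) - K**3 * phi(x - gamma)**2,   gamma = 2*log(K).
# K = 3*sqrt(3)/4 is an assumed value (see the lead-in text).
K = 3.0 * np.sqrt(3.0) / 4.0
gamma = 2.0 * np.log(K)

n = 200                             # grid points per delay interval
h = gamma / n                       # step size
x_min, x_max = -25.0, 25.0
n_tot = n + int((x_max - x_min) / h)

x = x_min - gamma + h * np.arange(n_tot + 1)
phi = np.empty_like(x)
phi[:n + 1] = np.exp(x[:n + 1])     # initial data on [x_min - gamma, x_min]

# Method of steps: the delayed value phi(x - gamma) = phi[i - n] is
# always already known, so each step is an ordinary (Euler) ODE step.
for i in range(n, n_tot):
    phi[i + 1] = phi[i] + h * (phi[i] - K**3 * phi[i - n]**2)

print(phi[-1])                      # approaches the vev K**(-3) ~ 0.456
\end{verbatim}
This reproduces the qualitative behavior of Fig.~\ref{fig:level0}; for
quantitative work one would of course use a higher-order integrator and
a finer grid.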
We want to emphasize that the level zero equation of motion is in many respects simpler
than the level two equations of motion. Obviously it is only one equation compared to
seven coupled equations, but the major difference is of another kind:
the level two equations are \emph{neutral} DDEs, while \eqref{eq:hellerman EoM no alpha,V}
is of the {\em retarded} type: there are no derivatives of $\phi$ at the delayed position $y^{+}$.
\paragraph{Level two solutions}
The full set of seven equations of motion at level (2,4) (appendix
\ref{EoMs24}) contains derivatives up to the third order in the
fields at the delayed position $y^{+}=x^{+}-\gamma$, and up to first
order at $x^{+}$, the total differential order is 17, while the total
number of terms is a staggering 144.
As explained above, \texttt{mathematica} 7 cannot handle
higher derivatives at delayed positions, and
as of the time of writing we know
of no other numerical method for solving a system of higher
order neutral DDEs.
Hence we look for ways to simplify the problem that allow us to
follow the vacuum transition:
\begin{enumerate}
\item Set all higher field derivatives to zero.
\item Consider only the scalar fields, $\phi,\beta,F$, and set the other
vector/tensor component fields,
$B^{+-},B^{--},B^{+},B^{-}$ to zero. Then the system of equations
contains at most first order derivatives.
\item Rewrite the fields as exponential series,
\begin{equation}
\Psi_j(x^+) = \sum_{n=1}^\infty a_{j,n} \, e^{n x^+},
\label{expseries}
\end{equation}
and solve for the first few hundred
coefficients $a_{j,n}$ recursively (a minimal sketch of this recursion
is given after this list). Evidently
for large $x^{+}$ this procedure requires knowing many of the $a_{j,n}$;
in fact the number of coefficients needed for an accurate solution
grows exponentially with $x^{+}$. Thus we compute the solution with
this approach in
reasonable time only on a relatively small range $[x^{+}_\text{min},
x^{+}_\text{max}]$, with $x^{+}_\text{max}\approx 4.1$.
In this range the numerical solution is very accurate, but
for larger $x^{+}$, the last exponential in (\ref{expseries}) dominates and
the numerical solution diverges.
\end{enumerate}
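As a minimal illustration of the third method, consider again the
level-zero equation \eqref{eq:hellerman EoM no alpha,V}. Plugging in
$\phi=\sum_{n\geq1}a_{n}e^{nx^{+}}$ and using $e^{-n\gamma}=K^{-2n}$
yields, for $n\geq2$,
\[
a_{n}=-\frac{K^{3-2n}}{n-1}\sum_{m=1}^{n-1}a_{m}a_{n-m},
\]
the one-sided version of Eq.~(\ref{recursion relation}) below, with
$a_{1}$ the free constant. In Python (double precision only; recall
that our production runs use up to 150-digit arithmetic):
\begin{verbatim}
import numpy as np

# Exponential-series coefficients for the level-zero equation
#   phi'(x) - phi(x) + K**3 * phi(x - gamma)**2 = 0,  gamma = 2*log(K):
#   a_n = -K**(3-2n)/(n-1) * sum_{m=1}^{n-1} a_m a_{n-m},   n >= 2.
# K = 3*sqrt(3)/4 is an assumed value.
K = 3.0 * np.sqrt(3.0) / 4.0
N = 100                            # number of coefficients to keep
a = np.zeros(N + 1)
a[1] = 1.0                         # the free constant a_phi

for n in range(2, N + 1):
    a[n] = -K**(3 - 2*n) / (n - 1) * sum(a[m] * a[n - m] for m in range(1, n))

def phi(xp):
    """Partial sum of the exponential series at light-cone time xp."""
    ns = np.arange(1, N + 1)
    return float(np.sum(a[1:] * np.exp(ns * xp)))

print(phi(0.0), phi(2.0))   # reliable only while the truncated tail is small
\end{verbatim}
At level two the same recursion runs over the coefficients $a_{j,n}$ of
all seven fields simultaneously.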
Note that the range $[x^{+}_\text{min}, x^{+}_\text{max}]$ is sufficient
to compare the different approximations and the different levels (0,0), (2,4), (2,6), (4,8)
around the transition to the non-perturbative
vacuum, see Fig.~\ref{fig:CompTach} as an example for the tachyon only.
\begin{figure}
\begin{center}
\includegraphics[scale=1.1]{CompareTachyon}
\caption{\footnotesize{The different approximations for the tachyon
condensation at level (2,4): From the perturbative vacuum $\phi=0$
the tachyon jumps to the non-perturbative vacuum. The vev of the
full system is indicated by the dashed line. When ignoring higher
derivatives the vev is unchanged. In contrast when considering the
reduced system composed of only the scalar fields $\phi, F, \beta$
the tachyon vev is slightly altered; this can be understood from
the remark starting before Eq.~(\protect \ref{vacuumremark}).}}
\label{fig:CompTach}
\end{center}
\end{figure}
Similarly to the level zero solution, Fig.~\ref{fig:level0},
the tachyon is initially in the perturbative vacuum $\phi(x^{+})=0$,
then grows exponentially near $x^{+} = 0$, slightly overshoots the
vacuum expectation value $\langle \phi \rangle_{(2,4)} $, then settles
in the non-perturbative vacuum.
However, using the exponential series ansatz, we cannot evaluate the
convergence properties around
the non-perturbative vacuum. We devote Section \ref{s_oscillations} to this issue.
All three methods agree very well up to the maximum value
near $x^{+} \approx 1$, where the non-linear couplings due to the interaction
between the various fields become important. But even for $x^{+} > 1$,
the solutions are qualitatively very similar.
Let us now consider the solution for the full system of the seven fields
at level two that we calculated up to $x^{+}_\text{max} \approx 4.1$ using the exponential series ansatz
(\ref{expseries}),
see Figure \ref{fig:ExpAll}.
\begin{figure}
\begin{center}
\includegraphics[scale=1.1]{ExpSeriesLevel2}
\caption{\footnotesize{Level (2,4) tachyon condensation:
From the perturbative vacuum $\phi=0$ we jump to the new vacuum. The solution was calculated
using the exponential series ansatz with 500 terms and a precision of
150 digits.}}
\label{fig:ExpAll}
\end{center}
\end{figure}
The general picture involving the three
stages
$$ \mathrm{perturbative~vacuum} \to \mathrm{transition}
\to \mathrm{non-perturbative~vacuum}$$
holds for all fields. The tachyon is drawn with a thick red line
to emphasize its importance for the vacuum transition: it is the first
component to grow, and thus drives the others out of the perturbative
vacuum. This affirms the intuitive notion of the tachyon as the unstable
mode of the string field, indicating the instability of the supporting
D-brane.
In addition to the evolution of the seven fields, we included the vevs
in the non-perturbative vacuum.
In Figure \ref{fig:ExpTachyon} we zoom on the tachyon in order to show how it
oscillates around the vev.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{TachyonExpSeriesLevel24}
\caption{\footnotesize{Level (2,4) tachyon condensation: Zoom on the
tachyon $\phi(x^+)$ as it oscillates around the vev. The solution
was calculated using the exponential series ansatz with 500 terms
and a precision of 150 digits.}}
\label{fig:ExpTachyon}
\end{center}
\end{figure}
Typically one would expect that all
fields, except the scalars, have vanishing vev, preserving translation
invariance in the non-perturbative vacuum. But due to the presence of
the linear dilaton background $V\cdot x=V^{+}x^{-}$, translation
invariance is broken in the $x^{+}$-direction. Hence
e.g. $\corr{B^{+}}_{V^{+}=1}\ne0$ is no surprise. If $V^{+}=0$, then
$B^{+}$ can have no vev.
How exactly the vevs are reached
for large $x^{+}$ is studied in detail in Section \ref{s_oscillations}.
There is more than one way to determine the vevs. The
simplest one is the following: restrict the equations of motion (appendix \ref{EoMs24})
to constant field configurations, i.e. set all derivative terms to zero.
This results in a set of equations where the vevs $\corr{\phi},\corr{ B^{+-}},\corr F$
appear quadratically, all others linearly, hence there exist $3\cdot2+4=10$
solutions. With \texttt{mathematica} they are found in closed form,
as expressions depending on roots of 10th order polynomials. Numerically
the vevs can be evaluated to arbitrary precision. To pick the right
solution from the set of ten, it is sufficient to consider the tachyon
vev:
\begin{itemize}
\item Two solutions can be eliminated as $\corr{\phi}\in\mathbb{C}$.
\item Six more can be neglected because $\corr{\phi}<0$. From the effective
potential, Fig.~\ref{fig: EffTachyonPot}, we know it is unbounded
for negative field values, hence there can be no finite negative vev.
\item One solution has $\corr{\phi}=0$. The vevs of the other fields vanish,
too. This is the unstable, \emph{perturbative vacuum} solution.
\item The last is the \emph{non-perturbative} \emph{vacuum} solution: $\corr{\phi}\approx0.5416$.
The resulting vevs to four significant digits are summarized in
Table~\ref{tab:Vacuum-expectation-values:}.
\end{itemize}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\corr{\phi}$ & $\corr F$ & $\corr{\beta}$ & $\corr{B^{+-}}$ & $\corr{B^{--}}$\
& $\corr{B^{+}}$ & $\corr{B^{-}}$\tabularnewline
\hline
\hline
0.5416 & 0.8808 & -0.1733 & -0.03670 & 0 & -0.05190 & 0\tabularnewline
\hline
\end{tabular}
\caption{\footnotesize{Vacuum expectation values at level two: the set of constant solutions of the full
equations of motion corresponding to the non-perturbative vacuum.}\label{tab:Vacuum-expectation-values:} }
\end{center}
\end{table}
Another way to determine the vevs is to realize that if we expand the
string field in the {\em universal} basis (i.e. using ghost modes and
matter Virasoro modes, but no matter oscillators), the vevs would be
independent of the dilaton gradient, and therefore equal to their
values in flat spacetime. Using the expression of the Virasoro
operators in terms of oscillators and dilaton gradient (\ref{eq:lin
dil Virasoros}) we can then deduce the vevs of the fields in the
non-universal basis. Explicitly, at level two in the universal basis
and in Siegel gauge, we have three fields $t$, $u$ and $v$, and the
string field in the non-perturbative vacuum is given by
\begin{equation}
|\Psi \rangle = \corr{t} c_1 |0\rangle + \corr{u} c_{-1} |0\rangle + \corr{v} L^m_{-2} c_1 |0\rangle,
\label{vacuumremark}
\end{equation}
where the vevs $\corr{t}$, $\corr{u}$ and $\corr{v}$ are well known
\cite{Sen:1999nx}. Plugging the expression (\ref{eq:lin dil Virasoros}) for $L^m_{-2}$, we have
\begin{equation}
|\Psi \rangle = \corr{t} \, c_1 |0\rangle + \corr{u} \, c_{-1} |0\rangle + \frac{1}{2} \corr{v} \, \alpha_{-1}^i \alpha_{-1}^i c_1 |0\rangle - \corr{v} \, \alpha_{-1}^+ \alpha_{-1}^- c_1 |0\rangle +
\frac{i}{\sqrt{2}} V^+ \corr{v} \, \alpha_{-2}^- |0\rangle.
\end{equation}
Comparing with our string field expansion, Eq.~(\ref{eq:SF at level two}), and the definition of $F$, Eq.~(\ref{eq:def F}), we find
\begin{equation}
\corr{\phi} = \corr{t}, \quad \corr{\beta} = -\corr{u}, \quad \corr F = 12 \sqrt{2} \corr{v}, \quad
\corr{B^{+-}} = -\frac{1}{\sqrt{2}} \corr{v}, \quad \corr{B^{+}} = - V^+ \corr{v}.
\label{universalvev}
\end{equation}
These give precisely the same values as in
Table~\ref{tab:Vacuum-expectation-values:}, providing further evidence
that our action was correctly calculated.
A curious result worth mentioning is that the following relations
\begin{equation}
B^{+-} = -\frac{1}{24} F, \qquad B^+ = - \frac{V^+}{12 \sqrt{2}} F,
\label{lev24relations}
\end{equation}
which, by Eq.~(\ref{universalvev}), should hold for the expectation
values, actually hold {\em for all} $x^+$ at level (2,4). This can be
roughly seen in Fig.~\ref{fig:ExpAll}. We have checked these relations
numerically beyond doubt (they hold with a precision of at least 100
digits), but have not found any simple reason to explain them. They
must in fact be ``accidental'' because they do not hold anymore at
level (2,6).
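At the level of the vevs themselves the check is elementary; with the
values of Table~\ref{tab:Vacuum-expectation-values:},
\[
-\frac{\corr F}{24}=-\frac{0.8808}{24}\approx-0.03670=\corr{B^{+-}},\qquad
-\frac{V^{+}\corr F}{12\sqrt{2}}=-\frac{0.8808}{12\sqrt{2}}\approx-0.05190=\corr{B^{+}}.
\]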
Let us now briefly look at level (2,6). The main difference here is
that we cannot set $B^{++}$ to zero anymore. In particular, the
equation of motion for $B^{--}$, which at level (2,4) contained only
terms proportional to $B^{++}$, now contains in particular a term
$(B^{--})^2$. We show the exponential series solution for the rolling
in Fig.~\ref{fig:ExpAll26}.
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{ExpSeriesLevel26}
\caption{\footnotesize{Level (2,6) tachyon condensation:
The solutions were calculated using the exponential series ansatz with
300 terms and a precision of 150 digits. We have multiplied $B^{++}$
by 20 in order to make it visible.}}
\label{fig:ExpAll26}
\end{center}
\end{figure}
\paragraph{Level four solutions}
Let us now turn to the system of equations at level (4,8). Based on the method of
conservation laws we used our \texttt{mathematica} code to compute the
equations of motion.
There are now 50 fields to consider; the equations of motion contain 21400 terms.
We want to investigate to what extent
the solution changes compared to level two. Again we used the exponential series
ansatz and computed the first 50 coefficients for all 50 fields. This
calculation took about 15 h on a fast computer.
From those 50 fields we decide to focus on the tachyon; discussing
the other 49 fields would be redundant, for they are all qualitatively
similar. Now we compare the tachyon at level two and four, Figure
\ref{fig:lev2vs4}.
\begin{figure}
\begin{center}
\includegraphics[scale=1.1]{Level2vsLevel4}
\caption{\footnotesize{ Comparison of tachyon condensation at levels
(0,0), (2,4), (2,6) and (4,8). The vevs have been normalized to
unity. The solution at level zero was calculated with the method
of steps, and the solutions at higher levels were calculated using
the exponential series ansatz with the origin of $x^{+}$ chosen such
that the first coefficient is one. Accordingly the initial
condition $a_{\phi}=K^{-3} $ was used for the level zero DDE. The
curves at levels (2,4) and (2,6) are almost indistinguishable.}}
\label{fig:lev2vs4}
\end{center}
\end{figure}
As before the vevs of all fields are extracted
from the constant solutions. For the tachyon, the vev is only slightly
bigger at level 4; the numerical value agrees with the estimate in \cite{Sen:1999nx}.
We notice that the tachyon overshoots more at level four, both
on a relative scale and in absolute value.
As noted in \cite{Hel-Sch}, the full solution (no level truncation)
converges monotonically. One might have expected to see the
overshooting decrease monotonically with the level as well, since it is
a lot more pronounced at level zero than at level two, but that is not
the case. For completeness, the relative overshooting of the tachyon
is shown in Table~\ref{tab:overshooting}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
Level & 0 & (2,4) & (2,6) & (4,8)\tabularnewline
\hline
\hline
$\left(\phi_\text{max}-\corr\phi\right)/\corr\phi$ & 18.7\% & 7.1\% & 6.7\% & 8.2\% \tabularnewline
\hline
\end{tabular}
\caption{{\footnotesize {Relative overshooting of the maximum vs the vev for
the tachyon at the lowest levels; it is not monotonically decreasing.}}}
\label{tab:overshooting}
\end{center}
\end{table}
\section{Oscillations around the non-perturbative vacuum}
\label{s_oscillations}
In this section, we want to consider the equations of motion when the
fields are very close to their non-perturbative vacuum expectation
values. This analysis can be started in a relatively general
formalism, independent of the level. We start by putting the
components of the string field into a column vector
\begin{equation}
{\mathbf \Psi}(x^+) = \left(\phi(x^+), B^+(x^+), B^-(x^+), B^{+-}(x^+),
\ldots \right)^\mathrm{T}.
\end{equation}
The length $n$ of this vector will be seven at level (2,4), eight at
level (2,6), and fifty
at level (4,8), the highest level that we consider in this paper.
Next we write
\begin{equation}
{\mathbf \Psi}(x^+) = {\mathbf \Psi}_0 + \delta{\mathbf \Psi}(x^+),
\end{equation}
where ${\mathbf \Psi}_0$ is the non-perturbative vacuum expectation
value of ${\mathbf \Psi}$, and $\delta{\mathbf \Psi}(x^+)$ is a
small perturbation. We now write schematically the equations of motion
for $\delta{\mathbf \Psi}(x^+)$ at linearized level, i.e. keeping
only the terms that are linear in $\delta{\mathbf \Psi}(x^+)$. Note
that because ${\mathbf \Psi}_0$ is a solution of the full equations of
motion, there will be obviously no constant term in the linearized
equations. The general structure of these equations is easy to
understand. On the left hand side, we will write the contributions
from the kinetic term; this involves the fields at light-cone time
$x^+$ (not retarded) and at most first derivatives of the fields. On
the right-hand side we will write the contributions from the
interaction term. These all involve the same retarded light-cone time
$x^+-\gamma$, and at most $3L$ derivatives, where $L$ is the level.
This is so because the
number of derivatives is at most equal to the total number of Lorentz
indices carried by the interacting fields, and each field of level $L$
can carry at most $L$ indices. So the general form of the equations of
motion is
\begin{align}
& \delta{\mathbf \Psi}'(x^+) + A \, \delta{\mathbf \Psi}(x^+) = \nonumber \\
& B_0 \, \delta{\mathbf \Psi}(x^+-\gamma) + B_1 \,
\delta{\mathbf \Psi}'(x^+-\gamma) +
B_2 \, \delta{\mathbf \Psi}''(x^+-\gamma) + \ldots +
B_{3L} \, \delta{\mathbf \Psi}^{(3L)}(x^+-\gamma),
\label{linearEoM}
\end{align}
where all the details are hidden in the $n$ by $n$ matrices $A$ and
$B_m$, $m=0,\ldots 3L$. We can now make the ansatz
\begin{equation}
\delta{\mathbf \Psi}(x^+) = e^{\omega x^+} \, {\mathbf \Xi},
\label{oscillation_ansatz}
\end{equation}
where $\omega$ is a complex number and ${\mathbf \Xi}$ is a vector of
complex numbers. Since the equations (\ref{linearEoM}) are real and
linear, we can always find a real solution by adding its complex
conjugate to (\ref{oscillation_ansatz}). Note that with this ansatz,
all fields oscillate with the same frequency $\mathop{\mathrm{Im}} \omega$ with
exponentially decaying or growing amplitudes, according to the sign of
$\mathop{\mathrm{Re}} \omega$. The relative amplitudes and phase shifts between the
fields are encoded in ${\mathbf \Xi}$. Plugging this ansatz into
Eq.~(\ref{linearEoM}) and multiplying by $e^{\gamma \omega}$, we
obtain the equation
\begin{equation}
\left( \omega e^{\gamma \omega} \, I + e^{\gamma \omega} \, A -
B_0 - \omega \, B_1 - \ldots - \omega^{3L} \, B_{3L} \right) \,{\mathbf \Xi} = 0,
\label{deteq1}
\end{equation}
where $I$ is the $n$ by $n$ identity matrix. Defining
\begin{equation}
M(\omega) \equiv \omega e^{\gamma \omega} \, I + e^{\gamma \omega} \, A -
B_0 - \omega \, B_1 - \ldots - \omega^{3L} \, B_{3L},
\label{M_def}
\end{equation}
we see that Eq.~(\ref{deteq1}) has a nontrivial solution for ${\mathbf \Xi}$ if and
only if $\omega$ is such that
\begin{equation}
\det M(\omega) = 0.
\label{detM0}
\end{equation}
It is easy to see from (\ref{M_def}) that $\det M(\omega)$ is an {\em
exponential polynomial} in $\omega$, i.e. a function of the form
\begin{equation}
\det M(\omega) = \sum_{j=0}^n p_j(\omega) \, e^{\beta_j \omega},
\qquad 0 = \beta_0 < \beta_1 < \ldots < \beta_n,
\label{exp_pol}
\end{equation}
where $p_j(\omega)$ are polynomials of degree $d_j$. In our particular
case, we have $\beta_j = j \gamma$.
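Locating the zeros of such an exponential polynomial numerically is
straightforward with Newton iteration seeded from a grid in the complex
plane. A minimal Python sketch, using the toy characteristic function
$F(\omega)=\omega e^{\omega}-1$ of the simple retarded equation
$f'(t)=f(t-1)$ (discussed below) as a stand-in for $\det M(\omega)$:
\begin{verbatim}
import numpy as np

# Newton iteration for zeros of an exponential polynomial, seeded
# from a grid in the complex plane.  Toy stand-in for det M(omega):
#   F(omega) = omega * exp(omega) - 1.
F  = lambda w: w * np.exp(w) - 1.0
dF = lambda w: (1.0 + w) * np.exp(w)

roots = []
for re in np.linspace(-8.0, 2.0, 21):
    for im in np.linspace(0.0, 40.0, 41):
        w = complex(re, im)
        for _ in range(60):                  # Newton steps
            fw = F(w)
            if not np.isfinite(abs(fw)):
                break
            w -= fw / dF(w)
        if abs(F(w)) < 1e-10 and not any(abs(w - r) < 1e-6 for r in roots):
            roots.append(w)

for w in sorted(roots, key=lambda z: z.imag):
    print(f"{w.real:+.6f} {w.imag:+.6f}i")
# One real root ~ +0.567; all complex roots have negative real part,
# receding logarithmically: a single "retarded" branch.
\end{verbatim}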
\paragraph{}
So we are interested in finding the roots of an exponential polynomial
(\ref{detM0}), and particularly in the signs of their real parts
because they will determine whether the perturbation $\delta{\mathbf
\Psi}(x^+)$ will eventually die out or not. This problem has been
studied quite extensively (see for example \cite{Bellman-Cooke}). The
general answer is that one can write expressions for the asymptotic
values of the zeros (i.e. the zeros $\omega$ with $|\omega|
\rightarrow \infty$). More simply, one can use the {\em distribution
diagram} to approximately locate the zeros. Namely, we plot the
points $P_j$ with coordinates $(\beta_j, d_j)$; we then draw the
convex polygonal line $L$ that joins $P_0$ to $P_n$, such that its
vertices are points of the set $P_j$ and such that no point $P_j$ lies
above it. This is illustrated in Fig.~\ref{det24_f} for the case of
level (2,4).
\begin{figure}
\begin{center}
\input{shortdet24.pstex_t}
\caption{\footnotesize{The distribution diagram of $\det M(\omega)$ at level (2,4).
It has three segments, $L_1$ (retarded), $L_2$ (neutral), and $L_3$ (advanced) with
respective slopes $\frac{1}{\gamma}$, $0$, and $-\frac{1}{\gamma}$.}}
\label{det24_f}
\end{center}
\end{figure}
Now let us denote the successive segments of $L$ (from left to right)
by $L_1, L_2, \ldots L_k$, and let $\mu_1, \mu_2, \ldots \mu_k$ denote
their slopes. The general result \cite{Bellman-Cooke} is that
there exist positive numbers $c_1$ and $c_2$ such that all the zeros
with norm greater than $c_2$ are located inside the union of the
strips $V_r$ defined by
\begin{equation}
\left| \mathop{\mathrm{Re}} (\omega + \mu_r \, \log \omega) \right| \leq c_1.
\end{equation}
It is clear that, for large $|\omega|$, these strips are located in
the left half-plane if $\mu_r$ is positive and in the right-half plane
if $\mu_r$ is negative. If $\mu_r=0$, the strip is vertical and
contains the imaginary axis. At level (2,4), we have three strips with
$\mu_r$ respectively equal to $\frac{1}{\gamma}$, $0$, and
$-\frac{1}{\gamma}$. These strips are sketched in
Fig.~\ref{zeros24_f} $i)$.
\begin{figure}
\begin{center}
\input{zeros24.pstex_t}
\caption{\footnotesize{Location of the zeros of $\det M$ at level
    (2,4). $i)$: The asymptotic analysis tells us that there exists a
    $c_2>0$ such that outside the circle of radius $c_2$, all the
    zeros are located inside the retarded strip $V_1$, the neutral
    strip $V_2$ and the advanced strip $V_3$.
$ii)$: Roots found numerically for $-200<\mathop{\mathrm{Im}} \omega<200$. The blue
dots are simple roots while the red dots are double roots.}}
\label{zeros24_f}
\end{center}
\end{figure}
One can say a little more. In particular, in any region $R$ defined by
\begin{equation}
|\mathop{\mathrm{Re}}(\omega+\mu_r \log \omega)| \leq c_1, \quad
|\mathop{\mathrm{Im}}(\omega + \mu_r \log \omega) - a| \leq b,
\label{ab}
\end{equation}
with no zero on the boundary, the number $n(R)$ of zeros in $R$ satisfies
\begin{equation}
1-n_r + \frac{b}{\pi} \beta \leq n(R) \leq \frac{b}{\pi} \beta + n_r - 1,
\label{zerodensity}
\end{equation}
where $n_r$ is the number of points of the distribution diagram on
$L_r$ and $\beta$ is the difference in values of $\beta_j$ between the
end-points of $L_r$. In Eq.~(\ref{ab}), $a$ is an arbitrary real
number, and $b$ is an arbitrary real positive number. So, for example
with $\mu_r =0$ (neutral strip), Eq.~(\ref{ab}) defines a rectangular
box of width $2 c_1$ and height $2 b$ centered on $a \times i$.
Therefore, Eq.~(\ref{zerodensity}) just means that the average
vertical density of the zeros along the neutral strip is given by
$\beta/\pi$. For the other branches with $\mu_r \neq 0$, the picture
is almost the same, just a little twisted. But in the limit $|\omega|
\rightarrow \infty$, $\log(\omega)$ is going to be roughly constant in
the box, so (\ref{ab}) defines an almost rectangular box centered on
$-\mu_r \log(\omega) + a \times i$. This means in particular that each
strip contains infinitely many zeros with arbitrarily large norm. And
it has the following immediate consequence: {\em At level (2,4), $\det
M(\omega)$ has infinitely many zeros with arbitrarily large positive
real part.} This means that if we specify a generic initial
condition for the string field ${\bf \Psi}(x^+)$, $0 \leq x^+ \leq
\gamma$, with ${\bf \Psi}(x^+)$ close to the non-perturbative vacuum,
then we should expect that it will start to oscillate with
exponentially growing amplitude, at least until the linear
approximation becomes invalid. For concreteness we show, in Figure
\ref{zeros24_f} $ii)$, the zeros of $\det M(\omega)$ found numerically
in the range $-200 < \mathop{\mathrm{Im}} \omega < 200$. The picture agrees with the
asymptotic analysis. We see in particular that the zeros are aligned
along branches in each of the strips, and that along these branches
the spacing between roots is asymptotically constant. Interestingly,
in the retarded strip there is a branch of {\em double roots} (the
double zeros are plotted in red in Figure \ref{zeros24_f} $ii)$). It
turns out that this is related to the fact, already mentioned in
Section \ref{s_level2}, that at level (2,4) there are only five
independent fields because of the relations (\ref{lev24relations}).
In Fig.~\ref{det24_f} we have denoted the segment with positive slope
as {\em retarded}, the one with zero slope as {\em neutral} and the
one with negative slope as {\em advanced}. This denomination is easily
understood by looking at the following three simple linear delay
differential equations (we refer the reader to Appendix \ref{s_dde}
for definitions and more details on delay differential equations).
First, suppose we have an equation of the form
\begin{equation}
f'(t) = f(t-1).
\label{simple_retarded}
\end{equation}
This is an equation of the retarded type since the right-hand side
includes $f$ at the retarded time $t-1$. Making the ansatz $f(t) =
e^{\omega t}$, we obtain the characteristic equation for $\omega$
\begin{equation}
\omega e^\omega - 1 = 0,
\end{equation}
whose distribution diagram is shown in Fig.~\ref{branches_f}$i)$. It
has one segment of positive unit slope; this justifies the
denomination ``retarded'' for such segments.
\begin{figure}
\begin{center}
\input{branches.pstex_t}
\caption{\footnotesize{Distribution diagrams of some simple linear
delay differential equations of $i)$ retarded type, $ii)$ neutral
type, and $iii)$ advanced type.}}
\label{branches_f}
\end{center}
\end{figure}
Now we consider the neutral delay differential equation
\begin{equation}
f'(t) = f'(t-1),
\label{simple_neutral}
\end{equation}
which has for characteristic equation $\omega e^\omega -
\omega = 0$. Its distribution diagram, shown in
Fig.~\ref{branches_f}$ii)$, has one segment of zero slope, hence the
denomination ``neutral'' for such segments. Finally, the equation
\begin{equation}
f'(t) = f(t+1)
\label{simple_advanced}
\end{equation}
is of the {\em advanced} type because the right-hand side involves $f$
at the advanced time $t+1$. Its characteristic equation is $\omega -
e^\omega = 0$, whose distribution diagram is shown in
Fig.~\ref{branches_f}$iii)$. It has one segment of negative slope,
which we therefore name {\em advanced}.
To summarize, at level zero the linearized equations of motion are
purely of the retarded type because the characteristic equation for
the tachyon is \cite{Hel-Sch}:
\begin{equation}
(\omega-1) e^{\gamma \omega} + 2 = 0,
\end{equation}
and its distribution diagram is thus as shown in
Fig.~\ref{branches_f}$i)$. From the above discussion, we therefore
know that its roots of large absolute value all have negative real
part, and it could thus have at most finitely many roots with positive
real part. It turns out that {\em all} the roots have negative real
parts, and the motion around the non-perturbative vacuum is stable at
this level. However, we have shown that at level (2,4) we obtain two
new segments, a neutral one and an advanced one, which ruins the
stability around the non-perturbative vacuum. It is interesting to ask
what happens at higher levels. We show, in Fig.~\ref{det26_f}, the
distribution diagram found at level (2,6).
\begin{figure}[!ht]
\begin{center}
\input{det26.pstex_t}
\caption{\footnotesize{The distribution diagram of $\det M(\omega)$ at level (2,6). It has
four segments $L_1,\ldots,L_4$ with respective slopes $\frac{1}{\gamma}$, $0$, $-\frac{1}{\gamma}$, and $-\frac{2}{\gamma}$.}}
\label{det26_f}
\end{center}
\end{figure}
Note that at this level, it is no longer consistent to set
$B^{++}(x^+)$ to zero, so we have a total of eight fields. One of the
most striking differences with level (2,4) is that there is one more
advanced segment. This can be understood from the fact that the action
at level (2,6) contains higher derivatives at retarded times. One
notices also that the retarded segment $L_1$ has become much shorter.
With our code, we were able to calculate the action and equations of
motion up to level (4,8). From this large calculation we show in
Fig.~\ref{det48_f} the result for the distribution diagram.
\begin{figure}[!ht]
\begin{center}
\input{det48.pstex_t}
\caption{\footnotesize{The distribution diagram of $\det M(\omega)$ at level (4,8). It has
five segments $L_1,\ldots,L_5$ with respective slopes $\frac{1}{\gamma}$, $0$, $-\frac{1}{\gamma}$, $-\frac{2}{\gamma}$, and $-\frac{3}{\gamma}$.}}
\label{det48_f}
\end{center}
\end{figure}
At this level we have fifty fields. It turns out that the calculation
of $\det M(\omega)$ is a difficult problem. With the definition $z
\equiv e^{\gamma \omega}$, we see that the entries of the matrix
$M(\omega)$ are polynomials in the two variables $\omega$ and $z$. We
explain, in Appendix~\ref{polydet_s}, why the computation of the
determinant of a polynomial matrix is hard and how to overcome this
difficulty.
Interestingly, we see that at level (4,8), we have one retarded
branch, one neutral branch, and {\em three} advanced branches. In this
sense, the equations of motion become more and more advanced as the
level is increased. We can actually express this idea differently; one
could ask if we can write an {\em effective} equation of motion for
one field, e.g. the tachyon $\phi(x^+)$, that would give rise to the
same characteristic equation as Eq.~(\ref{detM0}). The answer is very
easy; indeed if we make the ansatz $\phi(x^+) = e^{\omega x^+}$, it is
immediately clear that the equation of motion
\begin{equation}
\sum_{j=0}^n p_j(\partial_+) \, \phi(x^+ + j\, \gamma) = 0
\end{equation}
reduces precisely to Eq.~(\ref{detM0}), where the $p_j$'s are the
polynomials defined by Eq.~(\ref{exp_pol}). We now do the usual
splitting of this differential equation by putting one of the terms
with highest derivatives on the left-hand side and all the other ones
on the right-hand side. So we choose a $j_{\text{max}}$ such that the
degree $d_{j_{\text{max}}}$ of the polynomial $p_{j_{\text{max}}}$ is
maximum (i.e. $d_{j_{\text{max}}} \geq d_j$, $j=0,\ldots n$). If the
distribution diagram has a neutral segment, then we can choose, for
$j_{\text{max}}$, any $j$ along this segment. After subtracting
$j_{\text{max}} \gamma$ from $x^+$, the equation of motion thus takes the form
\begin{equation}
p_{j_{\text{max}}}(\partial_+) \, \phi(x^+) = - \sum_{j \neq j_{\text{max}}} p_j(\partial_+)
\, \phi(x^+ + (j -j_{\text{max}}) \, \gamma).
\label{effdde}
\end{equation}
What we can immediately read from this equation is that, at level
higher than zero, the non-locality is no longer concentrated at one
point in the past $x^+-\gamma$, but instead takes all the values $(j
-j_{\text{max}}) \, \gamma$, $j=0,\ldots n$. While this precise range
depends on the value of $j_{\text{max}}$ that we chose, it is clear
that it will extend in the future by, at least, an amount
$n_{\text{future}} \, \gamma$ equal to the length of the projection of
the advanced segments on the $\beta_j$ axis. And similarly, it will
extend in the past by, at least, an amount $n_{\text{past}} \, \gamma$
equal to the length of the projection of the retarded segments on the
$\beta_j$ axis.
If we specify generic initial data for DDEs (\ref{effdde}) whose
lowest derivative appears at an advanced time (which we call advanced
DDEs), we should not expect that smoothing will occur when solving
with the method of steps. In fact, we show in Appendix \ref{s_dde} an
example of an advanced DDE which does not even possess a continuous
solution for given initial data. For a system of advanced DDEs, on
the other hand, smoothing {\em may} occur. For our rolling solution,
we require an analytic solution. This is not in conflict with the
preceding remark because we are not free to choose any initial data,
as we have shown that the initial conditions are essentially uniquely
determined by demanding that the tachyon lives in the perturbative
vacuum in the infinite light-cone past.
It is clear from the distribution diagrams in Figs.~\ref{det24_f} and
\ref{det48_f}, that $n_{\text{future}}$ and $n_{\text{past}}$ both
increase from level (2,4) to level (4,8). Going from level (2,4) to
level (2,6), on the other hand, we see that although
$n_{\text{future}}$ increases, $n_{\text{past}}$ decreases. But if we
look only at levels $(L,2L)$, we nevertheless still expect that both
$n_{\text{future}}$ and $n_{\text{past}}$ will continue to grow beyond
level (4,8), and that in the limit $L \rightarrow \infty$, the
non-locality extends over the whole discrete range $\left\{n \,
\gamma, \ n \in \mathbb{Z}\right\}$.
Finally, we remark that at infinite level, if we subsequently take the
limit $V^+ \rightarrow 0$, which is equivalent to
shrinking the delay $\gamma \rightarrow
0$, the range of non-locality becomes continuous and extends over
{\em all} light-cone times. We thus recover, in this limit, the
non-locality characteristic of usual (zero dilaton) string field
theory.
\section{Discussion and Conclusions}
\label{s_discussion}
We have identified the equations of motion for the open string tachyon
rolling in a linear dilaton background as being delay differential
equations. This class of equations has been widely studied in the
mathematics literature. Although no method to analytically solve
non-trivial DDEs is known, the method of steps is in general an
efficient numerical method (and essentially the only method) once the
initial data is provided. We have used this method in order to solve
the equations of motion at level zero, and found that it is both
efficient and accurate; in particular our solution agrees very well
with the results from an exponential series ansatz, and can be
continued much further in light-cone time (Fig.~\ref{fig:level0}).
At truncation level two, we found that the method of steps can be used
only when we further truncate the action by either removing all higher
derivatives, or by keeping only the scalar fields. But we saw that the
results from these approximations all agree reasonably well with the
more accurate solution from the exponential series ansatz, at least
until the first maximum has been reached
(Fig.~\ref{fig:CompTach}). The obtained numerical solutions can be
continued for much larger light-cone time, where it is seen that they
converge towards the non-perturbative vacuum. If we do not simplify the
equations of motion, our numerical solver gives us wrong solutions. We
have understood that this problem is due to the fact that our
equations of motion possess derivatives of order higher than one at
retarded times. In general, such equations are not even guaranteed to
possess continuous solutions. While we still expect the equations of
motion to have an analytic solution for the initial data corresponding
to a tachyon in the perturbative vacuum in the far light-cone past,
the method of steps is rendered unstable by the higher derivatives.
We have already mentioned that the diffusion equation method
\cite{Calcagni:2007wy,Calcagni:2009tx,Calcagni:2009jb} has been
successfully used to solve numerically the light-like rolling problem
at level zero \cite{Barnaby}. It is then natural to ask whether this
method can give more stable solutions at higher levels. The diffusion
method has the virtue of applying to a larger class of non-local
equations. But in the case of DDEs, we believe that it is completely
identical to the method of steps. It would then face exactly the same
problem as the method of steps at higher levels. To illustrate this,
we briefly review the diffusion method applied to the equation
\begin{equation}
\phi'(x^+) - \phi(x^+)=-\phi(x^+-\gamma)^2.
\label{exampleeq}
\end{equation}
First we define a function of two variables $x^+$ and $r$ by
$
\Phi(x^+,r) \equiv e^{\gamma r \partial_+} \phi(x^+) =
\phi(x^+ +r\gamma).
$ This definition is implemented by the simple diffusion equation
\begin{equation}
\partial_+ \Phi(x^+,r) = \frac{1}{\gamma} \partial_r \Phi(x^+,r).
\label{diffusion}
\end{equation}
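Indeed, acting on $\Phi(x^{+},r)=\phi(x^{+}+r\gamma)$, both sides of
Eq.~(\ref{diffusion}) equal $\phi'(x^{+}+r\gamma)$.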
The idea is then to solve the diffusion equation on the region $x^{+} \ge 0$
and $0\leq r\leq1$. The subtle part is to determine the boundary
conditions. It turns out that the boundary conditions on the $r=1$
axis are determined by the equation (which is now local in $x^+$):
\begin{equation}
\Phi(x^+,1)-\frac{1}{\gamma} \left[ \partial_r
\Phi(x^+,r)\right]_{r=1} = \Phi(x^+,0)^2.
\label{boundary}
\end{equation}
A simple algorithmic resolution of the diffusion equation is the following:
\begin{enumerate}
\item Specify boundary conditions on the segment $x^+ = 0$. This is the {\em initial data}.
\item Solve the diffusion equation (\ref{diffusion}) with the given piece of
boundary conditions. One can easily see that this amounts to copying ``diagonally''
the values on the segment $x^+=0$ to the horizontal segment defined by $r=0$ and
$0 \leq x^+ \leq \gamma$, see Fig.~\ref{fig_PDE_method}. \label{diagonally}
\item To solve the diffusion equation further, one needs first to specify the
initial conditions on the segment defined by $r=1$ and $0 \leq x^+ \leq \gamma$
using Eq.~(\ref{boundary}). Since the simple diffusion equation has been solved
exactly in the previous step, one can replace $\frac{\partial_r}{\gamma}$ by
$\partial_+$, and the right-hand side is simply the initial data. One
obtains precisely the original equation (\ref{exampleeq}) (with $\phi(x^+)$
replaced by $\Phi(x^+,1)$).
\item Solve again the diffusion equation by copying the obtained boundary
values on the segment $r=1$ and $0 \leq x^+ \leq \gamma$ to the segment $r=0$
and $\gamma \leq x^+ \leq 2 \gamma$.
\item Solve again Eq.~(\ref{boundary}) on the segment $r=1$ and
$\gamma \leq x^+ \leq 2 \gamma$. From the same argument as above, plus the
fact that one can replace the right-hand side of (\ref{boundary}) by
$\Phi(x^+ -\gamma, 1)^2$, this is solving the DDE equation (\ref{exampleeq})
on the interval $\gamma \leq x^+ \leq 2 \gamma$ given the values of $\phi$
on the interval $0 \leq x^+ \leq \gamma$. This is precisely the method of steps
as described in Appendix \ref{s_dde}!
\end{enumerate}
The algorithm then continues by repeating points 4 and 5, which is
equivalent to solving the original DDE \eqref{exampleeq} by the method
of steps.
To summarize, the diffusion method applied to a DDE ``mimics'' the method of steps. We cautiously note, however, that a different numerical method to solve the diffusion equation may change the above claim if Eq.~(\ref{diffusion}) is {\em not} solved exactly as in our algorithm.
\begin{figure}
\begin{center}
\input{PDEgrid.pstex_t}
\caption{\footnotesize{
Step 2 in the algorithm to solve a DDE in the diffusion equation approach.
The initial data at $x^{+} = 0$ is copied ``diagonally'' to the $r=0$ boundary.}}
\label{fig_PDE_method}
\end{center}
\end{figure}
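To make the equivalence concrete, here is a minimal Python sketch
(Euler discretization; $K=3\sqrt{3}/4$ assumed as before) of the
algorithm applied to Eq.~(\ref{exampleeq}). The ``diagonal copy'' of
step \ref{diagonally} is simply the reuse of the previous segment as
the delayed values, and the boundary equation (\ref{boundary}) is one
method-of-steps interval.
\begin{verbatim}
import numpy as np

# Diffusion-method algorithm for Eq. (exampleeq):
#   phi'(x) - phi(x) = -phi(x - gamma)**2,   vev normalized to 1,
# with gamma = 2*log(K), K = 3*sqrt(3)/4 assumed.
gamma = 2.0 * np.log(3.0 * np.sqrt(3.0) / 4.0)
n = 200                          # grid points per delay interval
h = gamma / n

# Step 1: initial data phi(x) = exp(x) on [-25 - gamma, -25].
segment = np.exp(np.linspace(-25.0 - gamma, -25.0, n + 1))
solution = [segment]

for _ in range(100):             # 100 delay intervals, up to x ~ +27
    prev = solution[-1]          # r = 0 values, "copied diagonally" (step 2)
    cur = np.empty_like(prev)
    cur[0] = prev[-1]            # continuity at the interval boundary
    for i in range(n):           # boundary equation, Euler step (steps 3, 5)
        cur[i + 1] = cur[i] + h * (cur[i] - prev[i]**2)
    solution.append(cur)

print(solution[-1][-1])          # settles near the vev phi = 1
\end{verbatim}
Note that the inner loop is exactly the method-of-steps update used in
the level-zero warm-up: this is the equivalence in code.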
\paragraph{}
In order to find accurate numerical solutions taking all derivatives
into account, we had to resort to the exponential series ansatz. This
method generalizes straightforwardly from level zero; in particular one
can still calculate the coefficients of the series recursively. The problem
with this method is that the number of coefficients needed grows
exponentially with the light-cone time $x_\text{max}^+$ up to which we
want our solution to be accurate. Moreover, we must calculate the
coefficients with many digits of precision in order to account for the
numerical cancellation errors. At levels (2,4) (Figs.~\ref{fig:ExpAll}
and \ref{fig:ExpTachyon}) and (2,6) (Fig.~\ref{fig:ExpAll26}) we were
nevertheless able to find solutions that are accurate up to a point
where the string field is convincingly seen to converge towards the
non-perturbative vacuum. Moreover, we have found that the convergence
(as measured by the overshooting of the first maximum) is
substantially faster at levels two and four than at level zero. At
level four, it is a little bit slower than at level two
(Fig.~\ref{fig:lev2vs4}). In order to compare this to the analytic
solution of Hellerman and Schnabl \cite{Hel-Sch}, it is instructive to
look at the tachyon component of their solution (in Schnabl
gauge). Normalizing the tachyon vev in the non-perturbative vacuum to
one, it is
\begin{equation}
f_1^{(0)}(x^+) = \frac{e^{x^+}}{1+e^{x^+}},
\label{HStach}
\end{equation}
a monotonic solution. So the fact that our solution at level two has
become more monotonic is a good indication that level truncation
yields qualitatively the same solution (in Siegel gauge). We note,
however, that if we expand (\ref{HStach}) as a power series in
$e^{x^+}$ (what we have called an exponential series), it has a finite
radius of convergence; the series diverges for $x^+ \geq 0$. On the
other hand, the exponential series that we obtained have a much larger
radius of convergence, perhaps infinite as can be shown at level zero
from the asymptotic behavior of the coefficients of the series
\cite{Hel-Sch}. It is worth mentioning a curious fact. Had we taken
the level-zero equation of motion (with vev normalized to unity)
$\phi'(x^{+}) - \phi(x^{+}) = - \phi(x^{+}-\gamma)^2$, and {\em dropped the
delay} $\gamma$, we would get
$$
\phi'(x^{+}) - \phi(x^{+}) = -\phi(x^{+})^2,
$$
which has the solution
$$
\phi(x^+) = \frac{e^{x^+}}{1+e^{x^+}}.
$$ Precisely the same as (\ref{HStach})! This might just be a
coincidence; it could also point at something deeper which may be
worth examining in a further work.
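(One checks the claimed solution directly: for
$\phi=e^{x^{+}}/(1+e^{x^{+}})$ one has $\phi'=\phi(1-\phi)$, so that
$\phi'-\phi=-\phi^{2}$.)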
\paragraph{}
To sum up, our numerical solutions at levels two and four strongly
indicate that the string field will eventually converge to the
non-perturbative vacuum, in agreement with the analytic solution of
\cite{Hel-Sch}. On the other hand, we have also shown that, at levels
higher than zero, the equations of motion, linearized around the
non-perturbative vacuum, admit infinitely many oscillating solutions
with exponentially growing amplitude. This suggests that the string
field should not converge to this vacuum.
These two apparently contradictory results can be reconciled in
different ways. Below we describe two possibilities.
\paragraph{}
The \textit{first} possibility is that the string field somehow avoids the
infinitely many exponentially growing modes. This is certainly
possible because the initial conditions (i.e. the string field sitting
in the perturbative vacuum in the infinite light-cone past) are very
special. From this point of view, we should conclude that the motion
is unstable: should an external perturbation, or a quantum effect,
move the string field the tiniest bit away from this particular
solution it would not converge. We should also stress that our
analysis of the linearized equations of motion does not tell us how
non-convergent the rolling would be. It might approach a limit cycle
(in phase space) as is sometimes the case when the linearized equations of
motion have a {\em finite} number of growing solutions. Our situation,
however, is different because we have infinitely many growing
solutions in the vacuum. This could render the rolling divergent. Or
it might not be that bad; after all, we could imagine expressing a
well-behaved converging function as a series in the infinitely many
diverging modes \cite{Bellman-Cooke}; this would not be possible in
the finite case. We emphasize here again the special character of the
initial condition of the tachyon sitting on the top of the hill in the
infinite light-cone past, which we have shown uniquely fixes the
initial data up to a shift in light-cone time. The method of
exponential series, which gives a convergent solution, can work only
with this particular initial condition, which is also the initial
condition of the analytic solution of \cite{Hel-Sch}. For a generic
initial condition the coefficients cannot be determined recursively,
because ``negative'' modes have to be taken into account. In the level
zero case, with $\phi(x^+) = \sum_{n=-\infty}^{\infty}a_n e^{n x^+}$,
one obtains
\begin{equation}
a_n = -\frac{K^{3-2n} }{n-1} \sum_{m=-\infty}^{\infty}{a_m a_{n-m}}.
\label{recursion relation}
\end{equation}
In other words: in order to calculate \textit{one} coefficient, we need
to know \textit{all} coefficients, including the one we are interested in!
The same problem is found in the DDE approach: to find initial data that
are consistent with the equation of motion, we need to solve the very
same equation of motion. This is a simple problem only when we require
that the tachyon be in the perturbative vacuum in the infinite
past.
We leave it for further work to study different initial conditions. At
level zero this was done in \cite{Barnaby}; but at higher levels we
expect different results due to the higher derivatives and divergent
modes.
\paragraph{}
A \textit{second} possibility to reconcile a convergent rolling
solution with infinitely many growing modes is that these modes do not
satisfy the ``out-of-gauge'' equations of motion\footnote{We thank
T.~Erler and M.~Schnabl for emphasizing this point to us.}. Indeed
we have derived the equations of motion from the gauge-fixed action.
This means that for a solution $\Psi_0$, we are guaranteed that
$\langle \Phi, Q \Psi_0 + \Psi_0 \star \Psi_0 \rangle = 0$ for all $\Phi$ in
Siegel gauge. But it is possible that $Q \Psi_0 + \Psi_0 \star \Psi_0$ itself does not
vanish. That such solutions $\Psi_0$ can exist was shown in \cite{Hata:2000bj}.
\paragraph{}
The two different explanations above differ in particular by the
interpretation of the growing modes. In the first possibility, they do
have some physical meaning and make the rolling unstable. While we do
not expect to find open string degrees of freedom in the
non-perturbative vacuum, it is not unreasonable to think that the
unstable modes could correspond to the {\em closed string} tachyon
instability. After all, we have coupled the open string field theory
to the closed sector, although only via the dilaton. In the second
possibility mentioned above, however, the growing modes are simply
unphysical because they do not obey the out-of-gauge equations of
motion. In order to find out if this is the case, we would have to
derive the equations of motion from the gauge-unfixed action and
verify whether the growing modes are still solutions of these full
equations of motion. We hope to be able to report on this in a future
publication. The present paper gives only a partial answer to the
question of the physical meaning of the growing modes. At the end of
Section \ref{s_oscillations}, we wrote an effective equation of motion
for one scalar field, whose solutions are precisely the exponential-mode
solutions of the whole system of equations of motion linearized
around the non-perturbative vacuum. We then found that the
decaying modes correspond to {\em positive} delays in the effective
equation of motion, thus non-locality in the {\em past}, and that
growing modes correspond to {\em negative} delays, i.e.
non-locality extending into the {\em future}.
\acknowledgments
We extend our sincere gratitude to T.~Erler for useful discussions, and to
N.~Barnaby and M.~Schnabl for insightful
comments on the manuscript. We are indebted to A.~Golovnev for providing
us with a copy of Ref.~\cite{Ostrogradski}. N.~M. is supported in part
by the Transregio TRR 33 `The Dark Universe' and the Excellence
Cluster `Origin and Structure of the Universe' of the DFG.
\subsection{Rephasing scheme}
\begin{figure*}[hbtp]
\includegraphics[width=0.6\textwidth]{fig1}
\caption{(color online). Experimental setup. (a) Two methods are used to create spin waves in an atomic ensemble. In the electromagnetically induced transparency (EIT) process, a weak probe pulse at the single-photon level is converted to atomic spin waves by turning off the coupling beam. In the spontaneous Raman scattering (SRS) process, a single-quanta spin wave is imprinted in the atomic ensemble, heralded by the detection of a Raman-scattered write-out photon. In the rephasing process, rephasing pulses which couple the $|g\rangle\leftrightarrow|s\rangle$ transition through a two-photon Raman transition are applied. Later, spin-wave states are retrieved by turning on either the coupling or the read beam. (b) Schematic view of the experimental configuration. The atomic ensemble is prepared in a magneto-optical trap (MOT). The coupling beam and the read beam have the same frequency, polarization and spatial mode, as do the probe and read-out photons. $H$ ($V$) refers to horizontal (vertical) polarization relative to the drawing plane. HWP and PBS represent half-wave plate and polarizing beam-splitter, respectively. (c) Momentum relationships for the control beams, detection modes and the Raman beams.}
\label{fig1}
\end{figure*}
In an atomic-ensemble quantum memory, a single quantum state is stored as a spin wave spreading over the whole ensemble \cite{Fleischhauer2000,Duan2001}. The spin-wave state at $t=0$ can be described as
\begin{equation*}
|\Psi \rangle _{gs}=\frac{1}{\sqrt{N}}\sum_{j}^{N}e^{i\mathbf{k}_{s}\cdot
\mathbf{r}_{j}(0)}|g...s_{j}...g\rangle ,
\end{equation*}
where $N$ is the number of atoms, $|g\rangle $ and $|s\rangle $ are two atomic ground states, $\mathbf{k}_{s}$ is the wavevector of the spin wave, and $\mathbf{r}_{j}(0)$ is the position of the $j$-th atom in the ensemble at $t=0 $. This state can be physically interpreted as a phase grating, which enables strong collective interference in the read-out process \cite{Duan2001}. With this collective interference, efficient conversion from spin waves to photons has been demonstrated \cite{Simon2007, Hedges2010, Bao2012}. As the spin wave is a collective excitation over the whole ensemble, any inhomogeneity of the frequency or phase difference between the atomic levels $|g\rangle$ and $|s\rangle$ across the atoms will distort the relative phase between the terms in $|\Psi\rangle_{gs}$, thus resulting in a distorted phase grating. Atomic motion is one dominant decoherence mechanism for atomic-ensemble quantum memories \cite{Zhao2009, ZhaoR2009}. Within an atomic ensemble, the velocity $\mathbf{v}_j$ of each atom varies from atom to atom, which distorts the original phase grating in $|\Psi\rangle_{gs}$. After a storage time of $t=T$, the mismatched phase of the $j$-th term in $|\Psi \rangle _{gs}$ is $\Delta \phi _{j}=\mathbf{k}_{s}\cdot \mathbf{r}_{j}(T)-\mathbf{k}_{s}\cdot \mathbf{r}_{j}(0)=\mathbf{k}_{s}\cdot \mathbf{v}_{j}T$, where we assume the atoms move freely with $\mathbf{r}_{j}(T)=\mathbf{r} _{j}(0)+ $ $\mathbf{v}_{j}T$. This results in a storage time \cite{Zhao2009} of $\tau_{s}\simeq1/k_{s}\bar{v}$, with $\bar{v}$ the average atomic thermal velocity.
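As an order-of-magnitude illustration (not part of the original analysis), one may evaluate $\tau_{s}$ for the parameters of our setup quoted below ($\theta_s=1.1^{\circ}$, $T\sim15$ $\mu$K), taking $\bar v=\sqrt{k_B T/m}$ as one common convention:
\begin{verbatim}
import numpy as np
kB, m_Rb = 1.381e-23, 87 * 1.661e-27      # SI units
k_c = 2 * np.pi / 795e-9                  # D1-line wavevector (1/m)
k_s = k_c * np.deg2rad(1.1)               # k_s ~ k_c * theta_s
v_bar = np.sqrt(kB * 15e-6 / m_Rb)        # thermal velocity at ~15 uK
print(1e6 / (k_s * v_bar), "us")          # ~1.7e2 us
\end{verbatim}
which is consistent in order of magnitude with the measured lifetime of 228(6) $\mu$s reported below.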
This phase distortion can be eliminated by applying a spin-echo rephasing technique. As shown in Fig. \ref{fig1}, we apply two laser beams to induce stimulated Raman transitions \cite{Kasevich1991} between $|g\rangle $ and $|s\rangle $, where one laser couples the $|s\rangle \leftrightarrow |e\rangle $ transition with a wavevector of $\mathbf{k}_{1}$, and the other laser couples the $|g\rangle \leftrightarrow |e\rangle $ transition with a wavevector of $\mathbf{k}_{2}$. The rephasing scheme is implemented by applying two Raman $\pi $ pulses at $t=t_{1}$ and $t_{2}$, respectively. During the first $\pi $ pulse, an atom in the state $|g\rangle $ is transferred to $|s\rangle $ by absorbing a photon with momentum $\hbar \mathbf{k}_{2}$ and coherently emitting a photon with momentum $\hbar \mathbf{k}_{1}$, thus acquiring a phase of $\mathbf{k}_{\pi }\cdot \mathbf{r}(t_{1})$ with $\mathbf{k}_{\pi}=\mathbf{k}_{2}-\mathbf{k}_{1}$. An atom in the state $|s\rangle $, by contrast, is transferred to $|g\rangle $ by absorbing a photon with momentum $\hbar \mathbf{k}_{1}$ and coherently emitting a photon with momentum $\hbar \mathbf{k}_{2}$, thus acquiring a phase of $-\mathbf{k}_{\pi }\cdot \mathbf{r}(t_{1})$. Therefore, after the first $\pi $ pulse, the $j$-th term in $|\Psi \rangle _{gs}$ is changed from $|g...s_{j}...g\rangle $ to $|s...g_{j}...s\rangle $ and acquires a net phase $-2\mathbf{k}_{\pi }\cdot \mathbf{r}_{j}(t_{1})$, where we have neglected the overall phase. After the second $\pi $ pulse, the $j$-th term is transferred back to $|g...s_{j}...g\rangle $ and another phase of $2\mathbf{k}_{\pi }\cdot \mathbf{r}_{j}(t_{2})$ is acquired. Consequently, the overall phase gained from these two $\pi $ pulses is $\Delta \phi _{j}^{\pi}=2\mathbf{k}_{\pi }\cdot (\mathbf{r}_{j}(t_{2})-\mathbf{r}_{j}(t_{1}))=2\mathbf{k}_{\pi }\cdot \mathbf{v}_{j}\Delta t$ with the interval $\Delta t=t_{2}-t_{1}$. If $\Delta \phi _{j}^{\pi }$ is equal to $\Delta \phi _{j}$, the random phases cancel each other and the spin wave can be efficiently read out at $t=T$. In this way, we obtain the rephasing condition $2\mathbf{k}_{\pi }\Delta t=\mathbf{k}_{s}T$, which sets critical constraints on the direction of the Raman beams, on the time interval between the two $\pi $ pulses, and on the read-out time $T$.
The layout of our experiment is shown in Fig.~\ref{fig1}. An ensemble of $\sim $10$^{8}$ $^{87}$Rb atoms is loaded in a magneto-optical trap. After polarization-gradient cooling, the temperature obtained is $\sim$10 $\mu $K, and the optical depth is $\sim $4. The energy levels employed are $|g\rangle \rightarrow |F=2,m_{F}=0\rangle $, $|s\rangle \rightarrow |F=1,m_{F}=0\rangle $ and $|e\rangle \rightarrow |F^{\prime }=2,m_{F}=\pm1\rangle $ of the D$1$ line. Note that by employing the ``clock states'' $|F=2,m_{F}=0\rangle $ and $|F=1,m_{F}=0\rangle $, the decoherence due to magnetic fields is suppressed and the inhomogeneous broadening due to atomic random motion is isolated for experimental study. Initially we prepare all atoms in the state $|g\rangle $ through optical pumping, which increases the atom temperature to $\sim $15 $\mu $K. Two Raman beams with the same power of 2.5 mW originate from two separate diode lasers, which are phase locked to a frequency synthesizer at 6.8 GHz, i.e., the frequency separation between $|g\rangle $ and $|s\rangle $. The single-photon detuning $\Delta $ for both Raman beams is +750 MHz relative to $|e\rangle$. The wave number $k_{\pi } $ of the Raman light can be approximated by $k_{\pi }\approx k_{1}\theta _{\pi }$ if $\theta _{\pi }\ll 1$ (see Fig.~\ref{fig1}c). In order to have high-fidelity Raman pulses, the Rabi frequencies for both Raman beams have to be stable over long times and identical for all the atoms. Therefore, we actively stabilize the intensity of both Raman beams with two independent digital proportional-integral controllers. We also increase the diameter of the Raman beams to 3.8 mm to improve the intensity homogeneity of the central region. Rabi flopping between $|g\rangle $ and $|s\rangle $ is measured as shown in Fig.~\ref{fig2}b. By fitting the curve with $I(\tau)=A \cos (2\pi \Omega _{r}\tau )e^{-\gamma \tau} + B$, we obtain a two-photon Rabi frequency of $\Omega _{r}=87.1$ kHz and a decay rate of $\gamma=13.4$ kHz. The fidelity of a single $\pi $ pulse is estimated to be $96\%$. The slight imperfection is mainly due to residual intensity variations in the central region of the Raman beams, which originate from the imperfection of the optics used.
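The quoted $\pi$-pulse fidelity can be recovered from the fitted Rabi parameters. Assuming the transfer probability follows $(1-\cos (2\pi \Omega _{r}\tau )e^{-\gamma \tau})/2$ (an assumed normalized form of the fit above), at the $\pi$ time $\tau_\pi=1/(2\Omega_r)$ one finds:
\begin{verbatim}
import numpy as np
Omega_r, gamma = 87.1e3, 13.4e3           # Hz, fitted values quoted above
tau_pi = 1.0 / (2 * Omega_r)              # ~5.7 us
print((1 + np.exp(-gamma * tau_pi)) / 2)  # ~0.96, cf. the quoted 96%
\end{verbatim}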
\begin{figure}[hbtp]
\includegraphics[width=\columnwidth]{fig2}
\caption{(color online). Verification of the rephasing condition via EIT storage. (a) Time sequences for the coupling, probe and rephasing pulses. (b) Two-photon Raman Rabi oscillations between the two ground states $|g\rangle$ and $|s\rangle$. The vertical axis is the photon detection probability in the read-out mode, which is proportional to the atom population in the $|s\rangle$ state. (c) Optimization of the time interval $\Delta t=t_2-t_1$ between the two $\pi$ pulses for a storage time of $T=600$ $\mu$s. (d) Measured relationship between $\Delta t/T$ and the intersection angle $\theta_\pi$ between the two Raman beams. The solid line refers to the theoretical estimate of $\theta_s/(2\theta_\pi)$ determined by the rephasing condition.}
\label{fig2}
\end{figure}
We first verify the rephasing condition via electromagnetically induced transparency (EIT) \cite{Fleischhauer2005}. A weak coherent laser pulse with an average photon number of $\sim $1 couples the $|g\rangle \leftrightarrow |e\rangle$ transition, and a coupling beam resonant with the $|s\rangle\leftrightarrow |e\rangle $ transition controls the storage and read-out process. The waist of the probe beam is 90 $\mu $m, and that of the coupling beam is 200 $ \mu $m. There is an angle of $\theta _{s}=1.1^{\circ }$ between the coupling light and the probe light. According to the time sequences shown in Fig.~\ref{fig2}a, by turning off the coupling beam, an input probe light pulse is converted to an atomic spin wave with $\mathbf{k}_{s}=\mathbf{k}_{p}-\mathbf{k}_{c}$, where $\mathbf{k}_{p}$ ($\mathbf{k}_{c}$) is the wavevector of the probe (coupling) beam. The wave number $k_{s}$ can be expressed as $k_{s}\approx k_{c}\theta _{s}$. According to the rephasing condition, $\mathbf{k}_{\pi }$ has to be in the same direction as $\mathbf{k}_{s}$, which is satisfied approximately since $\theta _{s}$ and $\theta _{\pi }$ are rather small. The time interval between the two Raman pulses has to satisfy $\Delta t/T=k_{s}/(2k_{\pi })\approx \theta _{s}/(2\theta _{\pi })$, where we have used $k_{c}\approx k_{1}$. Under the condition of $T=600$ $\mu $s and $\theta_{\pi }=2.1^{\circ }$, we measure the photon detection probability in the read-out mode as a function of the time interval $\Delta t$. The result is shown in Fig.~\ref{fig2}c, which gives a Gaussian $1/e$ width of 46(1) $\mu $s and an optimal interval of 154.9(5) $\mu $s. With this method, the optimal intervals $\Delta t$ for different $T$ are determined; the average value of $\Delta t/T$ is calculated to be $25.8(1)\%$, which agrees very well with the theoretical estimate of $\theta _{s}/(2\theta _{\pi})\approx 26.2(8)\%$. We also set the Raman angle $\theta_\pi$ to several different values and redo the optimization process for each angle. We find that the rephasing condition is satisfied very well, as shown in Fig.~\ref{fig2}d.
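As a simple consistency check (not part of the original analysis), the rephasing condition can be evaluated numerically for the quoted angles:
\begin{verbatim}
theta_s, theta_pi = 1.1, 2.1              # degrees
ratio = theta_s / (2 * theta_pi)          # dt/T ~ theta_s / (2 theta_pi)
print(ratio)                              # 0.262, cf. measured 25.8(1)%
print(ratio * 600, "us")                  # ~157 us for T = 600 us,
                                          # cf. the optimal 154.9(5) us
\end{verbatim}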
\begin{figure}[hbtp]
\includegraphics[width=0.7\columnwidth]{fig3}
\caption{(color online). Angular distribution of the read-out noise due to imperfection of the $\pi$ pulses. Data are measured with a CCD camera placed 53 cm away from the atomic ensemble. The image center corresponds to $\theta_\pi = \theta_s$. The slight interference fringes are due to imperfections of the optics along the imaging path.}
\label{fig3}
\end{figure}
Note that when $\theta _{\pi }$ approaches $\theta _{s}=1.1^{\circ }$,
namely $\mathbf{k}_{\pi }$ approaches $\mathbf{k}_{s}$, extremely strong
noise due to the $\pi$ pulse imperfections is observed in the probe direction.
We use a CCD camera to measure the angular distribution of this noise, with the result shown in Fig. \ref{fig3}. It suggests that the read-out noise due to imperfection of the $\pi $ pulses is highly directional, which conflicts both with our intuition and with a previous theoretical study \cite{Heshami2011}. The angular width of this noise is calculated to be $0.28^\circ$, which corresponds to a Gaussian mode with a waist of 102 $\mu$m at the position of the atomic ensemble, slightly smaller than the read beam \cite{imagenote}. This highly directional noise implies that the imperfection of the $\pi$ pulses creates another collective excitation state with the wavevector $\mathbf{k}_{\pi}$. When the angular separation between $\theta_{\pi}$ and $\theta_{s}$ is much larger than the angular width in Fig. \ref{fig3}, the noise can be treated incoherently, which gives the result of $2\varepsilon N\Delta\Omega/4\pi$ in units of photon number, where $N$ is the number of atoms in the mode of the coupling beam, $\varepsilon$ is the imperfection of a single $\pi$ pulse and $\Delta\Omega$ is the solid angle. When $\theta_{\pi}=\theta_{s}$, this read-out noise is collectively enhanced by a factor of $N$. Thus, in order to reduce the $\pi $-pulse-induced read-out noise, the angular separation between $\theta_\pi$ and $\theta_s$ has to be large.
\begin{figure}[hbtp]
\includegraphics[width=\columnwidth]{fig4}
\caption{(color online). Cross-correlation measurement as a function of storage time $T$. (a) Without applying the rephasing pulses, the lifetime is measured to be 228(6) $\protect\mu$s. (b) With the rephasing pulses applied, the lifetime is measured to be 1.20(7) ms. The reduction in $g^{(2)}$ is due to imperfection of the rephasing pulses. At $T=1$ ms, nonclassical correlation ($g^{(2)}>2$) is well preserved. The detection probability of the write-out photon for both measurements is set to $p_{w}=0.35\%$.}
\label{fig4}
\end{figure}
In order to directly test the feasibility of applying these rephasing pulses without destroying the single spin waves stored in the atomic ensemble, we implement the Duan-Lukin-Cirac-Zoller (DLCZ) \cite{Duan2001} protocol, for which nonclassical photon-photon correlation can be used as a criterion to verify the quantum nature of the storage \cite{Kuzmich2003}. We apply a weak write pulse coupling the $|g\rangle \leftrightarrow |e\rangle$ transition with a small detuning to induce spontaneous Raman scattering. Heralded on the detection of a single photon in the write-out mode, a single-quanta spin wave is created with the wavevector $\mathbf{k}_{s}=\mathbf{k}_{w}-\mathbf{k}_{wo}$, where $\mathbf{k}_{w}$ ($\mathbf{k}_{wo}$) is the wavevector of the write beam (write-out mode). The configuration of the beam directions is shown in Fig.~\ref{fig1}. After a storage time of $T$, a strong read pulse coupling the $|s\rangle \leftrightarrow |e\rangle$ transition converts the spin wave into a single photon emitted in the read-out mode. Experimentally, the cross-correlation is characterized by $g^{(2)}=p_{w,\,r}/(p_{w}\,p_{r})$, where $p_{w}$ ($p_{r}$) denotes the probability of detecting a write-out (read-out) photon and $p_{w,\,r}$ denotes the coincidence probability between the write-out and read-out channels. $g^{(2)}>2$ means that the write-out photon and the read-out photon are nonclassically correlated \cite{Felinto2005}. Without applying the rephasing pulses, the measured $g^{(2)}$ is shown in Fig.~\ref{fig4}a with $p_{\mathrm{r}}=0.28\%$ and $p_{w}=0.35\%$. The cross-correlation $g^{(2)}$ decays with a $1/e$ lifetime of 228(6) $\mu $s starting from an initial value of 24.3(6).
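For reference (an illustration, not part of the original analysis), the cross-correlation estimate follows directly from the detection probabilities; e.g. the initial point of Fig.~\ref{fig4}a implies a coincidence probability of order $10^{-5}$:
\begin{verbatim}
def g2(p_w, p_r, p_wr):
    return p_wr / (p_w * p_r)             # g2 > 2: nonclassical

p_w, p_r, g2_meas = 0.0035, 0.0028, 24.3  # values quoted above
print(g2_meas * p_w * p_r)                # p_wr ~ 2.4e-5
\end{verbatim}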
In order to keep the $\pi$-pulse-induced directional noise away from the read-out mode, the intersection angle between the Raman beams is set to $\theta_{\pi}=1.9^{\circ}$. After optimization of the beam qualities, the fidelity of a single $\pi $ pulse is about $97\%$. The $\pi$-pulse-induced noise in the read-out mode is measured to be $p_{r}=0.8\%$. We measure the cross-correlation at a series of time points, with the result shown in Fig.~\ref{fig4}b. In comparison to the case without the rephasing pulses, the lifetime is increased fivefold. With the rephasing pulses on, the cross-correlation $g^{(2)}$ drops from an initial value of 5.2(1) and stays well above $2$ for about $1$ ms of storage time. This result proves that the quantum nature of the storage is well preserved. Higher nonclassical photon-photon correlation can be achieved by improving the accuracy of the $\pi$ pulses. We estimate that a $\pi$-pulse fidelity of $99\%$ would improve the cross-correlation to well above 10.
In summary, we have successfully operated the spin-echo technique in the single-quanta regime for an atomic-ensemble quantum memory. In our experiment, we find that the noise induced by slight imperfection of the $\pi$ pulses is highly directional and can be avoided by arranging the rephasing beam directions properly. With $\pi$ pulses of moderate fidelities, the quantum nature of the spin-echo process is verified by observing nonclassical photon-photon correlations. We emphasize that although in our experimental demonstration we merely study the motion-induced decoherence for a cold-atomic-gas ensemble, our findings and the techniques developed do apply to other decoherence mechanisms and other physical systems, such as solid-state photon-echo quantum memories \cite{Tittel2010}.
This work was supported by the National Natural Science Foundation of China, National Fundamental Research Program of China (under Grant No. 2011CB921300), and the Chinese Academy of Sciences. X.-H. B. and B. Z. acknowledge support from the Youth Qianren Program.
\textit{Note added.}$-$After completing this work we became aware of a related experiment by Jobez \emph{et al.}~\cite{Jobez2015}.
\section{Introduction}
SWIFT J1729.9$-$3437 is a transient X-ray pulsar discovered during all-sky monitoring with the \emph{Swift} Burst Alert
Telescope (BAT) on 2010 July 13 (Markwardt et al. 2010a). At the same time, \emph{Rossi X-ray Timing Explorer} (\emph{RXTE})
Proportional Counter Array (PCA) monitoring scans of the Galactic centre region confirmed a gradual increase in flux from a
position consistent with the \emph{Swift} position of the source (RA = 262.53746, Dec. = $-$34.61239). Consecutive
\emph{RXTE}$-$PCA pointings identified SWIFT J1729.9$-$3437 as a pulsar with $\sim$530 s pulsations (Markwardt et al. 2010b).
Markwardt et al. (2010b) also suggested that the X-ray spectrum of the source is compatible with typical X-ray pulsar spectra,
modelled by an absorbed power law with a high energy cut-off.
In this paper, we analyse \emph{RXTE} and \emph{Swift} observations of SWIFT J1729.9$-$3437 and examine the spectral and
timing properties of its outburst from 2010 July 20 to 2010 August 12 (MJD 55397 $-$ MJD 55420). In Section 2, we describe
the observations that we analyse. In Section 3 and 4, we present our timing and spectral analysis results. In Section 5, we
discuss our results and conclude.
\section{Observations}
\subsection{\emph{RXTE}}
Eleven pointed \emph{RXTE}$-$PCA observations of SWIFT J1729.9$-$3437 were performed between 2010 July 20 and 2010 August 8.
The total exposure time is about 42 ks. PCA consisted of five co-aligned identical proportional counter units (PCUs)
(Jahoda et al. 1996) each having an effective area of approximately 1300 $cm^{2}$. The energy range of PCA was from
2 to 60 keV, with an energy resolution of 18 per cent at 6 keV. The field of view (FOV) at full width at half-maximum (FWHM)
was about 1$\degr$. Generally, various PCUs were turned off during PCA observations. The responses of PCU0 and PCU1 were not
well known due to loss of their propane layers. Furthermore, the responses of the top anode layers were modeled better than
the other layers. Although the number of active PCUs during the observations of SWIFT J1729.9$-$3437 varied between one and
three, we select only data from the top anode layer of PCU2 for the reasons mentioned above.
The standard software tools of \verb"HEASOFT v.6.11" are used for the PCA data analysis. Data is filtered by selecting times
when the elevation angle is greater than 10$\degr$, the offset from the source is less than 0.02$\degr$ and the electron
contamination of PCU2 is less than 0.1. The background extraction for spectra and light curves is done by using the latest
PCA background estimator models supplied by the RXTE Guest Observer Facility (GOF), \verb"EPOCH 5C". During the extraction
of spectra, Standard2f mode data is considered with 128 energy channels and 16-s time resolution. Relevant response matrices
for spectra are created by \verb"PCARSP V.11.7.1". Furthermore, we construct pulse phase resolved spectra with the tool
\verb"FASEBIN Rev.1.8" by using Good Xenon mode event data, or 125 $\mu$s time resolution event files
(\verb"E_125us_64M_0_1s") when Good Xenon mode data is not available. From these spectra, we generate energy resolved pulse
profiles by obtaining the count rates per phase bin with the tool \verb"FBSSUM". The 1 s binned 3-20 keV light curves are
produced from Good Xenon and \verb"E_125us_64M_0_1s" events.
\subsection{\emph{SWIFT}}
After the discovery of SWIFT J1729.9$-$3437 by the \emph{Swift}$-$BAT (Barthelmy et al. 2005) all-sky monitoring on 2010
July 13 (Markwardt et al. 2010a), a total of 11 follow-up X-ray Telescope (XRT; Burrows et al. 2005) observations
(each $\sim$3 ks) were carried out between 2010 July 20 and 2010 August 12. XRT is a focusing instrument on board the
\emph{Swift} satellite (Gehrels et al. 2004), which operates in the 0.2$-$10 keV energy range. XRT has an effective area
of 110 $cm^{2}$ at 1.5 keV, with a spatial resolution of 18$\arcsec$ and a FOV of $23.6\arcmin \times 23.6\arcmin$.
The operation mode of XRT switches between photon-counting (PC), imaging and timing, depending on the brightness of the observed
source. Pointing observations of SWIFT J1729.9$-$3437 are in the PC mode. Screened event files are produced by the
\verb"XRTDAS v.2.7.0" standard data pipeline package \verb"XRTPIPELINE v.0.12.6". Standard grade filtering (0-12) is applied
to PC mode data.
The spectral extraction is carried out by filtering appropriate regions using \verb"XSELECT v.2.4b". For the observations
in which the XRT count rate is high enough (above 0.5 cts/s) to produce event pile-up, the source region is selected to be
annular. We compared the observed and nominal point spread functions (PSF) (Vaughan et al. 2006) and determined the size of
the affected core in order to set the radius of the inner circle of the annulus. For this purpose, we first modelled
the outer wings ($>15 \arcsec$) of the PSF with a King function which has typical parameter values for XRT
(Moretti et al. 2005). Then the model is extrapolated to the inner region for comparison with the observed PSF and the size
of the piled-up region is determined from the deviation point between the data and the model. For the brightest observations
an exclusion region of radius $\sim9\arcsec$ is sufficient to correct pile-up. For low count rate observations a circular
region of radius 47$\arcsec$ is selected for the source spectral extraction. Source regions are centred on the position
determined with \verb"XRTCENTROID v.0.2.9". A circular source-free region with 140$\arcsec$ radius is selected for the
background spectral extraction. Resulting spectral files are grouped to require at least 50 counts per bin using the ftool
\verb"GRPPHA v.3.0.1", for the $\chi^{2}$ statistics to be applicable. We used the latest response matrix file (version v013)
and created individual ancillary response files using the tool \verb"XRTMKARF v.0.5.9" with the exposure map produced by
\verb"XRTEXPOMAP v.0.2.7". Spectral analysis is performed using \verb"XSPEC v.12.7.0".
For the timing analysis, arrival times of XRT events are first corrected to the Solar system barycenter by using the tool
\verb"BARYCORR v.1.11". Then, background-subtracted XRT light curves in the 0.2$-$10 keV energy range; corrected for pile-up,
PSF losses and vignetting; are extracted by using the highest possible time resolution (2.51 s) for PC mode data.
\section{Timing Analysis}
\subsection{Timing Solution}
\begin{figure}
\center{\includegraphics[height=8.4cm, angle=270]{both_lc_new.eps}}
\caption{530 s binned light curves from \emph{Swift}$-$XRT (0.3$-$10 keV; upper panel) and \emph{RXTE}$-$PCA (3$-$20 keV,
PCU2 top layer; lower panel) observations.}
\label{bothlc}
\end{figure}
For the timing analysis, we use 1 s binned \emph{RXTE}$-$PCA and 2.51 s binned \emph{Swift}$-$XRT light curves of the source.
To illustrate the temporal variation of the pulse phase averaged count rate of the source, in Fig. \ref{bothlc}, we present
530 s binned versions of these light curves. These background subtracted light curves are also corrected to the barycenter
of the Solar system.
In order to estimate pulse periods of the source, the \emph{RXTE}$-$PCA time series is folded on statistically independent
trial periods (Leahy et al. 1983). Template pulse profiles are formed from these observations by folding the data on the
period giving maximum $\chi^2$. Then the barycentred \emph{Swift}$-$XRT time series are also folded over the best period
obtained from \emph{RXTE}. Pulse profiles consisting of 20 phase bins are represented by their Fourier harmonics
(Deeter \& Boynton 1985). By cross-correlating the pulse profile of each observation with the template, we obtain the pulse
arrival times.
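A minimal sketch of the folding search (an illustration only; \verb"t", \verb"rate" and \verb"err" stand for the barycentred light curve of Section 2) is:
\begin{verbatim}
import numpy as np

def fold_chi2(t, rate, err, period, nbins=20):
    phase = (t / period) % 1.0
    idx = (phase * nbins).astype(int)
    mean = np.average(rate, weights=1.0 / err**2)
    chi2 = 0.0
    for k in range(nbins):
        sel = idx == k
        if sel.any():
            m = np.average(rate[sel], weights=1.0 / err[sel]**2)
            e = 1.0 / np.sqrt(np.sum(1.0 / err[sel]**2))
            chi2 += (m - mean) ** 2 / e**2
    return chi2
# The best period maximizes chi2 over statistically independent trials,
# spaced by ~ period**2 / T_span around the ~530 s pulse period.
\end{verbatim}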
\begin{figure}
\center{\includegraphics[width=8.4cm]{arr_circular_new4.eps}} \vspace*{0.5cm}
\caption{Pulse phases and their residuals after a quadratic fit. Crosses and triangles represent data from
\emph{RXTE}$-$PCA and \emph{Swift}$-$XRT respectively. The solid line in the bottom panel corresponds to the elliptical
orbital model with an upper limit of eccentricity of 0.60 and $T_{\pi/2}=5^{\circ}$ and circular orbital parameters
listed in Table \ref{soln}.}
\label{arrt}
\end{figure}
We are able to connect all the pulse arrival times of the observations in phase over the whole observation time span.
Following the approach of Deeter et al. (1981), we find that it is possible to fit the pulse arrival times to the second-order
polynomial,
\begin{equation}
\delta \phi = \delta \phi_{o} + \delta \nu (t-t_{o})
+ \frac{1}{2}\dot{\nu} (t-t_{o})^{2}
\label{polyn}
\end{equation}
where $\delta \phi$ is the pulse phase offset found from the pulse timing analysis, $t_{o}$ is the folding epoch,
$\delta \phi_{o}$ is the residual phase offset at $t_{o}$, $\delta \nu$ is the correction to the pulse frequency at
$t_{o}$, and $\dot{\nu}$ is the pulse frequency derivative. In Fig. \ref{arrt},
we present the pulse phases and the residuals of this quadratic fit.
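A sketch of this fit (illustrative placeholders only; the arrival-time data of Section 3.1 would replace \verb"t_obs" and \verb"dphi") is:
\begin{verbatim}
import numpy as np
t_obs = np.linspace(0.0, 23 * 86400.0, 22)          # s, mock sampling
dphi = (0.01 + 1e-7 * t_obs + 0.5 * 6.42e-12 * t_obs**2
        + 0.1 * np.sin(2 * np.pi * t_obs / (15.3 * 86400.0)))
coeffs = np.polyfit(t_obs, dphi, 2)       # [nu_dot/2, dnu, dphi_0]
print(f"nu_dot = {2 * coeffs[0]:.2e} Hz/s")
resid = dphi - np.polyval(coeffs, t_obs)  # fitted next by the orbital model
\end{verbatim}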
In Fig. \ref{pulexp}, we show typical \emph{Swift}$-$XRT and \emph{RXTE}$-$PCA pulse profiles. As seen from this figure, the
\emph{Swift} pulse has larger error bars and a lower signal-to-noise ratio than the \emph{RXTE} pulse. In the pulse
timing analysis, we find that the error bars of the pulse phases of \emph{RXTE} and \emph{Swift} are inversely correlated
with the signal-to-noise ratio of the pulses of each observation.
\begin{figure}
\center{\includegraphics[width=8.4cm]{pulseexamples.eps}} \vspace*{0.5cm}
\caption{Two pulse examples obtained from the \emph{Swift}$-$XRT observation at MJD 55397.66 (top panel) and the
\emph{RXTE}$-$PCA observation at MJD 55397.77 (bottom panel).}
\label{pulexp}
\end{figure}
\begin{table}
\caption{Timing Solution and Orbital Model of SWIFT J1729.9$-$3437.}
\center{\renewcommand{\arraystretch}{1.2}\begin{tabular}{|l|l|}
\hline \hline
{\bf{Timing Solution Parameter}} & {\bf{Value}} \\
\hline
Timing Epoch (MJD) & $55393.50(6)$ \\
Frequency (Hz) & $1.8734(8) \times 10^{-3}$ \\
Frequency Derivative (Hz.s$^{-1}$) & $6.42(6) \times 10^{-12}$ \\
\hline \hline
{\bf{Circular Orbital Model Parameter}} & {\bf{Value}} \\
\hline
Orbital Epoch (MJD) & $55395.7(6)$ \\
${{a} \over {c}}\sin i$ (lt.s) & $65(3)$ \\
Orbital Period (days) & $15.3(2)$ \\
\hline
\end{tabular}}
\label{soln}
\end{table}
The residuals shown in Fig. \ref{arrt} fit well to a sinusoidal function with a period of 15.3(2) days. This corresponds to
a circular orbital model, the parameters of which are listed in Table \ref{soln}. This circular model has a reduced $\chi^2$ of
1.0. Using an elliptical model, we find an upper limit for the eccentricity of 0.6 (see bottom panel of Fig. \ref{arrt}).
In this case, the reduced $\chi^2$ is found to be 0.4. This small reduced $\chi^2$ value indicates that the elliptical orbital
model ``over-fits'' the data.
The residuals of the quadratic fit can alternatively be interpreted as a noise process due to random torque fluctuations
(Bildsten et al. 1997, Baykal et al. 2007). In order to estimate the noise strength, we fit a cubic polynomial to the
residuals of the pulse arrival times. The observed time series is simulated with a Monte Carlo technique for a unit white
noise strength, defined as $P_{\dot \nu}({f})=1$, and fitted to a cubic polynomial in time. Then the square of the third-order
polynomial term is divided by the value from the Monte Carlo simulations (Deeter 1984, Cordes 1980). The torque noise strength
is obtained as $6.8 \times 10^{-18}$ Hz s$^{-2}$. This noise strength estimate is comparable with those of other
accretion powered sources, such as the wind accretors Vela X$-$1, 4U 1538$-$52 and GX 301$-$2, which have values ranging
between $10^{-20}$ and $10^{-18}$ Hz s$^{-2}$ (Bildsten et al. 1997). Her X$-$1 and 4U 1626$-$67, which are disc accretors with
low mass companions, have shown pulse frequency derivatives consistent with noise strengths of
$10^{-21}$ to $10^{-18}$ Hz s$^{-2}$ (Bildsten et al. 1997). Therefore, the residuals of the quadratic fit can also be associated
with torque noise fluctuations of the source.
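A sketch of the normalization step (illustrative only; \verb"t_obs" mimics the actual sampling and the measured residuals would replace the simulated ones) is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def cubic_term_sq(t, y):
    return np.polyfit(t, y, 3)[0] ** 2

t_obs = np.sort(rng.uniform(0, 23 * 86400.0, 22))   # s, mock sampling
dt = np.diff(t_obs, prepend=0.0)
sims = []
for _ in range(2000):                     # unit-strength white torque noise
    nu = np.cumsum(rng.normal(0.0, np.sqrt(dt)))
    sims.append(cubic_term_sq(t_obs, np.cumsum(nu * dt)))
norm = np.mean(sims)
# S = cubic_term_sq(t_obs, observed_residuals) / norm   (~6.8e-18 Hz s^-2)
\end{verbatim}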
\subsection{Pulse Profiles and Hardness Ratios}
\begin{figure}
\center{\includegraphics[width=7.7cm, angle=270]{lc_ex.eps}}
\caption{26 s binned light curves of three different \emph{RXTE}$-$PCA observations (3$-$20 keV, PCU2 top layer). Time
values are converted to phases (or time/pulse period) for arbitrary observation epochs. The IDs and mid-times of the
observations are A) 95044-05-02-02 at MJD 55404.6, B) 95044-05-02-03 at MJD 55406.7, C) 95044-05-03-00 at MJD 55408.5.}
\label{lcex}
\end{figure}
A double-peaked pulse profile is observed in the first five \emph{RXTE}$-$PCA observations of SWIFT J1729.9$-$3437
(see panel (A) of Fig. \ref{lcex}); however, one peak loses its intensity starting from the middle of the observation on
MJD 55406.7 (see panel (B) of Fig. \ref{lcex}) as the source flux continues to decrease after the outburst. The peak value of
the 2$-$10 keV unabsorbed flux is $3.04 \times 10^{-10}\,$erg$\,$cm$^{-2}\,$s$^{-1}$ on MJD 55398.7; during its gradual decrease
it reaches $1.96 \times 10^{-10}\,$erg$\,$cm$^{-2}\,$s$^{-1}$ on MJD 55408.5, when the shape of the pulse profile changes
(see panel (C) of Fig. \ref{lcex}). The minimum flux, observed for the last \emph{RXTE}$-$PCA observation on MJD 55416.4, is
$1.36 \times 10^{-10}\,$erg$\,$cm$^{-2}\,$s$^{-1}$. These flux values are calculated from the model flux of the individual
spectral fitting of each observation.
\begin{figure*}
\center{\includegraphics[width=8cm, angle=270]{3_norm_dash.eps}\hspace{1cm}\includegraphics[width=8cm, angle=270]{7_norm_dash.eps}}
\caption{Energy resolved pulse profiles of \emph{RXTE}$-$PCA observations on MJD 55400.78 (left; ID:95044-05-02-00) and
MJD 55408.46 (right; ID:95044-05-03-00). The unit of y-axis of each plot is normalized count from the specified energy
interval. The dashed line represents the mid-point of the secondary peak.}
\label{enrespp}
\end{figure*}
To search for a possible energy dependence of the pulse shape change, we construct pulse profiles in five energy bands,
i.e. 3$-$8, 8$-$13, 13$-$18, 18$-$25 and 25$-$60 keV. Two examples of the energy resolved pulse profiles,
one double-peaked and one single-peaked, are given in Fig. \ref{enrespp}. We find that the 8$-$13 keV and 13$-$18 keV
pulses are stronger (they have a higher pulse fraction) than the 3$-$8 keV and 18$-$25 keV pulses in all observations. In
the 25$-$60 keV band the pulse fraction drops noticeably. The pulse shape shows no strong energy dependence except for the two
observations on MJD 55400.78 (ID:95044-05-02-00, see Fig. \ref{enrespp}) and MJD 55404.63 (ID:95044-05-02-02). In these two
exceptions the secondary peak around pulse phase 0.3 loses its intensity in the 18$-$25 keV energy band.
\begin{figure*}
\center{\includegraphics[width=5.8cm, angle=270]{hard_day.eps}\hspace{1cm}\includegraphics[width=5.8cm,
angle=270]{hard_intensity.eps}}
\caption{Hardness ratios of energy resolved light curves. Daily averaged time evolutions are plotted on the left, whereas
530 s binned hardness ratios over 3-25 keV PCU2 count rates are plotted on the right. Specified abbreviations on y-axis
of each panel stand for HR1: 8$-$13 keV/3$-$8 keV, HR2: 18$-$25 keV/13$-$18 keV.}
\label{hard}
\end{figure*}
We construct hardness ratio plots from the energy resolved light curves. Daily averaged count rates from the 3$-$8,
8$-$13, 13$-$18 and 18$-$25 keV light curves are used to plot the hardness ratios HR1: 8$-$13 keV/3$-$8 keV and
HR2: 18$-$25 keV/13$-$18 keV over time (see left panels of Fig. \ref{hard}). HR1 shows a noticeable evolution
with time: it remains almost constant at a value of $\sim1.1$ until MJD 55406.7, and a gradual decrease starts after the sixth
observation. The interval with constant HR1 corresponds to times when the pulse profile is double-peaked, whereas the
decreasing interval of HR1 coincides with the times when the pulse profile is single-peaked. Nevertheless, it should be
noted that the pulse profiles show no significant variation in the corresponding energy bands (see Fig. \ref{enrespp}).
Furthermore, 530 s binned hardness ratios are plotted over the total count rate in the 3$-$25 keV band (see right panels of
Fig. \ref{hard}). A noticeable correlation is again observed for HR1. As the source flux decreases during the ongoing decline
of the outburst, the emitted radiation becomes softer.
\section{Spectral Analysis}
\subsection{Overall Spectrum}
\begin{figure}
\center{\includegraphics[width=6.2cm, angle=270]{sim_son.eps}}
\caption{Simultaneous spectral fitting of \emph{Swift}$-$XRT (0.3$-$9.3 keV; grey) and \emph{RXTE}$-$PCA (5$-$25 keV;
black) data. The data and its best fit with \texttt{WABS$\times$(CUTOFFPL$+$GAU)} ($\chi^{2}=1.09\,$; solid line) are
shown in upper panel, the residuals are given in the lower panel.}
\label{bothspe}
\end{figure}
\begin{table}
\caption{Best fit (\texttt{WABS$\times$(CUTOFFPL$+$GAU)}) spectral parameters for the simultaneous fitting of
\emph{Swift}$-$XRT and \emph{RXTE}$-$PCA data shown in Fig.\ref{bothspe}. $C_1$ and $C_2$ are the multiplicative constant
factors for different instruments. All uncertainties are calculated at the 90 per cent confidence level. }
\label{simspe}
\center{\renewcommand{\arraystretch}{1.5}\begin{tabular}{llr}
\hline \hline
\bf{Parameter} & & \bf{Value} \\
\hline
$C_1$ & for XRT & 1.00 (fixed) \\
$C_2$ & for PCA & $1.55^{+0.04}_{-0.04}$ \\
$n_H$ & $(10^{22}\,$cm$^{-2})$ & $8.27^{+0.37}_{-0.36}$ \\
$\Gamma$ & & $1.23^{+0.07}_{-0.07}$ \\
$\Gamma$ Norm. &($10^{-2}\,$cts$\,$cm$^{-2}\,$s$^{-1}$) & $2.42^{+0.26}_{-0.25}$ \\
$E_{fold}$ &(keV) & $16.50^{+1.79}_{-1.52}$ \\
$E_{Fe}$ &(keV) & $6.51^{+0.03}_{-0.03}$ \\
$\sigma_{Fe}$ &(keV) & $0.25^{+0.06}_{-0.05}$ \\
$Fe$ Norm. &($10^{-4}\,$cts$\,$cm$^{-2}\,$s$^{-1}$) & $4.68^{+0.49}_{-0.41}$ \\
Unabs. $F\,_{0.3-9.3\text{ keV}}$ & $(10^{-10}\,$erg$\,$cm$^{-2}\,$s$^{-1})$ & $2.08^{+0.017}_{-0.024}$ \\
Unabs. $F\,_{5-25\text{ keV}} $ & $(10^{-10}\,$erg$\,$cm$^{-2}\,$s$^{-1})$ & $3.00^{+0.004}_{-0.016}$ \\
\hline
Reduced $\chi^{2}$ (dof) & & 1.09 (187) \\
\hline
\end{tabular}} \\
\end{table}
A preliminary spectral analysis of the first pointed observations of SWIFT J1729.9$-$3437 was performed by Markwardt
et al. (2010b). In this paper, we extend the spectral study by using all the available observations of the source defined
in Section 2. Basically, the spectra can be modelled by a power law with a high energy cut-off and photoelectric absorption,
as suggested by Markwardt et al. (2010b). Among the several models that describe the cut-off power law, the best fit
is achieved by the \verb"CUTOFFPL" model in \verb"XSPEC". An additional Gaussian component is also required for a weak \emph{Fe}
emission line around 6.4 keV. During the simultaneous fitting of the \emph{Swift}$-$XRT and \emph{RXTE}$-$PCA spectra, we included
a multiplicative constant factor in the model to account for the normalization uncertainty between the two instruments. The
data and the best fit are plotted in Fig. \ref{bothspe} and the corresponding spectral parameters are given in
Table \ref{simspe}.
The energy ranges for the simultaneous spectral analysis are initially selected to be 0.3$-$9.3 keV for the XRT spectrum and
3$-$25 keV for the PCA spectrum. Individually these spectra have similar shapes that can be modelled with the same models,
apart from an offset in absolute flux calibration. However individual modelling of the data from different instruments yields
different absorption parameters due to different instrumental band-passes (Markwardt et al. 2010b). During the simultaneous
fitting trials we observe large residuals for the first energy bins of the PCA spectrum although the fit is adequate for the
XRT spectrum. Therefore we exclude energies below 5 keV for the PCA spectrum, since the XRT spectrum has more spectral
energy bins in soft X-rays.
The FOV of PCA is large and SWIFT J1729.9$-$3437 is near the Galactic ridge. The thermal emission from the ridge is
known to contaminate the spectral count rates of the source. During the analysis of the first PCA observation, the weak line
at $\sim$6.6 keV reported by Markwardt et al. (2010b) was suggested to be a contamination, since it could not be resolved in
the spectrum of the first XRT observation. We tried to handle this issue in the overall PCA spectrum with fixed additive
Galactic ridge models. We confirm that the flux of the source is two orders of magnitude larger than the flux of the
Galactic ridge (Valinia \& Marshall 1998); therefore, we conclude that the contamination is small enough to be neglected.
Furthermore, the addition of the Gaussian model component for the \emph{Fe} line improves the individual fit of the 0.3$-$9.3 keV
XRT spectrum, after combining data from all XRT observations, by reducing the reduced $\chi^{2}$ from 1.08
($\chi^{2}/$d.o.f. $ = 159.0 / 147$) to 0.98 ($\chi^{2}/$d.o.f. $ = 143.8 / 146$). The F-test probability that this
improvement is achieved just by chance is $1.3 \times 10^{-4}$. Therefore, we suggest that the \emph{Fe} emission originates
from SWIFT J1729.9$-$3437. The line energy found for the source is shifted from the neutral value of \emph{Fe} K$\alpha$
($\sim$6.4 keV), which may be a consequence of an excess of ionization.
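For reference, the quoted F-test probability follows directly from the fit statistics (a sketch, using the standard F-distribution):
\begin{verbatim}
from scipy.stats import f
chi2_0, dof_0 = 159.0, 147                # without the Fe line
chi2_1, dof_1 = 143.8, 146                # with the Fe line
F = ((chi2_0 - chi2_1) / (dof_0 - dof_1)) / (chi2_1 / dof_1)
print(F, f.sf(F, dof_0 - dof_1, dof_1))   # ~15.4, ~1.3e-4
\end{verbatim}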
\begin{figure}
\center{\includegraphics[width=6.2cm, angle=270]{Fe_flux.eps}}
\caption{Variation of the \emph{Fe} line flux with the X-ray flux from the \emph{RXTE}$-$PCA observations. The error bars indicate
uncertainties at the 1$\sigma$ (68 per cent) confidence level. The dotted line represents the linear fit to the data.}
\label{Fe}
\end{figure}
In accretion powered pulsars, the \emph{Fe} K$\alpha$ line is produced by the reprocessing of the hard X-ray emission in the
relatively weakly ionized and cool matter surrounding the pulsar. The correlation of the iron line intensity with the continuum
intensity in many sources (e.g. Vela X$-$1: Ohashi et al. 1984, GX 301$-$2: Leahy et al. 1989) is taken as evidence
for fluorescence of the X-ray continuum by the cold material. As the continuum intensity increases, the illumination of the
matter increases and the line intensity strengthens. We measure \emph{Fe} line flux values from the individual \emph{RXTE}
observations of SWIFT J1729.9$-$3437 by using the \verb"CFLUX" model in \verb"XSPEC" and find that they correlate with the
source flux values (see Fig. \ref{Fe}). In Fig. \ref{Fe}, the dotted line represents the linear fit to the data, with a
slope of $0.46\pm0.12$. The Pearson product-moment correlation coefficient between the \emph{Fe} line flux and the unabsorbed source
flux at 2$-$10 keV is 0.96. The null-hypothesis probability calculated from the Student's t-distribution (two-tailed) is
$1.2 \times 10^{-5}$. This correlation also confirms the origin of the \emph{Fe} emission.
\subsection{Pulse Phase Resolved Spectra}
We construct pulse phase resolved spectra of SWIFT J1729.9$-$3437 from the first three RXTE$-$PCA observations. There are
two motivations for this data selection. First, the brightest observations are selected to ensure the best
signal-to-noise ratio possible. Second, the selected observations are the only ones with 256-channel Good Xenon event
mode data available, which is required for the \verb"FASEBIN" tool. The timing solution found for the source in Section 3.1
is appended to the timing files of the \verb"FASEBIN" tool for correct phase-binning. The pulse period is divided into five
equal segments, resulting in $\sim$2.1 ks exposure time for each phase bin spectrum.
We model the 3$-$25 keV spectra with the same models used for the overall spectrum in the previous section. As we do not observe
any significant change in the hydrogen column density ($n_H$) and the \emph{Fe} line energy during the preliminary fitting
trials, we fix these parameters during the analysis in order to better constrain the other parameters of the fits. The
photon index and the folding energy of the high energy cut-off ($E_{fold}$) are found to vary with pulse phase
(see Fig. \ref{phasespe}). The phase dependence of the photon index is anti-correlated with the pulsed flux. The Pearson
product-moment correlation coefficient between the pulsed flux and photon index values is $-0.87$ and the corresponding
null-hypothesis probability (two-tailed) is 0.06. This null-hypothesis probability makes the anti-correlation only marginally
significant, but our results still indicate that the softest emission is observed at the un-pulsed phase around 0.05, from
which we might suggest a possible physical relation between the parameters. The $E_{fold}$ values show a trend similar to that of
the photon index values, which means that softer spectra have higher cut-off energies. The Pearson product-moment correlation
coefficient between the $E_{fold}$ and photon index values is 0.96 and the corresponding null-hypothesis probability (two-tailed)
is 0.01. Although the correlation analysis indicates a strong dependence between the parameters, one should note that the
uncertainties in the parameters are not taken into account in this analysis. When the large uncertainties in the $E_{fold}$
values are taken into account, the data are consistent with a constant value of $\sim11.7$ keV. Therefore, it is difficult to
infer a clear variation of $E_{fold}$ with the pulse phase, suggesting a rather marginal detection.
\begin{figure}
\center{\includegraphics[width=10.2cm, angle=270]{fasebin_mak_step.eps}}
\caption{Pulse phase resolved variations of the spectral parameters with 5 phase bins for the first three
\emph{RXTE}$-$PCA observations. The pulse profile is plotted in 10 phase bins at the top panel. From top to bottom, we show
the pulse profile, 2$-$10 keV unabsorbed flux, the photon index, the folding energy of exponential roll-off and reduced
$\chi^{2}$ of the fits. All uncertainties are calculated at the 90 per cent confidence level. For clarity, the data points
are repeated for a cycle.}
\label{phasespe}
\end{figure}
\section{Summary and Discussion}
In this paper, we study the timing and spectral properties of SWIFT J1729.9$-$3437 using the \emph{RXTE} and \emph{Swift}
observations of its outburst from 2010 July 20 through 2010 August 12 (MJD 55397 $-$ MJD 55420). From these observations,
with a time span of $\simeq 23$ days, we find that the pulse arrival times can be fitted with a quadratic. From this fit, we
calculate a spin frequency and a spin frequency derivative of $1.8734(8) \times 10^{-3}$ Hz and $6.42(6) \times 10^{-12}$ Hz s$^{-1}$,
respectively. The residuals of the quadratic fit are further found to fit a circular orbital model with
${{a} \over {c}}\sin i=65(3)$ lt.s and an orbital period of 15.3(2) days. We also try an elliptical model and find an upper
limit for the eccentricity of 0.60. However, this model over-fits the data with a reduced $\chi^2$ of 0.40. Future observations
might help to refine the orbital parameters of the source.
Using this ${{a} \over {c}}\sin i$ value, we find the mass function
(${{4\pi^2} \over {G}}{{(a\sin i)^3} \over {P_{orbital}^2}}$) to be about $1.3M_{\odot}$. An orbital period of 15.3 days and
a spin period of 533.76 s put the source in line with the accretion powered pulsars with supergiant companions in the
Corbet diagram (Drave et al. 2012, Corbet 1984). On the other hand, the small mass function obtained from the circular
orbital model should be an indication of a small orbital inclination angle. If the circular orbital model is preferred as
the model fitting the residuals of the quadratic fit, this indicates that we observe the binary system nearly face-on.
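The quoted mass function can be reproduced directly (an illustrative check in cgs units):
\begin{verbatim}
import numpy as np
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33          # cgs
a_sini = 65.0 * c                                   # 65 lt-s in cm
P_orb = 15.3 * 86400.0                              # s
f_M = 4 * np.pi**2 * a_sini**3 / (G * P_orb**2) / M_sun
print(f"f(M) = {f_M:.2f} M_sun")                    # ~1.3
\end{verbatim}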
Alternatively, the residuals of the quadratic fit can be explained by a torque noise strength of
$6.8 \times 10^{-18}$ Hz s$^{-2}$. This value is quite consistent with those of other accreting X-ray binaries
(Baykal \& \"{O}gelman 1993, Bildsten et al. 1997). Future observations are needed to understand the exact nature of the
source.
Initially, a double-peaked pulse profile is observed in the light curves of the source. We then find that one peak loses its
intensity starting from the middle of the observation on MJD 55406.7 as the source flux continues to decrease after the
outburst. To study the energy dependence and temporal variability of the pulse profiles, we construct pulse profiles in five
different energy bands, shown in Fig. \ref{enrespp}. We observe stronger pulses in the 8$-$13 keV and 13$-$18 keV energy bands,
but generally the pulse shape shows no strong energy dependence. The double-to-single peak transition seen in this source might
be due to a sharp decline in the intensity of the radiation coming from the fan beam, since the formation of the fan beam
strongly depends on the luminosity of the source, as for EXO 2030+375 (Parmar et al. 1989) and GX 1+4 (Paul et al. 1997).
However, the lack of significant energy dependence of the pulse profiles makes the fan beam explanation implausible, since fan
beams are expected to be spectrally harder than pencil beams.
In order to have a basic understanding of the spectral variability over the whole set of observations, we study the hardness
ratios of the source. From the hardness ratio plots (see Fig. \ref{hard}), we suggest that the emitted radiation becomes softer
as the source flux decreases during the ongoing decline of the outburst. Similar spectral softening with decreasing flux has
been reported before for 1A 0535+262, A 1118$-$616, SWIFT J1626.6$-$5156, XTE J0658$-$073 and GRO J1008$-$57 using
hardness-intensity relations (Reig \& Nespoli, 2013).
To extend the preliminary spectral analysis of Markwardt et al. (2010b), we also construct a single spectrum from all the
available \emph{RXTE} and \emph{Swift} observations (see Fig. \ref{bothspe} and Table \ref{simspe}). We find that adding an
\emph{Fe} line feature with a peak at 6.51 keV slightly improves the spectral fit. We argue that this \emph{Fe}
line feature more likely originates from the source, since the Galactic ridge emission is too weak to explain it
alone. We also measure the \emph{Fe} line flux values from the individual \emph{RXTE} observations of the source and find that
they correlate with the source flux values (see Fig. \ref{Fe}). This correlation confirms that the origin of the \emph{Fe}
emission is the source itself rather than the Galactic ridge.
We perform a pulse phase resolved spectral analysis of the source using the first three \emph{RXTE} observations. From this
analysis, we find marginal evidence for a variation of the photon index and the folding energy of the high energy cut-off
with the pulse phase (see Fig. \ref{phasespe}).
\section*{Acknowledgements}
We acknowledge support from T\"{U}B\.{I}TAK, the Scientific and Technological Research Council of Turkey through the
research project TBAG 109T748.
\section{Introduction}
\indent The first results on resolution of singularities go back to Newton in the 17th century and to Puiseux in the 19th century. Their results make it possible to resolve the singularities of curves defined over $\mathbb C$. In 1939, Zariski (see \cite{zariskiunifsurface}) proposed a new method for resolving the singularities of a surface defined over a field of characteristic zero: one solves the problem locally along a valuation and then glues at the level of the Riemann-Zariski variety. The gluing is currently possible only up to dimension $3$ in arbitrary characteristic (see for example \cite{piltantrecole}). Local resolution along a valuation, or local uniformization, is likewise only known, in positive or mixed characteristic, up to dimension $3$ (see \cite{cosspilt1}, \cite{cosspilt2}, \cite{cossartpiltantmixtehal} and \cite{cossartpiltantmixtehal2}). In characteristic zero, resolution of singularities was proved by Hironaka in 1964 (\cite{hiro}) for varieties of arbitrary dimension. This result was subsequently reproved by Villamayor in 1989 (\cite{vill89}), Bierstone and Milman in 1990 (\cite{birsmilm}), Encinas and Villamayor in 2001 (\cite{encvilla}), Encinas and Hauser in 2002 (\cite{encinashauser}), W{\l}odarczyk in 2005 (\cite{wlod}) and Temkin in 2008 (\cite{tem}).
\\\indent In recent years, a new approach was proposed by Spivakovsky (\cite{spiva}) and Teissier (\cite{teissiertorique}) to attack the problem of resolution of singularities in positive and mixed characteristic via Zariski's method. Since the first step is to prove local uniformization of a valuation, they studied the graded algebra naturally associated with it, together with its generators, called \textit{key polynomials}.
\\\indent In this article, we adapt Spivakovsky's approach (\cite{spiva}) to embedded local uniformization of equicharacteristic quasi-excellent rings to the case where the residue field has characteristic zero. This result is already well known; the interest of our proof is that it allows one to describe in advance all the coordinates of all infinitely near points, and this for all the intermediate blowings-up. The method is as follows: by \cite{spivanova}, we know that it suffices to obtain local uniformization for valuations of rank $1$ centered on a local noetherian integral domain $S$. We then consider the ideal $\overline H$ consisting of all elements of infinite value in the completion of $S$, denoted $\widehat{S}$. This ideal is prime by \cite{ideimp}; it is the \textit{implicit prime ideal}. If $S$ is quasi-excellent, that is, if the completion morphism is regular, then $\widehat{S}_{\overline H}$ is regular (Theorem \ref{RHestreg}). To conclude, it suffices to show that there exists a sequence of local blowings-up making $\widehat{S}/\overline H$ regular and such that every element of this ring can be written as the product of a monomial by a unit (Lemma \ref{lemmetechniquespiva} and Theorem \ref{uniflocalerang1car0}). By Cohen's structure theorem, we know that there exist a complete regular local ring $R$ and a surjective morphism $R\twoheadrightarrow \widehat{S}/\overline H$. This morphism induces an isomorphism between $R/H$ and $\widehat{S}/\overline H$, where $H$ is the kernel of the morphism. To obtain our result, it only remains to prove it for $R/H$ (Theorem \ref{thmprelimcar0}).
\\\indent The strategy consists in decreasing the embedding dimension of $R/H$ if $H$ is non-zero, and otherwise in monomializing the elements of $R$.
\\\indent In the equicharacteristic case, we know that $R$ is a ring of formal power series, and we can then use the theory of key polynomials. By induction on the embedding dimension, we show that the ideal $H$ is, up to a sequence of local blowings-up, principal and generated by a monic polynomial. The chosen sequence of blowings-up is not arbitrary: the regular parameters of the target ring are key polynomials, and as soon as an element has been transformed into a monomial, the sequence of blowings-up keeps it in this form.
\\\indent Since every polynomial of $R$ can be written uniquely as a finite sum of key polynomials, it suffices to monomialize the key polynomials. If there is no first limit key polynomial, a simple induction gives the result (Proposition \ref{eclatpolyclecar0}). This situation corresponds to the case where, after a finite number of blowings-up, one performs a finite number of translations to obtain a monomialization result (in terms of key polynomials and valuations, this is the situation where, after a finite number $i_0$ of steps, the initial valuation equals the monomial valuation corresponding to the key polynomial of step $i_0$). In terms of the defect of an extension, this situation corresponds to the case where there is no defect; in an article in preparation (\cite{jcdefaut}) we will show that the defect can be understood very precisely in terms of the degree of the first limit key polynomial.
\\\indent Now this situation always occurs in equicharacteristic zero: there is no limit key polynomial for valuations of rank $1$ (Corollary \ref{polycleencar0}).
\\\indent In the case where there exists a first limit key polynomial, that is, when one performs an infinite number of translations, in other words when the extension under consideration has a defect, we do not know how to conclude. In \cite{jcthese}, we showed that it suffices to monomialize the first limit key polynomial, which can be viewed as a generalized Artin-Schreier polynomial.
\\\indent In mixed characteristic, the situation is similar: $R$ is a ring of formal power series with coefficients in a discrete valuation ring, or a quotient of a power series ring of this type. We can also conclude when there is no first limit key polynomial, by using the equicharacteristic case, but we must assume that the value of $p$, the characteristic of the residue field of $R$, is not divisible by $p$ in the value group.
\\ \\\indent To begin, we recall the definition of the center of a valuation, as well as those of the various graded algebras used.
\\\indent In the second part, we recall the notion of quasi-excellence for a ring and for a scheme. We also define the various local uniformization and embedded local uniformization properties that we will prove in the characteristic zero case.
\\\indent Next, after recalling the notion of key polynomial, we show that there are no limit key polynomials in characteristic zero for valuations of rank $1$.
\\\indent In the fourth part, we develop the tools needed for the proofs of the various monomialization and local uniformization theorems. We first give the definition of a framed local blowing-up, which imposes a system of generators of the maximal ideal; this type of blowing-up preserves the property of being the product of a monomial by a unit. We then construct one explicitly that has the property of transforming key polynomials into regular parameters. We next recall the essential results on the implicit prime ideal. We end this part with the monomialization of non-degenerate elements (that is, elements whose value equals the monomial value), which is a special case of Hironaka's game (see \cite{jeuhiro} and \cite{spivajeuhiro}), and with a local uniformization theorem for quasi-homogeneous hypersurfaces satisfying certain properties.
\\\indent In the fifth part, we prove a monomialization theorem in the equicharacteristic case. At each step of the algorithm, a local blowing-up is followed by a completion. We desingularize the ideal $H$ and monomialize the key polynomials in the case where there is no first limit key polynomial.
\\\indent The sixth part is identical to the previous one, except that we work in the mixed characteristic setting, with the additional hypothesis that the value of $p$, where $p$ is the characteristic of the residue field of $R$, is not divisible by $p$ in the value group.
\\\indent In the seventh part, we prove the same monomialization result as in the two previous parts, but without completing after each blowing-up.
\\\indent We conclude with the proof of the main result: embedded local uniformization of a valuation centered at a point of a quasi-excellent scheme such that the residue field of the local ring at that point has characteristic zero.
\\ \\\indent Je tiens \`a remercier Mark Spivakovsky qui a d\'evelopp\'e la plupart des outils de cet article pendant de longues ann\'ees et qui m'a permis de les appliquer dans le cadre de la caract\'eristique nulle.
\\ \\ \noindent\textbf{Notations.} Let $\nu$ be a valuation on a field $K$. Write $R_\nu=\lbrace f\in K\:\vert\:\nu(f)\geqslant 0\rbrace$; it is a local ring with maximal ideal $m_\nu=\lbrace f\in K\:\vert\:\nu(f)> 0\rbrace$. We then denote by $k_\nu=R_\nu/m_\nu$ the residue field of $R_\nu$ and by $\Gamma_\nu=\nu(K^*)$ its value group.
\\If $R$ is a ring, we write $car(R)$ for its characteristic. If $(R,\mathfrak m)$ is a local ring, we write $\widehat{R}$ for the $\mathfrak m$-adic completion of $R$.
\\For every $P\in Spec(R)$, we write $\kappa(P)=R_P/PR_P$ for the residue field of $R_P$.
\\For $\alpha\in\mathbb Z^n$ and a set $u=(u_1,...,u_n)$ of elements of $R$, we write:
\[u^\alpha=u_1^{\alpha_1}...u_n^{\alpha_n}.\]
For $P,Q\in R\left[ X\right] $ with $P=\sum\limits_{i=0}^{n}a_{i}Q^{i}$, where the $a_{i}\in R[X]$ have degree strictly less than that of $Q$, we write:
\[d_{Q}^{\:\circ}(P)=n.\]
If $Q=X$, we simply write $d^{\:\circ}(P)$ instead of $d_{X}^{\:\circ}(P)$.
\\Finally, if $R$ is an integral domain, we write $Frac(R)$ for its field of fractions.
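To fix ideas about the notation $d_{Q}^{\:\circ}$, here is a small worked computation (a purely illustrative example, not taken from the text): for $R=\mathbb Q$, $Q=X^{2}$ and $P=X^{5}+X^{3}+1$, successive Euclidean division by $Q$ gives
\[P=X\cdot\left(X^{2}\right)^{2}+X\cdot\left(X^{2}\right)+1,\]
with coefficients $a_2=X$, $a_1=X$, $a_0=1$, all of degree strictly less than $d^{\:\circ}(Q)=2$; hence $d_{Q}^{\:\circ}(P)=2$, while $d^{\:\circ}(P)=5$.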
\section{Center of a valuation, associated graded algebras}
\begin{defi}\label{centrevaldef}
Let $R$ be a ring and $P$ a prime ideal. A valuation of $R$ \textbf{centered at} $\boldsymbol P$ is the datum of a minimal prime ideal $P_\infty$ of $R$ contained in $P$, together with a valuation of the field of fractions of $R/P_\infty$ centered at $P/P_\infty$. The ideal $P_\infty$ is then the support of the valuation.
\\If $R$ is a local ring with maximal ideal $\mathfrak{m}$, we say that $\nu$ is \textbf{centered at} $\boldsymbol R$ to mean that $\nu$ is centered at $\mathfrak{m}$.
\\Let $X$ be an integral scheme with function field $K(X)$. A valuation $\nu$ of $K(X)$ is \textbf{centered at a point} $\boldsymbol{\xi}$ of $X$ if $\nu$ is centered at $\mathcal{O}_{X,\xi}$. We then say that $\xi$ is the \textbf{center} of $\nu$.
\end{defi}
\begin{defi}\label{alggrad}
Let $R$ be a ring and $\nu:R\rightarrow \Gamma\cup\lbrace\infty\rbrace$ a valuation centered at a prime ideal of $R$. For every $\alpha\in\nu(R\setminus\lbrace 0\rbrace)$, we define the ideals:
\[P_\alpha=\lbrace f\in R\:\vert\:\nu(f)\geqslant\alpha\rbrace;\]
\[P_{\alpha,+}=\lbrace f\in R\:\vert\:\nu(f)>\alpha\rbrace.\]
The ideal $P_\alpha$ is called the $\boldsymbol{\nu}$\textbf{-ideal} of $R$ of value $\alpha$.
\\We then define \textbf{the graded algebra of} $\boldsymbol{R}$ \textbf{associated with} $\boldsymbol{\nu}$ by:
\[gr_{\nu}(R)=\bigoplus\limits_{\alpha\in\nu(R\setminus\lbrace 0\rbrace)}P_{\alpha}/P_{\alpha,+}.\]
The algebra $gr_\nu(R)$ is an integral domain.
\\For $f\in R\setminus\lbrace 0\rbrace$, we define its image in $gr_\nu(R)$, denoted $in_\nu(f)$, as the natural image of $f$ in $P_{\nu(f)}/P_{\nu(f),+}\subset gr_\nu(R)$; it is a homogeneous element of degree $\nu(f)$.
\\Finally, we define a natural valuation on $gr_\nu(R)$ with value group $\nu(R\setminus\lbrace 0\rbrace)$, denoted $ord$, by:
\[ord(f)=\min\lbrace\alpha\:\vert\:f_{\alpha}\neq 0\rbrace,\]
where $f\in gr_\nu(R)$ is written as a finite sum $f=\sum\limits_{\alpha\in\nu(R\setminus\lbrace 0\rbrace)}f_{\alpha}$, $f_\alpha\in P_\alpha/P_{\alpha,+}$.
\end{defi}
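As a simple illustration (an example added here, not taken from the text): let $R=k[[t]]$ and $\nu=ord_t$, so that $\nu(R\setminus\lbrace 0\rbrace)=\mathbb N$, $P_n=(t^n)$ and $P_{n,+}=(t^{n+1})$. Then
\[gr_{\nu}(R)=\bigoplus\limits_{n\geqslant 0}(t^{n})/(t^{n+1})\simeq k[T],\qquad T=in_\nu(t),\]
graded so that $T$ is homogeneous of degree $1$; for instance, $in_\nu\left(t^2+t^5\right)=T^2$ and $ord\left(T^2\right)=2$.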
If $R$ is a local integral domain, we define another graded algebra as follows:
\begin{defi}
Let $R$ be a local integral domain, $K=Frac(R)$ and $\nu:K^\times\twoheadrightarrow \Gamma\cup\lbrace\infty\rbrace$ a valuation of $K$ centered at $R$. For every $\alpha\in\Gamma$, we define the following $R_\nu$-submodules of $K$:
\[\boldsymbol{P_\alpha}=\lbrace f\in K\:\vert\:\nu(f)\geqslant\alpha\rbrace;\]
\[\boldsymbol{P_{\alpha,+}}=\lbrace f\in K\:\vert\:\nu(f)>\alpha\rbrace.\]
We then define \textbf{the graded algebra associated with} $\boldsymbol{\nu}$ by:
\[G_{\nu}=\bigoplus\limits_{\alpha\in\Gamma}\boldsymbol{P_{\alpha}}/\boldsymbol{P_{\alpha,+}}.\]
For $f\in K^\times$, we define its image in $G_\nu$, denoted $in_\nu(f)$, as in Definition \ref{alggrad}.
\\Finally, we define a natural valuation on $G_\nu$ with value group $\Gamma$, denoted $ord$, as in Definition \ref{alggrad}.
\end{defi}
\begin{rem}
\textup{There is a natural injection:
\[gr_\nu(R)\hookrightarrow G_\nu.\]}
\end{rem}
\begin{defi}
Let $G$ be a graded algebra without zero divisors. The \textbf{saturation of} $\boldsymbol G$ is the graded algebra $G^*$ defined by:
\[G^{*}=\left\lbrace \left.\dfrac{f}{g}\:\right|\:f,g\in G,\:g\textit{ homogeneous},\:g\neq 0\right\rbrace.\]
We say that $G$ is \textbf{saturated} if $G=G^*$.
\end{defi}
\begin{rem}
\textup{For every such graded algebra $G$, we have:
\[G^*=\left(G^*\right)^*.\]
In other words, $G^*$ is always saturated.}
\end{rem}
\begin{ex}
\textup{Let $\nu$ be a valuation centered at a local ring $R$. Then:
\[G_\nu=\left(gr_\nu(R)\right)^*.\]
In particular, $G_\nu$ is saturated.}
\end{ex}
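Continuing the illustrative example $R=k[[t]]$, $\nu=ord_t$ given above: the nonzero homogeneous elements of $gr_\nu(R)\simeq k[T]$ are the $cT^n$ with $c\in k^\times$, so
\[G_\nu=\left(k[T]\right)^*\simeq k\left[T,T^{-1}\right],\]
which is indeed saturated.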
\section{Quasi-excellence, local blow-ups and local uniformization}
For the sake of clarity, we recall the notion of quasi-excellence as well as various notions of local uniformization, both for schemes and for rings. Local uniformization is the local version of resolution of singularities. Resolving the singularities of an irreducible, reduced noetherian scheme $X$ amounts to finding a proper birational morphism $X'\rightarrow X$ with $X'$ regular. Accordingly, local uniformization of a valuation $\nu$ of $K$, the field of fractions of a local integral domain $R$ at which the valuation is centered, amounts to finding a regular ring $R'$ that birationally dominates $R$ and satisfies $R'\subset R_\nu\subset K$.
\begin{defi}
A noetherian ring $R$ is \textbf{quasi-excellent} if the following two conditions hold:
\begin{enumerate}
\item For every $P\in Spec(R)$, the completion morphism $R_P\rightarrow\widehat{R_P}$ is regular;
\item The regular locus of every $R$-algebra of finite type is open.
\end{enumerate}
A locally noetherian scheme is called \textbf{quasi-excellent} if it admits a covering by affine open subsets $(U_{\alpha})$, $U_\alpha=Spec(R_\alpha)$, such that each $R_\alpha$ is a quasi-excellent ring.
\end{defi}
\begin{rem}
\textup{\begin{enumerate}
\item A local ring is quasi-excellent if and only if condition 1. holds.
\item Quasi-excellence is preserved under localization, passage to quotients and passage to algebras of finite type.
\item Every field is quasi-excellent.
\item Rings of formal power series over a Cohen ring are quasi-excellent.
\end{enumerate}}
\end{rem}
\begin{defi}
Let $X$ be a noetherian scheme and $Y$ a subscheme of $X$. Let $\mathcal{I}_Y$ be the sheaf of ideals defining $Y$ in $X$.
\\We say that $X$ is \textbf{normally flat} along $Y$ if, for every point $\xi\in Y$, $\bigoplus\limits_{n\geqslant 0}\mathcal{I}_{Y,\xi}^n/\mathcal{I}_{Y,\xi}^{n+1}$ is a free $\mathcal{O}_{Y,\xi}$-module.
\end{defi}
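To illustrate normal flatness (an example added here, not taken from the text): let $S=Spec\left(k[x,y]/(y^2)\right)$ and $Y=S_{red}=Spec\left(k[x]\right)$, so that $\mathcal I_Y$ is generated by $y$. Then
\[\bigoplus\limits_{n\geqslant 0}\mathcal I_{Y}^n/\mathcal I_{Y}^{n+1}\simeq k[x]\oplus yk[x],\]
a free $\mathcal O_Y$-module, so $S$ is normally flat along $Y$. By contrast, in $S'=Spec\left(k[x,y]/(y^2,xy)\right)$ the ideal $(y)$ satisfies $(y)^2=0$ and $(y)\simeq k[x]/(x)$ as a $k[x]$-module, which is not free; hence $S'$ is not normally flat along $S'_{red}$ at the origin.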
\begin{propr}\textup{\textbf{(local uniformization of schemes).}}
Let $S$ be a noetherian scheme (not necessarily integral). Let $X$ be an irreducible component of $S_{red}$ and $\nu$ a valuation of $K(X)$
centered at a point $\xi\in X$. Then there exists a blow-up $\pi: S'\rightarrow S$ along a subscheme of $S$ containing no irreducible component of $S_{red}$ and having the following property: \\Let $X'$ be the strict transform of $X$ under $\pi$ and $\xi'$ the center of $\nu$ on $X'$; then $\xi'$ is a regular point of $X'$ and $S'$ is normally flat along $X'$ at $\xi'$.
\end{propr}
Since the problem is local, it can be expressed in terms of rings. Before doing so, let us recall the notion of a local blow-up with respect to a valuation.
\begin{defi}\label{defeclatlocal}
Let $(R,\mathfrak{m})$ be a noetherian local integral domain with field of fractions $K$. Let $\nu$ be a valuation of $K$ centered at $R$. Let $u_1,...,u_r\in R$ and $v_1,...,v_r\in R$ be such that $\nu(v_i)\leqslant\nu(u_i)$ for every $i\in\lbrace 1,...,r\rbrace$. Write $R'$ for the ring:
\[R'=R\left[\dfrac{u_1}{v_1},...,\dfrac{u_r}{v_r}\right].\]
Then the ring $R^{(1)}=R'_{m_\nu\cap R'}$ is a local ring with maximal ideal $\mathfrak{m}^{(1)}=(m_\nu\cap R')R'_{m_\nu\cap R'}$.
\\A \textbf{local blow-up of} $\boldsymbol R$ \textbf{with respect to} $\boldsymbol{\nu}$ is a local morphism of local rings of the form:
\[\pi:(R,\mathfrak m)\rightarrow (R^{(1)},\mathfrak{m}^{(1)}).\]
Let $I$ be an ideal of $R$ and $u_0\in I$ such that $\nu(u_0)\leqslant\nu(f)$ for every $f\in I$. Complete $u_0$ to a set $\lbrace u_0,u_1,...,u_s\rbrace$ of generators of $I$. The above morphism is called a \textbf{local blow-up of} $\boldsymbol R$ \textbf{with respect to} $\boldsymbol{\nu}$ \textbf{along} $\boldsymbol I$ if $r=s$ and $v_i=u_0$ for every $i\in\lbrace 1,...,s\rbrace$; one can always reduce to these conditions without loss of generality by setting $u_0=v_1...v_r$ and $u_i=\frac{u_i}{v_i}u_0$, $i\in\lbrace 1,...,r\rbrace$.
\end{defi}
\begin{rem}
\textup{Up to isomorphism, the preceding definition does not depend on the choice of the set of generators of $I$; that is, another choice of generators yields an isomorphic ring.}
\end{rem}
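Here is a concrete instance (our illustration, not an example from the text): let $R=k[x,y]_{(x,y)}$ and let $\nu$ be the monomial valuation defined by $\nu(x)=1$, $\nu(y)=2$. For the local blow-up along $I=(x,y)$ we may take $u_0=x$, since $\nu(x)=\min\lbrace\nu(f)\:\vert\:f\in I\rbrace$; then
\[R'=R\left[\dfrac{y}{x}\right],\qquad m_\nu\cap R'=\left(x,\dfrac{y}{x}\right)R',\qquad R^{(1)}=R'_{m_\nu\cap R'},\]
and $\nu$ is again centered at $R^{(1)}$, with $\nu\left(\frac{y}{x}\right)=1>0$.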
\begin{propr}\textup{\textbf{(local uniformization of local rings).}}\label{uniflocanneaunonint}
Let $(S,\mathfrak m)$ be a noetherian local ring (not necessarily integral), $P$ a minimal prime ideal of $S$ and $\nu$ a valuation of the field of fractions of $S/P$ centered at $S/P$. Then there exists a local blow-up $\pi:S\rightarrow S'$ with respect to $\nu$ such that $S'_{red}$ is regular and $Spec(S')$ is normally flat along $Spec(S'_{red})$.
\end{propr}
We conclude with the notions of normal crossings and of embedded local uniformization.
\begin{defi}
Let $(R,\mathfrak m)$ be a noetherian local ring and $\nu$ a valuation centered at $R$, in the sense of Definition \ref{centrevaldef}, with value group $\Gamma$. Let $u=\lbrace u_1,...,u_n\rbrace\subset\mathfrak m$ be such that $(u)+\sqrt{(0)}=\mathfrak m+\sqrt{(0)}$. Finally, for $f\in R$, write $\overline f\in R/\sqrt{(0)}=R_{red}$ for the image of $f$ in $R_{red}$ under the quotient morphism.
\begin{enumerate}
\item A monomial $u^\alpha=u_1^{\alpha_1}...u_n^{\alpha_n}$ is called \textbf{minimal with respect to} $\boldsymbol \nu$ if the family $\lbrace\nu(u_j)\:\vert\:\alpha_j\neq 0\rbrace_{1\leqslant j\leqslant n}$ is $\mathbb Z$-linearly independent in $\Gamma$.
\item Let $I$ be an ideal of $R$. We say that the triple $(R,I,u)$ has \textbf{normal crossings} if:
\begin{enumerate}
\item $R_{red}$ is a regular local ring and $(\overline{u_1},...,\overline{u_n})$ is a regular system of parameters of $R_{red}$;
\item $Spec(R)$ is normally flat along $Spec(R_{red})$;
\item $\left(I+\sqrt{(0)}\right)/\sqrt{(0)}$ is a principal ideal generated by a monomial in $\overline{u_1},...,\overline{u_n}$ (possibly with $I=(1)$, in which case $\left(I+\sqrt{(0)}\right)/\sqrt{(0)}=(1)$).
\end{enumerate}
\item Let $I$ be an ideal of $R$; the triple $(R,I,u)$ has \textbf{standard normal crossings} with respect to $\nu$ if $(R,I,u)$ has normal crossings and $\left(I+\sqrt{(0)}\right)/\sqrt{(0)}$ is generated by a monomial that is minimal with respect to $\nu$.
\item Let $I$ be an ideal of $R$; we say that $(R,I)$ has \textbf{normal crossings} (resp. \textbf{standard normal crossings}) if there exists $u$ such that $(R,I,u)$ has normal crossings (resp. standard normal crossings).
\item We say that $R$ is \textbf{desingularized} if $(R,R)$ has normal crossings.
\end{enumerate}
\end{defi}
\begin{defi}\label{defiuniflocpourpaire}
Let $(R,\mathfrak m)$ be a noetherian local ring and $\nu$ a valuation centered at $R$ in the sense of Definition \ref{centrevaldef}. Let $I$ be an ideal of $R$; we say that the pair $(R,I)$ admits an \textbf{embedded local uniformization} (resp. a \textbf{standard embedded local uniformization}) if there exists a sequence:
\[ \xymatrix{ R \ar[r]^-{\pi_{0}} & R^{(1)} \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & R^{(l-1)} \ar[r]^-{\pi_{l-1}} & R^{(l)}} \]
where, for $0\leqslant i \leqslant l-1$, $\pi_i$ is a local blow-up with respect to $\nu$ along an ideal $J^{(i)}$, with the following properties:
\begin{enumerate}
\item For $0\leqslant i \leqslant l-1$, $J^{(i)}\not\subset P_{\infty}^{(i)}$, where $P_{\infty}^{(i)}$ denotes the support of $\nu$ in $R^{(i)}$.
\item $\left(R^{(l)},IR^{(l)}\right)$ has normal crossings (resp. standard normal crossings).
\end{enumerate}
Finally, we say that $R$ admits an \textbf{embedded local uniformization} (resp. a \textbf{standard embedded local uniformization}) if, for every ideal $I$ of $R$, the pair $(R,I)$ admits an embedded local uniformization (resp. a standard embedded local uniformization).
\end{defi}
\begin{propr}
\textup{\textbf{(embedded local uniformization of schemes).}}
Let $S$ be a noetherian scheme (not necessarily integral). Let $X$ be an irreducible component of $S_{red}$ and $\nu$ a valuation of $K(X)$
centered at a point $\xi\in X$. Then there exists a blow-up $\pi: S'\rightarrow S$ along a subscheme of $S$ containing no irreducible component of $S_{red}$ and having the following property: \\Let $X'$ be the strict transform of $X$ under $\pi$, $\xi'$ the center of $\nu$ on $X'$ and $D$ the exceptional divisor of $\pi$; then $(\mathcal{O}_{X',\xi'},\mathcal{I}_{D,\xi'})$ admits an embedded local uniformization.
\end{propr}
In the case of noetherian local integral domains, the property can be stated somewhat more simply:
\begin{propr}\textup{\textbf{(embedded local uniformization of local integral domains).}}\label{uniflocplongint}
Let $(R,\mathfrak{m})$ be a noetherian local integral domain and $\nu$ a valuation of $K$, the field of fractions of $R$, centered at $R$. We say that $\nu$ admits an \textbf{embedded local uniformization} if, for any finitely many elements $f_1,...,f_q\in R$ with $\nu(f_1)\leqslant ...\leqslant\nu(f_q)$, there exists a sequence of local blow-ups with respect to $\nu$:
\[ \xymatrix{ R \ar[r]^-{\pi_{0}} & R^{(1)} \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & R^{(l-1)} \ar[r]^-{\pi_{l-1}} & R^{(l)}} \]
such that $R^{(l)}$ is regular and admits a regular system of parameters $u^{(l)}=\left(u_1^{(l)},...,u_d^{(l)}\right)$ for which each $f_i$, $1\leqslant i\leqslant q$, is a monomial in $u^{(l)}$ multiplied by a unit of $R^{(l)}$, with $f_1\:\vert\:...\:\vert\:f_q$ in $R^{(l)}$.
\end{propr}
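By way of illustration, here is a small worked example (ours, not one treated in the text). Let $R=k[x,y]_{(x,y)}$ and let $\nu$ be the monomial valuation defined by $\nu\left(\sum c_{ab}x^ay^b\right)=\min\lbrace 2a+3b\:\vert\:c_{ab}\neq 0\rbrace$, so that $\nu(x)=2$ and $\nu(y)=3$. Take $f=y^2-x^3$, of value $6$. Three local blow-ups with respect to $\nu$ monomialize $f$:
\[y_1=\dfrac{y}{x}:\quad f=x^2\left(y_1^2-x\right),\qquad \nu(y_1)=1;\]
\[x_1=\dfrac{x}{y_1}:\quad f=y_1^3x_1^2\left(y_1-x_1\right),\qquad \nu(x_1)=1;\]
\[w=\dfrac{y_1-x_1}{x_1}:\quad f=x_1^6\,w(1+w)^3,\qquad \nu(w)=0.\]
Since the residue of $w=y^2/x^3-1$ in $k_\nu$ is nonzero (the residue of $y^2/x^3$ is transcendental over $k$ for this valuation), both $w$ and $1+w$ are units in the final local ring, which is regular; there $f$ is the monomial $x_1^6$ times a unit.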
\section{Key polynomials in characteristic zero}
Consider a simple transcendental field extension $K\hookrightarrow K(x)$. Let $\mu'$ be a valuation of $K(x)$ and write $\mu:=\mu'_{\vert\:K}$. Denote by $G$ the value group of $\mu'$ and by $G_{1}$ that of $\mu$. We assume moreover that $\mu$ has rank $1$, that $\mu'(x)>0$ and that $car(k_\mu)=0$. Finally, for $\beta\in G$, set:
\[P'_{\beta}=\lbrace f\in K(x)\:\vert\:\mu'(f)\geqslant\beta\rbrace\cup\lbrace 0\rbrace;\]
\[P'_{\beta,+}=\lbrace f\in K(x)\:\vert\:\mu'(f)>\beta\rbrace\cup\lbrace 0\rbrace;\]
\[G_{\mu'}=\bigoplus\limits_{\beta\in G}P'_{\beta}/P'_{\beta,+};\]
and write $in_{\mu'}(f)$ for the image of $f\in K(x)$ in $G_{\mu'}$.
\begin{defi}
A \textbf{complete set of key polynomials} for $\mu'$ is a well-ordered collection:
\[\textbf{Q}=\lbrace Q_{i}\rbrace_{i\in\Lambda}\subset K[x]\]
such that, for every $\beta\in G$, the additive group $P'_{\beta}\cap K[x]$ is generated by products of the form $a\prod\limits_{j=1}^{s}Q_{i_{j}}^{\gamma_{j}}$, $a\in K$, with $\sum\limits_{j=1}^{s}\gamma_{j}\mu'\left(Q_{i_{j}}\right)+\mu(a)\geqslant\beta$.
\\The set is said to be \textbf{1-complete} if this condition holds for every $\beta\in G_1$.
\end{defi}
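The following example may serve as a guide (it is ours and is not taken from the sources cited). Let $K=\mathbb Q(t)$, let $\mu$ be the $t$-adic valuation, and define $\mu'$ by $\mu'(f)=ord_t\,f\left(e^t-1\right)$ for $f\in K[x]$ (extended to $K(x)$ by multiplicativity), where $e^t-1=t+\frac{t^2}{2}+\frac{t^3}{6}+...\in\mathbb Q[[t]]$ is transcendental over $K$. Then $\mu$ has rank $1$, $car(k_\mu)=0$ and $\mu'(x)=1>0$. The polynomials
\[Q_1=x,\quad Q_2=x-t,\quad Q_3=x-t-\dfrac{t^2}{2},\quad ...,\quad Q_i=x-\sum\limits_{n=1}^{i-1}\dfrac{t^n}{n!},\quad ...\]
satisfy $\beta_i=\mu'(Q_i)=i$ and, in the notation introduced below, $\alpha_i=1$ for every $i$; the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is unbounded in $G_1=\mathbb Z$, and $\lbrace Q_i\rbrace_{i\geqslant 1}$ is a $1$-complete set of key polynomials (see Proposition \ref{sibornealorscomplet} below).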
\begin{thm}(\cite{spivaherrera}, Theorem 62)\label{existpolycle}
There exists a collection $\textbf{Q}=\lbrace Q_{i}\rbrace_{i\in\Lambda}$ which is a $1$-complete set of key polynomials.
\end{thm}
By Theorem \ref{existpolycle}, we know that there exists a $1$-complete set of key polynomials $\textbf{Q}=\lbrace Q_{i}\rbrace_{i\in\Lambda}$ and that the order type of $\Lambda$ is at most $\omega\times\omega$. If $K$ is defectless, we will see that the order type of $\Lambda$ is at most $\omega$ and, consequently, that there are no limit key polynomials; this is in particular the case if $car(k_\mu)=0$. For every $i\in\Lambda$, write $\beta_i=\mu'(Q_i)$.
Let $l\in\Lambda$; we write:
\[\alpha_{i}=d_{Q_{i-1}}^{\:\circ}(Q_{i}),\:\forall\:i\leqslant l;\]
\[\boldsymbol{\alpha_{l+1}}=\lbrace \alpha_{i}\rbrace_{i\leqslant \:l};\]
\[\textbf{Q}_{l+1}=\lbrace Q_{i}\rbrace_{i\leqslant \:l}.\]
We also use the notation $\overline{\gamma}_{l+1}=\lbrace \gamma_{i}\rbrace_{i\leqslant\:l}$, where all but finitely many of the $\gamma_{i}$ vanish, and $\textbf{Q}_{l+1}^{\overline{\gamma}_{l+1}}=\prod\limits_{i\leqslant\: l}Q_{i}^{\gamma_{i}}$.
\begin{defi}
A multi-index $\overline{\gamma}_{l+1}$ is called \textbf{standard with respect to} $\boldsymbol{\alpha_{l+1}}$ if $0\leqslant \gamma_{i}<\alpha_{i+1}$ for $i\leqslant l$.
\\ An \textbf{$l$-standard monomial in} $\boldsymbol{Q_{l+1}}$ is a product of the form $c_{\overline{\gamma}_{l+1}}\textbf{Q}_{l+1}^{\overline{\gamma}_{l+1}}$, where $c_{\overline{\gamma}_{l+1}}\in K$ and $\overline{\gamma}_{l+1}$ is standard with respect to $\boldsymbol{\alpha_{l+1}}$.
\\ An \textbf{$l$-standard expansion not involving} $\boldsymbol{Q_{l}}$ is a finite sum $\sum\limits_{\beta}S_{\beta}$ of $l$-standard monomials not involving $Q_{l}$, where $\beta$ ranges over a finite subset of $G_{+}$ and $S_{\beta}=\sum\limits_{j} d_{\beta,j}$ is a sum of standard monomials of value $\beta$ satisfying $\sum\limits_{j} in_{\mu'}(d_{\beta,j})\neq 0$.
\end{defi}
\begin{defi}
Let $f\in K[x]$ and $i\leqslant l$; an \textbf{$i$-standard expansion of $f$} is an expression of the form:
\[f=\sum\limits_{j=0}^{s_{i}}c_{j,i}Q_{i}^{j},\]
where each $c_{j,i}$ is an $i$-standard expansion not involving $Q_{i}$.
\end{defi}
\begin{rem}\textup{
Such an expansion exists, by Euclidean division, and it is unique in the sense that the $c_{j,i}\in K[x]$ are unique. More precisely, for $i\in\mathbb N$, one shows by induction that the $i$-standard expansion is unique.}
\end{rem}
\begin{defi}
Let $f\in K[x]$, $i\leqslant l$ and let $f=\sum\limits_{j=0}^{s_{i}}c_{j,i}Q_{i}^{j}$ be an $i$-standard expansion of $f$. We define the \textbf{$i$-truncation of} $\boldsymbol{\mu'}$, denoted $\mu_{i}'$, as the pseudo-valuation:
\[\mu_{i}'(f)=\min_{0\leqslant j\leqslant s_{i}}\lbrace j\mu'(Q_{i})+\mu'(c_{j,i})\rbrace.\]
\end{defi}
\begin{rem}\label{inegalitetronque}\textup{
One can show that this is in fact a valuation. Moreover:
\[\forall \:f\in K[x],\: i\in\Lambda, \:\mu_{i}'(f)\leqslant \mu'(f).\]}
\end{rem}
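In the illustrative example given after the definition of a complete set of key polynomials ($K=\mathbb Q(t)$, $\mu'(f)=ord_t\,f\left(e^t-1\right)$), this inequality is strict already for $f=x-t$: its $1$-standard expansion is $f=x+(-t)$, so
\[\mu_{1}'(f)=\min\lbrace \mu'(-t),\:\mu'(1)+\beta_1\rbrace=\min\lbrace 1,1\rbrace=1<2=\mu'(f).\]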
\indent The construction of key polynomials proceeds by induction (see \cite{mahboub}, \cite{mahboubthese}, \cite{spiva} \textsection 9 and \cite{spivaherrera}). For $l\in\mathbb N^*$, one constructs a set of key polynomials $\textbf{Q}_{l+1}=\lbrace Q_{i}\rbrace_{1\leqslant i\leqslant \:l}$; two cases arise:
\begin{enumerate}
\item[(1)] $\exists\:l_0\in\mathbb N,\:\beta_{l_0}\notin G_1$;
\item[(2)] $\forall\:l\in\mathbb N,\:\beta_l\in G_1$.
\end{enumerate}
In case (1), we stop the construction; the set $\textbf{Q}_{l_0}=\lbrace Q_{i}\rbrace_{1\leqslant i\leqslant \:l_0-1}$ is by definition a $1$-complete set of key polynomials, and $\Lambda=\lbrace 1,...,l_0-1\rbrace$. Note moreover that the set $\textbf{Q}_{l_0+1}$ is then a complete set of key polynomials.
\\In case (2), the set $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ is infinite and $\Lambda=\mathbb N^*$. The propositions below guarantee that in this case the resulting set of key polynomials is again $1$-complete.
\\\indent The following lemma, which will be very useful in the sequel, shows that no increasing bounded sequence of values exists in the rank $1$ setting.
\begin{lem}\label{pasdesuitecroissantebornee}
Let $\nu$ be a valuation of rank $1$ centered at a noetherian local ring $R$. Write $P_\infty$ for the support of $\nu$. Then $\nu\left( R\setminus P_\infty\right)$ contains no infinite increasing bounded sequence.
\end{lem}
\begin{rem}
\textup{The datum of a valuation $\nu$ centered at a local ring $(R,\mathfrak{m})$ is the datum of a minimal prime ideal $P_\infty$ of $R$ (the support of the valuation) together with a valuation $\nu'$ of the field of fractions of $R/P_\infty$ such that $R/P_\infty\subset R_{\nu'}$ and $\mathfrak m/P_\infty=(R/P_\infty)\cap m_{\nu'}$.}
\end{rem}
\noindent\textit{Proof}: Let $\left(\beta_i\right)_{i\geqslant 1}$ be an infinite increasing sequence in $\nu\left( R\setminus P_\infty\right)$ bounded by $\beta$. This sequence corresponds to an infinite decreasing sequence of ideals of $R/P_\beta$. It therefore suffices to show that $R/P_\beta$ has finite length. Write $\mathfrak m$ for the maximal ideal of $R$, $\nu(\mathfrak m)=\min\left\lbrace\nu\left( R\setminus P_\infty\right)\setminus\lbrace 0\rbrace\right\rbrace$ and $\Gamma$ for the value group of $\nu$. Note that $\nu\left( R\setminus P_\infty\right)$ is archimedean. Indeed, if $\nu\left( R\setminus P_\infty\right)$ were not archimedean, there would exist $\alpha,\:\beta'\in\nu\left( R\setminus P_\infty\right)$, $\beta'\neq 0$, such that $n\beta'\leqslant\alpha$ for every $n\geqslant 1$. In particular, the set:
\[\lbrace \gamma\in\Gamma\:\vert\:\exists\:n\in\mathbb{N}\setminus\lbrace 0\rbrace,\:-n\beta'<\gamma<n\beta'\rbrace\]
would be a non-trivial isolated subgroup of $\Gamma$, contradicting the fact that $\nu$ has rank $1$.
\\We deduce that there exists $n\in\mathbb N$ such that:
\[\beta\leqslant n\nu(\mathfrak m).\]
Thus $\mathfrak m^n\subset P_\beta$, and hence there is a surjection:
\[R/\mathfrak m^n\twoheadrightarrow R/P_\beta.\]
We deduce that $R/P_\beta$ has finite length, which is absurd. We conclude that $\nu\left( R\setminus P_\infty\right)$ contains no infinite increasing bounded sequence.\\\qed
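The rank $1$ hypothesis is essential here. As a quick illustration (ours, not from the text): on $R=k[x,y]_{(x,y)}$, take the rank $2$ monomial valuation with values in $\mathbb Z^2$ ordered lexicographically, $\nu(x)=(1,0)$ and $\nu(y)=(0,1)$. Then $\lbrace\nu(y^n)\rbrace_{n\geqslant 1}=\lbrace(0,n)\rbrace_{n\geqslant 1}$ is an infinite increasing sequence in $\nu\left(R\setminus\lbrace 0\rbrace\right)$ bounded above by $\nu(x)=(1,0)$.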
\begin{prop}\label{sibornealorscomplet}(\cite{spiva}, Proposition 9.30) Suppose that we have constructed an infinite set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ such that $\beta_i\in G_1$ for every $i\in\mathbb N^*$. Suppose moreover that the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is not bounded in $G_1$. Then the set of key polynomials $\textbf{Q}_{\omega}$ is $1$-complete.
\end{prop}
\noindent\textit{Proof}: It suffices to show that, for every $\beta\in G_1$ and every $h\in K\left[x\right]$ with $\mu'(h)=\beta$, $h$ lies in the $R_\mu$-submodule of $K\left[x\right]$ generated by all the monomials of the form $a\prod\limits_{j=1}^{s}Q_{i_{j}}^{\gamma_{j}}$, $a\in K$, with $\mu'\left(a\prod\limits_{j=1}^{s}Q_{i_{j}}^{\gamma_{j}}\right)\geqslant\beta$.
\\So consider $h\in K\left[x\right]$ such that $\mu'(h)\in G_1$. Writing $h=\sum\limits_{j=0}^dh_jx^j$, we may assume without loss of generality that:
\[\forall\:j\in\lbrace 0,...,d\rbrace,\:\mu(h_j)\geqslant 0.\]
Indeed, otherwise it suffices to multiply $h$ by a suitably chosen element of $K$.
\\Since the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is not bounded in $G_1$, there exists $i_0\in\mathbb N^*$ such that:
\[\mu'(h)<\beta_{i_0}.\]
Write $h=\sum\limits_{j=0}^{s_{i_0}}c_{j,i_0}Q_{i_0}^j$ for the $i_0$-standard expansion of $h$. Since this expansion is obtained by Euclidean division, in view of the choice made on the coefficients of $h$ and since the sequence $\left\lbrace\frac{\beta_i}{d^{\:\circ}\left(Q_i\right)}\right\rbrace_{i\geqslant 1}$ is strictly increasing (it suffices to look at the $(i-1)$-standard expansion of $Q_i$), one shows easily that:
\[\forall\:j\in\lbrace 0,...,s_{i_0}\rbrace,\:\mu\left(c_{j,i_0}\right)\geqslant 0.\]
Recall that, by the construction of key polynomials, $\mu'_{i_0}\left(c_{j,i_0}\right)=\mu'\left(c_{j,i_0}\right)$ for $j\in\lbrace 0,...,s_{i_0}\rbrace$. We then deduce that:
\[\forall\:j\in\lbrace 1,...,s_{i_0}\rbrace,\:\mu'\left(c_{j,i_0}Q_{i_0}^j\right)=\mu'_{i_0}\left(c_{j,i_0}Q_{i_0}^j\right)>\mu'(h).\]
Thus $\mu'(h)=\mu'\left(c_{0,i_0}\right)$, and therefore $h$ is a sum of monomials in $\textbf{Q}_{i_0+1}$ of value at least $\mu'(h)$ (in particular, $\mu'_{i_0}(h)=\mu'(h)$).\\\qed
\\ \\\indent We then distinguish two cases:
\begin{enumerate}
\item[(1)] $\sharp\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace=+\infty$;
\item[(2)] $\sharp\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace<+\infty$.
\end{enumerate}
In case (1), using Proposition \ref{sialphainfini}, we show that the infinite set of key polynomials is always $1$-complete, regardless of the characteristic of $k_\mu$. In case (2), if the characteristic of $k_\mu$ is zero and the set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ is not complete, we show in Proposition \ref{sialphafini} that the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is never bounded. In that case, thanks to Proposition \ref{sibornealorscomplet}, we deduce that the set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ is again $1$-complete.
\begin{prop}\label{sialphainfini}(\cite{spiva}, Corollary 11.8)
Suppose that we have constructed an infinite set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ such that $\beta_i\in G_1$ for every $i\in\mathbb N^*$. Suppose moreover that the set $\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace$ is infinite. Then $\textbf{Q}_{\omega}$ is a $1$-complete set of key polynomials.
\end{prop}
\noindent\textit{Proof}: Let $h\in K[x]$; as in the proof of Proposition \ref{sibornealorscomplet}, it suffices to show that $\mu'_{i}(h)=\mu'(h)$ for some $i\geqslant 1$. Writing:
\[\delta_i(h)=\max S_i(h,\beta_i),\]
where:
\[S_i(h,\beta_i)=\lbrace j\in\lbrace 0,...,s_{i}\rbrace\:\vert\:j\beta_i+\mu'\left(c_{j,i}\right)=\mu'_i(h)\rbrace,\]
\[h=\sum\limits_{j=0}^{s_{i}}c_{j,i}Q_{i}^j,\]
part (1) of Proposition 37 of \cite{spivaherrera} (Proposition 11.2 of \cite{spiva}) gives:
\[\alpha_{i+1}\delta_{i+1}(h)\leqslant\delta_i(h),\:\forall\:i\geqslant 1.\]
We deduce that whenever $\delta_i(h)>0$ and $\alpha_{i+1}>1$:
\[\delta_{i+1}(h)<\delta_i(h).\]
Since the set $\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace$ is infinite and this strict inequality can occur only finitely many times, we conclude that there exists $i_0\geqslant 1$ such that $\delta_{i_0}(h)=0$, and hence $\mu'_{i_0}(h)=\mu'(h)$.\\\qed
\\ \\\indent From now on, we assume that we have constructed an infinite set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ with $\alpha_i=1$ for all sufficiently large $i$. For such $i$ we thus have:
\[Q_{i+1}=Q_i+z_i,\]
where $z_i$ is a homogeneous $i$-standard expansion of value $\beta_i$ not involving $Q_i$.
\begin{prop}\label{sialphafini}(\cite{spiva}, Proposition 12.8)
Suppose that $car(k_\mu)=0$ and that we have constructed an infinite set of key polynomials $\textbf{Q}_{\omega}=\lbrace Q_{i}\rbrace_{i\geqslant 1}$ such that $\beta_i\in G_1$ for every $i\in\mathbb N^*$. Suppose moreover that there exists $h\in K[x]$ such that, for every $i\geqslant 1$: $$\mu'_i(h)<\mu'(h).$$
Then the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is not bounded in $G_1$.
\end{prop}
\noindent\textit{Proof}: By Proposition 37 of \cite{spivaherrera} (Proposition 11.2 of \cite{spiva}), the sequence $\lbrace\delta_i(h)\rbrace_{i\geqslant 1}$ is decreasing, so there exists $i_0\geqslant 1$ such that $\delta_{i_0+t}(h)=\delta_{i_0}(h)$ for every $t\in\mathbb N$. Write $\delta$ for this common value. If $h=\sum\limits_{j=0}^{s_{i}}c_{j,i}Q_{i}^j$ denotes the $i$-standard expansion of $h$ for $i\geqslant i_0$ then, by Proposition 37 of \cite{spivaherrera} (Proposition 11.2 of \cite{spiva}), $\mu'_i(h)=\delta\beta_i+\mu'\left(c_{\delta,i}\right)$ and $\mu'\left(c_{\delta,i}\right)$ is independent of $i$. It therefore suffices to show that the sequence $\lbrace\mu'_i(h)\rbrace_{i\geqslant 1}$ is not bounded.
\\Write:
\[\mu_i^+(h)=\min\left\lbrace\mu'\left.\left(c_{j,i} Q_i^j\right)\:\right\vert\:\delta<j\leqslant s_i\right\rbrace,\]
\[\varepsilon_i(h)=\min\left\lbrace j\in\lbrace\delta+1,...,s_i\rbrace\:\left\vert\:\mu'\left(c_{j,i} Q_i^j\right)=\mu_i^+(h)\right.\right\rbrace.\]
Again by Proposition 37 of \cite{spivaherrera} (Proposition 11.2 of \cite{spiva}), the sequence $\lbrace\varepsilon_i(h)\rbrace_{i\geqslant i_0}$ is decreasing, so there exists $i_1\geqslant i_0$ from which on this sequence is constant. Write $c_{\delta,i_1}^*\in K[x]$ for the unique polynomial of degree strictly less than $d^{\:\circ}\left(Q_{i_0}\right)=d^{\:\circ}\left(Q_{i_1}\right)$ such that $c_{\delta,i_1}^*c_{\delta,i_1}-1$ is divisible by $Q_{i_1}$ in $K[x]$. One can show that $\mu'_i\left(c_{\delta,i_1}^*\right)=\mu'\left(c_{\delta,i_1}^*\right)$ for every $i\geqslant i_1$. Multiplying $h$ by $c_{\delta,i_1}^*$ does not affect $\delta$, so it changes nothing in the problem. We may therefore assume that $in_{\mu'}\left(c_{\delta,i}\right)=in_{\mu'_i}\left(c_{\delta,i}\right)=1$ for every $i\geqslant i_1$.
\\Suppose that $h=Q_{i_1+1}$; recall that we are in the situation where $Q_{i+1}=Q_i+z_i$ for $i\geqslant i_1\geqslant i_0$. The $z_i$ are not unique; a possible choice of $z_i$, for $i=i_1$, is:
\[z_{i_1}=\dfrac{c_{\delta-1,i_1}}{\delta}.\]
By the definition of $z_{i_1}$, $\mu'\left(z_{i_1}\right)=\beta_{i_1}$ and $\beta_{i_1}<\beta_{i_1+1}$. By induction on $t\in\mathbb N$, one constructs $Q_{i_1+t}$.
\\It remains to show that the property ``$\lbrace\mu'(Q_i+z_i+...+z_l)\rbrace_l$ is not bounded'' does not depend on the choice of the $z_i,...,z_l$, $i\leqslant l$. Indeed, suppose we have constructed another sequence of the form $\lbrace\mu'(Q_i+z'_i+...+z'_{l'})\rbrace_{l'}$. If for every $l$ there exists $l'$ such that $\mu'(Q_i+z_i+...+z_{l})<\mu'(Q_i+z'_i+...+z'_{l'})$, then the sequence $\lbrace\mu'(Q_i+z'_i+...+z'_{l'})\rbrace_{l'}$ cannot be bounded, for otherwise the sequence $\lbrace\mu'(Q_i+z_i+...+z_l)\rbrace_l$ would be bounded, contradicting the initial hypothesis. So suppose that there exists $l$ such that, for every $l'$, $\mu'(Q_i+z'_i+...+z'_{l'})<\mu'(Q_i+z_i+...+z_l)$. By Proposition 9.29 of \cite{spiva}, there exists an expansion $Q_i+z'_i+...+z'_{l'}+z''_{l'+1}+...+z''_{l''}$ such that $Q_i+z''_i+...+z''_{l''}=Q_i+z_i+...+z_l$. One can thus construct a third sequence which is not bounded.
\\Since $car(k_\mu)=0$, the prime subfield of $K$ is $\mathbb Q$; consider then the $\mathbb Q$-subalgebra $A$ of $K$ generated by all the coefficients of $Q_{i_1}$, so that $Q_{i_1+t}\in A\left[x\right]$ for every $t\in\mathbb N$. The ring $A$ is noetherian, hence so is $A\left[x\right]$. The valuation $\mu'_{\vert\:A[x]}$ is then centered at $A[x]$ and $\left\lbrace \mu'_{\vert\:A[x]}\left(Q_{i_1+t}\right)\right\rbrace_{t\in\mathbb N}\subset G_1$, with $G_1$ of rank $1$. Applying Lemma \ref{pasdesuitecroissantebornee}, we deduce that the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ cannot be bounded in $G_1$.\\\qed
\begin{coro}\label{polycleencar0}
If $car(k_\mu)=0$, there exists a $1$-complete set of key polynomials $\lbrace Q_i\rbrace_{i\in\Lambda}$ such that $\Lambda$ is either a finite set or $\mathbb N^*$. In particular, there are no limit key polynomials for valuations of rank $1$ whose residue field has characteristic zero.
\end{coro}
\noindent\textit{Proof}: We apply the construction process of \cite{spiva}, \textsection 9 and \cite{spivaherrera}. If there exists $i_0\in\mathbb N$ such that $\beta_{i_0}\notin G_1$, we set $\Lambda=\lbrace 1,...,i_0-1\rbrace$ and, by definition, $\lbrace Q_i\rbrace_{i\in\Lambda}$ is $1$-complete. Otherwise, $\beta_i\in G_1$ for every $i\in\mathbb N$ and we set $\Lambda=\mathbb N^*$. If $\sharp\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace=+\infty$, then by Proposition \ref{sialphainfini} the set $\lbrace Q_i\rbrace_{i\in\Lambda}$ is $1$-complete. If $\sharp\lbrace i\geqslant 1\:\vert\:\alpha_i>1\rbrace<+\infty$, then by Proposition \ref{sialphafini} the sequence $\lbrace\beta_i\rbrace_{i\geqslant 1}$ is not bounded in $G_1$, and therefore, by Proposition \ref{sibornealorscomplet}, the set $\lbrace Q_i\rbrace_{i\in\Lambda}$ is a $1$-complete set of key polynomials.\\\qed
\section{Preliminaries}
\indent Local blow-ups are an essential tool for obtaining a local uniformization result. These blow-ups depend on the choice among the various possible regular parameters of the target ring. Framed local blow-ups impose a system of generators of the maximal ideal of the target, which makes it possible to decrease certain invariants (the embedding dimension and the rational rank of a subgroup of the divisible hull of the value group of the valuation).
\\For the proofs of all the results, one may consult \cite{spiva}: \textsection 5, \textsection 6, \textsection 7, \textsection 8, as well as Chapter I of \cite{jcthese}.
\subsection{Framed local sequences}\label{sectioneclatencad}
~\smallskip ~\\ \indent Let $(R,\mathfrak{m},k)$ be a noetherian local ring. Write:
\[u=(u_1,...,u_n)\]
for a set of generators of $\mathfrak m$. For a subset $I\subset\lbrace 1,...,n\rbrace$, write:
\[u_I=\lbrace u_i\:\vert\:i\in I\rbrace.\]
Fix a subset $J\subset\lbrace 1,...,n\rbrace$ and an element $j\in J$. Write:
\[J^c=\lbrace 1,...,n\rbrace\setminus J.\]
For every $i\in \lbrace 1,...,n\rbrace$, consider the following change of variables:
\[ u'_i=\left \{ \begin{array}{ccl} u_i & \textup{if} & i\in J^c\cup\lbrace j\rbrace \\ \frac{u_i}{u_j} & \textup{if} & i\in J\setminus\lbrace j\rbrace \end{array} \right.\]
We then write:
\[u'=(u'_1,...,u'_n).\]
Recall that, for $f\in R$, the \textbf{annihilator of} $\boldsymbol f$, denoted $Ann_R(f)$, is the ideal of $R$ defined by:
\[Ann_R(f)=\lbrace g\in R\:\vert\:gf=0\rbrace.\]
For every $i\in \lbrace 1,...,n\rbrace$, write:
\[Ann_R\left(u_i^\infty\right)=\bigcup\limits_{l\geqslant 1}Ann_R\left(u_i^l\right),\]
\[R_i=R/Ann_R\left(u_i^\infty\right)\textup{ and }R'=R_j\left[u'_{J\setminus\lbrace j\rbrace}\right].\]
Let $(R^{(1)},\mathfrak{m}^{(1)},k^{(1)})$ denote the localization of the ring $R'$ at a prime ideal of $R'$.
\begin{rem}
\textup{The scheme $Spec\left(R'\right)$ is an affine subscheme of the blow-up of $Spec(R)$ along the ideal $\left(u_J\right)$.}
\end{rem}
\indent Finally, we partition $\lbrace 1,...,n\rbrace$ as follows:
\[J^\times=\lbrace i\in J\setminus\lbrace j\rbrace\:\vert\:u'_i\in R^{(1)\times}\rbrace,\]
\[J^{\times c}=\lbrace i\in J\setminus\lbrace j\rbrace\:\vert\:u'_i\not\in R^{(1)\times}\rbrace.\]
We thus have:
\[\lbrace 1,...,n\rbrace=J^c\amalg J^\times \amalg J^{\times c}\amalg\lbrace j\rbrace,\]
\[u'=u'_{J^c}\cup u'_{J^\times}\cup u'_{J^{\times c}}\cup \lbrace u'_j\rbrace,\]
where the unions in the last equality are disjoint if $R$ is a regular ring with $u$ as a regular system of parameters.
\\Write $u^{(1)}=\left(u^{(1)}_1,...,u_{n_1}^{(1)}\right)$ for a system of generators of $\mathfrak m^{(1)}$ and \[\pi:(R,u)\rightarrow \left(R^{(1)},u ^{(1)}\right)\] for the natural morphism between these two local rings.
\begin{defi}\label{defeclatloc}
We say that $\pi:(R,u)\rightarrow \left(R^{(1)},u ^{(1)}\right)$ is a \textbf{framed blow-up} of $(R,u)$ if $n_1\leqslant n$ and if there exists a subset $D_1\subset\lbrace 1,...,n_1\rbrace$ such that:
\[u'_{\lbrace 1,...,n\rbrace\setminus J^\times}=u'_{J^c\cup J^{\times c}\cup\lbrace j\rbrace}=u_{D_1}^{(1)}.\]
If moreover $R$ is regular, $u$ is a regular system of parameters of $R$ and $J^\times=\emptyset$ (that is, if $n=n_1$ and $u'=u_{D_1}^{(1)}$), we say that $\pi$ is a \textbf{monomial blow-up}.
\\Finally, a \textbf{framed local sequence} is a sequence of the form:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}, \]
where each $\pi_i:\left( R^{(i)},u^{(i)}\right)\rightarrow\left( R^{(i+1)},u^{(i+1)}\right)$, $0\leqslant i\leqslant l-1$, is a framed blow-up. If moreover all the $\pi_i$ are monomial blow-ups, we say that the sequence is \textbf{monomial}.
\end{defi}
\begin{defi}
Let $\pi:(R,u)\rightarrow \left(R^{(1)},u ^{(1)}\right)$ be a framed blow-up and $T\subset\lbrace 1,...,n\rbrace$. Suppose that $R$ is regular and that $u$ is a regular system of parameters of $R$.
\\We say that $\pi$ is \textbf{independent of} $\boldsymbol{u_T}$ if $T\cap J=\emptyset$ (that is, $T\subset J^c$).
\end{defi}
\begin{rem}
\textup{If a framed blow-up is independent of $u_T$, then: \[u_T\subset\left\lbrace u^{(1)}_1,...,u_{n_1}^{(1)}\right\rbrace.\] }
\end{rem}
We define independence for a framed local sequence by induction, assuming it already defined for sequences of length $l-1$.
\begin{defi}
A framed local sequence of the form:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
is said to be \textbf{independent of} $\boldsymbol{u_T}$ if it satisfies the following two conditions:
\begin{enumerate}
\item the sequence $\pi_{l-2}\circ...\circ\pi_{0}$ is independent of $u_T$;
\item if $u_T\subset\left\lbrace u^{(i)}_1,...,u_{n_i}^{(i)}\right\rbrace$, $0\leqslant i\leqslant l-1$, then $\pi_{l-1}$ is independent of $u_T$.
\end{enumerate}
\end{defi}
\begin{rem}\label{matriceeclat}
\textup{Let $q\in\lbrace 1,...,n\rbrace$; we can write $u'_q$ in the form:
\[u'_q=u_1^{m_{1,q}}...u_{n}^{m_{n,q}},\]
where $m_{p,q}\in\mathbb Z$, $p\in\lbrace 1,...,n\rbrace$. The change of variables $u\rightarrow u'$ is then given by the matrix $M=(m_{p,q})_{p,q}\in SL_n(\mathbb Z)$ with, by definition:
\[ m_{p,q}=\left \{ \begin{array}{rcl} 1 & \textup{if} & p=q \\ -1 & \textup{if} & p=j\textup{ and }q\in J\setminus\lbrace j\rbrace
\\ 0 & \textup{otherwise} & \end{array} \right.\]
In particular, if $q\in J^c$ then $u'_{q}\in u_{J^c}$, and if $q\in J$ then $u'_q$ is a monomial in $u_J$.
\\Likewise, the change of variables $u'\rightarrow u$ is given by the matrix $N=(n_{p,q})_{p,q}=M^{-1}\in SL_n(\mathbb Z)$ with:
\[ n_{p,q}=\left \{ \begin{array}{clc} 1 & \textup{if} & \textup{either }p=q\textup{, or }p=j\textup{ and }q\in J\\ 0 & \textup{otherwise} & \end{array} \right.\]
In particular, if $q\in J^c$ then $u_q\in u'_{J^c}$, and if $q\in J$ then $u_q$ is a monomial in $u'_J$.
\\Writing $e=\#(J^c\cup J^{\times c}\cup\lbrace j\rbrace)$, we deduce that there exist $\beta_q\in\mathbb N^e$ and $z_q\in R'^\times$ such that:
\[u_q=\left(u'_{J^c\cup J^{\times c}\cup\lbrace j\rbrace}\right)^{\beta_q}z_q.\]
Moreover, if $q\in J$, then $\left(u'_{J^c\cup J^{\times c}\cup\lbrace j\rbrace}\right)^{\beta_q}$ is a monomial in $u'_{J^{\times c}\cup\lbrace j\rbrace}$ only. We also have:
\[\mathfrak mR'=\left(u_{J^c\cup\lbrace j\rbrace}\right)R'.\]
Finally, if $J^\times=\emptyset$, then $z_q=1$.
\\To end this remark, let us examine the case where the framed blow-up is independent of a subset. Let $T\subset J^c$; write:
\[t=\#(T)\textup{ and } r=n-t.\]
Let $v=\lbrace v_1,...,v_t\rbrace=u_T$, $w=\lbrace w_1,...,w_r\rbrace=u_{\lbrace 1,...,n\rbrace\setminus T}$ and $u'=(v,w')$ where $w'=\lbrace w'_1,...,w'_r\rbrace$. For $1\leqslant q\leqslant r$, we write:
\[w'_q=w^{\gamma_q},\]
where $\gamma_q\in\mathbb Z^r$. Then the $r$ vectors $\gamma_1,...,\gamma_r$ form a matrix in $SL_r(\mathbb Z)$, denoted $M_r$. Up to renumbering the rows of $M$, we can write $M$ as a block diagonal matrix with one block equal to $M_r$ and the other equal to $I_t$, the identity matrix of size $t$.
\\Thus, for every $\delta\in\mathbb Z^r$, we have:
\[w'^\delta=w^\gamma,\:\gamma=\delta M_r.\]
Likewise, for the inverse change of variables, for every $\gamma\in\mathbb Z^r$, we have:
\[w^\gamma=w'^\delta,\:\delta=\gamma M_r^{-1}.\]}
\end{rem}
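As a minimal illustration (ours): for $n=2$, $J=\lbrace 1,2\rbrace$ and $j=1$, so that $u'_1=u_1$ and $u'_2=\frac{u_2}{u_1}$, we get
\[M=\left(\begin{array}{rr} 1 & -1\\ 0 & 1\end{array}\right),\qquad N=M^{-1}=\left(\begin{array}{rr} 1 & 1\\ 0 & 1\end{array}\right),\]
the columns encoding $u'_2=u_1^{-1}u_2$ and, inversely, $u_2=u'_1u'_2$; both matrices indeed lie in $SL_2(\mathbb Z)$.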
We now generalize this remark to framed local sequences.
\begin{prop}\label{propgenerem}
Consider a framed local sequence of the form:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}. \]
For $0\leqslant i\leqslant l-1$, write $n_{i+1}$ for the integer corresponding to the integer $n_1$ of Definition \ref{defeclatloc}, $D_{i+1}$ for the set corresponding to $D_1$, and $e_{i+1}=\#(D_{i+1})$.
\\Let $0\leqslant i<i'\leqslant l$, $q\in\lbrace 1,...,n_{i}\rbrace$, $q'\in\lbrace 1,...,n_{i'}\rbrace$. Then:
\begin{enumerate}
\item There exist $\delta_{q}^{(i',i)}\in\mathbb N^{e_{i'}}$ and $z_{q}^{(i',i)}\in R^{(i')\times}$ such that $u_q^{(i)}=\left(u_{D_{i'}}^{(i')}\right)^{\delta_{q}^{(i',i)}}z_{q}^{(i',i)}$.
\item Suppose moreover that the sequence is independent of $u_T$, with $T\subset\lbrace 1,...,n\rbrace$ and $u_q^{(i)}\not\in u_T$. Then $\left(u_{D_{i'}}^{(i')}\right)^{\delta_{q}^{(i',i)}}$ is a monomial in $u_{D_{i'}}^{(i')}\setminus u_T$ only.
\item Suppose that $D_{i''}=\lbrace 1,...,n_{i''}\rbrace$ for every $i''>0$ with $i\leqslant i''<i'$, and that $q'\in D_{i'}$. Then there exists $\gamma_{q'}^{(i,i')}\in\mathbb Z^{n_i}$ such that $u_{q'}^{(i')}=\left(u^{(i)}\right)^{\gamma_{q'}^{(i,i')}}$.
\item Suppose moreover that the sequence is independent of $u_T$, with $T\subset\lbrace 1,...,n\rbrace$ and $u_{q'}^{(i')}\not\in u_T$. Then $u_{q'}^{(i')}$ is a monomial in $u_{\lbrace 1,...,n_i\rbrace}^{(i)}\setminus u_T$ only.
\end{enumerate}
\end{prop}
\noindent\textit{Proof}: It suffices to treat the case $i'=i+1$, the general case following by induction. And this case is merely an application of the definitions and of Remark \ref{matriceeclat}.\\\qed
\begin{prop}\label{propmatriceeclat}
Keep the hypotheses of Proposition \ref{propgenerem} and suppose moreover that the framed local sequence is monomial and independent of $u_T$, $T\subset\lbrace 1,...,n\rbrace$. Write $t=\#(T)$ and $r=n-t$. Set:
\[v=\lbrace v_1,...,v_t\rbrace=u_T,\]
\[w=\lbrace w_1,...,w_r\rbrace=u_{\lbrace 1,...,n\rbrace\setminus T}.\]
Then:
\begin{enumerate}
\item $\forall\:i\in [\![0,l]\!],\:n_i=n$.
\item $\forall\:i\in \:]\!]0,l]\!],\:D_i=\lbrace 1,...,n\rbrace$.
\item For $0\leqslant i<i'\leqslant l$, write $u^{(i)}=\left(v,w^{(i)}\right)$ where $w^{(i)}=\left(w_1^{(i)},...,w_r^{(i)}\right)$, and $u^{(i')}=\left(v,w^{(i')}\right)$ where $w^{(i')}=\left(w_1^{(i')},...,w_r^{(i')}\right)$. Then, for every $1\leqslant q\leqslant r$, $w_q^{(i)}$ is a monomial in $w^{(i')}$ with non-negative exponents.
\item For $1\leqslant q\leqslant r$, write $w_q^{(i')}=\left(w^{(i)}\right)^{\gamma_q}$, $\gamma_q\in\mathbb Z^r$. Then the $r$ column vectors $\gamma_1,...,\gamma_r$ form a matrix $F_r^{(i',i)}\in SL_r(\mathbb Z)$. Conversely, writing $w_q^{(i)}=\left(w^{(i')}\right)^{\delta_q}$, $\delta_q\in\mathbb N^r$, the $r$ column vectors $\delta_1,...,\delta_r$ form the matrix $\left(F_r^{(i',i)}\right)^{-1}\in SL_r(\mathbb Z)$.
\end{enumerate}
\end{prop}
\noindent\textit{Proof}: As in the proof of Proposition \ref{propgenerem}, it suffices to treat the case $i'=i+1$, the general case following by induction (noting that $SL_r(\mathbb Z)$ is a group). And this case is merely an application of the definitions and of Remark \ref{matriceeclat}.\\\qed
\subsection{Construction of a framed local blow-up}
~\smallskip ~\\ \indent We keep the notation of Subsection \ref{sectioneclatencad}. We now define a framed blow-up $\pi:(R,u,k)\rightarrow\left(R^{(1)},u^{(1)},k^{(1)}\right)$ that will be very useful later on. We describe, in terms of generators and relations, the field extension $k\hookrightarrow k^{(1)}$ induced by $\pi$.
\\\indent Recall that $R'$ is the ring:
\[R'=R_j\left[u'_{J\setminus\lbrace j\rbrace}\right].\]
Write:
\begin{align*}
h&=\#(J),\\h^c&=\#\left(J^c\right)=n-h,\\h^{\times c}&=\#\left(J^{\times c}\right)+1,\\h^\times&=\#\left(J^\times\right)=h-h^{\times c}.
\end{align*}
Renumbering the variables if necessary, we may assume that:
\begin{align*}
J&=\lbrace 1,...,h\rbrace,\\J^c&=\lbrace h+1,...,n\rbrace,\\j&=1,\\J^{\times c}&=\lbrace 2,...,h^{\times c}\rbrace,\\J^\times&=\lbrace h^{\times c}+1,...,h\rbrace.
\end{align*}
The change of variables then becomes:
\[ u'_i=\left \{ \begin{array}{ccl} u_i & \textup{if} & i\in \lbrace 1\rbrace\cup\lbrace h+1,...,n\rbrace \\ \frac{u_i}{u_1} & \textup{if} & i\in\lbrace 2,...,h\rbrace \end{array} \right.\]
As seen above, take $\mathfrak m'\in Spec(R')$ such that $u'_{J^c\cup J^{\times c}\cup\lbrace j\rbrace}\subset\mathfrak m'$; then $R^{(1)}=R'_{\mathfrak m'}$ and $\mathfrak m^{(1)}=\mathfrak m'R^{(1)}$. Moreover, $\mathfrak m=\mathfrak m^{(1)}\cap R=\mathfrak m'\cap R$.
\\For $1\leqslant i\leqslant n$, write $z_i\in k^{(1)}$ for the image of $u'_i\in R'$ in $k^{(1)}$. We then note that:
\[z_i=0,\:\forall\:i\in J^c\cup J^{\times c}\cup\lbrace j\rbrace.\]
\begin{rem}\label{engparjc}
\textup{Write $\overline R=R'/\mathfrak m R'$ and $\overline{u_i}\in\overline R$ for the image of $u'_i\in R'$ in $\overline R$, $i\in J\setminus\lbrace j\rbrace$; then $\overline R=k\left[\overline{u_{J^{\times c}}},\overline{u_{J^\times}}^{\:\pm\:1}\right].$ The elements $\overline{u_{J^{\times c}}}$ and $\overline{u_{J^\times}}^{\:\pm\:1}$ are algebraically independent over $k$ when $R$ is regular with $u$ as a regular system of parameters. Now, we have the morphisms:
\[R\rightarrow R'\rightarrow R'_{\mathfrak m'}\rightarrow k^{(1)}.\]
Passing modulo $\mathfrak m$, we obtain:
\[k\rightarrow \overline R\rightarrow \overline R_{\overline{\mathfrak m}}\rightarrow k^{(1)},\]
where $\overline{\mathfrak m}=\mathfrak m'/\mathfrak m R'$. We deduce that $k^{(1)}$ is generated over $k$, as a field, by $z_{J^\times}$.}
\end{rem}
Write $t=deg.tr\left(k^{(1)}\vert k\right)+h^{\times c}$; by Remark \ref{engparjc}: \[deg.tr\left(k^{(1)}\vert k\right)\leqslant h^\times.\]
We deduce the inequalities:
\[h^{\times c}\leqslant t\leqslant h^\times+h^{\times c}=h\leqslant n.\]
Moreover, we may assume that $z_{h^{\times c}+1},...,z_t$ are algebraically independent over $k$ in $k^{(1)}$, while $z_{t+1},...,z_h$ are algebraic over $k\left(z_{h^{\times c}+1},...,z_t\right)$.
\\For $t< i\leqslant h$, write $P_i(X_i)$ for the minimal polynomial of $z_i$ over $k\left(z_{h^{\times c}+1},...,z_{i-1}\right)$. We have the isomorphism:
\[k^{(1)}\simeq \dfrac{k\left(z_{h^{\times c}+1},...,z_t\right)\left[X_{t+1},...,X_h\right]}{\left(P_{t+1}(X_{t+1}),...,P_h(X_h)\right)}.\]
Clearing denominators if necessary, for $t< i\leqslant h$ we may choose $P_i\in k\left[z_{h^{\times c}+1},...,z_{i-1}\right]\left[X_i\right]$, but the $P_i$ are then no longer monic. Write:
\[P_i(X_i)=\sum\limits_m p_{i,m}X_i^m,\]
where $p_{i,m}\in k\left[z_{h^{\times c}+1},...,z_{i-1}\right]$, $t< i\leqslant h$. Then write $q_{i,m}$ for the element of $R\left[u'_{h^{\times c}+1},...,u'_{i-1}\right]$ obtained from $p_{i,m}$ by replacing each $z_{i'}$ by $u'_{i'}$, $h^{\times c}<i'<i$, and by replacing each coefficient of $p_{i,m}$ by a representative in $R$ (we view $p_{i,m}$ as a polynomial in the $z_{i'}$ with coefficients in $k=R/\mathfrak m$).
\\In particular, note that $p_{i,m}\equiv q_{i,m}\mod\mathfrak m^{(1)}$. Finally, write:
\[Q_i(X)=\sum\limits_m q_{i,m}X^m.\]
For $t< i\leqslant h$, since $P_i(z_i)=0$ in $k^{(1)}$, we deduce that:
\[Q_i(u'_i)\in\mathfrak m^{(1)}.\]
\begin{prop}\label{propconstruceclatencad}
Write $n_1=n-t+h^{\times c}$ and define the following change of variables:
\[ u_i^{(1)}=\left \{ \begin{array}{lcl} Q_{i+n-n_1}(u'_{i+n-n_1}) & \textup{if} & h^{\times c}< i\leqslant h-(n-n_1) \\ u'_i & \textup{if} & 1\leqslant i\leqslant h^{\times c}\\ u'_{i+n-n_1} & \textup{if} & h-(n-n_1)< i \leqslant n_1 \end{array} \right.\]
Then:
\begin{enumerate}
\item $u^{(1)}=\left(u_1^{(1)},...,u_{n_1}^{(1)}\right)$ is a system of generators of $\mathfrak m^{(1)}$.
\item $\pi:(R,u)\rightarrow\left(R^{(1)},u^{(1)}\right)$ is a framed local blow-up.
\item If $R$ is regular with $u$ as a regular system of parameters, then $u^{(1)}$ is a regular system of parameters of $R^{(1)}$.
\end{enumerate}
\end{prop}
\noindent\textit{Proof}: We sketch the proof. For (1), it suffices to note that, by construction:
\[u_i^{(1)}\in\mathfrak m^{(1)},\:1\leqslant i\leqslant n_1.\]
Conversely, by Remark \ref{matriceeclat}:
\[\mathfrak m R^{(1)}=\left(u_1^{(1)},u_{h+1-(n-n_1)}^{(1)},...,u_{n_1}^{(1)}\right)R^{(1)}\subset \left(u_1^{(1)},...,u_{n_1}^{(1)}\right)R^{(1)}.\]
Recall that $\overline{u_2},...,\overline{u_{h^{\times c}}},Q_{t+1}\left(\overline{u_{t+1}}\right),...,Q_{h}\left(\overline{u_h}\right)$ are the images of $u_2^{(1)},...,u_{n_1}^{(1)}$ in $k\left(z_{h^{\times c}+1},...,z_t\right)\left[\overline{u_2},...,\overline{u_{h^{\times c}}},\overline{u_{t+1}},...,\overline{u_h}\right]$; in particular, they are elements of $\overline{\mathfrak m}\overline{R}_{\overline{\mathfrak m}}$. Moreover, $\overline{u_2},...,\overline{u_{h^{\times c}}},Q_{t+1}\left(\overline{u_{t+1}}\right),...,Q_{h}\left(\overline{u_h}\right)$ generate a maximal ideal of $k\left(z_{h^{\times c}+1},...,z_t\right)\left[\overline{u_2},...,\overline{u_{h^{\times c}}},\overline{u_{t+1}},...,\overline{u_h}\right]$, since:
\[\dfrac{k\left(z_{h^{\times c}+1},...,z_t\right)\left[\overline{u_2},...,\overline{u_{h^{\times c}}},\overline{u_{t+1}},...,\overline{u_h}\right]}{\left(\overline{u_2},...,\overline{u_{h^{\times c}}},Q_{t+1}\left(\overline{u_{t+1}}\right),...,Q_{h}\left(\overline{u_h}\right)\right)}\simeq\dfrac{k\left(z_{h^{\times c}+1},...,z_{t}\right)\left[\overline{u_{t+1}},...,\overline{u_h}\right]}
{\left(Q_{t+1}\left(\overline{u_{t+1}}\right),...,Q_{h}\left(\overline{u_h}\right)\right)}\simeq k^{(1)}.\]
Furthermore:
\begin{align*}
\overline{R}_{\overline{\mathfrak m}}&\simeq k\left[\overline{u_2},...,\overline{u_h}\right]_{\overline{\mathfrak{m}}}
\\&\simeq k\left(z_{h^{\times c}+1},...,z_t\right)\left[\overline{u_2},...,\overline{u_{h^{\times c}}},\overline{u_{t+1}},...,\overline{u_h}\right]_{\overline{\mathfrak m}k\left(z_{h^{\times c}+1},...,z_t\right)\left[\overline{u_2},...,\overline{u_{h^{\times c}}},\overline{u_{t+1}},...,\overline{u_h}\right]}.
\end{align*}
All of this shows that the images of $u_{2}^{(1)},...,u_{n_1}^{(1)}$ generate the maximal ideal $\overline{\mathfrak m}\overline{R}_{\overline{\mathfrak m}}$ of $\overline{R}_{\overline{\mathfrak m}}$. By the definition of $\overline{R}$ and of ${\overline{\mathfrak m}}$, we deduce that $u_1^{(1)},...,u_{n_1}^{(1)}$ generate the ideal $\mathfrak m'R'_{\mathfrak m'}=\mathfrak m^{(1)}$ in $R'_{\mathfrak m'}=R^{(1)}$.
\\By definition, (2) is immediate, the set $D_1$ being:
\[D_1=\lbrace 1,...,h^{\times c}\rbrace\cup\lbrace h-(n-n_1)+1,...,n_1\rbrace.\]
To prove (3), note that, $R$ being regular with $u$ as a regular system of parameters, $\overline R$ is regular and $\overline{u_2},...,\overline{u_{h^{\times c}}},Q_{t+1}\left(\overline{u_{t+1}}\right),...,Q_{h}\left(\overline{u_h}\right)$ form a regular system of parameters of the regular local ring $\overline{R}_{\overline{\mathfrak m}}$, which has dimension $h-(n-n_1)-1$. Finally, one shows by induction on $n-h$ that:
\[(0)\subsetneq \left(u_1^{(1)}\right)\subsetneq\left(u_1^{(1)},u_{h-(n-n_1)+1}^{(1)}\right)\subsetneq ... \subsetneq \left(u_1^{(1)},u_{h-(n-n_1)+1}^{(1)},...,u_{n_1}^{(1)}\right)\]
is a chain of $n-h+1$ distinct prime ideals of $R^{(1)}$.\\\qed
\\ \\\indent Pour terminer, nous allons interpr\'eter les r\'esultats pr\'ec\'edents en termes d'\'eclatements encadr\'es par rapport \`a une valuation donn\'ee.
\\\indent Let $(R,\mathfrak m,k)$ be a noetherian local ring, $u$ a set of generators of $\mathfrak m$ and $\nu$ a valuation centered on $R$. For $1\leqslant i\leqslant n$, write:
\[\beta_i=\nu(u_i),\]
\[x_i=in_\nu(u_i).\]
Let $T\subset\lbrace 1,...,n\rbrace$, $E=\lbrace 1,...,n\rbrace\setminus T$ and let $k\left[x_E\right]$ be the corresponding graded subalgebra of $G_\nu$. Write:
\[G=k\left[x_E\right]^{*}=\left\lbrace \left.\dfrac{f}{g}\:\right|\:f,g\in k\left[x_E\right],\:g\textup{ homogeneous},\:g\neq 0\right\rbrace.\]
Consider $J\subset E$ and choose $j\in J$ such that:
\[\beta_j=\min_{i\in J}\lbrace \beta_i\rbrace.\]
Let $\pi:(R,\mathfrak m)\rightarrow\left(R^{(1)},\mathfrak m^{(1)}\right)$ be a local blowup with respect to $\nu$ (see Definition \ref{defeclatlocal}) and regard $R^{(1)}$ as the localization of $R'$ at the center of $\nu$. We then have:
\[J^{\times c}=\lbrace i\in J\:\vert\:\beta_i>\beta_j\rbrace,\]
\[J^\times=\lbrace i\in J\setminus\lbrace j\rbrace\:\vert\:\beta_i=\beta_j\rbrace.\]
Now write:
\[ x'_i=\left \{ \begin{array}{ccl} x_i & \textup{if} & i\in J^c\cup\lbrace j\rbrace \\ \frac{x_i}{x_j} & \textup{if} & i\in J\setminus\lbrace j\rbrace \end{array} \right.\]
\[ \overline{z}_i=\left \{ \begin{array}{ccl} u'_i & \textup{if} & i\in J^\times \\ 1 & \textup{if} & i\in \lbrace 1,...,n\rbrace\setminus J^\times \end{array} \right.\]
\[ z_i=\left \{ \begin{array}{ccl} x'_i & \textup{if} & i\in J^\times \\ 1 & \textup{if} & i\in \lbrace 1,...,n\rbrace\setminus J^\times \end{array} \right.\]
\[\beta'_i=ord(x'_i),\: 1\leqslant i\leqslant n,\]
\[E'=E\setminus J^\times.\]
For $1\leqslant i\leqslant n$, $x'_i$ is homogeneous and $ord(x'_i)\geqslant 0$. We have:
\[ord(x'_i)>0\Leftrightarrow\beta_i>\beta_j\Leftrightarrow i\in E',\]
\[ord(z_i)=0,\:\forall\:i\in J^\times.\]
Note that $k^{(1)}=k\left(z_{J^\times}\right)$. Consider the morphism $\rho:R'\rightarrow k^{(1)}$, extending $R\rightarrow k$, defined by sending $u'_i$ to $z_i$ if $i\in J^\times$ and to $0$ if $i\in J^{\times c}$. The ideal $\mathfrak m'=\ker \rho$ is the center of $\nu$ in $R'$ and $R^{(1)}=R'_{\mathfrak m'}$.
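To fix ideas, here is a small computation with data chosen purely for illustration. Take $n=3$, $T=\emptyset$ (so $E=\lbrace 1,2,3\rbrace$), $J=\lbrace 1,2\rbrace$ and $(\beta_1,\beta_2,\beta_3)=(1,1,2)$, so that $j=1$. Then $J^{\times c}=\emptyset$ and $J^{\times}=\lbrace 2\rbrace$; one gets $x'_1=x_1$, $x'_2=\frac{x_2}{x_1}$ with $ord(x'_2)=0$, $x'_3=x_3$, hence $z_2=x'_2$, $E'=\lbrace 1,3\rbrace$ and $k^{(1)}=k(z_2)$. If instead $(\beta_1,\beta_2,\beta_3)=(1,2,2)$, then $J^{\times}=\emptyset$, $J^{\times c}=\lbrace 2\rbrace$, all the $z_i$ equal $1$ and $k^{(1)}=k$.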
\begin{defi}\label{eclatlocparrapnu}
With $u^{(1)}$ as in Proposition \ref{propconstruceclatencad}, the resulting framed local blowup $\pi:(R,\mathfrak m)\rightarrow\left(R^{(1)},\mathfrak m^{(1)}\right)$ is called the \textbf{framed local blowup along} $\boldsymbol{\left(u_J\right)}$ \textbf{with respect to} $\boldsymbol{\nu}$.
\end{defi}
Let $\varphi:D_1\rightarrow J^c\cup J^{\times c}\cup\lbrace j\rbrace$ be the bijection arising from the framed blowup. Write:
\[E^{(1)}=\varphi^{-1}(E')\subset D_1,\]
\[x_i^{(1)}=x'_{\varphi(i)},\]
\[\beta_i^{(1)}=ord\left(x_i^{(1)}\right)=\beta'_{\varphi(i)}.\]
\begin{rem}\label{egalitealggrad}
\textup{For all $i\in E$ and $i'\in E^{(1)}$, there exist $\delta_i\in\mathbb N^{\#(E^{(1)})}$, $\gamma_{i'}\in\mathbb Z^{\#(E)}$ such that:
\[u_{i'}^{(1)}=u_E^{\gamma_{i'}},\]
\[u_i=\left(u_{E^{(1)}}^{(1)}\right)^{\delta_i}\overline{z_i}.\]
The same transformations hold at the level of graded algebras:
\[x_{i'}^{(1)}=x_E^{\gamma_{i'}},\]
\[x_i=\left(x_{E^{(1)}}^{(1)}\right)^{\delta_i}z_i.\]
We also have:
\[\beta_i=\left\langle\delta_i,\beta_{E^{(1)}}^{(1)}\right\rangle,\]
where $\langle .,.\rangle$ denotes the scalar product of vectors of size $\#(E^{(1)})$.
One deduces the following equalities of graded algebras:
\[k[x]^*=k\left[z_{J^\times},x_{D_1}^{(1)}\right]^*=k^{(1)}\left[x_{D_1}^{(1)}\right]^*,\]
\[k[x_E]^*=k\left[z_{J^\times},x_{E^{(1)}}^{(1)}\right]^*=k^{(1)}\left[x_{E^{(1)}}^{(1)}\right]^*.\]}
\end{rem}
\subsection{The implicit prime ideal}
~\smallskip ~\\ \indent For a given valuation, the implicit prime ideal is one of the central objects of local uniformization. Indeed, this is the ideal one has to desingularize. It is an ideal of the completion which encodes the elements of infinite valuation. Via Lemma \ref{lemmetechniquespiva}, in order to make $R$ regular it suffices to make $\widehat{R}_H$ and $\widehat{R}/H$ regular, where $H$ is the implicit prime ideal associated with a valuation centered on $R$. The point of the implicit prime ideal is that $\widehat{R}_H$ is automatically regular under the quasi-excellence hypothesis. It will therefore be enough to prove local uniformization for valuations centered on $\widehat{R}/H$.
\\We refer to \cite{ideimp}, \cite{jcthese} or \cite{spiva} for more details.
\begin{defi}(\cite{ideimp}, Definition 2.1)\label{defiidim}
Let $(R,\mathfrak{m},k)$ be a noetherian local ring, $P_\infty$ a minimal prime ideal of $R$ and $\nu$ a valuation of $R_{P_\infty}$ of rank $1$ centered on $R$. The \textbf{implicit prime ideal} of $R$ associated with $\nu$, denoted by $H(R,\nu)$ or simply $H$ when there is no ambiguity, is the ideal of $\widehat{R}$ defined by:
\[H=\bigcap\limits_{\beta\in\nu(R\setminus P_\infty)}P_\beta\widehat{R},\]
where $P_\beta=\lbrace f\in R\:\vert\:\nu(f)\geqslant\beta\rbrace$.
\end{defi}
\begin{rem}\label{remcauchy}
\textup{\begin{enumerate}
\item If moreover $R$ is a domain, then $P_\infty=(0)$.
\item Since the valuation has rank $1$, the group $\nu(R\setminus P_\infty)$ is archimedean (see the proof of Lemma \ref{pasdesuitecroissantebornee}); hence, for every $\beta\in\nu(R\setminus P_\infty)$, there exists $n\in\mathbb N$ such that $\mathfrak m^n\subset P_\beta$. The following are therefore equivalent:
\begin{enumerate}
\item $f\in H$;
\item there exists a Cauchy sequence $(f_n)_n\subset R$ with $f_n\underset{n\rightarrow +\infty}{\longrightarrow}f$ and $\nu(f_n)\underset{n\rightarrow +\infty}{\longrightarrow}\infty$;
\item for every Cauchy sequence $(f_n)_n\subset R$ with $f_n\underset{n\rightarrow +\infty}{\longrightarrow}f$, we have $\nu(f_n)\underset{n\rightarrow +\infty}{\longrightarrow}\infty$.
\end{enumerate}
\end{enumerate}}
\end{rem}
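The following standard example, included purely as an illustration, shows how nonzero implicit prime ideals arise. Let $R=k[x,y]_{(x,y)}$ and let $s(x)=\sum_{i\geqslant 1}c_ix^i\in k[[x]]$ be a power series transcendental over $k(x)$. Define $\nu$ on $Frac(R)$ by $\nu(f)=ord_x\left(f(x,s(x))\right)$; this is a rank $1$ valuation centered on $R$ with value group $\mathbb Z$. Writing $s_N(x)=\sum_{i\leqslant N}c_ix^i$, the polynomial $y-s_N(x)$ lies in $P_{N+1}$, and $(y-s(x))-(y-s_N(x))\in (x^{N+1})\widehat R\subset P_{N+1}\widehat R$; hence $y-s(x)\in P_{N+1}\widehat{R}$ for every $N$, so $y-s(x)\in H$. One checks that in fact $H=(y-s(x))$, while $H\cap R=(0)$, in accordance with Lemma \ref{intersecimp} below.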
\begin{lem}\label{intersecimp}
Under the hypotheses of Definition \ref{defiidim}, if $H$ is the implicit prime ideal of $R$ associated with $\nu$, then:
\[H\cap R=P_\infty\]
and there is a natural inclusion:
\[R/P_\infty\hookrightarrow\widehat{R}/H.\]
\end{lem}
\begin{thm}(\cite{ideimp}, Theorem 2.1)\label{valetenuniqidealimpl}
Keep the hypotheses of Definition \ref{defiidim} and let $H$ be the implicit prime ideal of $R$ associated with $\nu$. Then:
\begin{enumerate}
\item $H$ is a prime ideal of $\widehat{R}$;
\item $\nu$ extends uniquely to a valuation $\widehat{\nu}$ centered on $\widehat{R}/H$.
\end{enumerate}
\end{thm}
\begin{coro}(\cite{ideimp}, Proposition 2.8)\label{RHestreg}
Let $R$ be a reduced quasi-excellent local ring, $P_\infty$ a minimal prime ideal of $R$ and $\nu$ a valuation of $R_{P_\infty}$ of rank $1$ centered on $R$. Then $\widehat R_H$ is a regular local ring, where $H$ is the implicit prime ideal of $R$ associated with $\nu$.
\end{coro}
\begin{lem}(\cite{ideimp}, Lemma 2.4)\label{implistable}
Let $(R,\mathfrak{m})\rightarrow (R',\mathfrak m')$ be a local morphism of noetherian local rings. Let $P_\infty$ be a minimal prime ideal of $R$ and $\nu$ a valuation of $R_{P_\infty}$ of rank $1$ centered on $R$. Assume that there exists a minimal prime ideal $P'_\infty$ of $R'$ such that $P_\infty=P'_\infty\cap R$ and that $\nu$ extends to a valuation $\nu'$ of rank $1$ whose value group contains that of $\nu$.
\\For $\beta\in\nu'\left(R'\setminus\lbrace 0\rbrace\right)$, write $P'_\beta=\lbrace f\in R'\:\vert\:\nu'(f)\geqslant\beta\rbrace$. Finally, let $H'=H(R',\nu')$ be the implicit prime ideal of $R'$ associated with $\nu'$.
\\Then, for every $\beta\in\nu\left(R\setminus\lbrace 0\rbrace\right)$,
\[\left(P'_\beta\widehat{R'}\right)\cap\widehat R=P_\beta\widehat R.\]
\end{lem}
\begin{coro}(\cite{ideimp}, Corollary 2.5)\label{impeclatloc}
Under the hypotheses of Lemma \ref{implistable}, we have:
\[H'\cap\widehat R=H.\]
\end{coro}
\begin{coro}(\cite{ideimp}, Corollary 2.7)\label{coroht}
Keep the hypotheses of Lemma \ref{implistable} and assume moreover that $(R,\mathfrak{m})\rightarrow (R',\mathfrak m')$ is a local blowup with respect to $\nu$ along a nonzero ideal $J$ of $R$ and that $\nu$ remains of rank $1$ on $R'$. Then:
\[ht(H')\geqslant ht(H)\textit{ and}\]
\[\dim\left(\widehat{R'}/H'\right)\leqslant\dim\left(\widehat{R}/H\right).\]
\end{coro}
\subsection{Monomialization of non-degenerate elements}
~\smallskip ~\\ \indent We now examine the effect of framed blowups on monomials. One consequence will be that a non-degenerate element, that is, an element at which the valuation agrees with the monomial valuation, can be transformed into a monomial by a framed local sequence.
\\This whole part is a special case of Hironaka's game (see \cite{jeuhiro} and \cite{spivajeuhiro}).
\\\indent For an element $\alpha=(\alpha_1,...,\alpha_n)\in \mathbb N^n$, we write:
\[\vert\alpha\vert=\alpha_1+...+\alpha_n.\]
Let $\alpha=(\alpha_1,...,\alpha_n)$, $\gamma=(\gamma_1,...,\gamma_n)\in \mathbb N^n$. For $1\leqslant i\leqslant n$, set:
\[\delta_i=\min\lbrace\alpha_i,\gamma_i\rbrace.\]
Put $\delta=(\delta_1,...,\delta_n)\in \mathbb N^n$, $\tilde\alpha=\alpha-\delta$, $\tilde\gamma=\gamma-\delta$. After possibly exchanging $\alpha$ and $\gamma$, we may assume that $\vert\tilde\alpha\vert\leqslant\vert\tilde\gamma\vert$. We define $\tau(\alpha,\gamma)$ by:
\[\tau(\alpha,\gamma)=\left(\vert\tilde\alpha\vert ,\vert\tilde\gamma\vert\right).\]
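For instance, with exponents chosen only for illustration: take $n=2$, $\alpha=(3,1)$ and $\gamma=(1,4)$. Then $\delta=(1,1)$, $\tilde\alpha=(2,0)$, $\tilde\gamma=(0,3)$, so that $u^\delta=u_1u_2$ is the greatest common monomial divisor of $u^\alpha$ and $u^\gamma$, and, since $\vert\tilde\alpha\vert=2\leqslant 3=\vert\tilde\gamma\vert$, we get $\tau(\alpha,\gamma)=(2,3)$.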
\begin{rem}\label{remtau}
\textup{
\begin{enumerate}
\item If $\tilde\alpha=(0,...,0)$, then $u^\alpha$ divides $u^\gamma$ in $R$.
\item After renumbering the variables of $\tilde\alpha$ and $\tilde\gamma$, we may assume that there exists $a\in\mathbb N$, $1\leqslant a<n$, such that:
\[\tilde\alpha=(\tilde\alpha_1,...,\tilde\alpha_a,\underbrace{0,...,0}_{n-a}),\]
\[\tilde\gamma=(\underbrace{0,...,0}_{a},\tilde\gamma_{a+1},...,\tilde\gamma_n).\]
We may also assume that $\tilde\alpha_i>0$ for $1\leqslant i\leqslant a$.
\end{enumerate}}
\end{rem}
\indent Let $(R,\mathfrak m)$ be a noetherian local ring such that $\mathfrak m$ is not nilpotent and let $u=(u_1,...,u_n)$ be a set of generators of $\mathfrak m$. Let $\nu$ be a valuation centered on $R$ with value group $\Gamma$.
\\Consider $J\subset\lbrace 1,...,n\rbrace$ smallest possible, with respect to inclusion, such that:
\[\lbrace 1,...,a\rbrace\subset J\textup{ and }\sum\limits_{i\in J}\tilde\gamma_i\geqslant\vert\tilde\alpha\vert.\]
With the notation of Definition \ref{defeclatloc}, let $\pi:(R,u)\rightarrow\left(R^{(1)},u^{(1)}\right)$ be a framed blowup along $\left(u_J\right)$, as in Definition \ref{eclatlocparrapnu}. Write:
\[ \tilde\alpha'_i=\left \{ \begin{array}{ccc} \tilde\alpha_i & \textup{if} & i\neq j \\ 0 & \textup{if} & i=j \end{array} \right.\]
\[ \tilde\gamma'_i=\left \{ \begin{array}{ccc} \tilde\gamma_i & \textup{if} & i\neq j \\ \sum\limits_{i'\in J}\tilde\gamma_{i'}-\vert\tilde\alpha\vert & \textup{if} & i=j \end{array} \right.\]
\[\tilde\alpha'=(\tilde\alpha'_1,...,\tilde\alpha'_n),\]
\[\tilde\gamma'=(\tilde\gamma'_1,...,\tilde\gamma'_n),\]
\[\delta'=(\delta_1,...,\delta_{j-1},\delta_j+\vert\tilde\alpha\vert,\delta_{j+1},...,\delta_n).\]
With this notation we obtain:
\[u^\alpha=\left(u'\right)^{\delta'+\tilde\alpha'},\]
\[u^\gamma=\left(u'\right)^{\delta'+\tilde\gamma'}.\]
Set $\alpha'=\delta'+\tilde\alpha'$ and $\gamma'=\delta'+\tilde\gamma'$.
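Let us run one step of this construction on illustrative data. Take $n=2$, $\alpha=\tilde\alpha=(2,0)$, $\gamma=\tilde\gamma=(0,3)$ (so $\delta=0$, $a=1$ and $\tau(\alpha,\gamma)=(2,3)$), and suppose $\beta_1=3$, $\beta_2=2$. The smallest admissible set is $J=\lbrace 1,2\rbrace$, with $j=2$. The framed blowup substitutes $u_1=u'_1u'_2$, $u_2=u'_2$, and the formulas above give $\tilde\alpha'=(2,0)$, $\tilde\gamma'=(0,1)$, $\delta'=(0,2)$, hence $u^\alpha=(u')^{(2,2)}$ and $u^\gamma=(u')^{(0,3)}$, as a direct substitution confirms. The new invariant is $\tau(\alpha',\gamma')=(1,2)<_{lex}(2,3)$, as predicted by the proposition that follows.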
\begin{prop}\label{propdesctau}
With the above notation, we have, for the lexicographic order:
\[\tau(\alpha',\gamma')<_{lex}\tau(\alpha,\gamma).\]
\end{prop}
\noindent\textit{Proof}: We only sketch the argument. It suffices to show that:
\[\left(\vert\tilde\alpha'\vert,\vert\tilde\gamma'\vert\right)<_{lex}\left(\vert\tilde\alpha\vert,\vert\tilde\gamma\vert\right).\]
If $j\in\lbrace 1,...,a\rbrace$, then by definition and by Remark \ref{remtau} we have:
\[\vert\tilde\alpha'\vert=\vert\tilde\alpha\vert-\tilde\alpha_j<\vert\tilde\alpha\vert.\]
If $j\in\lbrace a+1,...,n\rbrace$, then $\vert\tilde\alpha'\vert=\vert\tilde\alpha\vert$. By the minimality of $J$, we get:
\[\sum\limits_{i\in J\setminus\lbrace j\rbrace}\tilde\gamma_i<\vert\tilde\alpha\vert.\]
We conclude that $\vert\tilde\gamma'\vert<\vert\tilde\gamma\vert$.\\\qed
\begin{coro}\label{procesusstop}
Let $s=\#\lbrace i\in\lbrace 1,...,n\rbrace\:\vert\:u'_i\not\in R^{(1)}{}^\times\rbrace$. After renumbering the variables, we may assume that $u'_i$ is not invertible in $R^{(1)}$ for $1\leqslant i\leqslant s$, and invertible for $s< i\leqslant n$.
\\Since $\pi$ is a framed blowup, $\lbrace u'_1,...,u'_s\rbrace\subset u^{(1)}$. After renumbering the variables, we may assume that $u'_i=u_i^{(1)}$, $1\leqslant i\leqslant s$. Define the vectors of size $n_1$:
\[\alpha^{(1)}=(\tilde\alpha'_1,...,\tilde\alpha'_s,\underbrace{0,...,0}_{n_1-s}),\]
\[\gamma^{(1)}=(\tilde\gamma'_{1},...,\tilde\gamma'_s,\underbrace{0,...,0}_{n_1-s}).\]
Then we have:
\[\tau\left(\alpha^{(1)},\gamma^{(1)}\right)<_{lex}\tau(\alpha,\gamma).\]
\end{coro}
\noindent\textit{Proof}: By Proposition \ref{propdesctau} and by definition:
\[\tau\left(\alpha^{(1)},\gamma^{(1)}\right)\leqslant_{lex}\tau(\alpha',\gamma')<_{lex}\tau(\alpha,\gamma).\]\qed
\begin{rem}\label{remindeptau}
\textup{Let $T\subset\lbrace 1,..,n \rbrace$ be such that $\tilde\alpha_i=\tilde\gamma_i=0$ for all $i\in T$. Then every framed blowup along $(u_J)$, with $J$ defined as above, is independent of $u_T$.}
\end{rem}
\begin{coro}\label{coroexistsuitlocale}
Let $(R,\mathfrak m)$ be a noetherian local ring such that $\mathfrak m$ is not nilpotent and let $u=(u_1,...,u_n)$ be a set of generators of $\mathfrak m$.
Let $r\in\mathbb N$ with $1\leqslant r\leqslant n$. Write $u=(w,v)$ with:
\[w=(w_1,...,w_r)=(u_1,...,u_r),\]
\[v=(v_1,...,v_{n-r}).\]
Let $\nu$ be a valuation centered on $R$; at each blowup we take $j\in J$ satisfying:
\[\nu(u_j)=\min_{i\in J}\lbrace\nu(u_i)\rbrace.\]
Let $\alpha,\gamma\in\mathbb N^{r}$. Then there exists a framed local sequence with respect to $\nu$, independent of $v$:
\[(R,u)\rightarrow \left(R^{(l)},u^{(l)}\right)\]
such that $w^\alpha$ divides $w^\gamma$ or $w^\gamma$ divides $w^\alpha$ in $R^{(l)}$.
\end{coro}
\noindent\textit{Proof}: We iterate the construction of Proposition \ref{propdesctau}, choosing framed local blowups with respect to $\nu$, which are, by construction and by Remark \ref{remindeptau}, independent of $v$. By Corollary \ref{procesusstop}, this construction stops after finitely many iterations. We then conclude by (1) of Remark \ref{remtau}.\\\qed
\begin{prop}\label{quidivisequi}
Keep the notation of Corollary \ref{coroexistsuitlocale}. Then:
\[w^\alpha\textit{ divides }w^\gamma \textit{ in }R^{(l)}\Leftrightarrow\nu\left(w^\alpha\right)\leqslant\nu\left(w^\gamma\right).\]
\end{prop}
\noindent\textit{Proof}: Write $u^{(l)}=\left(w_1^{(l)},...,w_{r_l}^{(l)},v\right)$. By (1) of Proposition \ref{propgenerem}, there exist $\alpha^{(l)},\gamma^{(l)}\in\mathbb N^{r_l}$ and $y,z\in R^{(l)}{}^\times$ such that:
\[w^\alpha=y\left(w^{(l)}\right)^{\alpha^{(l)}},\]
\[w^\gamma=z\left(w^{(l)}\right)^{\gamma^{(l)}}.\]
Since $\nu\left(w_1^{(l)}\right),...,\nu\left(w_{r_l}^{(l)}\right)\geqslant 0$ and since, by construction, one of $\alpha^{(l)}$, $\gamma^{(l)}$ is componentwise greater than or equal to the other, we have:
\[\left(w^{(l)}\right)^{\alpha^{(l)}}\textit{ divides }\left(w^{(l)}\right)^{\gamma^{(l)}} \textit{ in }R^{(l)}\Leftrightarrow\nu\left(\left(w^{(l)}\right)^{\alpha^{(l)}}\right)\leqslant\nu\left(\left(w^{(l)}\right)^{\gamma^{(l)}}\right).\]\qed
\begin{coro}\label{coroideal}
Keep the notation of Corollary \ref{coroexistsuitlocale}. Let $I$ be an ideal of $R$ generated by monomials in $w$. Let $\varepsilon_0,...,\varepsilon_b\in\mathbb N^{r}$ be a minimal collection of elements of $\mathbb N^{r}$ such that $\left(w^{\varepsilon_0},...,w^{\varepsilon_b}\right)=I$.
\\Assume moreover that $\nu\left(w^{\varepsilon_0}\right)\leqslant\nu\left(w^{\varepsilon_i}\right)$, $1\leqslant i\leqslant b$. Then there exists a framed local sequence with respect to $\nu$, independent of $v$:
\[(R,u)\rightarrow \left(R^{(l)},u^{(l)}\right)\]
such that:
\[I R^{(l)}=\left(w^{\varepsilon_0}\right)R^{(l)}.\]
\end{coro}
\noindent\textit{Proof}: Define the following invariant:
\[\tau(I,w)=\left(b,\min_{0\leqslant i<i'\leqslant b}\lbrace \tau\left(w^{\varepsilon_i},w^{\varepsilon_{i'}}\right)\rbrace\right).\]
We set $\tau(I,w)=(0,1)$ if $b=0$. If $b\geqslant 1$, we apply Proposition \ref{propdesctau} to a pair $\left\lbrace w^{\varepsilon_i},w^{\varepsilon_{i'}}\right\rbrace$ realizing the minimum in $\lbrace \tau\left(w^{\varepsilon_i},w^{\varepsilon_{i'}}\right)\rbrace$. We obtain a subset $J$ of $\lbrace 1,...,n\rbrace$ such that every framed blowup along $(u_J)$ makes $\tau(I,w)$ decrease for the lexicographic order. We conclude using Proposition \ref{quidivisequi}.\\\qed
\begin{lem}\label{lemvalmonomiale}
Let $(R,\mathfrak{m},k)$ be a regular local ring. Assume that $\mathfrak{m}=(u_1,...,u_n)=u$, where $n$ is the number of generators of $\mathfrak m$. Let $\Phi$ be an archimedean ordered semigroup and $\beta_1,...,\beta_n\in\Phi$ with $\beta_i> 0$, $1\leqslant i\leqslant n$.
\\Let $\Phi_*\subset\Phi$ be the following ordered semigroup:
\[\Phi_*=\left\lbrace \left.\sum\limits_{i=1}^n\alpha_i\beta_i\:\right|\:\alpha_i\in\mathbb N\right\rbrace.\]
For $\gamma\in\Phi_*$, consider the ideal of $R$:
\[I_\gamma=\left\langle\left\lbrace u_1^{\alpha_1}...u_n^{\alpha_n}\:\left|\:\sum\limits_{i=1}^n\alpha_i\beta_i\geqslant\gamma\right.\right\rbrace\right\rangle.\]
Then, for $f\in R\setminus\lbrace 0\rbrace$, the set:
\[\Phi_f=\lbrace \gamma\in\Phi_*\:\vert\:f\in I_\gamma\rbrace\]
is finite.
\end{lem}
\noindent\textit{Proof}: Let $f\in R\setminus\lbrace 0\rbrace$. Since $\Phi$ is archimedean, so is $\Phi_*$. Observe that $\Phi_*$ is a countable set and that $\Phi_f$ is well ordered: indeed, a decreasing sequence in $\Phi_f$ corresponds to an increasing sequence of ideals of $R$ of the form $I_\gamma$, which must be finite since $R$ is noetherian. Let $\gamma_0$ be the smallest nonzero element of $\Phi_f$ (in fact $0$ is the minimum of $\Phi_*$ and of $\Phi_f$; if the latter set reduces to $\lbrace 0\rbrace$, the proof is complete).
\\Since $f$ is nonzero, there exists $i\geqslant 0$ such that $f\notin\mathfrak m^i$, hence $\Phi_f\neq\Phi_*$. There thus exists $\gamma_1=\sup \Phi_f\in\Phi_*$. Now $\Phi_*$ is archimedean, so there exists $N\in\mathbb N$ such that $\gamma_1 < N\gamma_0$. Then, for every element $\gamma=\sum\limits_{i=1}^n\alpha_i\beta_i\in\Phi_f$, $\alpha_i\in\mathbb N$, since $\beta_i\in\Phi_f$ for $1\leqslant i\leqslant n$, we deduce:
\[\left(\sum\limits_{i=1}^n\alpha_i\right)\gamma_0\leqslant\sum\limits_{i=1}^n\alpha_i\beta_i \leqslant\gamma_1 < N\gamma_0.\]
Necessarily $\left(N-\sum\limits_{i=1}^n\alpha_i\right)\gamma_0>0$, and since $\gamma_0>0$ it follows that $\sum\limits_{i=1}^n\alpha_i\leqslant N$; that is, there are only finitely many possible $n$-tuples $(\alpha_1,...,\alpha_n)$, hence finitely many $\gamma\in\Phi_f$. \\\qed
\begin{coro}\label{defivaluationmono}
Under the hypotheses of Lemma \ref{lemvalmonomiale}, there exists a unique valuation, denoted by $\nu_{0,u}$, centered at a prime ideal of $R$, such that:
\[\nu_{0,u}(u_j)=\beta_j,\:1\leqslant j\leqslant n;\]
\[\nu_{0,u}(f)=\max \lbrace\gamma\in\Phi_f\rbrace,\:\forall\:f\in R\setminus\lbrace 0\rbrace.\]
This valuation is called the \textbf{monomial valuation} of $R$ associated with $u$ and with $\beta_1,...,\beta_n$.
\\Let $\nu$ be a valuation with value group $\Gamma$, centered at a prime ideal of $R$. We say that $\nu$ is \textbf{monomial} with respect to $u$ if there exist $\beta_1,...,\beta_n\in\Gamma_+$ such that:
\[\forall\:f\in R\setminus\lbrace 0\rbrace,\:\nu(f)=\nu_{0,u}(f).\]
\end{coro}
\begin{ex}
\textup{For $\nu$ a valuation centered on $R=k\left[\left[u_1,...,u_n\right]\right]$, if $f=\sum c_{\alpha}u^\alpha$, then $\nu_{0,u}(f)=\min\left\lbrace\left.\sum\limits_{i=1}^n\alpha_i\nu(u_i)\:\right|\:c_\alpha\neq 0\right\rbrace$ is the monomial valuation associated with $u$ and with $\nu(u_1),...,\nu(u_n)$.}
\end{ex}
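As a numerical instance, with weights chosen only for illustration: on $R=k\left[\left[u_1,u_2\right]\right]$, let $\nu_{0,u}$ be the monomial valuation with $\beta_1=1$ and $\beta_2=\sqrt 2$, with values in $\mathbb Z+\mathbb Z\sqrt 2\subset\mathbb R$. For $f=u_1^3+u_1u_2+u_2^3$ one gets $\nu_{0,u}(f)=\min\lbrace 3,\:1+\sqrt 2,\:3\sqrt 2\rbrace=1+\sqrt 2$, the minimum being attained by the monomial $u_1u_2$.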
\begin{rem}\label{valmonpluspetite}
\textup{If $\nu$ is a valuation centered on $R$ whose value group is archimedean and if $\nu_{0,u}$ is the monomial valuation associated with $u$ and with $\nu(u_1),...,\nu(u_n)$, then, for every $\gamma\in\Phi_*$, $\nu(I_\gamma)=\min\lbrace \nu(f)\:\vert\:f\in I_\gamma\rbrace\geqslant\gamma$. Hence, for every $f\in R\setminus\lbrace 0\rbrace$:
\[\nu_{0,u}(f)\leqslant\nu(f).\]
Moreover, the valuation $\nu$ is monomial if and only if:
\[gr_\nu(R)=k\left[in_\nu(u_1),...,in_\nu(u_n)\right].\]}
\end{rem}
\begin{defi}\label{defnondeg}
Let $R$ be a regular local ring and $u=(u_1,...,u_n)$ a regular system of parameters of $R$. Let $\nu$ be a valuation centered on $R$. We say that $f\in R$ is \textbf{non-degenerate} with respect to $\nu$ and $u$ if:
\[\nu_{0,u}(f)=\nu(f),\]
where $\nu_{0,u}$ is the monomial valuation of $R$ with respect to $u$ (Corollary \ref{defivaluationmono}).
\end{defi}
\begin{rem}
\textup{\begin{enumerate}
\item $f\in R$ is non-degenerate with respect to $u$ if and only if there exists an ideal $I$ of $R$, monomial with respect to $u$, such that $\nu(f)=\min\limits_{g\in I}\lbrace\nu(g)\rbrace$.
\item Consider a framed local sequence $(R,u)\rightarrow\left(R^{(1)},u^{(1)}\right)$ and $f\neq 0$. By (1) of Proposition \ref{propgenerem}, each $u_j$ is a monomial in $u^{(1)}$ multiplied by a unit of $R^{(1)}$. Hence, if $f$ is non-degenerate with respect to $u$, then $f$ is also non-degenerate with respect to $u^{(1)}$.
\end{enumerate}}
\end{rem}
The following Theorem \ref{uniflocnondeg} can be viewed as an ``embedded local uniformization'' theorem for $f$, where $f$ is an element that is non-degenerate with respect to $\nu$.
\begin{thm}\label{uniflocnondeg}
Keep the hypotheses of Definition \ref{defnondeg}. Let $f$ be an element that is non-degenerate with respect to $u$. Then there exists a framed local sequence $(R,u)\rightarrow\left(R^{(l)},u^{(l)}\right)$ such that $f$ is a monomial in $u^{(l)}$ multiplied by a unit of $R^{(l)}$.
\\Moreover, let $I$ be an ideal of $R$ such that $\nu(f)=\min\limits_{g\in I}\lbrace\nu(g)\rbrace$. Write $u=(w,v)$ and assume that $I$ is generated only by monomials in $w$. Then the above framed local sequence can be chosen independent of $v$.
\end{thm}
\noindent\textit{Proof}: The framed local sequence with respect to $\nu$ comes from Corollary \ref{coroideal}. Thus, since $f\in I$, there exists $z\in R^{(l)}$ such that $f=zw^{\varepsilon_0}$ (in the notation of Corollary \ref{coroideal}). Since $I$ is generated by $w^{\varepsilon_0}$ (Corollary \ref{coroideal}) and by hypothesis, we conclude that:
\[\nu(z)=\nu(f)-\nu(w^{\varepsilon_0})=\nu(f)-\min\limits_{g\in I}\lbrace\nu(g)\rbrace=0.\]
Now $\nu$ is centered on $R^{(l)}$, hence $z\in R^{(l)}{}^\times$.\\\qed
\subsection{The uniformizing elementary sequence}\label{sectionsuiteelemunif}
~\smallskip ~\\ \indent We will construct a local uniformization, with respect to a valuation $\nu$, of a quasi-homogeneous hypersurface satisfying certain conditions relative to the graded algebra $G_\nu=(gr_\nu(R))^*$, where, for a graded algebra $G$ without zero divisors, the graded algebra $G^*$ is defined by:
\[G^{*}=\left\lbrace \left.\dfrac{f}{g}\:\right|\:f,g\in G,\:g\textit{ homogeneous},\:g\neq 0\right\rbrace.\]
\\ \\\indent Let $(R,\mathfrak m,k)$ be a regular local ring, $u=(u_1,...,u_n)$ a regular system of parameters of $R$ and $\nu$ a valuation centered on $R$ with value group $\Gamma$. Write:
\[\beta_i=\nu(u_i),\:1\leqslant i\leqslant n.\]
For $r\in\lbrace 1,...,n-1\rbrace$ set $t=n-r-1$.
\\Assume that $r=\dim_\mathbb Q\left(\sum\limits_{i=1}^n\mathbb Q\beta_i\right)$; that is, after renumbering the variables, $\beta_1,...,\beta_r$ are $\mathbb Q$-linearly independent in $\Gamma\otimes_\mathbb Z\mathbb Q$ and, in particular, $\beta_n$ is a $\mathbb Q$-linear combination of $\beta_1,...,\beta_r$.
\\Write $u=(w,v)$ with:
\[v=(v_1,...,v_t)=(u_{r+1},...,u_{n-1}),\]
\[w=(w_1,...,w_r,w_n)=(u_1,...,u_r,u_n).\]
Let $\Delta=\langle \beta_1,...,\beta_r\rangle$ be the subgroup of $\Gamma$ generated by $\beta_1,...,\beta_r$. Write:
\[\overline\alpha=\min\lbrace m\in\mathbb N^*\:\vert\:m\beta_n\in\Delta\rbrace.\]
By hypothesis, $\overline\alpha<+\infty$. For $i\in\lbrace 1,...,r,n\rbrace$, write $x_i=in_\nu(u_i)$, so that $ord(x_i)=\beta_i$.
\\By Corollary 4.6 of \cite{spiva}, one can show that $x_1,...,x_r$ are algebraically independent over $k$ in $G_\nu$. If $x_n$ is algebraic over $k\left[x_1,...,x_r\right]$, let $P$ be the minimal polynomial of $x_n$ over $k\left[x_1,...,x_r\right]^*$, chosen monic and of lowest possible degree; otherwise set $P=0$. If $P\neq 0$, write $\alpha=d^{\:\circ}(P)$. Let $\alpha_1,...,\alpha_r\in\mathbb Z$ be such that:
\[\overline\alpha\beta_n-\sum\limits_{i=1}^r\alpha_i\beta_i=0.\]
One can show (Lemma 4.5 of \cite{spiva}) that $d=\frac{\alpha}{\overline\alpha}\in\mathbb N$. We then write:
\[\overline y=x_1^{\alpha_1}...x_r^{\alpha_r},\]
\[y=w_1^{\alpha_1}...w_r^{\alpha_r},\]
\[\overline z=\dfrac{x_n^{\overline \alpha}}{\overline y},\]
\[z=\dfrac{w_n^{\overline \alpha}}{y}.\]
If $P\neq 0$, then $P$ is of the form:
\[P(X)=\sum\limits_{i=0}^dc_i\overline y^{d-i}X^{i\overline\alpha},\]
where $c_i\in k$ for $0\leqslant i\leqslant d$, $c_d=1$, and $\sum\limits_{i=0}^dc_iX^i$ is the minimal polynomial of $\overline z$ over $k$ in $G_\nu$.
\\Finally, for $0\leqslant i\leqslant d$, fix an element $b_i\in R$ such that $c_i\equiv b_i\mod \mathfrak m$. We then set:
\[Q=\sum\limits_{i=0}^db_iy^{d-i}w_n^{i\overline\alpha}.\]
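To illustrate these definitions with invented weights: take $r=1$, $n=2$, $\beta_1=2$ and $\beta_n=3$, so that $\Delta=2\mathbb Z$ and $\overline\alpha=2$. The relation $2\beta_n-3\beta_1=0$ gives $\alpha_1=3$, hence $\overline y=x_1^3$, $y=w_1^3$ and $\overline z=\frac{x_n^{2}}{x_1^{3}}$. If $\overline z$ is transcendental over $k$, then $P=0$. If instead $x_n^2=c\,x_1^3$ in $G_\nu$ for some $c\in k^{*}$, then the minimal polynomial of $\overline z$ over $k$ is $X-c$, so $d=1$, $\alpha=\overline\alpha=2$ and $P(X)=X^2-c\,\overline y$; lifting $c$ to $b\in R$, one finds $Q=w_n^2-b\,w_1^3$, a quasi-homogeneous equation of cuspidal type.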
\begin{prop}\label{propcaspartpolycle}
With the above hypotheses and notation, there exists a framed local sequence with respect to $\nu$ and independent of $v$:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}, \]
such that, for $0\leqslant i\leqslant l$, writing:
\[u^{(i)}=\left(u_1^{(i)},...,u_{n_i}^{(i)}\right)\]
and $k^{(i)}$ for the residue field of $R^{(i)}$, we have:
\begin{enumerate}
\item The framed blowups $\pi_0,...,\pi_{l-2}$ are monomial. In particular, $n_i=n$, $k^{(i)}=k$, for $1\leqslant i<l$. Moreover, $z\in R^{(l)}{}^\times$.
\item $n_l=\left \{ \begin{array}{ccc} n & \textup{if} & P\neq 0 \\ n-1 & \textup{if} & P=0 \end{array} \right.$
\item Write:
\[u^{(l)}=\left \{ \begin{array}{lcc} \left(w_1^{(l)},...,w_r^{(l)},v,w_n^{(l)}\right) & \textup{if} & P\neq 0 \\ \left(w_1^{(l)},...,w_r^{(l)},v\right) & \textup{if} & P=0 \end{array} \right.\]
Then, for $j\in\lbrace 1,...,r,n\rbrace$, $w_j$ is a monomial in $w_1^{(l)},...,w_r^{(l)}$ multiplied by a unit of $R^{(l)}$.
\item For $j\in\lbrace 1,...,r\rbrace$, $w_j^{(l)}$ is a monomial in $w$ whose exponents may be negative.
\item If $P\neq 0$, then:
\[w_n^{(l)}=\sum\limits_{i=0}^d b_iz^i=\dfrac{Q}{y^d}.\]
\item $k^{(l)}\simeq k\left(\overline z\right)\simeq\left \{ \begin{array}{lcc} k(X) & \textup{if} & P= 0 \\ k[X]/\left(\sum\limits_{i=0}^d c_iX^i\right) & \textup{if} & P\neq 0 \end{array} \right.$
\end{enumerate}
\end{prop}
\noindent\textit{Proof}: We only sketch the argument. Without loss of generality, we may assume that there exist $\delta=(\delta_1,...,\delta_r,\delta_n)$ and $\gamma=(\gamma_1,...,\gamma_r,\gamma_n)$ such that $z=\frac{w^\delta}{w^\gamma}$ and $\nu(w^\gamma)=\nu(w^\delta)$. Applying Corollary \ref{coroexistsuitlocale}, we obtain a framed local sequence with respect to $\nu$, independent of $v$, such that $w^\gamma$ divides $w^\delta$ in $R^{(l)}$. Applying Proposition \ref{quidivisequi}, we deduce that $z,z^{-1}\in R^{(l)}$.
\\ We prove (1) by induction. More precisely, Proposition \ref{propmatriceeclat} yields dependence relations between the images of the $\beta_i$ in the $R^{(i')}$ and the images of $\delta$ and $\gamma$, as well as $z^{\pm\:1}=\frac{w_j^{(i')}}{w_1^{(i')}}$, where $j\neq 1$ is such that $\beta_j^{(i')}=\min\limits_{i\in\lbrace 1,...,r,n\rbrace} \left\lbrace\beta_{i}^{(i')}\right\rbrace=\beta_1^{(i')}$. Proposition \ref{quidivisequi} allows us to conclude that $z,z^{-1}\in R^{(i'+1)}$. In particular, for $i'=l-1$ we obtain (1).
\\With the notation of Proposition \ref{propconstruceclatencad}, we observe that we are in the case $h^\times\leqslant 1$ (more precisely, the case $h^{\times c}=h-1$, $t=h$ and the case $h^{\times c}=t=h-1$). Using Proposition \ref{propmatriceeclat} and after a possible interchange, we may assume that $\overline z=\frac{x_j^{(l-1)}}{x_1^{(l-1)}}$. The case $h^{\times c}=h-1$, $t=h$ occurs if and only if $\overline z=\frac{x_j^{(l-1)}}{x_1^{(l-1)}}$ is transcendental over $k$ (and hence $P=0$). The case $h^{\times c}=t=h-1$ occurs if and only if $\overline z$ is algebraic over $k$ (that is, $P\neq 0$). Thus (2) and (6) follow from the direct study of these two particular cases. In the case $P\neq 0$, since $z^{\pm\:1}=\frac{w_j^{(l-1)}}{w_1^{(l-1)}}$, we may take $w_n^{(l)}=\sum\limits_{i=0}^d b_iz^i$, which proves (5). Finally, (3) and (4) are a direct application of Proposition \ref{propgenerem} with $i=0$ and $i'=l$.\\\qed
\begin{prop}\label{proppolycleplus}
Keep the notation and hypotheses of Proposition \ref{propcaspartpolycle}. Write $\tilde Q=Q+h$, where $h\in R$ is such that $\nu_{0,u}(h)>\nu_{0,u}(Q)$. Then Proposition \ref{propcaspartpolycle} holds with $\tilde Q$ in place of $Q$ in (5).
\end{prop}
\noindent\textit{Proof}: By hypothesis, we can write $h$ as a finite sum $h=\sum\limits_\gamma h_\gamma u^\gamma$ where $h_\gamma\in R$ and $\nu(u^\gamma)>\nu_{0,u}(Q)$. Let $N=\max\lbrace\vert\gamma\vert\:\vert\: h_\gamma\neq 0\rbrace$. After a framed local sequence independent of $u_{r+1},...,u_{n}$, we may assume that:
\[\nu(w_1)<\dfrac{1}{N}\left(\nu_{0,u}(h)-\nu_{0,u}(Q)\right).\]
For each $i\in\lbrace r+1,...,n\rbrace$, perform $\left\lfloor\frac{\nu(u_i)}{\nu(w_1)}\right\rfloor$ blowups along the ideal $(u_i,w_1)$. We may then assume that, for $h_\gamma\neq 0$, $u^\gamma$ is divisible by a monomial $\varpi_\gamma$ in $w_1,...,w_r$ such that $\nu(\varpi_\gamma)\geqslant\nu_{0,u}(Q)$. Applying Corollary \ref{coroideal} to the monomial ideal generated by $\lbrace y^d\rbrace\cup\lbrace\varpi_\gamma\:\vert\:h_\gamma\neq 0\rbrace$, we construct a monomial framed local sequence independent of $u_{r+1},...,u_n$ such that $y^d$ divides $h$.
\\Under this assumption, we may then consider the framed local sequence constructed in Proposition \ref{propcaspartpolycle}. Since $y^d$ divides $Q$ in $R^{(l)}$ and $y^d$ divides $h$, we deduce that $y^d$ divides $\tilde Q$ in $R^{(l)}$. The element $w_n^{(l)}$ of Proposition \ref{propcaspartpolycle} then differs from $\frac{\tilde Q}{y^d}$ by elements of the ideal $\left(u_1^{(l)},...,u_{n-1}^{(l)}\right)$.\\\qed
\begin{defi}\label{suiteelementaire}
Keep the notation and hypotheses of Proposition \ref{proppolycleplus}. The framed local sequence with respect to $\nu$ and independent of $v$ constructed in Proposition \ref{proppolycleplus} will be called \textbf{the uniformizing elementary sequence associated with} $\boldsymbol{\left(R,u,\nu,n,\tilde Q\right)}$, or simply \textbf{the} $\boldsymbol n$\textbf{-uniformizing elementary sequence} when there is no possible ambiguity in the choices of $R,u,\nu$ and $\tilde Q$.
\end{defi}
\begin{rem}\label{remjsuiteelemeunif}
\textup{The integer $n$ of Definition \ref{suiteelementaire} refers to the fact that the sequence depends only on the variables $u_1,...,u_r,u_n$. For $j\in\lbrace r+1,...,n\rbrace$, one can define a $j$-uniformizing elementary sequence by replacing the variables $u_1,...,u_r,u_n$ with $u_1,...,u_r,u_j$.}
\end{rem}
\subsection{Framed formal sequences}\label{suiteformencaddefchap3}
~\smallskip ~\\\indent Let $(R,\mathfrak{m},k)$ be a complete regular local ring of dimension $n$ with $\mathfrak{m}=\left(u_1,...,u_{n}\right)$. Let $\nu$ be a valuation of $K=Frac(R)$, centered on $R$, with value group $\Gamma$, and let $\Gamma_1$ be the smallest nonzero isolated subgroup of $\Gamma$. Write:
\[H=\lbrace f\in R\:\vert\: \nu(f)\notin\Gamma_{1}\rbrace.\]
$H$ is a prime ideal of $R$ (see the proof of Theorem \ref{thmprelimcar0}). We assume moreover that:
\[n=e(R,\nu)=emb.dim\left(R/H\right),\]
that is:
\[H\subset\mathfrak{m}^2.\]
We also write $r=r(R,u,\nu)=\dim_\mathbb Q\left(\sum\limits_{i=1}^n\mathbb Q\nu(u_i)\right)$.
\\The valuation $\nu$ is unique if $ht(H)=1$, a case to which we will reduce thanks to Corollary \ref{hauteurcar0}. It is the composition of the rank $1$ valuation $\mu:L^{*}\rightarrow\Gamma_{1}$ centered on $R/H$, where $L=Frac(R/H)$, with the valuation $\theta :K^{*}\rightarrow \Gamma / \Gamma_{1}$, centered on $R_{H}$, such that $k_{\theta}\simeq \kappa(H)$.
\\By abuse of notation, for $f\in R$, we write $\mu(f)$ instead of $\mu(f\mod H)$.
\begin{defi}
Let $(R,u,k)\rightarrow (R',u',k')$ be a framed local sequence, and let $H'_{0}$ denote the strict transform of $H$ in $R'$. We say that $\mu$ is \textbf{centered} on $R'$ if $\mu$ is centered on $R'/H'_{0}$. In this case, we say that the framed local sequence is a \textbf{framed local sequence with respect to} $\boldsymbol{\mu}$.
\end{defi}
\begin{defi}
Let $(R,u)\rightarrow \left( R^{(1)},u^{(1)}\right) $ be a framed local blowup. The morphism induced by formal completion is called a \textbf{framed formal blowup with respect to} $\boldsymbol{\mu}$.
\\Let $\widehat{K^{(1)}}=Frac\left( \widehat{R^{(1)}}\right) $, let $H^{(0)}$ be the strict transform of $H$ in $R^{(1)}$ and let $\overline{H}^{(1)}$ be the implicit prime ideal of $\widehat{R^{(1)}}/H^{(0)}\widehat{R^{(1)}}$.
\\The \textbf{transform} of $H$ in $\widehat{R^{(1)}}$, denoted by $H^{(1)}$, is the preimage of $\overline{H}^{(1)}$ in $\widehat{R^{(1)}}$.
\\Finally, the \textbf{valuation induced by} $\boldsymbol{\mu}$ on $\widehat{R^{(1)}}$, denoted by $\mu^{(1)}$, is the unique extension of $\mu$ from $\kappa\left( H\right) $ to $\kappa\left( H^{(1)}\right)$, centered on $\widehat{R^{(1)}}/H^{(1)}$, given by Theorem \ref{valetenuniqidealimpl}.
\end{defi}
\begin{defi}
A sequence of local morphisms:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
is called a \textbf{framed formal sequence with respect to} $\boldsymbol{\mu}$ if the sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) } \]
is a framed formal sequence with respect to $\mu$ and $\pi_{l-1}$ is a framed formal blowup with respect to the valuation $\mu^{(l-1)}$ induced by $\mu$ on $R^{(l-1)}$.
\end{defi}
For every framed local blowup of the form $(R,u)\rightarrow \left( R^{(1)},u^{(1)}\right) $, we define a valuation $\nu^{(1)}$ centered on $\widehat{R^{(1)}}$ as follows: fix a valuation $\theta^{(1)}$ of $\widehat{K^{(1)}}$, centered on $\left( \widehat{R^{(1)}}\right) _{H^{(1)}}$ and such that $k_{\theta^{(1)}}\simeq\kappa\left( H^{(1)}\right) $. We then set $\nu^{(1)}=\theta^{(1)}\circ\mu^{(1)}$.
\\Given a framed formal sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}; \]
one can, by induction on $1\leqslant i\leqslant l-1$, construct a valuation $\mu^{(i)}$ centered on $R^{(i)}$ such that the smallest nonzero isolated subgroup of the value group of $\nu^{(i)}$ is $\Gamma_{1}$, and define the transform of $H$ in $R^{(i)}$, denoted by $H^{(i)}$. By construction, we have:
\[H^{(i)}=\lbrace f\in R^{(i)}\:\vert\: \nu^{(i)}(f)\notin\Gamma_{1}\rbrace.\]
We then write:
\[e\left( R^{(i)},\nu^{(i)}\right)=emb.dim\left(R^{(i)}/H^{(i)}\right);\]
\[r\left( R^{(i)},u^{(i)},\nu^{(i)}\right)=\dim_\mathbb Q\left(\sum\limits_{j=1}^{n_i}\mathbb Q\nu^{(i)}\left(u_j^{(i)}\right)\right),\]
where $u^{(i)}=(u_1^{(i)},...,u_{n_i}^{(i)})$.
\\\indent Let $\nu_{0,u}$ denote the monomial valuation centered on $R$ associated with $u=\left( u_{1},...,u_{n}\right)$ and with $\nu (u_{1}),...,\nu (u_{n})$. By Remark \ref{valmonpluspetite}, for every $f\in R$ we have:
\[\nu_{0,u}(f)\leqslant\nu(f).\]
\begin{rem}\label{Hnulcar0}
\textup{Assume that $n=r$. For a framed formal sequence of the form:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}, \]
let $\nu_{0,u^{(l)}}$ denote the monomial valuation centered on $R^{(l)}$ associated with $u^{(l)}$ and with $\nu^{(l)}\left( u_{1}^{(l)}\right),...,\nu^{(l)}\left( u_{n}^{(l)}\right)$.
\\If $f\in H\setminus\lbrace 0\rbrace$, then:
\[\nu_{0,u^{(l)}}(f)<\nu(f),\]
for every framed formal sequence of the above form such that:
\[\nu^{(l)}\left( u_{1}^{(l)}\right),...,\nu^{(l)}\left( u_{n}^{(l)}\right)\in\Gamma_{1}.\]
Hence, since $\nu_{0,u^{(l)}}(f)\in\Gamma_{1}$ and $\nu(f)\notin\Gamma_{1}$, we deduce that $H^{(i)}=(0)$ for all $i$.}
\end{rem}
\section{A monomialization theorem in equal characteristic}\label{monocar0}
\indent Let $(R,\mathfrak{m},k)$ be an equicharacteristic complete regular local ring of dimension $n$ with $\mathfrak{m}=\left(u_1,...,u_{n}\right)$. Let $\nu$ be a valuation of $K=Frac(R)$, centered on $R$, with value group $\Gamma$, and let $\Gamma_1$ be the smallest nonzero isolated subgroup of $\Gamma$. Write:
\[H=\lbrace f\in R\:\vert\: \nu(f)\notin\Gamma_{1}\rbrace.\]
$H$ is a prime ideal of $R$ (see the proof of Theorem \ref{thmprelimcar0}). We assume moreover that:
\[n=e(R,\nu)=emb.dim\left(R/H\right),\]
that is:
\[H\subset\mathfrak{m}^2.\]
We also write $r=r(R,u,\nu)=\dim_\mathbb Q\left(\sum\limits_{i=1}^n\mathbb Q\nu(u_i)\right)$.
\\The valuation $\nu$ is unique if $ht(H)=1$, a case to which we will reduce thanks to Corollary \ref{hauteurcar0}. It is the composition of the rank $1$ valuation $\mu:L^{*}\rightarrow\Gamma_{1}$ centered on $R/H$, where $L=Frac(R/H)$, with the valuation $\theta :K^{*}\rightarrow \Gamma / \Gamma_{1}$, centered on $R_{H}$, such that $k_{\theta}\simeq \kappa(H)$.
\\By abuse of notation, for $f\in R$, we write $\mu(f)$ instead of $\mu(f\mod H)$.
By Cohen's theorem, we may assume that $R$ is of the form:
\[R=k\left[ \left[ u_{1},...,u_{n}\right] \right].\]
\subsection{A first monomialization theorem}
~\smallskip ~\\\indent For $j\in \lbrace r+1,...,n\rbrace$, let $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ denote the set of key polynomials of the extension $k\left( \left( u_{1},...,u_{j-1}\right) \right)\hookrightarrow k\left( \left( u_{1},...,u_{j-1}\right) \right)(u_{j})$, $\textbf{Q}_{j,i}=\left\lbrace Q_{j,i'}\vert i'\in\Lambda_{j},i'<i\right\rbrace $, $\Gamma^{(j)}$ the value group of $\nu_{\vert k\left( \left( u_{1},...,u_{j}\right) \right)}$ and $\nu_{j,i}$ the $i$-truncation of $\nu$ for this extension.
\begin{thm}\label{thmeclatformcar0}
Keep the preceding hypotheses and assume moreover that, for $j\in \lbrace r+1,...,n\rbrace$, there exists a $1$-complete set $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ of key polynomials for the valuation $\nu_{\vert k\left( \left( u_{1},...,u_{j}\right) \right)}$ having no limit key polynomial. Two cases arise:
\begin{enumerate}
\item Either $H\neq(0)$ and there exists a framed formal sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
such that:
\[\left( e\left( R^{(l)},\nu^{(l)}\right) , e\left( R^{(l)},\nu^{(l)}\right) - r\left( R^{(l)},u^{(l)},\nu^{(l)}\right)\right) <_{lex}\left(e(R,\nu),n-r\right);\]
\item Or $H=(0)$ and, for every $f\in R$, there exists a framed formal sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
such that $f$ is a monomial in $u^{(l)}$ multiplied by a unit of $R^{(l)}$.
\end{enumerate}
\end{thm}
\noindent\textit{Proof}: We argue by induction on $n-r$. If $n=r$, then $\nu(u_{1}),...,\nu (u_{n})$ are $\mathbb{Q}$-linearly independent, and so every $f\in R$ contains a unique monomial of minimal valuation. In particular,
\[\forall\:f\in R,\:\nu_{0,u}(f)=\nu(f).\]
By Remark \ref{Hnulcar0}, $H=(0)$. Take an element $f\in R$; by Theorem \ref{uniflocnondeg}, there exists a framed local sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{i-2}} & \left( R^{(i-1)},u^{(i-1)}\right) \ar[r]^-{\pi_{i-1}} & \left( R^{(i)},u^{(i)}\right)} \]
such that $f$ is a monomial in $u^{(i)}$ multiplied by a unit of $R^{(i)}$. Passing to the completion at each step, we obtain a framed formal sequence satisfying (2).
\\ \indent Assume now that $n-r>0$ and that a framed formal sequence satisfying the conclusion of Theorem \ref{thmeclatformcar0} has already been constructed for all strictly smaller values. We want to prove the result for the ring $k\left[\left[u_1,...,u_{n-r}\right]\right]\left[\left[u_{n-r+1}\right]\right]$.
\\ Let $\left(R^{(i)},\mathfrak m^{(i)},k^{(i)}\right)$ be a local ring appearing in a framed formal sequence. By Cohen's theorem, we may assume that:
\[R^{(i)}=k^{(i)}\left[\left[u_1^{(i)},...,u_{n_i}^{(i)}\right]\right].\]
First, let us show that we can always reduce to the following hypotheses:
\[H^{(i)}\cap R^{(i)}=(0)\:,n_i=n,\:r_i=r,\]
where $r_i=r\left(R^{(i)},u^{(i)},\nu^{(i)}\right)$. Indeed, if for some $i$ we have:
\[H^{(i)}\cap R^{(i)}\neq(0),\]
then, writing:
\[R_{n_i-1}^{(i)}=k^{(i)}\left[\left[u_1^{(i)},...,u_{n_i-1}^{(i)}\right]\right]\textup{ and }\overline u^{(i)}=\left(u_1^{(i)},...,u_{n_i-1}^{(i)}\right),\]
we may apply the induction hypothesis on $n-r$ to construct a framed formal sequence:
\[ \xymatrix{\left(R_{n_i-1}^{(i)},\overline u^{(i)}\right) \ar[r] & \left( R_{n_i-1}^{(i,1)},\overline u^{(i,1)}\right) \ar[r] & \ldots \ar[r] & \left( R_{n_i-1}^{(i,l)},\overline u^{(i,l)}\right)} \]
such that $e\left( R_{n_i-1}^{(i,l)},\nu^{(i,l)}\right)<e\left(R_{n_i-1}^{(i)},\nu^{(i)}\right)=n_i-1$. Writing:
\[\tilde R^{(j)}=R_{n_i-1}^{(i,j)}\left[\left[u_{n_i}^{(i)}\right]\right],\:1\leqslant j\leqslant l,\]
we obtain a framed formal sequence:
\[ \xymatrix{\left( R^{(i)},u\right) \ar[r] & \left( \tilde R^{(1)},u^{(1)}\right) \ar[r] & \ldots \ar[r]& \left( \tilde R^{(l-1)},u^{(l-1)}\right) \ar[r] & \left(\tilde R^{(l)},u^{(l)}\right)} \]
such that:
\[e\left(\tilde R^{(l)},\nu^{(l)}\right)< e\left(R^{(i)},\nu^{(i)}\right)\leqslant e(R,\nu).\]
Likewise, if there exists an $i$ such that $n_i<n$ or $r_i>r$, the desired framed formal sequence is already constructed and there is nothing to do.
\\\indent Thus we may assume that, for all the rings $R^{(i)}$ appearing in any framed formal sequence:
\[n_i=n,\:r_i=r,\:H^{(i)}\cap R^{(i)}=(0).\]
In particular, $\nu_{\vert\:R_{n_i-1}^{(i)}}^{(i)}$ has rank $1$ and $e\left(R_{n_i-1}^{(i)},\mu^{(i)}\right)=n-1=e\left(R_{n-1},\mu\right)$.
\\\indent From now on, after possibly setting $N=n-r+1$, we may assume that we are in the case $r=1$.
\\ \\\indent To complete the proof, we will show that, up to a formal sequence, the ideals $H^{(i)}$ are principal, generated by a monic polynomial in $u_n$, and that it suffices to monomialize elements of this type.
\subsection{The implicit prime ideal is generated by a monic polynomial}
~\smallskip ~\\\indent To finish the proof of Theorem \ref{thmeclatformcar0}, we must prove Proposition \ref{princcar0}. We still assume that the induction hypothesis holds at step $n-1$, that is, that Theorem \ref{thmeclatformcar0} is true for $R$ of dimension $n-1$.
\begin{prop}\label{princcar0}
Keep the preceding hypotheses and assume that Theorem \ref{thmeclatformcar0} is true in dimension $n-1$.
Write $R_{n-1}=k\left[ \left[ u_{1},...,u_{n-1}\right] \right]$ and assume that $H\not\subset R_{n-1}$ and $H\cap R_{n-1}=(0)$.
\\Let $f\in H\setminus\lbrace 0\rbrace$. Up to a framed formal sequence, $f$ can be written in the form:
\[f=\alpha f_{n-1}P;\]
where $\alpha\in R^{\times}$, $f_{n-1}\in R_{n-1}$ and $P$ is a monic polynomial in $u_{n}$.
\end{prop}
\noindent\textit{Proof}: The rank $1$ valuation $\mu$ centered on $R/H$ induces a rank $1$ valuation centered on $R_{n-1}$.
\\Let $f\in H$, $f\neq 0$; we can write $f=\sum\limits_{j\geqslant 0}b_{j}u_{n}^{j}$, with $b_{j}\in R_{n-1}$. By hypothesis, we may assume that there exists $j\geqslant 0$ such that $b_{j}\notin H\cap R_{n-1}$. We then set:
\[\beta=\min_{j\geqslant 0} \lbrace \mu (b_{j})\:\vert\: b_{j}\notin H\cap R_{n-1}\rbrace.\]
Let $d$ be the smallest natural number such that $\mu(b_{d})=\beta$ (so $b_{d}\notin H\cap R_{n-1}$).
\\Let $N\geqslant d$ be a nonzero natural number such that, for all $j>N$, \[b_{j}\in\left( b_{0},...,b_{N}\right).\]
Let $j\in\lbrace 0,...,N\rbrace$. Since, by hypothesis, $R_{n-1}$ is a complete regular local ring of dimension $n-1$, we may apply the induction hypothesis (Theorem \ref{thmeclatformcar0} in dimension $n-1$) to this ring equipped with the valuation $\mu$ and to the element $b_{j}$. There thus exists a framed local sequence:
\[\pi:\left( R_{n-1},\left( u_{1},...,u_{n-1}\right) \right) \rightarrow ... \rightarrow \left( R',(u'_{1},...,u'_{n-1})\right)\]
such that $b_{j}$ is a monomial in $(u'_{1},...,u'_{n-1})$ (multiplied by a unit of $R'$). Passing to the formal completion at each step of the sequence, we obtain a framed formal sequence such that $b_{j}$ is a monomial in $(u'_{1},...,u'_{n-1})$ (multiplied by a unit of $R'$).
\\Moreover, by (1) of Proposition \ref{propgenerem}, the property of being a monomial multiplied by a unit is preserved under all subsequent framed blowups. Hence we may choose $\pi$ in such a way that $b_{0},..., b_{N}$ are simultaneously monomials in $(u'_{1},...,u'_{n-1})$.
\\By the choices of $\beta$ and $d$ and by Corollary \ref{coroideal}, after one more framed local sequence, we may reduce to the situation where $b_{d}$ divides $b_{j}$ for $0\leqslant j \leqslant N$, and hence $b_{d}$ divides $b_{j}$ for all $j\geqslant 0$.
\\Thus $\frac{f}{b_{d}}\in R'\left[ \left[ u_{n}\right] \right] $ and satisfies the hypotheses of the Weierstrass preparation theorem: it can therefore be written as a unit of $R'\left[ \left[ u_{n}\right] \right]$ times a monic polynomial $P$ in $u_n$, which yields $f=\alpha f_{n-1}P$ with $f_{n-1}=b_{d}$.
\\ \qed
\begin{coro}\label{hauteurcar0}
Under the same hypotheses as Proposition \ref{princcar0}, we have:
\[ht\left( H\right) \leqslant 1.\]
\end{coro}
\noindent\textit{Proof}: If $H=(0)$, there is nothing to prove. Otherwise, take $f\in H$ with $f\neq 0$. Since the height of $H$ does not decrease under framed local or formal sequences (Corollary \ref{coroht}), by Proposition \ref{princcar0} we may assume that $f$ is a monic polynomial in $u_{n}$ with coefficients in $R_{n-1}$. Hence the ring extension:
\[\sigma: R_{n-1}\hookrightarrow R_{n-1} \left[ \left[ u_{n}\right] \right] / (f) \]
is finite. The preimage of the ideal $H/ \left( f\right) $ under $\sigma$ is $(0)$. Since height is preserved by finite ring extensions (\cite{matsualg}, Theorem 20), we have:
\[ht\left( H/(f)\right) = ht((0))=0.\]
Hence $ht\left( H\right)=1$.
\\ \qed
\begin{coro}\label{engcar0}
Under the hypotheses of Proposition \ref{princcar0}, up to a framed formal sequence the ideal $H$ is principal, generated by a monic polynomial in $u_{n}$.
\end{coro}
\noindent\textit{Proof}: This is a direct consequence of Corollary \ref{hauteurcar0}.
\\ \qed
\begin{rem}
\textup{Applying Corollary \ref{engcar0} to each ring $R^{(i)}$ appearing in the proof of Theorem \ref{thmeclatformcar0}, we may assume, up to a framed formal sequence, that for every $i$ the ideal $H^{(i)}$ is principal, generated by a monic polynomial in $u_n^{(i)}$.
\\Note that there then exists a unique valuation $\theta^{(i)}$ centered on $\left(R^{(i)}\right)_{H^{(i)}}$ (it is the trivial valuation if $ht\left(H^{(i)}\right)=0$ and the discrete valuation centered on $ \left(R^{(i)}\right)_{H^{(i)}}$ if $ht\left(H^{(i)}\right)=1$). Hence the extension $\nu^{(i)}$ of $\nu$ to $R^{(i)}$ is uniquely determined.
\\\indent To complete the proof of Theorem \ref{thmeclatformcar0}, it suffices to obtain the result for polynomials in $u_n$.}
\end{rem}
\subsection{Monomialization of polynomials}
\begin{prop}\label{eclatpolycar0}
Under the preceding hypotheses, and assuming that Theorem \ref{thmeclatformcar0} is true in dimension $n-1$, for every polynomial $f\in k\left[\left[ u_{1},...,u_{n-1}\right]\right]\left[u_n\right]$ there exists a framed formal sequence $(R,u)\rightarrow (R',u')$ such that $f$ is a monomial in $u'$ multiplied by a unit of $R'$.
\\If moreover $f$ is irreducible in $k\left[\left[ u_{1},...,u_{n-1}\right]\right]\left[\left[u_n\right]\right]$, the above framed formal sequence can be chosen so that $u'_n$ divides $f$ and $u_n'^2$ does not divide $f$ in $R'$.
\end{prop}
\noindent\textit{End of the proof of Theorem \ref{thmeclatformcar0} assuming Proposition \ref{eclatpolycar0}}: \\If $H\neq (0)$, take $f\in H\cap k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]$, $f\neq 0$; otherwise take $f\in k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]\setminus\lbrace 0\rbrace$. By hypothesis, there exists a framed formal sequence $(R,u)\rightarrow (R',u')$ such that $f$ is a monomial in $u'$ multiplied by a unit of $R'$. Let $H'$ denote the transform of $H$ in $R'$.
\\If $H\neq (0)$, then, by definition, $\nu(f)\notin\Gamma_1$ and hence there exists $j$ such that $\nu(u'_j)\notin\Gamma_1$, that is, $u'_j\in H'$.
Thus $e(R',\nu)\leqslant n-1<n=e(R,\nu)$ and we are in situation (1) of Theorem \ref{thmeclatformcar0}. If $H=(0)$ and $f\in k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]\setminus\lbrace 0\rbrace$, we are in situation (2) by hypothesis.
\\ Finally, if $H=(0)$ and $f\in R\setminus\lbrace 0\rbrace$ is not necessarily a polynomial in $u_n$, write $f=f'+f''$ with $f'$ a polynomial in $u_n$ and $\nu_{0,u}(f'')>\nu(f)$ (so that $\nu(f)=\nu(f')$). By the polynomial case treated above, there exists a framed formal sequence $(R,u)\rightarrow (R',u')$ such that $f'$ is a monomial in $u'$ multiplied by a unit of $R'$. Now $\nu_{0,u'}(f'')\geqslant\nu_{0,u}(f'')>\nu(f)=\nu(f')$. By Corollary \ref{coroideal}, after completing if necessary, there exists a framed formal sequence $(R',u')\rightarrow (R'',u'')$ such that $f$ is a monomial in $u''$ multiplied by a unit of $R''$. \\ \qed
\\ \\\noindent\textit{Proof of Proposition \ref{eclatpolycar0}}: We prove the result by induction on the degree of $f$. If $d_{u_n}^{\:\circ}(f)=1$, Proposition \ref{eclatpolycar0} is clear.
\\Let $f\in k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]$ of degree $d>1$. By the induction hypothesis, we assume that Proposition \ref{eclatpolycar0} is true for every polynomial of degree strictly less than $d$.
\\By hypothesis, since the set of key polynomials is $1$-complete and has no limit key polynomial, there exists $i\in\mathbb N^*$ such that $\nu(f)=\nu_{n,i}(f)$. This means that there exists an $(n,i)$-standard expansion of $f$ of the form:
\[f=\sum\limits_{j=0}^Nc_jQ_{n,i}^j,\]
where the $c_j$ are $(n,i)$-standard expansions not involving $Q_{n,i}$ and $\nu(f)=\nu_{n,i}(f)$.
\\Recall that for $i\in\mathbb N^*$ we write $\alpha_{n,i}=d_{Q_{n,i-1}}^{\:\circ}(Q_{n,i})$.
\\If there exists $l\in\mathbb N^*$ such that $\alpha_{n,l}>1$, take such an $l$. If there is none, take $l$ large enough that $f=\sum\limits_{j=0}^Nc_jQ_{n,l}^j$ and $\nu(f)=\nu_{n,l}(f)$. In all cases, by the definition of key polynomials and by hypothesis, $l<\omega$.
\\\indent To complete the proof of Proposition \ref{eclatpolycar0}, it therefore suffices to obtain the desired result on key polynomials, as we will see in Subsection \ref{sectmonopolyclecar0} and Proposition \ref{eclatpolyclecar0}.
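Before treating the key polynomials themselves, here is a minimal illustration, with a valuation invented for the purpose. For the extension $k\left(\left(u_1\right)\right)\hookrightarrow k\left(\left(u_1\right)\right)(u_n)$ with $\nu(u_1)=1$, suppose $\nu(u_n)=\frac{3}{2}$ and $\nu\left(u_n^2-u_1^3\right)=\frac{7}{2}$. One may then take $Q_{n,1}=u_n$ and $Q_{n,2}=u_n^2-u_1^3$, so that $\alpha_{n,2}=d_{Q_{n,1}}^{\:\circ}(Q_{n,2})=2$. For $f=u_n^2-u_1^3$ itself, the first truncation only sees $\nu_{n,1}(f)=\min\lbrace 2\nu(u_n),3\rbrace=3<\frac{7}{2}=\nu(f)$, whereas the $(n,2)$-standard expansion $f=Q_{n,2}$ gives $\nu_{n,2}(f)=\nu(f)$.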
\subsection{Monomialization of key polynomials}\label{sectmonopolyclecar0}
\begin{prop}\label{eclatpolyclecar0}
Under the preceding hypotheses, and assuming that Theorem \ref{thmeclatformcar0} is true in dimension $n-1$, there exists a framed formal sequence: \[(R,u)\rightarrow (R',u'),\] where $u=(u_{1},...,u_{n})$, $u'=(u'_{1},...,u'_{n})$, satisfying the following properties:
\begin{enumerate}
\item For every $q\in\mathbb{N}^*$ with $1\leqslant q \leqslant l$, the key polynomial $Q_{n,q}$ is a monomial in $u'$ multiplied by a unit of $R'$;
\item In $R'$, $u'_{n}$ divides $Q_{n,l}$ but $u'^{2}_{n}$ does not divide $Q_{n,l}$.
\end{enumerate}
\end{prop}
\noindent\textit{Proof of Proposition \ref{eclatpolycar0} assuming Proposition \ref{eclatpolyclecar0}}:
\\By the induction hypothesis on $n-r$, any collection of elements of $k\left(\left( u_{1},...,u_{n-1}\right)\right)$ can be simultaneously transformed into monomials by a framed formal sequence. Moreover, applying Proposition \ref{propcaspartpolycle} $n-r-1$ times, we may assume that only $u'_1,...,u'_r$ appear in these monomials.
\\If $\alpha_{n,l}=1$, we apply Proposition \ref{propcaspartpolycle} to each key polynomial $Q_{n,1},...,Q_{n,l}$ and Proposition \ref{eclatpolycar0} is proved.
\\Assume $\alpha_{n,l}>1$. Write $f=\sum\limits_{j=0}^d a_ju_n^j$, $a_j\in k\left[\left[ u_{1},...,u_{n-1}\right]\right]$. Let $j_0$ be the largest $j\in\lbrace 0,...,d\rbrace$ such that $\nu(a_{j_0})=\min\limits_{0\leqslant j\leqslant d}\lbrace\nu(a_j)\rbrace$. By Corollary \ref{coroideal}, after a framed local sequence independent of $u_n$, and completing if necessary, we may assume that $a_{j_0}$ divides $a_j$ for all $j\in\lbrace 0,...,d\rbrace$. Applying the Weierstrass preparation theorem, we may assume that $f$ is a monic polynomial in $u_n$ of degree $d$.
\\Let $f=\sum\limits_{j=0}^Nc_jQ_{n,l}^j$, $N=\left\lfloor\frac{d}{\alpha_{n,l}}\right\rfloor$, be the $(n,l)$-standard expansion of $f$. By Proposition \ref{eclatpolyclecar0}, there exists a framed formal sequence such that the $(n,l)$-standard expansion of $f$ in $R'$ is of the form $\sum\limits_{j=0}^Nc'_ju'^j_n$, $c'_j\in k'\left[\left[ u'_{1},...,u'_{n-1}\right]\right]$, multiplied by a unit of $R'$.
\\Let $j'_0$ be the largest $j\in\lbrace 0,...,N\rbrace$ such that $\nu(c'_{j'_0})=\min\limits_{0\leqslant j\leqslant N}\lbrace\nu(c'_j)\rbrace$. Again by Corollary \ref{coroideal}, after a framed local sequence independent of $u'_n$, and completing if necessary, we may assume that $c'_{j'_0}$ divides $c'_j$ for all $j\in\lbrace 0,...,N\rbrace$. Applying the Weierstrass preparation theorem, we may assume that $f$ is a monic polynomial in $u'_n$ of degree at most $N<d$. To conclude, it suffices to apply the induction hypothesis.\\\qed
\\ \\
\noindent\textit{Proof of Proposition \ref{eclatpolyclecar0}}: Since $l\in\mathbb{N}^*$, the standard expansion of $Q_{n,l}$ is:
\[Q_{n,l}=Q_{n,l-1}^{\alpha_{n,l}}+\sum\limits_{j=0}^{\alpha_{n,l}-1}\left(\sum\limits_{\overline{\gamma}_{n,l-1}}c_{n,l,j,\overline{\gamma}_{n,l-1}}\textbf{Q}_{n,l-1}^{\overline{\gamma}_{n,l-1}}\right)Q_{n,l-1}^j.\]
By the induction hypothesis, for values strictly less than $n-r$, there exists a framed formal sequence $(R,u)\rightarrow (R',u')$, independent of $u_{n}$, such that each element $c_{n,l,j,\overline{\gamma}_{n,l-1}}$ is a monomial in $u'_{1},...,u'_{n-1}$ multiplied by a unit of $R'$.
\\For each $j\in\lbrace r+1,...,n-1\rbrace$, apply the $j$-uniformizing elementary sequence of Remark \ref{remjsuiteelemeunif}, followed each time by a formal completion. We then reach the situation where the $\sum\limits_{\overline{\gamma}_{n,l-1}}c_{n,l,j,\overline{\gamma}_{n,l-1}}\textbf{Q}_{n,l-1}^{\overline{\gamma}_{n,l-1}}$ are monomials in $u'_1,...,u'_{r}$ multiplied by units of $R'$.
\\Applying Proposition \ref{propcaspartpolycle} $l-1$ times, we may further assume that:
\[Q_{n,l-1}=\eta u'_n,\]
where $\eta$ is a monomial in $u'_1,...,u'_{n-1}$ multiplied by a unit of $R'$.
\\ Applying Proposition \ref{propcaspartpolycle} to $u'_1,...,u'_r,u'_n$, after passing to the completion if necessary, we obtain a framed formal sequence $(R',u')\rightarrow (R'',u'')$ such that $Q_{n,l}$ is a monomial in $u''_1,...,u''_{r},u''_n$. Properties (1) and (2) follow immediately by construction; this completes the proof of Proposition \ref{eclatpolyclecar0} and hence that of Theorem \ref{thmeclatformcar0}.\\\qed
\section{A monomialization theorem in mixed characteristic}\label{carmixtemono}
\indent Let $(R,\mathfrak{m},k)$ be a complete regular local ring of mixed characteristic of dimension $n$ with $\mathfrak{m}=(x)=\left(x_1,...,x_{n}\right)$ and let $\nu$ be a valuation of $K=Frac(R)$ centered at $R$, with value group $\Gamma$. Let $\Gamma_1$ be the smallest nonzero isolated subgroup of $\Gamma$. We set:
\[H=\lbrace f\in R\:\vert\: \nu(f)\notin\Gamma_{1}\rbrace.\]
$H$ is a prime ideal of $R$ (see the proof of Theorem \ref{thmprelimcar0}). We further assume that:
\[n=e(R,\nu)=emb.dim\left(R/H\right),\]
that is:
\[H\subset\mathfrak{m}^2.\]
We also set $r=r(R,x,\nu)=\dim_\mathbb Q\left(\sum\limits_{i=1}^n\mathbb Q\nu(x_i)\right)$ and $p=char(k)$.
\\The valuation $\nu$ under consideration is the composition of the rank $1$ valuation $\mu:L^{*}\rightarrow\Gamma_{1}$ centered at $R/H$, where $L=Frac(R/H)$, with the valuation $\theta :K^{*}\rightarrow \Gamma / \Gamma_{1}$, centered at $R_{H}$, such that $k_{\theta}\simeq \kappa(H)$.
\\By abuse of notation, for $f\in R$ we write $\mu(f)$ instead of $\mu(f \mod H)$.
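\begin{rem}
\textup{To illustrate the definition of $H$ (with an equicharacteristic example chosen for simplicity, which does not satisfy the additional hypothesis $H\subset\mathfrak m^2$): take $R=k\left[\left[u_1,u_2\right]\right]$ and let $\nu$ be the monomial valuation with value group $\Gamma=\mathbb Z^2_{lex}$ defined by $\nu(u_1)=(1,0)$ and $\nu(u_2)=(0,1)$. Here $\Gamma_1=\lbrace 0\rbrace\times\mathbb Z$, and $\nu(f)\in\Gamma_1$ exactly when $f$ has a monomial free of $u_1$; hence $H=(u_1)$, which is indeed a prime ideal.}
\end{rem}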
\begin{rem}\label{ppasdansH}
\textup{If $p\in H$, then $R/H$ is equicharacteristic and we are under the hypotheses of Section \ref{monocar0}. From now on we therefore assume that $p\notin H$.}
\end{rem}
\subsection{Framed formal sequences and rings of mixed characteristic}
\begin{lem}\label{pasm2}
There exists $g\in W\left[\left[u_1,...,u_n\right]\right]$ with coefficients in $W^{\times}$ such that:
\[R\simeq W\left[\left[u_1,...,u_n\right]\right]/(p-g),\]
where $W$ is a complete regular local ring of dimension $1$ whose maximal ideal is generated by $p$.
\end{lem}
\noindent\textit{Proof}: We know that there exists a surjective morphism: \[\varphi:W\left[\left[u_1,...,u_n\right]\right]\twoheadrightarrow R,\] such that $\varphi(u_i)=x_i$ and $\varphi_{\vert\:W}=id_W$. Since $R$ is a domain (see \cite{EGA4-1}, Corollary 17.1.3), $\ker \varphi$ is a prime ideal and, comparing dimensions, we deduce that $ht(\ker \varphi)\leqslant 1$. Now $ W\left[\left[u_1,...,u_n\right]\right]$ is factorial, so $\ker \varphi$ is a principal ideal, generated by some $f$.
Since $p\in\mathfrak m$, there exist $a_1,...,a_n\in R$ such that:
\[p=a_1x_1+...+a_nx_n.\]
Since $\varphi$ is surjective, there exist $b_1,...,b_n\in W\left[\left[u_1,...,u_n\right]\right]$ such that $\varphi(b_i)=a_i$, $1\leqslant i\leqslant n$. Set $g=b_1u_1+...+b_nu_n$; then $p-g\in \ker\varphi$.
\\If one of the $b_i$ is divisible by $p$, writing $b_i=pb'_i$, $b'_i\in W\left[\left[u_1,...,u_n\right]\right]$, we replace $b_i$ by $b'_ig$. Iterating this process, after at most countably many steps we may assume that none of the $b_i$ is divisible by $p$, and hence $b_i\in W^\times$, $1\leqslant i\leqslant n$.
\\Since $p$ is a regular parameter of $W\left[\left[u_1,...,u_n\right]\right]$, the ideal $(p-g)$ is prime, of height $1$, and contained in $\ker\varphi$, whence: $$\ker\varphi=(p-g),$$
with $g\in (u_{1},...,u_{n})$ with coefficients not divisible by $p$, hence in $W^{\times}$.
\\\qed
\begin{rem}
\textup{If $R$ is ramified ($p\in\mathfrak m^2$), then $g\in\left(u_1,...,u_n\right)^2$.}
\end{rem}
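\begin{rem}
\textup{A minimal illustrative example: take $W=\mathbb Z_p$ and $n=1$. Then $R=\mathbb Z_p\left[\left[u_1\right]\right]/(p-u_1^2)\simeq\mathbb Z_p[\sqrt p]$ is a complete regular local ring of dimension $1$, ramified since $p=u_1^2\in\mathfrak m^2$; here $g=u_1^2\in(u_1)^2$, in accordance with the preceding remark.}
\end{rem}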
From now on we assume that:
\[R= W\left[\left[u_1,...,u_n\right]\right]/(p-g),\]
with $g\in\left(u_1,...,u_n\right)$ with coefficients in $W^{\times}$. The maximal ideal of $R$ is $\mathfrak{m}=\left(u_1,...,u_{n}\right)$.
~\\\indent For $j\in\lbrace 1,...,n\rbrace$, let $K_j$ denote the field of fractions of $W\left[\left[u_1,...,u_j\right]\right]$. For $j\in \lbrace r+1,...,n\rbrace$, let $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ denote the set of key polynomials of the extension $K_{j-1}\hookrightarrow K_{j-1}(u_{j})$. If $\gamma=(\gamma_1,...,\gamma_l)$, we write:
\[\boldsymbol Q_{j,l+1}^\gamma=\prod\limits_{i=1}^lQ_{j,i}^{\gamma_i}.\]
For $j\in\lbrace r,...,n\rbrace$ and $\boldsymbol{\mathrm i}=(i_{r+1},...,i_j)\in\Lambda_{r+1}\times...\times\Lambda_j$, we define by induction a valuation $\nu_{\boldsymbol{\mathrm i}}$ of $K_j$ as follows:
\\If $j=r$, set $\nu_\emptyset=\nu_{\vert\:K_r}$. Assume that the valuation $\nu_{(i_{r+1},...,i_{j-1})}$ of $K_{j-1}$ has already been constructed. If $f\in K_{j-1}\left[u_j\right]$, $\nu_{\boldsymbol{\mathrm i}}(f)$ is defined as the extension of $\nu_{(i_{r+1},...,i_{j-1})}(f)$ determined by $\boldsymbol Q_{j,i_j+1}$. If $f\in K_r\left[\left[u_{r+1},...,u_j\right]\right]$, choose $N$ large enough that $f=f_1+f_2$ with:
\begin{enumerate}
\item[(1)] $f_1\in K_r\left[\left[u_{r+1},...,u_{j-1}\right]\right]\left[u_j\right]$;
\item[(2)] $f_2\in\left(u_j^N\right)K_r\left[\left[u_{r+1},...,u_j\right]\right]$;
\item[(3)] $\nu_{0,u}(f_2)>\nu_{\boldsymbol{\mathrm i}}(f_1)$.
\end{enumerate}
We then set $\nu_{\boldsymbol{\mathrm i}}(f)=\nu_{\boldsymbol{\mathrm i}}(f_1)$.
\begin{lem}\label{remregulier} Assume that, for $j\in\lbrace r+1,...,n\rbrace$, the set $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ is a $1$-complete set of key polynomials for the valuation $\nu_{\vert K_j}$ with no limit key polynomial. If $\nu(p)\notin p\Gamma$, then, up to a framed formal sequence, we may assume that $R$ is of the form:
\[R=R[r]\left[ \left[ u_{r+1},...,u_{n}\right] \right] ,\]
where $R[r]$ is a complete regular local ring (possibly ramified) of dimension $r$ such that $\nu_{\vert R[r]}$ is monomial with respect to the regular system of parameters of $R[r]$ and of maximal rational rank.
\end{lem}
\noindent\textit{Proof}:
Consider the element $g\in W\left[\left[u_1,...,u_n\right]\right]$ of Lemma \ref{pasm2}.
By Theorem \ref{existpolycle}, for every $j\in\lbrace r+1,...,n\rbrace$ the collection $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ forms a complete set of key polynomials; hence there exists $\boldsymbol{\mathrm i}=(i_{r+1},...,i_n)\in\Lambda_{r+1}\times...\times\Lambda_n$ such that:
\[\nu_{\boldsymbol{\mathrm i}}(g)=\nu(g)\textup{ and }\nu\left(Q_{j,i}\right)\leqslant \nu(p),\]
for every $i\leqslant i_j$, $j\in\lbrace r+1,...,n\rbrace$ (recall that, in view of Remark \ref{ppasdansH} and since $p=g$ in $R$, we have $\nu(g)\in\Gamma_1$).
\\Let $\overline g$ denote the image of $g$ modulo $p$ in $k\left[\left[u_1,...,u_n\right]\right]$ and let $\overline\nu_{\boldsymbol{\mathrm i}}$ be the valuation defined on $k\left(\left(u_1,...,u_r\right)\right)\left[\left[u_{r+1},...,u_n\right]\right]$ in the same way as $\nu_{\boldsymbol{\mathrm i}}$, but working with elements modulo $p$. Applying Theorem \ref{thmeclatformcar0} to the valuation $\overline\nu_{\boldsymbol{\mathrm i}}$,
there exists a framed formal sequence $k\left[\left[u_1,...,u_n\right]\right]\rightarrow k'\left[\left[u'_1,...,u'_n\right]\right] $ such that $\overline{g}$ is a monomial in $u'$ times a unit of $ k'\left[\left[u'_1,...,u'_n\right]\right]$. We then have:
\[\nu(g)=\nu_{\boldsymbol{\mathrm i}}(g)=\overline\nu_{\boldsymbol{\mathrm i}}\left(\overline{g}\right)=\nu_{0,u'}\left(\overline{g}\right).\]
Applying at each step of the algorithm of Theorem \ref{thmeclatformcar0}
the same changes of variables to $W\left[\left[u_1,...,u_n\right]\right]$, we obtain a sequence $W\left[\left[u_1,...,u_n\right]\right]\rightarrow (R^{(2)},u^{(2)})$ such that:
\[g=u_{1}^{(2) \alpha_{1}}...u_{r}^{(2) \alpha_{r}}z+ph,\]
where $\alpha_{1},...,\alpha_{r}\in\mathbb{Z}$, $z\in R^{(2) \times}$, $h\in R^{(2)}$. Now the algorithm of Theorem \ref{thmeclatformcar0} consists of a repetition of elementary uniformizing $n$-sequences (Definition \ref{suiteelementaire}); hence, by Proposition \ref{propconstruceclatencad} and by the choice of $\boldsymbol{\mathrm i}$: \[h\in \left(u_{1}^{(2)},...,u_{n}^{(2)}\right).\]
We deduce that:
\[h\not\in R^{(2) \times}.\]
We can then write:
\[p-g=p(1-h)-u_{1}^{(2) \alpha_{1}}...u_{r}^{(2) \alpha_{r}}z=w(p-u_{1}^{(2) \alpha_{1}}...u_{r}^{(2) \alpha_{r}}z'),\]
where $w=1-h\in R^{(2) \times}$ and $z'=zw^{-1}\in R^{(2) \times}$.
\\Up to a framed formal sequence, we may therefore assume that, in $R$, we have:
\[p=u_{1}^{\alpha_{1}}...u_{r}^{\alpha_{r}}z\]
with $\alpha_{1},...,\alpha_{r}\in\mathbb{Z}$, $z\in R^{\times}$.
\\By hypothesis, since $\nu(p)\notin p\Gamma$ and $R$ is complete, there exists $\alpha_{i}\notin p\mathbb Z$, and hence:
\[z^{1/\alpha_{i}}\in R.\]
After the change of variable:
\[v_{i}=u_{i}z^{1/\alpha_{i}},\]
we may assume that:
\[p=u_{1}^{\alpha_{1}}...u_{r}^{\alpha_{r}}\in \mathfrak{m}^{2}.\]
Thus, up to a framed formal sequence, $R$ can be written in the form:
\[R=W\left[\left[u_1,...,u_n\right]\right]/\left(p-u_{1}^{\alpha_{1}}...u_{r}^{\alpha_{r}}\right)\simeq R[r]\left[ \left[ u_{r+1},...,u_{n}\right] \right] ,\]
where $R[r]=W\left[\left[u_1,...,u_r\right]\right]/\left(p-u_{1}^{\alpha_{1}}...u_{r}^{\alpha_{r}}\right)$ is a complete regular local ring (possibly ramified) of dimension $r$ such that $\nu_{\vert R[r]}=\nu_{0,(u_1,...,u_r)}$ and $rat.rk\left(\nu_{\vert R[r]}\right)=r$.\\\qed
\subsection{A first monomialization theorem}
~\smallskip ~\\\indent From now on and until the end of Section \ref{carmixtemono}, we assume that:
\[\nu(p)\notin p\Gamma,\]
\[R=R[r]\left[ \left[ u_{r+1},...,u_{n}\right] \right] ,\]
where $R[r]$ is a complete regular local ring (possibly ramified) of dimension $r$ such that $\nu_{\vert R[r]}$ is monomial with respect to the regular system of parameters of $R[r]$ and of maximal rational rank.
\\\indent For $j\in \lbrace r+1,...,n\rbrace$, let $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ denote the set of key polynomials of the extension $Frac\left(R[r]\left[ \left[ u_{r+1},...,u_{j-1}\right] \right]\right)\hookrightarrow Frac\left(R[r]\left[ \left[ u_{r+1},...,u_{j-1}\right] \right]\right)(u_{j})$.
\begin{thm}\label{thmeclatform}
Under the preceding hypotheses, assume that, for $j\in \lbrace r+1,...,n\rbrace$, the set $\lbrace Q_{j,i}\rbrace_{i\in\Lambda_{j}}$ is a $1$-complete set of key polynomials with no limit key polynomial. Two cases arise:
\begin{enumerate}
\item[(1)] Either $H\neq (0)$ and there exists a framed formal sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
such that:
\[\left( e\left( R^{(l)},\nu^{(l)}\right) , e\left( R^{(l)},\nu^{(l)}\right) - r\left( R^{(l)},u^{(l)},\nu^{(l)}\right)\right) <_{lex}\left(e(R,\nu),n-r\right);\]
\item[(2)] Or $H=(0)$ and for every $f\in R$ there exists a framed formal sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-2}} & \left( R^{(l-1)},u^{(l-1)}\right) \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
such that $f$ is a monomial in $u^{(l)}$ times a unit of $R^{(l)}$.
\end{enumerate}
\end{thm}
\noindent\textit{Proof}: We argue by induction on $n-r$. If $n=r$ then $R=R[r]$. In particular,
\[\forall\:f\in R,\:\nu_{0,u}(f)=\nu(f).\]
By Remark \ref{Hnulcar0}, $H=(0)$. Take an element $f\in R$; by Theorem \ref{uniflocnondeg}, there exists a framed local sequence:
\[ \xymatrix{\left( R,u\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{i-2}} & \left( R^{(i-1)},u^{(i-1)}\right) \ar[r]^-{\pi_{i-1}} & \left( R^{(i)},u^{(i)}\right)} \]
such that $f$ is a monomial in $u^{(i)}$ times a unit of $R^{(i)}$. Passing to the completion at each step, we obtain the framed formal sequence satisfying (2).
\\ \indent Assume that $n-r>0$ and that we have already constructed a framed formal sequence for all strictly smaller values, satisfying the conclusion of Theorem \ref{thmeclatform}. We proceed as in the proof of Theorem \ref{thmeclatformcar0}.
\begin{prop}\label{princ}
By Lemma \ref{remregulier}, we may assume that $R$ can be written in the form:
\[R=R[r]\left[ \left[ u_{r+1},...,u_{n}\right] \right] ,\]
where $R[r]$ is a complete regular local ring (possibly ramified). Set $R_{n-1}=R[r]\left[ \left[ u_{r+1},...,u_{n-1}\right] \right]$ and assume that $H\not\subset R_{n-1}$ and $H\cap R_{n-1}=(0)$. Assume finally that Theorem \ref{thmeclatform} holds in dimension $n-1$.
\\Let $f\in H\setminus\lbrace 0\rbrace$. Up to a framed formal sequence, $f$ can be written in the form:
\[f=\alpha f_{n-1}P;\]
where $\alpha\in R^{\times}$, $f_{n-1}\in R_{n-1}$ and $P$ is a monic polynomial in $u_{n}$.
\end{prop}
\noindent\textit{Proof}: The proof is the same as that of Proposition \ref{princcar0}.
\\ \qed
\begin{coro}\label{hauteur}
Under the same hypotheses as in Proposition \ref{princ}, we have:
\[ht\left( H\right) \leqslant 1.\]
\end{coro}
\noindent\textit{Proof}: The proof is the same as that of Proposition \ref{hauteurcar0}.
\\ \qed
\begin{coro}\label{eng}
Under the hypotheses of Proposition \ref{princ}, up to a framed formal sequence, the ideal $H$ is principal, generated by a monic polynomial in $u_{n}$.
\end{coro}
\noindent\textit{Proof}: This is a direct consequence of Corollary \ref{hauteur}.
\\ \qed
\\ \indent Let $R^{(i)}$ be a local ring appearing in a framed formal sequence. By Lemma \ref{remregulier}, we may write $R^{(i)}$ in the form $B\left[ \left[ u_{n_{i}}\right] \right] $ where $B$ is a regular ring (possibly ramified), and if $H^{(i)}\cap R^{(i)}\neq (0)$, then $H^{(i)}\subset \mathfrak{m}^{(i) 2}$. By Corollary \ref{eng}, $H^{(i)}$ is generated by a monic polynomial in $u_{n_{i}}$.
\begin{prop}\label{eclatpoly}
Under the hypotheses of Theorem \ref{thmeclatform}, for every polynomial $f\in R[r]\left[\left[ u_{r+1},...,u_{n-1}\right]\right]\left[u_n\right]$ there exists a framed formal sequence $(R,u)\rightarrow (R',u')$ such that $f$ is a monomial in $u'$ times a unit of $R'$.
\end{prop}
\noindent\textit{Proof of Theorem \ref{thmeclatform} assuming Proposition \ref{eclatpoly}}:
It is the same as the proof of Theorem \ref{thmeclatformcar0}; it suffices to replace $k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]$ by $R[r]\left[\left[u_{r+1},...,u_{n-1}\right]\right]\left[u_n\right]$.\\ \qed
\\ \\\noindent\textit{Proof of Proposition \ref{eclatpoly}}:
It is the same as the proof of Proposition \ref{eclatpolycar0}; it suffices to replace $k\left[\left[u_{1},...,u_{n-1}\right]\right]\left[u_n\right]$ by $R[r]\left[\left[u_{r+1},...,u_{n-1}\right]\right]\left[u_n\right]$.
\\To conclude, it suffices to have the desired result on key polynomials, as we shall see in Proposition \ref{eclatpolycle}.
\begin{prop}\label{eclatpolycle}
Under the hypotheses of Theorem \ref{thmeclatform}, there exists a framed formal sequence: \[(R,u)\rightarrow (R',u'),\] where $u=(u_{1},...,u_{n})$, $u'=(u'_{1},...,u'_{n})$, satisfying the following properties:
\begin{enumerate}
\item For every $q\in\mathbb{N}^*$ such that $1\leqslant q \leqslant l_0$, the key polynomials $Q_{n,q}$ and $Q_{n,l}$ are monomials in $u'$ times a unit of $R'$;
\item In $R'$, $u'_{n}$ divides $Q_{n,l}$ but $u'^{2}_{n}$ does not divide $Q_{n,l}$.
\end{enumerate}
\end{prop}
\noindent\textit{Proof of Proposition \ref{eclatpoly} assuming Proposition \ref{eclatpolycle}}:
It is the same as the proof of Proposition \ref{eclatpolycar0}; it suffices to replace $k\left[\left[u_{1},...,u_{n-1}\right]\right]$ by $R[r]\left[\left[u_{r+1},...,u_{n-1}\right]\right]$.\\\qed
\\ \\
\noindent\textit{Proof of Proposition \ref{eclatpolycle}}:
It is the same as the proof of Proposition \ref{eclatpolyclecar0}.\\\qed
\section{A monomialization theorem without completion}\label{soussectpasformel}
\indent Let $\left( R,\mathfrak m,k\right)$ be a complete regular local ring of dimension $n$ such that $\mathfrak m=(u)=(u_1,...,u_n)$. Let $\nu$ be a valuation of $K=Frac(R)$ centered at $R$ with value group $\Gamma$. Let $\Gamma_1$ denote the smallest nonzero isolated subgroup of $\Gamma$. We set:
\[H=\lbrace f\in R\:\vert\: \nu(f)\notin\Gamma_{1}\rbrace.\]
We further assume that:
\[n=e(R,\nu)=emb.dim\left(R/H\right),\]
that is:
\[H\subset\mathfrak{m}^2.\]
The valuation $\nu$ under consideration is the composition of the rank $1$ valuation $\mu:L^{*}\rightarrow\Gamma_{1}$ centered at $R/H$, where $L=Frac(R/H)$, with the valuation $\theta :K^{*}\rightarrow \Gamma / \Gamma_{1}$, centered at $R_{H}$, such that $k_{\theta}\simeq \kappa(H)$.
\\\indent Consider a local subring $\left(T,\mathfrak m_T\right)$ of $R$, not necessarily noetherian, containing $u_1,...,u_n$ and such that $T/\mathfrak m_T\simeq k$.
Let $J\subset\lbrace 1,...,n\rbrace$ and $j\in J$ be such that:
\[\nu(u_j)\leqslant \nu(u_i),\:i\in J.\]
Let $\pi_0:(R,u)\rightarrow \left(R^{(1)},u^{(1)}\right)$ be the framed blow-up along $(u_J)$ with respect to $\nu$ (Definition \ref{eclatlocparrapnu}), and let $\mathfrak m^{(1)}$ denote the maximal ideal of $R^{(1)}$.
\begin{defi}\label{defisuitedefsur}
The \textbf{transform of} $\boldsymbol T$ \textbf{under} $\boldsymbol{\pi_0}$ is the ring:
\[T^{(1)}=T\left[u'_{j\setminus\lbrace 0\rbrace}\right]_{\mathfrak m_1\cap T\left[u'_{j\setminus\lbrace 0\rbrace}\right]}.\]
We say that the blow-up $\pi_0$ is \textbf{defined over} $\boldsymbol T$ if $u^{(1)}\subset T^{(1)}$.
\\For a framed local sequence of the form:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)}, \]
the notions of the \textbf{transform} $T^{(i)}$ of $T$ and of being \textbf{defined over} $\boldsymbol T$ are defined by induction on $1\leqslant i \leqslant l$. More precisely, the transform $T^{(i)}$ is defined only under the assumption that the framed local sequence $(R,u)\rightarrow \left(R^{(i-1)},u^{(i-1)}\right)$ is defined over $T$.
\end{defi}
We shall show that, independently of the characteristic of $k_\nu$, if there exists a $1$-complete set of key polynomials with no limit key polynomial for a complete regular local ring $(R,\mathfrak m,k)$, then there exists a framed local sequence (and no longer merely a framed formal sequence) that makes the invariant $e(R,\nu)$ decrease. If $R$ has mixed characteristic, one must further assume that $\nu(p)\not\in p\Gamma$, $p=char(k_\nu)$.
\begin{thm}\label{monomialisationpasformel}
Under the preceding hypotheses, assume moreover that there exists a $1$-complete set of key polynomials with no limit key polynomial. If $R$ has mixed characteristic, assume also that $\nu(p)\not\in p\Gamma$, where $p=char(k_\nu)$. Then:
\begin{enumerate}
\item
\begin{enumerate}
\item Either $H\neq (0)$ and there exists a framed local sequence $(R,u)\rightarrow (R',u')$ such that: $$e(R',\nu)<e(R,\nu);$$
\item Or $H=(0)$ and for every $f\in R$ there exists a framed local sequence $(R,u)\rightarrow (R',u')$ such that $f$ is a monomial in $u'$ times a unit of $R'$.
\end{enumerate}
\item The framed local sequence $(R,u)\rightarrow (R',u')$ of (1) can be chosen defined over $T$.
\end{enumerate}
\end{thm}
\noindent\textit{Proof}: By Theorems \ref{thmeclatformcar0} and \ref{thmeclatform}, for $f\in R$ there exists a framed formal sequence:
\[ \xymatrix{\left( R,u\right)=\left( R^{(0)},u^{(0)}\right) \ar[r]^-{\pi_{0}} & \left( R^{(1)},u^{(1)}\right) \ar[r]^-{\pi_{1}} & \ldots \ar[r]^-{\pi_{l-1}} & \left( R^{(l)},u^{(l)}\right)} \]
such that either $e\left(R^{(l)},\nu\right)<e(R,\nu)$ if $H\neq (0)$, or $f$ is a monomial in $u^{(l)}$ times a unit of $R^{(l)}$ if $H=(0)$. From this framed formal sequence we shall construct, by $\left(u^{(l)}\right)$-adic approximation, the desired framed local sequence $(R,u)\rightarrow (R',u')$.
\\More precisely, for $s\in\lbrace 1,...,l\rbrace$, consider $\pi_{s-1}:\left(R^{(s-1)},u^{(s-1)}\right)\rightarrow \left(R^{(s)},u^{(s)}\right)$, one of the transformations of the framed formal sequence; it consists of an elementary uniformizing sequence $\pi_{0,s}$ (Definition \ref{suiteelementaire}), which resolves the singularities of a certain key polynomial, followed by a formal completion. Thus, after renumbering the variables if necessary, $R^{(s)}$ is obtained from $R^{(s-1)}$ by adjoining rational expressions $u_1^{(s)},...,u_r^{(s)},u_n^{(s)}$ in elements of $R^{(s-1)}$ (whose denominators are monomials in $u^{(s-1)}$), and then passing to the completion at the center of the valuation $\nu$.
\\For $j\in\lbrace 1,...,n\rbrace$, let $\mu_{j,s}$ denote the sum of the $\nu$-values of the numerators and denominators of $u_j^{(s)}$, viewed as a monomial in $u^{(s-1)}$. We then set:
\[\mu_s=\max_{1\leqslant j \leqslant n}\lbrace\mu_{j,s}\rbrace.\]
Let $\beta\in\Gamma_1$ be such that $\beta>\sum\limits_{q=1}^l\mu_q$. Let $I_s$ be the $\nu_{0,u^{(s)}}$-ideal of $R^{(s)}$ defined by:
\[I_s=\left\lbrace g\in R^{(s)}\:\left\vert\:\nu_{0,u^{(s)}}(g)\geqslant\beta-\sum\limits_{q=1}^s\mu_q\right.\right\rbrace.\]
We construct, by induction on $s\in\lbrace 1,...,l\rbrace$, a framed local sequence:
\[ \xymatrix{\left( R,u\right)=\left( \tilde R^{(0)},\tilde u^{(0)}\right) \ar[r]^-{\tilde \pi_{0}} & \left( \tilde R^{(1)},\tilde u^{(1)}\right) \ar[r]^-{\tilde \pi_{1}} & \ldots \ar[r]^-{\tilde \pi_{l-1}} & \left( \tilde R^{(l)},\tilde u^{(l)}\right)}, \]
defined over $T$ such that, for every $s$ and $j\in\lbrace 1,...,n\rbrace$, we have:
\[\textup{(HR)}:\:\:\nu_{0,u^{(s-1)}}\left(\tilde u_j^{(s)}-u_j^{(s)}\right)>\sum\limits_{q=s+1}^l\mu_q.\]
Assume that the framed local sequence has been constructed up to step $s-1$. After renumbering the variables if necessary, we may assume that:
\[u_j^{(s)}=u_j^{(s-1)},\:r+1\leqslant j\leqslant n-1.\]
The induction hypothesis:
\[\nu_{0,u^{(s-2)}}\left(\tilde u_j^{(s-1)}-u_j^{(s-1)}\right)>\sum\limits_{q=s}^l\mu_q,\]
together with the fact that $u_1^{(s)},...,u_r^{(s)}$ are rational expressions in $u^{(s-1)}$, implies that (HR) holds for $j\in\lbrace 1,...,n-1\rbrace$.
\\We resume the notation of Subsection \ref{sectionsuiteelemunif}. Consider:
\[\sum\limits_{j=1}^r\alpha_j\nu\left(u_j^{(s-1)}\right)=\overline\alpha\nu\left(u_n^{(s-1)}\right),\]
the smallest $\mathbb Z$-linear combination of $\nu\left(u_1^{(s-1)}\right),...,\nu\left(u_r^{(s-1)}\right),\nu\left(u_n^{(s-1)}\right)$ such that $\overline\alpha\in\mathbb N^*$. Set:
\[y=\left(u_1^{(s-1)}\right)^{\alpha_1}...\left(u_r^{(s-1)}\right)^{\alpha_r}\]
and
\[Q^{(s)}=\sum\limits_{i=0}^db_iy^{d-i}\left(u_n^{(s-1)}\right)^{i\overline\alpha},\]
the polynomial $Q$ appearing in Proposition \ref{propcaspartpolycle} corresponding to the elementary uniformizing sequence $\pi_{0,s}$. For each $b_i$ appearing in $Q^{(s)}$, choose $\tilde b_i\in\left(\tilde R^{(s-1)}\right)^\times \cap T^{(s-1)}$ such that:
\[\nu_{0,u^{(s-1)}}\left(b_j^{(s)}-\tilde b_j^{(s)}\right)>\sum\limits_{q=s+1}^l\mu_q.\]
Set:
\[\tilde Q^{(s)}=\sum\limits_{i=0}^d\tilde b_iy^{d-i}\left(u_n^{(s-1)}\right)^{i\overline\alpha},\]
and let $\tilde\pi_{s-1}$ be the elementary uniformizing $n$-sequence determined by these data. Then, using $Q^{(s)}$ and $\tilde Q^{(s)}$, one shows that (HR) holds for $j=n$.
\\The framed local sequence just constructed by induction:
\[ \xymatrix{\left( R,u\right)=\left( \tilde R^{(0)},\tilde u^{(0)}\right) \ar[r]^-{\tilde \pi_{0}} & \left( \tilde R^{(1)},\tilde u^{(1)}\right) \ar[r]^-{\tilde \pi_{1}} & \ldots \ar[r]^-{\tilde \pi_{l-1}} & \left( \tilde R^{(l)},\tilde u^{(l)}\right)} \]
defined over $T$ and such that, for every $s$ and $j\in\lbrace 1,...,n\rbrace$:
\[\nu_{0,u^{(s-1)}}\left(\tilde u_j^{(s)}-u_j^{(s)}\right)>\sum\limits_{q=s+1}^l\mu_q,\]
implies that $in_\nu\left(u_j^{(s)}\right)=in_\nu\left(\tilde u_j^{(s)}\right)$, viewed as elements of $\left(gr_\nu\left(R^{(l)}\right)\right)^*$, an algebra containing the subalgebra $gr_\nu\left(\tilde R^{(s)}\right)$. Likewise, the monomial valuation $\nu_{0,u^{(l)}}$ of $Frac\left(R^{(l)}\right)$, restricted to $\tilde R^{(l)}$, coincides with the monomial valuation $\nu_{0,\tilde u^{(l)}}$. We then have $in_{\nu_{0,u^{(l)}}}\left(u_j^{(s)}\right)=in_{\nu_{0,u^{(l)}}}\left(\tilde u_j^{(s)}\right)$, viewed as elements of $\left(gr_{\nu_{0,u^{(l)}}}\left(R^{(l)}\right)\right)^*$, an algebra containing the subalgebra $gr_{\nu_{0,u^{(l)}}}\left(\tilde R^{(s)}\right)$.
\\If $H=(0)$ and $f\in R\setminus\lbrace 0\rbrace$ is monomialized by the framed formal sequence, the preceding equality implies that:
\[f=\varpi + \tilde f,\]
where $\varpi$ is a monomial in $\tilde u^{(l)}$ and $\nu_{0,\tilde u^{(l)}}(\tilde f)>\nu(\varpi)$.
\\If $H\neq (0)$ and $f\in H\setminus\lbrace 0\rbrace$ is such that its strict transform becomes a regular parameter in $R^{(l)}$, then:
\[f=\tilde Q^{(l)}+\tilde f,\]
where $\nu_{0,\tilde u^{(l)}}(\tilde f)>\nu(\tilde Q^{(l)})$. Applying Corollary \ref{coroideal}, after a monomial sequence $\left(\tilde R^{(l)},\tilde u^{(l)}\right)\rightarrow (R',u')$ (respectively, applying Proposition \ref{proppolycleplus}, after a framed local sequence independent of $\tilde u_n^{(l)}$, in the case $H\neq (0)$), we reduce to the situation where $\varpi$ divides $\tilde f$, that is, where $f=z\varpi$ with $z$ a unit of $R'$ (respectively, $f=zg$, where $g$ is a regular parameter of $R'$ and $z$ a unit of $R'$, in the case $H\neq (0)$).\\\qed
\section{Local uniformization theorems in characteristic zero}
Let $S$ be a noetherian local ring. To show that $S$ is transformed into a regular ring by a framed local sequence, one must show that $\widehat S_{\overline H}$ and $\widehat S/\overline H$ are, where $\overline H$ is the implicit prime ideal of $\widehat S$. By Theorem \ref{RHestreg}, if $S$ is quasi-excellent then $\widehat S_{\overline H}$ is regular. We first show that, under suitable hypotheses, $\widehat S/\overline H$ is regular as well. Finally, using these two results, we prove the local uniformization theorem for valuations of rank $1$, and then for valuations of arbitrary rank thanks to \cite{spivanova}.
\subsection{A preliminary local uniformization theorem}
\begin{thm}\label{thmprelimcar0}
Let $(S,\mathfrak m,k)$ be a noetherian local domain with field of fractions $L$ and let $\mu$ be a valuation of $L$ of rank $1$ with value group $\Gamma_1$, centered at $S$, such that $char\left(k_\mu\right)=0$.
\\Let $u=(u_1,...,u_n)$ be a minimal set of generators of $\mathfrak m$ and let $\overline H$ be the implicit prime ideal of $\widehat S$.
\\Let $f_1,...,f_s\in\mathfrak m$ be such that $\mu(f_1)=\min_{1\leqslant j\leqslant s}\lbrace\mu(f_j)\rbrace$. Then there exists a framed local sequence:
\[ \xymatrix{\left( S,u,k\right)=\left( S^{(0)},u^{(0)},k^{(0)}\right) \ar[r]^-{\rho_{0}} & \left( S^{(1)},u^{(1)},k^{(1)}\right) \ar[r]^-{\rho_{1}} & \ldots \ar[r]^-{\rho_{i-1}} & \left( S^{(i)},u^{(i)},k^{(i)}\right)}, \]
with the following properties:
\\Let $\overline H^{(i)}$ denote the implicit prime ideal of $\widehat{S^{(i)}}$ and $\overline{f_j}$ the image of $f_j\mod\overline H^{(i)}$, $1\leqslant j\leqslant s$; then:
\begin{enumerate}
\item $\widehat{S^{(i)}}/\overline H^{(i)}$ is regular;
\item For $1\leqslant j\leqslant s$, $\overline{f_j}$ is a monomial in $u^{(i)}$ times a unit of $\widehat{S^{(i)}}/\overline H^{(i)}$;
\item For $1\leqslant j\leqslant s$, $\overline{f_1}$ divides $\overline{f_j}$ in $\widehat{S^{(i)}}/\overline H^{(i)}$.
\end{enumerate}
\end{thm}
\noindent\textit{Proof}: Let $\sigma:S\rightarrow \widehat S$ denote the formal completion morphism. By Theorem \ref{valetenuniqidealimpl}, $\mu$ extends uniquely to a valuation $\widehat\mu$ centered at $\widehat S/\overline H$. Write $u=(y,x)$ with $x=(x_1,...,x_l)$, $l=emb.dim\left(\widehat{S}/\overline{H}\right)$, $y=(y_1,...,y_{n-l})$, such that the images of $x_1,...,x_l$ in $\widehat S/\overline H$ induce a minimal set of generators of $(\mathfrak m \widehat S)/\overline H$.
\\By the Cohen structure theorem, there exist a complete regular local ring $R$ of characteristic zero and a surjective morphism $\varphi$:
\[\varphi:R\twoheadrightarrow \widehat S/\overline H.\]
Set $H=\ker \varphi$; since $\overline H$ is a prime ideal (Theorem \ref{valetenuniqidealimpl}), $H$ is a prime ideal of $R$. We choose $R$ so that $\dim (R)=l$. Let $K$ denote the field of fractions of $R$. Let $\theta$ be a valuation of $K$, centered at $R_H$, such that $k_\theta=\kappa(H)$. Viewing $\widehat\mu$ as a valuation centered at $R/H$ via the morphism $\varphi$, we may consider the valuation $\nu=\widehat{\mu}\circ\theta$ centered at $R$ with value group $\Gamma$. Then $\Gamma_1$ is the smallest nonzero isolated subgroup of $\Gamma$ and:
\[H=\lbrace f\in R\:\vert\:\nu(f)\not\in\Gamma_1\rbrace.\]
Moreover, $char\left(k_\nu\right)=char\left(k_\mu\right)=0$. We have thus reduced to the hypotheses of Theorem \ref{thmeclatformcar0}.
\\Let $T=\varphi^{-1}(\sigma(S))$; it is a local subring of $R$ with maximal ideal $\varphi^{-1}(\sigma(\mathfrak m))=\mathfrak m\cap T$. Thus $T$ contains $x_1,...,x_l$ and:
\[T/(\mathfrak m\cap T)\simeq k.\]
Since Theorem \ref{thmeclatformcar0} holds in characteristic $0$, we may apply Theorem \ref{monomialisationpasformel}. Several cases arise:
\begin{enumerate}
\item If $H\neq (0)$, there exists a framed local sequence $(R,x)\rightarrow \left(R^{(i)},x^{(i)}\right)$ such that $e(R,\nu)$ decreases strictly. In particular, this case can occur only finitely many times, so we reach the situation where $H=(0)$ and hence $R/H$ is regular.
\item If $H=(0)$, then, for each $f_j$, $1 \leqslant j\leqslant s$, there exists a framed local sequence $(R,x)\rightarrow \left(R^{(i)},x^{(i)}\right)$ such that $f_j$ is a monomial in $x^{(i)}$ times a unit of $R^{(i)}$.
\end{enumerate}
By Proposition \ref{propgenerem}, the property of being a monomial times a unit is preserved by framed local sequences. Thus, iterating the procedure of (2), we reach the situation where all of $f_1,...,f_s$ are simultaneously monomials in $x^{(i)}$. After one more framed local sequence $(R,x)\rightarrow \left(R',x'\right)$, we may assume that the $f_j$ are monomials in $x'_1,...,x'_r$ only, $1\leqslant j\leqslant s$, $r=r(R,x,\nu)$. Finally, applying Corollary \ref{coroexistsuitlocale} several times, we reduce to the situation where each $f_j$ is a monomial in $x'_1,...,x'_r$, $1\leqslant j\leqslant s$, and, for $j,j'\in\lbrace 1,...,s\rbrace$, $f_j$ divides $f_{j'}$ or $f_{j'}$ divides $f_j$. Moreover, all these framed local sequences are defined over $T$.
Consider the following diagram:
\[ \xymatrix{\left( R,x,k\right) \ar[r]^-{\pi_{0}} \ar[d] & \left( R^{(1)},x^{(1)},k^{(1)}\right) \ar[r]^-{\pi_{1}} \ar[d] & \ldots \ar[r]^-{\pi_{i-1}} & \left( R^{(i)},x^{(i)},k^{(i)}\right)\ar[d] \\ \left(\widehat{S}/\overline{H},x,k\right)\ar[r]^-{\tilde\pi_{0}} & \left( \tilde S^{(1)},x^{(1)},k^{(1)}\right) \ar[r]^-{\tilde\pi_{1}} & \ldots \ar[r]^-{\tilde\pi_{i-1}} & \left( \tilde S^{(i)},x^{(i)},k^{(i)}\right)\\ \left( S,u,k\right) \ar[r]^-{\rho_{0}} \ar[u] & \left( S^{(1)},u^{(1)},k^{(1)}\right) \ar[r]^-{\rho_{1}} \ar[u] & \ldots \ar[r]^-{\rho_{i-1}} & \left( S^{(i)},u^{(i)},k^{(i)}\right)\ar[u] } \]
By what we have just seen, the first column and the first row have already been constructed. Passing to the strict transform of $R/H\simeq\widehat S/\overline H$ at each step of the sequence $(\pi_j)_{1\leqslant j \leqslant i-1}$, we construct the sequence of framed blow-ups $\left(\tilde\pi_j\right)_{1\leqslant j \leqslant i-1}$ of $\widehat S/\overline H$ defined over $S$. Finally, the sequence $\left(\tilde\pi_j\right)_{1\leqslant j \leqslant i-1}$ lifts to a framed local sequence $(\rho_j)_{1\leqslant j \leqslant i-1}$.
\\ If $R/H$ is singular, then by Theorem \ref{monomialisationpasformel} there exists a framed local sequence $(\pi_j)_{1\leqslant j \leqslant i-1}$ that makes $e(R,\nu)$ decrease. Thus the resulting framed local sequence $(\rho_j)_{1\leqslant j \leqslant i-1}$ has the property:
\[emb.dim\left(\widehat{S^{(i)}}/\overline{H}^{(i)}\right)<emb.dim\left(\widehat{S}/\overline{H}\right).\]
This can happen only finitely many times. After finitely many steps, we reach the situation where $\widehat{S^{(i)}}/\overline H^{(i)}$ is regular. Now, assuming that $\widehat{S^{(i)}}/\overline H^{(i)}$ is regular, consider nonzero elements $f_1,...,f_s$ of $S$ such that $\mu(f_1)=\min_{1\leqslant j\leqslant s}\lbrace\mu(f_j)\rbrace$; then, by (2) above, we deduce that, for $1\leqslant j \leqslant s$, the $f_j\mod\overline H^{(i)}$ are monomials in $u^{(i)}$ and $f_1\mod\overline H^{(i)}$ divides $f_j\mod\overline H^{(i)}$.\\\qed
\subsection{Embedded local uniformization for valuations of rank 1}
~\smallskip ~\\ \indent Before stating and proving the embedded local uniformization theorem for valuations of rank $1$, we give a slightly more general lemma, independent of the characteristic.
\begin{lem}\label{lemmetechniquespiva}(\cite{spiva}, Lemma 16.3) Let $(A,\mathfrak m,k)$ be a noetherian local ring, $\nu$ a valuation centered at $A$, and $J$ a non-maximal prime $\nu$-ideal of $A$. Set $h=ht(J)$. Assume that $A_J$ and $A/J$ are regular. Let $u=(u_1,...,u_n)$ be a minimal set of generators of $\mathfrak m$ and assume that $u=(x,y)$ with $x=(x_1,...,x_l)$ and $y=(y_1,...,y_{n-l})$ such that:
\begin{enumerate}
\item $x$ induces a regular system of parameters of $A/J$;
\item there exist a minimal set of generators $(\widehat{y}_1,...,\widehat{y}_{n-l})$ of $J$ and monomials $\varpi_1,...,\varpi_{n-l}$ in $x$ with $\varpi_1/.../\varpi_{n-l}$, such that $(\widehat{y}_{n-l-h+1},...,\widehat{y}_{n-l})$ induces a regular system of parameters of $A_J$ and, for every $N\in\mathbb N^*$, there exists $v_j\in A^\times$ such that:
\[\widehat{y}_j-y_j-\varpi_jv_j\in\varpi_j\mathfrak m^N,\]
$1\leqslant j\leqslant n-l$. Note that, by convention, we may have $y_j=\widehat y_j$, $\varpi_j=0$, $v_j=1$ and $(y)=J$.
\end{enumerate}
Let $f_1,...,f_s\in A$ be such that:
\[\nu(f_1)\leqslant...\leqslant \nu(f_s).\]
Let $(T,\mathfrak m_T)$ be a local subring of $A$, not necessarily noetherian, such that $T/\mathfrak m_T=k$. Finally, assume that for all $g_1,...,g_t\in A$ such that:
\[\nu(g_1)\leqslant...\leqslant \nu(g_t),\]
there exists a framed local sequence $(A,u)\rightarrow (A',u')$ independent of $y$ and defined over $T$ such that, for every $1\leqslant j \leqslant t$, $g_j\:mod\: J'$ is a monomial in $u'$ and $g_q\:mod\: J'$ divides $g_i\:mod\: J'$, $1\leqslant q\leqslant i\leqslant t$, where $J'$ is the strict transform of $J$ in $A'$.
\\Then there exists a framed local sequence $(A,u)\rightarrow (A'',u'')$ with respect to $\nu$, defined over $T$, such that $A''$ is regular.
\\Assume moreover that at least one of the following two conditions holds:
\begin{enumerate}
\item[(3)] $f_i\notin J$, $1\leqslant i \leqslant s$;
\item[(4)] $y_j=\widehat{y}_j$, $1\leqslant j \leqslant n-l$ (hence $J=(y)$), $T=A$ and, for every $1\leqslant i \leqslant s$, $f_i$ is a monomial in $(y_{n-l-h+1},...,y_{n-l})$ and $f_i/f_{i+1}$ in $A_J$.
\end{enumerate}
The above framed local sequence $(A,u)\rightarrow (A'',u'')$ can then be chosen so that the $f_i$ are monomials in $u''$ times a unit of $A''$ and $f_i/f_{i+1}$ in $A''$, $1\leqslant i \leqslant s$.
\end{lem}
\noindent\textit{Proof}: We only give a sketch of the proof; for more details, see \cite{spiva}. If $J=(0)$ there is nothing to prove; assume therefore that $J\neq (0)$. From the framed local sequence $(A,u)\rightarrow (A',u')$ we want to construct a framed local sequence $(A,u)\rightarrow (A'',u'')$ defined over $T$ such that $A''$ is regular. For this it suffices to have:
\[A''=Fitt_h(J''/J''^2),\]
where $Fitt_h(J''/J''^2)$ is the $h$-th Fitting ideal of $J''/J''^2$. By hypothesis, and after a framed local sequence involving only variables from $x$, we may reduce to the situation where $Fitt_h(J'/J'^2)$ is principal, generated by a monomial in $x$, denoted $a$. After renumbering the variables of $y$ if necessary, we may assume that there exist $n-l-h$ relations of the form:
\[\psi_j=a\widehat y_j+\sum\limits_{q=n-l-h+1}^{n-l}a_{j,q}\widehat y_q+g_j,\]
where $g_j\in J'^2$ and $a$ divides $a_{j,q}$ for $1\leqslant j\leqslant n-l-h$ and $n-l-h+1\leqslant q\leqslant n-l$. If (4) holds, then:
\[\nu_{0,u}(y_j)>\nu(a),\:1\leqslant j\leqslant n-l.\]
Since $J$ is a $\nu$-ideal, $y_j\in J$ and $a\notin J$. Assume that (4) does not hold and take $N\in\mathbb N^*$ such that:
\[N>\dfrac{1}{\nu_{0,u}(\mathfrak m)}\left(\nu(\varpi_{n-l})+\max\limits_{\underset{f_q\notin J}{1\leqslant q\leqslant s}}\lbrace \nu(f_q),\nu(a)\rbrace\right).\]
Consider a variable $x_j$ of $x$ such that $x_j^\alpha$ divides $\varpi_1$ for some $\alpha\in\mathbb N^*$. We blow up the ideal $(y,x_j)$ and repeat this procedure $\alpha$ times. We do the same for all the other variables dividing $\varpi_1$. We then reach the situation where:
\[\nu_{0,u'}(y'_1)>\nu(a)+\nu(\varpi_{n-l})-\nu(\varpi_1).\]
We do the same for all the other variables of $y$; we are thus reduced to the situation where, for these new variables:
\[\nu_{0,u}(y'_j)>\nu(a),\:1\leqslant j\leqslant n-l.\]
For each variable $x_j$ of $x$ dividing $a$, we blow up the ideal $(y,x_j)$. These blow-ups have the effect of multiplying $a$ and the $a_{j,q}$ by $x_j$, and the $g_1,...,g_t$ by $x_j^\gamma$ with $\gamma\geqslant 2$. After finitely many iterations, $a$ divides $g_j$ and hence $a$ divides $\psi_j$, $1\leqslant j \leqslant n-l-h$. Thus, for $1\leqslant j \leqslant n-l-h$, the $y_j$ can be expressed as functions of the remaining variables modulo $\mathfrak m^2$. This makes $emb.dim(A)$ decrease, and $A$ is regular up to a framed formal sequence.
\\From now on we may assume that $h=n-l$; to finish, we must show that the $f_i$ are monomials in $u''$ times a unit of $A''$. Dividing $f_i$ by a monomial in $y$ if necessary, we may assume that (3) still holds. If (4) holds, then:
\[\nu_{0,u}(y'_j)>\nu(f_i),\:1\leqslant j\leqslant n-l,\:1\leqslant i \leqslant s.\]
If (4) does not hold, the preceding inequality remains true by the choice of $N$. Thus, for $1\leqslant i\leqslant s$, we have:
\[f_i=\rho_i+\tilde f_i,\]
where $\rho_i$ is a monomial in $x$ and $\nu_{0,u}(\tilde f_i)>\nu_{0,u}(\rho_i)$. Applying Corollary \ref{coroideal} to each $f_i$, $1\leqslant i\leqslant s$, we obtain the desired result.\\\qed
\\\indent We now turn to the embedded local uniformization theorem for valuations of rank $1$ on an equicharacteristic ring whose residue field has characteristic zero.
\begin{thm}\label{uniflocalerang1car0}
Let $(S,\mathfrak m,k)$ be a quasi-excellent local domain with field of fractions $L$ and let $\mu$ be a valuation of $L$ of rank $1$ with value group $\Gamma_1$, centered at $S$, such that $char\left(k_\mu\right)=0$.
\\Let $u=(u_1,...,u_n)$ be a minimal set of generators of $\mathfrak m$.
\\Let $f_1,...,f_s\in\mathfrak m$ be such that $\mu(f_1)=\min_{1\leqslant j\leqslant s}\lbrace\mu(f_j)\rbrace$. Then there exists a framed local sequence:
\[ \xymatrix{\left( S,u,k\right)=\left( S^{(0)},u^{(0)},k^{(0)}\right) \ar[r]^-{\rho_{0}} & \left( S^{(1)},u^{(1)},k^{(1)}\right) \ar[r]^-{\rho_{1}} & \ldots \ar[r]^-{\rho_{i-1}} & \left( S^{(i)},u^{(i)},k^{(i)}\right)}, \]
with the following properties:
\begin{enumerate}
\item $S^{(i)}$ is regular;
\item For $1\leqslant j\leqslant s$, $f_j$ is a monomial in $u^{(i)}$ times a unit of $S^{(i)}$;
\item For $1\leqslant j\leqslant s$, $f_1$ divides $f_j$ in $S^{(i)}$.
\end{enumerate}
In other words, $\mu$ admits an embedded local uniformization in the sense of Property \ref{uniflocplongint}.
\end{thm}
\noindent\textit{Proof}: We keep the notation of Theorem \ref{thmprelimcar0}. We have seen that there exists a surjective morphism:
\[\psi:\widehat S\twoheadrightarrow \widehat{S}/\overline{H}\simeq R/H.\]
By Theorem \ref{thmprelimcar0}, after an auxiliary framed local sequence, we may assume that $\widehat{S}/\overline{H}$ is regular, and hence that $R/H\simeq k\left[\left[x_{1},...,x_l\right]\right]$. Thus there exist a set of generators $\widehat y=\left(\widehat y_1,...,\widehat y_{n-l}\right)$ of $\overline H$ and formal power series $\phi_j\in k\left[\left[x_{1},...,x_l\right]\right]$ such that:
\[\widehat y_j=y_j+\phi_j \in \widehat S,\:1\leqslant j\leqslant n-l.\]
After renumbering the $y_j$ if necessary, we may assume that:
\[\mu(y_1)\leqslant\mu(y_2)\leqslant...\leqslant\mu(y_{n-l}).\]
Applying Corollary \ref{coroideal} to the monomials of $\phi_j$, $1\leqslant j\leqslant n-l$, we may assume that:
\[\phi_j=\varpi_j\widehat v_j,\]
where the $\varpi_j$ are monomials in $x_1,...,x_l$, $\widehat v_j\in k\left[\left[x_{1},...,x_l\right]\right]^\times$, and:
\[\varpi_1/.../\varpi_{n-l}.\]
We thus deduce that:
\[\forall\:j\in\lbrace 1,...,n-l\rbrace,\:\forall\:N\in\mathbb N^*,\:\exists\:v_j\in S^\times,\:\widehat y_j-y_j-\varpi_jv_j\in\varpi_j\mathfrak m^N.\]
Finally, recall that, by Corollary \ref{RHestreg}, the ring $\widehat S_{\overline H}$ is regular. We then apply Lemma \ref{lemmetechniquespiva} to $A=\widehat S$, $J=\overline H$, $T=S$ and $\nu=\mu$. This yields an embedded local uniformization (Property \ref{uniflocplongint}) of $\widehat S$. Now $S$ is quasi-excellent, so the morphism $S\rightarrow \widehat{S}$ is regular, and we thus obtain an embedded local uniformization (Property \ref{uniflocplongint}) of $S$.\\\qed
\subsection{Embedded local uniformization theorems}
\begin{coro}\label{uniflocaleplongeecar0}
Let $(S,\mathfrak m,k)$ be a quasi-excellent local domain with field of fractions $L$ and let $\nu$ be a valuation of $L$ centered at $S$ with value group $\Gamma$ such that $char\left(k_\nu\right)=0$.
\\Then $\nu$ admits an embedded local uniformization in the sense of Property \ref{uniflocplongint}.
\end{coro}
\noindent\textit{Proof}: Apply Theorem \ref{uniflocalerang1car0} and Theorem 1.3 of \cite{spivanova}.\\\qed
\begin{coro}\label{thm1.6car0}
Let $(S,\mathfrak m,k)$ be a quasi-excellent local domain with field of fractions $L$ and let $\nu$ be a valuation of $L$ centered at $S$ with value group $\Gamma$ such that $char\left(k_\nu\right)=0$.
\\For $I$ an ideal of $S$, the pair $(S,I)$ admits an embedded local uniformization with respect to $\nu$ in the sense of Definition \ref{defiuniflocpourpaire}.
\end{coro}
\noindent\textit{Proof}: This is an immediate application of Corollary \ref{uniflocaleplongeecar0}.\\\qed
\begin{coro}Let $S$ be a quasi-excellent scheme such that for every $\xi\in S$, $char(\mathcal O_{S,\xi}/\mathfrak m_{S,\xi})=0$. Let $X$ be an irreducible component of $S_{red}$ and $\nu$ a valuation of $K(X)$
centered at a point $\xi\in X$. Then there exists a blow-up $\pi: S'\rightarrow S$ along a subscheme of $S$, containing no irreducible component of $S_{red}$, with the following property: \\Let $X'$ be the strict transform of $X$ under $\pi$, $\xi'$ the center of $\nu$ on $X'$, and $D$ the exceptional divisor of $\pi$; then $(\mathcal{O}_{X',\xi'},\mathcal{I}_{D,\xi'})$ admits an embedded local uniformization with respect to $\nu$ in the sense of Definition \ref{defiuniflocpourpaire}.
\end{coro}
\noindent\textit{Proof}: This is a direct application of Corollary \ref{thm1.6car0}.\\\qed
\begin{thm}\label{thmfinal}
Let $(S,\mathfrak m,k)$ be a (not necessarily integral) quasi-excellent local ring. Let $P$ be a minimal prime ideal of $S$ and let $\nu$ be a valuation of the field of fractions of $S/P$ centered at $S/P$ with value group $\Gamma$ such that $char\left(k_\nu\right)=0$.
\\Then there exists a local blow-up $\pi:S\rightarrow S'$ with respect to $\nu$ such that $S'_{red}$ is regular and $Spec(S')$ is normally flat along $Spec(S'_{red})$; in other words, the ring $S$ admits a local uniformization with respect to $\nu$ in the sense of Property \ref{uniflocanneaunonint}.
\end{thm}
\noindent\textit{Proof}:
By Corollary \ref{thm1.6car0}, there exists a framed local sequence $(S,u)\rightarrow (S',u')$ along centers containing no irreducible component of the strict transform of $Spec\left(S_{red}\right)$, such that $Spec\left(S'_{red}\right)$ is regular. We may therefore assume that $S_{red}$ is regular. It remains to show that there exists a framed local sequence such that $Spec(S')$ is normally flat along $Spec(S'_{red})$.
\\Let $(y_1,...,y_h)=\sqrt{(0)}\subset S$; this is the ideal defining $ Spec\left(S_{red}\right)$ in $Spec(S)$.
\\Recall that, for a noetherian local ring $(R,\mathfrak n)$, the \textit{tangent cone} of $Spec(R)$ is defined by:
\[Spec\left(\bigoplus\limits_{n\geqslant 0}\mathfrak n^{n}/\mathfrak n^{n+1}\right).\]
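For instance (a standard example, added here for illustration), for $R=k[[x,y]]/(y^2-x^3)$ one has $\bigoplus\limits_{n\geqslant 0}\mathfrak n^{n}/\mathfrak n^{n+1}\simeq k[x,y]/(y^2)$, since the initial form of $y^2-x^3$ is $y^2$; the tangent cone of the cusp is thus the doubled line $y^2=0$.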
It suffices to construct a framed local sequence such that the tangent cone of $Spec(S')$ is defined by an ideal generated by elements of $k\left[\overline{y'_1},...,\overline{y'_h}\right]$, where $y'_j$ is the strict transform of $y_j$ in $S'$ and $\overline{y'_j}$ is the natural image of $y'_j$ in the graded algebra of $S'$, $1\leqslant j\leqslant h$.
\\Set $A=S_{red}$, $A'=S'_{red}$; we can then write $S$ in the form:
\[S=A\left[y_1,...,y_h\right]/I.\]
Let $f_1,...,f_s\in A\left[y_1,...,y_h\right]$ be a set of generators of $I$ and $\left(x_1,...,x_r\right)$ a minimal set of generators of the maximal ideal of $A$.
\\For $1\leqslant j\leqslant s$, write $f_j=\sum\limits_\alpha c_{j,\alpha}y^\alpha\in A[y]$. We shall construct a framed local sequence and a partition $(u')=(y',x')$ of $(u')$, where $(y')$ is the strict transform of $(y)$.
\\Let $\nu_{0,x'}$ be the monomial valuation of $A'$ associated with $x'$ and $\left\lbrace\nu\left(x'_j\right)\right\rbrace_j$ (Corollary \ref{defivaluationmono}). By Corollary \ref{thm1.6car0}, we can construct a framed local sequence $(S,u)\rightarrow (S',u')$ such that the $c_{j,\alpha}$ are monomials in $x'$ times a unit of $A'$.
\\For every $j\in\lbrace 1,...,s\rbrace$, set $\mu_j=\max\lbrace N\in\mathbb N^*\:\vert\:f_j\in(y)^N\rbrace$ and let $f'_j=\sum\limits_\alpha c'_{j,\alpha}y'^\alpha\in A'[y']$ be the strict transform of $f_j$ in $S'=A'[y']$. For each $x'_t$ appearing in some $c'_{j,\alpha}$, for some $j$ and some $\alpha$ with $\vert\alpha\vert=\mu_j$, we blow up the ideal $(y'_1,...,y'_h,x'_t)$ sufficiently many times. The process stops when, for $1\leqslant j\leqslant s$ and every $\alpha$ with $\vert\alpha\vert>\mu_j$, there exists $\tilde\alpha$ with $\vert\tilde\alpha\vert=\mu_j$ such that $c'_{j,\tilde\alpha}$ divides $c'_{j,\alpha}$. By Corollary \ref{coroideal}, we know that, for each $j$, there does exist $\tilde\alpha$ with $\vert\tilde\alpha\vert=\mu_j$ such that $c'_{j,\tilde\alpha}$ divides $c'_{j,\alpha}$ for every $\alpha$. Thus the tangent cone of $Spec(S')$ is defined by polynomials depending only on $\overline{y'_1},...,\overline{y'_h}$. We conclude that $Spec(S')$ is normally flat along $Spec(S'_{red})$.\\\qed
\begin{coro}
Let $S$ be a quasi-excellent scheme such that for every $\xi\in S$, $char(\mathcal O_{S,\xi}/\mathfrak m_{S,\xi})=0$. Let $X$ be an irreducible component of $S_{red}$ and $\nu$ a valuation of $K(X)$
centered at a point $\xi\in X$. Then there exists a blow-up $\pi: S'\rightarrow S$ along a subscheme of $S$, containing no irreducible component of $S_{red}$, with the following property: \\Let $X'$ be the strict transform of $X$ under $\pi$ and $\xi'$ the center of $\nu$ on $X'$; then $\xi'$ is a regular point of $X'$ and $S'$ is normally flat along $X'$ at $\xi'$.
\end{coro}
\bibliographystyle{plain}
We have demonstrated a StyleGAN2-based digitization approach using a non-linear 3DMM that can reliably generate high-quality normalized textured 3D face models from challenging unconstrained input photos. Despite the limited amount of available training data (only thousands of subjects), we have shown that our two-stage face inference method, combined with a hybrid \textit{Normalized Face Dataset}, is effective in digitizing relightable and animation-friendly avatars, and that it produces results of quality comparable to state-of-the-art techniques whose generated faces are not normalized.
Our experiments show that simply adopting existing methods with limited normalized facial training data is insufficient to capture the likeness and fine-scale details of the original subject; a perceptual refinement stage is necessary to transfer person-specific facial characteristics from the input photo. Our experiments also show that a perceptual loss, which matches deep features, is more robust than a pixel loss alone and better preserves semantically meaningful facial features.
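To make the perceptual-loss idea concrete, a minimal PyTorch-style sketch is given below (illustrative only: the VGG16 backbone, the layer indices, and the L1 feature distance are assumptions, not our exact implementation; inputs are assumed to be ImageNet-normalized):
\begin{verbatim}
import torch
import torchvision.models as models

class PerceptualLoss(torch.nn.Module):
    # Sum of L1 distances between deep features of a frozen VGG16.
    # Layer indices 3, 8, 15, 22 correspond to relu1_2 ... relu4_3.
    def __init__(self, layers=(3, 8, 15, 22)):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = vgg.features.eval()
        self.layers = set(layers)
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x, y):
        loss = 0.0
        for i, block in enumerate(self.features):
            x, y = block(x), block(y)
            if i in self.layers:
                loss = loss + torch.nn.functional.l1_loss(x, y)
        return loss
\end{verbatim}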
Compared to state-of-the-art non-linear 3DMMs, our method produces lighting- and expression-normalized face models, which is a requirement for the seamless integration of avatars into virtual environments. Furthermore, our experiments indicate that our results are not only perceptually superior, but also quantitatively more accurate and robust than those of existing methods.
\paragraph{Limitations and Future Work.} As shown in Fig.~\ref{fig:failure_case}, the effectiveness of our method in generating faces with normalized expressions and lighting is limited by imperfect training data and challenging input photos. In particular, residual expressions and specularities can still be found in the generated results.
Furthermore, the fundamental problem of disentangling identity from expressions, or lighting conditions from skin tones, is ill-posed. Nevertheless, we believe that such disentanglement can be improved using superior training data. In the future, we would like to explore how to increase the resolution and fidelity of the digitized assets and potentially combine our method with high-fidelity facial asset inference techniques such as~\cite{Lattas_2020_CVPR,chen2019photo,Yamaguchi_2018}.
\section*{Appendix I. Additional Comparisons}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/supplemental/comparison_supp_v2.pdf}
\caption{Additional comparisons. The first row shows the input images and the second row our results. The remaining rows show the 3D faces reconstructed by \cite{Lee_2020_CVPR,Gecer_2019_CVPR,Tran_2019_CVPR,deng2019accurate,Genova_2018_CVPR,Thies_2016_CVPR}, respectively.}
\label{fig:comparison_supp}
\end{figure}
In Fig.~\ref{fig:comparison_supp}, we compare our method with several recent state-of-the-art single-view face reconstruction approaches. Thies et al.~\cite{Thies_2016_CVPR} extend the seminal work of Blanz and Vetter~\cite{Blanz_1999_SIGGRAPH} with facial expression blendshapes and iteratively optimize for shape, texture, and lighting by minimizing energy terms based on facial landmark and pixel color constraints. We visualize the avatars with and without the facial expressions of the corresponding input photo. Neutralizing facial expressions is straightforward: all the blendshape coefficients are set to $0$.
We observe that the linear morphable face model is unable to recover features such as facial hair, as well as high-frequency geometry and appearance details. As a result, the face renderings often lack the likeness of the original subject and fall within the so-called ``uncanny valley''.
Genova et al.~\cite{Genova_2018_CVPR} predict the identity coefficients of a linear 3DMM using a deep neural network, and Deng et al.~\cite{deng2019accurate} additionally predict the lighting and face pose along with the linear 3DMM coefficients. Their models are still restricted to the linear subspace, which limits their ability to represent facial details. Gecer et al.~\cite{Gecer_2019_CVPR} introduce an unsupervised training approach to regress linear 3DMM coefficients for geometry and adopt a Generative Adversarial Network for generating a nonlinear texture. Tran et al.~\cite{Tran_2019_CVPR} present an approach that learns additional proxies as a means to avoid strong regularization, which efficiently captures detailed geometry and texture with a simple decoder architecture. They do not separate identity from expression during training. Lee et al.~\cite{Lee_2020_CVPR} demonstrate the latest work on generating 3D face models from a single input photograph using non-linear 3DMMs and an uncertainty-aware mesh decoder. The resulting 3D faces are very faithful to the input image, but the lighting and expressions are baked into the texture and mesh. As a result, neither Lee et al.~\cite{Lee_2020_CVPR} nor the other non-linear 3DMM techniques above produce normalized results as shown in our paper. Note that the results in Fig.~\ref{fig:comparison_supp} from row 3 to row 7 were taken directly from the paper of Lee et al.~\cite{Lee_2020_CVPR}, and the renderings may have slight inconsistencies.
\section*{Appendix II. Additional Evaluations}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/projection_loss.pdf}
\caption{Justification of the loss-function choice for GAN inversion. From top to bottom: ground-truth geometry and texture; reconstruction results optimized with the pixel and adversarial losses; reconstruction results when the perceptual loss is added.}
\label{fig:projection_loss}
\end{figure}
In Sec. 3.1, we adopt a two-step training method: we first train $G$ and then freeze $G$ in order to compute the code inversion and to train $R$.
Fig.~\ref{fig:projection_loss} shows that the latent codes can be effectively recovered with our choice of loss function in Eq. 2. Specifically, while the pixel loss and adversarial loss alone cannot preserve the overall similarity, adding the perceptual loss improves the high-level appearance in the rendered views.
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/supplemental/interpolation.pdf}
\caption{Illustration of latent vector interpolation. The four input 3D avatars are shown
at the corners, while all the in-between interpolations are based on
bi-linear interpolated weights.}
\label{fig:interpolation}
\end{figure}
\paragraph{Face Interpolation.}
In Fig.~\ref{fig:interpolation}, we show interpolation results of multiple 3D avatars.
The four input avatars are shown at the corners. All the interpolation results are obtained via bi-linear interpolation of the embeddings $\textbf{w}$ computed from the four images.
As shown in the results, realistic, plausible, and artifact-free avatar assets can be generated using our method, which can be useful for a wide range of avatar manipulation and synthesis tasks.
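For concreteness, a minimal sketch of this blending step (the $14 \times 512$ latent shape follows the $\mathcal{W}+$ space of StyleGAN2; all function and variable names are our own illustrative choices, not the actual implementation):
\begin{verbatim}
import numpy as np

def bilerp(w00, w10, w01, w11, s, t):
    """Bi-linear blend of four W+ latent codes
    (each of shape 14 x 512); s, t in [0, 1]."""
    top = (1.0 - s) * w00 + s * w10
    bottom = (1.0 - s) * w01 + s * w11
    return (1.0 - t) * top + t * bottom

# Hypothetical usage: a 5 x 5 grid of in-between
# avatars, each decoded by the synthesis network G.
corners = [np.random.randn(14, 512) for _ in range(4)]
grid = [[bilerp(*corners, s, t)
         for s in np.linspace(0, 1, 5)]
        for t in np.linspace(0, 1, 5)]
\end{verbatim}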
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/refinement_loss.pdf}
\caption{Visual comparison illustrating the effects of losses in the perceptual refinement step, where the full model leads to better results. From left to right: (a) input image; (b) refinement result with identity loss and $\textbf{w}$ regularization; (c) refinement result with perceptual loss and $\textbf{w}$ regularization; (d) refinement result with all three losses.}
\label{fig:optimization_loss}
\end{figure}
\paragraph{Optimization Loss.}
Fig.~\ref{fig:optimization_loss} shows the benefit of each loss term in $L_{refine}$ for the perceptual refinement. Combining identity loss, perceptual loss, and $\textbf{w}$ regularization allows us to generate clean assets, where the resulting subject preserves the likeness of the subject in the original input photo, but at the same time, ensures consistent and detailed assets with normalized lighting and neutral expressions.
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/supplemental/lighting_consistency_v2.pdf}
\caption{Consistent reconstructions of albedo texture under varying extreme illuminations.}
\label{fig:illuminations_consistency}
\end{figure}
\paragraph{Illumination Consistency.}
Fig.~\ref{fig:illuminations_consistency} demonstrates consistent face reconstructions of albedo textures from varying illumination conditions. In this experiment we move a light with different extreme colors around the subjects and demonstrate how a consistent 3D avatar with a nearly identical dark skin tone is correctly reconstructed for each input photo.
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/supplemental/expression_consistency_v3.pdf}
\caption{Consistent reconstructions of 3D avatars from images with different expressions.}
\label{fig:expression_consistency}
\end{figure}
\paragraph{Expression Consistency.}
We demonstrate how consistent faces are reconstructed from input images with different expressions in Fig.~\ref{fig:expression_consistency}. In particular, our method digitizes consistent 3D avatars with neutral expressions despite a wide range of diverse and extreme facial expressions of the same person, as shown in the first row and the third row. While some amount of the input expression is reflected in the normalized results, the overall neutralization is significantly superior to that of existing techniques, especially for extreme input facial expressions.
\begin{figure}[hbt]
\centering
\includegraphics[width=3.25in]{figs/pose_consistency.pdf}
\caption{Consistent reconstructions under different poses.}
\label{fig:pose_consistency}
\end{figure}
\paragraph{Pose Consistency.}
Fig.~\ref{fig:pose_consistency} shows consistent reconstructions from varying head poses. For side views, our method can still generate highly consistent textures and geometries despite non-visible face regions in the input image.
\section*{Appendix III. Additional Results}
To demonstrate the robustness of our technique, we provide $156$ additional examples with a wider range of extremely challenging input photographs in Fig.~\ref{fig:more_results1}, Fig.~\ref{fig:more_results2}, Fig.~\ref{fig:more_results3}, and Fig.~\ref{fig:more_results4}. These figures illustrate input pictures, successful normalized 3D face reconstructions, as well as renderings using HDRI-based lighting environments. Our results include diverse ethnicities, both genders, and varying age groups, ranging from children to the elderly. We also showcase a wide range of complex lighting conditions, stylized photographs, black and white portraits, drawings and paintings, facial occlusions, as well as a wide range of extreme head poses and facial expressions. Notice that we also show several results of the same person, but reconstructed from entirely different input images.
\begin{figure*}[hbt!]
\centering
\includegraphics[width=6.75in]{figs/supplemental/results_5.pdf}
\caption{Batch 1 additional results of normalized 3D avatars from a single input image. None of these subjects have been used in training for our networks.}
\label{fig:more_results1}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=6.75in]{figs/supplemental/results_6.pdf}
\caption{Batch 2 additional results of normalized 3D avatars from a single input image. None of these subjects have been used in training for our networks.}
\label{fig:more_results2}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=6.75in]{figs/supplemental/results_7.pdf}
\caption{Batch 3 additional results of normalized 3D avatars from a single input image. None of these subjects have been used in training for our networks.}
\label{fig:more_results3}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=6.75in]{figs/supplemental/results_8.pdf}
\caption{Batch 4 additional results of normalized 3D avatars from a single input image. None of these subjects have been used in training for our networks.}
\label{fig:more_results4}
\end{figure*}
\end{document}
\section{Introduction}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/rig.pdf}
\caption{Automated digitization of normalized 3D avatars from a single photo.}
\label{fig:rig}
\end{figure}
The creation of high-fidelity virtual avatars has been mostly reserved for professional production studios and typically involves sophisticated equipment and controlled capture environments. Automated 3D face digitization methods that are based on unconstrained images such as selfies or downloaded internet pictures are gaining popularity for a wide range of consumer applications, such as immersive telepresence, video games, or social media apps based on personalized avatars.
Cutting-edge single-view avatar digitization solutions are based on non-linear 3D morphable face models (3DMM) generated from GANs~\cite{Tran_2018_CVPR,Tran_2019_CVPR,Gecer_2019_CVPR,Lee_2020_CVPR}, outperforming traditional linear models~\cite{Blanz_1999_SIGGRAPH} which often lack facial details and likeness of the subject. To successfully train these networks, hundreds of thousands of subjects in various lighting conditions, poses, and expressions are needed. While highly detailed 3D face models can be recovered, the generated textures have the lighting of the environment baked in, and expressions are often difficult to neutralize, making these methods unsuitable for applications that require relighting or facial animation. In particular, inconsistent textured models are obtained when images are taken under different lighting conditions.
Collecting the same volume of 3D face data with neutral expressions and controlled lighting conditions is intractable. Hence, we introduce a GAN-based facial digitization framework that can generate a high-quality textured 3D face model with neutral expression and normalized lighting using only thousands of real world subjects. Our approach consists of dividing the problem into two stages. The first stage uses a non-linear morphable face model embedded into a StyleGAN2~\cite{Karras_2020_CVPR} network to robustly generate detailed and clean assets of a normalized face. The likeness of the person is then transferred from the input photograph using a perceptual refinement stage based on iterative optimization using a differentiable renderer. StyleGAN2 has proven to be highly expressive in generating and representing real world images using an inversion step to convert an image to a latent vector~\cite{Abdal_2019_ICCV,Shen_2020_CVPR,Abdal_2020_CVPR,guan2020collaborative}, and we adopt the same two-step GAN-inversion approach to learn facial geometry and texture jointly. To enable 3D neutral face inference from an input image, we connect the image with the embedding space of our non-linear 3DMM using an identity regression network based on identity features from FaceNet~\cite{Schroff_2015_CVPR}.
To train a sufficiently effective generator, we introduce a new \textit{Normalized Face Dataset} which consists of a combination of high-fidelity photogrammetry scans, frontal and neutral portraits in diffuse lighting conditions, as well as fake subjects generated using a pre-trained StyleGAN2 network with FFHQ dataset~\cite{Karras_2019_CVPR}.
Despite our data augmentation effort, we show that our two-stage approach is still necessary to handle the large variation of possible facial appearances, expressions and lighting conditions. We demonstrate the robustness of our digitization framework on a wide range of extremely challenging examples, and provide extensive evaluations and comparisons with current state-of-the-art methods. Our method outperforms existing techniques in terms of digitizing textured 3D face models with neutral expressions and diffuse lighting conditions. Our normalized 3D avatars can be converted into parametric models with complete bodies and hair, and the solution is suitable for animation, relighting, and integration with game engines as shown in Fig.~\ref{fig:rig}.
We summarize our key contributions as follows:
\begin{itemize}
\item \vspace{-0.05in}We propose the first StyleGAN2-based approach for digitizing a 3D face model with neutral expressions and diffusely lit textures from an unconstrained image.
\item \vspace{-0.1in}We present a two-stage digitization framework which consists of a robust normalized face model inference stage followed by a perception-based iterative face refinement step.
\item \vspace{-0.1in}We introduce a new data generation approach and dataset based on a combination of photogrammetry scans, photographs of expression and lighting normalized subjects, and generated fake subjects.
\item \vspace{-0.1in}Our method outperforms existing single-view 3D face reconstruction techniques for generating normalized faces, and we also show that our digitization approach works using limited subjects for training.
\end{itemize}
\section{Normalized 3D Avatar Digitization}
\label{sec:methods}
\begin{figure}[hbt!]
\includegraphics[width=3.2in]{figs/3_methods/overview_v3.pdf}
\caption{Two-stage facial digitization framework. The avatar is first predicted in the inference stage and then improved to match the input image in the refinement stage.}
\label{fig:inference_overview}
\end{figure}
An overview of our two-stage facial digitization framework is illustrated in Fig.~\ref{fig:inference_overview}. At the inference stage, our system uses a pre-trained face recognition network FaceNet~\cite{Schroff_2015_CVPR} to extract a person-specific facial embedding feature given an unconstrained input image. This identity feature is then mapped to the latent vector $\textbf{w} \in \mathcal{W}+$ in the latent space of our \textit{Synthesis Network} using an \textit{Identity Regressor}. The synthesis network decodes $\textbf{w}$ to an expression neutral face geometry and a normalized albedo texture. For the refinement, the latent vector $\textbf{w}$ produced by the inference is then optimized iteratively using a differentiable renderer by minimizing the perceptual difference between the input image and the rendered one via gradient descent.
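The following minimal sketch summarizes the two stages in code form; the module names are placeholders standing in for the networks described above, and the stand-in loss merely makes the snippet executable:
\begin{verbatim}
import torch

@torch.no_grad()
def infer_latent(image, facenet, regressor):
    # Stage 1: image -> identity feature -> latent w.
    return regressor(facenet(image))

def refine_latent(w_init, loss_fn, n_iters=200, lr=0.01):
    # Stage 2: iterative refinement of w; loss_fn(w)
    # should decode w, render the avatar, and compare
    # it with the input photo (see L_refine below).
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = loss_fn(w)
        loss.backward()
        opt.step()
    return w.detach()

# Executable stand-ins for FaceNet and R:
facenet = torch.nn.Linear(256, 128)
regressor = torch.nn.Linear(128, 14 * 512)
w0 = infer_latent(torch.randn(1, 256), facenet, regressor)
w1 = refine_latent(w0, lambda w: ((w - w0) ** 2).mean())
\end{verbatim}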
\subsection{Robust GAN-Based Facial Inference}
\label{sec:inference_pipeline}
Our synthesis network $G$ generates the geometry as well as the texture in UV space. Each pixel in the UV map represents the 3D position and the RGB albedo color of the corresponding vertex using a 6-channel tuple $(\textsf{r},\textsf{g},\textsf{b},\textsf{x},\textsf{y},\textsf{z})$. The synthesis network is first trained using a GAN to ensure robust and high-quality mapping from any normally distributed latent vector $\mathcal{Z}\sim \mathcal{N}(\mu,\sigma)$. Then, the identity regression network $R$ is trained by freezing $G$ to ensure accurate mapping from the identity feature of an input image. Further details of each network are described below.
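As a minimal illustration of this data layout (the $256\times256$ map resolution is introduced below; all names are our own):
\begin{verbatim}
import torch

uv = torch.randn(1, 6, 256, 256)  # stand-in for G(w)
albedo, position = uv[:, :3], uv[:, 3:]
# Each mesh vertex owns a fixed (u, v) coordinate in
# this map (cylindrical parameterization, see below),
# from which its color and 3D position are read back.
\end{verbatim}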
\begin{figure}[hbt!]
\includegraphics[width=3.2in]{figs/3_methods/generator.pdf}
\caption{GAN-based geometry and texture synthesis.}
\label{fig:generator}
\end{figure}
We train our synthesis network to embed a nonlinear 3D Morphable Model into its latent space, in order to model the cross correlation between the 3D neutral face geometry and the neutral albedo texture, as well as to generate high fidelity and diverse 3D neutral faces from a latent vector. Inspired by~\cite{li2020learning}, we adopt the StyleGAN2~\cite{Karras_2020_CVPR} architecture to train a morphable face model using 3D geometry and albedo texture as shown in Fig.~\ref{fig:generator}. {Rather than predicting vertex positions directly, we infer vertex position offsets relative to the mean face mesh to improve numerical stability.} To jointly learn geometry and texture, we project the geometry representation of classical linear 3DMMs $S \in \mathbb{R}^{3 \times N} $, which consists of a set of $N = 13557$ vertices on the face surface, onto a UV space using cylindrical parameterization. The vertex map is then rasterized to a 3-channel position map with $256 \times 256$ pixels. Furthermore, we train 3 discriminators jointly, including 2 individual ones for albedo and vertex position as well as a joint discriminator taking both maps as input. The individual discriminators ensure the quality and sharpness of each generated map, while the joint discriminator can learn and preserve their correlated distribution. This GAN is trained solely from the provided ground truth 3D geometries and albedo textures without any knowledge of the identity features.
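A possible realization of this three-discriminator scheme is sketched below with the StyleGAN2-style non-saturating logistic loss (an assumption on our part; the discriminator modules here are trivial stand-ins, not the actual networks):
\begin{verbatim}
import torch
import torch.nn.functional as F

def discriminator_loss(d_alb, d_pos, d_joint, fake, real):
    # Two critics on the 3-channel maps, one on the
    # concatenated 6-channel map (logistic GAN loss).
    def d_loss(d, f, r):
        return (F.softplus(d(f)).mean()
                + F.softplus(-d(r)).mean())
    return (d_loss(d_alb, fake[:, :3], real[:, :3])
            + d_loss(d_pos, fake[:, 3:], real[:, 3:])
            + d_loss(d_joint, fake, real))

# Hypothetical usage with trivial stand-in critics:
critic = lambda x: x.mean(dim=(1, 2, 3))
loss = discriminator_loss(critic, critic, critic,
                          torch.randn(2, 6, 64, 64),
                          torch.randn(2, 6, 64, 64))
\end{verbatim}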
\begin{figure}[hbt!]
\includegraphics[width=3.2in]{figs/3_methods/projection.pdf}
\caption{Our GAN-inversion searches for a corresponding $\textbf{w}$ which can reconstruct the target geometry and texture.}
\label{fig:projection}
\end{figure}
After obtaining $G$, we retrieve the corresponding input latent code via our code inversion algorithm. Inspired by~\cite{Abdal_2019_ICCV,zhu2019lia}, we choose the disentangled and extended latent space $\mathcal{W}+ := \mathbb{R}^{14 \times 512}$ of StyleGAN2 as the inversion space to achieve better reconstruction accuracy. As shown in Fig.~\ref{fig:projection}, we adopt an optimization approach to find the embedding of a target pair of position and albedo map with the following loss function:
\begin{equation}
L_{inv} = L_{pix} + \lambda_{1}L_{\text{LPIPS}} + \lambda_{2}L_{adv}
\label{eq:L_inversion}
\end{equation}
where $L_{pix}$ is the $L_1$ pixel error of the synthesized position and texture maps, $L_{\text{LPIPS}}$ is the LPIPS distance~\cite{zhang2018perceptual} as a perceptual loss, and $L_{adv}$ is the adversarial loss favoring realistic reconstruction results using the three discriminators trained with $G$. Note that while LPIPS outperforms other perceptual metrics in practice~\cite{zhang2018perceptual}, it is trained with real images and measuring the perceptual loss directly on our UV maps would lead to unstable results. Therefore, we use a differentiable renderer~\cite{ravi2020pytorch3d} to render the geometry and texture maps from three fixed camera viewpoints and compute the perceptual loss based on these renderings.
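A minimal sketch of this inversion objective is given below; the weights and module names are placeholders, and for brevity only the joint discriminator enters the adversarial term here, whereas the full loss uses all three discriminators:
\begin{verbatim}
import torch
import torch.nn.functional as F

def inversion_loss(w, target_uv, G, render_views,
                   lpips, d_joint, lam1=0.1, lam2=0.01):
    # L_inv = L_pix + lam1 * L_LPIPS + lam2 * L_adv;
    # render_views(uv) returns renderings from the
    # three fixed camera viewpoints.
    uv = G(w)
    l_pix = (uv - target_uv).abs().mean()
    l_lpips = torch.stack(
        [lpips(r, t) for r, t in
         zip(render_views(uv), render_views(target_uv))]
    ).mean()
    l_adv = F.softplus(-d_joint(uv)).mean()
    return l_pix + lam1 * l_lpips + lam2 * l_adv
\end{verbatim}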
Finally, the identity regressor $R$ can be trained using the solved latent codes of the synthesis network and their corresponding identity features from the input images.
\subsection{Unsupervised Dataset Expansion}
\label{sec:frontal_neutral_dataset}
\begin{figure}[hbt!]
\includegraphics[width=3.2in]{figs/3_methods/fake_person.pdf}
\caption{Examples of synthetic faces from our \textit{Normalized Face Dataset}.}
\label{fig:fake_person}
\end{figure}
While datasets exist for frontal human face images in neutral expression~\cite{Ma_2015_data, du2014compound, GROSS2010807, doi:10.1080/02699930903485076}, the amount of such data is still limited and the lighting conditions often vary between datasets. Instead of manually collecting more images from the Internet for expanding our training data, we propose an automatic approach to produce frontal neutral portraits based on the pre-trained StyleGAN2 network trained with the FFHQ dataset. {Similar to a recent technique for semantic face editing~\cite{Shen_2020_CVPR}, we train a neural network to predict the identity attributes $\alpha$ of an input image in latent space. We use images collected from the Internet as input, estimate $\alpha$ for each, and apply it to $\textbf{w}_{mean}$, a fixed latent vector which generates a mean, frontalized face. We then use a latent editing vector $\beta$ to neutralize the expressions. The final latent value $\textbf{w}' = \textbf{w}_{mean} + \alpha + \beta$, fed into StyleGAN2, produces a frontalized and neutralized face.}
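A minimal sketch of this editing step (all module names are illustrative assumptions):
\begin{verbatim}
import torch

def normalized_portrait(image, attr_net, w_mean,
                        beta, stylegan2):
    # w' = w_mean + alpha + beta: identity attributes
    # alpha are moved onto the mean frontal face, then
    # the fixed vector beta neutralizes the expression.
    alpha = attr_net(image)
    return stylegan2(w_mean + alpha + beta)
\end{verbatim}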
Some examples are shown in Fig.~\ref{fig:fake_person}. We further emphasize that all images in our \textit{Normalized Face Dataset} are frontal and have neutral expressions. Also, these images have well conditioned diffuse scene illuminations, which are preferred for conventional gradient descent-based 3D face reconstruction methods.
For each synthesized image, we apply light normalization~\cite{Nagano_2019_Siggraph} and 3D face fitting based on Face2Face~\cite{Thies_2016_CVPR} to generate a 3D face geometry, and then project the light-normalized image to obtain the albedo texture. Instead of relying entirely on the linear 3DMM, which results in coarse and overly smooth geometry, we first run our inference pipeline to generate the 3D geometry and take it as the initialization for the Face2Face optimization. After optimization, the resulting geometry is in fact the non-linear geometry predicted by our inference pipeline plus a linear combination of blendshape basis vectors optimized by Face2Face, thus preserving its non-linear expressiveness. Also note that the frontal poses of the input images facilitate direct projection onto UV space to reconstruct high-fidelity texture maps.
The complete training procedure works as follows: we first collect a high-quality \textit{Scan Dataset} of $431$ subjects with accurate photogrammetry scans, with $63$ subjects from 3D Scan Store~\cite{3D_scan_store} and $368$ subjects from Triplegangers~\cite{tripelgangers}. The synthesis network $G_0$ is first trained from this scan data and then temporarily frozen for latent code inversion and the training of the identity regressor $R_0$. These bootstrapping networks $(R_0, G_0)$ trained on the small \textit{Scan Dataset} are applied to our \textit{Normalized Face Dataset} to infer the geometry and texture, which are then optimized and/or corrected by the Face2Face algorithm. Next, the improved geometry and texture are added back into the training of $(R_0, G_0)$ to obtain the fine-tuned networks $(R_1, G_1)$ with improved accuracy and robustness.
Our final \textit{Normalized Face Dataset} consists of $5601$ subjects, with $368$ subjects from Triplegangers, $597$ from Chicago Face Dataset (CFD)~\cite{Ma_2015_data}, $230$ from the compound facial expressions (CFE) dataset~\cite{du2014compound}, $153$ from The CMU Multi-PIE Face Dataset~\cite{GROSS2010807}, $67$ from Radboud Faces Database (RaFD)~\cite{doi:10.1080/02699930903485076}, and the remaining $4186$ generated by our method. We use most of the frontal and neutral face images that are available to increase diversity, but still rely on the large volume of synthetic data for the training.
\begin{figure*}[hbt!]
\centering
\includegraphics[width=6.5in]{figs/4_results/comparison_v4.pdf}
\caption{Qualitative comparison with a state-of-the-art 3D face reconstruction method. The first row shows the input images, the second row shows our results, and the third row shows the reconstructed 3D faces obtained by \cite{Lee_2020_CVPR}.}
\label{fig:comparison}
\end{figure*}
\subsection{Perceptual Refinement}
\label{sec:perceptual_optimization}
While the inference pipeline described in Sec.~\ref{sec:inference_pipeline} with training data from Sec.~\ref{sec:frontal_neutral_dataset} can reliably infer the normalized texture and geometry from an unconstrained image, a second stage with perceptual refinement can help determine a neighbor of the predicted latent code in the embedding space that matches the input image better. The work from Shi et al.~\cite{shi2019probabilistic} shows that an embedding space learned for face recognition is often noisy and ambiguous due to the nature of fully unconstrained input data. While FaceNet predicts the most likely latent code, the variance (or \textit{uncertainty} in Shi et al.'s work) could be large. A small perturbation of the latent code may not affect the identity feature training at all. On the other hand, such a small error in the identity code may cause greater inconsistency in our inference pipeline after passing $R$ and $G$.
An ``end-to-end'' refinement step is introduced to handle never-before-seen images while ensuring consistency between the input image and the final renderings obtained with the predicted geometry and texture. Fig.~\ref{fig:inference_overview} shows the end-to-end architecture for this refinement step. We reuse the differentiable renderer to generate a 2D face image $\hat{I}$ from the estimated 3D face, and compute the perceptual distance with the input image $I$. To project the 3D face back to the head pose in image $I$, we train a regression network with ResNet50~\cite{He_2016_CVPR} as backbone to estimate the camera $c=[t_x,t_y,t_z,r_x,r_y,r_z,f]^T$ from $I$, where $[t_x,t_y,t_z]^T$ and $[r_x,r_y,r_z]^T$ denote the camera translation and rotation and $f$ is the focal length. The network is trained using the accurate camera data from the \textit{Scan Dataset} and the estimated camera data from the \textit{Normalized Face Dataset}, computed by Face2Face. Furthermore, in order to blend the projected face-only image with the background from the original image $I$, we train a PSPNet~\cite{zhao2017pspnet} with ResNet101~\cite{He_2016_CVPR} as backbone using CelebAMask-HQ~\cite{CelebAMask-HQ}. We then blend the rendered image $\hat{I}$ into the segmented face region from $I$ {to produce $I_{0}$}.
The final loss is simply represented as:
\begin{equation}
L_{refine} = L_{w} + \lambda_1 L_{\text{LPIPS}} + \lambda_2 L_{id}\quad,
\label{eq:L_refine}
\end{equation}
where $L_{w}$ is a regularization term on $\textbf{w}$, i.e., the Euclidean distance between the variable $\textbf{w}$ and its initial prediction derived by $R$, enforcing the similarity between the modified latent and the initial prediction. $L_{LPIPS}$ is the perceptual loss measured by LPIPS distance~\cite{zhang2018perceptual} between $I_{0}$ and $I$, which enables improved matching in terms of robustness and better preservation of semantically meaningful facial features compared to using pixel differences. $L_{id}$ is the cosine similarity between the identity feature of $\hat{I}$ and $I$, to preserve consistent identity.
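A minimal sketch of $L_{refine}$ (weights and module names are placeholders; the identity term is written as $1-\cos$ so that it is minimized):
\begin{verbatim}
import torch
import torch.nn.functional as F

def refine_loss(w, w_init, image, blended,
                facenet, lpips, lam1=1.0, lam2=1.0):
    # L_refine = L_w + lam1 * L_LPIPS + lam2 * L_id;
    # `blended` is the rendering composited into the
    # segmented face region of the input (I_0 above).
    l_w = (w - w_init).norm()
    l_lpips = lpips(blended, image).mean()
    l_id = 1.0 - F.cosine_similarity(
        facenet(blended), facenet(image), dim=-1).mean()
    return l_w + lam1 * l_lpips + lam2 * l_id
\end{verbatim}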
\section{Related Works}
\label{sec:relatedworks}
While a wide range of avatar digitization solutions exist for professional production, they mostly rely on sophisticated 3D scanning equipment (e.g., multi-view stereo, photometric stereo, depth sensors, etc.) and controlled capture settings~\cite{Beeler_2010_SIGGRAPH, ghosh2011multiview, Fyffe_2016_EG}. We focus our discussion on monocular 3D face reconstruction methods, as they provide the most accessible and flexible way of creating avatars for end-users, where only a selfie or a downloaded internet photo is needed.
\paragraph{3D Morphable Face Models.}
Linear 3D Morphable Models (3DMM) were introduced by Blanz and Vetter~\cite{Blanz_1999_SIGGRAPH} two decades ago and have been established as the de-facto standard for 3D face reconstruction from unconstrained input images. The linear parametric face model encodes shape and textures using principal component analysis (PCA) built from $200$ laser scans. Various extensions of this work include the use of larger numbers of high-fidelity 3D face scans~\cite{Booth_2016_CVPR,Booth_2017_CVPR}, web images~\cite{Ira_2013_ICCV}, as well as facial expressions often based on PCA or Facial Action Coding System (FACS)-based blendshapes~\cite{blanz2003reanimating,Vlasic:2005:FTM,cao2014facewarehouse}.
The low dimensionality and effectiveness of 3DMMs make them suitable for robust 3D face modeling as well as facial performance capture in monocular settings. To reconstruct a textured 3D face model from a photograph, conventional methods iteratively optimize for shape, texture, and lighting condition by minimizing energy terms based on constraints such as facial landmarks, pixel colors~\cite{Blanz_1999_SIGGRAPH,Sami_2005_CVPR,GVWT13,Shi:2014:AAH,Cao:2014:DDE,Ichim:2015:DAC,Thies_2016_CVPR,Garrido_2016_SIGGRAPH,cao2016real,Li_2017_SIGGRAPHASIA}, or depth information if available such as for the case of RGB-D sensors~\cite{weise09face,weise2011realtime,Bouaziz:2013:OMR,li2013realtime,hsieh2015unconstrained,hifi3dface2020tencentailab,Hu_2017_SIGGRAPHASIA}.
While robust face reconstruction is possible, linear face models combined with gradient-based optimization are ineffective in handling the wide variation of facial appearances and challenging input photographs. For instance, detailed facial hair and wrinkles are hard to generate, and the likeness of the original subject is typically lost after the reconstruction.
Deep learning-based inference techniques~\cite{wu2019mvf, Gecer_2019_CVPR,deng2019accurate,Genova_2018_CVPR,Tewari_2017_ICCV,Tran_2017_CVPR,Dou_2017_CVPR,Bas_2017_ICCV_Workshops,Tewari_2017_ICCV} were later introduced and have demonstrated significantly more robust facial digitization capabilities but they are still ineffective in capturing facial geometric and appearance detail due to the linearity and low dimensionality of the face model. Several post-processing techniques exist and use inferred linear face models to generate high-fidelity facial assets such as albedo, normal, and specular maps for relightable avatar rendering~\cite{Lattas_2020_CVPR,chen2019photo,Yamaguchi_2018}. AvatarMe~\cite{Lattas_2020_CVPR} for instance uses GANFIT~\cite{Gecer_2019_CVPR} to generate a linear 3DMM model as input to their post processing framework. Our proposed method can be used as alternative input to AvatarMe, and we compare it to GANFIT later in Section~\ref{sec:results}.
More recently, non-linear 3DMMs have been introduced. Instead of representing facial shapes and appearances as a linear combination of basis vectors, these models are formulated implicitly as decoders using neural networks where the 3D faces are generated directly from latent vectors.
Some of these methods use fully connected layers or 2D convolutions in image space~\cite{Tran_2018_CVPR,Bagautdinov_2018_CVPR,feng2018prn,Tran_2019_CVPR,li2020learning}, while others use decoders in the mesh domain to represent local geometries~\cite{Litany_2018_CVPR, Ranjan_2018_ECCV, Zhou_2019_CVPR, Cheng_2019_MeshGAN, Abrevaya_2019_CVPR,Lee_2020_CVPR,Lin_2020_CVPR}. With the help of differentiable renderers~\cite{Tewari_2017_ICCV,Genova_2018_CVPR,ravi2020pytorch3d}, several methods~\cite{Tran_2018_CVPR,Tran_2019_CVPR,Lee_2020_CVPR} have demonstrated high-fidelity 3D face reconstructions using non-linear morphable face models with fully unsupervised or weakly supervised learning, which is possible thanks to massive amounts of images in the wild. While the reconstructed faces are highly detailed and accurate w.r.t. the original input image, the generated assets are neither suitable for relighting nor animation-friendly, since the lighting conditions of the environment and the expressions are baked into the output.
Our work focuses on producing normalized 3D avatars with unshaded albedo textures and neutral expressions. Due to the limited availability of training data with normalized faces and the wide variation of facial appearances and capture conditions, the problem is significantly more challenging and ill-posed.
\paragraph{Generative Adversarial Network.}
We adopt StyleGAN2~\cite{Karras_2020_CVPR} to encode our non-linear morphable 3D face model. Among all generative models in deep learning, Generative Adversarial Networks (GANs)~\cite{NIPS2014_5423} have achieved great success in producing realistic 2D natural images, nearly indistinguishable from real world images. After a series of advancements, state-of-the-art GANs like PGGAN~\cite{karras2018progressive}, BigGAN~\cite{brock2018large} and StyleGAN/StyleGAN2~\cite{Karras_2019_CVPR,Karras_2020_CVPR} have also proven effective in generating high-resolution images and handling an extremely wide range of variations. In this work, we mainly focus on adopting StyleGAN2~\cite{Karras_2020_CVPR} to jointly learn facial geometry and texture, since its intermediate latent representation has proven effective for reconstructing a plausible target image with clean assets~\cite{Abdal_2019_ICCV,Shen_2020_CVPR,Abdal_2020_CVPR,guan2020collaborative}.
\paragraph{Facial Image Normalization.}
To address the problem of unwanted lighting and expressions during facial digitization, several methods have been introduced to normalize unconstrained portraits. Cole et al.~\cite{cole2017synthesizing} introduced a deep learning-based image synthesis framework based on FaceNet's latent code~\cite{Schroff_2015_CVPR}, allowing one to generate a frontal face with neutral expression and normalized lighting from an input photograph. More recently, Nagano et al.~\cite{Nagano_2019_Siggraph} improved the method to generate higher resolution facial assets for the purpose of generating high-fidelity avatars. In particular, their method breaks down the inference problem into multiple steps, solving explicitly for perspective undistortion and lighting normalization, followed by pose frontalization and expression neutralization. While successful normalized portraits were demonstrated, their method relies on transferring details from the input subject to the generated output.
Furthermore, both methods rely on the linear 3DMMs for expression neutralization and thus cannot capture detailed appearance variations.
Neutralizing expressions with a nonlinear 3DMM, however, is not straightforward, since identity and expression are often entangled in the feature space. Our new normalization framework with GAN-based reconstruction fills this gap.
\section{Results}
\label{sec:results}
We demonstrate the performance of our method in Fig.~\ref{fig:teaser} and~\ref{fig:comparison}, and show how our method can handle extremely challenging unconstrained photographs with very harsh illuminations, extreme filtering, and arbitrary expressions. We can produce plausible textured face models where the likeness of the input subject is preserved and visibly recognizable. Compared to the state-of-the-art 3D face reconstruction method (see Fig.~\ref{fig:comparison}) based on non-linear 3DMMs, our method can neutralize expressions and produce an unshaded albedo texture suitable for rendering in arbitrary lighting conditions as demonstrated using various HDRI-based lighting environments. We also show in Fig.~\ref{fig:rig} how we can obtain a fully rigged 3D avatar from a single photo including body and hair, by adopting the hair digitization algorithm in~\cite{Hu_2017_SIGGRAPHASIA} (see accompanying video for live demo).
\paragraph{Evaluations.}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/f2f.pdf}
\caption{Face2Face optimization results. The first row is the original implementation~\cite{Thies_2016_CVPR}. The second row is our proposed improvement with nonlinear initialization.}
\label{fig:f2f}
\end{figure}
The dataset expansion of Sec.~\ref{sec:frontal_neutral_dataset} further improves the performance of $G$ and $R$ by providing more training data. Fig.~\ref{fig:f2f} compares the default Face2Face optimization using a linear 3DMM with the improved one using an initialization from $R_0$ and $G_0$.
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/projection_data.pdf}
\caption{Expressiveness of the synthesis network trained with different datasets. From top to bottom: The ground truth; The GAN-inversion results based on $G_0$ trained with \textit{Scan Dataset} only; The same process based on $G_1$, trained with \textit{Normalized Face Dataset}.}
\label{fig:projection_data}
\end{figure}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/regression_data_v2.pdf}
\caption{Quality of the regression network trained with different datasets. The first row shows the inference results by $R_0$, trained with the \textit{Scan Dataset}. The second row shows the results by $R_1$, trained with the \textit{Normalized Face Dataset}.}
\label{fig:regression_data}
\end{figure}
\begin{figure}[hbt!]
\includegraphics[width=3.25in]{figs/4_results/optimization_initialization_v2.pdf}
\caption{Qualitative comparison with different initialization schemes for iterative refinement.
The mean initialization starts optimization from a mean latent vector of our training dataset. The inference initialization starts from the latent vector predicted by $R$.}
\label{fig:optimization_initialization}
\end{figure}
With such synthetic training data, Fig.~\ref{fig:projection_data} shows the improved expressiveness of $G_1$ over $G_0$. Several artifacts of $G_0$ around the eyes and the lack of facial hair are fixed in $G_1$. In Fig.~\ref{fig:regression_data}, $R_1$ also shows a higher diversity of face shapes and superior accuracy compared to $R_0$ after training with the \textit{Normalized Face Dataset}.
Fig.~\ref{fig:optimization_initialization} demonstrates the effect of both the inference stage in Sec.~\ref{sec:inference_pipeline} and the refinement stage. For each row of the experiment, the end-to-end iterative refinement always improves the likeness and expressiveness of the 3D avatar. However, notice that the refinement from the mean latent vector fails to produce a faithful result even after $200$ iterations, while the refinement from an accurate initial prior given by $R$ converges to a highly plausible face reconstruction.
\begin{figure}[hbt]
\includegraphics[width=3.25in]{figs/4_results/identity_consistency_v2.pdf}
\caption{Consistent reconstructions of the same person under different environments.}
\label{fig:expressions_consistency}
\end{figure}
Since our proposed pipeline relies only on the identity and perceptual features from $I$, the reconstructed 3D avatar is invariant to the factors that FaceNet filters out, such as occlusion, image resolution, lighting environment, and facial expression.
Fig.~\ref{fig:expressions_consistency} demonstrates how we can obtain consistent geometries from different lighting, viewpoints, and facial expressions. Further results of more challenging images, such as low resolution or largely occluded ones are provided in the supplemental material.
\paragraph{Comparisons.}
Fig.~\ref{fig:comparison} compares our method with the most recent single-view face reconstruction method~\cite{Lee_2020_CVPR}. Lee et al.~\cite{Lee_2020_CVPR} adopt a state-of-the-art nonlinear 3DMM for both geometry and texture. They use a Graph Convolutional Neural Network to embed the geometry and a Generative Adversarial Network to synthesize the texture. However, they train the two networks separately with different datasets, so facial shape and appearance are uncorrelated. More importantly, their results show that expressions and lighting are baked in, which makes their method unsuitable for relighting and facial animation purposes. More comparisons with other monocular face reconstruction methods~\cite{deng2019accurate, Gecer_2019_CVPR, Tran_2019_CVPR} can be found in the supplemental material.
\begin{figure}[hbt]
\includegraphics[width=3.25in]{figs/4_results/comparison_koki_v2.pdf}
\caption{Qualitative comparison with state-of-the-art face normalization method~\cite{Nagano_2019_Siggraph}. From left to right, we show (a) input image; (b) our reconstructed result; (c) image-based face normalization result generated by Nagano et al.~\cite{Nagano_2019_Siggraph}; (d) Face2Face reconstruction result based on (c).}
\label{fig:koki}
\end{figure}
Fig.~\ref{fig:koki} shows our results compared to the deep face normalization method~\cite{Nagano_2019_Siggraph}. While some successful normalized results were demonstrated, their image-to-image translation architecture transfers details from the input subject to the generated output. If those details are degraded, the face normalization fails.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|c|c}
\hline
Tran et al.~\cite{Tran_2019_CVPR} & Deng et al.~\cite{deng2019accurate} & Ours \\
\hline
1.935mm & 1.568mm & \textbf{1.557mm} \\
\hline
\end{tabular}
\caption{Quantitative comparison of geometric accuracy (mean point-to-mesh distance) with other 3D face reconstruction methods.}
\label{table:facescape_comparison}
\end{center}
\vspace{-0.1in}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{c|c|c}
\hline
Tran et al.~\cite{Tran_2019_CVPR} & Deng et al.~\cite{deng2019accurate}& Ours \\
\hline
0.304 & 0.392 & \textbf{0.205} \\
\hline
\end{tabular}
\caption{Quantitative comparison on texture (mean $L_1$ pixel error).}
\label{table:texture_comparison}
\end{center}
\vspace{-0.1in}
\end{table}
Quantitative experiments on FaceScape~\cite{yang2020facescape} using high resolution 3D scans and corresponding images are shown in Tables~\ref{table:facescape_comparison} and ~\ref{table:texture_comparison}. For geometric accuracy, we randomly select $20$ scans from FaceScape, and for each method, we compute the average point to mesh distance between the monocular reconstructed geometry and the ground truth scan. The proposed model has smaller reconstruction errors than other state-of-the-art ones. For texture evaluation, we augment the input images with lighting variations and compute the mean L1 pixel loss between generated textures from each method and the ground truth. Our method generates textures that are less sensitive to lighting conditions.
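As an illustration, the geometric metric can be sketched as follows, here using the trimesh library as one possible choice; rigid alignment between prediction and scan is omitted for brevity:
\begin{verbatim}
import numpy as np
import trimesh

def mean_point_to_mesh_error(pred_vertices, gt_path):
    # Average distance from predicted vertices to the
    # ground-truth scan surface (alignment omitted).
    gt = trimesh.load(gt_path)
    _, dist, _ = trimesh.proximity.closest_point(
        gt, pred_vertices)
    return float(np.mean(dist))
\end{verbatim}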
\paragraph{Implementation Details.}
All our networks are trained on a desktop machine with an Intel i7-6800K CPU, 32GB RAM and one NVIDIA TITAN GTX (24GB RAM) GPU using PyTorch~\cite{paszke2019pytorch}. The StyleGAN2 network training takes $13$ days with the \textit{Normalized Face Dataset}. We use the PyTorch implementation~\cite{stylegan2-pytorch} and remove the noise injection layers of the original implementation to eliminate the stochastic noise inputs and enable full control of the generated results from the latent vector. Our identity regression network is composed of four fully connected layers with Leaky ReLU activations, and the training takes $1$ hour to converge with the same training data.
At the testing stage, inference takes $0.13$ s and refinement takes $45$ s for $200$ iterations.
\section{Introduction}
We are now sure that QCD is the genuine theory of strong interactions.
In the perturbative region at high momenta the theory excellently describes the totality of data. However, at low momenta perturbation theory fails. This is seen first of all from the well-known momentum dependence of the running coupling. Let us show here the three-loop expression for $\alpha_s(\mu)$
\begin{eqnarray}
& &\alpha_s(\mu)\,=\,\frac{4 \pi}{\beta_0 \ln(\mu^2/\Lambda^2)}\biggr[1-\frac{2 \beta_1\,\ln(\ln(\mu^2/\Lambda^2))}{\beta_0^2\,\ln(\mu^2/\Lambda^2)}+
\label{eq:alphas3}\\
& &\frac{4 \beta_1^2}{\beta_0^4\,\ln^2(\mu^2/\Lambda^2)}\biggl(\Bigl(\ln(\ln(\mu^2/\Lambda^2))
-\frac{1}{2}\Bigr)^2+\frac{\beta_2 \beta_0}{8 \beta_1^2}-\frac{5}{4}\biggr)\biggr]\,;\nonumber
\end{eqnarray}
where $\Lambda$ is the QCD scale parameter and
\begin{eqnarray}
& &\beta_0\,=\,11-\frac{2\,N_f}{3}\,;\quad \beta_1\,=\,51-\frac{19\,N_f}{3}\,;
\label{eq:bi}\\
& &\beta_2\,=\,2857-\frac{5033\,N_f}{9}+\frac{325\,N_f^2}{27}\,; \nonumber
\end{eqnarray}
For the low-momenta region we take expression~(\ref{eq:alphas3}) with the number of flavors $N_f\,=\,3$ and take for normalization its value at the mass of the $\tau$-lepton. We have
\begin{equation}
\alpha_{s}(M_\tau = 1777\,MeV)\,=\,0.32\pm 0.05\,. \label{eq:alphatau}
\end{equation}
From here we obtain
\begin{equation}
\Lambda_{3}\,=\,(345 \pm 19)\,MeV\,. \label{eq:Lambda3}
\end{equation}
Thus from~(\ref{eq:alphas3}) we see that at $\mu = \Lambda_3$ we have a pole (and a cut starting at the same point).
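For the reader's convenience, expression~(\ref{eq:alphas3}) is easily evaluated numerically; the following sketch (in GeV units, Python) reproduces the normalization~(\ref{eq:alphatau}) for $\Lambda_3 = 345$\,MeV and exhibits the rapid growth as $\mu \to \Lambda_3$:
\begin{verbatim}
import math

def alpha_s(mu, Lam=0.345, nf=3):
    # Three-loop running coupling; mu, Lam in GeV.
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 51.0 - 19.0 * nf / 3.0
    b2 = (2857.0 - 5033.0 * nf / 9.0
          + 325.0 * nf**2 / 27.0)
    L = math.log(mu**2 / Lam**2)
    lnL = math.log(L)
    bracket = (1.0 - 2.0 * b1 * lnL / (b0**2 * L)
               + 4.0 * b1**2 / (b0**4 * L**2)
               * ((lnL - 0.5)**2
                  + b2 * b0 / (8.0 * b1**2) - 1.25))
    return 4.0 * math.pi / (b0 * L) * bracket

print(alpha_s(1.777))  # ~0.32 at the tau mass
print(alpha_s(0.36))   # blows up as mu -> Lambda_3
\end{verbatim}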
Such a pole was first disclosed in QED~\cite{LP,LNB} and is thus called the Landau pole. The existence of the pole makes a theory internally contradictory.
As for QED, L.D. Landau himself, in the issue dedicated to Niels Bohr~\cite{LNB}, first stated that for a realistic number of charged elementary fields the pole is situated far beyond the Planck mass, so it presumably could be removed by quantum gravitation effects.
In QCD, however, the pole is situated in the observable region of a few hundred MeV.
As far as we know, there is no way to get rid of such a pole within the framework of perturbation theory. It is a general belief that non-perturbative contributions somehow remove the pole. For reviews of different possibilities see {\it e.g.}~\cite{Fischer,SS2}.
In the present work we demonstrate how the pole in~(\ref{eq:alphas3}) can be eliminated within an approach to non-perturbative effects in gauge theories inspired by the famous N.N. Bogoliubov compensation approach~\cite{Bog1,Bog2}.
\section{ The compensation equation}
To begin with, we consider pure gluonic QCD without quarks.
We start with the Lagrangian with gauge group $SU(3)$; that is, we define the gauge sector to be the color octet of gluons $F^a_\mu$.
\begin{eqnarray}
& & L\,=\,-\frac{1}{4}\, F_{\mu\nu}^a F_{\mu\nu}^a;\;\label{eq:initial}\\
& &F_{\mu\nu}^a\,=\,
\partial_\mu F_\nu^a - \partial_\nu F_\mu^a\,+g\,f_{abc}F_\mu^b F_\nu^c\,.\nonumber
\end{eqnarray}
where we use the standard notations.
Let us consider the possibility of the spontaneous generation of the following effective interaction
\begin{equation}
-\,\frac{G}{3!}\cdot \,f_{abc}\,
F_{\mu\nu}^a\,F_{\nu\rho}^b\,F_{\rho\mu}^c\,;\label{eq:effint}
\end{equation}
which is
usually called the anomalous three-gluon interaction.
Here notation
$\frac{G}{3!}\cdot \,f_{abc}\,
F_{\mu\nu}^a\,F_{\nu\rho}^b\,F_{\rho\mu}^c$ means corresponding
non-local vertex in the momentum space
\begin{eqnarray}
& &(2\pi)^4 G\,f_{abc} (g_{\mu\nu} (q_\rho pk - p_\rho qk)+ g_{\nu\rho}
(k_\mu pq - q_\mu pk)+\nonumber\\
& &+g_{\rho\mu} (p_\nu qk - k_\nu pq)+q_\mu k_\nu p_\rho - k_\mu p_\nu q_\rho)\,\times\nonumber\\
& &\times\, F(p,q,k)\,
\delta(p+q+k)\,+...\;;\label{eq:vertex}
\end{eqnarray}
where $F(p,q,k)$ is a form-factor and
$p,\mu, a;\;q,\nu, b;\;k,\rho, c$ are respectively incoming momenta,
Lorentz indices and color indices of gluons.
In accordance with the Bogoliubov approach~\cite{Bog1, Bog2} in its application to QFT~\cite{Arb04}, we look for a non-trivial solution of a compensation equation, which is formulated on the basis of the Bogoliubov {\bf add -- subtract} procedure. Namely, let us write down the initial expression~(\ref{eq:initial}) in the following form
\begin{eqnarray}
& &L\,=\,L_0\,+\,L_{int}\,;\nonumber\\
& &L_0\,=\,-\,\frac{1}{4}\,F_{\mu\nu}^a F_{\mu\nu}^a\,+
\frac{G}{3!}\cdot\,f_{abc}\,F_{\mu\nu}^a\,F_{\nu\rho}^b\,F_{\rho\mu}^c\,;
\label{eq:L0}\\
& &L_{int}\,=\,-\,\frac{G}{3!}\cdot\,f_{abc}\,
F_{\mu\nu}^a\,F_{\nu\rho}^b\,F_{\rho\mu}^c\,.\label{eq:Lint}
\end{eqnarray}
Here the notation $-\,\frac{G}{3!}\cdot \,f_{abc}\,F_{\mu\nu}^a\,F_{\nu\rho}^b\,F_{\rho\mu}^c$ is already explained in~(\ref{eq:vertex}). We also mean that four-gluon, five-gluon and six-gluon vertices are present, according to the expression for $F_{\mu\nu}^a$~(\ref{eq:initial}). The interaction constant $G$ is to be defined by the subsequent studies.
Let us consider expression~(\ref{eq:L0}) as the new {\bf free} Lagrangian $L_0$, whereas expression~(\ref{eq:Lint}) is the new {\bf interaction} Lagrangian $L_{int}$. It is important to note that we put into the new {\bf free} Lagrangian the full term quadratic in $F$, including the gluon self-interaction, because we prefer to maintain the gauge invariance of the approximation being used; indeed, we shall use both the four-gluon term from the last term in~(\ref{eq:L0}) and the triple one from the last but one term of~(\ref{eq:L0}).
The compensation conditions (see~\cite{Arb04} for details) consist in the demand that the full connected three-boson vertices with the structure~(\ref{eq:vertex}), following from Lagrangian $L_0$, be zero. This demand gives a non-linear equation for the form-factor $F$.
Such equations, according to the terminology of works~\cite{Bog1, Bog2}, are called {\bf compensation equations}.
In a study of these equations the existence of a perturbative trivial solution (in our case $G = 0$) is always evident, but, in general, a non-perturbative non-trivial solution may also exist. It is just the quest for a non-trivial solution that inspires the main interest in such problems. One cannot succeed in finding an exact non-trivial solution in a realistic theory; therefore it is of great importance to choose an adequate approach whose first non-perturbative approximation describes the main features of the problem. The precision of the results is then to be improved by corrections to this initial approximation.
Thus our task is to formulate the first approximation.
Here the experience acquired in the course of works~\cite{Arb04, Arb05, AVZ} could be helpful. In view of obtaining the first approximation we make the following assumptions.\\
1) In the compensation equation we restrict ourselves to terms with loop numbers 0 and 1.\\
2) We reduce the non-linear compensation equation thus obtained to a linear integral equation. This means that in the loop terms only one vertex contains the form-factor defined above, while the other vertices are considered point-like. In diagram form the equation for the form-factor $F$ is presented in Fig.1. Here the four-leg vertex corresponds to the interaction of four bosons due to our effective three-gluon interaction; in our approximation we take this vertex with an interaction constant proportional to $g\,G$.\\
3) We integrate over the angular variables of the 4-dimensional Euclidean space. The necessary rules are presented in paper~\cite{Arb05}.
Let us note that such an approximation was previously used in works~\cite{Arb05, AVZ, AVZ2} in the study of the spontaneous generation of the effective Nambu -- Jona-Lasinio interaction. It was shown there that the results agree with data with an average accuracy of $\simeq 10 - 15\%$. Thus we could hope for a similar accuracy in the present problem.
Let us formulate the compensation equation in this approximation.
For the {\bf free} Lagrangian $L_0$ the full connected three-boson vertices with the Lorentz structure~(\ref{eq:vertex}) are to vanish. One can succeed in obtaining analytic solutions for the following set of momentum variables (see Fig.1): the left-hand legs have momenta $p$ and $-p$, while the right-hand leg has zero momentum.
However, in our approximation we need the form-factor $F$ also for non-zero values of this momentum. We look for a solution with the following simple dependence on all three variables
\begin{equation}
F(p_1,\,p_2,\,p_3)\,=\,F\Bigl(\frac{p_1^2\,+\,p_2^2\,+\,p_3^2}{2}\Bigr)\,;\label{eq:123}
\end{equation}
Indeed, expression~(\ref{eq:123}) is symmetric and turns into $F(x)$ for $p_3=0,\,p_1^2\,=\,p_2^2\,=\,x$. We consider representation~(\ref{eq:123}) to be the first approximation, and it would be advisable to take the corresponding corrections into account in forthcoming studies. We shall also discuss below some possible corrections due to this problem.
\begin{widetext}
At first let us present the expression for four-boson vertex
\begin{eqnarray}
& &\frac{V(p,m,\lambda;\,q,n,\sigma;\,k,r,\tau;\,l,s,\pi)}{\imath\,(2\,\pi)^4} = g G \Bigl(f^{amn}
f^{ars}\bigl(U(k,l;\sigma,\tau,\pi,\lambda)-U(k,l;\lambda,\tau,\pi,\sigma)-
U(l,k;\sigma,\pi,\tau,\lambda)+\nonumber\\
& &
U(l,k;\lambda,\pi,\tau,\sigma)+U(p,q;\pi,\lambda,\sigma,\tau)-U(p,q;\tau,
\lambda,\sigma,\pi)-U(q,p;\pi,\sigma,\lambda,\tau)
+U(q,p;\tau,\sigma,\lambda,\pi)\bigr)+\nonumber\\
& &f^{arn}\,
f^{ams}\bigl(U(p,l;\sigma,\lambda,\pi,\tau)-U(l,p;\sigma,\pi,\lambda,\tau)
-U(p,l;\tau,\lambda,\pi,\sigma)+U(l,p;\tau,\pi,\lambda,\sigma)+
U(k,q;\pi,\tau,\sigma,\lambda)-\label{eq:four}\\
& &U(q,k;\pi,\sigma,\tau,\lambda)-U(k,q;\lambda,\tau,\sigma,\pi)
+U(q,k;\lambda,\sigma,\tau,\pi)\bigr)-f^{asn}\,
f^{amr}\bigl(U(k,p;\sigma,\tau,\lambda,\pi)-U(p,k;\sigma,\lambda,\tau,\pi)
+\nonumber\\
& &U(p,k;\pi,\lambda,\tau,\sigma)-U(k,p;\pi,\tau,\lambda,\sigma)-
U(l,q;\tau,\pi,\sigma,\lambda)+
U(l,q;\lambda,\pi,\sigma,\tau)
-U(q,l;\lambda,\sigma,\pi,\tau)+U(q,l;\tau,\sigma,\pi,\lambda)\bigr)\Bigr)
\,;\nonumber\\
& &U(k,l;\sigma,\tau,\pi,\tau)=\bigl(k_\sigma\,l_\tau\,g_{\pi\lambda}-
k_\sigma\,l_\lambda\,g_{\pi\tau}+k_\pi\,l_\lambda\,g_{\sigma\tau}-
(kl)g_{\sigma\tau}g_{\pi\lambda}\bigr)\times F(k,\,l,\,-(k+l))\,.\nonumber
\end{eqnarray}
Here the triad $p,\,m,\,\lambda$, {\it etc.}, denotes the incoming momentum, color index, and Lorentz index of a gluon, and $F$ is the same form-factor as in expression~(\ref{eq:vertex}).
\begin{figure}
\includegraphics[width=18cm]{Fig11.eps}
\caption{Diagrams describing the compensation equation.
Lines correspond to gluons, black circles to vertex~(\ref{eq:vertex}), open circles to the same vertex with unit form-factor, open circles with four legs to vertex~(\ref{eq:four}), and a simple point to the usual perturbative vertex.}
\label{fig:Fig1}
\end{figure}
Now, according to the rules stated above, we obtain the following equation for the form-factor $F(x)$, which corresponds to Fig.1.
\begin{eqnarray}
& &F(x)\,=\,-\,\frac{G^2\,N}{64\,\pi^2}\Biggl(\int_0^Y\,F(y)\,y dy\,-
\frac{1}{12\,x^2}\,\int_0^x\,F(y)\,y^3 dy\,
+\,\frac{1}{6\,x}\,\int_0^x\,F(y)\,
y^2 dy\,+\frac{x}{6}\,\int_x^Y\,F(y)\,dy\,-\label{eq:F}\\
& &\frac{x^2}{12}\,
\int_x^Y\,\frac{F(y)}{y}\,dy \Biggr)\,+\frac{G\,g\,N}{16\,\pi^2}\,
\int_0^Y F(y) dy + \frac{G g N}{24\,\pi^2} \Biggl(\int_{x}^Y \frac{(5 x- 6 y)}{(x-2 y)}F(y) dy +\int_{\frac{3 x}{4}}^{x} \frac{(3 x- 4 y)^2 (2 y -3 x)}{x^2 (x-2 y)}F(y)
dy\Biggr)\,+\nonumber\\
& &\frac{G g N}{32 \pi^2}\Biggl(\int_x^Y \frac{3(x^2-2 y^2)}{8(2 y-x)^2} F(y) dy + \int_{\frac{3 x}{4}}^{x} \frac{3(4 y-3 x)^2(x^2-4 x y+2 y^2)}{8 x^2(2 y-x)^2} F(y) dy + \int_0^x\frac{5 y^2-12 x y}{16 x^2} F(y) dy +\nonumber\\ & &\int_x^Y\frac{3 x^2- 4 x y - 6 y^2}{16 y^2} F(y) dy\Biggr)\,.\nonumber
\end{eqnarray}
\end{widetext}
Here $x = p^2$ and $y = q^2$, where $q$ is the integration momentum, and the number of colors is $N=3$. We have also divided the initial equation by the coupling constant $G$ in view of looking for non-trivial solutions; of course, the trivial solution $G\,=\,0$ is always possible.
The last four terms in brackets represent diagrams with one usual gauge vertex (see the three last
diagrams in Fig.1). These terms maintain the gauge invariance of the results in this approximation. Note that one can additionally check the gauge invariance by introducing a longitudinal term $d_l\,k_\mu k_\nu/(k^2)^2$ in the boson propagators and verifying the independence of the results on $d_l$ in this approximation. Ghost contributions also give a zero result in the present approximation due to vertex~(\ref{eq:vertex}) being transverse:
\begin{eqnarray}
& &p_\mu V(p,q,k)_{\mu\nu\rho} = q_\nu V(p,q,k)_{\mu\nu\rho}= k_\rho V(p,q,k)_{\mu\nu\rho}=0 ;\nonumber\\
& &V(p,q,k)_{\mu\nu\rho}=g_{\mu\nu} (q_\rho pk - p_\rho qk)+ g_{\nu\rho}
(k_\mu pq -\label{eq:trans}\\
& & q_\mu pk)+g_{\rho\mu} (p_\nu qk - k_\nu pq)+q_\mu k_\nu p_\rho - k_\mu p_\nu q_\rho\,.\nonumber
\end{eqnarray}
Gauge invariance might also be violated by terms arising from the momentum dependence of the form-factor $F$. However, this problem does not arise in the approximation corresponding to equation~(\ref{eq:F}) and becomes essential only when $g^2$ terms are taken into account; in that case ghost contributions do not cancel either. The problem of the gauge invariance of the next approximations has to be considered in future studies.
We introduce in equation~(\ref{eq:F}) an effective cut-off $Y$, which bounds the ``low-momentum'' region where our non-perturbative effects act, and consider the equation on the interval $[0,\, Y]$ under the condition
\begin{equation}
F(Y)\,=\,0\,; \label{eq:Y0}
\end{equation}
while for $x\,>\,Y$ we continuously pass to the trivial solution $G\,=\,0$.
We shall solve equation~(\ref{eq:F}) by iterations. That is, we expand the terms proportional to $g$ in powers of $x$ and at first keep only the constant term. Thus we have
\begin{eqnarray}
& &F_0(x) = - \frac{G^2 N}{64\,\pi^2}\Biggl(\int_0^Y F_0(y) y dy + \frac{x}{6}\,\int_x^Y\,F_0(y)\,dy-\nonumber\\
& &\,\frac{x^2}{12}\,
\int_x^Y\,\frac{F_0(y)}{y}\,dy
+\frac{1}{6\,x}\,\int_0^x\,F_0(y)\,
y^2 dy\,\,-\label{eq:F0}\\
& &\frac{1}{12 x^2} \int_0^x F_0(y) y^3 dy \Biggr) + \frac{87 G g N}{512\,\pi^2}\,\int_0^Y\,F_0(y)\, dy\,.\nonumber
\end{eqnarray}
Expression~(\ref{eq:F0}) provides an equation of the type studied in papers~\cite{Arb04, Arb05, AVZ}, where the way of obtaining solutions of equations analogous to~(\ref{eq:F0}) is described.
Indeed, by successive differentiation of Eq.~(\ref{eq:F0}) we come to a Meijer differential equation~\cite{be}
\begin{eqnarray}
& &\biggl(x\,\frac{d}{dx} + 2\biggr)\biggl(x\,\frac{d}{dx} + 1\biggr)\biggl(x\,\frac{d}{dx} - 1\biggr)\biggl(x\,\frac{d}{dx} - 2\biggr)\times\nonumber\\
& &F_0(x)\,+
\frac{G^2\,N\,x^2}{64\,\pi^2}\,F_0(x) = \label{eq:difur}\\
& &4 \Biggl(- \frac{G^2\,N}{64\,\pi^2} \int_0^Y F_0(y)
y dy + \frac{87 G g N}{512\,\pi^2}\int_0^Y F_0(y) dy
\Biggr)\,;\nonumber
\end{eqnarray}
which solution looks like
\begin{eqnarray}
& &F_0(x) \equiv \Psi_0(z) = C_1 G_{04}^{10}\Bigl( z\,|1/2, 1, -1/2, -1\Bigr) +\nonumber\\
& & C_2 G_{04}^{10}\Bigl( z\,|1, 1/2, -1/2, -1\Bigr) - \frac{G N}{128 \pi^2}\times \label{eq:solution}\\
& & G_{15}^{31}\Bigl( z\,|^0_{1, 1/2, 0, -1/2, -1}\Bigr) \int_0^Y \Biggl(G \,y\,-\,\frac{87\, g}{8}\Biggr)F_0(y)\,dy\,
;\nonumber\\
& &
z = \frac{G^2\,N\,x^2}{1024\,\pi^2}\,;\nonumber
\end{eqnarray}
where
$$
G_{qp}^{nm}\Bigl( z\,|^{a_1,..., a_q}_{b_1,..., b_p}\Bigr)\,;
$$
is a Meijer function~\cite{be}; in the case $q=0$ we write only the indices $b_i$ in one line. The constants $C_1,\,C_2$ are defined by the following boundary conditions
\begin{eqnarray}
& &\Bigl[2\,z^2 \frac{d^3\,\Psi_0(z)}{dz^3}\,+9\,z\,\frac{d^2\,\Psi_0(z)}{dz^2}\,+\,
\frac{d\,\Psi_0(z)}{dz}\Bigr]_{z\,=\,z_0} = 0\,;\nonumber\\
& &\Bigl[2\,z^2\,\frac{d^2\, \Psi_0(z)}{dz^2}\,+5\,z\,\frac{d\, \Psi_0(z)}{dz}\,+\,
\Psi_0(z) \Bigr]_{z\,=\,z_0} = 0\,;\nonumber\\
& & z_0\,=\,\frac{G^2\,N\,Y^2}{1024\,\pi^2}\,.\label{eq:bc0}
\end{eqnarray}
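Solution~(\ref{eq:solution}) can be evaluated numerically, e.g., with the mpmath implementation of the Meijer function; the following sketch (the parameter layout follows mpmath's conventions) tabulates the two homogeneous solutions:
\begin{verbatim}
import mpmath as mp

def G04_10(z, b):
    # G^{10}_{04}(z | b1,...,b4), mpmath layout:
    # first m "numerator" b's, then the remaining b's.
    return mp.meijerg([[], []], [b[:1], b[1:]], z)

half = mp.mpf(1) / 2
for z in [mp.mpf('0.001'), mp.mpf('0.01'), mp.mpf('0.1')]:
    print(z, G04_10(z, [half, 1, -half, -1]),
          G04_10(z, [1, half, -half, -1]))
# The inhomogeneous term can be evaluated analogously as
# mp.meijerg([[0], []], [[1, half, 0], [-half, -1]], z);
# coincident integer parameter differences may require
# a small perturbation of the indices.
\end{verbatim}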
Conditions~(\ref{eq:Y0}, \ref{eq:bc0}) define the following set of parameters
\begin{equation}
z_0\,=\,\infty\,; \quad C_1\,=\,0\,
; \quad C_2\,=\,0\,.\label{eq:z0C}
\end{equation}
The normalization condition for the form-factor, $F(0)=1$, here reads
\begin{equation}
-\,\frac{G^2\,N}{64\,\pi^2}\,\int_0^\infty F_0(y)
\,y
dy + \frac{87\,G\,g\,N}{512\,\pi^2} \int_0^\infty F_0(y)\,dy\, =\,1\,.
\label{eq:norm}
\end{equation}
However, the first integral in~(\ref{eq:norm}) diverges due to the asymptotic behavior
$$
G_{15}^{31}\Bigl( z\,|^0_{1,\,1/2,\,0,\,-1/2,\,-1}\Bigr)\,\to\,
\frac{1}{2\,z}\,, \quad z\,\to\,\infty\,;
$$
and we have no consistent solution. In view of this we consider the next approximation. We substitute solution~(\ref{eq:solution}), with account of~(\ref{eq:norm}), into those terms of Eq.~(\ref{eq:F}) that are proportional to the gauge constant $g$, except the constant ones, and calculate the terms proportional to $\sqrt{z}$. Bearing in mind the normalization condition, we now have
\begin{eqnarray}
& &F(x)\equiv\Psi(z)\,=\,1 + \frac{85\, g\,\sqrt{N} \,\sqrt{z}}{96\,\pi}\Biggl(\ln\,z + 4\,
\gamma + \nonumber\\
& &4\,\ln\,2-\frac{1975}{168} +\frac{1}{2} G_{15}^{31}\Bigl( z_0\,|^0_{0, 0, 1/2, -1, -1/2}\Bigr)\Biggr) -\nonumber\\
& &\frac{2}{3\,z} \int_0^z \Psi(t)\,t\, dt- \frac{2\,z}{3}\,\int_z^{z_0}\,\Psi(t) \frac{dt}{t} +\label{eq:Fg}\\
& & \frac{4}{3\,\sqrt{z}} \int_0^z \Psi(t)
\sqrt{t}\, dt + \frac{4\,\sqrt{z}}{3} \int_z^{z_0} \Psi(t) \frac{dt}{\sqrt{t}}\,\,;\nonumber
\end{eqnarray}
where $\gamma$ is the Euler constant. We look for a solution of~(\ref{eq:Fg}) in the form
\begin{eqnarray}
& &\Psi(z) = \frac{1}{2} G_{15}^{31}\Bigl( z\,|^0_{1, 1/2, 0, -1/2, -1}
\Bigr) - \nonumber\\
& &\frac{85\,g \sqrt{N}}{128\,\pi}\,G_{15}^{31}\Bigl( z\,|^{1/2}_{1, 1/2, 1/2, -1/2, -1}\Bigr) +\label{eq:solutiong}\\
& &C_1\,G_{04}^{10}\Bigl( z\,|\frac{1}{2},\,1,\,-\frac{1}{2},\,-1\Bigr)\,+
\,C_2\,G_{04}^{10}\Bigl( z\,|1,\,\frac{1}{2},\,-\frac{1}{2},\,-1\Bigr)\,.\nonumber
\end{eqnarray}
We also have the conditions
\begin{eqnarray}
& &1\,+\,8\int_0^{z_0}\,\Psi(z)\,dz\,=\,
\frac{87\,g\,\sqrt{N}}{32\,\pi}\,\int_0^{z_0}\Psi_0(z)\,\frac{dz}{\sqrt{z}}\,;
\nonumber\\
& &\Psi(z_0)\,=\,0\,;\label{eq:pht1}
\end{eqnarray}
and boundary conditions analogous to~(\ref{eq:bc0}). The last
condition in~(\ref{eq:pht1}) means a smooth transition from the non-trivial
solution to the trivial one $G\,=\,0$. Knowing the form~(\ref{eq:solutiong}) of
a solution, we evaluate both sides of relation~(\ref{eq:Fg}) at two
different points of the interval $0\,<\,z\,<\,z_0$ and, having four
equations for four parameters, solve the set. With $N\,=\,3$ we obtain
the following solution, which we use to describe the QCD case
\begin{eqnarray}
& &g(z_0)=3.8166\,;\; z_0=0.009553;\;\nonumber\\
& &C_1\,=\,-\,5.19055\,; \; C_2\,=\,5.46167\,.\label{eq:gY}
\end{eqnarray}
We would like to draw attention to the fixed value of the parameter $z_0$. The solution
exists only for the value~(\ref{eq:gY}), which thus plays the role of an eigenvalue.
In fact, the existence of such an eigenvalue is by no means evident from the beginning. The parameter $z_0$ defines the scale appropriate to the solution; that is why we take the value of the running coupling $g$ in solution~(\ref{eq:gY}) just at this point. Note that in what follows we always use the notation $F(x)$ for the main form-factor of the approach.
It is worth noting that there is also another solution of the set of equations, which corresponds to a larger value $z_0 \simeq 9.6$ and a smaller value $g(z_0) \simeq 0.6$ for $N=2$. We apply this solution to an adequate description of non-perturbative contributions to the electro-weak interaction~\cite{AZ11,Arb12,AZ12,AZPR}.
Let us recall that from the three-loop expression for $\alpha_s(\mu^2)$~(\ref{eq:alphas3}) with the number of flavors $N_f = 3$ we have the normalization of its value at the mass of the $\tau$-lepton~(\ref{eq:alphatau}).
We normalize the running coupling by the condition
\begin{equation}
\alpha_{s}(x_0)\,=\,\frac{g(z_0)^2}{4\,\pi}\,=\,1.15515;\label{eq:alphan}
\end{equation}
where the coupling constant $g$ entering expression~(\ref{eq:gY}) corresponds precisely to this normalization point. Now from the definition of $z$~(\ref{eq:solution}) and the value of $z_0$~(\ref{eq:gY}) we have
\begin{equation}
G\,=\,\frac{1}{\Lambda_G^2}\,; \quad \Lambda_G\,=\,(264 \pm 7)\,MeV\,.\label{eq:GG}
\end{equation}
Thus we have obtained a definite value for the coupling of the interaction~(\ref{eq:effint}) under discussion.
A typical energy scale around $250\,MeV$ is natural for the strong interaction.
It is also worth mentioning the value of the momentum corresponding to the boundary $z_0$ of the non-perturbative region. From Eqs.~(\ref{eq:gY}, \ref{eq:GG}) we have for this momentum
\begin{equation}
p_0\,=\,(630 \pm 18)\,MeV\,.\label{eq:p0}
\end{equation}
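As a simple consistency check (ours), the momentum~(\ref{eq:p0}) is recovered by inverting the definition of $z$ in~(\ref{eq:solution}) at $z=z_0$, with the form-factor argument $x=p^2$:
\begin{verbatim}
# Consistency check (ours) of (eq:p0): invert z0 = G^2 N x^2/(1024 pi^2)
# at x = p0^2, with N = 3, G = 1/Lambda_G^2, Lambda_G = 264 MeV.
import math
N, z0 = 3, 0.009553
G = 1.0 / 0.264**2                                  # GeV^-2
p0 = (1024 * math.pi**2 * z0 / (N * G**2)) ** 0.25  # GeV
print(p0)                                           # ~0.63, i.e. ~630 MeV
\end{verbatim}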
The non-perturbative boundary~(\ref{eq:p0}) also seems natural from the phenomenological point of view.
We have to bear in mind, of course, that all these results are obtained within the chosen approximation. For example, a change of the form of the dependence on the three variables in expression~(\ref{eq:123}) leads to some change of the constant term in the inhomogeneous part of equation~(\ref{eq:Fg}). The coefficient in front of the logarithm in its second term does not depend on this form, but the constant one can change. It is important to understand how small changes of this term influence the results. In view of this we consider an additional term $\epsilon$ in the inhomogeneous part of~(\ref{eq:Fg}). Thus we have the following modified expression
\begin{eqnarray}
& &1 + \frac{85 g \sqrt{N} \sqrt{z}}{96\,\pi}\Biggl(\ln z + 4
\gamma + 4 \ln 2- \frac{1975}{168} +\nonumber\\
& &\frac{G_{15}^{31}\Bigl( z_0\,|^0_{0, 0, 1/2, -1, -1/2}\Bigr)}{2} +\epsilon\Biggr) ;\label{eq:nhompart1}
\end{eqnarray}
Let us take the example $\epsilon\,=\,0.13$. In this case, instead of~(\ref{eq:gY}), we have
\begin{eqnarray}
& &g(z_0)=3.11587\,;\; z_0=0.0153348;\;\nonumber\\
& &C_1\,=\,-\,4.47289\,; \; C_2\,=\,3.62922\,;\label{eq:gY13}
\end{eqnarray}
which, in the same way as for the case $\epsilon\,=\,0$, leads to the following parameters
\begin{eqnarray}
& &\alpha_{s}(x_0)\,=\,\frac{g(z_0)^2}{4\,\pi}\,=\,0.7726;\nonumber\\
& &G\,=\,\frac{1}{\Lambda_G^2}\,; \quad \Lambda_G\,=\,(273.5 \pm 7.0)\,MeV\,.\label{eq:GG13}
\end{eqnarray}
Another example is $\epsilon\,=\,0.15$. In this case we have
\begin{eqnarray}
& &g(z_0)=3.03685\,;\; z_0=0.0163105;\;\alpha_{s}(x_0)\,=\,0.7339;\nonumber\\
& &
C_1\,=\,-\,4.37005\,; \; C_2\,=\,3.43372\,;\nonumber\\
& &\quad
G\,=\,\frac{1}{\Lambda_G^2}\,; \quad \Lambda_G\,=\,(276.4 \pm 7.0)\,MeV\,.\label{eq:gY15}
\end{eqnarray}
\section{Running coupling}
In the previous sections the
N.N.~Bogoliubov compensation principle~\cite{Bog1, Bog2}
was applied to the study of a spontaneous generation of the effective non-local interaction~(\ref{eq:effint}) in QCD.
It is of the utmost interest to study the influence of interaction~(\ref{eq:effint}) on the behavior of the strong running coupling $\alpha_s(k^2)$ in the region below $z_0$, {\it i.e.} $k < p_0$~(\ref{eq:p0}).
\begin{figure}
\begin{picture}(20,0)
\put(-35,-15){$k$}
\put(55,-30){$k$}
\end{picture}
\includegraphics[width=8cm]{pict4.eps}
\caption{Diagrams describing the contribution of the non-perturbative vertex~(\ref{eq:four}), denoted by the black spot, to the running coupling $\alpha_s(k^2)$.
Simple lines correspond to gluons and thick lines correspond to quarks.}
\label{fig:Fig2}
\end{figure}
For this purpose we rely on considerations connected with the renormalization group approach~\cite{BogSh} (for applications to QCD see, e.g.,~\cite{PS}).
We have the one-loop perturbative expression for the QCD $\beta$-function
\begin{equation}
\beta(g)\,=\,-\,\frac{g^3}{(4\,\pi)^2}\biggl(11\,-\,\frac{2\,N_f}{3}\biggr)\,;
\label{eq:betapert}
\end{equation}
We take into account the additional contributions of our new interaction for small momentum $k^2 \to 0$ according to the diagrams shown in Fig.~\ref{fig:Fig2}, which gives instead of~(\ref{eq:betapert})
\begin{equation}
\beta(g) = - \frac{g^3}{(4 \pi)^2}\biggl[\biggl(11 - \frac{2 N_f}{3}
\biggr) - \frac{405\sqrt{3}\,g(z_0)}{2\,\pi}\,\Phi(0)\biggr];\label{eq:beta0}
\end{equation}
where $\Phi(0)$ is the result of the calculation of the diagrams of Fig.~\ref{fig:Fig2} (see below).
Here we see a decisive difference in behavior between the perturbative $\beta$-function~(\ref{eq:betapert}), which acts at large momenta $k > p_0$, and the non-perturbative one~(\ref{eq:beta0}) for small $k\simeq 0$. According to the calculation of $\Phi(0)$ with account of~(\ref{eq:gY}), the sign of $\beta$ changes between these regions. So $\alpha_s(k^2)$ is positive for $k^2 \to 0$, as it is for large $k$.
To study the behavior in between, we return to the definition of the $\beta$-function~\cite{PS}
\begin{eqnarray}
& &\beta(g,t) = g\, M\frac{\partial}{\partial M}\bigl(\delta_{pert}+\delta_{nonpert}\bigr)=\frac{g^3}{(4\pi)^2}\times\label{eq:betaf}\\
& &M \frac{\partial}{\partial M}\frac{\Gamma(2-\frac{d}{2})}{2(M^2)^{2-\frac{d}{2}}}\Bigl(\bigl(11-\frac{2 N_f}{3}\bigr) -\frac{405\sqrt{3}\,g(z_0)}{2\,\pi}\,\Phi(t)\Bigr);\nonumber
\end{eqnarray}
where the function $\Phi(t)$ is defined by the calculation of the diagrams of Fig.~\ref{fig:Fig2} and $d \to 4$ is the space-time dimension.
\begin{eqnarray}
& &\Phi(t) = \int_t^{z_{01}}\frac{u-3 t/4}{u-t/2}F(u) du + \nonumber\\
& &\int_{3 t/4}^t\frac{4(u-3 t/4)^2}{t(u-t/2)}F(u) du ;\; t\,<\,z_{01} ;\label{eq:betanpert}\\
& &\Phi(t) = \int_{3 t/4}^{z_{01}}\frac{4(u-3 t/4)^2}{t(u-t/2)}F(u) du;\, z_{01} < t < \frac{4 z_{01}}{3}\,;\nonumber\\
& &\Phi(t)\,=\,0\,;\; t\,>\,\frac{4 z_{01}}{3}\,;\; z_{01}\,=\,\sqrt{z_0} ;\nonumber\\
& & u=\frac{\sqrt{3}G\,q^2}{32\,\pi}\,;\; t=\frac{\sqrt{3}G\,k^2}{32\,\pi}\,.\nonumber
\end{eqnarray}
This leads to the modification of relation~(\ref{eq:beta0})
\begin{equation}
\beta(g,t) = - \frac{g^3}{(4 \pi)^2}\biggl[\bigl(11 - \frac{2 N_f}{3}
\bigr) - \frac{405\sqrt{3}\,g(z_0)}{2\,\pi} \Phi(t)\biggr];\label{eq:beta}
\end{equation}
Thus, in the approximation using the
two-loop expression corresponding to the diagrams of Fig.~\ref{fig:Fig2}, we have for $N_f=3$
\begin{equation}
\alpha_s(k^2) = \frac{\alpha_s(k^2_0)}{1 + \frac{\alpha_s( k^2_0)}{6 \pi}\Bigl(\frac{27}{2}-\frac{405\sqrt{3}\, g(z_0)}{2\pi}\Phi(t)\Bigr)\ln \frac{k^2}{k^2_0}} \,;\label{eq:alphax}
\end{equation}
where $t$ is defined in~(\ref{eq:betanpert}).
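For illustration, expression~(\ref{eq:alphax}) is straightforward to evaluate once $\Phi(t)$ is computed from~(\ref{eq:betanpert}); the sketch below is ours and replaces the true Meijer-function form-factor by the crude stand-in $F\equiv 1$ below $z_{01}$ (and $0$ above), so its output is only indicative.
\begin{verbatim}
# Minimal sketch (ours) of alpha_s(Q) from (eq:alphax), with Phi(t)
# from (eq:betanpert) and the crude stand-in F = 1 below z_01,
# F = 0 above, instead of the Meijer-function solution.
import math
from scipy.integrate import quad

g_z0, z0 = 3.8166, 0.009553          # from (eq:gY)
z01 = math.sqrt(z0)
G = 1.0 / 0.264**2                   # GeV^-2, central value of (eq:GG)
F = lambda u: 1.0 if u <= z01 else 0.0

def Phi(t):
    if t >= 4*z01/3:
        return 0.0
    f1 = lambda u: (u - 3*t/4)/(u - t/2)*F(u)
    f2 = lambda u: 4*(u - 3*t/4)**2/(t*(u - t/2))*F(u)
    if t < z01:
        tail = quad(f2, 3*t/4, t)[0] if t > 0 else 0.0
        return quad(f1, t, z01)[0] + tail
    return quad(f2, 3*t/4, z01)[0]

def alpha_s(Q, Q0=0.726, alpha0=0.936):  # normalization point, in GeV
    t = math.sqrt(3)*G*Q**2/(32*math.pi)
    b = 27/2 - 405*math.sqrt(3)*g_z0/(2*math.pi)*Phi(t)
    return alpha0/(1 + alpha0/(6*math.pi)*b*math.log(Q**2/Q0**2))
\end{verbatim}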
With $G$ defined by~(\ref{eq:GG}), $g(z_0)$ defined by~(\ref{eq:gY}) and $k^2=Q^2$ we obtain the behavior of $\alpha_s(Q)$.
For a fixed parameter $\epsilon$ in~(\ref{eq:nhompart1}) we calculate the behavior of the running coupling. Let us begin with the initial case $\epsilon\,=\,0$.
At the starting point of the non-perturbative contribution, $\bar z_{01}=\frac{4}{3}z_{01}$, corresponding to the momentum $Q = 726\,MeV$, we have
\begin{equation}
\alpha_s(\bar z_{01})\,=\,0.936\,. \label{eq:nalpha}
\end{equation}
The boundary of the non-perturbative region, $Q_0 = 726\,MeV$, seems quite reasonable.
The behavior of $\alpha_s(Q)$ is drawn in Fig.~\ref{fig:Fig3}. We would like to draw attention to the result presented in Fig.~\ref{fig:Fig3}, namely the absence of a Landau pole in expression~(\ref{eq:alphax}). Recall that in perturbative calculations up to four loops the singularity at the Landau pole is always present. Only by taking the non-perturbative effects into account do we achieve the elimination of this very unpleasant feature, which was seriously considered as a sign of the inconsistency of quantum field theory~\cite{LP, LNB}.
\begin{figure}
\includegraphics[width=6.8cm]{als0.eps}
\caption{Dependence of the running coupling $\alpha_s(Q)$, with $Q$ in MeV, for $\epsilon=0$.
The continuous line corresponds to $\alpha_s$ with non-perturbative contribution~(\ref{eq:alphax}), the discontinuous one with a pole corresponds to the usual perturbative one-loop expression.}
\label{fig:Fig3}
\end{figure}
\begin{figure}
\begin{picture}(20,10)
\end{picture}
\includegraphics[width=6.8cm]{als13.eps}
\caption{Dependence of the running coupling $\alpha_s(Q)$ for $\epsilon=0.13$.
The continuous line corresponds to $\alpha_s$ with non-perturbative contribution~(\ref{eq:alphax}), the discontinuous one with a pole corresponds to the usual perturbative one-loop expression.}
\label{fig:Fig4}
\end{figure}
There is also a feature of expression~(\ref{eq:alphax}) which deserves to be mentioned: the limit of $\alpha_s(Q)$ for $Q \to 0$ is zero. Such a possibility is also discussed on phenomenological grounds. In particular, there are indications of a decrease of $\alpha_s(Q^2)$ for $Q^2\,\to\,0$ in studies of low-mass resonances~\cite{ItSh}. Let us note that a number of lattice calculations of
the running coupling also give a similar behavior~\cite{SKW,PhB,Berlin,KF,BB}.
Let us also consider the behavior of $\alpha_s$ for other values of the parameter $\epsilon$. The behavior for $\epsilon=0.13$ is presented in Fig.~\ref{fig:Fig4}. The pole is absent here as well, but the values of $\alpha_s$ in the non-perturbative region are smaller than in the case $\epsilon=0$.
The average $\alpha_s$ in the non-perturbative region for $\epsilon=0.13$ is
\begin{eqnarray}
& &\bar \alpha_s\,=\,\frac{1}{Q_0}\int_0^{Q_0}\,\alpha_s(Q)\,dQ\,=\,0.87\,;\nonumber\\ & &Q_0\,=\,\sqrt{\frac{128 \pi z_{01}}{3 \sqrt{3}\,G}}\,.\label{eq:alphaaver13}
\end{eqnarray}
For $\epsilon=0.15$ we obtain $\bar \alpha_s\,=\,0.84$.
\section{The gluon condensate}
One of the important non-perturbative parameters is the gluon condensate, that is, the following vacuum average
\begin{equation}
V_2\,=\,<\frac{g^2}{4\,\pi^2}\,F^a_{\mu \nu}\,F^a_{\mu \nu}>\,.\label{eq:V2}
\end{equation}
Let us estimate this parameter in our approach. We apply our method to the first non-perturbative contributions, presented in Fig.~\ref{fig:Fig5}, which are proportional to $g\,G$. It is important to introduce the Feynman rule for the
contribution of the operator in brackets in~(\ref{eq:V2}). We denote it by a skew cross in Fig.~\ref{fig:Fig5}
\begin{equation}
V_{FF}(\mu,\nu;p)\,=\,\imath\,\frac{g^2}{\pi^2}\,(g_{\mu \nu}\,p^2 - p_\mu\,p_\nu)\,.\label{eq:FeynmanV2}
\end{equation}
With the distribution of integration momenta indicated in Fig.~\ref{fig:Fig5}, the form-factor in both types of diagrams, according to~(\ref{eq:123}), has the same argument:
\begin{equation}
F(p^2+\frac{3}{4}q^2)\,.\label{eq:Fpq}
\end{equation}
It turns out that the second and the third terms in the second row of Fig.~\ref{fig:Fig5} are each twice the corresponding preceding terms. Thus the sum is equal to the result for the first diagram multiplied by 10.
\begin{figure}
\includegraphics[width=9cm]{Gluon.eps}
\caption{Diagrams for the calculation of the gluon condensate. Lines -- gluons, black circle -- triple vertex~(\ref{eq:vertex}), open circle -- four-gluon vertex~(\ref{eq:four}) with the corresponding form-factor, skew cross -- vertex~(\ref{eq:FeynmanV2}). Momenta directed to the right are $p-q/2$, $q$, $-p-q/2$ for the bug-like diagrams and $p-q/2$, $p+q/2$ for the $\infty$-like diagrams.}
\label{fig:Fig5}
\end{figure}
We have after the Wick rotation
\begin{eqnarray}
& &V_2\,=\,\frac{10\times 24\,g^3\,G}{(2\,\pi)^8\,\pi^2}\times\label{eq:V2Intmom}\\
& &\int F(p^2+\frac{3}{4}q^2)\frac{12\,(p^2\,q^2-(pq)^2)}{q^2(p-q/2)^2(p+q/2)^2}dp dq\,.\nonumber
\end{eqnarray}
Using the following angular integral
\begin{eqnarray}
& &\int_0^\pi\frac{\sin^2(\theta)\,d\theta}{(p^2+\frac{q^2}{4})^2-(p q)^2}\,=\,\label{eq:intcos2}\\
& &
\frac{\pi}{2(x+\frac{y}{4})}\Bigl[\theta\bigl(x-\frac{y}{4}\bigr)\frac{1}{x}+
\theta\bigl(\frac{y}{4}-x\bigr)
\frac{4}{y}\Bigr]\,;\nonumber\\
& & x =p^2\,;\quad y=q^2\,;\nonumber
\end{eqnarray}
we obtain the following expression for the quantity~(\ref{eq:V2Intmom})
\begin{eqnarray}
& &V_2\,=\,\frac{5\, g^3\,2^{11}}{G^2 \pi^3\,\sqrt{3}}\int_0^{\sqrt{z_0}}F(t)\,I_t\, dt\,;\nonumber\\
& & I_t\,=\,12\,\biggl(-\int_0^t\frac{(t-y)^2}{t-y/2}dy - \nonumber\\
& &4\int_t^{4 t/3}\frac{(t-y)^2(t-3 y/4)}{(t-y/2)y}dy+\int_0^{4 t/3}\Bigl(t-\frac{3 y}{4}\Bigr)dy\,\biggr)\,;\nonumber\\
& &t\,=\,\frac{G\,\sqrt{3}}{2^5\,\pi}\biggl(x\,+\,\frac{3\,y}{4}\biggr)\,.
\label{eq:intV2}
\end{eqnarray}
We already have expressions~(\ref{eq:solutiong}, \ref{eq:gY}) for the form-factor $F(z),\,z=t^2$. So the calculation here is direct and we obtain, using the value of $g$~(\ref{eq:gY}) and the central value in the definition of $G$~(\ref{eq:GG})
\begin{eqnarray}
& &V_2\,=\,\frac{5\, g^3\,2^{10}}{\pi^3 \sqrt{3}\,G^2} 12\Bigl(2-6 \ln \frac{4}{3}\Bigr) \int_0^{z_0} F(z)\sqrt{z}\,dz \,=\nonumber\\
& &0.00955\,GeV^4\,;\label{eq:V2N}
\end{eqnarray}
Provided we take a nonzero value of $\epsilon$ in expression~(\ref{eq:nhompart1}), the results for the gluon condensate read
\begin{eqnarray}
& &V_2\,=\,0.0120\,GeV^4\,(\epsilon\,=\,0.13);\nonumber\\
& &V_2\,=\,0.0128\,GeV^4\,(\epsilon\,=\,0.15)\,.\label{eq:V2Ne}
\end{eqnarray}
So in this approximation we have a non-zero non-perturbative parameter $V_2$. Its value agrees, within the accuracy of determination of this parameter, with the phenomenological values $V_2 \simeq 0.012\, GeV^4$~\cite{SVZ} and $V_2 \simeq 0.010\, GeV^4$~\cite{Zakh}. Values~(\ref{eq:V2N}, \ref{eq:V2Ne}) vary within the range of uncertainty of its phenomenological determination. Thus we can state that our non-perturbative approach allows a reliable calculation of this important parameter.
Let us also estimate the vacuum average $V_3$
\begin{equation}
V_3\,=\,<g^3\,f_{a b c}\,F_{\mu \nu}^a\,F_{\nu \rho}^b\,F_{\rho \mu}^c>\,.\label{eq:V3}
\end{equation}
Quite analogous calculations give, {\it e.g.}, for $\epsilon\,=\,0.13$,
\begin{eqnarray}
& &V_3\,=\,\frac{g^3\, 2^{17}}{ G^3}\Bigl(2 - 6\ln\frac{4}{3}\Bigr)\int_0^{z_0} z F(z) dz\,=\nonumber\\
& &0.00744\, GeV^6\,.\label{eq:V3Ne}
\end{eqnarray}
\section{The glueball}
The existence of the anomalous interaction~(\ref{eq:effint}) makes it possible to consider gluonic states. We shall consider the scalar glueball state $X_0$ to get indications whether the value of the non-perturbative constant~(\ref{eq:GG}) may be used for an adequate description of the non-perturbative effects of the strong interaction. For this purpose we use the
Bethe-Salpeter equation with the kernel corresponding to one-gluon exchange with our (point-like) anomalous three-gluon interaction~(\ref{eq:effint}). We take the vertex of the $X_0$ interaction with two gluons in the following form
\begin{equation}
\frac{G_{gb}}{2}\,F_{\mu \nu}^a\,F_{\mu \nu}^a\,X_0\,\Psi_{gb}(x)\,;\quad x = p^2\,;\label{eq:FFX0}
\end{equation}
where $\Psi_{gb}(x)$ is a Bethe-Salpeter wave function.
In the first approximation (zero momentum of $X_0$) we have
\begin{eqnarray}
& &\Psi_{gb}(x)= -\frac{3 G^2}{16 \pi^2}\biggl(\frac{1}{2 x^2}\int_0^x y^3 \Psi_{gb}(y)dy-\nonumber\\
& &
\frac{1}{x}\int_0^x y^2 \Psi_{gb}(y)dy-3\int_0^Y y \Psi_{gb}(y)dy -\label{eq:EQgb}\\
& & x \int_x^Y \Psi_{gb}(y)dy+\frac{x^2}{2}\int_x^Y \frac{\Psi_{gb}(y)}{y}dy\biggr)\,;\nonumber
\end{eqnarray}
where we again take the upper limit of integration $Y$ as in~(\ref{eq:F}), since the form-factor of interaction~(\ref{eq:effint}) satisfies $F(x)=0$ for $x \ge Y$.
Again by successive differentiations we obtain from Eq.(\ref{eq:EQgb})
the following differential equation
\begin{eqnarray}
& &\biggl(z'\frac{d}{dz'}+1\biggr)\biggl(z'\frac{d}{dz'}+\frac{1}{2}\biggr)
\biggl(z'\frac{d}{dz'}-\frac{1}{2}\biggr)\times\nonumber\\
& &\biggl(z'\frac{d}{dz'}-1\biggr)\Psi_{gb}(z')=
z'\Psi_{gb}(z')+\frac{C}{4}\,;\label{eq:diffgb}\\
& &C=4\int_0^{\bar z_0}\Psi_{gb}(t') dt';\; z'=\frac{9\, G^2 x^2}{128 \pi^2};\;t'=\frac{9\, G^2 y^2}{128 \pi^2}.\nonumber
\end{eqnarray}
Comparing the variable $z'$ in Eq.~(\ref{eq:diffgb}) with the initial variable $z$ in Eq.~(\ref{eq:solution}) we see the relation $z'=24\, z$. This also means that
$\bar z_0=24\, z_0$, with $z_0$ from solution~(\ref{eq:gY}).
In the new variables Eq.~(\ref{eq:EQgb}), in which we have also taken into account the terms proportional to the gauge coupling $g$ and to the squared mass $m^2$ of the bound state, reads
\begin{eqnarray}
& &\Psi_{gb}(z')\,=\,1-\frac{2}{3\,z'}\int_0^{z'}\,\Psi_{gb}(t') t' dt'+\nonumber\\
& &\frac{4}{3\,\sqrt{z'}}\int_0^{z'}\,\Psi_{gb}(t')\sqrt{t'} dt'+
\frac{4 \sqrt{z'}}{3}\int_{z'}^{\bar z_0}\,\frac{\Psi_{gb}(t')}{\sqrt{t'}} dt'-\nonumber\\
& &\frac{2 z'}{3}\int_{z'}^{\bar z_0}\,\frac{\Psi_{gb}(t')}{t'} dt'\,;\label{eq:intgb}\\
& &1 = 4\int_0^{\bar z_0} \Psi_{gb}(t') dt' +\biggl(\kappa + \frac{3 g \sqrt{2}}{2 \pi}\biggr) \int_0^{\bar z_0} \frac{\Psi_{gb}(t')}{\sqrt{t'}} dt' .\nonumber
\end{eqnarray}
Here $\kappa$ is connected with the bound state mass $m$ in the following way
\begin{equation}
\kappa\,=\,-\,\frac{3\, G\, m^2}{8 \sqrt{2}\, \pi}\,.\label{eq:kappa}
\end{equation}
According to expression~(\ref{eq:diffgb}) we look for the solution of Eq.~(\ref{eq:EQgb}) in the following form
\begin{eqnarray}
& &\Psi_{gb}(z')\,=\,\frac{\pi}{2}\,G_{1 5}^{2 1} \Bigl( z'|^0_{1,\,0,\,1/2,\,-1/2,\,-1}\Bigr)+\label{eq:Psigb}\\
& &C_1\,G_{0 4}^{2 0} \Bigl( z'|1,\,1/2,\,-1/2,\,-1\Bigr)+\nonumber\\
& &C_2\,G_{0 4}^{1 0} \Bigl(- z'|1,\,1/2,\,-1/2,\,-1\Bigr)\,.\nonumber
\end{eqnarray}
By substituting expression~(\ref{eq:Psigb}) into the set of equations~(\ref{eq:intgb}) and using the values of $g$ and $z_0$~(\ref{eq:gY}) we obtain a unique solution for the parameters
\begin{equation}
C_1\,=\,1.07899\,;\; C_2\,=\,-1.38099\,;\; \kappa\,=\,-2.6415\,.\label{eq:gbCkappa}
\end{equation}
Now from values~(\ref{eq:GG}, \ref{eq:gbCkappa}), using relation~(\ref{eq:kappa}), we have the lightest scalar glueball mass
\begin{equation}
m\,=\,1479 \pm 40\,MeV\,.\label{eq:mgb}
\end{equation}
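As a quick arithmetic check (ours), relation~(\ref{eq:kappa}) with the values~(\ref{eq:GG}, \ref{eq:gbCkappa}) indeed reproduces~(\ref{eq:mgb}) up to the rounding of $\Lambda_G$:
\begin{verbatim}
# Check (ours) of (eq:mgb): m^2 = -kappa * 8*sqrt(2)*pi/(3 G), with
# kappa from (eq:gbCkappa) and the central value of G from (eq:GG).
import math
kappa = -2.6415
G = 1.0 / 0.264**2                  # GeV^-2
m = math.sqrt(-kappa * 8 * math.sqrt(2) * math.pi / (3 * G))
print(m)                            # ~1.48 GeV, cf. (eq:mgb)
\end{verbatim}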
This value is quite natural, all the more so as the most serious candidate for
being the lightest scalar glueball is the state $f_0(1500)$ (see the recent review~\cite{Oks}) with mass $1507 \pm 5\, MeV$, which evidently agrees with our number~(\ref{eq:mgb}).
Now we have to obtain the coupling constant of the scalar glueball entering the expression of the effective interaction~(\ref{eq:FFX0}).
For this purpose we use the normalization condition for the Bethe-Salpeter wave function $\Psi_{gb}$.
\begin{equation}
1\,=\,\frac{\sqrt{2}\,G_{gb}^2}{\pi\,G}\,\int_0^{\bar z_0} \frac{\Psi_{gb}(t')^2}{\sqrt{t'}}\,dt'\,.\label{eq:GFFX}
\end{equation}
Substituting solution~(\ref{eq:Psigb}, \ref{eq:gbCkappa}) into Eq.~(\ref{eq:GFFX})
and calculating the integral, we obtain
\begin{eqnarray}
& &G_{gb}^2\,=\, \frac{\pi\,G}{\sqrt{2}\,I}\,=\,1.825\, G\,;\nonumber\\
& &I\,=\,\int_0^{\bar z_0} \frac{\Psi_{gb}(t')^2}{\sqrt{t'}}\,dt'\,=\,1.21732.\label{eq:Ggb}
\end{eqnarray}
From result~(\ref{eq:Ggb}) we have the following value of the glueball coupling
\begin{equation}
G_{gb}\,=\,\frac{1}{190.337\,MeV}\,=\,\frac{5.254}{GeV}\,.\label{eq:Ggbnum}
\end{equation}
\section{Conclusion}
The existence of a non-trivial solution of a compensation
equation is extremely
restrictive. In most cases such solutions do not exist at all. When we start from a
renormalizable theory, the value of
its coupling constant is arbitrary. Provided there exists a {\it
stable non-trivial solution of a compensation equation}, the
coupling is fixed, as well as the parameters of this
non-trivial solution. Note that the application of the same approach to the electro-weak theory~\cite{AZ11,Arb12,AZ12,AZPR,Arb09} also leads to strong restrictions on the parameters of the theory, including the coupling constant.
We may also state that in the case discussed in the present paper it is precisely the non-trivial solution that is the stable one, because a theory with a Landau pole is unstable.
We consider the results for the gluon condensate~(\ref{eq:V2N}, \ref{eq:V2Ne}) and the glueball mass~(\ref{eq:mgb}) as a confirmation of the efficiency of our approach as applied to non-perturbative contributions in QCD.
Thus we consider the present results for the low-momentum $\alpha_s$
to be encouraging and promising for further applications of
the Bogoliubov compensation approach to principal problems
of elementary particle physics.
\section{Acknowledgments}
The work was supported in part by grant No.~8412 of the Ministry of Education and Science of the Russian Federation and by grant NSh-3920.2012.2.
\section{Introduction}
We consider the classical problem of computing a conditional expectation using a least-square Monte Carlo approach. To be more precise, let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, $X:\Omega \to {\mathbb R}^d$ and $Y: \Omega \to {\mathbb R}^p$ be two random variables and $f : {\mathbb R}^p \to {\mathbb R}$ be a measurable function such that ${\mathbb E}[{f(Y)}^2] < \infty$. We are interested in computing ${\mathbb E}[f(Y) | X]$ by using a parametrized approximation. Thus, we introduce a family of measurable functions $(\varphi(\theta, \cdot))_{\theta \in {\mathbb R}^q}$ from ${\mathbb R}^d$ to ${\mathbb R}$ satisfying for all $\theta \in {\mathbb R}^q$, ${\mathbb E}[{\varphi(\theta, X)}^2] < \infty$. This family will be used to approximate the conditional expectation ${\mathbb E}[f(Y) | X]$. It is well known that ${\mathbb E}[f(Y) | X]$ solves the following two minimisation problems
\begin{align*}
\inf_{Z \in L^2(\Omega,\sigma(X))} {\mathbb E}[(Z - f(Y))^2], \ &\inf_{Z \in L^2(\Omega,\sigma(X))} {\mathbb E}[(Z - {\mathbb E}[f(Y) |X])^2],
\end{align*}
where $L^2(\Omega,\sigma(X))$ denotes the set of square integrable random variables that are measurable with respect to the $\sigma$-algebra generated by~$X$. Therefore, we are interested in the following minimization problems
\begin{align}\label{minim_pbms}
\inf_{\theta \in {\mathbb R}^q} {\mathbb E}[(\varphi(\theta,X) - f(Y))^2], \ &\inf_{\theta \in {\mathbb R}^q} {\mathbb E}[(\varphi(\theta,X) - {\mathbb E}[f(Y) |X])^2].
\end{align}
In practical cases, these expectations are not explicit, and Monte Carlo estimators are often used to approximate them. The classical regression problem consists in minimizing $\inv{N} \sum_{i=1}^N \left( \varphi(\theta, X_i) - f(Y_i) \right)^2$ with respect to~$\theta$, where $(X_i,Y_i)_{i\ge 1}$ is a sequence of iid random variables with the same distribution as $(X,Y)$. In this work, we consider the possibility of having, for each $X_i$, many samples of $Y$ given $X_i$. This is the case when the samples are generated by computer simulation and the conditional law of $Y$ given~$X$ can be simulated. More precisely, let $(X_i)_{i \ge 1} $ be a sequence of iid random variables following the distribution of $X$. For each $i \ge 1$, we introduce independent sequences $(Y_i^{(k)})_{k \ge 1}$ of iid random variables following the law ${\mathcal L}(Y | X = X_i)$ of $Y$ conditionally on $X=X_i$. For $N,K\in {\mathbb N}^*$, we define the sequence of functions $v_N^K: {\mathbb R}^q \to {\mathbb R}$ by
\begin{align}
\label{eq:vKN}
v_N^K(\theta) = \inv{N} \sum_{i=1}^N \left( \varphi(\theta, X_i) - \inv{K} \sum_{k=1}^K f(Y_i^{(k)}) \right)^2.
\end{align}
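For concreteness, the empirical criterion~\eqref{eq:vKN} can be coded in a few lines; the following sketch is ours, with hypothetical user-supplied samplers \texttt{sample\_X} and \texttt{sample\_Y\_given\_X} for the law of $X$ and for ${\mathcal L}(Y|X=x)$.
\begin{verbatim}
# Minimal sketch of v_N^K from (eq:vKN); sample_X(rng) and
# sample_Y_given_X(x, rng) are hypothetical user-supplied callables.
import numpy as np

def v_N_K(theta, phi, f, sample_X, sample_Y_given_X, N, K, rng):
    total = 0.0
    for _ in range(N):
        x = sample_X(rng)
        inner = np.mean([f(sample_Y_given_X(x, rng)) for _ in range(K)])
        total += (phi(theta, x) - inner) ** 2
    return total / N
\end{verbatim}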
We are interested in finding $\theta_N^K$ minimizing $v_N^K$, so that $\varphi(\theta_N^K, X)$ gives an approximation of ${\mathbb E}[f(Y)|X]$. Formally, the two minimisation problems of Equation~\eqref{minim_pbms} correspond respectively to $N=\infty, K=1$ and $N=K=\infty$. Note that the minimisation of~\eqref{eq:vKN} with $K=1$ corresponds to the classical case, with as many samples of $Y_i$ as of $X_i$. To the best of our knowledge, most of the literature (if not all) considers the case of minimizing $v_N^1$ to approximate the conditional expectation, and we refer to Gyorfi et al.~\cite{GKKW} for a nice presentation of the topic and references. This may be understood from the point of view of statistics: with empirical data, one usually has as many observations of $X$ as of $Y$. However, when $X$ and $Y$ are generated by computer simulation, it is relevant to consider the possibility of sampling $K\ge 2$ values of $Y$ for a given~$X$. The natural question then is how to choose $N$ and $K$ in order to achieve the best accuracy for a given computational time. This is the goal of this paper.
The problem of computing conditional expectations is an important problem that arises in many different fields of research, such as the approximation of backward stochastic differential equations~\cite{BT,GLW}, the pricing of American options and more generally optimal stopping problems~\cite{LS}, and stochastic optimal control problems~\cite{BKS} to mention a few. It has a particular relevance in risk management, see e.g.~\cite{BaResi2,BDM,KrNiKo}, where financial institutions have to evaluate risk from a regulatory perspective. The valuation of future risks naturally involves conditional expectations. To be more precise, let us consider the case of insurance companies that have to calculate their Solvency Capital Requirement (SCR). This SCR can be calculated by computing expected losses under some stressed scenarios. This regulatory procedure to evaluate risk is called the ``standard formula''. If one aims at evaluating the SCR at a future date~$T$ with the same procedure, one has to compute conditional expected losses under the different stressed scenarios, given all the market information between the current date and~$T$, see~\cite{alfonsi2021multilevel}. Therefore, one naturally has to deal with the numerical approximation of conditional expectations. Let us stress that it is usually natural in this context to be able to sample conditional laws: assets are usually modeled by a Markovian process that can be simulated, and we can then simulate as many paths as desired after~$T$, from a given path up to time~$T$.
Many works have developed numerical methods based on nested
simulations and refinements to approximate an expectation that
involves conditional expectations. Among them we can cite \cite{GoJu},
which optimizes nested simulations to estimate a value-at-risk on a
conditional expectation, \cite{BaReSi}, which studies nested simulations
in the context of insurance risk modeling, and
\cite{alfonsi2021multilevel}, which uses a multilevel approach on the
same kind of insurance problem. But to the best of our knowledge, none
of these works addresses the nested approximation of
conditional expectations using a parametric representation, as done in
this work.
The paper is structured as follows. First, Section~\ref{Sec_AC} presents our main assumptions, under which we are able to show, by quite standard arguments, the convergence of~$\theta^K_N$ as well as a Central Limit Theorem. Section~\ref{Sec_Main} presents the main results of the paper. In particular, Theorem~\ref{thm:optimK} gives a precise asymptotic of the suboptimality of~$\theta^K_N$ (with respect to $\theta^\star$) as a function of $K$ and $N$, and the optimal value of $K$ for a given computational budget; it also gives estimators to approximate it. The computational gain is all the more important as the computational cost of sampling $Y$ given $X$ is small and the approximation family is close to the conditional expectation. Section~\ref{Sec_linearcase} focuses on the particularly important case of linear regression, where we are able to refine the result of Theorem~\ref{thm:optimK}. Besides, Proposition~\ref{prop:linear_reg} shows, for a particular choice of approximating functions, that the optimal value of~$K$ can be arbitrarily large. Finally, Section~\ref{Sec_num} presents numerical results and shows the relevance of considering $K>1$ on different examples. We also compare different estimators that approximate~$K^\star$, and it turns out that one of them is more relevant for practical use.
\section{Assumptions and Convergence results}\label{Sec_AC}
In this section, we apply the general results on the convergence of estimators of optimal solutions presented in~\cite[Section 2.6]{ShRu93}. We introduce the function
\begin{align}
v^\infty(\theta) & = {\mathbb E}\left[ \left( \varphi(\theta, X) - {\mathbb E}[f(Y) | X] \right)^2 \right] \label{def_v_infty}
\end{align}
and make the following assumptions.
\paragraph{Assumptions} Let $C\subset {\mathbb R}^q$ be a compact set with a non-empty interior~$\mathring{C}$.
\begin{hypo}
\label{hyp:ui}
Uniform integrability: ${\mathbb E}\left[\sup_{\theta \in C} \abs{\varphi(\theta, X)}^2\right] < \infty$.
\end{hypo}
\begin{hypo}
\label{hyp:continuous}
The function $\theta \longmapsto \varphi(\theta, X)$ is a.s. continuous on~$C$.
\end{hypo}
\begin{hypo}
\label{hyp:unique}
The function $v^\infty$ admits on~$C$ a unique minimizer $\theta^\star \in \mathring{C}$.
\end{hypo}
\begin{hypo}
\label{hyp:C2-ui}
The function $\theta \longmapsto \varphi(\theta, X)$ is a.s. twice continuously differentiable on~$C$ and such that
\begin{equation*}
{\mathbb E}\left[\sup_{\theta \in C} \abs{\nabla \varphi(\theta, X)}^2 \right] < \infty; \quad
{\mathbb E}\left[\sup_{\theta \in C} \abs{\nabla^2 \varphi(\theta, X)}^2 \right] < \infty.
\end{equation*}
\end{hypo}
Here, and in the whole paper, the gradient $\nabla$ is taken with respect to~$\theta$. Let us note that Hypotheses~\ref{hyp:ui},~\ref{hyp:continuous} and~\ref{hyp:C2-ui} are satisfied in the case of the linear regression, see Section~\ref{Sec_linearcase} for further details.
To apply the results on the convergence of the estimators presented by~\cite[Section 2.6]{ShRu93}, we introduce the function $$\Phi(\theta,Z)=\left(\varphi(\theta,X)-\frac 1K \sum_{k=1}^Kf( Y^{(k)})\right)^2,$$ with $Z=(X,Y^{(1)},\dots,Y^{(K)})$, where the sequence $(Y^{(k)})_{k\ge 1}$ is conditionally iid given~$X$, and given $X=x$ follows the distribution ${\mathcal L}(Y | X=x)$\footnote{In practice, we typically have $Y=F(X,U)$ with $U$ independent of~$X$ and $F$ a measurable function, and the conditional independence means that $Y^{(k)}=F(X,U^{(k)})$ with $X,U^{(1)},\dots,U^{(K)}$ independent and $\mathcal{L}(U^{(i)})=\mathcal{L}(U)$. }. We also define, for $K\in {\mathbb N}^*$, the function
\begin{align}\label{def_v_K}
v^K(\theta) = {\mathbb E}\left[ \left( \varphi(\theta, X) - \inv{K} \sum_{k=1}^K f(Y^{(k)}) \right)^2 \right],
\end{align}
and $v^K_N(\theta)=\frac 1 N \sum_{i=1}^N\Phi(\theta,Z_i)$, so that $v^K(\theta)={\mathbb E}[v^K_N(\theta)]$.
Since $|\Phi(\theta,Z)|\le 2|\varphi(\theta,X)|^2+\frac 2K \sum_{k=1}^K f(Y^{(k)})^2$, we get the uniform integrability on~$C$ by using~\ref{hyp:ui}. The continuity of $\Phi$ with respect to~$\theta\in C$ is clear by~\ref{hyp:continuous}, and we get the following lemma from the uniform law of large numbers, see Lemma~\ref{lem:ulln}.
\begin{lemma}
Under~\ref{hyp:ui} and~\ref{hyp:continuous}, for every fixed $K \in {\mathbb N}$,
$\sup_{\theta \in C}|v_N^K(\theta)-v^K(\theta)| \to 0$ almost surely as $N\to \infty$.
\end{lemma}
The next lemma makes explicit the link between~$v^\infty(\theta)$ and~$v^K(\theta)$ defined respectively by~\eqref{def_v_infty} and~\eqref{def_v_K}.
\begin{lemma}\label{lem_vk_vinfty}
We have for all $\theta \in C$,
\begin{align}
v^K(\theta) & = v^\infty(\theta) + \inv{K} {\mathbb E}[(f(Y)- {\mathbb E}[f(Y) | X])^2 ]. \label{expression_vK}
\end{align}
\end{lemma}
\begin{proof}
We expand~\eqref{def_v_K} and get
\begin{align*}
v^K(\theta) = \inv{K} {\mathbb E}\left[ \left( \varphi(\theta, X) - f(Y) \right)^2 \right] + \frac{1}{K^2} \sum_{k \neq k'}{\mathbb E}\left[ \left( \varphi(\theta, X) - f(Y^{(k)}) \right) \left( \varphi(\theta, X) - f(Y^{(k')}) \right) \right].
\end{align*}
On the one hand, using the conditional independence of the $Y^{(k)}$'s, we get for $k\not = k'$
\begin{align*}
& {\mathbb E}\left[ \left( \varphi(\theta, X) - f(Y^{(k)}) \right) \left( \varphi(\theta, X) - f(Y^{(k')}) \right) \right] \\
& = {\mathbb E}\left[ {\mathbb E}\left[ \varphi(\theta, X) - f(Y^{(k)}) | X \right] {\mathbb E}\left[ \varphi(\theta, X) - f(Y^{(k')}) | X \right] \right] \\
& = {\mathbb E}\left[ \left( \varphi(\theta, X) - {\mathbb E}[f(Y) | X] \right)^2 \right].
\end{align*}
On the other hand, as the conditional expectation is an orthogonal projection,
\begin{align*}
{\mathbb E}\left[ \left( \varphi(\theta, X) - f(Y) \right)^2 \right] & = {\mathbb E}\left[ \left( (\varphi(\theta, X) - {\mathbb E}[f(Y) | X]) + ({\mathbb E}[f(Y) | X] - f(Y)) \right)^2 \right] \\
& = {\mathbb E}\left[ \left( \varphi(\theta, X) - {\mathbb E}[f(Y) | X]\right)^2\right] + {\mathbb E}\left[\left({\mathbb E}[f(Y) | X] - f(Y) \right)^2 \right].
\end{align*}
This yields to the claim.
\end{proof}
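Relation~\eqref{expression_vK} is easy to check by simulation on a toy model where everything is explicit; the sketch below (ours) takes $X \sim {\mathcal N}(0,1)$, $Y|X \sim {\mathcal N}(X,1)$, $f(y)=y$ and $\varphi(\theta,x)=\theta x$, so that ${\mathbb E}[f(Y)|X]=X$ and $v^K(\theta)=(\theta-1)^2+1/K$.
\begin{verbatim}
# Monte Carlo check (ours) of (expression_vK) on a toy model:
# X ~ N(0,1), Y | X ~ N(X,1), f(y) = y, phi(theta, x) = theta*x.
import numpy as np

rng = np.random.default_rng(0)
theta, N, K = 0.5, 200_000, 4
X = rng.standard_normal(N)
Y = X[:, None] + rng.standard_normal((N, K))
vK_hat = np.mean((theta * X - Y.mean(axis=1)) ** 2)
print(vK_hat, (theta - 1) ** 2 + 1 / K)   # both close to 0.5
\end{verbatim}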
Let $\theta_N^K$ (resp. $\theta^\star$) be a minimizer of $v_N^K$ (resp. $v^\infty$) on the compact set~$C$, i.e.
\begin{align*}
v_N^K(\theta_N^K) = \inf_{\theta \in C} v_N^K(\theta) \quad \text{and} \quad v^\infty(\theta^\star) = \inf_{\theta \in C} v^\infty(\theta).
\end{align*}
By Lemma~\ref{lem_vk_vinfty}, $v^K$ and $v^\infty$ differ only by a constant. So, $\theta^\star$ is also the unique minimizer of $v^K$ for every $K$. Therefore, we have the following result from~\cite[Theorem A1, p.~67]{ShRu93}.
\begin{proposition}
\label{prop:slln}
Under~\ref{hyp:ui},~\ref{hyp:continuous},~\ref{hyp:unique}, for every fixed $K$, $\theta_N^K \to \theta^\star$ a.s. when $N \to \infty$.
\end{proposition}
Besides this almost sure convergence result, we also have a central limit theorem under additional assumptions.
\begin{proposition}
\label{prop:tcl}
Under the assumptions of Proposition~\ref{prop:slln},~\ref{hyp:C2-ui} and if ${\mathbb E}\left[\left( \varphi(\theta^\star, X) - \inv{K} \sum_{k=1}^K f(Y^{(k)}) \right)^2 | \nabla \varphi(\theta^\star, X)|^2 \right]<\infty$ and the matrix $H:=\nabla^2 v^\infty(\theta^\star)$ is positive definite, we have
\begin{align}\label{conv_thetaKN}
\sqrt{N} (\theta_N^K - \theta^\star) \xrightarrow[N \to \infty]{{\mathcal L}} {\mathcal N}(0, 4 H^{-1} \Gamma^{K} H^{-1})
\end{align}
with \begin{equation}\label{Gamma_K_expr}
\Gamma^K=A+B/K,
\end{equation}
where $A,B\in {\mathbb R}^{q\times q}$ are the following semi-definite positive matrices:
\begin{align}
A&={\mathbb E}\left[ (\varphi(\theta^\star, X) - {\mathbb E}[f(Y) | X])^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right], \label{def_A}\\
B&={\mathbb E}\left[ \left(f(Y)- {\mathbb E}[f(Y) | X] \right)^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right]. \label{def_B}
\end{align}
Furthermore, we have \begin{equation}\label{asymp_vinf}
N(v^\infty(\theta^K_N)-v^\infty(\theta^\star))\xrightarrow[N \to \infty]{{\mathcal L}}
2 G^THG \text{ with } G\sim{\mathcal N}(0, H^{-1} \Gamma^{K}
H^{-1}).
\end{equation}
\end{proposition}
\begin{proof}
First, we check some properties on gradients. We have $\nabla \Phi(\theta,Z)=2(\varphi(\theta,X)-\frac 1K \sum_{k=1}^Kf( Y^{(k)})) \nabla\varphi(\theta,X)$ and $\nabla^2\Phi(\theta,Z)= 2\nabla\varphi(\theta,X)\nabla\varphi(\theta,X)^T+2(\varphi(\theta,X)-\frac 1K \sum_{k=1}^K f(Y^{(k)})) \nabla^2\varphi(\theta,X)$. From Cauchy-Schwarz inequality,~\ref{hyp:C2-ui} and ${\mathbb E}[f(Y)^2]<\infty$, we get that $\sup_{\theta \in C}|\nabla \Phi(\theta,Z)|$ and $\sup_{\theta \in C}|\nabla^2 \Phi(\theta,Z)|$ are integrable. Besides, the matrix
\begin{align}
\label{eq:Gamma_K}
\Gamma^{K} = {\mathbb E}\left[ \left( \varphi(\theta^\star, X) - \inv{K} \sum_{k=1}^K f(Y^{(k)}) \right)^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right]
\end{align}
is well defined since ${\mathbb E}\left[\left( \varphi(\theta^\star, X) - \inv{K} \sum_{k=1}^K f(Y^{(k)}) \right)^2 | \nabla \varphi(\theta^\star, X)|^2 \right]<\infty$, and we get~\eqref{conv_thetaKN} following the result from~\cite[Theorem A2, p.~74]{ShRu93}.
Let us check that $\Gamma^{K} =A+B/K$. We have
\begin{align*}
\Gamma^{K} & = \frac{1}{K^2} {\mathbb E}\left[ {\mathbb E}\left[\left( \sum_{k=1}^K \varphi(\theta^\star, X) - f(Y^{(k)}) \right)^2 | X \right] \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right] \\
& = \frac{1}{K} {\mathbb E}\left[ {\mathbb E}\left[(\varphi(\theta^\star, X) - f(Y))^2 | X \right] \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right] \\
& \quad + \frac{K-1}{K} {\mathbb E}\left[ \left(\varphi(\theta^\star, X) - {\mathbb E}[f(Y) | X] \right)^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right] \\
& = {\mathbb E}\left[ (\varphi(\theta^\star, X) - {\mathbb E}[f(Y) | X])^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right]
\\ & \quad + \frac{1}{K} {\mathbb E}\left[ \left(f(Y)- {\mathbb E}[f(Y) | X] \right)^2 \nabla \varphi(\theta^\star, X) \nabla \varphi(\theta^\star, X)^T \right]=A+\frac{B}K.
\end{align*}
Last, we have $v^\infty(\theta)-v^\infty(\theta^\star)=\frac 12 (\theta-\theta^\star)^T H(\theta-\theta^\star) +|\theta-\theta^\star|^2\varepsilon(|\theta-\theta^\star|)$ with $\varepsilon(h)\to 0$ as $h\to 0$. By Slutsky's theorem, Proposition~\ref{prop:slln} and~\eqref{conv_thetaKN}, we get~\eqref{asymp_vinf}.
\end{proof}
\begin{remark} Note that the matrix~$A$ defined by~\eqref{def_A}
corresponds to the asymptotic variance of the optimal regression that we would obtain if we
could directly sample ${\mathbb E}[f(Y)|X]$. The additional term~$B/K$ in the
decomposition of $\Gamma^{K}$ is the extra variance generated by
the Monte Carlo approximation of ${\mathbb E}[f(Y)|X]$.
\end{remark}
Except in very specific cases where the function $v^\infty$ is explicit, it is impossible in practice to numerically evaluate $N(v^\infty(\theta^K_N)-v^\infty(\theta^\star))$. The next proposition shows that $N(v^K_N(\theta^\star)-v^K_N(\theta^K_N))$ has the same asymptotics as~\eqref{asymp_vinf}. Roughly speaking, the suboptimality of $\theta^\star$ for $v^K_N$ is of the same order as the suboptimality of $\theta^K_N$ for $v^\infty$. This result will be used in the numerical Section~\ref{Sec_num} to illustrate the convergence.
\begin{proposition}\label{prop:asymp} Under the same assumptions as in Proposition~\ref{prop:tcl} and if $C$ is convex, we have
$$N(v^K_N(\theta^\star)-v^K_N(\theta^K_N))\xrightarrow[N \to \infty]{{\mathcal L}}
2 G^THG, \text{ with } G\sim{\mathcal N}(0, H^{-1} \Gamma^{K} H^{-1}). $$
\end{proposition}
\begin{proof}
We have by Taylor's theorem
\begin{equation}\label{Taylor_hess} v^K_N(\theta^\star)-v^K_N(\theta^K_N)= (\theta^K_N-\theta^\star)^T \left( \int_0^1(1-u) \nabla^2v^K_N\left(\theta^K_N+u (\theta^K_N-\theta^\star) \right) du \right) (\theta^K_N-\theta^\star),
\end{equation}
with
$$ \nabla^2v^K_N(\theta)=\frac 2N \sum_{i=1}^N \nabla \varphi(\theta,X_i)\nabla \varphi(\theta,X_i)^T +\left(\varphi(\theta,X_i)-\frac 1K \sum_{k=1}^K f(Y_i^{(k)})\right) \nabla^2\varphi(\theta,X_i).$$
By Lemma~\ref{lem_vk_vinfty}, we have $$ \nabla^2v^K(\theta)= \nabla^2v^\infty(\theta)= 2{\mathbb E}\left[ \nabla \varphi(\theta,X)\nabla \varphi(\theta,X)^T +\left(\varphi(\theta,X)-{\mathbb E}[f(Y)|X]\right) \nabla^2\varphi(\theta,X)\right].$$
By~\ref{hyp:ui},~\ref{hyp:continuous} and~\ref{hyp:C2-ui}, we can apply~\cite[Lemma A1 p.~67]{ShRu93} and get that $\sup_{\theta \in C} |\nabla^2v^K_N(\theta)- \nabla^2v^\infty(\theta)| \underset{N\to \infty}\to 0$, almost surely. Since $\theta^\star,\theta^K_N\in C$ and $C$ is convex, we get $ \int_0^1(1-u) \nabla^2v^K_N\left(\theta^K_N+u (\theta^K_N-\theta^\star) \right) du - \int_0^1(1-u) \nabla^2v^\infty\left(\theta^K_N+u (\theta^K_N-\theta^\star) \right) du \to 0$, almost surely. Since $\nabla^2v^\infty$ is bounded on~$C$, we get
$$\int_0^1(1-u) \nabla^2v^K_N\left(\theta^K_N+u (\theta^K_N-\theta^\star) \right) du \underset{N\to \infty} \to H=\nabla^2v^K(\theta^\star), a.s.,$$
by using Proposition~\ref{prop:slln}. This gives that $N(v^K_N(\theta^\star)-v^K_N(\theta^K_N))$ converges in law to $2G^THG$ by using~\eqref{Taylor_hess}, Proposition~\ref{prop:tcl} and Slutsky's theorem.
\end{proof}
\section{Main results}\label{Sec_Main}
In this section, we present our main theorem that determines the optimal allocation between~$N$ and $K$ to approximate the conditional expectation. Let us denote the computational time for sampling $X$ and ${\mathcal L}(Y | X)$ respectively by $C_X$ and $C_{Y | X}$. With these notations, the cost for computing $v_N^K$ is proportional to $N C_X + N K C_{Y | X}$. Without loss of generality, we will assume that $C_X=1$. This means that the computational time for sampling $X$ is one unit and that we express all the other computational times with respect to this unit.
\subsection{Optimal allocation between \texorpdfstring{$N$}{N} and \texorpdfstring{$K$}{K}}
\begin{definition}\label{def_nu} For $x>0$, we denote by $\nu(x) \in {\mathbb N}^*$ the unique natural number such that $$(\nu(x)-1)\nu(x) <x\le \nu(x)(\nu(x)+1).$$
\end{definition}
It is easy to check that $\forall x>0, \lfloor \sqrt{x} \rfloor \le \nu(x)\le \lceil \sqrt{x} \rceil. $
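Computing $\nu(x)$ is elementary; a minimal sketch (ours) searches around $\lfloor\sqrt{x}\rfloor$:
\begin{verbatim}
# Minimal sketch of nu(x): the unique positive integer with
# (nu-1)*nu < x <= nu*(nu+1); start from floor(sqrt(x)) and adjust.
import math

def nu(x):
    n = max(1, math.floor(math.sqrt(x)))
    while n * (n + 1) < x:                # x above the upper bound
        n += 1
    while n > 1 and (n - 1) * n >= x:     # x at or below the lower bound
        n -= 1
    return n
\end{verbatim}
Now, we state our main result.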
\begin{theorem}\label{thm:optimK}
Under the assumptions of Proposition~\ref{prop:tcl} and if the sequence $N(v^\infty(\theta^K_N)- v^\infty(\theta^\star))_{N\ge 1}$ is uniformly integrable, we have
$${\mathbb E}[v^\infty(\theta^K_N)] = v^\infty(\theta^\star)+\frac{\mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})}N+o(1/N),$$
as $N\to \infty$. If $A\not =0$, the asymptotic optimal choice minimizing ${\mathbb E}[v^\infty(\theta^K_N)]$ for a computational budget $c\to \infty$ is to take
$$N^\star= \left \lfloor \frac{c }{ 1 + K^\star C_{Y|X}} \right\rfloor, \ K^\star= \nu\left(\frac{\mathop{\rm tr}\nolimits(BH^{-1})}{C_{Y|X}\mathop{\rm tr}\nolimits(AH^{-1})} \right).$$
\end{theorem}
Note that if $A=0$ and ${\mathbb P}(\nabla \varphi(\theta^\star,X) \not = 0)>0$, then $\varphi(\theta^\star,X)={\mathbb E}[f(Y)|X]$. The condition $A\not=0$ ensures that $\mathop{\rm tr}\nolimits(AH^{-1})>0$.
\begin{proof}
We have by Proposition~\ref{prop:tcl}, $N(v^\infty(\theta^K_N)-v^\infty(\theta^\star))\xrightarrow[N \to \infty]{{\mathcal L}} 2 G^THG$ with $G\sim{\mathcal N}(0, H^{-1} \Gamma^{K} H^{-1})$.
From this convergence in distribution and the uniform integrability assumption, we get ${\mathbb E}[N(v^\infty(\theta^K_N)-v^\infty(\theta^\star))]\to 2 {\mathbb E}[G^THG]$. Let $S=\sqrt{H^{-1} \Gamma^{K} H^{-1}}$. Then, $G$ has the same law as $S\tilde{G}$ with $\tilde{G}\sim {\mathcal N}(0,I_q)$ and thus ${\mathbb E}[G^THG]={\mathbb E}[\tilde{G}^T SHS\tilde{G} ]=\mathop{\rm tr}\nolimits(SHS)=\mathop{\rm tr}\nolimits(HS^2)=\mathop{\rm tr}\nolimits(\Gamma^{K} H^{-1})$, which gives the first claim.
As $N\to \infty$, the minimization of ${\mathbb E}[v^\infty(\theta^K_N)]$ with respect to~$K$ amounts to minimizing $\mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})$ with respect to~$K$. For a large enough budget $c$, the problem becomes
\begin{equation*}
\inf_{\substack{
N, K \, \in \, {\mathbb N} \\
s.t. \, N + N K C_{Y | X} = c} } \frac{\mathop{\rm tr}\nolimits(AH^{-1})}{N} + \frac{\mathop{\rm tr}\nolimits(BH^{-1})}{KN}.
\end{equation*}
Then, we apply Lemma~\ref{lem:optim} to get the claim.
\end{proof}
\begin{remark}\label{rk_KnoH}
Theorem~\ref{thm:optimK} gives the asymptotic optimal allocation to minimize ${\mathbb E}[v^\infty(\theta^K_N)]$. Unfortunately, it involves the matrix $H$ which is in general unknown and may be difficult to estimate. When $\theta$ is a one dimensional parameter, $A$, $B$ and $H$ are scalar values and thus $K^\star= \nu\left(\frac{B}{C_{Y|X}A} \right)$. Otherwise, since $H$ is a definite positive matrix, we have $\underline{\lambda}_H I_q\le H\le \overline{\lambda}_H I_q$ and thus
$$ \overline{\lambda}_H^{-1}\mathop{\rm tr}\nolimits(\Gamma^{K})\le \mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})\le \underline{\lambda}_H^{-1}\mathop{\rm tr}\nolimits(\Gamma^{K}).$$
Therefore, it is reasonable (though not optimal) to minimize $\mathop{\rm tr}\nolimits(\Gamma^{K})$ under the same computational budget constraint, which then leads to
$$N'= \left \lfloor \frac{c }{ 1 + K' C_{Y|X}} \right\rfloor, \ K'= \nu\left(\frac{\mathop{\rm tr}\nolimits(B)}{C_{Y|X}\mathop{\rm tr}\nolimits(A)} \right).$$
\end{remark}
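Once estimates of the two traces and of $C_{Y|X}$ (in units of $C_X$) are available, the allocation of Theorem~\ref{thm:optimK} is immediate to compute; a minimal sketch (ours), reusing the function \texttt{nu} above:
\begin{verbatim}
# Minimal sketch (ours) of the allocation (N*, K*) of the theorem
# above, given estimates tr_AH of tr(A H^-1), tr_BH of tr(B H^-1),
# the relative cost C_YX of sampling L(Y|X), and a budget c.
def optimal_allocation(tr_AH, tr_BH, C_YX, c):
    K_star = nu(tr_BH / (C_YX * tr_AH))
    N_star = int(c / (1 + K_star * C_YX))
    return N_star, K_star
\end{verbatim}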
The next corollary gives a bound on the computational gain that can be obtained by the optimization of~$K$ given by Theorem~\ref{thm:optimK}.
\begin{corollary}[Comparison of the estimators $\theta^{K^\star}_{N^\star}$ and $\theta^1_N$ for a fixed computational budget] \label{cor:gain} Let $c$ be the computational budget. Under the assumptions of Theorem~\ref{thm:optimK} and with $N=\lfloor c/(1+C_{Y|X})\rfloor$, we have $${\mathbb E}[v^\infty(\theta^{K^\star}_{N^\star})]-v^\infty(\theta^\star)\sim_{c\to \infty} r^\star({\mathbb E}[v^\infty(\theta^{1}_{N})]-v^\infty(\theta^\star)),$$ with
\begin{equation}\label{def_rs}r^\star=\frac{(1+\nu(\xi) C_{Y|X})\left(1+\frac{\xi}{\nu(\xi)} C_{Y|X}\right)}{(1+ C_{Y|X})(1+\xi C_{Y|X})}
, \ \xi=\frac{\mathop{\rm tr}\nolimits(BH^{-1})}{C_{Y|X} \mathop{\rm tr}\nolimits(AH^{-1})}.
\end{equation}
This multiplicative gain $r^\star \in (0,1]$ satisfies $r^\star\ge \frac{C_{Y|X}}{1+C_{Y|X}}$ and $\lim_{\xi \to \infty}r^\star = \frac{C_{Y|X}}{1+C_{Y|X}}$.
\end{corollary}
Note that if $C_{Y|X}=1$, i.e. the computation time of sampling $\mathcal{L}(Y|X)$ is the same as that of sampling~$X$, we cannot reduce the computation time by more than a factor~$1/2$. Besides, the smaller $C_{Y|X}$ is, the more we may hope for a significant reduction of the computational time, and this really occurs if $\xi$ is large, so that $r^\star \approx \frac{C_{Y|X}}{1+C_{Y|X}}$.
\begin{proof}
Since we are comparing $\theta^1_N$ and $\theta^{K^\star}_{N^\star}$ for the same computation budget, we have $c\sim_{c\to \infty} N(1+C_{Y|X})\sim_{c\to \infty} N^\star(1+K^\star C_{Y|X})$.
By Theorem~\ref{thm:optimK}, the multiplicative gain in precision is
$$r^\star=\frac{\mathop{\rm tr}\nolimits(\Gamma^{K^\star}H^{-1})/N^\star}{\mathop{\rm tr}\nolimits(\Gamma^{1}H^{-1})/N} \underset{c \to \infty}\to\frac{\mathop{\rm tr}\nolimits(\Gamma^{K^\star}H^{-1})}{\mathop{\rm tr}\nolimits(\Gamma^{1}H^{-1})}\times\frac{1+K^\star C_{Y|X}}{1+ C_{Y|X}}. $$
Since $\Gamma^{K}=A+B/K$ and $K^\star=\nu(\xi)$, we get~\eqref{def_rs} after simple calculations. We have $r^\star\le 1$ since $\nu(\xi)+\frac \xi{\nu(\xi)}\le 1+\xi$,
$r^\star\ge \frac{C_{Y|X}}{1+C_{Y|X}}$ since $\nu(\xi)\ge 1$ and $\lim_{\xi \to \infty}r^\star = \frac{C_{Y|X}}{1+C_{Y|X}}$ since $\nu(\xi)\sim_{\xi \to \infty}\sqrt{\xi}$.
\end{proof}
\begin{remark}\label{rk_rK}
By the same reasoning as in the proof of Corollary~\ref{cor:gain}, we can define
\begin{equation}\label{def_rK}
r^K=\frac{\mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})}{\mathop{\rm tr}\nolimits(\Gamma^{1}H^{-1})}\times\frac{1+K C_{Y|X}}{1+ C_{Y|X}}
\end{equation}
as the multiplicative gain resulting from using $\theta^K_N$ instead of $\theta^1_N$. Note that this is indeed a gain if $r^K<1$ and that we have $r^\star=r^{K^\star}$.
\end{remark}
\subsection{Estimation of the matrices \texorpdfstring{$A$}{A} and \texorpdfstring{$B$}{B}}
\label{sec:hat_A_B}
In practice, to calculate the value of $K^\star$ given by Theorem~\ref{thm:optimK}, we need to estimate the
matrices $A$ and $B$. Let $(X_i, Y_i^{(1)},\dots, Y_i^{(K)})_i$ be iid samples
such that for all $i$, $X_i \sim X$ and the
$Y_i^{(k)} \sim {\mathcal L}(Y | X = X_i)$, $k=1,\dots,K$, are sampled independently given~$X_i$. From these samples, we
\begin{equation}
\label{eq:Gamma_hat}
\hat \Gamma_N^{K} = \frac{1}{N} \sum_{i=1}^N \left( \varphi(\theta^{K}_N, X_i) - \inv{K} \sum_{k=1}^K f(Y_i^{(k)}) \right)^2 \nabla \varphi(\theta_N^{K}, X_i) \nabla \varphi(\theta_N^{K}, X_i)^T.
\end{equation}
\begin{proposition}
\label{prop:GammaKN}
Assume~\ref{hyp:ui},~\ref{hyp:continuous},~\ref{hyp:unique},~\ref{hyp:C2-ui} and
\begin{align}\label{integrab}
{\mathbb E}\left[ \sup_{\theta \in C} \abs{\varphi(\theta, X) \nabla \varphi(\theta, X)}^2\right] < \infty, \quad {\mathbb E}\left[f(Y)^2 \sup_{\theta \in C} \abs{\nabla \varphi(\theta, X)}^2 \right] < \infty.
\end{align}
Then, we have $\hat \Gamma_N^{K} \underset{N\to \infty} \to \Gamma^{K}$ almost surely.
\end{proposition}
\begin{proof}
We define the function $g_K: {\mathbb R}^q \times {\mathbb R}^d \times ({\mathbb R}^p)^K \to {\mathbb R}^{q \times q}$ by
\begin{equation*}
g_K(\theta, x, (y^{(k)})_{1 \le k \le K})= \left( \varphi(\theta, x) - \inv{K} \sum_{k=1}^K f(y^{(k)}) \right)^2 \nabla \varphi(\theta, x) \nabla \varphi(\theta, x)^T.
\end{equation*}
Assumptions~\ref{hyp:ui},~\ref{hyp:continuous},~\ref{hyp:C2-ui} ensure that the function $\theta \mapsto g_K(\theta, X, (Y^{(k)})_{1 \le k \le K})$ is a.s. continuous, while Assumption~\eqref{integrab} gives the integrability of $\sup_{\theta \in C}|g_K(\theta, X, (Y^{(k)})_{1 \le k \le K})|$. From Lemma~\ref{lem:ulln}, we get that $$ \sup_{\theta \in C}\left|\frac{1}{N} \sum_{i=1}^N g_K(\theta, X_i, (Y_i^{(k)})_{1 \le k \le K})-{\mathbb E}[g_K(\theta, X, (Y^{(k)})_{1 \le k \le K})]\right| \to 0, \ a.s.$$ From Proposition~\ref{prop:slln}, $\theta_N^K \to \theta^\star$ a.s. Hence, we deduce that $\hat{\Gamma}_N^{K} \to \Gamma^{K}$ a.s.
\end{proof}
\paragraph{Estimators for $A$ and $B$}
From Proposition~\ref{prop:GammaKN} and Equation~\eqref{Gamma_K_expr}, we deduce estimators of $A$ and $B$. For
$K_1,K_2\in {\mathbb N}^*$ such that $K_1 < K_2$, we have by Proposition~\ref{prop:GammaKN} when $N$ tends to $+\infty$
\[
\frac{K_2\hat \Gamma_N^{K_2} - K_1\hat \Gamma_N^{K_1}}{K_2-K_1} \to A \; a.s., \
\frac{K_1K_2\left(\hat \Gamma_N^{K_1} - \hat \Gamma_N^{K_2}\right)}{K_2-K_1}\to B \; a.s.
\]
We will mainly use $K_1=\bar{K}$ and $K_2=2\bar{K}$ for a given $\bar{K}\in {\mathbb N}^*$, which leads to the simpler formulas
\[
\hat A_{\bar{K}} := 2\hat \Gamma_N^{2\bar{K}} - \hat \Gamma_N^{\bar{K}}, \quad \hat B_{\bar{K}} := 2\bar{K}\left(\hat \Gamma_N^{\bar{K}} - \hat \Gamma_N^{2\bar{K}}\right).
\]
Besides, we rather work with the following antithetic estimators:
\begin{align*}
\hat A^{anti}_{\bar{K}} &= \frac{1}{N} \sum_{i=1}^N \Bigg[ 2\left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{2 \bar{K}} \sum_{k=1}^{2\bar{K}} f(Y_i^{(k)}) \right)^2- \frac 12 \left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{\bar{K}} \sum_{k=1}^{\bar{K}} f(Y_i^{(k)}) \right)^2\\ & - \frac 12\left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{\bar{K}} \sum_{k=\bar{K}+1}^{2\bar{K}} f(Y_i^{(k)}) \right)^2 \Bigg]\nabla \varphi(\theta_N^{2\bar{K}}, X_i) \nabla \varphi(\theta_N^{2\bar{K}}, X_i)^T \\
\hat B^{anti}_{\bar{K}} &= 2\bar{K}\Bigg( \frac{1}{N} \sum_{i=1}^N \Bigg[ \frac 12 \left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{\bar{K}} \sum_{k=1}^{\bar{K}} f(Y_i^{(k)}) \right)^2 + \frac 12\left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{\bar{K}} \sum_{k=\bar{K}+1}^{2\bar{K}} f(Y_i^{(k)}) \right)^2 \\
&- \left( \varphi(\theta^{2\bar{K}}_N, X_i) - \inv{2\bar{K}} \sum_{k=1}^{2\bar{K}} f(Y_i^{(k)}) \right)^2\Bigg]\nabla \varphi(\theta_N^{2\bar{K}}, X_i) \nabla \varphi(\theta_N^{2\bar{K}}, X_i)^T \Bigg).
\end{align*}
Note that the same value of $\theta_N^{2\bar{K}}$ is used. Similarly, we have the almost sure convergence of these estimators respectively to $A$ and $B$ as $N\to \infty$. Thanks to the convexity of the square function, $\hat B^{anti}_{\bar{K}}$ is a positive semi-definite matrix. Unfortunately, the matrix $\hat A^{anti}_{\bar{K}}$ may fail to be positive semi-definite. More generally, the matrix $A$ is difficult to estimate. From its definition~\eqref{def_A}, we see that the better the approximating family $\varphi(\theta,X)$ is, the smaller the matrix $A$ is for the natural (L\"owner) order. Thus, when the conditional expectation is well approximated, the matrix~$A$ is small and may be smaller than the noise in $O(N^{-1/2})$, so that the estimated matrix $\hat A^{anti}_{\bar{K}}$ may have negative eigenvalues. Thus, in practice, we use \begin{equation}\label{def_KAH}
\hat{K}^A_H=\nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} \hat{H}^{-1})}{C_{Y|X} \mathop{\rm tr}\nolimits( (\hat A^{anti}_{\bar{K}} \hat{H}^{-1})_+)} \right)\end{equation} to approximate $K^\star$. An alternative is to approximate $A$ by $\hat\Gamma^{2\bar{K}}_N$ for a (fixed) large value of $\bar{K}$: it is a nonnegative estimator of $\Gamma^{2\bar{K}}=A+B/(2\bar{K})\ge A$, and therefore \begin{equation}\label{def_KGH} \hat{K}^{\Gamma}_H= \nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} \hat{H}^{-1})}{C_{Y|X} \mathop{\rm tr}\nolimits( \hat\Gamma^{2\bar{K}}_N \hat{H}^{-1})} \right)
\end{equation}
underestimates $K^\star$. These estimators are discussed and illustrated in the numerical Section~\ref{Sec_num}.
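In the linear case $\varphi(\theta,x)=\theta\cdot u(x)$ of the next section, where $\nabla\varphi(\theta,x)=u(x)$, these estimators take a compact form. The sketch below (ours) computes $\hat\Gamma_N^{K}$ from~\eqref{eq:Gamma_hat} and the conservative ratio~\eqref{def_KGH}, with $\hat{H}$ replaced by the identity in the spirit of Remark~\ref{rk_KnoH}.
\begin{verbatim}
# Minimal sketch (ours), linear case grad phi = u(x). U has shape
# (N, q) with rows u(X_i); FY has shape (N, 2*Kbar) with entries
# f(Y_i^(k)), the two halves being the antithetic split.
import numpy as np

def gamma_hat(theta, U, FY):
    resid = U @ theta - FY.mean(axis=1)              # (N,)
    return np.einsum('i,ij,ik->jk', resid**2, U, U) / len(U)

def K_hat_Gamma(theta_2K, U, FY, C_YX):
    Kbar = FY.shape[1] // 2
    G1 = gamma_hat(theta_2K, U, FY[:, :Kbar])
    G2 = gamma_hat(theta_2K, U, FY[:, Kbar:])
    G12 = gamma_hat(theta_2K, U, FY)
    B_anti = 2 * Kbar * (0.5 * (G1 + G2) - G12)      # B_hat^anti
    # ratio (def_KGH) with H_hat ~ identity, cf. the remark above
    return nu(np.trace(B_anti) / (C_YX * np.trace(G12)))
\end{verbatim}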
\section{The linear regression framework}\label{Sec_linearcase}
In this section, we rephrase some results of Section~\ref{Sec_Main} in the framework of linear regression as they actually take simpler forms. In particular, we show that the uniform integrability assumption of Theorem~\ref{thm:optimK} is always satisfied.
\subsection{Main results for the linear regression framework}
We consider in this section a function $u:{\mathbb R}^d\to {\mathbb R}^q$ such that ${\mathbb E}[|u(X)|^2]<\infty$ and
$$\varphi(\theta,X)=\theta \cdot u(X), \ \theta \in {\mathbb R}^q.$$
In this case, we have $\nabla \varphi(\theta,X)= u(X)$, $\nabla^2 \varphi(\theta,X)= 0$, $\nabla v^\infty(\theta)=2{\mathbb E}[(\theta \cdot u(X) -{\mathbb E}[f(Y)|X] )u(X)]$ and $\nabla^2 v^\infty(\theta)=2 {\mathbb E}[u(X) u^T(X)]$ does not depend on~$\theta$. Therefore, Assumptions~\ref{hyp:ui},~\ref{hyp:continuous} and~\ref{hyp:C2-ui} are clearly satisfied for any compact~$C$, while~\ref{hyp:unique} holds if, and only if
\begin{equation}\label{H3_lin}
H=2 {\mathbb E}[u(X) u^T(X)] \text{ is positive definite and } \theta^\star= 2H^{-1}{\mathbb E}[f(Y)u(X)] \in \mathring{C}.\tag{$\mathcal{H}$-3-lin}
\end{equation}
We also get a simpler expression for $\theta^K_N$ and $\hat \Gamma_N^{K}$:
\begin{align*}
\theta^{K}_N&=\left(\frac 1 N \sum_{i=1}^N u(X_i)u(X_i)^T \right)^{-1} \left( \frac 1 N \sum_{i=1}^N \left( \frac 1 K \sum_{k=1}^K f(Y_i^{(k)})\right)u(X_i) \right), \\
\hat \Gamma_N^{K} &= \frac{1}{N} \sum_{i=1}^N \left( \theta_N^{K} \cdot u(X_i) - \inv{K} \sum_{k=1}^K f(Y_i^{(k)}) \right)^2 u(X_i) u(X_i)^T.
\end{align*}
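For illustration, the closed-form estimator displayed above is a one-liner with numpy; the sketch is ours.
\begin{verbatim}
# Minimal numpy sketch (ours) of the closed-form linear estimator
# theta_N^K above. U has shape (N, q) with rows u(X_i); FY has
# shape (N, K) with entries f(Y_i^(k)).
import numpy as np

def theta_NK_linear(U, FY):
    target = FY.mean(axis=1)        # (1/K) sum_k f(Y_i^(k))
    M = U.T @ U / len(U)            # estimates E[u(X) u(X)^T] = H/2
    b = U.T @ target / len(U)
    return np.linalg.solve(M, b)
\end{verbatim}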
In the two formulas above, we assume that $H$ is positive definite and that $N$ is large enough so that $\frac 1 N \sum_{i=1}^N u(X_i)u(X_i)^T$ is positive definite, which eventually holds by the law of large numbers. However, it may be convenient to slightly modify the estimator, as in the next proposition. This new estimator satisfies in particular the uniform integrability assumption of Theorem~\ref{thm:optimK}, as shown in the proof of Proposition~\ref{prop:lin}.
\begin{definition} For a positive semi-definite matrix $S\in {\mathbb R}^{q\times q}$ and $\epsilon\in {\mathbb R}_+^*$, $S\vee (\epsilon I_q)$ is the positive definite matrix such that $ (S\vee (\epsilon I_q))e_l=\max(\lambda_l,\epsilon) e_l$, where $(e_l)_{1\le l\le q}$ is an orthonormal basis of eigenvectors with respective eigenvalues $(\lambda_l)_{1\le l\le q}$.
\end{definition}
\begin{proposition}\label{prop:lin}
We assume~\eqref{H3_lin}, ${\mathbb E}[|u(X)|^{4+\eta}]<\infty$ and ${\mathbb E}[|f(Y)|^{2+\eta}|u(X)|^{2+\eta}]<\infty$ for some $\eta>0$. Let $\epsilon>0$ be such that $H-2\epsilon I_q$ is positive definite and define
$$\theta^{K,\epsilon}_N=2\left(\left(\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T\right)\vee (\epsilon I_q) \right)^{-1} \left( \frac 1 N \sum_{i=1}^N \left( \frac 1 K \sum_{k=1}^K f(Y_i^{(k)})\right)u(X_i) \right).$$
Then, we have $\theta^{K,\epsilon}_N\to \theta^\star$ a.s., $\sqrt{N} (\theta_N^{K,\epsilon} - \theta^\star) \xrightarrow[N \to \infty]{{\mathcal L}} {\mathcal N}(0, 4 H^{-1} \Gamma^{K} H^{-1})$ and
$${\mathbb E}[v^\infty(\theta^{K,\epsilon}_N)] \underset{N\to \infty}{=} v^\infty(\theta^\star)+\frac{\mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})}N+o(1/N). $$
The conclusions of Theorem~\ref{thm:optimK} hold.
\end{proposition}
\begin{proof}
By the law of large numbers, $\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T \to H$, almost surely. Since $H-2\epsilon I_q$ is positive definite, there exists, almost surely, $\bar{N}$ such that for $N\ge \bar{N}$, $\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T-\epsilon I_q$
is positive definite and thus $\theta^{K,\epsilon}_N=\theta^K_N$. This gives $\theta^{K,\epsilon}_N\to \theta^\star$ a.s. by Proposition~\ref{prop:slln} and $\sqrt{N} (\theta_N^{K,\epsilon} - \theta^\star) \xrightarrow[N \to \infty]{{\mathcal L}} {\mathcal N}(0, 4 H^{-1} \Gamma^{K} H^{-1})$ by Proposition~\ref{prop:tcl}.
Now, we check the uniform integrability of the sequence $N|\theta^{K,\epsilon}_N-\theta^\star|^2$. We have
\begin{align*}
&\theta^{K,\epsilon}_N-\theta^\star=2\left[\left(\left(\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T\right)\vee (\epsilon I_q) \right)^{-1}-H^{-1}\right]{\mathbb E}[f(Y)u(X)] \\
&+ 2\left(\left(\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T\right)\vee (\epsilon I_q) \right)^{-1} \left( \frac 1 N \sum_{i=1}^N \left( \frac 1 K \sum_{k=1}^K f(Y_i^{(k)})\right)u(X_i) -{\mathbb E}[f(Y)u(X)]\right).
\end{align*}
Note that for two symmetric matrices $M_1,M_2$ such that $M_1-\epsilon I_q$ and $M_2-\epsilon I_q$ are positive definite, we have $|M_1^{-1}-M_2^{-1}|=|M_1^{-1}(M_2-M_1)M_2^{-1}|\le \frac 1 {\epsilon^2} |M_2-M_1|$. Thus, we obtain
\begin{align*}
|\theta^{K,\epsilon}_N-\theta^\star|\le& \frac{2}{\epsilon^2} \left|\left(\frac 2 N \sum_{i=1}^N u(X_i)u(X_i)^T\right)\vee (\epsilon I_q)- H \right| |{\mathbb E}[f(Y)u(X)] | \\&+ \frac{2}{\epsilon}\left|\frac 1 N \sum_{i=1}^N \left( \frac 1 K \sum_{k=1}^K f(Y_i^{(k)})\right)u(X_i) -{\mathbb E}[f(Y)u(X)]\right|.
\end{align*}
Then, Lemma~\ref{lem_ui_TCL} with the assumptions on the moments gives the uniform integrability of the sequence~$(N|\theta^{K,\epsilon}_N-\theta^\star|^2)_{N\ge 1}$.
Lastly, observe that $0\le v^\infty(\theta)-v^\infty(\theta^\star)=\frac 12 (\theta-\theta^\star)^T H(\theta-\theta^\star)\le \frac 12 |H| |\theta -\theta^\star|^2$, since $v^\infty$ is quadratic with Hessian $H$. Therefore, the sequence $N(v^\infty(\theta^{K,\epsilon}_N)-v^\infty(\theta^\star))$ is uniformly integrable, and we get ${\mathbb E}[v^\infty(\theta^{K,\epsilon}_N)] \underset{N\to \infty}{=} v^\infty(\theta^\star)+\frac{\mathop{\rm tr}\nolimits(\Gamma^{K}H^{-1})}N+o(1/N) $ as in the proof of Theorem~\ref{thm:optimK}.
\end{proof}
\subsection{Piecewise constant approximation framework}
\label{sec:local_approx}
We now specify our results in the linear case when $X$ takes its values in $[0,1]^d$, with $q=M^d$ and the basis
\begin{equation}\label{base_indic}
u_n(x)=\prod_{j=1}^d \mathbf{1}_{I_{a_j}}(x_j),
\end{equation}
for $n-1=a_1+a_2M+\cdots+a_dM^{d-1}$ with $a_1,\dots,a_d \in \{0,\dots,M-1\}$, and $I_a=[\frac{a}M,\frac{a+1}{M})$ for $a=0,\dots,M-2$ and $I_{M-1}=[\frac{M-1}M,1]$.
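In code, this basis amounts to locating the cell of the regular grid containing $x$; a minimal sketch (an illustration, not the authors' implementation) of the index map reads:
\begin{verbatim}
import numpy as np

def cell_index(x, M):
    """Zero-based index n-1 = a_1 + a_2 M + ... + a_d M^(d-1) of the
    cell of the regular M^d grid on [0,1]^d that contains x."""
    a = np.minimum((np.asarray(x) * M).astype(int), M - 1)  # digits a_j;
    # the minimum maps x_j = 1 into the last interval I_{M-1}
    return int(np.dot(a, M ** np.arange(len(a))))

# u_n(x) is then the indicator that cell_index(x, M) == n - 1.
\end{verbatim}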
The next proposition shows that, with this choice of basis, the optimal number of inner simulations is (under suitable assumptions) at least of order $M$. This illustrates that the more accurately the family $\varphi(\theta,x)$ approximates the conditional expectation ${\mathbb E}[f(Y)|X]$, the larger the optimal number of inner simulations.
\begin{proposition}\label{prop:linear_reg} Let us assume that ${\mathbb E}[f(Y)|X]=\psi(X)$ with $\psi:[0,1]^d\to {\mathbb R}$ being a Lipschitz function with Lipschitz constant $L$. Let us assume that ${\mathbb E}[f(Y)^2|X]-({\mathbb E}[f(Y)|X])^2=\sigma^2(X)$ for a function $\sigma:[0,1]^d \to [\underline{\sigma},+\infty)$ for some $0<\underline{\sigma}<\infty$. We consider $\varphi(\theta,X)=\theta\cdot u(X)$ with $\theta\in {\mathbb R}^q$, $q=M^d$ with the basis defined by~\eqref{base_indic}.
Then, $\mathop{\rm tr}\nolimits(AH^{-1})\le \frac 12 L^2M^{d-2}$ and $\frac 12 \underline{\sigma}^2 M^d\le \mathop{\rm tr}\nolimits(BH^{-1})$. In particular, $\frac{\mathop{\rm tr}\nolimits(BH^{-1})}{\mathop{\rm tr}\nolimits(AH^{-1})}\ge \frac{\underline{\sigma}^2}{L^2} M^2 $ and thus $K^\star\ge \gamma M$ for some $\gamma>0$, with $K^\star$ given by Theorem~\ref{thm:optimK}.
\end{proposition}
\begin{proof}
We have $u_n(x)=\mathbf{1}_{C_n}(x)$, with $C_n=I_{a_1}\times \cdots \times I_{a_d}$. Since $C_n\cap C_{n'}=\emptyset$ for $n\not = n'$, we have that $u(x)u(x)^T$ is a diagonal matrix. Then, the matrices $H$, $A$ and $B$ are diagonal and we get:
\begin{align*}
& H_{nn}=2{\mathbb P}(X \in C_n),\ A_{nn}={\mathbb E}[(\theta^\star\cdot u(X)-{\mathbb E}[f(Y)|X])^2\mathbf{1}_{X\in C_n}] ,\\ &B_{nn}={\mathbb E}[(f(Y)-{\mathbb E}[f(Y)|X])^2\mathbf{1}_{X\in C_n}]={\mathbb E}[({\mathbb E}[f(Y)^2|X]-({\mathbb E}[f(Y)|X])^2)\mathbf{1}_{X\in C_n}].
\end{align*}
Therefore, we get $\mathop{\rm tr}\nolimits(AH^{-1})=\frac 12 \sum_{n=1}^q {\mathbb E}[(\theta^\star\cdot u(X)-{\mathbb E}[f(Y)|X])^2 | {X\in C_n}]$ and $\mathop{\rm tr}\nolimits(BH^{-1})=\frac 12 \sum_{n=1}^q {\mathbb E}[{\mathbb E}[f(Y)^2|X]-({\mathbb E}[f(Y)|X])^2| X\in C_n]=\frac 12 \sum_{n=1}^q {\mathbb E}[\sigma(X)^2| X\in C_n]\ge \frac{\underline{\sigma}^2q}{2}$. Besides, we observe that $\theta^\star_n={\mathbb E}[\psi(X)|X\in C_n]$ since $\theta^\star$ minimizes $v^\infty(\theta)=\sum_{n=1}^q {\mathbb E}[\mathbf{1}_{C_n}(X)(\theta_n-\psi(X))^2]$, and therefore
$|\theta^\star_n-\psi(x)|\le L/M$ for $x\in C_n$ by the triangle inequality. This gives $\mathop{\rm tr}\nolimits(AH^{-1})\le \frac 12(L/M)^2q=\frac 12 L^2 M^{d-2}$, and the claim follows.
\end{proof}
\begin{remark}(Asymptotic optimal tuning of $M$, $K$ and $N$) We work under the assumptions of Proposition~\ref{prop:linear_reg} and assume in addition that $\sigma(x)\le \overline{\sigma}<\infty$. We are interested in minimizing
$$\mathcal{E}:={\mathbb E} \left[ \frac 1 N \sum_{i=1}^N\left({\theta}^K_N\cdot u(X_i) - \frac 1K \sum_{k=1}^Kf(Y_i^{(k)}) \right)^2\right],$$
which is the averaged quadratic error on the sample. Following the lines of~\cite[Theorem 8.2.4]{Gobet}, we get $\mathcal{E}=\mathcal{E}_1+\mathcal{E}_2$, with
$$\mathcal{E}_1={\mathbb E} \left[ \frac 1 N \sum_{i=1}^N\left({\theta}^K_N \cdot u(X_i) - \psi(X_i) \right)^2\right] \text{ and } \mathcal{E}_2={\mathbb E} \left[ \frac 1 N \sum_{i=1}^N\left(\psi(X_i) - \frac 1K \sum_{k=1}^Kf(Y_i^{(k)}) \right)^2\right], $$
representing respectively the approximation error and the statistical error. We show in the same manner that $\mathcal{E}_1\le \frac{L^2}{M^2}$ and
$\mathcal{E}_2 \le \frac{\overline{\sigma}^2M^d}{K N}$. To achieve a precision of order $\varepsilon>0$ (and a quadratic error of order $\varepsilon^2$), we then take $\frac{L^2}{M^2}=\varepsilon^2$ and $\frac{\overline{\sigma}^2M^d}{K N}=\varepsilon^2$. This leads to $M=c\varepsilon^{-1}$ and $KN=c' \varepsilon^{-(2+d)}$ for some constants $c,c'>0$, for an overall computational cost of $O(\varepsilon^{-(3+d)})$. Taking the optimal choice $K^\star$ of Theorem~\ref{thm:optimK} does not change the order of convergence but improves the multiplicative constant.
\end{remark}
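As a rough numerical companion to the remark above, the following sketch (ours; all constants are set to one, and the choice $K\sim M$ is only the order suggested by Proposition~\ref{prop:linear_reg}) turns a target precision $\varepsilon$ into tentative values of $(M,N,K)$:
\begin{verbatim}
import math

def tune(eps, L=1.0, sigma_bar=1.0, d=1):
    """Illustrative tuning for a target precision eps (constants set to 1)."""
    M = math.ceil(L / eps)                          # from L^2 / M^2 = eps^2
    NK = math.ceil(sigma_bar**2 * M**d / eps**2)    # from sig^2 M^d/(KN) = eps^2
    K = max(1, M)                                   # K* is at least of order M
    return M, math.ceil(NK / K), K

print(tune(0.05, d=1))   # e.g. (20, 400, 20)
\end{verbatim}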
\section{Numerical experiments}\label{Sec_num}
This numerical section is organized as follows. We first present the different estimators used to approximate $K^\star$ and describe the Monte-Carlo algorithm used in all the examples. We then consider a toy example illustrating that $K^\star$ may be arbitrarily large. In this case, as the computational time $C_{Y|X}$ is equal to $C_X$, the theoretical multiplicative gain is $r^\star \approx 1/2$, and we almost reach this bound in practice. The second example shows a case where the computational time $C_{Y|X}$ is much smaller than $C_X$, so that the multiplicative gain may be larger. The last example is more practically oriented and deals with risk management concerns.
In this section, we compare the performances of the following estimations of $K^\star$ based on Theorem~\ref{thm:optimK},
\[
K^\star_H= \nu\left(\frac{\mathop{\rm tr}\nolimits(BH^{-1})}{C_{Y|X}\mathop{\rm tr}\nolimits(AH^{-1})} \right)
\]
with the one suggested by Remark~\ref{rk_KnoH},
\[
K^\star_{\cancel{H}}= \nu\left(\frac{\mathop{\rm tr}\nolimits(B)}{C_{Y|X}\mathop{\rm tr}\nolimits(A)} \right),
\]
which is much simpler to estimate. Following~\eqref{def_KAH} and~\eqref{def_KGH}, we approximate $K^\star_H$ or $K^\star_{\cancel{H}}$ by the four estimators
\begin{equation}
\label{eq:Kstar-estimators}
\begin{alignedat}{2}
&\hat{K}^A_H = \nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} \hat{H}^{-1})}{C_{Y|X} \mathop{\rm tr}\nolimits( (\hat A^{anti}_{\bar{K}} \hat{H}^{-1})_+)} \right);& \quad &
\hat{K}^A_{\cancel{H}} = \nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} )}{C_{Y|X} \mathop{\rm tr}\nolimits( (\hat A^{anti}_{\bar{K}} )_+)} \right)\\
&\hat{K}^\Gamma_H= \nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} \hat{H}^{-1})}{C_{Y|X} \mathop{\rm tr}\nolimits( \Gamma^{2\bar{K}}_N \hat{H}^{-1})} \right);& \quad &
\hat{K}^\Gamma_{\cancel{H}} = \nu\left(\frac{\mathop{\rm tr}\nolimits( \hat B^{anti}_{\bar{K}} )}{C_{Y|X} \mathop{\rm tr}\nolimits( \Gamma^{2\bar{K}}_N )} \right),
\end{alignedat}
\end{equation}
where $\hat{H}=\nabla^2v^K_N(\theta^K_N)$. Note that in the linear regression framework, we simply have $\hat{H}=\frac 2 N \sum_{i=1}^Nu(X_i)u(X_i)^T$. When $q=1$, the matrices are scalar, and we take $\hat{K}^A_H = \hat{K}^A_{\cancel{H}} = \nu\left(\frac{ \hat B^{anti}_{\bar{K}} }{C_{Y|X} | \hat A^{anti}_{\bar{K}}| } \right)$ and $\hat{K}^\Gamma_H=\hat{K}^\Gamma_{\cancel{H}} = \nu\left(\frac{ \hat B^{anti}_{\bar{K}} }{C_{Y|X} \Gamma^{2\bar{K}}_N } \right)$. Since $\Gamma^{2\bar{K}}_N$ is a positive semi-definite estimator of $\Gamma^{2\bar{K}}\ge A$, the estimators $\hat{K}^\Gamma_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ will slightly underestimate $ K^\star_H$ and $ K^\star_{\cancel{H}}$. However, as we will see, they have a much smaller variance and give a nearly optimal computational gain. A schematic implementation of these estimators is sketched below.
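The following sketch (ours) shows how $\hat{K}^A_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ can be evaluated from estimated matrices. The function $\nu$ is defined in the body of the paper and is not reproduced in this excerpt; we proxy it by rounding $\sqrt{x}$ to an integer, which is the first-order optimality condition of minimising $(A+B/K)(1+KC_{Y|X})$ over $K$ and is consistent with the value $K^\star=100$ obtained in the Gaussian toy example below. This proxy is an assumption on our part.
\begin{verbatim}
import numpy as np

def nu(x):
    # Proxy for the paper's nu: round(sqrt(x)); an assumption, see lead-in.
    return max(1, round(float(np.sqrt(x))))

def K_hat_Gamma_noH(B_anti, Gamma_2K, C_inner):
    """hat K^Gamma_{no H} = nu( tr(B_anti) / (C_inner * tr(Gamma_2K)) )."""
    return nu(np.trace(B_anti) / (C_inner * np.trace(Gamma_2K)))

def K_hat_A_H(B_anti, A_anti, H, C_inner):
    """hat K^A_H, keeping the positive part of the spectrum of A_anti H^{-1}."""
    Hinv = np.linalg.inv(H)
    w = np.linalg.eigvals(A_anti @ Hinv).real     # real spectrum since H > 0
    tr_pos = w[w > 0].sum()                       # tr((A_anti H^{-1})_+)
    return nu(np.trace(B_anti @ Hinv) / (C_inner * tr_pos))
\end{verbatim}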
For each example, we run our algorithm $20,000$ times to approximate ${\mathbb E}[v^K_N(\theta^\star)] - {\mathbb E}[v^K_N(\theta_N^K)]$, which is (under a uniform integrability assumption) an estimator of ${\mathbb E}[v^\infty(\theta_N^K)] - {\mathbb E}[v^\infty(\theta^\star)]$ by Proposition~\ref{prop:asymp}. Namely, we calculate for $J=20,000$ the estimator $\frac 1 J \sum_{j=1}^J v^K_{N,j}(\theta^\star) - v^K_{N,j}(\theta^K_{N,j})$, where $(v^K_{N,j})_{1\le j \le J}$ are iid samples of~\eqref{eq:vKN} and, for each $j$, $\theta^K_{N,j}$ is the minimizer of $v^K_{N,j}$. This minimizer is computed explicitly for linear regression and can be approximated by gradient descent otherwise. The value of $\theta^\star$ is approximated by minimizing $v_N^1(\theta)$ for $N=100,000$. The values of $(N,K)$ used to sample $v^K_{N,j}$, and then $\theta^K_{N,j}$, are such that $N \frac{1+KC_{Y|X}}{1+C_{Y|X}} \approx 5000$, so that the computational budget is the same for all $K$.
Using the $20,000$ runs, we compute as many samples of the estimators $\hat{K}^A_H$, $\hat{K}^A_{\cancel{H}}$, $\hat{K}^\Gamma_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ and plot their empirical distributions on the window $\{0, \dots, 110\}$. We also calculate the multiplicative computational gain $r^K$ defined by Remark~\ref{rk_rK} by using the estimator
\begin{align}
\frac{\frac 1 J \sum_{j=1}^J v^K_{N'(N,K),j}(\theta^\star) - v^K_{N'(N,K),j}(\theta^K_{N'(N,K),j})}{\frac 1 J \sum_{j=1}^J v^1_{N,j}(\theta^\star) - v^1_{N,j}(\theta^1_{N,j})}
\end{align}
with $N=5000$ and $N'(N,K)=\left\lfloor N \frac{1+C_{Y|X}}{1+KC_{Y|X}}\right \rfloor$. In fact, assuming the uniform integrability of the family $N'(N,K)(v^K_{N'(N,K)}(\theta^\star)-v^K_{N'(N,K)}(\theta^K_{N'(N,K)}))$, we get by Proposition~\ref{prop:asymp} that ${\mathbb E}[v^K_{N'(N,K)}(\theta^\star)]-{\mathbb E}[v^K_{N'(N,K)}(\theta^K_{N'(N,K)})]\sim_{N\to \infty} \frac{\mathop{\rm tr}\nolimits(\Gamma^K H^{-1})}{N'(N,K)}$ exactly as in the proof of Theorem~\ref{thm:optimK}. By~\eqref{def_rK}, this gives
$$ \frac{{\mathbb E}[v^K_{N'(N,K)}(\theta^\star)]-{\mathbb E}[v^K_{N'(N,K)}(\theta^K_{N'(N,K)})]}{{\mathbb E}[v^1_{N}(\theta^\star)]-{\mathbb E}[v^1_{N}(\theta^1_{N})]} \to_{N\to \infty} \frac{1+KC_{Y|X}}{1+C_{Y|X}} \frac{\mathop{\rm tr}\nolimits(\Gamma^K H^{-1})}{\mathop{\rm tr}\nolimits(\Gamma^1 H^{-1})}=r^K.$$
\subsection{Toy example in a Gaussian framework}
Consider a one-dimensional toy example in a Gaussian framework. Let $(X,Y)$ be a Gaussian vector such that $X$ and $Y$ are two standard normal random variables with covariance $\rho \in [-1, 1]$. Let $f$ be the square function, $f: x \in {\mathbb R} \longmapsto x^2$. We consider a constant approximation, meaning that the function $\varphi$ is defined by $\varphi(\theta,x)=\theta$, for $\theta \in {\mathbb R}$ and $x \in {\mathbb R}$. Easy computations lead to the explicit formulas
\begin{align*}
&{\mathbb E}[f(Y)|X]= \rho^2 X^2 + (1-\rho^2) \\
&\theta^\star=1; \quad A=\Gamma_\infty= 2 \rho^4; \quad B=2(1-\rho^4).
\end{align*}
In this case, the value of $K^\star$ is given by
\[
K^\star=\nu\left(\frac{1-\rho^4}{\rho^4}\right).
\]
This very simple example shows that the optimal number of inner samples $K^\star$ can vary from $1$ to arbitrarily large values. As the parameter $\theta$ is one-dimensional, the Hessian matrix $H$ is scalar valued and therefore $K^\star_H = K^\star_{\cancel{H}}$. Thus, the four estimators reduce to two. The explicit formulas above can be checked by a quick simulation, as sketched below.
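The following quick Monte-Carlo check (ours, not the paper's code) recovers the explicit values of $A$ and $B$ and the resulting $K^\star$ for $\rho=0.1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.1, 10**7
X = rng.normal(size=n)
Y = rho * X + np.sqrt(1 - rho**2) * rng.normal(size=n)  # Cov(X, Y) = rho

cond = rho**2 * X**2 + (1 - rho**2)   # E[f(Y)|X] with f(y) = y^2
A = np.var(cond)                      # -> 2 rho^4 = 2e-4
B = np.var(Y**2) - A                  # -> 2 (1 - rho^4)
print(A, B, np.sqrt(B / A))           # sqrt(B/A) ~ 100 = K* for rho = 0.1
\end{verbatim}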
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[scale=0.7]{gaussian_toy_rho01_deg0_K4.results.pdf}
\end{center}
\caption{Computational multiplicative gain as a function of~$K$ for the Gaussian toy example ($\rho = 0.1$) with regression on the constant function.}
\label{fig:toy_efficiency}
\end{figure}
\begin{figure}[h!tbp]
\centering\subfloat[Distribution of $\hat{K}^A_{H}$ with $\bar K=4$]{
\label{fig:toy_K_H_A}
\includegraphics[keepaspectratio, width=.45\textwidth]{gaussian_toy_rho01_deg0.K_H_A.pdf}
}
\subfloat[Distribution of $\hat{K}^\Gamma_H$ with $\bar K=32$]{
\label{fig:toy_K_H_Gamma}
\includegraphics[keepaspectratio, width=.45\textwidth]{gaussian_toy_rho01_deg0_K32.K_H_Gamma2K.pdf}
}
\caption{Comparison of the 2 estimators $\hat{K}^A_H$ and $\hat{K}^\Gamma_H$ for the Gaussian toy example. }
\label{fig:toy_K}
\end{figure}
For our numerical experiments on this toy example, we fix $\rho = 0.1$, in which case the theoretical value of $K^\star$ is $100$. Figure~\ref{fig:toy_efficiency} clearly shows that the gain $r^K$ is almost constant for any $K \ge 20$. Even though from a theoretical point of view $K^\star=100$, any value of $K$ larger than $20$ is equally good in practice. Note that from Corollary~\ref{cor:gain}, $r^\star \ge \frac{1}{2}$; this lower bound is almost attained by $r^K$ for $K \ge 20$. Figure~\ref{fig:toy_K} shows a comparison of the distributions of the two estimators $\hat{K}^A_H$ and $\hat{K}^\Gamma_H$. The estimator $\hat{K}^A_H$ has a very large standard deviation (equal to $79$) and may take values as large as $4275$, whereas the estimator $\hat{K}^\Gamma_H$ is much more concentrated and only takes the two values $8$ and $9$. These are typical behaviours of these estimators: as discussed at the end of Section~\ref{sec:hat_A_B}, the estimated matrix $\hat A^{anti}_{\bar{K}}$ may have negative eigenvalues coming from the large variance of the Monte Carlo computation, leading to unreliable estimates of $A$. By contrast, $\hat{K}^\Gamma_H$ uses $\Gamma^{2 \bar K}_N$ as an approximation of $A$ from above, leading to a conservative estimate of $K^\star$ from below with far less variance. The gains $r^K$ reported in Figure~\ref{fig:toy_efficiency} for $K=8$ or $K=9$ are very close to the best possible gains. In this example, we can conclude that $\hat{K}^\Gamma_H$ with $\bar K = 32$ is much better than $\hat{K}^A_H$, since it has small fluctuations and gives a nearly optimal computational gain.
\subsection{A SDE conditioned on an intermediate date}\label{Subsec_SDE}
We consider the following SDE
\[
dX_t = \cos(X_t) dW_t; \quad X_0=0
\]
where $W$ is a real-valued Brownian motion. We aim at estimating ${\mathbb E}[X_{t_2}^2 | X_{t_1}]$ with $t_2 = 10$ and $t_1 = 9$. This amounts to taking $Y=X_{t_2}$, $f(x)=x^2$ and $X=X_{t_1}$ in~\eqref{minim_pbms}. The SDE is discretized using the Euler scheme with $200$ time-steps over $[0,t_2]$, hence inner simulations (over $[t_1,t_2]$) are cheaper than outer simulations (over $[0,t_1]$); their relative cost is $C_{Y|X} = \frac{1}{9}$. We consider two different settings for the family of functions $(\varphi(\theta; \cdot))_\theta$: a polynomial of degree $3$ (see Figures~\ref{fig:cos_pol_efficiency_t9} and~\ref{fig:cos_sde_pol_K_t9}) and a piecewise constant approximation (see Figures~\ref{fig:cos_loc_efficiency_t9} and~\ref{fig:cos_sde_loc_K_t9}). In both settings, the parameter $\theta$ is multi-dimensional, so the Hessian matrix is a true matrix and the estimators with and without $H$ are actually different.
We build the piecewise constant approximation on ${\mathbb R}$ in the following way. First, we center the samples of $X$ around their mean and rescale them by their standard deviation, then we apply the function $x \mapsto \frac{1}{\sqrt{\pi}} \int_{-\infty}^x \expp{-t^2} dt$ to map ${\mathbb R}$ into $(0, 1)$. Finally, we split the interval $[0, 1]$ into $M$ regular sub-intervals.
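A minimal sketch (ours) of the corresponding outer/inner Euler simulation, with the step counts implied by $C_{Y|X}=1/9$, reads:
\begin{verbatim}
import numpy as np

def outer_then_inner(N, K, t1=9.0, t2=10.0, steps_per_unit=20, seed=0):
    """Euler scheme for dX = cos(X) dW: outer paths to t1, then K inner
    continuations to t2 for each outer sample (200 steps over [0, t2])."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_unit
    X = np.zeros(N)                       # X_0 = 0
    for _ in range(int(t1 * steps_per_unit)):        # outer stage: X_i = X_{t1}
        X += np.cos(X) * np.sqrt(dt) * rng.normal(size=N)
    Y = np.repeat(X[:, None], K, axis=1)  # K conditionally iid continuations
    for _ in range(int((t2 - t1) * steps_per_unit)): # inner stage
        Y += np.cos(Y) * np.sqrt(dt) * rng.normal(size=(N, K))
    return X, Y**2                        # (X_i, f(Y_i^(k))) with f(x) = x^2
\end{verbatim}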
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[scale=0.7]{cos_sde_T10_t9_deg3_K4.results.pdf}
\end{center}
\caption{Computational multiplicative gain as a function of~$K$ for the SDE example with a polynomial regression of order $3$ and $t_1=9$.}
\label{fig:cos_pol_efficiency_t9}
\end{figure}
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[scale=0.7]{cos_sde_T10_t9_deg3_5_loc.results.pdf}
\end{center}
\caption{Computational multiplicative gain as a function of~$K$ for the SDE example with a local regression with $t_1=9$ and $M=50$ (orange crosses) or $M=100$ (blue crosses).}
\label{fig:cos_loc_efficiency_t9}
\end{figure}
Figures~\ref{fig:cos_pol_efficiency_t9} and~\ref{fig:cos_loc_efficiency_t9} exhibit very similar gain profiles for $r^K$. The polynomial regression of order~3 works quite well on this example, which is related to the choice of $f(x)=x^2$. Figure~\ref{fig:cos_loc_efficiency_t9} compares the multiplicative gain for two different local regressions: one with $M=50$ intervals and the other one with $M=100$. As expected, increasing the number of cells improves the approximation and thus the gain, according to Corollary~\ref{cor:gain}. The multiplicative gain obtained is around $0.2$ for $M=50$ and $0.15$ for $M=100$, to be compared with the best gain $\frac{1}{10}$ given by Corollary~\ref{cor:gain}.
\begin{figure}[ht!bp]
\centering\subfloat[Distribution of $\hat{K}^A_{H}$ with $\bar K=4$]{
\label{fig:cos_sde_pol_K_H_A_t9}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3.K_H_A.pdf}
}
\subfloat[Distribution of $\hat{K}^A_{\cancel{H}}$ with $\bar K=4$]{
\label{fig:cos_sde_pol_K_noH_A_t9}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3.K_noH_A.pdf}
}\\
\subfloat[Distribution of $\hat{K}^\Gamma_H$ with $\bar K=32$]{
\label{fig:cos_sde_pol_K_H_Gamma2K_t9}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_K32.K_H_Gamma2K.pdf}
}
\subfloat[Distribution of $\hat{K}^\Gamma_{\cancel{H}}$ with $\bar K=32$]{
\label{fig:cos_sde_pol_K_noH_Gamma2K_t9}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_K32.K_noH_Gamma2K.pdf}
}
\caption{Comparison of the 4 estimators $\hat{K}^A_H$, $\hat{K}^A_{\cancel{H}}$, $\hat{K}^\Gamma_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ for the SDE example with a polynomial regression with degree $3$ and $t_1=9$.}
\label{fig:cos_sde_pol_K_t9}
\end{figure}
Figure~\ref{fig:cos_sde_pol_K_t9} shows a comparison of the four estimators defined in~\eqref{eq:Kstar-estimators} in the polynomial regression setting. The estimators $\hat{K}^A_H$ (resp. $\hat{K}^\Gamma_H$) and $\hat{K}^A_{\cancel{H}}$ (resp. $\hat{K}^\Gamma_{\cancel{H}}$) have very similar distributions. Simplifying $H$ in the ratio $\frac{\mathop{\rm tr}\nolimits(BH^{-1})}{C_{Y|X}\mathop{\rm tr}\nolimits(AH^{-1})}$ even tends to slightly reduce the variance of the estimator without significantly changing its mean. The estimators $\hat{K}^A_H$ and $\hat{K}^A_{\cancel{H}}$, based on the use of $\hat A^{anti}_{\bar{K}}$, have larger variances and may return very extreme values (between $4$ and $340$). By contrast, the estimators $\hat{K}^\Gamma_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ have a very small standard deviation and show a much more concentrated distribution than the estimators based on $\hat A^{anti}_{\bar{K}}$. The use of $\Gamma^{2\bar{K}}_N$ as an approximation of $A$ tends to produce smaller estimates of $K^\star$: their empirical means are shifted by approximately $-20$. However, the gain profiles of Figure~\ref{fig:cos_pol_efficiency_t9} are almost flat for $K \ge 20$, hence this shift does not change the best gain attained by our method. As a conclusion, we recommend using $\hat K^\Gamma_{\cancel{H}}$ to approximate $K^\star$.
In Figure~\ref{fig:cos_sde_loc_K_t9}, we observe, for the piecewise constant approximation setting, behaviours very similar to those described above for the polynomial regression framework. However, we note that the estimation of the matrix $H$ is more difficult than in the polynomial framework, especially for the intervals containing few data points. This explains heuristically why the estimators without $H$ are less noisy.
\begin{figure}[htbp]
\centering\subfloat[Distribution of $\hat{K}^A_{H}$ with $\bar K=4$ and $t_1=9$]{
\label{fig:cos_sde_loc_K_H_A}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_loc.K_H_A.pdf}
}
\subfloat[Distribution of $\hat{K}^A_{\cancel{H}}$ with $\bar K=4$]{
\label{fig:cos_sde_loc_K_noH_A}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_loc.K_noH_A.pdf}
}\\
\subfloat[Distribution of $\hat{K}^\Gamma_H$ with $\bar K=32$]{
\label{fig:cos_sde_loc_K_H_Gamma2K}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_loc_K32.K_H_Gamma2K.pdf}
}
\subfloat[Distribution of $\hat{K}^\Gamma_{\cancel{H}}$ with $\bar K=32$]{
\label{fig:cos_sde_loc_K_noH_Gamma2K}
\includegraphics[keepaspectratio, width=.45\textwidth]{cos_sde_T10_t9_deg3_loc_K32.K_noH_Gamma2K.pdf}
}
\caption{Comparison of the 4 estimators $\hat{K}^A_H$, $\hat{K}^A_{\cancel{H}}$, $\hat{K}^\Gamma_H$ and $\hat{K}^\Gamma_{\cancel{H}}$ for the SDE example with a local regression with $M=50$ and $t_1=9$. }
\label{fig:cos_sde_loc_K_t9}
\end{figure}
\subsection{An introductory example to risk management in insurance}
In the introduction of the present paper, we have indicated the relevance of computing conditional expectations for risk management. Here, we revisit an example from~\cite{alfonsi2021multilevel} that mimics the methodology of the standard formula for calculating the Solvency Capital Requirement, in the sense that it applies a shock to the underlying asset (we refer to~\cite{alfonsi2021multilevel} for further details). We now describe this example and consider an asset whose price follows the Black-Scholes model:
$$S_t=S_0 \exp\left( \sigma W_t-\frac{\sigma^2}2t \right), \ t\ge 0,$$
where $S_0,\sigma>0$ and $W$ is a standard Brownian motion. In practice, insurance companies are interested in computing the losses of their portfolio when a shock occurs in the economy. Here, for simplicity, we will consider a butterfly option as a crude approximation of a true insurance portfolio.
Thus, we are interested in a butterfly option with strikes $0<{\bf K}_1<{\bf K}_2$ that pays
$$\psi(S_T)=(S_T-{\bf K}_1)^++(S_T-{\bf K}_2)^+-2\left(S_T-\frac{{\bf K}_1+{\bf K}_2}2 \right)^+$$
at time $T>0$. The price of such an option at time~$t\in [0,T]$ is given by ${\mathbb E}[\psi(S_T)|S_t]$. Solvency II in its standard model assumes that there is a shock on the asset at time~$t\in (0,T)$ that multiplies its value by $1+s$, $s\in (-1,+\infty)$. Then, in the Black-Scholes model, we have to compute the following quantity
\begin{equation}
\label{eq:loss}
\mathcal{L}= {\mathbb E}[\max({\mathbb E}[\psi(S_T)-\psi((1+s)S_T)|S_t],0)],
\end{equation}
which can be seen as the expected loss generated by the shock. In this particular example, ${\mathbb E}[\psi(S_T)-\psi((1+s)S_T)|S_t]$ has an explicit form given by the Black-Scholes formula, which we can use as a benchmark to compute the mean square error of our estimator of~\eqref{eq:loss}.
Note that since $x\mapsto \max(x,0)$ is $1$-Lipschitz, we have
\begin{align*}
\bigg|{\mathbb E}[\max(\varphi(\theta,S_t),0)]&-{\mathbb E}[\max({\mathbb E}[\psi(S_T)-\psi((1+s)S_T)|S_t],0)] \bigg|\\ &\le \sqrt{{\mathbb E}\left[\left( {\mathbb E}[\psi(S_T)-\psi((1+s)S_T)|S_t] -\varphi(\theta,S_t) \right)^2 \right]}.
\end{align*}
The estimator $\theta^K_N$ empirically minimizes the right-hand side, which at the same time gives an upper bound on the approximation error of the expected loss.
Here, we have used our approach to compute ${\mathbb E}[\psi(S_T)-\psi((1+s)S_T)|S_t]$. Thus, we have $X=S_t$, $Y=X\exp\left(\sigma(W_T-W_t)-\frac{\sigma^2}2 (T-t) \right)$ and $C_X=C_{Y|X}$ (the simulation of $X$ and of $Y$ given $X$ both require sampling one normal random variable)\footnote{Note that $C_X=C_{Y|X}$ is particular to the Black-Scholes model, for which exact simulation is possible. For a more general diffusion, one typically uses a discretization scheme to approximate it, as in Subsection~\ref{Subsec_SDE}. Then, we instead get $C_{Y|X} \approx \frac{T-t}tC_X$ and the computational gain may be significant when $t\to T$.}. A sketch of this nested sampling is given below.
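For concreteness, here is a minimal sketch (ours) of the exact nested sampling in this Black-Scholes example; the values of $S_0$, $\sigma$ and the strikes ${\bf K}_1,{\bf K}_2$ are illustrative assumptions, while $s=0.2$, $t=1$ and $T=2$ are those used in the experiments below.
\begin{verbatim}
import numpy as np

def butterfly(S, K1=90.0, K2=110.0):   # strike values are illustrative
    mid = 0.5 * (K1 + K2)
    return (np.maximum(S - K1, 0) + np.maximum(S - K2, 0)
            - 2 * np.maximum(S - mid, 0))

def sample_X_fY(N, K, S0=100.0, sigma=0.2, t=1.0, T=2.0, s=0.2, seed=0):
    """One normal draw gives X = S_t; one more per inner sample gives S_T."""
    rng = np.random.default_rng(seed)
    X = S0 * np.exp(sigma * np.sqrt(t) * rng.normal(size=N)
                    - 0.5 * sigma**2 * t)
    Z = rng.normal(size=(N, K))
    ST = X[:, None] * np.exp(sigma * np.sqrt(T - t) * Z
                             - 0.5 * sigma**2 * (T - t))
    fY = butterfly(ST) - butterfly((1 + s) * ST)  # f(Y)=psi(S_T)-psi((1+s)S_T)
    return X, fY  # regress inner averages of fY on X, then take max(., 0)
\end{verbatim}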
We have taken $s=0.2$, $t=1$ and $T=2$ and consider the local regression with $M=50$, using the same transformation as the one used for the SDE example presented in Subsection~\ref{Subsec_SDE}.
Figure~\ref{fig:butterfly_efficiency} plots the multiplicative computational gain as a function of~$K$, while Figure~\ref{fig:butterfly_K} shows the empirical distributions of the different estimators~\eqref{eq:Kstar-estimators}. We see from Figure~\ref{fig:butterfly_efficiency} that most of the computational gain is realized for $K\ge 5$. Similarly to the previous example, Figure~\ref{fig:butterfly_K} shows that the estimator $\hat{K}^\Gamma_{\cancel{H}}$ is a good way to choose $K$: it has small fluctuations and avoids the issue of estimating~$\hat{H}$.
We now focus on the numerical approximation of~\eqref{eq:loss}. Figure~\ref{fig:butterfly_loss} illustrates the mean square error on the estimated expected loss as a function of~$(N,K)$ for a given computational budget, as explained in the introduction of Section~\ref{Sec_num}. More precisely, from the sample $(\theta^K_{N'(N,K),j}, 1\le j\le J)$, we compute:
$$ \frac 1 J \sum_{j=1}^J \left({\mathbb E}[\max(\varphi(\theta^K_{N'(N,K),j},S_t),0)|\theta^K_{N'(N,K),j}]-\mathcal{L}\right)^2,$$
and plot the different values.
Here, we compute $${\mathbb E}[\max(\varphi(\theta^K_{N'(N,K),j},S_t),0)|\theta^K_{N'(N,K),j}]=\int_{{\mathbb R}} \max(\varphi(\theta^K_{N'(N,K),j},S_0 e^{\sigma \sqrt{t} x -\sigma^2 t/2}),0) \frac{e^{-x^2/2}}{\sqrt{2 \pi}}dx$$ and $\mathcal{L}$ by numerical integration, using the Black-Scholes formula for $\mathcal{L}$. We find $\mathcal{L}\approx 3.077$. We first note from Figure~\ref{fig:butterfly_loss} that in this example, as in all the other examples in Section~\ref{Sec_num}, the commonly used choice $K=1$ is suboptimal. Numerically, the optimal choice of $K$ seems to be $K^\star=8$ or $K^\star=9$, which is in line with the estimators $\hat{K}^\Gamma_{H}$ and $\hat{K}^\Gamma_{\cancel{H}}$. However, any choice of $K$ between 5 and 20 leads to an MSE that is close to the optimal one, which confirms that a precise estimation of~$K^\star$ is not needed to benefit from the proposed method.
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[scale=0.7]{butterfly_T2_deg3_loc_K4.loss.results.pdf}
\end{center}
\caption{Computational multiplicative gain as a function of~$K$ for the butterfly example with a local regression with $M=50$.}
\label{fig:butterfly_efficiency}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[Distribution of $\hat{K}^A_{H}$ with $\bar K=4$]{
\label{fig:butterfly_K_H_A}
\includegraphics[keepaspectratio, width=.45\textwidth]{butterfly_T2_deg3_loc_K4.K_H_A.pdf}
}
\subfloat[Distribution of $\hat{K}^A_{\cancel{H}}$ with $\bar K=4$]{
\label{fig:butterfly_K_noH_A}
\includegraphics[keepaspectratio, width=.45\textwidth]{butterfly_T2_deg3_loc_K4.K_noH_A.pdf}
}\\
\subfloat[Distribution of $\hat{K}^\Gamma_H$ with $\bar K=32$]{
\label{fig:butterfly_K_H_Gamma2K}
\includegraphics[keepaspectratio, width=.45\textwidth]{butterfly_T2_deg3_loc_K32.K_H_Gamma2K.pdf}
}
\subfloat[Distribution of $\hat{K}^\Gamma_{\cancel{H}}$ with $\bar K=32$]{
\label{fig:butterfly_K_noH_Gamma2K}
\includegraphics[keepaspectratio, width=.45\textwidth]{butterfly_T2_deg3_loc_K32.K_noH_Gamma2K.pdf}
}
\caption{Comparison of the 4 estimators $\hat{K}^A_H$, $\hat{K}^A_{\cancel{H}}$, $\hat{K}^\Gamma_H$
and $\hat{K}^\Gamma_{\cancel{H}}$ for the butterfly example with a local regression with $M=50$.}
\label{fig:butterfly_K}
\end{figure}
\begin{figure}[h!tbp]
\begin{center}
\includegraphics[scale=0.7]{butterfly_T2_deg3_loc_mse_loss.pdf}
\end{center}
\caption{Computation of the mean square error as a function of~$K$ with a local regression with $M=50$. }
\label{fig:butterfly_loss}
\end{figure}
\newpage
\section{Conclusion}
In this work, we have investigated how to balance the computational effort between inner and outer simulations when computing conditional expectations with least-square Monte Carlo. The computational gain can be significant when the computational cost $C_{Y|X}$ is small with respect to $C_X$, and when the family $(\varphi(\theta,X))_{\theta}$ well approximates the conditional expectation ${\mathbb E}[f(Y)|X]$.
We have proposed several estimators to approximate the optimal number of inner simulations in practice. Numerical simulations have shown that the estimators $\hat K^\Gamma_H$ and $\hat K^\Gamma_{\cancel{H}}$ have much smaller standard deviations. Although they provide smaller estimates of the optimal number of inner samples $K^\star$, they almost attain the best gain and should be used in practice in preference to those relying on $\hat A^{anti}$. When it comes to choosing between $\hat K^\Gamma_H$ and $\hat K^\Gamma_{\cancel{H}}$, one should keep in mind that $H$ is a Hessian matrix, whose computation may be extremely costly and noisy. The effect of removing $H$ in $\hat K^\Gamma$ is to reduce this noise, and in all our experiments this estimator almost reaches the optimal gain. Thus, as the best trade-off between accuracy and ease of computation, we suggest using $\hat K^\Gamma_{\cancel{H}}$.
\clearpage
There is still very little known about the nature of the dark matter in the universe, besides that it is non-relativistic and interacts only weakly with the Standard Model. Whilst it is typically assumed that the dark matter consists of some as-yet undiscovered elementary particle, primordial black holes (PBHs) are a compelling alternative, being naturally cold, dark, and consistent with the framework of known physics. For this reason a great deal of work has been done in understanding the astrophysical and cosmological consequences of a large PBH background; for many choices of PBH mass, strict constraints exist on the fraction of dark matter they could constitute. See \cite{pbhdm} for a review.
Many of these constraints, in particular those applying to smaller mass PBHs (in the mass range $10^{10}$ g to $10^{17}$ g), are due to the effects of the Hawking radiation these black holes emit. Whilst the existence of Hawking radiation is under little doubt, being necessary for the consistency of a thermodynamic description of black holes, Hawking radiation has never been observed, and so the precise way black holes radiate is still under question. For this reason, it is interesting to consider how modifying the nature of Hawking evaporation modifies the constraints on the density of primordial black holes in the universe.
In this work, we consider how the dominant constraints on the density of small PBHs --- those from the extragalactic gamma ray background (EGB) --- differ in the scenario that black holes can radiate into higher dimensions. The nature of gravity on short scales is not well understood: whilst Coulomb's law (or its quantum field theoretic generalisation) is known to apply down to distances of order $10^{-18}$ m, Newton's law of gravity has only been tested on scales of order several microns. Consequently, it is consistent for there to exist extra large spatial dimensions, and being extraordinarily compact objects, even black holes as heavy as $10^{17}$ g could be sensitive to these dimensions.
Most new physics, involving the introduction of extra degrees of freedom available for black holes to radiate, would result in little reduction in detectable evaporation products and at best modest weakening of existing constraints. The motivation for studying extra-dimensional evaporation is the critical fact that higher-dimensional black holes, if smaller than the size of the extra dimensions, are significantly \textit{colder} than their 4D counterparts, for a given fixed mass. These PBHs are thus expected to produce fewer evaporation products, and thus be subject to weaker constraints, than ordinary 4D black holes.
This paper is divided into five sections. In Section 2 we review the latest constraints on the PBH density across the entire range of masses. In Section 3 we discuss the behaviour of black holes in theories with large extra dimensions, and explain how the evaporation rate is modified in such a way as to substantially modify the nature of all constraints on low-mass PBHs. In Section 4 we present the main result of our analysis, the modified constraints on the PBH density arising from the extragalactic photon background, for several choices of the number of extra dimensions. Finally, in Section 5, we discuss the modifications we expect to occur to other constraints on the PBH density.
We take $c = \hbar = k_B = 1$ throughout.
\section{Existing Primordial Black Hole Constraints}
Constraints on the density of primordial black holes can be divided into two classes: constraints from the gravitational effects of the black holes themselves, and constraints from the particles they produce through Hawking evaporation. Since the total rate of energy loss is less for larger black holes, these evaporative constraints exist only for PBHs of mass less than about $10^{17}$ g. Conversely, gravitational effects of PBHs are typically negligible for black holes below this mass.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{nonevap.pdf}
\includegraphics[width=\linewidth]{evapcons3.pdf}
\caption{Constraints on the PBH density (at formation) as a fraction of the dark matter density. Shaded regions are excluded. In the upper panel are those constraints arising from the gravitational influence of the black holes, adapted from \cite{extendedmassfunctions}; from left to right the constraints arise from femtolensing experiments (FL), capture by neutron stars (NS), microlensing experiments (HSC and EROS), and effects on the CMB (PLANCK). The vertical grey lines indicate the mass $M_c$ at which the black hole radius is equal to the size of the extra dimensions, as discussed in Section \ref{ledsec}. In the lower panel are those constraints arising from the effects of the PBH evaporation products; the constraints from BBN are taken from \cite{kazconstraints} and those from the EGB we have reproduced in this work.}
\label{consfig}
\end{figure}
\subsection{Gravitational constraints}
The constraints on the PBH density for masses above around $10^{17}$ g are illustrated in the upper panel of Fig \ref{consfig}. Perhaps the most important of these come from lensing experiments. For small black holes, the wave nature of electromagnetic radiation is significant, and gravitational lensing around the black hole can give rise to an interferometry pattern in the received radiation (termed \textit{femtolensing}) \cite{femto}\footnote{These results have been called into question by \cite{femto2}.}. For larger black holes this interference is not detectable, though gravitational lensing can nevertheless result in the apparent magnification of stars passing behind them (termed \textit{microlensing}) \cite{HSC, EROS}.
There also exist astrophysical effects associated with the collision and subsequent capture of primordial black holes by compact objects such as neutron stars and white dwarfs. In some mass ranges, such collisions would result in the destruction of these objects; the number density of existing neutron stars and white dwarfs hence places mild bounds on the density of primordial black holes \cite{nscapture, whitedwarfs}.
For large black holes, of order $10^{35}$ g and larger, strict constraints arise from the effects of these PBHs on the CMB. In particular, the accretion of primordial gas around the black holes and subsequent injection of energy into the background plasma would be detectable through its influence on the angular distribution of temperature and polarization of the CMB. Data from Planck strongly constrains this scenario \cite{cmb}.
\subsection{Evaporative constraints}
\label{existingevap}
For masses below about $10^{17}$ g, all constraints on the PBH density are due to the effects of black hole evaporation. The dominant constraints come from the effects on big bang nucleosynthesis (BBN) and the size of the extragalactic gamma ray background (EGB). In \cite{kazconstraints} a comprehensive analysis of these constraints was performed. Injection of high-energy particles during BBN can cause dissociation of heavier isotopes and induce extra interconversion of protons and neutrons, the extent of which is strictly constrained by the known abundances of the light elements. Constraints on black holes that have not fully evaporated today arise from the extragalactic photon background. Continual evaporation of PBHs over the course of the universe's history would have converted a considerable quantity of energy into photons (primarily gamma rays), and this would be observable in the EGB. These constraints are illustrated in the lower panel of Fig. \ref{consfig}.
The constraints from the extragalactic photon background are of most relevance to this work, so we shall endeavour to explain the qualitative form of the constraints. Assuming the black holes are formed very early in the universe, there exists some mass $M_0$ such that they are disappearing today. For black holes much larger than this, incomplete evaporation occurs, and thus not all of the energy in the black holes is converted into Standard Model particles. Since the lifetime of a black hole scales approximately like the cube of its mass, a black hole need not be much heavier than $M_0$ before its lifetime is far longer than the age of the universe and only a small fraction of its energy is converted to photons. The constraints hence weaken as $M$ is increased above $M_0$.
For masses smaller than $M_0$, the black holes have completely disappeared by today. Though the total energy released by the black holes is the same for all such $M$, the smaller the black holes, the earlier they disappeared, and hence the greater the redshifting of the photons they produced. The energy contributed to the photon background today hence decreases as $M$ is decreased below $M_0$, and the constraints weaken. One needs to take a little more care than this --- smaller black holes emit predominantly higher-energy radiation, and the constraints on the size of the photon background are stricter at higher energy. However, sufficient redshifting of frequency occurs that in fact the dominant constraints on small black holes come from the soft end of the gamma ray background.
For black holes smaller than about $10^{13}$ g, complete evaporation has occurred before photon decoupling. Such radiation is hence not visible in the photon background, and so the constraints disappear completely for such black holes.
We briefly mention that there are several other constraints of an evaporative nature, arising from annihilation of electrons with positrons emitted by the PBHs \cite{positrons, 511}, \textit{galactic} gamma rays and antiprotons \cite{galacticgamma, galacticantiproton}, and effects on CMB anisotropy \cite{anisotropy}. Apart from over very narrow mass ranges, the constraints from BBN and the EGB are the strictest of these.
\section{Black Holes in Large Extra Dimensions}
\label{ledsec}
The existence of extra compact spatial dimensions is an appealing explanation for the observed weakness of gravity relative to the other fundamental forces \cite{leds}. Informally, the force of gravity is weaker because it is `diluted' amongst these extra dimensions. More formally, in the case that the geometry of the extra dimensions is independent of ours, the gravitational action can be written
\begin{equation}
S = \frac{1}{16 \pi} M_*^{2+n}\int \, \mathrm{d}^{4+n}x \sqrt{-g} \left( \mathcal{R}_4 + \mathcal{R}_n \right) \,,
\end{equation}
where $M_*$ is the fundamental Planck scale, $\mathcal{R}$ is the Ricci scalar, and $n$ is the number of extra dimensions. Neglecting the second term, we can perform the integral over the extra spatial dimensions to generate an effective 4D action. If $R$ denotes the size of the extra dimensions, we hence find the relation
\begin{equation}
\label{mstar}
M_P^2 \sim M_*^{2+n} R^n \,.
\end{equation}
It is hence possible for the fundamental Planck scale $M_*$ to be far lower than the 4D Planck scale, if the extra dimensions are sufficiently large. For $M_* = 10$ TeV, the above relation implies that $R \sim 10^{11}$ m for $n=1$ --- certainly such a large extra dimension is ruled out by gravitational experiments on the solar system scale. For $n=2$ one finds $R \simeq 25 \, \mu \mathrm{m}$, which is consistent with current short distance tests of Newton's law \cite{torsion}. In this paper we will consider $2 \leq n \leq 6$. In Section \ref{resultssec} we will describe qualitatively the nature of the constraints for $n$ larger than this.
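As a quick numerical check of these figures (ours; order-of-magnitude only, with the $O(1)$ factors in the relation above dropped), one can evaluate $R$ for a given $M_*$ and $n$:
\begin{verbatim}
import math

HBARC_GEV_M = 1.97327e-16      # hbar*c in GeV*m
M_PLANCK_GEV = 1.22e19         # 4D Planck mass in GeV

def extra_dim_size_m(n, M_star_GeV=1e4):
    """R from M_P^2 ~ M_*^(2+n) R^n, in metres (O(1) factors dropped)."""
    R_GeV_inv = (M_PLANCK_GEV**2 / M_star_GeV**(2 + n)) ** (1.0 / n)
    return R_GeV_inv * HBARC_GEV_M

for n in (1, 2):
    # ~3e10 m (of order 1e10-1e11) for n=1; ~2.4e-5 m, i.e. ~25 um, for n=2
    print(n, extra_dim_size_m(n))
\end{verbatim}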
Naturally, experiments exist that test the Standard Model up to around $1 \, \mathrm{TeV}$ --- far smaller distance scales than a micron. If there are additional spatial dimensions, the particles of the Standard Model must not `feel' these extra dimensions --- they must be localised to a brane. The mechanism by which this could occur is an important question, but we note here that it is a natural scenario in the context of some string theories, in which branes occur with gauge theories automatically localised to them.
\subsection{The higher-dimensional Schwarzschild solution }
What is the nature of black holes in this model? Black holes which are much larger in size than these extra dimensions should be insensitive to them, and behave as ordinary 4D black holes. On the other hand, black holes much smaller than these extra dimensions should be insensitive to the finiteness of the dimensions, and behave as $(4+n)$-dimensional objects. There will be some intermediate regime in which the black hole is not adequately described by either picture. However, the crossover between the two descriptions is continuous, for one can show that there is some critical mass $M_c$ at which the size of the extra dimensions, the 4D Schwarzschild radius of the black hole, and the $(4+n)$-dimensional radius of the black hole all approximately coincide. This critical mass is tabulated in Table \ref{tabcrit}. For a review of this material, see \cite{kantireview}.
\begin{table}
\centering
\def1.1{1.1}
\begin{tabularx}{.4\textwidth}{YY}
\hline
$n$ & $M_c$ / g \\
\hline
2 & $1.62 \times 10^{25}$ \\
3 & $1.52 \times 10^{20}$ \\
4 & $4.65 \times 10^{17}$ \\
5 & $1.44 \times 10^{16}$ \\
6 & $1.45 \times 10^{15}$ \\
\hline
\end{tabularx}
\caption{The mass $M_c$ in grams of the black hole whose Schwarzschild radius is equal to the size of the extra dimensions, for $M_* = 10$ TeV.}
\label{tabcrit}
\end{table}
In $(4+n)$ dimensions the Schwarzschild metric is given by \cite{myersperry}
\begin{equation}
\label{metric}
\mathrm{d}s^2 = -\left(1 - \left(\frac{r_h}{r}\right)^{n+1}\right)\mathrm{d}t^2 + \left(1 - \left(\frac{r_h}{r}\right)^{n+1}\right)^{-1}\mathrm{d}r^2 + r^2 \,\mathrm{d}\Omega_{n+2}^2 \,,
\end{equation}
where the horizon radius is
\begin{equation}
\label{horizonr}
r_h = \frac{1}{M_*}\left(\frac{M}{M_*}\right)^\frac{1}{n+1} \left(\frac{8\, \,\Gamma((n+3)/2)}{(n+2) \pi^{(n+1)/2}}\right)^\frac{1}{n+1} \,.
\end{equation}
The Hawking temperature of such a black hole is given by
\begin{equation}
\label{temperature}
T = M_* \left(\frac{M_*}{M}\right)^\frac{1}{n+1} \left(\frac{n+1}{4 \sqrt{\pi}}\right)\left(\frac{n+2}{8 \Gamma((n+3)/2)}\right)^\frac{1}{n+1} \,.
\end{equation}
The relation between the radius and temperature of the black hole is particularly simple:
\begin{equation}
\label{inversely}
T = \frac{n+1}{4 \pi r_h} \,.
\end{equation}
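For reference, here is a short numerical sketch (ours) of the horizon radius and temperature, which can be cross-checked against the critical masses quoted in Table~\ref{tabcrit}:
\begin{verbatim}
import math

HBARC_GEV_M = 1.97327e-16    # hbar*c in GeV*m
GEV_PER_GRAM = 5.60959e23    # grams to GeV conversion

def horizon_radius_m(M_grams, n, M_star_GeV=1e4):
    """r_h of a (4+n)-dimensional Schwarzschild black hole (equation above)."""
    M = M_grams * GEV_PER_GRAM
    pref = 8 * math.gamma((n + 3) / 2) / ((n + 2) * math.pi**((n + 1) / 2))
    return (HBARC_GEV_M / M_star_GeV) * (pref * M / M_star_GeV)**(1 / (n + 1))

def temperature_GeV(M_grams, n, M_star_GeV=1e4):
    """Hawking temperature from T = (n+1)/(4 pi r_h)."""
    r_GeV_inv = horizon_radius_m(M_grams, n, M_star_GeV) / HBARC_GEV_M
    return (n + 1) / (4 * math.pi * r_GeV_inv)
\end{verbatim}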
When we restrict the metric Eq. \eqref{metric} to the brane, we find a 4D black hole solution whose geometry differs from the ordinary Schwarzschild solution. This will affect the way the black hole gravitates. Consequently, some of the aforementioned gravitational constraints will be modified in this scenario, such as those from lensing experiments, since the bending of light around a black hole is sensitive to the precise geometry. Similarly, capture of these black holes by neutron stars and white dwarfs is sensitive to the potential energy between the two objects, which depends fundamentally on the number of extra dimensions.
The above notwithstanding, it transpires that for $M_* = 10$ TeV, the critical mass describing the transition from the 4D to the $(4+n)$-dimensional picture occurs close to the mass at which existing constraints become dominantly evaporative\footnote{For $n>2$ at least. For $n=2$, $M_c$ is appreciably larger than this. }. These critical masses are illustrated as vertical lines in Fig. \ref{consfig}. Black holes larger than this critical mass behave as ordinary 4D black holes, and the existing gravitational constraints apply.
On the other hand, Eq. \eqref{temperature} shows that the temperature of a higher-dimensional black hole can differ significantly from that of a 4D Schwarzschild black hole of the same mass. This implies that the rate at which it evaporates, and the energy of the particles it produces during this evaporation, can differ considerably also. We thus expect the constraints on low-mass PBHs to be substantially modified in this scenario.
\subsection{Bulk and brane evaporation}
\label{bulkbrane}
A black hole smaller in size than the extra dimensions radiates gravitons into the bulk and Standard Model particles onto the brane. The radiation of Standard Model particles obeys the usual Stefan-Boltzmann law, with the critical difference that the temperature of the radiation is no longer inversely proportional to the mass of the black hole. As we discuss in the appendix, for both modes of evaporation, the rate of loss of energy is approximately equal, being given by
\begin{equation}
\label{stefanb}
-\diff{M}{t} \sim T^2 \,.
\end{equation}
The differing dependence of the temperature on mass however, gives rise to a black hole lifetime $\tau$ that depends critically on $n$:
\begin{equation}
\label{lifetime}
M_*\tau \sim \left(\frac{M}{M_*}\right)^\frac{n+3}{n+1} \,.
\end{equation}
By considering Eqs. \eqref{horizonr}, \eqref{temperature} and \eqref{lifetime} in conjunction with Eq. \eqref{mstar}, one can show that black holes much smaller than the size of the extra dimensions are larger, cooler, and live longer than 4D black holes of the same mass \cite{millimeter}, at least if one considers emission to involve only a single degree of freedom\footnote{We consider the detail of the number of emitted degrees of freedom in Section \ref{resultssec}.}. However, for black holes of order the size of the extra dimensions, the numerical factors in Eq. \eqref{temperature} are not insignificant, and lead to higher-dimensional black holes disappearing substantially faster.
Throughout the rest of this work we will take $M_* = 10$ TeV. As mentioned at the start of this section, this is approximately the bound for $n=2$ placed on the size of the extra dimensions by measurements of the behaviour of Newton's law on short distances. However, we briefly mention here that there are several other constraints on the size of $M_*$. Firstly, a weak-scale fundamental Planck mass is subject to collider constraints. These are fairly mild, and are consistent with $M_* = 10$ TeV. There are also several astrophysical and cosmological constraints due to the effects of the light Kaluza-Klein modes of the graviton. These bounds are more strict, and typically rule out $M_* = 10$ TeV for $n=2$. However, we note that they are subject to large systematic errors, and depend on the details of the decay of the KK modes. See \S 106 of \cite{pdg} for the latest constraints on $M_*$. In Section \ref{discussion} we describe the qualitative effects of choosing a larger value for $M_*$.
\section{Modified Constraints from the Extragalactic Photon Background}
\label{resultssec}
In this section we present the constraints that exist on the density of higher-dimensional PBHs that arise from the contribution they would make to the extragalactic photon background. It transpires that the strongest constraints come from the X-ray and gamma ray background, as in the 4D case. In the $n=1$ case the black holes radiate primarily in the UV and soft X-ray regions of the electromagnetic spectrum. There are only very poor measurements of the extragalactic UV background, but this is of no consequence since $n=1$ is ruled out by gravitational experiments.
We assume for simplicity a monochromatic mass distribution --- that is, that all of the primordial black holes are formed at the same time with the same mass. This is not particularly realistic, and it is known in the 4D case that constraints tend to become more stringent with extended mass distributions, if qualitatively similar \cite{extendedmassfunctions}. A monochromatic distribution is nevertheless sufficient to indicate the large modifications to the constraints that occur in the extra-dimensional scenario. We also emphasise that the quantity $\rho_\mathrm{PBH}$ we plot in Fig. \ref{results} is the fraction of the dark matter density the black holes constitute \textit{at formation}, and likewise the mass $M$ is their mass at formation. In order to constitute a fraction of the dark matter today, the PBHs must have an initial mass larger than that mass $M_0$ which would be evaporating now, tabulated in Table \ref{tabevap}.
\subsection{Methodology}
To compute the spectrum of radiation emitted by a black hole, the public code \texttt{BlackHawk} \cite{blackhawk} was used. In its original form, this code computes the emission rate of all Standard Model particles from a given black hole, accounting for greybody factors and using \texttt{PYTHIA} to compute the subsequent decay of all unstable particles. The code makes the simplifying assumption that a black hole begins radiating a given particle only when its temperature exceeds the particle's mass, and thereafter begins radiating it as though it were massless.
The code needs some modification to produce the emission rate in the large extra dimensions scenario. Naturally, the mass-radius and mass-temperature relations are modified according to Eqs. \eqref{horizonr} and \eqref{temperature}. Furthermore, the greybody factors differ in different dimensions. These greybody factors were computed for all spins and for all dimensions $n$, using the numerical recipes outlined in \cite{kanti,bulkemission}. The accuracy of the numerical results was checked against the results in \cite{kanti,gravitonemission2}. We found excellent agreement with the former, although not with the latter. We note that the numerical results in \cite{gravitonemission2} do not agree with the expected low-energy analytic expressions (in particular, all spin-2 greybody factors should go to zero at low energy), so we put the discrepancy between our results down to an error in theirs. To produce the high-energy and low-energy asymptotics of the greybody factors, the analytic results from \cite{JMRbrane1, JMRbrane2} were used.
Given the spectrum of radiation from a black hole at each moment of its lifetime, the density of background photons today (in units of energy per unit volume per unit energy) is given by the formula
\begin{equation}
\label{backgrounddens}
n = \int_{t_\mathrm{dec}}^{t_\mathrm{max}} (1+z) \frac{\mathrm{d}^2 N}{\mathrm{d}t \, \mathrm{d}E}((1+z)E) \, \mathrm{d}t \,,
\end{equation}
where the derivative in the integrand is the energy being emitted by the black holes per unit time per unit volume per unit energy. The integral is taken between the time of photon decoupling $t_\mathrm{dec}$ and $t_\mathrm{max} = \min(t_0, \tau)$, where $t_0$ is the time today and $\tau$ the lifetime of the black hole. Those photons with energies between $E$ and $E+\mathrm{d}E$, if produced at an earlier time $t$, must have been emitted with blueshifted energy $(1+z)E$ and belonged to a wider energy window $(1+z)\mathrm{d}E$. This accounts for the two redshift factors in the integrand. As appropriate for the matter-dominated era, we take $1+z(t) = (t_0/t)^{2/3}$.
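Schematically (in our own notation; the actual spectra $\mathrm{d}^2N/\mathrm{d}t\,\mathrm{d}E$ are produced by the modified \texttt{BlackHawk}), the integral can be evaluated as:
\begin{verbatim}
import numpy as np

def background_density(dN_dtdE, E, t_dec, t_max, t0, n_steps=2000):
    """n(E) = int_{t_dec}^{t_max} (1+z) d2N/dtdE((1+z) E) dt,
    with 1 + z(t) = (t0/t)^(2/3) for matter domination.

    dN_dtdE : callable (t, E_emit) -> emission rate per volume per energy
    """
    t = np.linspace(t_dec, t_max, n_steps)
    zp1 = (t0 / t) ** (2.0 / 3.0)            # 1 + z(t)
    return np.trapz(zp1 * dN_dtdE(t, zp1 * E), t)
\end{verbatim}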
An isotropic photon background gives rise to an observed flux of
\begin{equation}
I = \frac{c}{4 \pi} n \,.
\end{equation}
The data for the gamma ray region of the electromagnetic spectrum are collected by space-based telescopes, in particular EGRET and COMPTEL aboard the Compton Gamma Ray Observatory and LAT aboard the Fermi Gamma Ray Space Telescope \cite{COMPTEL, EGRET, FERMILAT}. More data exist for the intensity of the X-ray background, for which we use a fit from \cite{moretti}. In Fig. \ref{gammadata} we plot these data. In determining the PBH constraints we make the conservative assumption that the entirety of the photon background in the X-ray and gamma ray region of the electromagnetic spectrum is due to black hole evaporation products.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{gammadata2.pdf}
\caption{The observed background photon flux in units of energy per square centimetre per second per steradian per unit energy, as a function of energy. The energy range plotted corresponds to the X-ray and gamma ray region of the electromagnetic spectrum. }
\label{gammadata}
\end{figure}
\subsection{Results}
\label{actualresults}
Our results are plotted in Fig. \ref{results}. We see that, independently of $n$, the shape of the constraints is broadly similar. The explanation for this shape is as outlined in Section \ref{existingevap}: those black holes which are evaporating today contribute most energy to the photon background, with the constraints disappearing for black holes small enough to have evaporated before photon decoupling. The primary qualitative difference is due to the fact that the mass $M_0$ of those black holes evaporating today is dimension-dependent. These masses are tabulated in Table \ref{tabevap}. For $n \geq 4$ we note that the constraints cut off sharply above a certain mass. This is the mass $M_c$ above which the 4D description of the black holes is valid, tabulated in Table \ref{tabcrit}. The total radiation rate is substantially reduced above this mass (as explained in Section \ref{bulkbrane}), which is why the constraints significantly soften. Indeed, comparison with Fig. \ref{consfig} shows that above this mass, the constraints coincide with those in the 4D case. Since we have not treated the crossover behaviour of the black hole solution precisely, the results are not reliable at this mass.
From Eqs. \eqref{lifetime} and \eqref{temperature} one can show that for a black hole which survives until today, the typical temperature of the Hawking radiation is higher for larger $n$. This explains why the constraints are slightly stronger for larger $n$ (and indeed why they are noisier), the dominant constraints coming from the higher-energy region of the electromagnetic spectrum (see Fig. \ref{gammadata}). The constraints also cover a wider mass range than the 4D constraints, because the black-hole lifetime, Eq. \eqref{lifetime}, depends less strongly on mass than in the 4D case.
\begin{table}
\centering
\def1.1{1.1}
\begin{tabularx}{.4\textwidth}{YY}
\hline
$n$ & $M_0$ / g \\
\hline
2 & $2.44 \times 10^7$ \\
3 & $5.33 \times 10^{10}$ \\
4 & $1.83 \times 10^{13}$ \\
5 & $2.53 \times 10^{15}$ \\
6 & $1.45 \times 10^{15}$ \\
\hline
\end{tabularx}
\caption{The mass $M_0$ in grams of a black hole evaporating today, assuming formation at the start of the universe.}
\label{tabevap}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{234y.pdf}
\includegraphics[width=\linewidth]{456.pdf}
\caption{Constraints on the PBH density (at formation) as a fraction of the dark matter density for different choices of $n$, assuming a monochromatic mass distribution. Shaded regions are excluded. In the upper panel are plotted (left to right) the constraints for $n=2, 3, 4$ and in the lower panel the constraints for $n=4,5,6$. }
\label{results}
\end{figure}
One might wonder whether radiation into gravitons would dominate photon production for large $n$, substantially weakening the constraints, since the number of graviton degrees of freedom grows quadratically with $n$ ($g = (n+1)(n+4)/2$). Our results do not bear this hypothesis out. To understand this, we note that the graviton greybody factor is suppressed at low energies relative to those for the photon and neutrinos, and more so for larger $n$. See Fig. \ref{greybody2} in the appendix.
For $n=7$, the mass $M_c$ is approximately equal to the mass of a 4D black hole evaporating today ($M_0 \simeq 7.09 \times 10^{14}$ g). Black holes smaller than this, for which the extra-dimensional picture is valid, have disappeared by today and therefore could not constitute the dark matter. On the other hand, for black holes larger than this the 4D description, and hence the 4D constraints, apply. Since $M_c$ decreases for larger $n$, we find that for all $n>6$ the constraints on the fraction of the dark matter the black holes could constitute are unchanged from those in the 4D case.
We note that only for $n=2$ and $n=3$ are there wide mass windows in which PBHs could constitute the entirety of the dark matter --- $M \gtrsim 10^{12}$ g for $n=2$ and $M \gtrsim 10^{15}$ g for $n=3$. For larger $n$ our results show that the photon background places similar constraints on the PBH mass as in the 4D case, requiring $M$ to be larger than about $10^{17}$ g, beyond which other gravitational constraints need to be taken into consideration.
\section{Discussion}
\label{discussion}
The most important question to address next is the modification of the constraints from BBN in the extra-dimensional scenario. Just as for the photon background, we expect that the dominant constraints will arise from those black holes which evaporate completely during BBN. Since such black holes will be lighter than their 4D counterparts, we expect the constraints from BBN to be shifted to lower mass, and more so for lower $n$. We leave a detailed study of this to future work.
It is also necessary to understand how other evaporative and gravitational constraints change with the introduction of extra dimensions. As mentioned in Section \ref{actualresults}, the typical temperature of the black holes increases as $n$ is increased. For low $n$, the only evaporation products are neutrinos, gravitons, and photons, and we hence expect the constraints from positron annihilation or antiprotons to be absent. For large $n$, black holes larger than $10^{17}$~g behave as 4D objects, and so we expect existing gravitational constraints to apply. Lensing constraints in the low $n$ case, and other evaporative constraints in the large $n$ case, would need to be studied in greater detail.
The purpose of this work has been to demonstrate the significant qualitative changes to the constraints on the PBH density that occur when extra dimensions are present. In doing so we have made some simplifying assumptions --- that the black hole mass distribution is monochromatic, and that the black holes are not rotating. In any number of dimensions, the Hawking temperature of a black hole depends quite sensitively on its angular momentum; the constraints on a population of spinning black holes could thus be appreciably different. To understand how these two factors affect the constraints in the 4D case, see \cite{rotatingconstraints}.
A final natural question to ask is how the constraints would differ for a different choice of $M_*$. In this work we have chosen the lowest value of $M_*$ consistent with experiment. From Eq. \eqref{lifetime} we see that a black hole evaporating today would have larger mass for a larger choice of $M_*$. We thus expect the constraints to be shifted to larger mass as $M_*$ is increased, though with the same qualitative shape. For $M_* = 10$ TeV we find that the constraints for $n > 6$ are just as in the 4D case, and we expect that the larger we take $M_*$, the fewer choices of $n$ will give rise to novel constraints.
\section*{Acknowledgements}
I am grateful to John March-Russell for many useful discussions, and for initially suggesting the possibility that constraints on the PBH density could differ depending on the number of dimensions the black holes exist in.
|
2,869,038,154,419 | arxiv | \section{Introduction} \label{intro}
Suspension is an important mode for the transport of sediments by fluid flows. It occurs when the falling velocity of the particles is smaller than the turbulent velocity fluctuations, so that particles can remain suspended for a long time, trapped by turbulent eddies, before they eventually fall back on the bed due to gravity. In nature, one observes suspension in large rivers, especially in their downstream part, where large amounts of fine particles have been collected from the catchment basin. Rivers that ordinarily present bed-load transport (the moving particles remain close to the bed) can also experience suspension (the particles are present over the whole flow depth) when the water discharge is unusually large, e.g. during flood events.
Vertical concentration profiles and overall sediment fluxes are among the major issues -- see the pioneering works of Rouse (1936) or Vanoni (1946). From the point of view of hydraulic engineering, the problem is satisfactorily solved for rivers in a steady state, although some questions are still open, such as particle trapping by turbulent eddies, or the structure of the flow near the bottom where the concentration is large (Nielsen 1992; Nezu 2005). However, the response of the sediment flux to temporal or spatial changes of the flow is largely unknown. Such changes may be induced, for instance, by long gravity waves, or a sudden increase of the flow rate, or variations of the river slope or geometry. Two typical problems of relaxation downstream of a change in the flow conditions are depicted in Figure~\ref{fig:sketches}, which will be studied in \S\ref{sec:applic}: that of a small change of the slope of the bottom (Fig.~\ref{fig:sketches}a), and that of a change of the bottom conditions, from non-erodible to erodible (Fig.~\ref{fig:sketches}b). The suspended sediment response is expected to have a strong effect on the dynamics of the erodible bottom, especially on the formation of dunes or bars, or, at larger scale, on the development of meanders (Seminara 2006). Specific relaxation problems have been investigated by numerical integration of the Reynolds-averaged Navier-Stokes equations, using mixing length or $k-\epsilon$ turbulence models (Hjelmfelt \& Lenau 1970; Jobson \& Sayre 1970; Apmann and Rumer 1970; van Rijn 1986a; Celik \& Rodi 1988; Ouillon \& Le Guennec 1996).
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(100, 60)(0, 0)
\put(0, 0){\includegraphics[width=10cm]{Illustrations.png}}
\put(-7, 55){(a)}
\put(-7, 22){(b)}
\end{picture}
\end{center}
\caption{Sketches of the situations studied in \S\ref{sec:applic}: (a), flow over a slope change; (b), flow over the passage from non erodible to erodible bed. $H$ and $S$ are the river depth and bottom slope for $x<0$, and $\Phi_0(z)$ is the corresponding saturated concentration profile; $\delta H$ and $\delta S$ are the variations in $H$ and $S$ downstream of the change at $x=0$, and $\Phi_{sat}(z)$ is the new saturated concentration profile reached at the distance $x \approx L_{sat}$. }
\label{fig:sketches}
\end{figure}
In the case where bed-load is dominant, or in the aeolian situation (saltation), it has been shown that the evolution of the sediment flux $q$ can be accounted for by a relaxation equation of the form
\begin{equation}
T_{\rm sat} \partial_t q + L_{\rm sat} \partial_x q = q_{\rm sat} - q,
\label{eq:relax}
\end{equation}
where $q_{\rm sat}$ is the saturated flux, $T_{\rm sat}$ and $L_{\rm sat}$ are the relaxation time and length scales -- \textit{i.e.} the time and length over which the flux relaxes toward saturation. This saturation corresponds to the homogeneous and steady state for which sediment transport is constant both in space and time for given flow conditions. Such a first order equation was first introduced in the aeolian context, as the simplest equation describing relaxation effects (Sauermann, Kroy \& Herrmann 2001; Andreotti, Claudin \& Douady 2002; Kroy, Sauermann \& Herrmann 2002; Andreotti 2004). It was then shown that, for water flows, this equation can be derived from analysis of the erosion and deposition rates, the relaxation scales then being related to particle deposition (and not particle inertia) (Charru 2006; Lajeunesse, Malverti \& Charru 2010). The relaxation length $L_{\rm sat}$ proved to be crucial, in particular for stability analyses of the erodible bottom and for the selection of the ripple wavelength (Fourri\`ere, Claudin \& Andreotti 2010). The importance of relaxation phenomena for suspensions in turbulent flow is well known in the context of hydraulic engineering (Yalin \& Finlaysen 1973; van Rijn 1986b; Celik \& Rodi 1988; Ouillon \& Le Guennec 1996). It has also been recognized in the context of geomorphology; in particular, Lague \& Davy (2009) proposed a deposition length of sediment as the relevant transport length. However, a derivation of a relaxation equation of the form (\ref{eq:relax}), from firm hydrodynamic grounds, is still lacking.
In this paper, we discuss the conditions under which an equation of the form (\ref{eq:relax}) can be derived for turbulent flows, when suspension is the dominant mode of transport, with particular emphasis on the identification of the saturation length and time scales. The article is organised as follows. In the next Section, we present the flow models and saturation conditions. In \S\ref{sec:NonEqFlows} we perform a mode analysis of the advection-diffusion equation for the particle concentration applied to unsaturated cases, and then identify the saturation length and time scales of the flux. The relevance of this approach is illustrated in \S\ref{sec:applic} by treating a few examples: (i) the effect on the sediment flux of a change in the river slope; (ii) the change from a fixed to an erodible bed; and (iii) the deposition of sediments from a source near the free surface. For the last two situations, the predictions of the model are tested against experimental data from the literature.
\section{Flow models} \label{FlowModel}
\subsection{Logarithmic flow model} \label{sec:log_model}
We consider the free-surface, turbulent flow of a fluid layer of thickness $H$ over an erodible bed. For the sake of simplicity, we restrict the discussion to flows invariant in the spanwise direction, \textit{i.e.} two-dimensional, with streamwise coordinate $x$ and upwards transverse coordinate $z$. Measurements have shown that the profile of the streamwise velocity is close to the logarithmic law
\begin{equation}
u_x(z) = \frac{u_*}{\kappa} \, \ln \left(\frac{z + z_0}{z_0}\right)\;,
\label{eq:log_ux}
\end{equation}
where $u_*$ is the friction velocity, $z_0$ is the hydrodynamical bed roughness, and $\kappa = 0.4$ is the von K\'arm\'an coefficient. For steady flow where the shear stress is balanced by the streamwise component of gravity, the shear stress increases linearly from zero at the free surface to $\tau_b = \rho u_*^2$ at the bottom, so that the logarithmic velocity profile (\ref{eq:log_ux}) corresponds to a parabolic eddy viscosity $\nu_t$ (Nezu \& Rodi 1986), given by
\begin{equation}
\frac{\nu_t}{u_* H} = \kappa\;\left(\frac{z + z_0}{H}\right)\; \left( 1-\frac{z}{H} \right).
\label{eq:nu_t}
\end{equation}
From (\ref{eq:log_ux}), the depth-averaged velocity $U$ is given by
\begin{equation}
\lambda \equiv \frac{U}{u_*} =
\frac{1}{\kappa} \left[ \ln\left(\frac{H}{z_0}\right) - 1 \right],
\label{eq:lambda}
\end{equation}
with typical value $\lambda = 10$, corresponding to $z_0/H \approx 0.01$ (Raudkivi 1998).
We assume that the sediment concentration $\phi$ is governed by the advection-diffusion equation
\begin{equation}
\frac{\partial \phi}{\partial t} + u_x \frac{\partial \phi}{\partial x} =
\frac{\partial}{\partial x} \left(D \frac{\partial \phi}{\partial x} \right) + \frac{\partial}{\partial z} \left(D \frac{\partial \phi}{\partial z} +
\phi V_{\rm fall} \right),
\label{eq:concentration}
\end{equation}
where $D$ is the particle eddy diffusivity and $V_{\rm fall}$ the settling velocity. Measurements have shown that $D(z)$ is reasonably parabolic and proportional to the eddy viscosity (\ref{eq:nu_t}), with turbulent Schmidt number
\begin{equation}
{\rm Sc} = \frac{\nu_t}{D}
\label{eq:Sc}
\end{equation}
in the range $0.5$--$1$ (Coleman 1970; Celik \& Rodi 1988; Nielsen 1992). The settling velocity $V_{\rm fall}$ is taken uniform, and, when needed for comparison with experiments, equal to that of a single particle in quiescent fluid. Note that the above modelling ignores inertial effects on particle motion, in particular their ejection from the core of vortices and their clustering (Bec et al. 2007; Hunt et al. 2007). We also limit the discussion to dilute suspensions, \textit{i.e.} small volumetric particle concentration $\phi$, for which there is no significant feedback of the particles on transport.
Solving (\ref{eq:concentration}) requires two boundary conditions, one at the free surface and one on the sedimentary bed. At the free surface, the net vertical flux vanishes, giving
\begin{equation}
D \frac{\partial \phi}{\partial z} + \phi\,V_{\rm fall} = 0
\qquad{\rm at} \quad z=H.
\label{eq:bc@z=H}
\end{equation}
At the bottom, just above the bedload layer where particles mainly roll and slide on each other, the diffusive flux is equal to the erosion flux $\varphi_\uparrow$, \textit{i.e.} the volume of particles entrained in suspension per unit time and bed area (Parker 1978; van Rijn 1986a):
\begin{equation}
- D \frac{\partial \phi}{\partial z} = \varphi_\uparrow
\qquad{\rm at} \quad z=0.
\label{eq:bc@z=0}
\end{equation}
The erosion rate, or `pickup function', is generically an increasing function of the basal shear stress above a threshold. Its functional form is determined phenomenologically from experiments and depends on the nature of the bed -- e.g. whether it is consolidated/cohesive or not, composed of grains or containing clay, etc. (Shields 1936; Einstein 1950; Engelund 1970; van Rijn 1984b; Hanson \& Simon 2001; Briaud \& al. 2001; Bonelli et al. 2007).
Two remarks have to be made here. First, the bottom condition (\ref{eq:bc@z=0}) applies for steady and homogeneous as well as unsteady or heterogeneous flows. In the latter case, the erosion flux may be different from the deposition flux, so that the net flux is nonzero, which may lead to variations of the bed topography (though not necessarily, as in the experiments discussed later). Possible variations in the bed topography will be ignored here. Second, the boundary condition (\ref{eq:bc@z=0}) corresponds to a bed allowing unlimited sediment supply. For more general situations (e.g. a fixed bed), slightly different boundary conditions have been proposed, see Celik \& Rodi (1988); these however require an empirical constant or reference concentration near the bottom to be prescribed, which varies along the channel. Finally, assuming that particles have the same mean velocity as the fluid, $u_x$, the flux of suspended particles, per unit length in the spanwise direction, is given by
\begin{equation}
q = \int_0^H \phi u_x {\rm d}z.
\label{eq:defq}
\end{equation}
The concentration equation (\ref{eq:concentration}) with the boundary condition (\ref{eq:bc@z=H}) admits a steady and homogeneous solution corresponding to the balance of the settling and diffusive fluxes,
\begin{equation}
\Phi_{\rm sat}(z) = \Phi_{\rm b} \,
\left( \frac{1 - z/H}{1 + z/z_0} \right)^{\dfrac{\beta}{1+z_0/H}} ,
\label{eq:Phi_Rouse}
\end{equation}
where $\beta$, known as the Rouse number, is defined as $\beta = ({\rm Sc} \, V_{\rm fall})/(\kappa \, u_*)$ and the bottom concentration $\Phi_{\rm b} = \Phi_{\rm sat}(0)$ is determined from the condition (\ref{eq:bc@z=0}) as
\begin{equation}
\Phi_{\rm b} = \frac{\varphi_\uparrow(\tau_b)}{V_{\rm fall}}.
\label{eq:phi_0}
\end{equation}
Note that (\ref{eq:Phi_Rouse}) differs slightly from the classical expression of the Rouse profile (Nielsen 1992) because the location where the velocity (\ref{eq:log_ux}) vanishes and where the bottom boundary condition (\ref{eq:bc@z=0}) applies has been chosen to be $z=0$ instead of $z=z_0$. Suspension typically occurs when $V_{\rm fall} < 0.8\,u_*$ (Freds{\o}e \& Deigaard 1992), which corresponds to $\beta < 4/3$ with ${\rm Sc} = 2/3$. Figure \ref{fig:two_models}a displays the velocity profile (\ref{eq:log_ux}), normalized by $u_x(H)$, and the concentration profile (\ref{eq:Phi_Rouse}), normalized by $\Phi_{\rm b}$, for three typical values of $\beta$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 52)(0, 46)
\put(2, 40){\includegraphics[width=12cm]{Log_vs_Plug_b.pdf}}
\put(3, 91){(a)}
\put(63, 91){(b)}
\end{picture}
\end{center}
\caption{(a) Logarithmic flow model: normalized velocity profile (dashed line) and normalized concentration profiles (\ref{eq:Phi_Rouse}) for three values of $\beta$ (solid lines); (b) plug flow model: normalized concentration profiles (\ref{eq:Phi}) for the corresponding values of $\alpha = 6 \beta$.}
\label{fig:two_models}
\end{figure}
\subsection{A simplified plug flow model} \label{sec:plug_model}
In order to get analytical results, a simplified plug flow model will be used in the following; it provides sufficiently accurate results for the purpose of assessing the relaxation equation (\ref{eq:relax}). This model corresponds to uniform flow velocity $u_x(z) = U$ and friction velocity $u_* = U / \lambda$, where $\lambda$ is the same constant as in the previous section. Such a plug flow model is of course a rough description: a real velocity profile does not jump from zero at the bed to its average value over a vanishing vertical distance, which would imply an infinite shear. This problem is not present in the logarithmic model, which is more realistic from this point of view. However, as shown below, these two models do not differ much as far as the relaxation modes are concerned, which means that what occurs very close to the bed is not very important for the present purpose. Accordingly, a uniform particle diffusivity $D_0$ will be taken in the concentration equation (\ref{eq:concentration}) and boundary conditions (\ref{eq:bc@z=H}-\ref{eq:bc@z=0}), equal to the average of the parabolic distribution given by (\ref{eq:nu_t}) and (\ref{eq:Sc}). Up to a small correction of order $z_0/H$, the diffusivity $D_0$ is given by
\begin{equation}
\frac{D_0}{u_* H} = \frac{\kappa}{6 {\rm Sc}} \equiv \mathcal{K}.
\end{equation}
With this uniform diffusivity $D_0$, the advection-diffusion equation (\ref{eq:concentration}) admits the steady and homogeneous solution
\begin{equation}
\Phi_{\rm sat} = \Phi_{\rm b} \, \exp \left( -\alpha \frac{z}{H}\right )
\quad {\rm with} \quad
\alpha \equiv \frac{V_{\rm fall} H}{D_0} = 6 \beta,
\label{eq:Phi}
\end{equation}
which also satisfies the boundary condition (\ref{eq:bc@z=H}) at the free surface. The boundary condition at the bed (\ref{eq:bc@z=0}) determines the bed concentration (\ref{eq:phi_0}). Figure \ref{fig:two_models}b displays the concentration profile (\ref{eq:Phi}), normalized by $\Phi_{\rm b}$, for three typical values of $\alpha = 6\beta$. It can be seen that for the same value of the Rouse number, the plug flow model predicts sediment concentrations slightly larger than those of the logarithmic flow model. Small values of $\alpha$ correspond to strong suspensions, i.e. situations for which the sediment is distributed almost uniformly over the whole depth of the flow. This is achieved when the settling velocity is small (very fine particles) or when the diffusivity is large (large flow velocity). Finally, the saturated particle flux per unit width $q_{\rm sat}$, normalized by the water flux $U H$, is given by
\begin{equation}
\frac{q_{\rm sat}}{U H} = \frac{1}{U H} \, \int_0^H \Phi_{\rm sat} U {\rm d}z =
\frac{1 - {\rm e}^{-\alpha}}{\alpha} \, \Phi_{\rm b}.
\label{qsatPlugFlow}
\end{equation}
For small $\alpha$, this dimensionless flux tends to $\Phi_{\rm b}$, as expected.
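The two saturated profiles of Figure \ref{fig:two_models} and the normalized flux (\ref{qsatPlugFlow}) are elementary to evaluate; a minimal sketch (Python, with $z_0/H = 0.01$ as quoted above) reads:
\begin{verbatim}
import numpy as np

def phi_log(u, beta, z0_over_H=0.01):   # Rouse-type profile, u = z/H
    return ((1.0 - u) / (1.0 + u / z0_over_H))**(beta / (1.0 + z0_over_H))

def phi_plug(u, alpha):                 # exponential plug-flow profile
    return np.exp(-alpha * u)

def qsat_norm(alpha):                   # q_sat / (U H Phi_b)
    return -np.expm1(-alpha) / alpha    # = (1 - exp(-alpha)) / alpha

print(qsat_norm(1e-3))                  # 0.9995..., tends to 1 as alpha -> 0
\end{verbatim}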
\section{Non-homogeneous and unsteady flows} \label{sec:NonEqFlows}
In this section, we successively consider a spatial evolution problem (\S \ref{sec:Lsat}) and a temporal evolution problem (\S \ref{sec:Tsat}). These problems are solved using a mode analysis, \textit{i.e.} the departure of the concentration field from the saturated distribution $\Phi_{\rm sat}(z)$ is decomposed as a sum of terms of the form $c(x,t) f(z)$. It is shown that, for the spatial problem, there exists a discrete set of amplitudes $c_n(x) \propto e^{-x/L_n}$, and for the temporal problem, there exists a similar set of amplitudes $c_n(t) \propto e^{-t/T_n}$. Then the sediment flux is shown to be dominated by the mode with the largest length or time, $L_1$ or $T_1$. This result demonstrates that for large scale problems, the relaxation equation (\ref{eq:relax}) retains the most important features of unsaturated sediment transport, with relaxation scales $L_{\rm sat}$ and $T_{\rm sat}$ equal to the largest scales arising from the mode analysis.
The analytical calculations presented below use the plug flow model because of its simplicity, in the spirit of the work of Mei (1979). Calculations for the logarithmic model are not reported in detail, but the corresponding results are plotted for comparison in some of the figures.
\subsection{Spatial evolution and the relaxation lengths} \label{sec:Lsat}
We consider the situation where, for given flow conditions and corresponding saturated concentration profile $\Phi_{\rm sat}(z)$, the actual concentration profile at some point, say $x=0$, is $\Phi_{\rm sat}(z) - \phi(x=0,z)$ where $\phi(x,z)$ is a `concentration defect'. We search for the distance at which the saturated distribution $\Phi_{\rm sat}(z)$ is recovered, corresponding to vanishing $\phi(x,z)$. Looking for normal modes of relaxation of the concentration defect of the form
\begin{equation}
\phi(x,z) = \Phi_{\rm b} f(z) \exp(-x/L),
\end{equation}
we get from the advection-diffusion equation (\ref{eq:concentration})
\begin{equation}
\left(\frac{\lambda u_*}{L}+\frac{D}{L^2}\right) f+\frac{d}{d z}\left(D \frac{d f}{d z}+f\,V_{\rm fall} \right) = 0.
\label{equamodale}
\end{equation}
At the free surface, the zero flux condition (\ref{eq:bc@z=H}) gives
\begin{equation}
D \frac{d f}{d z} + f\,V_{\rm fall} = 0 \qquad{\rm at}\quad z=H.
\label{eq:bc2@z=H}
\end{equation}
On the bed the friction velocity $u_*$ is assumed to be uniform. The erosion flux $\varphi_\uparrow$, which depends only on $u_*$, is uniform too. Hence, the disturbance of $\varphi_\uparrow$ is zero, so that, from (\ref{eq:bc@z=0}),
\begin{equation}
\frac{d f}{d z}=0 \qquad{\rm at} \quad z=0.
\label{eq:bc2@z=0}
\end{equation}
The above differential problem is solved numerically for parabolic $D$ (logarithmic flow model) and analytically for uniform $D=D_0$ (plug flow model). For uniform $D_0$, equation (\ref{equamodale}) has solutions of the form $f(z) \propto \exp (K z/H)$, where $K$ has to satisfy a quadratic equation with roots $K_+$ and $K_-$ given by
\begin{equation}
K_{\pm} = - \frac{\alpha}{2} \pm \mathrm{i} K_{\rm i} \quad{\rm with}\quad
K_{\rm i}=\sqrt{\frac{\lambda}{\mathcal{K}} \frac{H}{L} + \frac{H^2}{L^2} - \frac{\alpha^2}{4}}.
\label{eq:solK}
\end{equation}
Then the boundary conditions (\ref{eq:bc2@z=H}-\ref{eq:bc2@z=0}) select a discrete set of relaxation lengths $L$ satisfying:
\begin{equation}
\tan K_{\rm i} = \left( \frac{K_{\rm i}}{\alpha} - \frac{\alpha}{4K_{\rm i}} \right)^{-1}.
\label{eq:Ki}
\end{equation}
This equation has an infinite number of real positive solutions $K_{{\rm i}n}$, $n \ge 1$. Figure \ref{fig:LnsurLd}a shows the variation with $\alpha$ of the three smallest ones ($n = 1, 2, 3$). For small $\alpha$, these solutions behave as $K_{\rm i1} \sim \sqrt{\alpha}$ and $K_{{\rm i}n} \sim (n-1)\pi$ for $n\ge2$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 52)(0, 46)
\put(2, 40){\includegraphics[width=12cm]{LnsurLd.pdf}}
\put(3, 91){(a)}
\put(63, 91){(b)}
\end{picture}
\end{center}
\caption{(a) Variation with $\alpha$ of the three smallest roots of (\ref{eq:Ki}). (b) Corresponding normalised relaxation lengths $L_n/L_d$ for $\lambda/\mathcal{K} = 50$; plug flow (---); logarithmic flow ($--$).}
\label{fig:LnsurLd}
\end{figure}
The corresponding relaxation lengths $L_n$ are found from (\ref{eq:solK}):
\begin{equation}
\frac{H}{L_n} = \frac{1}{2}
\left( -\frac{\lambda}{\mathcal{K}} \pm \
\sqrt{\left(\frac{\lambda}{\mathcal{K}}\right)^2 + \alpha^2 + 4 K_{{\rm i}n}^2 } \right), \qquad n \ge 1.
\label{eq:elln}
\end{equation}
They are displayed for $n = 1, 2, 3$ in Figure \ref{fig:LnsurLd}b as a function of $\alpha$ (solid lines), normalised with the characteristic deposition length
\begin{equation}
L_d \equiv \frac{U}{V_{\rm fall}} \, H.
\end{equation}
It can be seen that $L_1$ is much larger than the higher-order relaxation lengths -- typically by one order of magnitude. Remarkably, in the limit of small $\alpha$ (large flow velocity or small settling velocity), the largest length $L_1$ tends to $L_d$, whereas higher-order lengths remain on the order of the flow depth $H$:
\begin{equation}
L_1 \sim L_d, \qquad
L_n \sim \frac{\lambda / \mathcal{K}}{ (n-1)^2 \pi^2} H
\quad {\rm for} \quad n \ge 2
\end{equation}
Figure~\ref{fig:LnsurLd}b also displays the normalized relaxation lengths obtained from the logarithmic flow model (dashed lines), from numerical integration of (\ref{equamodale}) with parabolic $D$. It can be seen that these lengths are close to those from the plug flow model, especially for the largest length $L_1$. Note that $L_n/L_d$ is weakly sensitive to the value of $\lambda/\mathcal{K}$: doubling this ratio does not bring any visible change, at least for $\alpha \le 2$.
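These roots and lengths are easily reproduced numerically; a minimal sketch (Python/SciPy, plug flow model) is given below. The bracketing grid, the filter discarding the spurious sign changes of $\tan K_{\rm i}$ at its poles, and the retention of the positive root of (\ref{eq:elln}) are implementation choices of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def eigen_roots(alpha, n_roots=3):
    # g(k) = tan(k) (k/alpha - alpha/(4k)) - 1 vanishes at the roots K_in
    g = lambda k: np.tan(k) * (k / alpha - alpha / (4.0 * k)) - 1.0
    ks = np.linspace(1e-3, (n_roots + 1) * np.pi, 40001)
    roots = []
    for a, b in zip(ks[:-1], ks[1:]):
        if len(roots) == n_roots:
            break
        # keep genuine zero crossings, not the jumps at the poles of tan(k);
        # the tolerance is adequate for the moderate alpha considered here
        if g(a) * g(b) < 0 and min(abs(np.cos(a)), abs(np.cos(b))) > 1e-2:
            roots.append(brentq(g, a, b))
    return np.array(roots)

def lengths_over_Ld(alpha, lam_over_K=50.0):
    Ki = eigen_roots(alpha)
    H_over_Ln = 0.5 * (-lam_over_K
                       + np.sqrt(lam_over_K**2 + alpha**2 + 4.0 * Ki**2))
    return (alpha / lam_over_K) / H_over_Ln   # since H/L_d = alpha K/lambda

print(lengths_over_Ld(0.1))             # ~ [0.98, 0.010, 0.0026]
\end{verbatim}
Retaining the positive root in \texttt{lengths\_over\_Ld} ensures that all $L_n$ are positive, \textit{i.e.} correspond to modes decaying downstream.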
The eigenfunctions $f_n(z)$ are given by
\begin{equation}
f_n(z) = \left[ \cos \left(K_{{\rm i}n}\frac{z}{H}\right) + \frac{\alpha}{2K_{{\rm i}n}} \, \sin \left(K_{{\rm i}n}\frac{z}{H}\right) \right] \exp\left( -\frac{\alpha}{2} \, \frac{z}{H} \right), \qquad n\ge1,
\label{eq:eigfun}
\end{equation}
with the normalization condition $f_n(0) = 1$. These eigenfunctions are displayed in Figure~\ref{fig:eigfun} for $n = 1, 2, 3$ (solid lines), for $\alpha = 0.1$ (Fig.~\ref{fig:eigfun}a) and $\alpha = 1$ (Fig.~\ref{fig:eigfun}b). It can be seen that $f_1(z)$ decreases slightly and monotonically from bottom to top, whereas higher-order eigenfunctions oscillate, more and more strongly with increasing $n$. Figure~\ref{fig:eigfun} also displays the eigenfunctions from the logarithmic flow model (dashed lines). It can be seen that for the mode associated with the largest length $L_1$ ($n=1$), eigenfunctions of both models remain very close to each other, and that differences become larger as $n$ increases.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 52)(0, 46)
\put(2, 40){\includegraphics[width=12cm]{EigFun.pdf}}
\put(3, 91){(a)}
\put(63, 91){(b)}
\end{picture}
\end{center}
\caption{Profiles of the first three eigenfunctions $f_n(z)$, for $\alpha=0.1$ (a) and $\alpha=1$ (b). Plug flow (---); logarithmic flow ($--$).}
\label{fig:eigfun}
\end{figure}
Let us turn to the sediment flux. The contribution of the $n^{\rm th}$-eigenmode to the sediment flux, $Q_n$, normalized with the characteristic sediment flux $U H \Phi_{\rm b}$ and the exponential $x$-dependence, is
\begin{equation}
\frac{1}{\exp(-x/L_n)} \, \frac{1}{\Phi_{\rm b} U H} \, Q_n= \frac{1}{H} \, \int_0^H f_n(z) {\rm d}z.
\label{eq:Qn}
\end{equation}
Table~\ref{tab:QnandAn}a displays the contribution of each of the first three modes to the sediment flux, \textit{i.e.} the right-hand side of the above equation. It can be seen that the contribution of the first mode $n=1$ strongly dominates. The smallness of the contribution of the higher-order modes is due to the oscillations of the eigenfunctions, as shown in Figure~\ref{fig:eigfun}. For small $\alpha$, the normalised flux is close to one for $n=1$ and decreases as $\alpha/((n-1)\pi)^2$ for $n \ge 2$.
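The entries of Table~\ref{tab:QnandAn}a follow from a one-line quadrature, reusing \texttt{eigen\_roots()} from the sketch above:
\begin{verbatim}
from scipy.integrate import quad

def mode_flux(alpha, n):                # r.h.s. of the flux equation above
    Ki = eigen_roots(alpha, n_roots=n)[n - 1]
    f = lambda u: ((np.cos(Ki * u) + alpha / (2.0 * Ki) * np.sin(Ki * u))
                   * np.exp(-alpha * u / 2.0))
    return quad(f, 0.0, 1.0)[0]

print([round(mode_flux(1.0, n), 4) for n in (1, 2, 3)])
# ~ [0.8533, 0.0832, 0.0240]: the alpha = 1 row of the table
\end{verbatim}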
\begin{table}
\begin{center}
(a) \ \
\begin{tabular}{|c|c|c|c|}
\hline
$n$ & 1 & 2 & 3 \\ \hline
$\alpha=0.1$ & 0.9836 & 0.0099 & 0.0025 \\
$\alpha=1$ & 0.8533 & 0.0832 & 0.0240 \\ \hline
\end{tabular}
\hspace*{1cm}
(b) \ \
\begin{tabular}{|l|c|c|c|c|}
\hline
& $A_1$ & $A_2$ & $A_3$ & $A_4$ \\ \hline
$\alpha=0.1$ & 0.967 & 0.020 & 0.005 & 0.002 \\
$\alpha = 1$ & 0.724 & 0.150 & 0.047 & 0.022 \\ \hline
\end{tabular}
\caption{(a) Contribution of the three lowest-order eigenmodes to the normalized sediment flux (r.h.s. of equation (\ref{eq:Qn})), for $\alpha = 0.1$ and $\alpha = 1$. (b) Normalized coefficients $A_n = a_n / \delta \Phi_{\rm b}$ of the expansion (\ref{projection}), computed from a projection over four modes.}
\label{tab:QnandAn}
\end{center}
\end{table}
The general form of the concentration defect finally is
\begin{equation}
\phi(x,z) = \Phi_{\rm b} \, \sum_{n=1}^{\infty} a_n f_n(z)\exp(-x/L_n),
\label{eq:expansion}
\end{equation}
where the relaxation lengths $L_n$ are given by (\ref{eq:Ki}-\ref{eq:elln}), the eigenfunctions $f_n(z)$ are given by (\ref{eq:eigfun}), and the coefficients $a_n$ have to be determined by the concentration profile imposed at $x=0$. Such a determination will be illustrated in section \ref{sec:applic}.
\subsection{Temporal evolution and relaxation times} \label{sec:Tsat}
We now consider an unsaturated concentration profile at initial time $t=0$, say $\Phi_{\rm sat}(z) - \phi(z, t=0)$, uniform in the streamwise $x$-direction, and search for the time needed for relaxation to the saturated distribution $\Phi_{\rm sat}(z)$ given by (\ref{eq:Phi}), \textit{i.e.} vanishing concentration defect $\phi(z, t)$. Calculations go along the same lines as in the previous sub-section, so they are only briefly sketched here. Looking for normal modes of the form
\begin{equation}
\phi(t,z) = \Phi_{\rm b} \, g(z) \exp(-t/T),
\end{equation}
we get from the equation (\ref{eq:concentration}) the equation governing the eigenfunctions $g(z)$:
\begin{equation}
\frac{1}{T} \, g+\frac{d}{d z}\left(D_0 \frac{d g}{d z}+g\,V_{\rm fall} \right)=0.
\label{equamodale4T}
\end{equation}
This equation has solutions of the form $g(z)\propto \exp(Kz/H)$, where $K$ has to satisfy a quadratic equation with roots $K_+$ and $K_-$ defined as
\begin{equation}
K_{\pm} = - \frac{\alpha}{2} \pm \mathrm{i} K_{\rm i} \quad{\rm with}\quad
K_{\rm i}=\sqrt{\frac{H}{\mathcal{K}u_*T} - \frac{\alpha^2}{4}}.
\label{eq:solK4T}
\end{equation}
The boundary conditions at $z=0$ and $z=H$ are the same as in the previous section, so that $K_{\rm i}$ still verifies equation (\ref{eq:Ki}), with same solutions $K_{{\rm i}n}$, $n \ge 1$. The corresponding relaxation times $T_n$ are then given by
\begin{equation}
\frac{H}{\mathcal{K}u_*T_n} = K_{{\rm i}n}^2 + \frac{\alpha^2}{4}, \qquad n \ge 1.
\label{eq:Tn}
\end{equation}
Introducing the characteristic deposition time
\begin{equation}
T_d \equiv \frac{H}{V_{\rm fall}} = \frac{L_d}{U},
\end{equation}
the relaxation times are, in the limit of small $\alpha$,
\begin{equation}
T_1 \sim T_d, \qquad
T_n \sim \frac{\alpha T_d}{ (n-1)^2 \pi^2} \sim \frac{L_n}{U}
\quad {\rm for} \quad n\ge2.
\end{equation}
As for the spatial problem, the sediment dynamics is dominated by the largest time $T_1$, equal to the deposition time $T_d$ for strong suspensions.
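In the notation of the sketch given in \S\ref{sec:Lsat}, the normalized times follow directly from the same roots:
\begin{verbatim}
def times_over_Td(alpha):               # T_n / T_d from the roots K_in
    Ki = eigen_roots(alpha)
    # H/(K u* T_n) = Ki^2 + alpha^2/4  and  T_d = H/(alpha K u*)
    return alpha / (Ki**2 + 0.25 * alpha**2)

print(times_over_Td(0.1))               # ~ [0.98, 0.010, 0.0025]
\end{verbatim}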
\section{Two illustrations, and comparison to experiments} \label{sec:applic}
\subsection{Effect of a change in the bed slope} \label{sec:slope}
Consider the situation depicted in Figure~\ref{fig:sketches}a, of a flow with saturated concentration profile $\Phi_0(z)$ which experiences a small variation $\delta S$ in the bottom slope at $x=0$, either positive or negative. This variation leads to a small change of the water depth and friction velocity, according to $\delta H/H = - \delta u_*/u_* = - \frac{1}{2} \delta S/S$. This change occurs on a hydrodynamic lengthscale $L_h$ given by the balance between the acceleration $U \delta U/L_h$ and the force $g \delta S$, \textit{i.e.} $L_h/L_d = U V_{\rm fall} / 2 u_*^2 = \lambda \mathcal{K} \alpha/2$. The present analysis is valid for small $L_h/L_d$, a condition which is fulfilled for small $\alpha$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 52)(0, 46)
\put(2, 40){\includegraphics[width=12cm]{ProfilsRelaxPhi.pdf}}
\put(3, 91){(a)}
\put(63, 91){(b)}
\end{picture}
\end{center}
\caption{Relaxation of the concentration defect $\phi(x,z)$ after a slope change as sketched in Figure~\ref{fig:sketches}a, with summation over the first three modes (plug flow model). (a), Profiles of $\phi(x,z)$ at the four downstream positions $x/L_d = 0$, 0.3, 1 and 3, for $\alpha = 0.1$; (b), same for $\alpha = 1$.}
\label{fig:phi_relax}
\end{figure}
The change in the saturated concentration profile due to the slope variation $\delta S$ is, at the linear order in $\delta u_*$,
\begin{equation}
\delta \Phi(z) = \delta \Phi_{\rm b}
\exp \left( -\alpha \frac{z}{H} \right) \qquad \mbox{with} \qquad
\delta \Phi_{\rm b} = \frac{\varphi'_\uparrow(u_*) \delta u_*}{V_{\rm fall}},
\label{PhiPhi}
\end{equation}
where $\varphi'_\uparrow(u_*)$ is the derivative of the erosion rate $\varphi_\uparrow(u_*)$. The concentration defect at $x=0$ corresponds to this change ($\phi(0,z) = \delta \Phi(z)$), so that the coefficients $a_n$ of the expansion (\ref{eq:expansion}) must satisfy
\begin{equation}
\delta \Phi_{\rm b} \exp \left( -\frac{\alpha}{2} \, \frac{z}{H} \right) =
\sum_{n=1}^\infty a_n \left[ \cos \left(K_{{\rm i}n}\frac{z}{H}\right) + \frac{\alpha}{2K_{{\rm i}n}} \, \sin \left(K_{{\rm i}n}\frac{z}{H}\right) \right].
\label{projection}
\end{equation}
These coefficients can be determined from the projection of the above equation on the eigenfunctions, \textit{i.e.} truncation of the sum on the r.h.s. to $p$ terms, multiplication by each eigenfunction, and integration of both sides from $0$ to $H$. A linear system of $p$ equations is obtained, whose solution gives the coefficients $a_1$, ..., $a_p$. Table~\ref{tab:QnandAn}b displays the normalized coefficients $A_n = a_n/\delta \Phi_{\rm b}$ resulting from the projection over $p=4$ modes, for two values of $\alpha$. One can see that the first mode captures most of the weight; the contribution of the second one is smaller, but still significant, and higher modes are negligible. We have checked that considering more terms in the expansion has negligible effect on the dominant coefficients. The small weight of the oscillating modes is consistent with the slow variation with $z$ of the initial concentration profile (\ref{PhiPhi}).
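A sketch of this projection, reusing \texttt{eigen\_roots()} from \S\ref{sec:Lsat}, is given below; whether one weights by the full eigenfunctions $f_m$, as done here, or by their oscillatory parts only is an implementation choice, so the printed values should merely be close to those of Table~\ref{tab:QnandAn}b.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def bracket(u, alpha, Ki):              # oscillatory part of f_n, 1 at u = 0
    return np.cos(Ki * u) + alpha / (2.0 * Ki) * np.sin(Ki * u)

def coefficients(alpha, p=4):           # normalized coefficients A_n
    Ki = eigen_roots(alpha, n_roots=p)
    M, b = np.empty((p, p)), np.empty(p)
    for m in range(p):
        w = lambda u, m=m: bracket(u, alpha, Ki[m]) * np.exp(-alpha * u / 2.0)
        b[m] = quad(lambda u: np.exp(-alpha * u / 2.0) * w(u), 0.0, 1.0)[0]
        for n in range(p):
            M[m, n] = quad(lambda u, n=n: bracket(u, alpha, Ki[n]) * w(u),
                           0.0, 1.0)[0]
    return np.linalg.solve(M, b)

print(np.round(coefficients(1.0), 3))   # compare with Table (b) above
\end{verbatim}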
Figure~\ref{fig:phi_relax} displays profiles of the concentration defect $\phi(x,z)$ at the location of the slope change, $x/L_d = 0$, and at three positions downstream, for $\alpha = 0.1$ (Fig.~\ref{fig:phi_relax}a) and $\alpha = 1$ (Fig.~\ref{fig:phi_relax}b). It can be seen that at the position $x/L_d = 3$, the concentration defect is nearly zero. The flux defect, \textit{i.e.} the depth-integrated concentration defect, correspondingly decays towards zero (not shown), and does so almost exponentially with relaxation length $L_d$, as predicted by equation (\ref{eq:relax}). This confirms that the dominant mode with relaxation length $L_d$ captures most of the sediment flux variations. For this example, as well as for the next ones, the question of the plug flow limit is not crucial: as far as the relaxation modes are concerned, the logarithmic and plug models do not differ much (Fig.~\ref{fig:eigfun}).
\subsection{Net erosion experiments} \label{sec:erodib}
Another situation of interest is that of a flow of clear fluid on a non-erodible bed ($\Phi_0(z)=0$) reaching an erodible bed lying in $x>0$, as sketched in Figure~\ref{fig:sketches}b. Suspension develops downstream until the saturated concentration profile $\Phi_{\rm sat}(z)$ is reached. The analysis goes along the same lines as for the slope change. The concentration defect $\phi(x,z)$ can be decomposed on the eigenfunctions (\ref{eq:eigfun}), with coefficients $a_n$ determined by the concentration profile at $x=0$. The equation to be satisfied turns out to be the same as (\ref{projection}) with $\Phi_{\rm b}$ instead of $\delta \Phi_{\rm b}$. Thus the coefficients are $a_n = A_n \Phi_{\rm b}$ with the normalised coefficients $A_n$ given in Table~\ref{tab:QnandAn}b.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
& van Rijn (1986b)
& \multicolumn{2}{c|}{Ashida \& Okabe (1982)}
& Jobson \& Sayre (1970) \\
\hline
& & \quad run 5 \quad \quad & run 6 & runs FS1 and FS1A \\
& erosion & erosion & deposition & deposition \\
Symbol & ({\Large $\ast$}) & ({\Large $\circ$})
& ($\square$) & ($\blacksquare$) \\
$H$ (cm) & 25 & \multicolumn{2}{c|}{4.3} & 40.7 \\
$U$ (cm/s) & 67 & \multicolumn{2}{c|}{37.3} & 29.1 \\
$u_*$ (cm/s) & 4.77 & \multicolumn{2}{c|}{3.63} & 4.48 \\
$V_{\rm fall}$ (cm/s) & 2.2 & \multicolumn{2}{c|}{1.85} & 1.0 -- 2.0 \\
$L_d$ (m) & 7.6 & \multicolumn{2}{c|}{0.87} & 5.9 -- 11.9 \\
$\alpha$ & --- & 3.1 & 2.5 & 1.2 \\
Sc & --- & 0.41 & 0.33 & 0.18 -- 0.36 \\
\hline
\end{tabular}
\end{center}
\caption{Hydraulic parameters $H$, $U$, $u_*$ and $V_{\rm fall}$ of the experiments, and $L_d = (U/V_{\rm fall})H$, $\alpha$ from the exponential fit of the downstream concentration profile, and ${\rm Sc} = \kappa \alpha u_*/6 V_{\rm fall}$.}
\label{tab:param_exp}
\end{table}
The prediction that the eigenmode with the largest relaxation length captures most of the sediment flux can be assessed from the experimental observations of van Rijn (1986b) and Ashida \& Okabe (1982) -- non-Japanese readers can access these latter data in the paper of Celik \& Rodi (1988). These experiments precisely correspond to the sketch depicted in Figure~\ref{fig:sketches}b. Their hydraulic parameters are given in Table~\ref{tab:param_exp}. The spatial evolution of the concentration profiles has been measured at different locations downstream of the transition point at $x=0$. The corresponding sediment flux $q$, which is zero for $x<0$, increases downstream until it reaches the saturated value $q_{\rm sat}$. We determined this flux from integration of the measured concentration profile at each $x$-location. From the hydraulic parameters, the deposition length can be computed as $L_d = (U/V_{\rm fall}) H$ -- in the following we will not distinguish between $L_1$ and $L_d$, although they can differ by $\approx 20 \%$ for $\alpha$ on the order of unity (Fig.~\ref{fig:LnsurLd}b). It appeared that for Ashida \& Okabe (1982) the location of the farthest downstream measurements corresponds to $x/L_d = 8.1$, which is large enough for the sediment flux to be saturated. Figure~\ref{fig:relax1}a displays the corresponding concentration profile, and an exponential fit providing, from (\ref{eq:Phi}), the value of the parameter $\alpha$ reported in Table~\ref{tab:param_exp}. For van Rijn (1986b), we found $x/L_d = 1.3$, which is not sufficiently large, preventing any straightforward determination of the parameter $\alpha$. For both experiments, the saturated flux $q_{\rm sat}$ was estimated as that providing the best fit to the exponential curve
\begin{equation}
\frac{q}{q_{\rm sat}} = 1 - \exp(-x/L_d).
\label{eq:relax_eros}
\end{equation}
Figure~\ref{fig:relax1}b displays the variation of $q/q_{\rm sat}$ with $x/L_d$; it can be seen that the data points fall quite well on the exponential curve. Note that the deposition lengths $L_d$ differ by one order of magnitude between the two experiments. The data collapse therefore supports a first-order relaxation process with characteristic length equal to $L_d$.
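In practice the fitted quantities are obtained by least squares; a sketch, with hypothetical data points standing in for the digitized fluxes, reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def q_model(x, q_sat, L_d):             # exponential relaxation law
    return q_sat * (1.0 - np.exp(-x / L_d))

x_m = np.array([0.4, 0.9, 1.8, 3.5, 7.0])        # hypothetical positions (m)
q_m = np.array([0.37, 0.63, 0.87, 0.97, 1.00])   # hypothetical fluxes
(q_sat, L_d), cov = curve_fit(q_model, x_m, q_m, p0=(1.0, 1.0))
\end{verbatim}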
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 60)
\put(0, 0){\includegraphics[width=12cm]{qsurqsat1.pdf}}
\put(47, 49){(a)}
\put(107, 49){(b)}
\end{picture}
\end{center}
\vspace*{0.2cm}
\caption{Net erosion experiments. (a) Concentration profile at the farthest location $x/L_d = 8.1$ measured by Ashida \& Okabe (1982, run 5), normalized with the depth-averaged concentration $\Phi_{\rm ref}$; solid line: $\Phi_{\rm sat}$ given by (\ref{eq:Phi}) with $\alpha = 3.2$. (b) Relaxation to saturation of the normalized sediment flux versus $x/L_d$; symbols: van Rijn (1986b) and Ashida \& Okabe (1982) (see Table~\ref{tab:param_exp}); solid line: exponential relaxation (\ref{eq:relax_eros}).}
\label{fig:relax1}
\end{figure}
\subsection{Net deposition experiments} \label{sec:depos}
Ashida \& Okabe (1982) have also performed experiments in which the initial concentration profile is oversaturated (run 6), \textit{i.e.} the initial sediment flux $q_0$ at $x=0$ is larger than $q_{\rm sat}$, so that the sediment settles until the saturated regime is reached further downstream. Figure~\ref{fig:relax2}b displays the sediment flux, obtained from the measured concentration profiles, together with the exponential relaxation curve now given by
\begin{equation}
\frac{q}{q_{\rm sat}} =
1 + \left( \frac{q_0}{q_{\rm sat}} - 1 \right) \exp(-x/L_d)
\label{eq:relax_depos}
\end{equation}
where $q_0$ and $q_{\rm sat}$ were determined by curve fitting, and $L_d$ is given in Table~\ref{tab:param_exp}. Again, the agreement is quite good, showing that the mode with relaxation length $L_d$ captures most of the deposition process.
As a confirmation, Figure~\ref{fig:relax2}a compares the concentration profile measured at the location $x=0$ to its projection over one single mode. This projection is computed from an empirical representation of this initial profile, using the expansion (\ref{eq:expansion}) and the saturated concentration profile $\Phi_{\rm sat}(z)$, measured at $x = 8.1\,L_d$ and fitted by the exponential form (\ref{eq:Phi}) with $\alpha = 2.5$. It can be seen that the resulting profile is in good agreement with the measurements. Note that $\alpha = 2.5$ corresponds to Schmidt number ${\rm Sc} = \alpha \kappa u_*/6 V_{\rm fall} = 0.33$, which is slightly below the usual range $0.5$--$1$ (Coleman 1970; Celik \& Rodi 1988; Nielsen 1992).
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 60)
\put(0, 0){\includegraphics[width=12cm]{qsurqsat2.pdf}}
\put(47, 49){(a)}
\put(107, 49){(b)}
\end{picture}
\end{center}
\vspace*{0.2cm}
\caption{Net deposition experiments of Ashida \& Okabe (1982) (run 6, see Table \ref{tab:param_exp}). (a) Concentration profiles at $x=0$ normalized with the depth-averaged concentration $\Phi_{\rm ref}$; solid line: $\Phi_0$ reconstructed from $\Phi_{\rm sat}$ and one single mode for the concentration defect.
(b) Normalized sediment flux versus $x/L_d$, experiments and exponential relaxation (\ref{eq:relax_depos}).}
\label{fig:relax2}
\end{figure}
Other net deposition experiments have been performed by Jobson \& Sayre (1970). In this work, the particles were released near the water surface, so that the initial concentration profile exhibits a peak close to $z=H$, as shown in Figure~\ref{fig:relax3}a. In contrast to the Ashida \& Okabe experiments, expanding the concentration defect over one single mode is not sufficient to get a good representation of this profile; an expansion over four modes provides a much better description, as shown in Figure~\ref{fig:relax3}a. However, the high-order modes are expected to vanish over a short distance, on the order of a few flow depths $H$, and the exponential relaxation to be recovered at large distances. This scenario is evidenced in Figure~\ref{fig:relax3}b, which displays the normalized flux versus $x/H$, measurements and the exponential curve (\ref{eq:relax_depos}). Here, due to uncertainties on the falling velocity (see Table~\ref{tab:param_exp} of the present paper and Figure~6a of Jobson \& Sayre 1970), the deposition length $L_d$ was determined, together with $q_0$ and $q_{\rm sat}$, by fitting the experimental data points. We found $L_d = 5.2$ m, which is close to the range of the expected values displayed in Table~\ref{tab:param_exp}, although slightly smaller. We finally note that in the course of the reconstruction of $\Phi_0(z)$, we found $\alpha = 1.2$ from the saturated concentration profile, which corresponds to a Schmidt number in the range 0.18--0.36, slightly smaller, again, than the usual range.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(120, 60)
\put(0, 0){\includegraphics[width=12cm]{qsurqsat3.pdf}}
\put(47, 49){(a)}
\put(107, 49){(b)}
\end{picture}
\end{center}
\vspace*{0.2cm}
\caption{Net deposition experiments of Jobson \& Sayre (1970) (see Table \ref{tab:param_exp}). (a) Measured concentration profile at $x=0$ normalized with the depth-averaged concentration $\Phi_{\rm ref}$; dashed and solid lines: $\Phi_0$ reconstructed with one single mode and four modes, respectively, and $\alpha = 1.2$.
(b) Relaxation to saturation of the normalized sediment flux versus $x/L_d$; solid line: exponential relaxation (\ref{eq:relax_depos}) with $L_d = 5.2$ m.}
\label{fig:relax3}
\end{figure}
\section{Concluding remarks} \label{sec:conclusion}
In this paper, we have discussed the conditions under which a first-order relaxation equation for the sediment flux $q$ can be derived for turbulent flows, when suspension is the dominant mode of transport. From a mode analysis of the linear advection-diffusion equation for the particle concentration, it was shown that the sediment flux is dominated by the mode corresponding to the largest relaxation length for spatially varying flows, or the largest relaxation time for time-dependent flows. These relaxation scales were identified as the deposition length $H U/V_{\rm fall}$ and the deposition time $H/V_{\rm fall}$, where $H$ is the flow depth, $U$ the mean flow velocity and $V_{\rm fall}$ the sediment settling velocity. This result is expected to be particularly relevant for the case of sediment transport in slowly varying flows, for which the flux is never far from saturation. Predictions of the sediment flux were shown to be in quantitative agreement with flume experiments, for both net erosion and net deposition situations, and deposition lengths spanning over one order of magnitude.
As discussed in the introduction, the relaxation equation~(\ref{eq:relax}) allows for the description of both bed load and suspended load. However, these modes of transport correspond to very different physical lengthscales. For bed load, the relaxation length $L_{\rm sat}$ is on the order of $10$ grain diameters (Fourri\`ere, Claudin \& Andreotti 2010). As soon as the flow depth $H$ is larger than a few $L_{\rm sat}$, the -- unstable -- flat bed is insensitive to the presence of the free surface, and current ripples emerge at a centimetric wavelength ($\approx 10 L_{\rm sat}$). When suspended load is the dominant type of transport, we have shown that the relaxation length $L_{\rm sat}$ is on the order of $10$--$100~H$, which is typically four to five orders of magnitude larger than for bed load. Suspended transport thus prevents the formation of bedforms with wavelength smaller than $H$, and patterns such as bars, antidunes and meanders can be expected to emerge from linear instability, with large wavelengths on the order of $100$--$1000~H$. Further work is required for the experimental and theoretical investigations of these instabilities.
\vspace*{0.3cm}
\noindent
\rule[0.1cm]{3cm}{1pt}
\noindent
This work has benefited from the financial support of the Agence Nationale de la Recherche, grant `Zephyr' ($\#$ERCS07\underline{\ }18) and the GdR `M\'ePhy' of the CNRS ($\#$3166).
|
2,869,038,154,420 | arxiv | \section{Introduction}
Accurate black-hole masses are necessary to understand the growth of black holes and their role in galaxy evolution. In nearby (\textless\ 100 Mpc) galaxies, it is possible to measure black-hole mass directly from \red{high spatial resolution observations of} the dynamics of stars and gas \citep[e.g.][]{KormendyHo13}. But for distant active galactic nuclei\footnote{\label{AGN} Throughout this work, we generically use the terms ``quasar'' and ``AGN'' interchangeably to refer to broad-line AGN, as broad lines are necessary for reverberation mapping.} (AGN), the primary method to obtain reliable black-hole masses is reverberation mapping (RM) \red{from time-domain spectroscopy} \citep{Mckee,Peterson04}.
Reverberation mapping measures the time delay between variability in the continuum emission and the corresponding variability in the broad line region (BLR). In the environment around the supermassive black hole, light from the accretion disk is absorbed and re-emitted by the BLR with a delay due to the light travel time between the two emitting regions. The time delay, multiplied by the speed of light, gives a characteristic distance to the BLR, which is assumed to be in a virial orbit around the black hole. The mass of the black hole is thus given by a virial mass calculation \red{as in Equation (\ref{virial}), using the emission-line broadening ($\Delta\rm V^{2}$), characterized by the line-width FWHM or $\sigma_{\rm line}$, combined with the radius of the BLR}
\begin{equation} \label{virial}
M_{BH} = {f R_{BLR} \Delta V^{2}\over{G}}
\end{equation}
The mass calculation includes a dimensionless factor ``$f$'', to account for the geometry of the orbit and kinematics of the BLR; this factor can be calibrated from comparing RM and dynamical masses \citep{Onken07, Grier13}, the $M_{\mathrm{BH}}-\sigma$ relation \citep{Woo15, Yu19}, or from dynamical modeling of the BLR \citep{Pancoast14}. \red{The f-factor is of order unity and the exact value depends on assumptions like how the broad-line velocity is measured \citep[e.g.,][]{Peterson04,Collin06,Yu19}.}
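As a back-of-the-envelope illustration of Equation (\ref{virial}), the following snippet computes a virial mass from a lag and a line width; the lag, FWHM and $f = 1$ below are placeholder values, and as noted above the appropriate $f$ depends on how the velocity is measured.
\begin{verbatim}
c = 2.998e8; G = 6.674e-11; DAY = 86400.0; MSUN = 1.989e30   # SI units

def virial_mass(tau_days, fwhm_kms, f=1.0):
    R = c * tau_days * DAY              # R_BLR = c * lag, in metres
    dv = fwhm_kms * 1.0e3               # line width in m/s
    return f * R * dv**2 / G / MSUN     # black-hole mass in solar masses

print(virial_mass(20.0, 3000.0))        # ~3.5e7 for a 20-day lag
\end{verbatim}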
From RM measurements taken over the last two decades, a correlation has been observed between the measured BLR time delay and the continuum luminosity of the AGN \citep[e.g.,][]{Kaspi00,Bentz09,Bentz13}. From this ``radius--luminosity'' ($R-L$) relation, we can estimate the radius of the BLR with just a luminosity measurement \red{(e.g, Equation \ref{BRL})} and estimate the black-hole mass from single-epoch observations.
This allows for the measurement of black-hole masses for a large number of AGN without high spatial resolution or long-term monitoring. However, single-epoch estimates are only correct if the $R-L$ relation accurately describes the diverse AGN population; therefore, it is necessary to measure this relation over a broad AGN sample and with the least bias possible.
\cite{Bentz13} used \hbox{{\rm H}$\beta$}\ time-lag measurements and reliable subtraction of host galaxy light for 41 AGN from different RM campaigns to determine the following $R-L$ relation between the mean radius of the \hbox{{\rm H}$\beta$}-emitting BLR and the AGN continuum luminosity at 5100 \AA\ (\hbox{$\lambda L_{5100}$}) :
\begin{equation}\label{BRL}
\log(R_{\rm BLR}/ \mathrm{lt \mhyphen day}) = K + \alpha \log(\hbox{$\lambda L_{5100}$}/10^{44}~\mathrm{erg~s^{-1}})
\end{equation}
The slope of this relation ($\alpha = 0.533$) is consistent with the $R_{\rm BLR} \propto L^{0.5}$ expectation from basic photoionization models \citep{Davidson72}. \cite{Bentz13} measured an intrinsic scatter in the relation of $\sigma \sim 0.19$, and a normalization $K = 1.527$. The \cite{Bentz13} $R-L$ relation has been the recent standard used to estimate single-epoch black hole masses; however, recent RM results appear to deviate from this relation.
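In single-epoch use, Equation (\ref{BRL}) with these coefficients gives the BLR radius directly from a luminosity measurement; a minimal sketch with the quoted $K$ and $\alpha$:
\begin{verbatim}
import numpy as np

def r_blr_lt_days(lamL5100_cgs):        # K = 1.527, alpha = 0.533
    return 10.0**(1.527 + 0.533 * (np.log10(lamL5100_cgs) - 44.0))

print(r_blr_lt_days(1.0e44))            # ~34 light-days at 1e44 erg/s
\end{verbatim}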
The Sloan Digital Sky Survey Reverberation Mapping (SDSS-RM) project is a dedicated multi-object RM campaign that has been monitoring 849 quasars with spectroscopy and photometry since 2014 \citep{Shen15b}. \cite{Grier17} published an \hbox{{\rm H}$\beta$}\ $R-L$ relation for 44 AGN from the first year of SDSS-RM monitoring. The time lags measured by SDSS-RM are often significantly shorter than those predicted by Equation (\ref{BRL}) for their given AGN luminosity, and thus these sources fall below the \cite{Bentz13} $R-L$ relation. In addition, the Super-Eddington Accreting Massive Black Holes (SEAMBH) survey presented a $R-L$ relation for a sample of rapidly-accreting AGN that also differs from \cite{Bentz13} in the same manner \citep{Du16,Du18,Du19}.
In this work we examine if this discrepancy is due to observational biases that restrict the allowable lag detections, or if the SDSS-RM and SEAMBH samples have properties that \red{represent a broader population of AGN} compared to previous RM studies; thus indicating a physical origin for the discrepancy\red{, as suggested by recent work \citep{Czerny19,Du19}}. We explore this by simulating a $R-L$ relation based on \cite{Bentz13}, while imposing the observational constraints of the SDSS-RM dataset. We present the data included in our study in Section 2, and provide a detailed description of our simulated $R-L$ relation and results in Section 3. In Section 4, we discuss possible causes for the discrepancy. Throughout this work we assume a standard $\Lambda$CDM cosmology with $\Omega_\Lambda = 0.7$, $\Omega_M = 0.3$, and $H_0 = 70\ \mathrm{km~s^{-1}~Mpc^{-1}}$.
\section{Data}
For our analysis, we compare \hbox{{\rm H}$\beta$}\ lags, \hbox{$\lambda L_{5100}$}, and the best-fit $R_{\rm BLR} - \lambda L_{5100}$\ relation for the \cite{Bentz13}, \cite{Grier17}, and \cite{Du16,Du18} datasets. The lags for the three RM campaigns were measured using different methods: \cite{Bentz13} and \cite{Du16,Du18} used the interpolated cross-correlation function (ICCF, \citealp{Gaskell87, White94, Peterson04}), while \cite{Grier17} primarily used JAVELIN \citep{Zu11} and CREAM \citep{Starkey16}. JAVELIN and CREAM use different assumptions than ICCF but are designed to produce similar results, so any deviations from the \cite{Bentz13} $R-L$ relation should not be due to the different lag detection methods, \red{as discussed in Section 2.3}. We briefly describe the details of the lag measurement methods in Section 2.1.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{RL_full_sample.pdf}
\caption{The $R-L$ relation for \hbox{{\rm H}$\beta$}\ time lags from \cite{Bentz13}, \cite{Grier17}, and \cite{Du16, Du18}. The black line shows the $R-L$ relation from \cite{Bentz13}, with a slope $\alpha = 0.533$ and a normalization $K = 1.527$. The lag measurements from SDSS-RM \citep{Grier17} and SEAMBH \citep{Du18} frequently lie below the $R-L$ relation established by \cite{Bentz13}.}
\label{bentzfit}
\end{figure}
Figure \ref{bentzfit} presents the $R-L$ relation for the \cite{Bentz13}, \cite{Grier17}, and \cite{Du16,Du18} samples of AGN with \hbox{{\rm H}$\beta$}\ RM lags. We describe these three samples in detail in the subsections below. \red{The distribution of AGN properties in each sample is presented in Figure \ref{SampleHisto}. For the Eddington ratio ($\lambda_{\rm Edd} = {L_{\rm bol}\over{L_{\rm Edd}}}$), we assume $L_{\rm bol}$ = 5.15\hbox{$\lambda L_{3000}$}\ and $L_{\rm bol} = 9.26\lambda L_{5100}$ \citep{Richards06}. Published 3000 \AA\ luminosities are available only for 41 of the \cite{Grier17} AGN; we use the 5100 \AA\ luminosities for all other AGN in the three samples. We use black-hole masses for the \citet{Bentz13} sample from the compilation of \citet{Bentz15}.} In all three samples, the AGN luminosities are host-subtracted, and as such the luminosity uncertainties include a contribution from the uncertainty associated with the host-galaxy decomposition. In general this means that the AGN luminosity uncertainties are largest for low-luminosity and host-dominated AGN, and are generally small for luminous AGN. \red{We determine the best-fit $R-L$ relation for each sample employing multiple linear regression with the Python MCMC software \texttt{PyMC3}, including uncertainties in both radius (y-axis) and luminosity (x-axis) and allowing for excess intrinsic scatter.}
\subsection{Lag Measurement Methods}
The ICCF determines the cross-correlation between two light curves, measured as the Pearson correlation coefficient $r$ as a function of time delay $\tau$. Because the data are unevenly spaced due to observational constraints, the ICCF linearly interpolates the first light curve to produce overlapping points to calculate $r$ for any delay $\tau$. The same process is repeated starting with the second light curve shifted by $-\tau$. The cross-correlation coefficient for a given $\tau$ is obtained by averaging the two values of $r$. The ICCF repeats this procedure for a range of $\tau$ to obtain the final cross-correlation function (CCF). The likely time lag between the two light curves is given by the centroid of the CCF. The uncertainties are calculated using Monte Carlo methods with flux re-sampling and random subset sampling \citep{Peterson04}.
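As an illustration of this procedure, a minimal ICCF implementation might look like the following sketch. It omits the Monte Carlo error analysis described above, and the 80\% peak threshold for the centroid is a common convention rather than a unique prescription.
\begin{verbatim}
import numpy as np

def r_at_lag(tc, fc, tl, fl, tau):
    # Interpolate the continuum at tl - tau and
    # correlate it with the line fluxes.
    m = (tl - tau >= tc.min()) & (tl - tau <= tc.max())
    if m.sum() < 2:
        return np.nan
    fi = np.interp(tl[m] - tau, tc, fc)
    return np.corrcoef(fi, fl[m])[0, 1]

def iccf(tc, fc, tl, fl, taus):
    # Average the two interpolation directions.
    r1 = [r_at_lag(tc, fc, tl, fl, t) for t in taus]
    r2 = [r_at_lag(tl, fl, tc, fc, -t) for t in taus]
    return 0.5 * (np.array(r1) + np.array(r2))

def centroid(taus, r, frac=0.8):
    # CCF centroid over points above frac * peak.
    g = r >= frac * np.nanmax(r)
    return np.sum(taus[g] * r[g]) / np.sum(r[g])
\end{verbatim}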
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{sample_z.pdf}
\includegraphics[width=\columnwidth]{sample_M.pdf}
\includegraphics[width=\columnwidth]{sample_Edd.pdf}
\caption{\red{From top to bottom: Distribution of redshift, mass, and $\lambda_{\rm Edd}$ for the \cite{Bentz13}, \cite{Grier17} and \cite{Du16,Du18} samples.} }
\label{SampleHisto}
\end{figure}
Instead of using linear interpolation, JAVELIN assumes that the variability of the continuum light curve is best described by a damped random walk (DRW) model. JAVELIN then models the BLR light-curve response with the same DRW model combined with a top-hat transfer function centered at a lag $\tau$, producing a BLR light-curve model that is a shifted, smoothed, scaled version of the continuum light curve. Markov Chain Monte Carlo (MCMC) is used to identify the most likely lag and uncertainty. CREAM adopts a similar approach to JAVELIN to measure lags, with the same DRW assumption about variability, but with a slightly different treatment of the uncertainties. \red{Detailed simulations by \cite{Li19} and \cite{Yu20} find that, for light curves of similar cadence and noise to SDSS-RM, JAVELIN produces more accurate lags and lag uncertainties than ICCF, and fewer false positives.} \cite{Grier17} measured \hbox{{\rm H}$\beta$}\ lags using JAVELIN, ICCF, and CREAM; in this work we primarily utilize the lags from JAVELIN and CREAM, while noting that the ICCF lags of SDSS-RM quasars produce the same \red{offset in the} $R-L$ relation (Figure \ref{sdssiccf}).
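The forward model underlying JAVELIN and CREAM can be sketched as follows; this is illustrative only (the actual codes fit this model to data with MCMC rather than simulating it, and the parameter values below are arbitrary).
\begin{verbatim}
import numpy as np

def drw_cov(t, sig, taud):
    # DRW covariance: sig^2 exp(-|dt| / taud)
    dt = np.abs(t[:, None] - t[None, :])
    return sig**2 * np.exp(-dt / taud)

def simulate_drw(t, sig=0.1, taud=100.0, seed=0):
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(
        np.zeros(len(t)), drw_cov(t, sig, taud))

def line_response(t, tcon, fcon, lag, width, scale):
    # Top-hat transfer function centered on `lag`:
    # the line is a shifted, smoothed, scaled
    # version of the continuum.
    s = np.linspace(lag - width / 2,
                    lag + width / 2, 11)
    sm = np.mean([np.interp(t - si, tcon, fcon)
                  for si in s], axis=0)
    return scale * sm
\end{verbatim}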
\subsection{Bentz et al.}
\cite{Bentz13} collected a sample of 41 AGN from previous RM surveys, focusing on adding accurate host-galaxy subtraction from HST imaging. The sample primarily includes nearby AGN that were generally selected to be apparently bright and variable, with luminosities in the range $10^{42} < \hbox{$\lambda L_{5100,\mathrm{AGN}}$} < 10^{46}$\ ergs~s$^{-1}$. The AGN have lags measured from observing campaigns with monitoring durations that ranged from 64 to 120 days, with cadences as rapid as 1 day between observations. Lags were measured using the ICCF method, resulting in 70 \hbox{{\rm H}$\beta$}\ time lags for 41 unique AGN in the range 2--100 rest-frame days.
The luminosity measurements are corrected for host-galaxy contributions; this is especially important for lower-luminosity AGN since galaxy contamination leads to an overestimation of \hbox{$\lambda L_{5100}$}, steepening the $R-L$ relation. Previous RM surveys that did not correct for host-galaxy luminosity found a steeper $R-L$ relation with a slope $\alpha \sim$ 0.70 \citep{Kaspi00}. \cite{Bentz13} measured the host-galaxy contribution for each AGN through morphological decomposition of HST/ACS images, using the GALFIT software \citep{Peng02} to determine the best-fit point-source AGN and extended galaxy surface brightness profiles via a nonlinear least-squares fit.
Figure 11 in \cite{Bentz13} presents the $R-L$ relation observed for their measured \hbox{{\rm H}$\beta$}\ time lags, with a slope $\alpha = 0.533^{+0.035}_{-0.033}$ and a normalization $K = 1.527^{+0.031}_{-0.031}$ for the best-fit line. Our fitting method yields a nearly identical slope \red{$\alpha = 0.52 \pm 0.03$ and a normalization $K = 1.52 \pm 0.03$} for the \cite{Bentz13} \hbox{{\rm H}$\beta$}\ lags.
\subsection{SDSS-RM}
\cite{Grier17} successfully measured \hbox{{\rm H}$\beta$}\ time lags for 44 AGN from the SDSS-RM survey. The AGN have luminosities $10^{43} < \hbox{$\lambda L_{5100,\mathrm{AGN}}$} < 10^{45.5}$~ergs~s$^{-1}$\ and redshifts 0.12 \textless\ {\it{z}} \textless\ 1. The full SDSS-RM sample is magnitude-limited (by $i_{\rm AB}<21.7$), with no other selection criteria for AGN properties. This results in a sample that is more representative of the general AGN population, with greater diversity in redshift and other AGN properties compared to previous RM studies. For example, the SDSS-RM sample spans a much broader range of emission-line widths, strengths, and blueshifts compared to the sample of \cite{Bentz13} \citep[see Figure 1 of][]{Shen15b}.
Spectra of the quasars were obtained using the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph \citep{Smee13} on the SDSS 2.5 m telescope \citep{Gunn06} at Apache Point Observatory. The initial observations include 32 epochs taken over a period of 6 months in 2014. The exposure time for each observation was $\sim$ 2 hr and the average time between observations was 4 days (maximum 16.6 days).
Photometric observations were acquired in the {\it{g}} and {\it{i}} filters with the Bok 2.3 m telescope and the Canada-France-Hawaii Telescope (CFHT). Additionally, synthetic photometric light curves were produced from the BOSS spectra in the {\it{g}} and {\it{i}} bands. All of the {\it{g}} and {\it{i}} band light curves were merged using the CREAM software \citep{Starkey16} to create a continuum light curve for each AGN \citep[see][for additional details of the light-curve merging procedure]{Grier17}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{RL_SDSS_Lhost.pdf}
\caption{The $R-L$ relation for 44 AGN in the SDSS-RM survey, with \hbox{{\rm H}$\beta$}\ time lags from \cite{Grier17} and \hbox{$\lambda L_{5100}$}\ from \cite{Shen15a}. Out of the 44 lags, 32 were measured using JAVELIN and 12 were measured using CREAM. The open circles have \hbox{$\lambda L_{5100}$}\ that includes host-galaxy light, while the solid red circles have AGN luminosities (\hbox{$\lambda L_{5100}$}) that are host-subtracted using principal component analysis of the coadded spectra. Our best-fit line for the red (host-subtracted) points is shown as the red dashed line, with a slope $\alpha$ = 0.24 $\pm$ 0.08 and a normalization of $K=1.24 \pm 0.04$ that both differ from the \cite{Bentz13} best-fit $R-L$ relation (shown as the black solid line) by $>$ 3$\sigma$. The two \red{square} points were excluded from the fitting (see text for details). The SDSS-RM AGN generally have lags that are shorter than expected from the \citet{Bentz13} $R-L$ relation at a given host-subtracted $\hbox{$\lambda L_{5100}$}$.}
\label{sdssfit}
\end{figure}
\cite{Grier17} measured \hbox{{\rm H}$\beta$}\ reverberation lags using ICCF, JAVELIN and CREAM. Each method used a lag search range between $-100$ and 100 days, given the length of the SDSS-RM observation baseline ($\sim$ 200 days). This resulted in 32 lags from JAVELIN and 12 from CREAM, only including ``reliable'' positive time lags that have ${\rm SNR} > 2$, a single well-defined peak in the lag probability distribution function, and a correlation coefficient of $r_{\rm max}>0.45$.
\cite{Shen15b} used principal component analysis (PCA) to decompose the quasar and host-galaxy spectra, assuming that the total spectrum is a combination of linearly independent sets of quasar-only and galaxy-only eigenspectra. The SDSS eigenspectra are taken from \cite{Yip04}. To obtain the quasar-only spectrum, \cite{Shen15b} subtracted the best-fit host-galaxy spectrum from the total spectrum. \cite{Yue18} independently estimated the host-galaxy contribution using imaging decomposition and found results consistent with the spectral decomposition.
Figure \ref{sdssfit} presents the relation between the 44 SDSS-RM \hbox{{\rm H}$\beta$}\ time lags and \hbox{$\lambda L_{5100}$}. Host-subtracted continuum luminosity (\hbox{$\lambda L_{5100}$}) measurements were taken from \cite{Shen15a}. The points in red represent AGN luminosities that are host-subtracted as described above. The observed rest-frame time lags are generally shorter than predicted from the \cite{Bentz13} $R-L$ relation. The SDSS-RM data exhibit a positive correlation between radius and luminosity, with a Spearman's $\rho=0.54$ and a null probability of no correlation of $p \sim 0$. The $R-L$ properties of the SDSS-RM quasars are best fit by a line with a shallower slope \red{and lower normalization}, shown as the red dashed line with slope \red{$\alpha = 0.24 \pm 0.08$ and normalization $K = 1.24 \pm 0.04$.} However, the limited dynamic range of the SDSS-RM quasars means that the data could also be consistent with the same $\alpha \simeq 0.5$ slope of the \citet{Bentz13} data, with an average offset toward shorter lags in SDSS-RM quasars over a range of continuum luminosities. Fitting the same SDSS-RM data while fixing the slope to 0.533 results in the same \red{lower normalization $K = 1.24 \pm 0.05$.} For this and all subsequent fitting, we exclude the SDSS-RM data point with the longest lag and smallest fractional uncertainty as an outlier (RMID 781). We also exclude the hyper-variable quasar RMID 017, as it increases in luminosity by a factor of $\sim$10 over the span of the SDSS-RM monitoring \citep{Dexter19}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{RL_SDSS_LHbeta.pdf}
\caption{The $R-L$($\hbox{{\rm H}$\beta$}$) relation for the 44 SDSS-RM AGN, with \hbox{{\rm H}$\beta$}\ time lags from \cite{Grier17} and broad-line $\hbox{{\rm H}$\beta$}$ luminosity from \cite{Shen19}. Black solid and dashed lines show the relation between \hbox{{\rm H}$\beta$}\ time lags and L$_{\rm H\beta}$ from \cite{Kaspi05} for two different fitting methods. The SDSS-RM AGN have lags that fall below the $R-L$(\hbox{{\rm H}$\beta$}) relation.}
\label{sdsshbeta}
\end{figure}
Figure \ref{sdssfit} also includes the total \hbox{$\lambda L_{5100}$}\ without host-galaxy subtraction for each AGN as open circles, as an indication of the typical relative contribution of AGN and galaxy light.
We further demonstrate that the $R-L$ offset is not due to under-subtracted host-galaxy luminosities by examining the $R-L$(\hbox{{\rm H}$\beta$}) relation, presented in Figure \ref{sdsshbeta}. The luminosity from the \hbox{{\rm H}$\beta$}\ emission line is produced by the AGN broad-line region and does not have any galaxy contribution. Since the \cite{Bentz13} sample lacks published \hbox{{\rm H}$\beta$}\ luminosities, we cannot compare that sample with the SDSS-RM $R-L$(\hbox{{\rm H}$\beta$}) relation. Instead, we use the \cite{Kaspi05} best-fit $R-L$(\hbox{{\rm H}$\beta$}) lines that were fit to a subset of the \cite{Bentz13} data, shown as dashed and solid lines in Figure \ref{sdsshbeta}. The SDSS-RM lags show the same general trend of falling below the relation measured from previous RM data.
Finally, to be certain that the different lag-detection methods are not the cause of the offset, we present the $R-L$ relation using ICCF measured lags from SDSS-RM in Figure \ref{sdssiccf}. The ICCF lags fall below the \cite{Bentz13} relation just as seen in the JAVELIN and CREAM lags.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{RL_SDSS_ICCF.pdf}
\caption{The $R-L$ relation for 39 ICCF lags of the SDSS-RM AGN from \cite{Grier17} with host-subtracted \hbox{$\lambda L_{5100}$}\ from \cite{Shen15a}. Five AGN have ICCF lags less than 1 day and are not shown in the figure. The ICCF lags of SDSS-RM AGN have the same offset from the \cite{Bentz13} $R-L$ relation seen in Figure \ref{sdssfit}.}
\label{sdssiccf}
\end{figure}
\subsection{SEAMBH}
The SEAMBH project is an RM campaign spanning 5 years of monitoring \citep{Du16,Du18}. The AGN in the sample were selected from SDSS using a dimensionless accretion rate $\dot{\mathcal{M}}$, derived from the standard thin-disk equations \citep{Wang14b}:
\begin{equation}\label{mscript}
\dot{\mathcal{M}} = 20.1 {\left( L_{44}\over{\cos i} \right)^{3/2}} m_{7}^{-2}
\end{equation}
The inclination of the disk is given by $i$, and we assume $\cos i = 0.75$ \citep{Du18}.
\red{The luminosity and mass are parameterized as $L_{44} = \hbox{$\lambda L_{5100}$} / (10^{44}~\mathrm{ergs~s^{-1}})$ and $m_7 = M_{\rm BH}/(10^{7}\ M_{\odot})$, respectively}.
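As a concrete example, an AGN with $\hbox{$\lambda L_{5100}$} = 10^{44}$~ergs~s$^{-1}$\ and $M_{\rm BH} = 10^{7}\ M_{\odot}$ has $L_{44} = m_7 = 1$, so Equation (\ref{mscript}) with $\cos i = 0.75$ gives $\dot{\mathcal{M}} = 20.1\,(1/0.75)^{3/2} \approx 31$.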
The SEAMBH AGN were selected to have $\dot{\mathcal{M}} > 3$; the sample of 29 AGN has $10< \dot{\mathcal{M}} <10^{3}$, giving them higher accretion rates than the general AGN population. \red{For comparison, the \cite{Bentz13} and \cite{Grier17} samples have a median $\dot{\mathcal{M}}$ \textless\ 0.50.} Spectroscopic and photometric observations were made over 5 years with the Lijiang 2.4 m telescope, averaging 90 nights per object. Typical exposure times were 10 minutes for photometry and 1 h for spectroscopy. \cite{Du16,Du18} used an empirical relation to determine the host-galaxy contribution to the spectrum based on \hbox{$\lambda L_{5100}$}, derived by \cite{Shen11} for SDSS fiber spectra:
\begin{equation}\label{Lest}
{L_{5100}^{\rm host} \over L_{5100}^{\rm AGN}} = 0.8052 - 1.5502x + 0.912x^2 - 0.1577x^3
\end{equation}
Here $x = L^{\mathrm{tot}}_{5100} / (10^{44}~\mathrm{ergs~s^{-1}})$. For spectra with $L^{\mathrm{tot}}_{5100} > 1.053 \times 10^{44}$~ergs~s$^{-1}$, the host-galaxy contribution was assumed to be zero.
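For illustration, this correction can be applied with a short function like the following sketch (function names are ours; the cutoff follows the validity limit just described):
\begin{verbatim}
import numpy as np

def host_fraction(L5100_tot):
    # Host-to-AGN ratio from the empirical
    # relation above; L5100_tot in ergs/s.
    x = L5100_tot / 1e44
    if x > 1.053:
        return 0.0
    return (0.8052 - 1.5502*x
            + 0.912*x**2 - 0.1577*x**3)

def agn_luminosity(L5100_tot):
    # L_tot = L_AGN * (1 + L_host/L_AGN)
    return L5100_tot / (1.0 + host_fraction(L5100_tot))

# e.g., L_tot = 5e43 ergs/s gives
# L_host/L_AGN ~ 0.24, so L_AGN ~ 4.0e43 ergs/s.
\end{verbatim}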
The $R-L$ relation for the 29 SEAMBH \hbox{{\rm H}$\beta$}\ lags measured by \cite{Du16,Du18} is presented in Figure \ref{dufit}. Similar to the SDSS-RM data in Figure \ref{sdssfit}, the measured lags are shorter than expected from Equation (\ref{BRL}), resulting in an $R-L$ relation with a shallower slope \red{$\alpha = 0.29 \pm 0.07$ and a lower normalization $K = 1.24 \pm 0.04$}. The SEAMBH data, like the SDSS-RM data, cover a limited dynamic range on both axes, and also appear consistent with a slope of $\alpha \simeq 0.5$ with an average offset toward shorter lags over a broad range of continuum luminosity.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{RL_Du.pdf}
\caption{The $R-L$ relation for the 29 \hbox{{\rm H}$\beta$}\ time lags measured by \cite{Du16,Du18}. The time lags were measured using ICCF and include 19 lags from \cite{Du16} and 10 lags from \cite{Du18}. The AGN luminosities (\hbox{$\lambda L_{5100}$}) were calculated using a galaxy-contribution estimate based on Equation (\ref{Lest}). Our best-fit line, shown as a dashed red line, gives a slope $\alpha = 0.29 \pm 0.07$ and a normalization $K = 1.24 \pm 0.04$, indicating that the SEAMBH AGN (like the SDSS-RM AGN) follow a relation that is significantly below the previous \cite{Bentz13} $R-L$ relation.}
\label{dufit}
\end{figure}
\section{Simulating Observational Bias on the $R-L$ relation}
\red{The effects of the SDSS-RM observational limits on the observed $R-L$ relation are not easily predictable. For instance, the sample is magnitude-limited in the $i$ band, rather than limited by the luminosity used for the $R-L$ relation. There are also constraints on the range of measurable lags due to the duration and cadence of the observations.} In order to examine how observational biases affect the $R-L$ relation, we simulated an $R-L$ relation starting from \cite{Bentz13} \red{(Equation \ref{BRL})} and including observational errors and limits appropriate for the SDSS-RM monitoring campaign.
\subsection{General Simulation}
To create a representative sample of AGN, we generated $10^7$ random AGN luminosities in the range 10$^{42}$--10$^{46}$~ergs~s$^{-1}$\ following the \red{$i$-band} luminosity function from \red{\cite{Ross13}}:
\begin{equation}\label{LD}
\Phi = {\Phi^*\over{ \left(L/ L_{B}^*\right)^{3.37} + \left(L/L_B^* \right)^{\red{1.16}} } }
\end{equation}
The $L^{3.37}$ and \red{$L^{1.16}$} terms represent the bright and faint ends of the distribution, respectively, with a break luminosity \red{$L_{B}^* = 10^{44.62}$}~ergs~s$^{-1}$.
\red{This results in a distribution of AGN luminosities in the observed $i$-band. To shift the observed $i$-band luminosities to \hbox{$\lambda L_{5100}$}, we use the average quasar SED of \cite{Richards06} and a randomly assigned redshift: each simulated AGN was assigned a redshift drawn at random from the 44 SDSS-RM AGN, spanning $0.2<z<1.2$.}
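Schematically, luminosities can be drawn from this double power law by rejection sampling, as in the following sketch (the normalization $\Phi^*$ cancels, and the ranges are those quoted above):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def phi(logL, logLb=44.62, a=3.37, b=1.16):
    # Double power-law luminosity function;
    # Phi* cancels in the acceptance ratio.
    x = 10.0 ** (logL - logLb)
    return 1.0 / (x**a + x**b)

def sample_logL(n, lo=42.0, hi=46.0):
    # phi decreases monotonically over [lo, hi],
    # so phi(lo) bounds the acceptance test.
    out = np.empty(0)
    while out.size < n:
        l = rng.uniform(lo, hi, n)
        u = rng.uniform(0.0, phi(lo), n)
        out = np.append(out, l[u < phi(l)])
    return out[:n]
\end{verbatim}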
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{sim_final.pdf}
\caption{One iteration of the simulated \hbox{{\rm H}$\beta$}\ $R-L$ relation. Points in purple represent the relation for AGN in sample S1, which includes only the intrinsic scatter in \cite{Bentz13}. Points in blue represent the AGN in sample S2, which takes into account observational errors and observational limits typical of SDSS-RM. The points in red are 44 random points chosen from sample S2; this accounts for the number of lags detected by SDSS-RM. The red line shows the best fit for the points in red (S3).}
\label{simulation}
\end{figure}
We calculated the expected radius of the \hbox{{\rm H}$\beta$}\ BLR (given as $\tau = R/c$ in days) for each \red{\hbox{$\lambda L_{5100}$}} using the \cite{Bentz13} relation, including an intrinsic scatter of $\sigma_{\rm int} = 0.19$. The BLR radius for each of the $10^7$ simulated AGN was initially calculated following the relation $\log\tau = K + \alpha(\log L - 44 ) + R(\sigma_{\rm int})$, where $R(\sigma_{\mathrm{int}})$ is a random number drawn from a normal distribution with a standard deviation of $\sigma_{\rm int}$. For a given luminosity, this process produced $\tau$ above or below the \cite{Bentz13} line. We designate this sample S1, shown in Figure \ref{simulation} as purple data points. Figure \ref{simulation} presents one iteration of the complete simulation.
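For reference, this step amounts to a single random draw per AGN (an illustrative sketch):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

K, ALPHA, S_INT = 1.527, 0.533, 0.19

def draw_logtau(logL):
    # log10 rest-frame lag from the Bentz et al.
    # relation plus intrinsic scatter (sample S1).
    return (K + ALPHA * (logL - 44.0)
            + rng.normal(0.0, S_INT, np.size(logL)))
\end{verbatim}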
\subsection{Observational Limits}
The SDSS-RM observational selection effects were applied to the simulations by adding observational uncertainties as well as lag and magnitude limits to the S1 sample. First, observational uncertainties were assigned to each of the simulated AGN by randomly drawing luminosity and lag uncertainties ($\sigma_L$ and $\sigma_\tau$) from the actual 44 SDSS-RM \hbox{$\lambda L_{5100}$}\ and $\tau$ measurements \citep{Shen15a,Grier17}. We then replicated the sample limits of SDSS-RM by imposing the same lag and magnitude constraints as the observations. Simulated AGN were restricted to observed-frame lags $4<\tau_{\rm obs}<75$~days, and $i$-band magnitude \textless\ 21.7.
The average cadence for SDSS-RM observations was 4 days, which places a lower limit on the possible observed-frame time lags. Conversely, the upper limit of 75 days comes from the longest measured SDSS-RM time lag, set by the monitoring duration of 180 days and the need for overlap between the continuum and emission-line light curves.
While the observed-frame lag limit can be implemented by a simple redshift conversion, several additional steps were required to fully emulate the magnitude limits of the observed SDSS-RM sample. The SDSS-RM parent sample of quasars is restricted to total (AGN+host) magnitudes of $i<21.7$, but the S1 sample has AGN-only luminosities at rest-frame 5100 \AA. We add a host-galaxy contribution to the simulated AGN luminosities following Equation (\ref{Lest}) (measured for similar SDSS AGN spectra by \citealp{Shen11}). We assume a 0.35~dex scatter in this relation, since 0.35~dex is the standard deviation of the actual host-galaxy luminosities of the SDSS-RM quasars. \red{We convert this total \hbox{$\lambda L_{5100}$}\ to $i$-band magnitude before implementing a magnitude cutoff.} However, there is an additional magnitude dependence of the lag detection that must be considered, as lags are easier to recover for brighter AGN: the fraction of AGN from SDSS-RM with detected lags by \cite{Grier17} is roughly 1/3 as high for $i>20$ AGN as for $i<20$ AGN. We account for this by removing all AGN with $i>21.7$ and keeping all AGN with $i<20$, and only keeping 1/3 of AGN with $20<i<21.7$.
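These cuts can be collected in a single selection function, sketched below with the thresholds described above (the random 1/3 retention mimics the reduced lag-detection rate for fainter quasars):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)

def sdssrm_select(tau_rest, z, imag):
    # Observed-frame lag window and magnitude cut.
    tau_obs = tau_rest * (1.0 + z)
    keep = ((tau_obs > 4.0) & (tau_obs < 75.0)
            & (imag < 21.7))
    # Keep ~1/3 of the 20 < i < 21.7 AGN.
    faint = imag > 20.0
    lucky = rng.uniform(size=imag.size) < 1.0 / 3.0
    return keep & (~faint | lucky)
\end{verbatim}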
We designate this ``observation-limited" sample S2, shown as blue points in Figure \ref{simulation}. The boundaries in rest-frame lag and luminosity are smooth rather than sharp due to the range of redshifts applied to the simulated sample, and are slightly tilted because both the observed-frame lag and magnitude limits depend on redshift to convert to the rest-frame lag and luminosity.
Finally, to account for the limit in the number of actual lag detections in SDSS-RM (44 measured lags), we randomly selected 44 points from S2; we designate this ``number-limited" sample S3. The S3 sample for one of the simulations is shown as the red points in Figure \ref{simulation}.
\subsection{Fitting the Simulated $R-L$ relation}
We repeated the random selection of 44 points and best-fit line \red{2000} times to see how observing specific AGN affected the slope of the simulated relation. We used the Python package \red{\texttt{PyMC3}} to determine the best-fit $R-L$ relation for each of the \red{2000} simulations, with one example of this fit shown by the dashed red line in Figure \ref{simulation}. The distribution of best-fit line parameters from the \red{2000} simulations is presented in Figure \ref{slopecont}. The simulated best-fit $R-L$ relations have a median slope \red{$0.43^{+0.04}_{-0.04}$}, and a median normalization of \red{$1.42^{+0.04}_{-0.05}$}; here the plus and minus values represent the 16th and 84th percentiles of the distribution of slopes and normalizations, not the uncertainty in the fit. The slope and normalization are consistent \red{(within 2.6$\sigma$ and 2.7$\sigma$, respectively)} with the \cite{Bentz13} $R-L$ relation (represented by the black point in Figure \ref{slopecont}). Only \red{\textless\ $1\%$} of the simulations have best-fit slopes and normalizations that are as extreme as the best-fit $R-L$ relation for the observed SDSS-RM data. This result suggests that observational biases are unlikely to be the main cause of the different $R-L$ relation represented by SDSS-RM AGN compared to previous RM samples.
\red{To examine if the number of detected lags by SDSS-RM affects the $R-L$ relation, we can increase the number of selected points to reflect the future number of detected \hbox{{\rm H}$\beta$}\ lags. The Black Hole Mapper in the upcoming SDSS-V will allow RM of over 1000 quasars \citep{Kollmeier17}. We estimate that this will increase the number of \hbox{{\rm H}$\beta$}\ lags to $\sim$ 100. Here we assume the SDSS-RM observational effects applied to the simulations are also a reasonable approximation for the SDSS-V observations. The distribution of best-fit lines for the 100 random points has a median slope \red{$0.43^{+0.03}_{-0.03}$} and normalization of \red{$1.42^{+0.03}_{-0.03}$}. Here the best-fit slope and normalization are inconsistent \red{(by 3.5$\sigma$)} with the \cite{Bentz13} best-fit line, suggesting that a larger sample will better constrain the effects of observational bias.} The narrower distribution of best-fit lines is even less likely than the smaller simulated sample to match the observed SDSS-RM $R-L$ relation, with less than 1\% of the simulated best-fit $R-L$ relations as extreme as the best fit to the SDSS-RM observations.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{contour_44_2000.pdf}
\includegraphics[width=\columnwidth]{contour_100_2000.pdf}
\caption{\textit{Top:} The distribution of slopes and normalizations from fitting 44 random points from our simulated sample, shown as red contours that include \red{38\% (0.5$\sigma$),} 68\% (1$\sigma$), 86\% (1.5$\sigma$), 95\% (2$\sigma$), 98\% (2.5$\sigma$) and 99\% (3$\sigma$) of the distribution. The red point represents the fitting results for SDSS-RM (Figure \ref{sdssfit}). The black point represents the result from \cite{Bentz13}. The dark red point represents the fitting result for SDSS-RM keeping the slope fixed to be the same as \cite{Bentz13}. The SDSS-RM measurement falls outside the \red{3$\sigma$ contour and is only \textless\ 1$\%$ likely to be produced by the simulation of observational bias. The \cite{Bentz13} measurement falls just outside the 2$\sigma$ contour and is consistent with 5\% of the simulated $R-L$ parameters.}
{\textit{Bottom:} The distribution of slopes and normalizations for \red{100} random points from the simulated sample, using the same enclosed probabilities for the contour levels. \red{The SDSS-RM point is outside the 3$\sigma$ contour and so is again only $<1$\% likely to be consistent with the simulation}. In both cases, observational bias is insufficient to explain the $R-L$ offsets of the SDSS-RM quasars.}}
\label{slopecont}
\end{figure}
Since slope and normalization are degenerate parameters in the best-fit $R-L$ relation, and considering the limited range in SDSS-RM luminosities, we additionally repeated the fitting procedure with slope fixed to the \cite{Bentz13} value of $\alpha = 0.533$ and only allowed the normalization $K$ to vary. This effectively tests if the simulations of observational bias can reproduce the $R-L$ offset of the SDSS-RM AGN. The mean normalization for the distribution is \red{$K = 1.53^{+0.03}_{-0.03}$, again consistent with $K = 1.527$ from \cite{Bentz13} and \textgreater\ 5$\sigma$ inconsistent with the observed $R-L$ offset of the SDSS-RM data.}
In general the simulations of observational bias produce an $R-L$ relation that is statistically consistent with the \cite{Bentz13} best-fit relation, with only marginally flatter slopes and lower normalizations. \red{Less than 1\%} of the simulations produce best-fit $R-L$ relations that are as extreme as the observed SDSS-RM and SEAMBH $R-L$ data. \cite{Li19} arrived at a similar conclusion using independent light-curve simulations, additionally noting that JAVELIN lags measured from SDSS-RM data are unlikely to include enough false positive detections to strongly influence the measured $R-L$ relation.
Our simulations suggest that observational bias is unlikely to be the main cause of the SDSS-RM and SEAMBH AGN lags falling below the \cite{Bentz13} $R-L$ relation. In the next section we investigate the possibility that $R-L$ offsets are instead driven by physical AGN properties.
\section{Properties of Quasars Offset from the $R-L$ relation}
The $R-L$ differences between SDSS-RM and \cite{Bentz13} may exist because the SDSS-RM sample spans a broader range of quasar properties \citep{Shen15a,Shen19}. The SEAMBH sample also occupies a very different parameter space compared to the \cite{Bentz13} sample, as SEAMBH AGN were specifically selected to have higher Eddington ratios.
In this section, we investigate how the offset from the \cite{Bentz13} $R-L$ relation depends on various AGN properties. We define this offset as the ratio between the measured rest-frame \hbox{{\rm H}$\beta$}\ lag $\tau_{\mathrm{obs}}$ and the expected time lag $\tau_{R-L}$ from Equation (\ref{BRL}) for the given AGN \hbox{$\lambda L_{5100}$}. We calculate the offset ($\tau_{\mathrm{obs}}/\tau_{R-L}$) for each of the AGN in \cite{Grier17}, \cite{Bentz13}, and \cite{Du16,Du18}. \red{In the subsequent analyses, we report the significance of each correlation in terms of the number of $\sigma$ by which its slope is inconsistent with zero, using 3$\sigma$ as our threshold for a significant correlation.}
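For transparency, the offset and the correlation statistics used below can be computed as in this sketch, which for simplicity ignores the measurement uncertainties that our actual fits include:
\begin{verbatim}
import numpy as np
from scipy import stats

def rl_offset(tau_obs, logL, K=1.527, a=0.533):
    # log10(tau_obs / tau_RL) relative to the
    # Bentz et al. (2013) relation.
    return (np.log10(tau_obs)
            - (K + a * (logL - 44.0)))

def report(offset, x):
    rho, p = stats.spearmanr(offset, x)
    fit = stats.linregress(x, offset)
    nsig = abs(fit.slope) / fit.stderr
    return rho, p, fit.slope, nsig
\end{verbatim}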
\subsection{$R-L$ Offset with Accretion Rate}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{DiffvsEdd.pdf}
\includegraphics[width=\columnwidth]{DiffvsMdot.pdf}
\caption{The $R-L$ offset $\tau_{\mathrm{obs}}/\tau_{R-L}$\ of the three RM samples with Eddington ratio $\lambda_{\mathrm{Edd}}$ (top) and the pseudo accretion rate $\dot{\mathcal{M}}$ (see Equation \ref{mscript}). In both cases there is a significant anti-correlation between the two quantities, with the best-fit lines shown in red. The best-fit lines have slopes $m$ that are \textgreater\ 5$\sigma$ different from zero, and a Spearman's $\rho \sim -0.50$ with a null-probability value of $p \sim 10^{-11}$. However, these trends are \red{difficult to interpret} since the two axes are self-correlated. We find much weaker correlations when comparing $R-L$ offsets to uncorrelated quantities associated with accretion rate, as seen in Figures \ref{LMdiff} and \ref{refdiff}.}
\label{eddingtondiff}
\end{figure}
\cite{Du16,Du18} propose that the $R-L$ offsets are driven by accretion rate, with more rapidly accreting AGN having shorter lags at fixed \hbox{$\lambda L_{5100}$}. They suggest that radiation pressure in rapidly accreting AGN causes the inner disk to be thicker (a ``slim'' disk), causing self-shadowing of the disk emission that reduces the ionizing radiation received by the BLR and thus decreases its radius \citep{Wang14c}. The self-shadowing does not affect the optical continuum emission used in the $R-L$ relation, so the broad-line lags are shorter than expected for a given observed \hbox{$\lambda L_{5100}$}. However, a correlation between offset and accretion rate is expected not just from quasar properties but simply because the axes are correlated: the y-axis ($\tau_{\mathrm{obs}}/\tau_{R-L}$) is a log-ratio of $\tau/\hbox{$\lambda L_{5100}$}^{0.5}$, while the x-axes ($\lambda_{\rm Edd}$, $\dot{\mathcal{M}}$) include log-ratios of \hbox{$\lambda L_{5100}$}/$\tau$ and $\hbox{$\lambda L_{5100}$}^{1.5}/\tau^2$, respectively.
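This induced anti-correlation can be made explicit with a short calculation. Writing $L \equiv \hbox{$\lambda L_{5100}$}$ and suppressing constants,
\begin{equation}
\log\left(\tau_{\mathrm{obs}}/\tau_{R-L}\right) = \log\tau - 0.533\log L,
\end{equation}
while $\lambda_{\rm Edd} \propto L/M_{\rm BH}$ with $M_{\rm BH}\propto \tau v^2$ gives
\begin{equation}
\log\lambda_{\rm Edd} = \log L - \log\tau - 2\log v .
\end{equation}
Even if the scatter in $\log\tau$, $\log L$, and $\log v$ were mutually independent, the covariance between these two quantities would be $-\mathrm{Var}(\log\tau) - 0.533\,\mathrm{Var}(\log L) < 0$, so an apparent anti-correlation arises with no physical input.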
Despite these self-correlations, for direct comparisons to the previous SEAMBH results \citep[see][Figure 5]{Du18} we estimate accretion rates for all three samples using two dimensionless quantities: Eddington ratio \red{(calculated as described in Section 2) } and $\dot{\mathcal{M}}$ (Equation \ref{mscript}, as defined in \citealp{Du16}). The $R-L$ offsets of all three samples as a function of $\lambda_{\rm Edd}$ and $\dot{\mathcal{M}}$ are presented in Figure \ref{eddingtondiff}. Best-fit lines (with slope $m$ and y-intercept $b$ given in the figure legends) indicate significant (\textgreater\ 5$\sigma$) anti-correlations between $R-L$ offset and both estimators of accretion rate, with Spearman's $\rho$ $\sim$ $-0.50$ and $p$ $\sim 10^{-11}$.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{DiffvsFWHM.pdf}
\includegraphics[width=\columnwidth]{Diffvssigma.pdf}
\caption{The $R-L$ offset of AGN in all three samples with FWHM$_{\rm H\beta}$ (top panel) and $\sigma_{\rm H\beta}$ (bottom panel). \red{For the \cite{Bentz13} sample, the linewidths were taken from \cite{Bentz15}.} These observed quantities are related to Eddington ratio, and so are an attempt to connect $R-L$ offsets with accretion rate while avoiding direct self-correlation with $\tau$ on both axes. The red lines show the best-fit relations to the \cite{Grier17} SDSS-RM data, while the blue lines show the best-fit relations to all three samples. The $R-L$ offset is only marginally anti-correlated with the $\hbox{{\rm H}$\beta$}$ line widths in each case.}
\label{LMdiff}
\end{figure}
The anti-correlations in both panels of Figure \ref{eddingtondiff} are qualitatively consistent with the simple self-correlations. To avoid these self-correlations, we instead study the dependence of $R-L$ offsets on accretion rate by using only the components of the Eddington ratio that are not computed directly from the RM lag $\tau$. Since $\lambda_{\rm Edd} \equiv {L_{\rm bol}\over{L_{\rm Edd}}} \propto {{\hbox{$\lambda L_{5100}$}}\over{M_{\rm BH}}}$ and $M_{\rm BH} \propto \tau v_{\rm fwhm}^2$, we examine the $R-L$ offset against two measurements of line width, $v_{\rm fwhm}$ and $v_{\rm \sigma}$, to determine if there are residual correlations beyond the self-correlations induced from $\hbox{$\lambda L_{5100}$}$ and $\tau$ appearing in both axes; this is presented in Figure \ref{LMdiff}. For all samples and for both line-width indicators, there are \red{only marginal (\textless\ 2$\sigma$) anti-correlations between $R-L$ offset and $\hbox{{\rm H}$\beta$}$ broad-line width.}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{color_RL_RFeII.pdf}
\includegraphics[width=\columnwidth]{DiffvsRFeII.pdf}
\caption{The $R-L$ relation for SDSS-RM quasars color-coded by the \hbox{{\rm Fe}\kern 0.1em{\sc ii}}\ effective strength $R_{\mathrm{FeII}}$ (top) and the $R-L$ offset $\tau_{\mathrm{obs}}/\tau_{R-L}$\ versus $R_{\mathrm{FeII}}$ (bottom). Since $R_{\mathrm{FeII}}$ correlates with Eddington ratio \citep{Shen14}, a significant anti-correlation between $R-L$ offset and $R_{\mathrm{FeII}}$ would suggest that, at fixed luminosity, more rapidly accreting AGN have shorter lags. We \red{do not observe a significant anti-correlation between $R-L$ offset and relative iron strength for SDSS-RM quasars, with a best-fit slope 1$\sigma$} consistent with zero, and Spearman's $\rho = -0.11$ and $p = 0.49$.}
\label{refdiff}
\end{figure}
We make a final attempt at studying the relation between $R-L$ offset and accretion rate by using the relative \hbox{{\rm Fe}\kern 0.1em{\sc ii}}\ strength $R_{\mathrm{FeII}} \equiv \mathrm{EW_{FeII}\over{EW_{H\beta}}}$. The relative \hbox{{\rm Fe}\kern 0.1em{\sc ii}}\ strength is one of the ``Eigenvector 1" quantities that separate quasars into different spectral categories \citep{Boroson92}, and in particular $R_{\mathrm{FeII}}$ correlates positively with Eddington ratio \citep{Shen14}. Thus we can use $R_{\mathrm{FeII}}$ as an independent estimate of accretion rate that avoids any self-correlation with $\tau_{\mathrm{obs}}/\tau_{R-L}$. Figure \ref{refdiff} presents the relation between $R-L$ offset and $R_{\mathrm{FeII}}$ for the SDSS-RM AGN of \cite{Grier17}. We find \red{no anti-correlation between offset and $\mathrm{R_{{FeII}}}$, with a slope that is 1$\sigma$ consistent with zero} and Spearman's $\rho = -0.11$ and $p = 0.49$. This is in contrast to the recent work of \cite{Du19}, who found a significant correlation between $R-L$ offset and $\mathrm{R_{{FeII}}}$ using the SEAMBH and \citet{Bentz13} AGN samples. We do find a consistent slope in the relation (\red{$m=-0.24 \pm 0.23$} compared to $m=-0.42 \pm 0.06$ in \citealp{Du19}), and our anti-correlation may be marginal rather than significant due to the limited sample size of SDSS-RM, the different lag uncertainties of JAVELIN, and/or the greater diversity of AGN properties in the SDSS-RM sample.
\subsection{$R-L$ Offset with UV Ionizing Luminosity}
The $R-L$ relation is parameterized with the optical luminosity at rest-frame 5100 \AA, but the response and size of the BLR is governed by the incident ionizing photons \citep[e.g,][]{Davidson72}. In particular, the $\hbox{{\rm H}$\beta$}$ recombination line is driven by the incident luminosity of $E>13.6$~eV photons. The basic photoionization expectation of $R \propto L^{0.5}$ is valid for the optical luminosity only if changes in optical luminosity also correspond to identical changes in the ionizing luminosity\red{; however, the shape of the SED and therefore the ratio of UV and optical luminosities depends on factors such as mass, accretion rate, and spin \citep[e.g.,][]{Richards06}. Modeling of the BLR by \cite{Czerny19}, with the assumption that the BLR radius is determined by the ionizing luminosity or number of incident photons, shows a diversity in the $R-L$ relation due to changing UV/optical luminosity ratios, reproducing the range of observed lags from RM surveys. Additionally, emission-line lags relative to the optical continuum may be underestimated if there is an additional time lag between the UV-continuum and optical-continuum variability, as observed in NGC 5548 \citep{Pei17}. However, lags between the UV and optical continuum are short, so this would only significantly affect emission-line lags on the order of a few days.}
None of our samples have published measurements of the $E>13.6$~eV ionizing luminosity; however, the SDSS-RM sample has luminosity measurements at rest-frame 3000~\AA\ and \hbox{{\rm H}$\beta$}, better probing the (near-)UV compared to the optical \hbox{$\lambda L_{5100}$}. Both of these quantities are shown with the $R-L$ offset of SDSS-RM AGN in Figure \ref{iondiff}. We fit lines to each, finding that there is \red{no anti-correlation between the $R-L$ offset and \hbox{$\lambda L_{3000}$}, with Spearman's $\rho = -0.28$, $p = 0.09$.} The best-fit line finds \red{no significant (1$\sigma$) anti-correlation} between the $R-L$ offset and the $\hbox{{\rm H}$\beta$}$ luminosity with Spearman's $\rho = -0.36$ and $p = 0.02$\red{; additionally,} the $R-L$ relation color-coded by $L_{\rm H\beta}$ (Figure \ref{iondiff} top right) indicates little variation of $L_{\rm H\beta}/\hbox{$\lambda L_{5100}$}$ across the SDSS-RM sample.
The ratio of luminosities of the $\hbox{{\rm [O}\kern 0.1em{\sc iii}{\rm ]}}\lambda$5007 and \hbox{{\rm H}$\beta$}\ emission lines is also frequently used as a proxy for the number of ionizing photons (e.g., \citealp{BPT81,VO87}). While \hbox{{\rm H}$\beta$}\ is a recombination line, $\hbox{{\rm [O}\kern 0.1em{\sc iii}{\rm ]}}$ is a collisionally excited line that requires much harder ionizing photons, with an ionization energy of 55~eV compared to the H ionization energy of 13.6~eV. \red{We find a significant (3.7$\sigma$) correlation between offset and L$_{\rm [OIII]}$/L$_{\rm H\beta}$, shown in Figure \ref{OIIIdiff}, with Spearman's $\rho = 0.36$, $p = 0.02$, and an excess scatter of $\sim$ 0.24~dex.}
\begin{figure*}[!t]
\centering
\includegraphics[width=\columnwidth]{color_RL_L3000.pdf}
\includegraphics[width=\columnwidth]{color_RL_LHBeta.pdf}
\includegraphics[width=\columnwidth]{DiffvsL3000.pdf}
\includegraphics[width=\columnwidth]{DiffvsLHbeta.pdf}
\caption{\textit{Left:} The $R-L$ relation of SDSS-RM AGN color-coded by \hbox{$\lambda L_{3000}$}\ and the $R-L$ offset $\tau_{\mathrm{obs}}/\tau_{R-L}$\ versus \hbox{$\lambda L_{3000}$}, a luminosity measurement closer to the ionizing UV luminosity than the $\hbox{$\lambda L_{5100}$}$ used in the $R-L$ relation. The sample spans a fairly narrow range of \hbox{$\lambda L_{3000}$}/\hbox{$\lambda L_{5100}$}\ (top left) and the $R-L$ offset is \red{not anti-correlated with \hbox{$\lambda L_{3000}$}\ (slope 1$\sigma$ from zero and Spearman's $\rho = -0.28$ and $p = 0.09$)}. \textit{Right:} The $R-L$ relation of SDSS-RM AGN with the $\hbox{{\rm H}$\beta$}$ broad-line luminosity and the $R-L$ offset versus the $\hbox{{\rm H}$\beta$}$ broad-line luminosity, a proxy for the ionizing luminosity that drives $\hbox{{\rm H}$\beta$}$ recombination. Once more the sample spans a fairly narrow range of $L_{\rm H\beta}/\hbox{$\lambda L_{5100}$}$, \red{and} the $R-L$ offset and $L_{\rm H\beta}$ are \red{not significantly correlated, with a slope $m$ that is 1$\sigma$ consistent with zero}, and Spearman's $\rho = -0.36$ and $p=0.02$, with excess scatter of $\sim$0.25~dex about the best-fit line.}
\label{iondiff}
\end{figure*}
We conclude that the shape of the UV/optical SED is likely to play a role in the $R-L$ offset of AGN, as evident from the correlation with $L_{\rm [OIII]}/L_{\rm H\beta}$. The \red{lack of significant} correlations with \hbox{$\lambda L_{3000}$}\ and $L_{\rm H\beta}$ may be because these luminosities do not accurately represent the luminosity of far-UV ($\lambda<912$ \AA) ionizing photons. The $L_{\rm [OIII]}/L_{\rm H\beta}$ ratio is likely tied to the broader shape of the AGN SED, which in turn is related to the accretion rate and/or black hole spin \red{\citep[e.g.,][]{Du18,Czerny19}}. It is a bit surprising that we find a significant correlation of $R-L$ offset with $L_{\rm [OIII]}/L_{\rm H\beta}$ but only a marginal anti-correlation with $R_{\mathrm{FeII}}$, given the observed anti-correlation between $\hbox{{\rm [O}\kern 0.1em{\sc iii}{\rm ]}}$ equivalent width and $R_{\mathrm{FeII}}$ \citep[Figure 1 of][]{Shen14}. This may be due to the limited sample size of SDSS-RM AGN, and/or to the large uncertainties in its measured lags. Regardless of the root cause of the UV/optical SED changes, it would be valuable to add far-UV observations to the samples of SDSS-RM and SEAMBH AGN in order to directly compare their $R-L$ offsets with the luminosity of photons responsible for ionizing the BLR.
\section{Conclusions}
While previous RM studies revealed a tight ``$R-L$'' relation between the broad-line radius $R=c\tau$ and the optical luminosity \hbox{$\lambda L_{5100}$}, more recent studies (SDSS-RM and SEAMBH) frequently find shorter lags than expected for a given optical luminosity. We use Monte Carlo simulations that mimic the SDSS-RM survey design to show that the $R-L$ offsets are not solely due to observational bias. Instead, we find that AGN $R-L$ properties correlate most closely with AGN spectral properties: at fixed \hbox{$\lambda L_{5100}$}, AGN have lower $\tau$ with lower L$_{\rm [OIII]}$/L$_{\rm H\beta}$. The correlation of $R-L$ offset with L$_{\rm [OIII]}$/L$_{\rm H\beta}$ is likely tied to changes in the UV/optical spectral shape. A more complete understanding of AGN $R-L$ properties will likely come from observations of the UV SED of RM AGN that directly measure the luminosity and shape of the ionizing continuum responsible for the AGN broad-line region.
\acknowledgments
LCH acknowledges support from the National Science Foundation of China (11721303, 11991052) and the National Key R\&D Program of China (2016YFA0400702). GFA, JRT, and YH acknowledge support from NASA grants HST-GO-15260.001-A and HST-GO-15650.002-A. YS acknowledges support from an Alfred P. Sloan Research Fellowship and NSF grant AST-1715579. KH acknowledges support from STFC grant ST/R000824/1. Funding for SDSS-III was provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{color_RL_ROIII.pdf}
\includegraphics[width=\columnwidth]{DiffvsROIII.pdf}
\caption{The $R-L$ relation of SDSS-RM color-coded by $L_{\rm [OIII]}/{L_{\rm H\beta}}$ (top) and the $R-L$ offset $\tau_{\mathrm{obs}}/\tau_{R-L}$\ with $L_{\rm [OIII]}/{L_{\rm H\beta}}$ (bottom), an indicator of the far-UV ionizing flux present. AGN with larger (negative) $R-L$ offsets typically have lower $L_{\rm [OIII]}/{L_{\rm H\beta}}$, and there is a significant correlation between $\tau_{\mathrm{obs}}/\tau_{R-L}$\ and $L_{\rm [OIII]}/{L_{\rm H\beta}}$ with a slope \red{3.7$\sigma$} different from zero and a Spearman's $\rho = 0.36$ and $p = 0.02.$}
\label{OIIIdiff}
\end{figure}
SDSS-III was managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
This work is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The authors recognize the cultural importance of the summit of Maunakea to a broad cross section of the Native Hawaiian community. The astronomical community is most fortunate to have the opportunity to conduct observations from this mountain.
We thank the Bok and CFHT Canadian, Chinese, and French TACs for their support. This research uses data obtained through the Telescope Access Program (TAP), which is funded by the National Astronomical Observatories, Chinese Academy of Sciences, and the Special Fund for Astronomy from the Ministry of Finance in China.
\usepackage{subfigure}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}
\newtheorem{lem}[thm]{Lemma}
\renewcommand{\ss}[1]{{\sf{#1}}}
\DeclareMathOperator{\tr}{Tr}
\newcommand{\bra}[1]{\langle#1|}
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\tn}[1]{^{\otimes #1}}
\newcommand\id{\ensuremath{\mathbbm{1}}}
\newcommand{\correction}[1]{{\color{blue}#1}}
\newcommand{\remove}[1]{\ifmmode\text{\sout{\ensuremath{\color{red}#1}}}\else{\color{red}\sout{#1}}\fi}
\newcommand{\ideal}[1]{\ensuremath{\mathcal{#1}}}
\newcommand{\noisy}[1]{\ensuremath{\tilde{\ideal{#1}}}}
\newcommand{\fps}[1]{|#1\,\%}
\begin{document}
\title{Entangling logical qubits with lattice surgery}
\date{\today}
\author{Alexander Erhard}
\thanks{These authors contributed equally to this work. Contact: alexander.erhard@uibk.ac.at, hendrik.poulsen-nautrup@uibk.ac.at}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Hendrik Poulsen Nautrup}
\thanks{These authors contributed equally to this work. Contact: alexander.erhard@uibk.ac.at, hendrik.poulsen-nautrup@uibk.ac.at}
\affiliation{Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Michael Meth}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Lukas Postler}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Roman Stricker}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Martin Ringbauer}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Philipp Schindler}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Hans J. Briegel}
\affiliation{Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\affiliation{Fachbereich Philosophie, Universit{\"a}t Konstanz, Fach 17, 78457 Konstanz, Germany}
\author{Rainer Blatt}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, 6020 Innsbruck, Austria}
\author{Nicolai Friis}
\affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, 1090 Vienna, Austria}
\affiliation{Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author{Thomas Monz}
\affiliation{Institute for Experimental Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\affiliation{Alpine Quantum Technologies GmbH, 6020 Innsbruck, Austria}
\begin{abstract}
Future quantum computers will require quantum error correction for faithful operation. The correction capabilities come with an overhead for performing fault-tolerant logical operations on the encoded qubits. One of the most resource efficient ways to implement logical operations is lattice surgery, where groups of physical qubits, arranged on lattices, can be merged and split to realize entangling gates and teleport logical information. Here, we report on the experimental realization of lattice surgery between two topologically encoded qubits in a 10-qubit ion trap quantum information processor. In particular, we demonstrate entanglement between two logical qubits and we implement logical state teleportation.
\end{abstract}
\maketitle
\vspace*{-4.0mm}
The development of quantum computing architectures from early designs and current noisy intermediate-scale quantum (NISQ) devices~\cite{Preskill1997} to full-fledged quantum computers hinges on achieving fault-tolerance using quantum error correction (QEC)~\cite{Preskill1997, NielsenChuang2000}. The basis of QEC is storing and manipulating quantum information using logical qubits. A number of experiments have demonstrated significant technological progress towards QEC~\cite{CoryEtAl1998, KnillLaflammeMartinezNegrevergne2001, ChiaveriniEtal2004, BoulantViolaFortunatoCory2005, ZhangGangloffMoussaLaflamme2011, WoottonLoss2018}, including the creation of non-trivial QEC codes~\cite{BellHerreraMartiTameMarkhamWadsworthRarity2014, TakitaEtAl2017}, error detection~\cite{Kelly-Martinis2015, LinkeEtAl2017, Andersen2019}, correction of errors~\cite{AokiEtAl2009, ReedEtal2012, WaldherrEtAl2014, OfekEtAl2016} and qubit loss~\cite{StrickerEtal2020}, and operations on single~\cite{ZhangLaflammeSuter2012, NiggEtAl2014, Barends-Martinis2014, HeeresEtAl2017, Gong-Pan2019, HuEtAl2019} and on two logical qubits in non-topological codes~\cite{ChouEtal2018, HarperFlammia2019}.
The most promising road towards QEC is offered by topological codes, such as the surface code~\cite{Kitaev2003,DennisKitaevLandahlPreskill2002,FowlerMariantoniMartinisCleland2012}, which require only short-range interactions in 2D architectures. Nevertheless, the implementation of encoded operations remains a major challenge. Performing arbitrary logical operations requires costly techniques, including transversal gates~\cite{GottesmanPhD1997}, teleported gates~\cite{Gottesman1999}, and magic state distillation~\cite{BravyiKitaev2005}. Recent theoretical advances led to the development of lattice surgery (LS)~\cite{HorsmanFowlerDevittVanMeter2012,PoulsenNautrupFriisBriegel2017, GutierrezMuellerBermudez2019}, promising to reduce this complexity~\cite{Litinski2019}, while maintaining 2D layouts. In LS, the QEC code itself is altered by merging and splitting initially separate encodings, rather than operating on all physical qubits. Such modifications can be used to efficiently manipulate logical qubits, or to adapt the robustness to different noise processes~\cite{PoulsenNautrupDelfosseDunjkoBriegelFriis2019}. LS further enables entanglement generation between logical qubits and can be complemented with measurement-based protocols~\cite{raussendorf2001one,raussendorf2007fault,lanyon2013measurement} for logical state teleportation and manipulation~\cite{PoulsenNautrupFriisBriegel2017}. Here, we report the experimental implementation of LS using $10$ trapped ions to entangle two logical qubits encoded in two $4$-qubit surface codes~\cite{FowlerMariantoniMartinisCleland2012}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{Fig_1_data_rough_small.pdf}
\caption{\textbf{Experimental surface code lattice surgery.}
Experimental results and schematics for LS between $Z$-type (rough) boundaries implementing a logical joint measurement $M^{\pm}_\mathrm{XX}=\mathbb{I}\pm X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$ to generate a logical Bell state. We use the error detection capabilities of the code and post-select measurements with valid code stabilizers, which are presented in light colored bars.
\textbf{Encoded:} Two surface codes defined on $2\times 2$ lattices with average code stabilizer values of $\langle |S_i|\rangle=0.868(4)$ (error is calculated from individual stabilizer errors) where $X$-stabilizers and $Z$-stabilizers in~\cref{eq:sc_stabilizer} are associated with orange and aquamarine faces, respectively. We observe (raw$|$post selected) state fidelities $\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{A}})=93.8(4)\fps{99.3(2)}$ and ${\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{B}})=93.4(5)\fps{99.4(2)}}$ for the encodings $\ket{0^\mathrm{A}_\mathrm{L}}$, $\ket{0^\mathrm{B}_\mathrm{L}}$, respectively.
Logical operators are products of Pauli operators connecting opposite boundaries (see \cref{eq:sc_logical}).
\textbf{Merged:} Stabilizers along the boundaries are measured (red) using ancillas $A_1,A_2$ such that ${S_6^\mathrm{M}S_7^\mathrm{M}=X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}}$. The merged code~\cref{eq:sc_merged_rough} encodes a single logical qubit $\ket{0_\mathrm{L}^\mathrm{M}}$ corresponding to the logical operator $Z_\mathrm{L}^\mathrm{M}=Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}$ in~\cref{eq:sc_merged_logic_rough}. We observe average stabilizer values and logical state fidelities of $\langle|S_i|\rangle=0.669(8)$, $\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{M}})=86.4(1.0)\fps{97.9(5)}$, respectively.
\textbf{Split:} In order to split the merged code while preserving the eigenstate of $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$, one boundary stabilizer of the original code is measured (green) reusing ancilla $A_1$. In this way, we recover the original codes with average stabilizer values of $\langle|S_i|\rangle=0.603(3)$ which are now in a logical Bell state $\ket{\phi_\mathrm{L}^+}$ with fidelity $\mathcal{F}(\ket{\phi_\mathrm{L}^+})=58.0(1.6)\fps{75.3(1.6)}$.}
\label{fig:sc_ls}
\end{figure*}
\noindent \textbf{Surface code}.
One of the most prominent examples of a QEC code is the surface code~\cite{Kitaev2003,DennisKitaevLandahlPreskill2002,FowlerMariantoniMartinisCleland2012,raussendorf2007fault} which has error thresholds of up to $1\%$~\cite{WangFowlerHollenberg2011}. The surface code has a simple description within the stabilizer formalism~\cite{GottesmanPhD1997}, as we discuss in the following (see Appendix~\ref{app:intro_qec} for more details).
Here, we consider the minimal instance of a surface code \textemdash\ a 4-qubit code encoding a single logical qubit \textemdash\ as the central component of our experimental implementation. The code can be represented graphically, where the physical qubits are the vertices of a $2\times 2$ bicolorable lattice, as shown in~\cref{fig:sc_ls}~(\textbf{Schematic Encoded}) for two initially separate logical qubits labelled $A$ and $B$. Depending on the color, faces are associated with products of either Pauli-$X$ or -$Z$ operators of the adjacent physical qubits. In~\cref{fig:sc_ls}~(\textbf{Schematic Encoded}), for example, the central orange plaquettes can be associated with the operators $X_1X_2X_3X_4$ and $X_5X_6X_7X_8$. The resulting operators are called \emph{stabilizers} and form a set (group) of operations \textemdash\ the \emph{stabilizer code} $\mathcal{S}^\textrm{A/B}$ \textemdash\ under multiplication~\footnote{Note that we choose a negative sign for some stabilizers because this is advantageous for our implementation.},
\begin{equation}\label{eq:sc_stabilizer}
\begin{split}
\mathcal{S}^\mathrm{A}&=\langle S_1^\mathrm{A},S_2^\mathrm{A},S_3^\mathrm{A}\rangle=\langle -Z_1Z_2,-Z_3Z_4,+X_1X_2X_3X_4\rangle,\\
\mathcal{S}^\mathrm{B}&=\langle S_1^\mathrm{B},S_2^\mathrm{B},S_3^\mathrm{B}\rangle=\langle -Z_5Z_6,-Z_7Z_8,+X_5X_6X_7X_8\rangle.
\end{split}
\hspace{5mm}
\raisetag{1.6\normalbaselineskip}
\end{equation}
The logical states $\ket{\psi_\mathrm{L}^\mathrm{A/B}}$ spanning the respective code spaces for $A$ and $B$ are defined as the simultaneous eigenstates of the stabilizers, i.e., $S_i^\textrm{A/B}\ket{\psi_\mathrm{L}^\mathrm{A/B}}=\ket{\psi_\mathrm{L}^\mathrm{A/B}}$, $\forall i\in \{1,2,3\}$. The encoded logical qubits can be associated with logical $X$ and $Z$ operators that anti-commute with each other and commute with all stabilizers. Logical operators are defined up to multiplication with other logical operators, stabilizers and the imaginary unit~$i$. Therefore, the sets of logical operators are defined as
\begin{align}
\begin{split}
\mathcal{L}^\mathrm{A}&=\langle i, Z^\mathrm{A}_\mathrm{L}, X^\mathrm{A}_\mathrm{L}\rangle/\mathcal{S}^\mathrm{A} = \langle i, Z_1Z_3,X_1X_2\rangle/\mathcal{S}^\mathrm{A},
\label{eq:sc_logical}\\
\mathcal{L}^\mathrm{B}&=\langle i, Z^\mathrm{B}_\mathrm{L}, X^\mathrm{B}_\mathrm{L}\rangle/\mathcal{S}^\mathrm{B} = \langle i, Z_5Z_7,X_5X_6\rangle/\mathcal{S}^\mathrm{B},
\end{split}
\end{align}
where $\langle P_\mathrm{L}\rangle/\mathcal{S}$ indicates that logical Pauli operators $P_\mathrm{L}$ form equivalence classes defined up to multiplication with stabilizers (see~\cref{app:intro_qec} for details). The logical $Y$-operator is determined as $Y_\mathrm{L}:=iZ_\mathrm{L}X_\mathrm{L}$ and we find $Y_\mathrm{L}^\mathrm{A}=Y_1X_2Z_3$ and $Y_\mathrm{L}^\mathrm{B}=Y_5X_6Z_7$. The computational basis states of each logical qubit are then $\ket{0_\mathrm{L}}=\frac{1}{\sqrt{2}}(\ket{0101}+\ket{1010})$ and $\ket{1_\mathrm{L}}=\frac{1}{\sqrt{2}}(\ket{1001}+\ket{0110})$.
Errors can be detected with \emph{error syndromes}, i.e., a sign flip of any code stabilizer. For instance, measuring $S_1^\mathrm{A}$ and obtaining a syndrome $-1$ detects an $X_1$ or $X_2$ error because $\{S_1^\mathrm{A},X_1\}=\{S_1^\mathrm{A},X_2\}=0$. Scaling the surface code is, in theory, as simple as scaling the lattice, see~\cref{app:intro_sc} for more details.\\[-2mm]
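To illustrate the syndrome mechanism numerically (again an illustrative sketch, assuming ideal operations), one can prepare $\ket{0_\mathrm{L}}$ for code $A$, apply an $X_1$ error, and observe that only the sign of $S_1^\mathrm{A}$ flips:
\begin{verbatim}
# Continuing the sketch above: an X1 error flips only <S1>.
ket = lambda bits: reduce(np.kron, [np.eye(2)[b] for b in bits])
zero_L = (ket([0, 1, 0, 1]) + ket([1, 0, 1, 0])) / np.sqrt(2)

# |0_L> is a +1 eigenstate of all stabilizers
assert all(np.allclose(S @ zero_L, zero_L) for S in (S1, S2, S3))

err = kron([X, I2, I2, I2]) @ zero_L                  # apply an X1 error
print([float(err @ S @ err) for S in (S1, S2, S3)])   # [-1.0, 1.0, 1.0]
\end{verbatim}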
\noindent \textbf{Lattice surgery}~\cite{HorsmanFowlerDevittVanMeter2012} is a fault-tolerant\footnote{An operation is called fault-tolerant if errors during the operation can only map to a constant number of physical qubits in the encoding.} protocol for entangling QEC codes which is ideally suited to the geometry of 2D topological codes such as the surface code. This is because LS between topological codes requires only local, few-body interactions. Surface code LS~\cite{HorsmanFowlerDevittVanMeter2012} was introduced as a method to project two surface codes $\mathcal{S}^\mathrm{A}$ and $\mathcal{S}^\mathrm{B}$ with logical operators $X_\mathrm{L}^\mathrm{A},Z_\mathrm{L}^\mathrm{A}$ and $X_\mathrm{L}^\mathrm{B},Z_\mathrm{L}^\mathrm{B}$, respectively, onto joint eigenstates of either $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$ or $Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}$, referred to as \emph{rough} and \emph{smooth} LS, respectively~\footnote{Typically, the lattice boundaries of surface codes can be distinguished by their associated stabilizers: $Z$-type stabilizers along the boundary define a \emph{rough} boundary while $X$-type stabilizers define a \emph{smooth} boundary.}. These projections are entangling operations and can be used to construct entangling gates. Here, we proceed by describing rough LS for the minimal $2\times 2$ surface code discussed before and refer to Appendices~\ref{app:intro_ls} and~\ref{app:smooth_ls} for a more general introduction and details.\\[-2mm]
In order to project onto a logical eigenstate of $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$, we perform a logical joint measurement $M_\mathrm{XX}^\pm=\mathbb{I}\pm X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$, which can be used to entangle two logical qubits. To achieve this, LS proceeds in two steps: \emph{merging} and \emph{splitting}.
This procedure is illustrated in~\cref{fig:sc_ls}~(\textbf{Schematic Merged}) and~(\textbf{Schematic Split}) for two $2\times 2$ surface codes $\mathcal{S}^\mathrm{A}$ and $\mathcal{S}^\mathrm{B}$. We first \emph{merge} the two separate codes $\mathcal{S}^\mathrm{A},\mathcal{S}^\mathrm{B}$ into a new stabilizer code $\mathcal{S}^\mathrm{M}$ by measuring \emph{merging} stabilizers $S_6^\mathrm{M}=X_3X_5$ and $S_7^\mathrm{M}=X_4X_6$ between the boundaries. These stabilizers commute with all stabilizers of the original codes except $S^\mathrm{A}_2$ and $S^\mathrm{B}_1$, and are chosen such that their joint measurement corresponds to the joint logical measurement $M_\mathrm{XX}$, i.e., $S_6^\mathrm{M}S_7^\mathrm{M}=X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$. As a result, we obtain the new code by discarding all stabilizers that anti-commute with the merging stabilizers, depicted in~\cref{fig:sc_ls} (\textbf{Schematic Merged}),
\begin{align}\label{eq:sc_merged_rough}
\mathcal{S}^\mathrm{M}&=\langle S_1^\mathrm{M},S_2^\mathrm{M},S_3^\mathrm{M},S_4^\mathrm{M},S_5^\mathrm{M}, S_6^\mathrm{M},S_7^\mathrm{M}\rangle\nonumber\\
&=\langle S_1^\mathrm{A},S_3^\mathrm{A},S_2^\mathrm{B},S_3^\mathrm{B},S_2^\mathrm{A}S_1^\mathrm{B},+X_3X_5,+X_4X_6\rangle.
\end{align}
Note that this code already encodes the desired joint eigenstate since $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$ is included as a stabilizer in the merged code $\mathcal{S}^\mathrm{M}$.
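Explicitly, using only the definitions above, $X_\mathrm{L}^\mathrm{A}=X_1X_2$ is equivalent to $X_3X_4=S_3^\mathrm{A}X_\mathrm{L}^\mathrm{A}$ modulo stabilizers, while $X_\mathrm{L}^\mathrm{B}=X_5X_6$ directly, so that
\begin{align}
S_6^\mathrm{M}S_7^\mathrm{M}=(X_3X_5)(X_4X_6)=(X_3X_4)(X_5X_6)=\left(S_3^\mathrm{A}X_\mathrm{L}^\mathrm{A}\right)X_\mathrm{L}^\mathrm{B}\sim X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}.\nonumber
\end{align}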
In fact, the measurement outcomes $m,m'\in\{0,1\}$ of $S_6^\mathrm{M},S_7^\mathrm{M}$, respectively, are random such that $m_1=m+m'$ specifies the eigenvalue associated with $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$ as $(-1)^{m_1}$.
The merged code is an asymmetric $2\times 4$ surface code encoding a single logical qubit, i.e.,
\begin{align}\label{eq:sc_merged_logic_rough}
\mathcal{L}^\mathrm{M}&=\langle i,Z_\mathrm{L}^\mathrm{M},X_\mathrm{L}^\mathrm{M}\rangle/\mathcal{S}^\mathrm{M}
=\langle i,Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}, X_\mathrm{L}^\mathrm{A}\rangle/\mathcal{S}^\mathrm{M},
\end{align}
and $Y_\mathrm{L}^\mathrm{M}=Y_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}$.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{Fig_2_sc_teleportation.pdf}
\caption{\textbf{Surface-code state teleportation with lattice surgery.}
(top-center) Measurement-based scheme to teleport information of an arbitrary logical state $\ket{\psi_\mathrm{L}}=\alpha\ket{0_\mathrm{L}}+\beta\ket{1_\mathrm{L}}$ between two logical qubits using only single-qubit and two-qubit measurements. (left) We start with a logical state $\ket{\psi^\mathrm{A}_\mathrm{L}}$ encoded in a $5\times 5$ surface code and an additional logical ancilla in the state $\ket{0^\mathrm{B}_\mathrm{L}}$. (bottom) We perform LS, i.e., merging and splitting, to implement a joint measurement $M^{m_1}_\mathrm{XX}$, where $m_1=0,1$ labels measurement outcomes. The resulting state is entangled. (right) Measuring the initial code in the logical $Z$-basis (i.e., measuring all physical qubits in the $Z$-basis) with measurement outcome $m_2=0,1$ teleports the logical information to the ancilla. Depending on the measurement outcomes $m_1,m_2=0,1$, logical Pauli corrections need to be considered.}
\label{fig:sc_teleportation}
\end{figure*}
With the rough merge we effectively merged the logical $Z$-operators and performed the desired logical joint measurement $M^\pm_\mathrm{XX}$. Its expectation value $\pm 1$ is given by the product of expectation values of the merging stabilizers $S_6^\mathrm{M},S_7^\mathrm{M}$. Now, we must recover the two initial logical qubits while keeping the previously obtained expectation value of $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$. To this end, we \emph{split} the merged code by measuring $Z$-stabilizers $S^\mathrm{A}_2$ or $S^\mathrm{B}_1$ along the merged boundaries as depicted in~\cref{fig:sc_ls} (\textbf{Schematic Split}). These operators commute with all stabilizers in $\mathcal{S}^\mathrm{M}$ that define the separated logical qubits $\mathcal{S}^\mathrm{A},\mathcal{S}^\mathrm{B}$. In particular, the measured stabilizers all commute with $X_\mathrm{L}^\mathrm{A},X_\mathrm{L}^\mathrm{B}$, i.e., the code remains in an eigenstate of $X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$. After splitting, measurement outcomes $m'',m'''\in\{0,1\}$ of stabilizers $S^\mathrm{A}_2, S^\mathrm{B}_1$, respectively, are random but can be tracked as errors. In conclusion, we have effectively performed a logical entangling operation, $M_\mathrm{XX}^\pm$, which can be used to entangle logical qubits and teleport information.
LS can also be used to realize a measurement-based scheme for logical state teleportation~\cite{PoulsenNautrupFriisBriegel2017}. In~\cref{fig:sc_teleportation}, we illustrate this scheme for a logical $M_\mathrm{XX}$ measurement on two $5\times 5$ surface codes. Note that a similar scheme can be used to teleport information through a logical $M_\mathrm{ZZ}$ measurement.
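The essential mechanics of this scheme can be reproduced at the level of two unencoded qubits (a simplified sketch of ours, assuming ideal projective measurements; the logical version replaces each qubit by a surface code):
\begin{verbatim}
# Unencoded analogue of teleportation via an XX measurement.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

psi = np.array([0.6, 0.8])                   # arbitrary state on qubit A
state = np.kron(psi, np.array([1.0, 0.0]))   # |psi>_A |0>_B

m1 = 0                                        # assume M_XX outcome (-1)^m1
P_xx = (np.eye(4) + (-1)**m1 * np.kron(X, X)) / 2
state = P_xx @ state; state /= np.linalg.norm(state)

m2 = 1                                        # assume Z-basis outcome on A
P_z = np.kron(np.diag([1.0 - m2, float(m2)]), I2)
state = P_z @ state; state /= np.linalg.norm(state)

# Pauli-frame correction Z^m1 X^m2 on qubit B recovers |psi>
corr = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2)
qubit_B = (np.kron(I2, corr) @ state).reshape(2, 2)[m2]
print(np.allclose(qubit_B, psi))              # True
\end{verbatim}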
\noindent \textbf{Results}. We demonstrate LS in an ion-trap quantum computer, based on atomic $^{40}\mathrm{Ca}^{+}$ ions in a linear Paul trap. Each qubit is encoded in the $\ket{0}=4S_{1/2}(m_j=-1/2)$ and $\ket{1}=3D_{5/2}(m_j=-1/2)$ state of a single ion. Each experiment consists of (i) laser cooling and state preparation, (ii) coherent manipulation of the qubit states, and (iii) readout. (i) For cooling close to the motional ground state, we employ a three-stage process comprising Doppler cooling, polarization gradient cooling~\cite{cirac1992laser,cirac1993laser} and resolved sideband cooling followed by optical pumping into $\ket{0}$. (ii) The qubits are manipulated with a laser at $729$~nm. The available gate set includes single-qubit $Z$ rotations, multi-qubit $X$ and $Y$ rotations and a multi-qubit entangling M{\o}lmer-S{\o}rensen (MS) gate~\cite{sorensen1999quantum}. (iii) Qubit states are read out via electron-shelving. We utilize spectroscopic decoupling to perform operations selectively on a subset of ions by coherently shelving populations from $\ket{0}=4S_{1/2}(m_j=-1/2)$ to $3D_{5/2}(m_j=-3/2)$ and from $\ket{1}=3D_{5/2}(m_j=-1/2)$ to $3D_{5/2}(m_j=+1/2)$. For more details see Ref.~\cite{schindler2013quantum} and~\cref{app:exp_details}.\\[-2mm]
In~\cref{fig:sc_ls}, we demonstrate LS to entangle logical qubits along the rough boundary. Complementary results for smooth lattice surgery are provided in \cref{app:smooth_ls}. We start by encoding the separated logical qubits, each defined by three stabilizers (see~\cref{eq:sc_stabilizer}) and two logical operators (see~\cref{eq:sc_logical}) in~\cref{fig:sc_ls} (\textbf{Encoded}). As a first example, we choose to encode the logical qubits in the state $\ket{0_\mathrm{L}^\mathrm{A}0_\mathrm{L}^\mathrm{B}}$. We can create encoded states with average stabilizer expectation values of $\langle|S_i|\rangle=0.868(4)$, see~\cref{fig:sc_ls} (\textbf{Code stabilizers Encoded}). We make use of the obtained stabilizer information and post-select our data on states with valid code stabilizers (see~\cref{app:post_selection}), which amounts to discarding those measurements where an error was detected by the code. For the encoded states we infer fidelities of ${\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{A}})=93.8(4)\fps{99.3(2)}}$ and ${\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{B}})=93.4(5)\fps{99.4(2)}}$, where the first value describes the raw fidelity, while the second represents the observed fidelity after post-selection\footnote{This format is used throughout this work to present fidelities of both uncorrected and post-selected data.}. Note that this post-selection introduces a finite survival probability, for details see~\cref{app:survival_probs} and~\cref{app:post_selection}.
Performing LS requires quantum-non-demolition (QND) measurements of stabilizers, implemented by a series of local and entangling gates (see~\cref{fig:circuit_rough_00}). Considering two merging stabilizers mapped onto ancillas $A_1$ and $A_2$, we have the possibility to detect one of four possible outcomes $(m,m')=(0,0),(0,1),(1,0),(1,1)$.
In~\cref{app:ancilla_readout}, we present data for all possible outcomes for the chosen input state. For experimental simplicity, the following results are for the case $(m,m')=(0,0)$. The merged surface code, as defined in~\cref{eq:sc_merged_rough}, is illustrated in~\cref{fig:sc_ls} (\textbf{Code stabilizers Merged}). The data confirms the merged stabilizers with an average stabilizer expectation value of $\langle|S_i|\rangle=0.669(8)$. Starting from the state $\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, the merged logical state is a $+1$ eigenstate of the logical $Z_\mathrm{L}^\mathrm{M}=Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}$ operator, as can be seen in~\cref{fig:sc_ls} (\textbf{Logical operators Merged}). The data reveals a state fidelity of $\mathcal{F}(\ket{0_\mathrm{L}^\mathrm{M}})=86.4(1.0)\fps{97.9(5)}$ after merging.
Now, we split the merged logical qubit along the same boundary by mapping $S_2^\mathrm{A}$ onto ancilla $A_1$ for the case $m''=0$. Thereby we restore the initial code space with an average stabilizer expectation value of ${\langle|S_i|\rangle=0.603(3)}$, shown in \cref{fig:sc_ls}~(\textbf{Code stabilizers Split}). The resulting projective measurement $\mathbb{I}+X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}$ maps the initial product state $\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$ onto a maximally entangled, logical Bell state $\ket{\phi_\mathrm{L}^+}=\tfrac{1}{\sqrt{2}}\left(\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}+\ket{1^\mathrm{A}_\mathrm{L}1^\mathrm{B}_\mathrm{L}}\right)$.
In order to deduce the fidelity of the generated state with respect to the logical Bell state, we measure the common logical stabilizers $\langle Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B},X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B},-Y_\mathrm{L}^\mathrm{A}Y_\mathrm{L}^\mathrm{B}\rangle$, obtaining the fidelity
(see, e.g.,~\cite{FriisMartyEtal2018})
\begin{align}
\begin{split}
\mathcal{F}\left(\ket{\phi_\mathrm{L}^+}\right)=\tfrac{1}{4}\left(1+\langle Z_\mathrm{L}^\mathrm{A}Z_\mathrm{L}^\mathrm{B}\rangle+\langle X_\mathrm{L}^\mathrm{A}X_\mathrm{L}^\mathrm{B}\rangle-\langle Y_\mathrm{L}^\mathrm{A}Y_\mathrm{L}^\mathrm{B}\rangle\right).\nonumber
\end{split}
\end{align}
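For reference, the fidelity follows from the three logical correlators by a one-line computation (an illustrative sketch; the numerical correlator values below are placeholders, not measured data):
\begin{verbatim}
# Bell fidelity from logical correlators (illustrative values only).
def bell_fidelity(zz, xx, yy):
    return 0.25 * (1.0 + zz + xx - yy)

print(bell_fidelity(zz=0.60, xx=0.58, yy=-0.54))  # 0.68
\end{verbatim}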
In~\cref{fig:sc_ls}~(\textbf{Split}), we present the results for the Bell state generation. From the common stabilizer measurements, we infer a logical Bell state fidelity of ${\mathcal{F}(\ket{\phi_\mathrm{L}^+})=58.0(1.6)\fps{75.3(1.6)}}$, where the raw fidelity exceeds the separability limit of $50\,\%$ by $5$ sigma. Imperfect physical gate implementations can be characterized~\cite{erhard2019characterizing} and match our expectations (see~\cref{app:exp_details}). In~\cref{app:additional_meas}, we demonstrate LS for various input states in order to generate different maximally entangled Bell states.
LS enables teleporting quantum states from one logical qubit to another (see~\cref{fig:sc_teleportation}), which we demonstrate for the input states $\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, $\ket{1^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, and $\ket{+^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$. After performing rough LS (i.e., encoding, merging, splitting), we measure logical qubit $A$ in the $Z$-basis and apply a logical $X_\mathrm{L}$ gate on qubit $B$ if qubit $A$ was found in $\ket{1^\mathrm{A}_\mathrm{L}}$ (see~\cref{fig:teleporation_data}). Following the teleportation protocol, we measure logical state fidelities for qubit $B$ of ${\mathcal{F}\left(\ket{0^\mathrm{B}_\mathrm{L}}\right)=87(2)\fps{97(1)}}$, ${\mathcal{F}\left(\ket{1^\mathrm{B}_\mathrm{L}}\right)=81(2)\fps{93(2)}}$ and ${\mathcal{F}\left(\ket{+^\mathrm{B}_\mathrm{L}}\right)=71(1)\fps{85(2)}}$, given the input states $\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, $\ket{1^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, and $\ket{+^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, respectively.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.4\textwidth]{Fig_3_data_teleportation.pdf}
\vspace*{-4mm}
\caption{\textbf{Teleportation of quantum information via LS.} We prepare the logical qubits $A,B$ in the states $\ket{0^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, $\ket{1^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, and $\ket{+^\mathrm{A}_\mathrm{L}0^\mathrm{B}_\mathrm{L}}$, and use LS to teleport the state from logical qubit $A$ to logical qubit $B$. We measure fidelities of the teleported quantum states of $\mathcal{F}\left(\ket{0^\mathrm{B}_\mathrm{L}}\right)=87(2)\fps{97(1)}$, $\mathcal{F}\left(\ket{1^\mathrm{B}_\mathrm{L}}\right)=81(2)\fps{93(2)}$, and $\mathcal{F}\left(\ket{+^\mathrm{B}_\mathrm{L}}\right)=71(1)\fps{85(2)}$.}
\label{fig:teleporation_data}
\end{figure}
\noindent \textbf{Conclusion}.
We have demonstrated entanglement generation and teleportation via LS between two logical qubits, each encoded in a $4$-qubit surface code, on a $10$-qubit ion trap quantum information processor. We have implemented both the rough and smooth variants of LS~\cite{FowlerMariantoniMartinisCleland2012, PoulsenNautrupFriisBriegel2017}, a technique that is considered~\cite{Litinski2019, GutierrezMuellerBermudez2019} to be key for operating future fault-tolerant quantum computers. For current NISQ-era devices, certification of logical entanglement~\cite{FriisVitaglianoMalikHuber2019} generated via LS can provide means for benchmarking. Besides increasing the numbers of physical and logical qubits, future challenges lie in the implementation of LS between arbitrary topological codes~\cite{PoulsenNautrupFriisBriegel2017} to exploit different features such as transversal gate implementation or high noise tolerance of the respective codes. Lattice surgery can thus function as a fault-tolerant interface between quantum memories and quantum processors.
\vspace*{-2mm}
\bibliographystyle{apsrev4-1fixed_with_article_titles_full_names}
\section{Proof of Lemma 1}
\label{app_lemma1}
ACE in each cluster in (\ref{zsmj}) can also be written in the following form
\begin{equation}
\begin{split}
z_{{mj}} = \norm{\overline{c}_{x_{mj}} - {x_{mj}} B_{mj}}_2^2 \label{zzz}
\end{split}
\end{equation}
where $x_{mj}$ is a vector of the elements of $C_{mj}$, $\bar c_{x_{mj}}$ is a vector of all the associated $\bar c_{x_{mj}(i)}$s for the elements of $C_{mj}$, and
$B_{mj}$ is an $n_{mj}$ by $n_{mj}$ averaging matrix
\begin{equation} \label{BBB}
B_{mj} =
\begin{bmatrix}
\frac{1}{n_{mj}}& \dots{} & \frac{1}{n_{mj}} \\
\vdots &\ddots & \vdots\\
\frac{1}{n_{mj}} &\dots{} & \frac{1}{n_{mj}}
\end{bmatrix}
\end{equation}
Replacing $x_{mj}(i) = \overline{c}_{x_{mj}(i)} + \overline{W}_{x_{mj}(i)}$ from (\ref{eq:dd_mod}) in (\ref{zzz}), we have:
\begin{equation}
\begin{split}
Z_{{mj}} &= \norm{ \overline{c}_{x_{mj}}(I-B_{mj})- \overline{W}_{x_{mj}}B_{mj} }_2^2 \\
&= \norm{ \overline{c}_{x_{mj}}A_{mj} - \overline{W}_{x_{mj}}B_{mj} }_2^2
\end{split}
\end{equation}
where $A_{mj}=I-B_{mj}$ is given in (\ref{eq:11}).
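The algebraic identities used in the next step can be verified numerically with a short script (an illustrative sketch of ours; $n_{mj}=5$ is an arbitrary choice):
\begin{verbatim}
# Sanity checks for the averaging matrix B_mj and A_mj = I - B_mj.
import numpy as np

n = 5
B = np.full((n, n), 1.0 / n)   # averaging matrix B_mj
A = np.eye(n) - B              # centering matrix A_mj

assert np.allclose(A.T @ B, np.zeros((n, n)))  # A^T B = 0
assert np.allclose(A @ A, A)                   # A is idempotent
assert np.allclose(B @ B, B)                   # B is idempotent
\end{verbatim}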
Since $A_{mj}^TB_{mj} = 0$, we have
\begin{equation}\label{app:zsm1}
Z_{{mj}} = \norm{\Delta_{{mj}}}_2^2 + \norm{ \overline{W}_{x_{mj}}B_{mj}}_2^2
\end{equation}
where $\norm{\Delta_{{mj}}}_2^2 = \norm{\overline{c}_{x_{mj}}A_{mj}}_2^2$ and
\begin{equation}\label{app:zsm2}
\scriptsize
\norm{ \overline{W}_{x_{mj}}B_{mj} }_2^2 = \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} + \frac{1}{n_{mj}}\sum_{i \neq k}^{n_{mj}} \overline{W}^T_{x_{mj}(i)} \overline{W}_{x_{mj}(k)}
\end{equation}
Plugging (\ref{app:zsm2}) in (\ref{app:zsm1}) will yield the expression in (\ref{eq:10}).
The first term of (\ref{eq:10}) is a constant. The second term is a chi-square random variable with a non-zero mean. Because of the independence of the random vectors $\overline{W}_{x_{mj}(i)}$ for $i=1,2,\dots,n_{mj}$, the expectation can be brought inside the summation term and
\begin{equation}\label{exp_inside}
\begin{split}
E\left[ \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} \right] =\\
\frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}E[\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)}] = \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})
\end{split}
\end{equation}
The third term has an expected value of 0 due to the independence between the $\overline{W}_{x_{mj}(i)}$s. Thus, the overall expected value of $Z_{{mj}}$ is given by (\ref{eq:12}). Note that the first term is a constant and thus has zero variance. The variances of the other two terms are given below:
\begin{equation}\label{var_zsm_app1}
\scriptsize
Var \left[ \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} \right] = \frac{2}{n_{mj}^2}\sum_{i=1}^{n_{mj}}tr({\overline{\wedge}_{x_{mj}(i)}}^2)
\end{equation}
\begin{equation}\label{var_zsm_app2}
\scriptsize
Var\left[ \frac{1}{n_{mj}} \sum_{i \neq k}^{n_{mj}} \overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(k)}\right] = \frac{2}{n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)})
\end{equation}
The covariance between the second and third term of $Z_{{mj}}$ in (\ref{eq:10}) is 0. Therefore, the variance of $Z_{{mj}}$ is given by adding (\ref{var_zsm_app1}) and (\ref{var_zsm_app2}) which will lead to (\ref{eq:13}).
\section{Proof of Lemma 2}
\label{app_lemma_2}
The cluster compactness of cluster $C_{mj}$ in (\ref{eq:9}) can be written in the following form
\begin{equation}
\begin{split}
y_{{mj}}=\norm{x_{mj} - x_{mj}B_{mj}}_2^2
\end{split}
\end{equation}
where $B_{mj}$ was defined in (\ref{BBB}).
Replacing $x_{mj}(i)$ with $x_{mj}(i) = \overline{c}_{x_{mj}(i)} + \overline{W}_{x_{mj}(i)}$ from (\ref{eq:dd_mod}), we have
\begin{equation}
Y_{{mj}} = \norm{ \overline{c}_{x_{mj}}A_{mj} + \overline{W}_{x_{mj}}A_{mj} }_2^2
\end{equation}
\begin{equation} \label{app:3}
\scriptsize
\begin{split}
Y_{{mj}} = \norm{ \overline{W}_{x_{mj}} A_{mj}}_2^2 +
\frac{2(n_{mj} - 1)}{n_{mj}}\sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)}\\
- \frac{2}{n_{mj}} \sum_{i \neq k}^{n_{mj}} \left( \overline{c}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(k)} + \overline{c}_{x_{mj}(k)} ^T\overline{W}_{x_{mj}(i)} \right) + \norm{\Delta_{{mj}} }_2^2
\end{split}
\end{equation}
where
\begin{equation} \label{app:33}
\begin{split}
\norm{\overline{W}_{x_{mj}}A_{mj}}_2^2 = \frac{n_{mj}-1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} - \\
\frac{1}{n_{mj}}\sum_{i \neq k}^{n_{mj}} \overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(k)}
\end{split}
\end{equation}
Combining (\ref{app:3}) and (\ref{app:33}), $Y_{{mj}}$ can be expressed as follows:
\begin{equation}\label{eq:22}
\begin{split}
Y_{{mj}} = \norm{ \Delta_{{mj}} }_2^2 +\frac{n_{mj}-1}{n_{mj}} \sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(i)} - \\ \frac{1}{n_{mj}} \sum_{i \neq k}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(k)} +2 \sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} - \\ \frac{2}{n_{mj}} \sum_{f=1}^{d}\left( \sum_{i=1}^{n_{mj}} \overline{W}_{x_{mj}(i)}(f) \sum_{i=1}^{n_{mj}}\overline{c}_{x_{mj}(i)}(f) \right)
\end{split}
\end{equation}
where $\overline{W}_{x_{mj}(i)}(f)$ and $\overline{c}_{x_{mj}(i)}(f)$ are the $f$-th components of these vectors of length $d$.
The expected value of $\overline{W}_{x_{mj}(i)}$ is zero, making the expectations of terms 4 and 5 of (\ref{eq:22}) also zero. The expectation of the third term is also zero due to the independence of the $\overline{W}$s. Similar to (\ref{exp_inside}), the expectation of the second term can be calculated as follows
\begin{equation}\label{app_ysm_exp}
\begin{split}
E\left[ \frac{n_{mj}-1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)} \right] = \frac{n_{mj}-1}{n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})
\end{split}
\end{equation}
Therefore, combining the expectations of all the terms in (\ref{eq:22}), the expected value of $Y_{{mj}}$ is given by (\ref{eq:23}).
For the calculation of the variance, note that the first term of (\ref{eq:22}) is a constant and thus has zero variance. The variances of the other four terms are:
\begin{equation} \label{app:4}
\scriptsize
Var[\frac{n_{mj}-1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(i)}] =
\frac{2(n_{mj}-1)^2}{n_{mj}^2}\sum_{i=1}^{n_{mj}}tr((\overline{\wedge}_{x_{mj}(i)})^2)
\end{equation}
\begin{equation}
\scriptsize
Var[ \frac{1}{n_{mj}}\sum_{i \neq k}^{n_{mj}} \overline{W}_{x_{mj}(i)}^T \overline{W}_{x_{mj}(k)}] = \frac{2}{n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)})
\end{equation}
\begin{equation}
\scriptsize
Var[2\sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(i)}] = \frac{4}{n_{mj} \ d} \sum_{i=1}^{n_{mj}} tr(\overline{\wedge}_{x_{mj}(i)}) \sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(i)}
\end{equation}
\begin{equation}
\scriptsize
\begin{split}
Var[ - \frac{2}{n_{mj}} \sum_{f=1}^{d}\left( \sum_{i=1}^{n_{mj}} \overline{W}_{x_{mj}(i)}(f) \sum_{i=1}^{n_{mj}}\overline{c}_{x_{mj}(i)}(f) \right)] = \\ \frac{4}{n_{mj}^2 \ d} \sum_{i=1}^{n_{mj}} tr(\overline{\wedge}_{x_{mj}(i)}) \left(\sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(i)} + \sum_{i \neq k}^{n_{mj}}\overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(k)} \right)
\end{split}
\end{equation}
The covariance between terms 4 and 5 of equation (\ref{eq:22}) is:
\begin{equation} \label{app:5}
\scriptsize
\begin{split}
2 Cov \left[ 2 \sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(i)},\, - \frac{2}{n_{mj}} \sum_{f=1}^{d}\left( \sum_{i=1}^{n_{mj}} \overline{W}_{x_{mj}(i)}(f) \sum_{i=1}^{n_{mj}}\overline{c}_{x_{mj}(i)}(f) \right) \right] \\
=\frac{-8}{n_{mj}^2 \ d }\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)}) \left(\sum_{i=1}^{n_{mj}} \overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(i)} + \sum_{i \neq k}^{n_{mj}}\overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(k)} \right)
\end{split}
\end{equation}
while the rest of the covariances between terms 2-5 of (\ref{eq:22}) are calculated to be zero. Also, we have
\begin{equation} \label{app:6}
\scriptsize
\begin{split}
\norm{\Delta_{{mj}} }_2^2 =\norm{\overline{c}_{x_{mj}} A_{mj}}_2^2 = \\
\frac{n_{mj} - 1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(i)} - \frac{1}{n_{mj}} \sum_{i \neq k}^{n_{mj}}\overline{c}_{x_{mj}(i)}^T\overline{c}_{x_{mj}(k)}
\end{split}
\end{equation}
Adding (\ref{app:4})-(\ref{app:5}) and using (\ref{app:6}), the variance of $Y_{{mj}}$ is given by (\ref{eq:24}).
\section{Upperbound of $\norm{\Delta_{{mj}}}_2^2$ from $y_{{mj}}$ }
\label{app:delta_upp}
To find the upper bound of $ \norm{\overline{c}_{x_{mj}} A_{mj}}_2^2 $, we solve the following inequality:
\begin{equation}\label{eq:31}
E[Y_{{mj}}] - \alpha_{{mj}} \sqrt{var[Y_{{mj}}]} \leq y_{{mj}}
\end{equation}
Using (\ref{eq:23}) and (\ref{eq:24}) in (\ref{eq:31}), and denoting $g_{mj} = \frac{(n_{mj} - 1)}{n_{mj}} \sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})$, we have the inequality given in the footnote below
\footnote{
\begin{equation}
\norm{\Delta_{{mj}}}_2^2 + g_{mj} - y_{{mj}} \leq
\alpha_{mj} \sqrt{\frac{2(n_{mj}-1)^2}{n_{mj}^2}\sum_{i=1}^{n_{mj}}tr((\overline{\wedge}_{x_{mj}(i)})^2) + \frac{2}{n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)}) + \norm{\Delta_{{mj}}}_2^2\frac{4}{d \times n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})}\nonumber
\end{equation}}.
To solve for the boundary (i.e., when the two sides of the inequality are equal), we square both sides and solve the equation given in the footnote below,
\footnote{$(\norm{\Delta_{{mj}}}_2^2)^2 +\norm{\Delta_{{mj}}}_2^2\left(2(g_{mj} - y_{{mj}}) - \alpha_{mj}^2 \frac{4}{d \times n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} )\right) + $
\begin{equation}
\left[ (g_{mj} - y_{{mj}})^2 -\alpha_{mj}^2 \left(\frac{2(n_{mj}-1)^2}{n_{mj}^2}\sum_{i=1}^{n_{mj}}tr((\overline{\wedge}_{x_{mj}(i)})^2) + \frac{2}{n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)} ) \right) \right]=0 \nonumber
\end{equation}}
which is a quadratic equation in terms of $\norm{\Delta_{{mj}}}_2^2$. Solving for the roots of $\norm{\Delta_{{mj}}}_2^2$ gives both the upper bound and the lower bound, denoted by $\overline{\norm{\Delta_{{mj}}}_2^2}$ and $\underline{\norm{\Delta_{{mj}}}_2^2}$, respectively. The higher root of $\norm{\Delta_{{mj}}}_2^2$ is considered its upper bound. Note that the roots of $\norm{\Delta_{{mj}}}_2^2$ are only functions of the observed cluster compactness $y_{{mj}}$, $\alpha_{{mj}}$, and the eigenvalues $\overline{\wedge}_{x_{mj}}$ of the covariance of the scatter factor $\overline{W}_{x_{mj}}$.
\section{Introduction}
\label{intro}
\IEEEPARstart{C}{lustering} is one of the most used unsupervised learning tasks, where unlabeled observed data samples are grouped based on their similarities and dissimilarities. It has vast applications in various areas such as image segmentation, market research, and sequence analysis. Two main challenges of the clustering procedure are estimating the optimum number of clusters and partitioning the data \cite{mintro1},\cite{mintro2}. One of the well-known partitional clustering methods is K-means \cite{kmeans}. Given the data, the method estimates cluster centers and partitions the data in a simple iterative procedure. Similar to other partitional approaches, K-means requires the correct number of clusters (CNC) to finalize the clustering procedure. In general, most clustering algorithms require a CNC estimate as their input \cite{ieee1},\cite{ieee2},\cite{ieee3},\cite{ref:ditc}. However, in many practical applications this value is not available, and the CNC has to be estimated during the clustering procedure by using the same data that requires clustering.
Overestimating the number of clusters typically results in redundant splitting of a cluster, while underestimating it results in combining clusters to form a loose clustering.
A number of approaches for estimating the CNC in K-means are provided in \cite{kmcnc}. Pioneering methods of CNC estimation involve the formulation of validity indexes. Validity indexes provide a quantitative measurement for comparing clustering results with different numbers of clusters. To estimate the CNC in K-means, validity index approaches use the results of $m$-clustering, i.e., clustering with $m$ clusters, for a range of $m$, $m \in [m_{min}, m_{max}]$. By solving an additional optimization problem, the results of these $m$-clusterings are compared, and the value of $m$ which optimizes the desired criterion is chosen as the estimate of the CNC. Examples of these validity indexes are the gap index \cite{gap}, Silhouette index \cite{sil}, Calinski-Harabasz index \cite{CB}, and Davies-Bouldin index \cite{DB}. More methods are proposed in \cite{vi1},\cite{vi2}.
K-means is also used in divisive hierarchical clustering methods such as X-means \cite{xmeans}, G-means \cite{gmeans}, and dip-means \cite{dip-means}. In these approaches, the stopping criterion for the hierarchical splitting procedure is a statistical test. X-means uses an information-theoretic test criterion, G-means implements the Anderson-Darling (AD) test, and dip-means uses a criterion denoted by dip-dist.
While cluster assignment in K-means is based on the distance of a sample to its cluster center, another family of clustering algorithms is density based, where clusters are formed by grouping samples based on their proximity to their neighboring samples. These methods provide the CNC estimate simultaneously. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) \cite{dbscan}, mean-shift \cite{mean_shift}, and ordering points to identify the clustering structure (OPTICS) \cite{optics} are some of the well-known algorithms in this category (note that OPTICS does not provide the clustering solution but rather provides a CNC estimate). There are other methods that concentrate on fuzzy clustering or mixture modeling \cite{fcm},\cite{BFC},\cite{MC},\cite{NML}. The focus of this work is on providing a proper CNC estimator for K-means that overall keeps the computational complexity of clustering as low as that of K-means clustering itself.
K-means uses the least square error to find the optimum cluster centers. On the other hand, the existing validity index methods used with K-means employ the available cluster compactness to optimize a CNC-estimation criterion that is not similar to the K-means partitioning criterion. It seems rational to choose a consistent criterion for the validity index step as well. Motivated by this, MACE-means \cite{ref:9} proposed to minimize the Average Central Error (ACE). It is worth mentioning that the use of ACE in this problem setting shares the same fundamentals as the use of MSE in SVD order selection \cite{SVD}. However, MACE-means fails to estimate this error properly: it misses estimating the biased variance of this error and can lead to wrong CNC estimates. Here, we correct the definition of ACE to be the average least square error between the true center and the estimated center for each data sample in each $m$-clustering. Using the available cluster compactness, the k-minimizing ACE (K-MACE) algorithm calculates estimates of both the mean and variance of ACE to provide an accurate estimate of the CNC \cite{ref:8}. While MACE-means was formulated only for uncorrelated spherical clusters, K-MACE is proposed for clustering both spherical and ellipsoidal clusters. The proposed approach is very robust in estimating the covariance of each cluster. On the other hand, kernel K-means clustering has been proposed for clustering overlapping and more arbitrarily shaped clusters \cite{4}. In these scenarios, kernel functions are used to transform the data into a feature space. Similar to K-means, the method requires an estimate of the CNC. Here, we extend K-MACE to kernel K-MACE to provide the CNC estimate along with the clustering procedure. Note that in addition to the number of clusters $m$, existing kernel-based clustering methods require tuning the kernel function parameters. This is currently done by trial and error, and no method for validating and choosing the optimum parameters is available. Consequently, the accuracy of the results depends on this ad hoc approach. However, the kernel parameter governs the separability of clusters in feature space, and its optimum value corresponds to the true estimation of the CNC. Here we propose another important feature of the kernel K-MACE clustering algorithm that automatically tunes to the optimum Gaussian kernel parameters. While the computational complexities of K-MACE and kernel K-MACE are the same as those of K-means and kernel K-means, simulation results confirm the advantages of the proposed approaches over competing methods for both synthetic and real data. They show that the K-MACE approaches outperform other methods even in the presence of major cluster overlaps.
The paper is organized as follows. Section II defines the considered clustering problem.
Average Central Error (ACE) and its importance in $m$-clustering evaluation are provided in Section III. The calculation of ACE for $m$-clustering, using only the available data, and the K-MACE algorithm are in Section IV. In Section V, kernel K-MACE is introduced. Section VI concentrates on simulation results for synthetic and real data sets, and Section VII has the concluding remarks.
\section{Problem Statement}
Consider observed data of length $N$, $x^N = [x(1) \; x(2)\; \dots \; x(N)]$, where $x(i) \in R_{d\times 1}$ represents a collection of features. Each $x(i)$ is considered a sample of a random vector with the following structure:
\begin{equation} \label{eq:1}
x(i) = \overline{c}_{x(i)} + \overline{w}_{x(i)}
\end{equation}
where the scatter factor $\overline{w}_{x(i)}$ is a sample of $\overline{W}_{x(i)}$, a zero-mean random vector that describes the variation of $x(i)$ around its center $\overline{c}_{x(i)}$. For simplicity and without loss of generality, it can be assumed that the scatter factor has an additive white Gaussian distribution:
\begin{equation}\label{eq:dd_mod}
\overline{w}_{x(i)} \sim \mathcal{N} (0, \overline{\Sigma}_{x(i)})
\end{equation}
where $\overline{\Sigma}_{x(i)}$ is a $d\times d$ covariance matrix.
The overbar notation is used for the true, unavailable elements of a cluster, such as the true center and the true covariance. The data set is generated by a model of $\overline{m}$ mutually exclusive clusters such that for the set of observed data we have $x^N \in X^N$:
\begin{equation}\label{memb}
X^N =\overline{C}_1 \cup \overline{C}_2 \cup . . . \cup \overline{C}_{\overline{m}}
\end{equation}
Each cluster $\overline{C}(j)$ is described by its cluster center $\overline{c}(j)$ paired with its covariance $\overline{\Sigma}(j)$. Therefore, for the whole data, we have:
\begin{eqnarray}\label{eq:added_1}
\rm{Cluster \ centers}&:& [\overline{c}(1) \; \overline{c}(2) \; \dots \; \overline{c}(\overline{m})] \\
\rm{Cluster \ Covariances}&:&[\overline{\Sigma}(1)\; \overline{\Sigma}(2) \; \dots \; \overline{\Sigma}(\overline{m})] \nonumber
\end{eqnarray}
Figure \ref{datamodel} shows an example of these clusters. Note that for each sample $x$ of the observed data $X^N$ there is a center of its true cluster. We denote the associated true center and true covariance by $\bar c_x$ and $\bar \Sigma_x$. For example, as
Figure \ref{datamodel} shows, the true center associated with $x(1)$ is the center of the first cluster, $\overline{c}(1)$, while the true center associated with $x(6)$ is that of the second cluster, $\overline{c}(2)$. We rewrite this fact from the point of view of the samples $x(1)$ and $x(6)$ as follows:
\begin{equation}
\overline{c}_{x(1)} = \overline{c}(1), \ \ \overline{\Sigma}_{x(1)} = \overline{\Sigma}(1) \ \ \ \ \ \ \ \ \overline{c}_{x(6)} = \overline{c}(2), \ \ \overline{\Sigma}_{x(6)} = \overline{\Sigma}(2)
\end{equation}
The correct number of clusters (CNC) ($\overline{m}$ in (\ref{memb})) is not known. The data can be clustered with a range of possible cluster numbers $m$, ($ M_{min} \leq m \leq M_{max}$), where $M_{min}$ and $M_{max}$ are the considered lower and upper bounds for the true number of clusters. The goal is to provide a comparative measure to compare the results of $m$-clustering and choose the optimum $m$.
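For concreteness, a data set following this model can be generated as in the sketch below (our illustration; the centers, covariances, and sizes are arbitrary choices):
\begin{verbatim}
# Generate N samples from an m-bar = 2 mutually exclusive cluster model.
import numpy as np

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])        # true centers
covs = [np.diag([1.0, 0.5]), np.diag([0.8, 0.8])]   # true covariances

N = 200
labels = rng.integers(0, 2, size=N)                 # true memberships
x = np.stack([rng.multivariate_normal(centers[j], covs[j])
              for j in labels])                     # x(i) = c + w
\end{verbatim}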
\section{Average Central Error (ACE) and $m$-clustering}
In this section we first formulate $m$-clustering with proper notation. Next, to compare the results of $m$-clustering, we suggest using the Average Central Error (ACE) as a criterion and choosing the number of clusters, $m$, that minimizes this error.
\subsection{$m$-clustering Notations}
In $m$-clustering, the available data is used in finding $m$ clusters. Our method of clustering is K-means. The following notations distinguish, for each data sample $x$, between its true unknown cluster and its $m$-clustering membership.
\begin{figure}
\centering
\includegraphics[scale=0.21]{figs/data_model}
\caption{An Example of Data Model in 2D. $(\overline{m}=2)$.}
\label{datamodel}
\end{figure}
Clustering the available data into $m$ mutually exclusive clusters results in $m$ centers. Each of these $m$ clusters is denoted by $C_{mj}$, where the subscript $j$ pertains to the $j^{th}$ cluster. Similar to (\ref{eq:added_1}), the centers of this clustering are
\begin{eqnarray}\label{estt}
\rm{Estimated \ centers}&:& [{\hat c}_{m1} \; \cdots \; {\hat c}_{mj} \; \dots \; {\hat c} _{mm}]
\end{eqnarray}
Note that the overbar notation is now replaced by a hat to represent estimation, and the new subscript has two elements: the number of clusters $m$ used in the clustering, and the index $j$ of the cluster.
In this case, the available data $x$ in (1) is now denoted by $x_{mj}$, which means that the data has been clustered into $m$ clusters and is now a member of the $j$th cluster. Each element of the cluster with center $\hat{c}_{mj}$, now denoted by $x_{mj}$, has been generated by a true unknown center. In each $m$-clustering, each $x(i)$ becomes an $x_{mj}$; its true center and covariance are now denoted by $\overline{c}_{x_{mj}}$ and $\overline{\Sigma}_{x_{mj}}$, and we can represent (1) in the following form:
\begin{equation} \label{eq:1mj}
x_{mj} = \overline{c}_{x_{mj}} + \overline{w}_{x_{mj}}
\end{equation}
Therefore, each element of the $j$th cluster in $m$-clustering has been generated by its true center and points to the estimated center of this cluster:
\begin{equation}\label{erl}
x_{mj} \in C_{mj}:\;\; \overline{c}_{x_{mj}} \rightarrow x_{mj} \rightarrow \hat{c}_{mj}
\end{equation}
where the estimated center of the $j^{th}$ cluster is denoted by $\hat{c}_{mj}$, and is calculated by averaging the $j^{th}$ cluster members:
\begin{equation}\label{cmj}
\hat{c}_{mj} = \frac{1}{n_{mj}} \sum_{i=1}^{n_{mj}} x_{mj}(i)
\end{equation}
and $n_{mj}$ is the number of elements in $C_{mj}$.
Figure \ref{fig:clust_example} illustrates an example of $m$-clustering. As the figure shows, the true number of clusters is two, $\overline{m}=2$, with two true cluster centers $\overline{c}(1)$ and $\overline{c}(2)$. The figure shows the clustering results for $m=1,2,3$. The figure also shows one element of the first cluster as $x$. In the case of 2-clustering ($m=2$),
Figure \ref{fig:clust_example}(a) shows the two estimated centers $\hat{c}_{21}$ and $\hat{c}_{22}$. Note that in this scenario $x$ is also a member of $x_{21}$ as it belongs to the first estimated cluster. On the other hand, in the case of $m=3$, in Figure \ref{fig:clust_example}(c), $x$ is a member of the second estimated cluster $C_{32}$ and is a member of set $x_{32}$.
\subsection{ACE in m-clustering}
ACE, denoted by $z_{m}$, is a measure of the error between the estimated cluster centers in (\ref{estt}) and the true cluster centers defined in (\ref{eq:added_1}). In each $m$-clustering, for the $j^{th}$ cluster $C_{mj}$, this error is
\begin{equation}\label{zsmj}
\rm{Center \ Error \ for \ cluster \ } C_{mj}: z_{mj} = \sum_{i=1}^{n_{mj}} \norm{ \bar{c}_{x_{mj}(i)} -\hat{c}_{mj} }_2^2
\end{equation}
Note that the summation is over elements of each cluster.
The ACE for $m$-clustering is defined as the summation of all the cluster center errors in (\ref{zsmj}) over all $m$ clusters, divided by the total number of elements $N$:
\begin{equation} \label{eq:6}
z_{m} = \frac{1}{N} \sum_{j=1}^{m}z_{{mj}}
\end{equation}
\begin{figure*}
\centering
\begin{subfigure}{0.4\textwidth}
\raggedright
\includegraphics[width=1.1\linewidth]{figs/ace_k2}
\caption{$m$ Clustering $(m=2)$ and $x \in C_{m1}$}
\label{overestimated_m}
\end{subfigure}\hspace{15mm}
\begin{subfigure}{0.4\textwidth}
\raggedleft
\includegraphics[width=1 \linewidth]{figs/ace_k1}
\caption{$m$ Clustering $(m=1)$ and $x \in C_{m1}$}
\label{underestimated_m}
\end{subfigure
\begin{subfigure}{0.4\textwidth}
\raggedright
\includegraphics[width=1\linewidth]{figs/ace_k3}
\caption{$m$ Clustering $(m=3)$ and $x \in C_{m2}$}
\label{overestimated_m}
\end{subfigure}
\caption{Example of Central Error for observed sample $x$.}
\label{fig:clust_example}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{figs/zsm}
\caption{Average Central Error as a Function of $m$}
\label{minace}
\end{figure*}
Figure \ref{fig:clust_example} also illustrates an example of the behavior of the central error in a scenario where the true number of clusters is two, $\overline{m}=2$. As $x$ is generated by the first cluster, its associated correct center is $\bar {c}(1)$. Therefore, the central error for the shown $x$ in the case of 2-clustering ($m=2$) is $\norm{\overline{c}(1) - \hat{c}_{21}}_2^2$. In the case of $m=1$, the central error associated with $x$ is $ \norm{\overline{c}(1) - \hat{c}_{11}}_2^2$, which is a much larger value. On the other hand, in the case of $m=3$ the associated central error for $x$ is $\norm{\overline{c}(1) - \hat{c}_{32}}_2^2$, which is again larger than in the first case ($m=2$). The associated central errors for the other data samples behave similarly. Therefore, as we perform $m$-clustering where $m$ becomes further (smaller or larger) from the true number of clusters $\overline{m}$, we expect this distance to increase for each $x$. Consequently, the sum of all these central errors over all the data, in (\ref{eq:6}), is expected to have its minimum value at $m=\overline{m}$. Figure \ref{minace} shows the behavior of the average central error with respect to $m$ for the data set depicted in Figure \ref{fig:clust_example}.
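In simulation, where the true centers are known, the ACE curve of Figure \ref{minace} can be reproduced directly (an oracle computation for illustration only; in practice the true centers are unavailable, which motivates the estimators derived next):
\begin{verbatim}
# Oracle ACE z_m over a range of m; its minimum sits at m = m_bar.
import numpy as np
from sklearn.cluster import KMeans

def oracle_ace(x, true_centers, labels, m_range):
    z = []
    for m in m_range:
        km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(x)
        c_hat = km.cluster_centers_[km.labels_]  # estimated center per sample
        c_bar = true_centers[labels]             # true center per sample
        z.append(np.mean(np.sum((c_bar - c_hat) ** 2, axis=1)))
    return z
\end{verbatim}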
In the following sections we study ACE closely and provide a method of estimating the unavailable ACE by $only$ using the available data and without the knowledge of the true cluster centers.
\subsection{Mean and Variance of ACE}
The average central error, $z_{{mj}}$ in (\ref{zsmj}), is a sample of random variable $Z_{{mj}}$. Here, we provide the expected value and variance of ACE in (\ref{eq:6}).
\textit{Lemma 1:} The central error for the $j^{th}$ cluster in $m$ clustering, $z_{{mj}}$ in (\ref{zsmj}), is a sample of random variable $Z_{{mj}}$ that is a function of random vector $\overline{W}_{x_{mj}(i)}$, defined in (\ref{eq:dd_mod}):
\begin{equation}
\label{eq:10}
\begin{split}
Z_{{mj}} = \norm{\Delta_{{mj}}}_2^2+ \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(i)} + \\ \frac{1}{n_{mj}}\sum\limits_{i \neq k}^{n_{mj}}\overline{W}_{x_{mj}(i)}^T\overline{W}_{x_{mj}(k)}
\end{split}
\end{equation}
with the following expected value and variance of $Z_{{mj}}$
\begin{equation}\label{eq:12}
E[Z_{{mj}}] = \norm{\Delta_{{mj}}}_2^2 + \frac{1}{n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})
\end{equation}
\begin{equation}\label{eq:13}
\begin{split}
Var[Z_{{mj}}] = \frac{2}{{n_{mj}}^2}\sum_{i=1}^{n_{mj}}tr((\overline{\wedge}_{x_{mj}(i)})^2) + \\
\frac{2}{{n_{mj}}^2}\sum\limits_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)}\overline{\wedge}_{x_{mj}(k)})
\end{split}
\end{equation}
where $tr(\cdot)$ is the trace operator, $\overline{\wedge}_{x_{mj}(i)}$ is a diagonal matrix whose entries are the eigenvalues of the covariance of
$\overline{W}_{x_{mj}(i)}$, and $\norm{\Delta_{{mj}}}_2^2$ is
\begin{eqnarray}
\norm{\Delta_{{mj}}}_2^2 = \norm{\overline{c}_{x_{mj}}A_{mj}}_2^2,
\end{eqnarray}
Matrix $A_{mj}$ is an $n_{mj}$ by $n_{mj}$ matrix
\begin{equation}\label{eq:11}
A_{mj} =
\begin{bmatrix}
\frac{n_{mj} - 1}{n_{mj}}& \frac{-1}{n_{mj}} & \dots{} & \frac{-1}{n_{mj}} \\
\frac{-1}{n_{mj}} & \frac{n_{mj} - 1}{n_{mj}} &\frac{-1}{n_{mj}} &\vdots\\
\vdots{}&\dots{}&\ddots{} & \frac{-1}{n_{mj}}\\
\frac{-1}{n_{mj}} &\dots{} & \frac{-1}{n_{mj}} & \frac{n_{mj} - 1}{n_{mj}}
\end{bmatrix}
\end{equation}
and $\bar c_{x_{mj}}$ is a vector of all the associated $\bar c_{x_{mj}(i)}$s for the elements of $C_{mj}$.
\textit{Proof} in Appendix \ref{app_lemma1}.
Consequently, the expected value and variance of the overall ACE, $Z_{m}$, in (\ref{eq:6}), are:
\begin{equation}\label{eq:14}
E[Z_{m}] = \frac{1}{N} \sum_{j=1}^m E[Z_{{mj}}]
\end{equation}
\begin{equation}\label{eq:15}
Var[Z_{m}] = \frac{1}{N^2} \sum_{j=1}^m Var[Z_{{mj}}]
\end{equation}
Note that the variance of $Z_{m}$ is also the summation of the individual variances of the $Z_{{mj}}$'s in (\ref{eq:15}). Due to the independence of $W_{x(i)}$ and $W_{x(k)}$ $(i \neq k)$, $E[Z_{{mj}}Z_{{mi}}]= E[Z_{{mj}}]E[Z_{{mi}}]$, and therefore the covariance between $Z_{{mj}}$ and $Z_{{mi}}$ is equal to zero for $j \neq i$.
From (\ref{eq:12}), the expected value of $Z_{{mj}}$ has two terms. The first term, $\norm{\Delta_{{mj}}}_2^2 $, is a function of the unknown true cluster centers. This term is a decreasing function of $m$. The second term is a function of the cluster covariance and is monotonically proportional to $m$. As a result, there is a point at which $E[Z_{m}]$ reaches its minimum value at some $m$. This is a manifestation of a form of bias-variance trade-off \cite{ref:8},\cite{ref:8.1}.
\subsection{ACE Mean Estimation}
Given the available data, no direct information on ACE, $z_{m}$, is available. Here we propose a method to use the available data to estimate ACE.
\subsubsection{Cluster Compactness}
The cluster compactness, denoted by $y_{m}$, is the available error between the samples of a cluster and its estimated cluster center. The cluster compactness of the $j^{th}$ cluster in $m$-clustering is
\begin{equation}\label{eq:9}
y_{{mj}}=\sum_{i=1}^{n_{mj}} \norm{{{x_{mj}(i)} - \hat{c}_{mj}}}_2^2
\end{equation}
\begin{equation}\label{eq:8}
m\rm{-clustering \ Compactness: \ }\mathit{y_{m} = \frac{1}{N} \sum_{j=1}^{m}y_{{mj}}}
\end{equation}
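In terms of standard K-means software, $y_{m}$ is simply the normalized K-means objective; for instance (an illustrative sketch using scikit-learn, not a dependency of this work):
\begin{verbatim}
# y_m = (1/N) * sum_j y_mj is the K-means inertia divided by N.
from sklearn.cluster import KMeans

def compactness(x, m):
    km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(x)
    return km.inertia_ / len(x)
\end{verbatim}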
To estimate ACE we use the available cluster compactness. The following Lemma connects the structure of mean and variance of cluster compactness with that of ACE.
\textit{Lemma 2}: The observed cluster compactness $y_{{mj}}$, in (\ref{eq:9}), is a sample of random variable $Y_{{mj}}$ with the following expected value and variance:
\begin{equation} \label{eq:23}
E[Y_{{mj}}] = \norm{\Delta_{{mj}}}_2^2 + \frac{(n_{mj} - 1)}{n_{mj}} \sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})
\end{equation}
\begin{equation}\label{eq:24}
\begin{split}
Var[Y_{{mj}}] = \frac{2(n_{mj}-1)^2}{n_{mj}^2}\sum_{i=1}^{n_{mj}} tr((\overline{\wedge}_{x_{mj}(i)})^2) + \\
\frac{2}{n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)}) + \frac{4}{d\times n_{mj}} \norm{\Delta_{{mj}}}_2^2 \sum_{i=1}^{n_{mj}} tr(\overline{\wedge}_{x_{mj}(i)})
\end{split}
\end{equation}
\textit{Proof} in Appendix \ref{app_lemma_2}.
Consequently, the expected value and variance of the overall $m$-clustering compactness, $Y_{m}$, are:
\begin{equation}\label{eq:25}
E[Y_{m}] = \frac{1}{N} \sum_{j=1}^m E[Y_{{mj}}]
\end{equation}
\begin{equation}\label{eq:26}
Var[Y_{m}] = \frac{1}{N^2} \sum_{j=1}^m Var[Y_{{mj}}]
\end{equation}
Note that the variance of $Y_{m}$ is a summation of individual variances of $Y_{{mj}}$ due to the independence of $W_{x(i)}$ and $W_{x(k)}$ $(i \neq k)$.
\subsubsection{Probabilistic bounds of ${\norm{\Delta_{{mj}}}_2^2 }$ using cluster compactness}
Inspecting \textit{Lemma 1} and \textit{Lemma 2}, it can be seen that the mean of both $Z_{{mj}}$ and $Y_{{mj}}$ are functions of the unknown term $\norm{\Delta_{{mj}}}_2^2$. The following theorem provides probabilistic bounds on this value by using the available $y_{{mj}}$. \\
\textit{Theorem 1}: With validation probability $P_v=1-\frac{1}{\alpha_{mj}^2}$, $\norm{\Delta_{{mj}}}_2^2$ is bounded by
\begin{equation}
\label{bound_delta}
\underline{\norm{\Delta_{{mj}}}_2^2} \leq \norm{\Delta_{{mj}}}_2^2 \leq \overline{\norm{\Delta_{{mj}}}_2^2}
\end{equation}
where
\begin{equation}\label{delta_upp}
\begin{split}
\overline{\norm{\Delta_{{mj}}}_2^2} = \bar{y}_{{mj}} + k_{{mj}} \\
\underline{\norm{\Delta_{{mj}}}_2^2} = \bar{y}_{{mj}} - k_{{mj}}
\end{split}
\end{equation}
and
\begin{equation}
\bar{y}_{{mj}} = (g_{mj}- y_{{mj}}) - \alpha_{mj}^2 \frac{2}{d\times n_{mj}}\sum_{i=1}^{n_{mj}} tr(\overline{\wedge}_{x_{mj}(i)} )
\end{equation}
where $g_{mj}= \frac{(n_{mj} - 1)}{n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)}) $, and
\begin{align}
\label{radical}
k_{{mj}} =& 2\alpha_{mj} [ \nonumber\\
&v_{{mj}}+ \left( \frac{(4\alpha_{mj}^2+2d^2)}{d^2n_{mj}^2}\sum_{i \neq k}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)} \overline{\wedge}_{x_{mj}(k)}) \right) \nonumber \\ &+
\left( \frac{ (4\alpha_{mj}^2+2d^2 n_{mj}^2) } {d^2n_{mj}^2} \sum_{i=1}^{n_{mj}} tr((\overline{\wedge}_{x_{mj}(i)})^2) \right) ] ^{1/2}
\end{align}
and
\begin{equation}
v_{{mj}} = \frac{-4( g_{mj}- y_{{mj}} )} {d\times n_{mj}}\sum_{i=1}^{n_{mj}}tr(\overline{\wedge}_{x_{mj}(i)})
\end{equation}
\textit{Proof}: Using Chebyshev's inequality
\begin{equation}\label{eq:30}
P(|E[Y_{{mj}}] - y_{{mj}}| \leq \alpha_{mj} \sqrt{var[Y_{{mj}}]}) \geq P_v,
\end{equation}
with $P_v=1-\frac{1}{\alpha_{mj}^2}$, the observed $y_{{mj}}$ is bounded by
\begin{equation}\label{ysmj_bounds}
E[Y_{{mj}}] - \alpha_{mj} \sqrt{var[Y_{{mj}}]} \leq y_{{mj}} \leq E[Y_{{mj}}] + \alpha_{mj} \sqrt{var[Y_{{mj}}]}
\end{equation}
Using (\ref{eq:23}) and (\ref{eq:24}) in (\ref{ysmj_bounds}) and solving for the boundaries of the two pairs of inequalities in (\ref{ysmj_bounds}) results in the bounds $\overline{\norm{\Delta_{{mj}}}_2^2}$ and $\underline{\norm{\Delta_{{mj}}}_2^2}$. Details are in Appendix~\ref{app:delta_upp}.
\section{K-MACE}
Using the data to calculate bounds on $\norm{\Delta_{{mj}}}_2^2$, we have bounds on the mean of ACE in (\ref{eq:12}). Here we provide bounds for ACE for $m$-clustering and suggest to use the $m$-clustering that minimizes this criterion.
\subsection{Bounds on ACE}
Employing Chebyshev's inequality, with probability $P_c=1-\frac{1}{\beta^2}$, the ACE, $z_{m}$, lies around the expected value of random variable $Z_{m}$ such that:
\begin{equation}\label{eq:16}
P(|E[Z_{m}] - z_{m}| \leq \beta \sqrt{var[Z_{m}]}) \geq P_c
\end{equation}
Therefore, with confidence probability $P_c$, $z_{m}$ is bounded as follows:
\begin{equation}\label{eq:19}
\underline{z_{{m}}} \leq z_{{m}} \leq \overline{z_{{m}}}
\end{equation}
where $\underline{z_{{m}}}$ and $\overline{z_{{m}}}$ are the resulted lower bound and upper bound of $z_{m}$ respectively
\begin{equation}\label{eq:18}
\overline{z_{{m}}} = E[Z_{{m}}] + \beta \sqrt{var[Z_{{m}}]}
\end{equation}
\begin{equation}\label{eq:17}
\underline{z_{{m}}} = E[Z_{{m}}] - \beta \sqrt{var[Z_{{m}}]}
\end{equation}
In this calculation $E[Z_{{m}}]$ in (\ref{eq:12}) is replaced by its estimate using bounds on $\norm{\Delta_{{mj}}}_2^2$ in (\ref{bound_delta}).
\subsection{Validation and Confidence Probabilities}\label{vall}
The values of the validation and confidence probabilities $P_v$ and $P_c$ are close to one as long as the $\alpha_{mj}$s and $\beta$ are chosen large enough, and certainly larger than one. Simultaneously, the upper bounds and lower bounds on $\norm{\Delta_{{mj}}}_2^2$ in (\ref{bound_delta}) and $z_{m}$ in (\ref{eq:19}) are close to each other as long as $\frac{\beta}{N}$ and
$\frac{\alpha_{mj}}{\sqrt{d \times n_{mj}}}$ are small. Note that both cluster compactness $Y_{{mj}}$, in (\ref{eq:9}) and ACE, $z_{{mj}}$ in (\ref{zsmj}), are summations of squared Euclidean distances. The number of elements in these summations is the number of elements of each cluster and when this number is large enough, by using the Central Limit Theorem (CLT), the inequalities in (\ref{eq:16}) and (\ref{eq:30}) can be turned into equalities in the following forms with $P_v=Q(\alpha_{mj})$, $P_c=Q(\beta)$:
\begin{eqnarray}
P(|E[Y_{{mj}}] - y_{{mj}}| \leq \alpha_{mj} \sqrt{var[Y_{{mj}}]}) = Q(\alpha_{mj})\\
P(|E[Z_{{mj}}] - z_{{mj}}| \leq \beta \sqrt{var[Z_{{mj}}]}) = Q(\beta)
\end{eqnarray}
For example, when the scatter factor $W$ has a Gaussian structure, these two errors are chi-squared, and in practical applications a sum of chi-squared variables with ten or more terms is well approximated by a Gaussian distribution.
\subsection{K-MACE Using the Available Data}
\label{finalstep}
When clustering the data with $k$ clusters, the covariance estimate of the scatter factor in (\ref{eq:dd_mod}) associated with each sample $x(i)$, which now belongs to cluster $C_{kj}$, is the covariance of cluster $C_{kj}$. We denote this covariance estimate for each $x_{kj}$ by $\hat \Sigma_{x_{kj}}$. This covariance estimate is then used in calculating the bounds of $z_{m}$ in (\ref{eq:18}) and (\ref{eq:17}) for $m$-clustering, for all values of $m$. The estimate of $z_{m}$ with this covariance estimate is denoted by $z_{{m,k}}$, and the resulting upper and lower bounds for the ACE are denoted by $\overline{z_{{m,k}}}$ and $\underline{z_{{m,k}}}$. Minimizing this upper bound, we have
\begin{equation}
\label{mk}
\hat{m}_k = \arg \min_m (\overline{z_{{m,k}}})
\end{equation}
If $k$ is the CNC, $\hat{m}_k$ is expected to equal $k$ itself. If this condition does not hold, it means that $k$ is further from the CNC. Consequently, the optimum CNC estimate $\hat{m}$ is the value of $k$ for which the following normalized discrepancy is minimum:
\begin{equation}
\label{hhhhh}
k^* = \arg \min_k ( \frac{\overline{z_{{\hat{m}_{k},k}}} - \overline{z_{{k,k}}}} {\overline{z_{{\hat{m}_{k},k}}}})
\end{equation}
and
\begin{equation}
\label{finalm}
\hat{m} = \hat{m}_{k^*}
\end{equation}
Detailed pseudo-code for the implementation of the K-MACE algorithm is given in Algorithm \ref{kmace_alg}. In this procedure, probabilistic bounds on $||\Delta_{mj}||_2^2$ are calculated first in order to provide the ACE bounds. Note that if the term in line 10 of the algorithm has a negative estimated value, it indicates that we are very far from the true value of $k$ and therefore the case should be discarded. Consequently, for these values of $k$ we set the upper bound to a large value $\Delta_{max}$ so that they are excluded from the comparison. As the algorithm shows, the complexity of the K-MACE method in finding the CNC is linear, $O(dN)$.
\begin{algorithm*}
\centering
\caption{K-MACE Algorithm}
\begin{algorithmic}[1]
\INPUT Data set $\textbf{x} = [x_1,x_2,...,x_N]$, range of m, $[m_{min},m_{max}]$
\OUTPUT Estimate of the number of cluster $\hat{m}$ and optimum clustering solution.
\FOR{($m=m_{min}; m \leq m_{max};m_{++}$)}
\STATE $[C_{m1},C_{m2},..,C_{mm}] =kmeans(x,m)$
\FOR{each cluster $C_{mj} \ j=1,..,m$}
\STATE Solve cluster compactness $y_{mj}$ of cluster $C_{mj}$ using (\ref{eq:9})
\ENDFOR
\STATE Solve for total cluster compactness $y_{m}$ using (\ref{eq:8})
\ENDFOR
\FOR{($k=m_{min}; k \leq m_{max};k_{++}$)}
\STATE Calculate the cluster covariance eigenvalues $\hat{\wedge}_{x_i}$ from the clustering solution $[C_{k1},C_{k2},..,C_{kk}]$, where $\hat{\wedge}_{x_i}$ denotes the eigenvalues of the covariance matrix of the cluster to which $x_i$ belongs.
\STATE Use $\hat{\wedge}_{x_i} \rightarrow \overline{\wedge}_{x_i}$ to calculate the upper bound of $\norm{\Delta_{{mj}}}_2^2$, $\overline{\norm{\Delta_{{mj}}}_2^2}$, using equations (\ref{delta_upp})-(\ref{radical}). In case the value is negative, set $\overline{\norm{\Delta_{{mj}}}_2^2}$ to $\Delta_{max}$.
\STATE From the results of Steps 9 and 10, calculate $E[Z_{{mj}}]$ using (\ref{eq:12}) and $Var[Z_{{mj}}]$ using (\ref{eq:13}).
\STATE Calculate $E[Z_{m}]$ and $Var[Z_{m}]$ using (\ref{eq:14}) and (\ref{eq:15}).
\STATE The upper bound of $z_{m}$ under the assumption that there are $k$ cluster covariances, $\overline{z_{{m,k}}}$, can then be found using (\ref{eq:18}).
\STATE From $\overline{z_{{m,k}}}$, obtain $\hat{m}_k$ using (\ref{mk}).
\STATE Save values $\overline{z_{{\hat{m}_k,k}}}$ and $\overline{z_{{k,k}}}$.
\ENDFOR
\STATE From the values saved in Step 15, the estimated CNC $\hat{m}$ can be found using (\ref{hhhhh}) and then (\ref{finalm}), and the optimum clustering solution is given by $[C_{\hat{m}1},C_{\hat{m}2},..,C_{\hat{m}\hat{m}}]$ (one of the clustering solutions from Step 2).
\end{algorithmic}
\label{kmace_alg}
\end{algorithm*}
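To make the selection rule concrete, the following Python sketch applies (\ref{mk}), (\ref{hhhhh}) and (\ref{finalm}) to precomputed bounds; the nested mapping \texttt{z\_upper[k][m]} holding $\overline{z_{{m,k}}}$ is a hypothetical input of the kind Algorithm \ref{kmace_alg} produces:
\begin{verbatim}
def select_cnc(z_upper, m_values):
    # m_hat[k] = argmin over m of the ACE upper bound computed with
    # k-cluster covariance estimates, as in (mk).
    m_hat = {k: min(m_values, key=lambda m: z_upper[k][m])
             for k in m_values}

    def discrepancy(k):
        # Normalized discrepancy of (hhhhh), minimized over k.
        return (z_upper[k][m_hat[k]] - z_upper[k][k]) / z_upper[k][m_hat[k]]

    k_star = min(m_values, key=discrepancy)
    return m_hat[k_star]   # (finalm)
\end{verbatim}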
\section{Kernel K-MACE}
In this section we adapt the K-MACE algorithm to kernel K-means in order to find the optimum number of clusters in this scenario. In addition, kernel K-MACE also provides the optimum tuning parameter of the Gaussian kernel.
\subsection{Initial Cluster Assignment in Kernel K-MACE}
The kernel K-means algorithm initially assigns data samples to clusters at random. This is similar to K-means; however, the kernel K-means algorithm displays increased sensitivity to the random assignment of data samples to clusters, which decreases the consistency of the clustering results. Here we extend the K-means technique proposed in \cite{mod_kmeans_p} to the kernel K-means scenario to eliminate this sensitivity problem. The approach uses the distance between data samples in feature space to iteratively assign data samples to clusters, hence reducing the number of iterations required for the clustering algorithm to reach convergence.
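The feature-space distances needed for such assignments can be evaluated with the kernel trick alone. The following sketch (assuming a precomputed Gram matrix \texttt{K}) is one way to compute the squared distance from a sample to a cluster mean in feature space:
\begin{verbatim}
import numpy as np

def dist2_to_center(K, i, members):
    # ||phi(x_i) - mean_{y in C} phi(y)||^2 expressed through the
    # Gram matrix only: K_ii - 2*avg_j K_ij + avg_{j,l} K_jl.
    m = np.asarray(members)
    return K[i, i] - 2.0 * K[i, m].mean() + K[np.ix_(m, m)].mean()
\end{verbatim}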
\subsection{K-MACE in feature space}
Calculation of the optimum clustering and the CNC is analogous to that of K-MACE itself. In the feature space we replace the data $x$ with its feature counterpart $\phi$:
\begin{equation} \label{link}
x_{mj} \to \phi_{mj}
\end{equation}
Consequently $\overline{c}(\phi_{mj})$ generates the data and $ \hat{c}_{\phi_{mj}}$ is the respective cluster center estimate in $m$-clustering. ACE and cluster compactness defined in (\ref{eq:6}) and (\ref{eq:8}) are now respectively in the following forms:
\begin{equation} \label{eq:6000}
\begin{gathered}
Z_{s_m} = \frac{1}{N} \sum_{j=1}^{m}Z_{s_{mj}},\;\; Z_{s_{mj}} = {\lVert \overline{c}_{\phi_{mj}} - \hat{c}_{\phi_{mj}} \rVert }_F^2
\end{gathered}
\end{equation}
\begin{equation}\label{eq:18000}
\begin{gathered}
Y_{s_m} = \frac{1}{N} \sum_{j=1}^{m}Y_{s_{mj}},\;\; Y_{s_{mj}} = {\lVert \hat{c}_{\phi_{mj}} - {\phi}_{mj} \rVert}_F^2
\end{gathered}
\end{equation}
\subsection{Optimum Gaussian Kernel Parameter}
The Gaussian kernel function parameter $\sigma$ dictates the structure of the data in feature space and therefore the clustering of a dataset. Very small or very large values of $\sigma$ cause the data to lose its structure, which makes correctly clustering a dataset very difficult. Clustering results are calculated for a range of values $\sigma_k \in [\sigma_{k_{min}},...,\sigma_{k_{max}}]$.
To obtain the best $\sigma_k$, we propose to compute the gradient of the minimum $\overline{Z_{s_{m}}}$ at each $\sigma_k$ and, for each $\sigma_k$ after this minimum has peaked, to add the absolute values of the gradients towards the previous and the next values of $\sigma_k$. The value of $\sigma_k$ that corresponds to the maximum of this sum is chosen, and the corresponding $\hat{m}$ and clustering result are taken as the final result \cite{kkmace_p}:
{\scriptsize\begin{equation}\label{sigma_grad}
\begin{aligned}
\hat \sigma = \arg\max\limits_{\sigma_{k}}\left|\frac{\partial (\min(\overline{Z_{s_m}}(m-1 \to m,\sigma_k)))}{\partial \sigma_k} + \frac{\partial (\min(\overline{Z_{s_m}}(m \to m+1,\sigma_k)))}{\partial \sigma_k}\right| \\
\text{for} \quad \sigma_k > \arg\max\limits_{\sigma_{k}}\left(\min_m (\overline{Z_{s_m}}(m,\sigma_k))\right)
\end{aligned}
\end{equation}}
The minimum $\overline{Z_{s_{m}}}$ rises for increasing $\sigma_k$, as the clusters do not have any structure for very small values of $\sigma_k$, i.e., for $\sigma_k < 1$. As the value of $\sigma_k$ increases to the optimum, a rapid decrease in the minimum $\overline{Z_{s_{m}}}$ occurs, corresponding to the estimated value of $\sigma_k$ and the correct clustering result.
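One schematic reading of (\ref{sigma_grad}), with \texttt{zbar\_min[i]} denoting $\min_m \overline{Z_{s_m}}$ on a grid of kernel widths, is sketched below; it illustrates the rule rather than giving a definitive implementation:
\begin{verbatim}
import numpy as np

def select_sigma(sigmas, zbar_min):
    z = np.asarray(zbar_min, dtype=float)
    peak = int(np.argmax(z))                  # zbar_min rises, peaks, drops
    score = np.full(len(z), -np.inf)
    for i in range(max(peak, 1), len(z) - 1):  # only past the peak
        # summed |gradient| to the previous and next grid values
        score[i] = abs(z[i] - z[i - 1]) + abs(z[i + 1] - z[i])
    return sigmas[int(np.argmax(score))]
\end{verbatim}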
Algorithm 2 gives pseudo-code for Kernel K-MACE; line 2 is used in Gaussian Kernel K-MACE to tune the Gaussian parameter. The computational complexity of kernel K-MACE remains the same as that of kernel K-means, which is $O(N^2)$.
\begin{algorithm}[ht!]
\small
\centering
\caption{Kernel k-MACE Algorithm}
\begin{algorithmic}[1]
\INPUT Data set $\textbf{x} = [x_1,x_2,...,x_N]$, range of m
$[m_{min},m_{max}]$ and range of values for $\sigma_k$ =
$[\sigma_{k_{min}},...,\sigma_{k_{max}}]$ with a fixed interval
\OUTPUT Estimated number of clusters $\hat{m}$, the clustering solution
$[\hat{C}_1,\hat{C}_2,..,\hat{C}_{\hat{m}}]$ and the optimum value of
$\sigma_k$
\STATE K-MACE$(x,m_{min}...m_{max},\sigma_{k_{min}}...\sigma_{k_{max}})$ in
feature space (replacing $kmeans(x,m)$ with the kernel
$kmeans(x,m,\sigma)$ clustering algorithm and (\ref{eq:8}) with
(\ref{eq:18000}))
\STATE Use (\ref{sigma_grad}) to obtain the optimum $\sigma_k$, $\hat\sigma$, the final
clustering solution and $\hat{m}$
\end{algorithmic}
\end{algorithm}
\section{Simulations and Results}
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figs/d1}
\caption{S1: Uniform variance \\ and no overlaps ($N=900$)}
\label{cv1}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figs/d2}
\caption{S2: Varying variance \\ and major overlaps ($N=500$)}
\label{cv2}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figs/d3}
\caption{S3: Varying covariance \\ and minor overlaps ($N=900$)}
\label{cv3}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figs/d4}
\caption{S4: Varying covariance \\ and major overlaps ($N=500$)}
\label{cv4}
\end{subfigure}
\caption{Synthetic Data}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.2]{figs/vi_example}
\caption{ACE and validity index values for data set S4. For each approach, the optimum cluster number occurs at $m=\hat{m}$. ($\overline{m}=9$)}
\label{figure:0}
\end{figure}
We compare the proposed methods with other well-known methods on several synthetic and real data sets. K-MACE is compared with its predecessor MACE-means \cite{ref:9} as well as other well-known validity index methods: CH+K-means \cite{CB}, DB+K-means \cite{DB}, Sil+K-means \cite{sil} and gap+K-means \cite{gap}. We also compare K-MACE and kernel K-MACE (using a Gaussian kernel) with two divisive hierarchical clustering methods that are also partitioning clustering schemes: G-means \cite{gmeans}, which is mainly proposed for Gaussian clustering, and Dip-means \cite{dip-means}, a well-known recent approach. Another class of partitioning clustering approaches is density-based clustering. We compare the methods with DBSCAN \cite{dbscan} and mean-shift \cite{mean_shift}, two of the most well-known algorithms in this category. We also include OPTICS \cite{optics}, which is an improved version of DBSCAN. Note that OPTICS does not provide the clustering solution but only an estimate of the number of clusters. Results of X-means are worse than those of Dip-means and G-means in all our simulations and therefore we do not report them. This observation is consistent with what has already been claimed about G-means and Dip-means outperforming X-means \cite{gmeans}, \cite{dip-means}.
To evaluate the performance of the clustering algorithms, we record the average and standard deviation of the CNC estimate, $E[\hat{m}] \pm std[\hat{m}]$, as well as the accuracy of the CNC estimate. The accuracy is the percentage of runs in which the algorithm identifies the CNC correctly. The Adjusted Rand Index (ARI) \cite{ari} and the Normalized Variation of Information (NVI) \cite{nvi} measure the agreement between the estimated clustering solution and the true partition. ARI scores close to $100\% $ and NVI scores close to $0\%$ indicate full agreement between the algorithm's clustering solution and the true partition.
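For reproducibility, ARI is available in scikit-learn, and one common normalization of the variation of information (our assumption for NVI; see \cite{nvi}) can be computed from mutual-information scores:
\begin{verbatim}
from sklearn.metrics import adjusted_rand_score, mutual_info_score

def nvi(labels_true, labels_pred):
    # VI = H(T) + H(P) - 2 I(T,P), normalized here by the joint
    # entropy H(T,P) = H(T) + H(P) - I(T,P); note I(X,X) = H(X).
    h_t = mutual_info_score(labels_true, labels_true)
    h_p = mutual_info_score(labels_pred, labels_pred)
    i = mutual_info_score(labels_true, labels_pred)
    return (h_t + h_p - 2.0 * i) / (h_t + h_p - i)

labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [0, 0, 1, 2, 2, 2]
print(adjusted_rand_score(labels_true, labels_pred),
      nvi(labels_true, labels_pred))
\end{verbatim}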
\subsection{Synthetic Data}
The first set of generated 2-D data is shown in Figures \ref{cv1} - \ref{cv4}. These data sets have varying complexity in terms of degree of overlap, type of cluster distribution, and number of elements. To generate these data, different samples of scatter factors are generated 100 times around fixed centers $\bar c$. While S1 has nine non-overlapping clusters, two of the five clusters of S2 overlap substantially. Clusters in S1 and S2 have uncorrelated (spherical) scatter factors, i.e., the $\Sigma(i)$s in (\ref{eq:added_1}) are diagonal matrices. In contrast, S3 and S4 have non-diagonal $\Sigma(i)$s (ellipsoidal), which we refer to as correlated scatter factors. Clusters in S3 have some overlap, while clusters in S4 have much more overlap.
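A minimal generator in this spirit (with illustrative centers and covariances, not the exact S1-S4 parameters) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_clusters(centers, covs, n_per_cluster):
    # Draw points around fixed centers with given scatter-factor
    # covariances; diagonal covs give spherical (uncorrelated)
    # clusters, non-diagonal covs give ellipsoidal (correlated) ones.
    X, y = [], []
    for j, (c, S) in enumerate(zip(centers, covs)):
        X.append(rng.multivariate_normal(c, S, size=n_per_cluster))
        y.append(np.full(n_per_cluster, j))
    return np.vstack(X), np.concatenate(y)

# e.g. two correlated clusters with major overlap, in the spirit of S4
centers = [np.zeros(2), np.array([2.0, 0.0])]
covs = [np.array([[1.0, 0.6], [0.6, 1.0]])] * 2
X, y = make_clusters(centers, covs, 100)
\end{verbatim}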
Figure \ref{figure:0} shows validity index values of different approaches that are minimum (or maximum) at their estimated CNC. As the figure shows, the ACE criterion, $z_{m}$, in Figure \ref{figure:0}(f) is the most reliable validity index in this case which points to the correct number of clusters by minimization.
Figure \ref{zsm3d} shows how Algorithm \ref{kmace_alg} is implemented through a 3-dimensional plot to visualize behavior of $\overline{z_{m,k}}$ with respect to $k$ and $m$ for S4. The red line shows the values of $\hat {m}_k$ in (\ref{mk}) that minimize ACE for different values of $k$. The optimum value of $k^*$, in (\ref{hhhhh}), in this case is 9 for which its $\hat m$ is also 9.
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/3d}
\caption{3D View}
\label{3dview}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/top}
\caption{Top View}
\label{topview}
\end{subfigure}
\caption{Example of $\overline{z_{m,k}}$ behavior as a function of $k$ and $m$ for data set S4 shown in Figure \ref{cv4}}
\label{zsm3d}
\end{figure}
\begin{table}
\centering
\scriptsize
\caption{First Synthetic Data results from average of 100 runs. Results are in the form of $E[\hat{m}] \pm std[\hat{m}]$}
\label{art_result}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \begin{tabular}[c]{@{}l@{}}S1\\ m=9\end{tabular} & \begin{tabular}[c]{@{}l@{}}S2\\ m=5\end{tabular} & \begin{tabular}[c]{@{}l@{}}S3\\ m=9\end{tabular} & \begin{tabular}[c]{@{}l@{}}S4\\ m=9\end{tabular} \\ \hline
Kernel K-MACE & {$\mathbf{ 9\pm0}$} & {$\mathbf{5\pm0}$} & {$\mathbf{ 9\pm0}$} & {$\mathbf{ 9\pm0}$} \\
Accuracy (\%) & {$\mathbf{100}$} & {$\mathbf{100}$} & {$\mathbf{100}$} & {$\mathbf{100}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {$\mathbf{92,9}$} & {$\mathbf{98,4}$} & {$\mathbf{88,9}$}
\\ \hline
K-MACE & {$\mathbf{9\pm0}$} & {$\mathbf{5\pm0}$} & $\mathbf{9\pm0}$ & $\mathbf{9\pm0}$ \\
Accuracy (\%) & {$\mathbf{100}$} &$\mathbf{100}$ & $\mathbf{100}$ & $\mathbf{100}$ \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & $\mathbf{90,10}$ & $\mathbf{95,5}$ & $\mathbf{86,12}$
\\ \hline
MACE-means & {$\mathbf{ 9\pm0}$} & {$ 7\pm0.8$} & {$ 12.1\pm0.3$} & {$ 14\pm0 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${70,30}$} & {${70,30}$} & {${80,20}$}
\\ \hline
Dipmeans & {$\mathbf{ 9\pm0}$} & {$ 4.2\pm0.2 $} & {$9\pm0$} & {$ 7\pm0 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${65}$} & {$100$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${76,27}$} & {$95,5$} & {${70,30}$}
\\ \hline
Gmeans & {$\mathbf{ 9\pm0}$} & {$ 6\pm0 $} & {$ 11\pm0 $} & {$ 17\pm0 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${70,30}$} & {${80,20}$} & {${80,20}$}
\\ \hline
DBSCAN & {$\mathbf{ 9\pm0}$} & {$ 2\pm0 $} & {$ 11\pm0$} & {$ 11\pm0 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${0,100}$} & {${80,20}$} & {${80,20}$}
\\ \hline
CH + K-means & {$\mathbf{ 9\pm0}$} & {$4.9\pm0.25$} & {$ 17\pm0$} & {$ 17.4\pm1.30 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${85}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${80,20}$} & {${70,20}$} & {${70,30}$}
\\ \hline
Sil + K-means & {$\mathbf{ 9\pm0}$} & {$ 4\pm0$} & {$9\pm0$} & {$7.8\pm0.4 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {$100$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${80,20}$} & {$95,5$} & {${80,20}$}
\\ \hline
DB + K-means & {$\mathbf{ 9\pm0}$} & {$ 4\pm0$} & {$9\pm0$} & {$ 7.7\pm0.5 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {$100$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${80,20}$} & {$95,5$} & {${80,20}$}
\\ \hline
gap + K-means & {$\mathbf{ 9\pm0}$} & {$5\pm0$} & {${ 14.3\pm2.3}$} & {$15.9\pm1.4$} \\
Accuracy (\%) & {$\mathbf{100}$} & {$100$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {$90,10$} & {${80,20}$} & {${70,20}$}
\\ \hline
Mean-Shift & {$\mathbf{ 9\pm0}$} & {$ 4\pm0$} & {$9\pm0$} & {$7.0\pm0.2 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {${100}$} & {${0}$} \\
ARI,NVI(\%) & {$\mathbf{100,0}$} & {${80,20}$} & {$95,5$} & {${70,20}$}
\\ \hline
Optics & {$\mathbf{9\pm0}$} & {$ 7\pm0$} & {$11\pm0$} & {$ 8\pm0 $} \\
Accuracy (\%) & {$\mathbf{100}$} & {${0}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) &NA &NA &NA &NA
\\ \hline
\end{tabular}
\end{table}
Table \ref{art_result} compares the results for the S1-S4 data sets. While all methods provide perfect results for S1, a well-behaved set of clusters, only a few methods are comparable with K-MACE and Kernel K-MACE for S2 and S3, and for data set S4, the most complicated set with the most overlap, the only methods that outperform the others are the K-MACE ones. Note that for all these scenarios the standard deviation of both K-MACE approaches is zero, which indicates the precision and robustness of the algorithm to changes of the scatter factor samples. Not only are the K-MACE methods successful in CNC estimation, but they also provide the highest ARI and lowest NVI values, indicating better clustering results as well.
The second set of generated data consists of six clustering data sets U1-U6 of length 900 with dimension ten, a higher dimension compared to the first set S1-S4. Each data set is generated by randomly choosing 9 centers in a hypercube for each of the 100 runs. For all these data sets the standard deviation of the scatter factors per dimension is smaller than 0.3. For U1, U3 and U5 the hypercube side length is 10, while for U2, U4 and U6 the hypercube side length is shrunk to the smaller value of 8. Consequently, the latter data sets have a much greater chance of overlap than the former.
Simulation results are in Table \ref{art_result2}.
As the table shows, while almost all methods are successful for the easier case of U1, for the other data sets the only methods that consistently outperform the rest are the K-MACE ones. Mean-shift and Dip-means were able to estimate the CNC correctly for some data sets. It is worth mentioning that Kernel K-MACE shows its expected superiority in the cases with major overlap. While K-MACE is comparable with kernel K-MACE for U2, for U4 and U6 Kernel K-MACE performs better and with a very high accuracy.
The third set of synthetic data is from \cite{d31}-\cite{s15} and is depicted in Figures \ref{aggregation} - \ref{d31}. As the figures show, the Aggregation data set has non-Gaussian-shaped clusters, the clusters of S15 have very few points, and the 31 clusters of D31 are touching. Simulation results for these data are presented in Table \ref{art_result3}. As the table shows, while the K-MACE approaches choose 6 clusters for Aggregation instead of 7, they are nevertheless still the winners among all the methods, as they provide the closest CNC estimate and, more importantly, large ARIs and low NVIs. The next best method for this data set is DBSCAN. As S15 is a much easier data set with a clean clustering structure, a couple of methods, including K-MACE, perform equally well on it. For D31, K-MACE and especially Kernel K-MACE on average outperform the other methods, even though G-means is the only other method that provides the correct CNC estimate. This example shows the importance of the overall performance of a clustering method. As the table shows, kernel K-MACE has $95\%$ accuracy, which is the largest accuracy among the methods after G-means. In addition, it has the highest ARI and the lowest NVI. It is worth mentioning that all clustering methods have tuning parameters, which we set to their default values \cite{dip-means}. For example, DBSCAN has two parameters, $\epsilon$ and $MinPts$, that indirectly control the number of clusters. It is known that DBSCAN is fairly sensitive to these parameters and, if they are not finely tuned, can produce a bad clustering solution.
Another example is G-means, which uses the `Anderson-Darling threshold' \cite{gmeans} for its splitting procedure. This value is $0.95$ by default. Table \ref{art_result3} also shows the results of changing this value to $0.7$. As the table shows, the G-means performance is degraded with this new value. The K-MACE approaches have the parameters $\alpha_{mj}$ and $\beta$. The default value that we used is five for all these parameters. This value guarantees confidence and validation probabilities larger than $0.96=1-\frac{1}{25}$. Note that K-MACE is not sensitive to changing the default values: changing this value to larger numbers, as long as the condition in Section \ref{vall} is satisfied, still provides the same results as the default value, with very small variation.
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/aggregation}
\caption{Aggregation Data Set}
\label{aggregation}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/s15}
\caption{S15 Data Set}
\label{s15}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/d31}
\caption{D31 Data Set}
\label{d31}
\end{subfigure}
\caption{Synthetic Data Set. }
\end{figure}
\begin{table*}
\centering
\scriptsize
\caption{Second Synthetic Data Set, nine cluster sets ($\overline{m}=9$) with higher dimension ($d=10$) and $N=900$, averaged over 100 runs. Results are in the form of $E[\hat{m}] \pm std[\hat{m}]$. (Uncorr-Scatt stands for uncorrelated (spherical) scatter factors, Corr-Scatt stands for Correlated (ellipsoidal) scatter factors)}
\label{art_result2}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& \begin{tabular}[c]{@{}l@{}}\textbf{U1}\\ Uncorr-Scatt \\ Same variance \\minor overlap \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{U2}\\ Uncorr-Scatt\\ Same variance \\major overlap\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{U3}\\Uncorr-Scatt\\ Different variance \\minor overlap\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{U4}\\Uncorr-Scatt\\ Different variance \\ major overlap\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{U5}\\Corr-Scatt\\ Different Covariance \\ minor overlap\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{U6}\\Corr-Scatt\\ Different Covariance \\ major overlap\end{tabular} \\ \hline
Kernel K-MACE & {$\mathbf{ 9\pm0}$} & {$\mathbf{9\pm0}$} & {$\mathbf{ 9\pm0}$} & {$\mathbf{ 9\pm0}$} & {$\mathbf{9\pm0}$} & {$\mathbf{ 9\pm0}$} \\
Accuracy (\%) & {$\mathbf{100}$} & {$\mathbf{100}$} & {$\mathbf{100}$} & {$\mathbf{95}$} & {$\mathbf{100}$} & {$\mathbf{96}$} \\
ARI,NVI(\%) & {$\mathbf{98,2}$} & {$\mathbf{93,8}$} & {$\mathbf{96,5}$} &
{$\mathbf{95,5}$} & {$\mathbf{96,3}$} & {$\mathbf{95,5}$}
\\ \hline
K-MACE & {$\mathbf{ 9\pm0}$} & {$\mathbf{9\pm0}$} & {$\mathbf{9\pm0}$} & {$ 8.6\pm0.7$} & {$\mathbf{9\pm0}$} & {$8.8\pm0.1$} \\
Accuracy (\%) & {$\mathbf{100}$} & {$\mathbf{100}$} & {$\mathbf{100}$} & {$92$} & {$\mathbf{100}$} & {$95$} \\
ARI,NVI(\%) & {$\mathbf{95,5}$} & {$\mathbf{90,10}$} & {$\mathbf{90,5}$} & {$90,10$} & {$\mathbf{95,5}$} & {$95,10$}
\\ \hline
MACE-means &{${ 11\pm0}$} & {$14.1\pm0.2$} & {${ 10\pm0.2}$} & {$12.8\pm1.3$} & {$11.8\pm1.2$} & {$13.5\pm2.1$} \\
Accuracy (\%) & {$0$} & {${0}$} & {${5}$} & {${12}$} & {${15}$} & {${8}$} \\
ARI,NVI(\%) & {$90,10$} & {$80,20$} & {${90,10}$} & {$80,10$} & {$70,20$} & {$80,20$}
\\ \hline
Dipmeans & {$ 9\pm0$} & {$9\pm0$} & {${ 8.6\pm1.4}$} & {$ 6.84\pm2.62 $} & {$9\pm0$} & {$ 8.56\pm1.51 $} \\
Accuracy (\%) & {$100$} & {$100$} & {${69}$} & {${61}$} & {$100$} & {${76}$} \\
ARI,NVI(\%) & {$95,5$} & {$90,10$} & {$80,20$} & {$70,20$} & {$90,10$} & {$80,20$}
\\ \hline
Gmeans & {$9\pm0$} & {$ 9.16\pm0.42 $} & {$ 9.12\pm0.38 $} & {$ 12.32\pm4.37 $} & {$ 9.12\pm0.38 $} & {$ 9.14\pm0.35 $} \\
Accuracy (\%) & {$100$} & {${88}$} & {${92}$} & {${17}$} & {${88}$} & {${91}$}\\
ARI,NVI(\%) & {$95,5$} & {${90,10}$} &{90,10} & {$80,10$} &{90,10} & {$90,10$}
\\ \hline
DBSCAN & {$8.8\pm0.4$} & {$8.9\pm0.4$} & {$6.9\pm1.6$} & {$ 11\pm0 $} & {$8.9\pm0.2$} & {$8.7\pm0.5$} \\
Accuracy (\%) & {${87}$} & {${96}$} & {${0}$} & {${0}$} & {${95}$} & {${79}$}\\
ARI,NVI(\%) & {${90,10}$} &{90,10} &{70,25} & {$90,10$} &{90,10} &{90,10}
\\ \hline
CH + K-means & {$9\pm0$} & {$9.1\pm0.3$} & {${9.1\pm0.3}$} & {$ 17.4\pm1.30 $} & {${ 18\pm1.2}$} & {${ 18.7\pm0.8}$} \\
Accuracy (\%) & {$100$} & {${90}$} & {${90}$} & {${0}$} & {${0}$} & {${0}$} \\
ARI,NVI(\%) & {$95,5$} &{95,5} &{90,10} & {$80,30$} & {${40,50}$} &{40,50}
\\ \hline
Sil + K-means & {$9\pm0$} & {$9\pm0.3$} & {$ 9\pm0.3$} & {$7.8\pm0.4 $} & {$ 9\pm0$} & {${ 8.2\pm1.1}$} \\
Accuracy (\%) & {$100$} & {${96}$} & {${96}$} & {${0}$} & {$100$} & {${68}$}\\
ARI,NVI(\%) & {$95,5$} &{95,5} &{95,2} & {$80,20$} & {$90,10$} &{90,10}
\\ \hline
DB + K-means & {$9\pm0$} & {$8.9\pm0.3$} & {$ 8.9\pm0.3$} & {$ 7.7\pm0.5 $} & {$9\pm0$} & {${ 7.1\pm1.5}$} \\
Accuracy (\%) & {$100$} & {${91}$} & {${91}$} & {${6}$} & {$100$} & {${2}$}\\
ARI,NVI(\%) & {$95,5$} &{95,5} &{90,10} & {$80,20$} & {$90,10$} &{80,10}
\\ \hline
gap + K-means & {$9\pm0$} & {$9.1\pm0.3$} & {${9.1\pm0.3}$} & {$15.9\pm1.4$} & {$11.8\pm5.5$} & {${ 13.1\pm0.8}$} \\
Accuracy (\%) & {$100$} & {${91}$} & {${91}$} & {${0}$} & {${6}$} & {${0}$}\\
ARI,NVI(\%) & {$95,5$} &{95,7} &{90,10} & {$70,30$} & {$80,30$} &{80,25}
\\ \hline
Mean-Shift & {$9\pm0$} & {$ 9\pm0$} & {${ 9\pm0.1}$} & {$7.0\pm0.2 $} & {$9\pm0$} & {$8.9\pm0.2$} \\
Accuracy (\%) & {$100$} & {$100$} & {${96}$} & {${18}$} & {$100$} & {${96}$}\\
ARI,NVI(\%) & {$95,5$} & {$90,10$} &{95,5} & {$80,20$} & {$90,10$} &{90,10}
\\ \hline
Optics & {$9.0\pm1.8$} & {$9.0\pm1.5$} & {$9.1\pm1.4$} & {$ 8\pm0 $} & {$9.6\pm1.3$} & {$9.4\pm1.9$} \\
Accuracy (\%) & {${80}$} & {${79}$} & {${82}$} & {${0}$} & {${83}$} & {$74$} \\
ARI,NVI(\%) &NA &NA &NA &NA &NA &NA
\\ \hline
\end{tabular}
\end{table*}
\begin{table}
\centering
\scriptsize
\caption{Third set of Synthetic data results from averaging of 100 runs. Results are in the form of $E[\hat{m}] \pm std[\hat{m}]$}
\label{art_result3}
\begin{tabular}{ | c | c | c | c |}
\hline
{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } &{aggregation} & {s15} & {D31} \\
{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } &{$\overline{m}=7$} & {$\overline{m}=15$} & {$\overline{m}=31$} \\ \hline
Kernel K-MACE & {$\mathbf{6\pm0}$} & {$ \mathbf{15\pm0}$} & {$\mathbf{31\pm0}$} \\
Accuracy (\%) & {$\mathbf{0}$} & {$\mathbf{100}$} & {$\mathbf{95}$} \\
ARI,NVI(\%) & {$\mathbf{91,12}$} & {$\mathbf{100,0}$} & {$\mathbf{94,7}$}
\\ \hline
K-MACE & {$\mathbf{6\pm0}$} & {$ \mathbf{15\pm0}$} & {${30.9\pm1.9}$} \\
Accuracy (\%) & {${0}$} & {$\mathbf{100}$} & {${88}$} \\
ARI,NVI(\%) & {$\mathbf{90,10}$} & {$\mathbf{100,0}$} & {${90,10}$}
\\ \hline
MACE-means & {${ 16\pm0.6}$} & {$\mathbf{15\pm0}$} & {$ 31.8\pm0.6$} \\
Accuracy (\%) & {${0}$} & {$\mathbf{100}$} & {${72}$} \\
ARI,NVI(\%) & {${40,40}$} & {$\mathbf{100,0}$} & {${90,10}$}
\\ \hline
Dipmeans & {$5\pm0$} & {$ 8\pm0 $} & {$ 23\pm0 $} \\
Accuracy (\%) & {${0}$} & {$0$} & {${0}$} \\
ARI,NVI(\%) & {${80,20}$} & {${70,30}$} & {${86,12}$}
\\ \hline
Gmeans($\alpha = 0.95$) & {$ 9\pm0$} & {${16\pm0}$} & {$ {31\pm0}$} \\
Accuracy (\%) & {${0}$} & {${0}$} & {${100}$} \\
ARI,NVI(\%) & {${60,30}$} & {${95,6}$} &{${86,12}$}
\\ \hline
Gmeans($\alpha = 0.7$) & {$ 13\pm0$} & {$19.9\pm0.35 $} & {${ 46\pm2.9}$} \\
Accuracy (\%) & {${0}$} & {$0$} & {${0}$} \\
ARI,NVI(\%) & {${60,30}$} & {${80,20}$} & {${86,12}$}
\\ \hline
DBSCAN & {${5.2\pm0.2}$} & {$ \mathbf{15\pm0} $} & {$ 34\pm0$} \\
Accuracy (\%) & {${0}$} & {$\mathbf{100}$} & {$0$} \\
ARI,NVI(\%) & {${90,10}$} & {$\mathbf{100,0}$} & {${50,30}$}
\\ \hline
CH + K-means & {${ 15.7\pm0.9}$} & {$ 15.3\pm0.5$} & {$ 3.4\pm1.1$} \\
Accuracy (\%) & {${0}$} & {$90$} & {$0$} \\
ARI,NVI(\%) & {${40,40}$} & {${95,5}$} & {${90,10}$}
\\ \hline
Sil + K-means & {${ 3\pm0}$} & {$ 15.3\pm0.5$} & {${30.9\pm1.5}$} \\
Accuracy (\%) & {${0}$} & {$90$} & {${83}$} \\
ARI,NVI(\%) & {${70,40}$} & {${95,5}$} & {${90,10}$}
\\ \hline
DB + K-means & {${ 4\pm0}$} & {$ 12.2\pm3.5$} & {${ 28.4\pm2.0}$} \\
Accuracy (\%) & {${0}$} & {$35$} & {$0$} \\
ARI,NVI(\%) & {${80,30}$} & {${90,10}$} & {${90,10}$}
\\ \hline
gap + K-means & {$14.5\pm0.8$} & {$ 12\pm2.5$} & {${ 33.5\pm1.3}$} \\
Accuracy (\%) & {${0}$} & {$18$} & {$60$} \\
ARI,NVI(\%) & {${40,40}$} & {${90,10}$} & {${90,10}$}
\\ \hline
Mean-shift & {${5.5\pm0.8}$} &{$ \mathbf{15\pm0}$} & {${ 12\pm1.8}$} \\
Accuracy (\%) & {${0}$} & {$\mathbf{100}$} & {$0$} \\
ARI,NVI(\%) & {${70,40}$} & {$\mathbf{100,0}$} & {${20,70}$}
\\ \hline
Optics & {${ 8\pm0}$} & {$ {15\pm0}$} & {${ 16.8\pm2.3}$} \\
Accuracy (\%) & {${0}$} & {${100}$} & {$0$} \\
ARI,NVI(\%) &NA & NA & NA
\\ \hline
\end{tabular}
\end{table}
\subsection{Real Data}
Results on eight real-world data sets from the UCI machine learning repository\footnote{http://archive.ics.uci.edu/ml/}, namely Seeds, Iris, Vertebral, Wine, Breast, Wisconsin Diagnostic Breast Cancer (WDBC), Ecoli and Multiple Features (Dutch handwritten digits), are shown in Table \ref{real1_result}.
The actual number of clusters $\overline{m}$, the dimension $d$ of the data, and the number of samples $N$ in each data set are provided in the table. As the table shows, the K-MACE methods outperform most of the other methods and provide accurate CNC estimates for all data sets. Even for Ecoli with 8 clusters, the best methods among all are the K-MACE ones, although the K-MACE CNC estimate in this case is close to 5. The reason is the structure of the data, for which the numbers of elements $n_{mj}$ of the clusters are [143, 77, 52, 35, 20, 5, 2, 2]. As these numbers show, the last three clusters have very few elements and cannot be detected as independent clusters by the K-MACE methods. This perhaps similarly affects the other methods, such as G-means.
\begin{table*}
\scriptsize
\centering
\caption{Real Data examples (average of 50 runs)}
\label{real1_result}
\newlength\q
\setlength\q{\dimexpr .05\textwidth -1 \tabcolsep}
\noindent
\begin{tabular}{ | c | c | c | c | c | c | c | c | c |}
\hline
{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } &{\begin{tabular}[c]{@{}l@{}} Seeds\\$\overline{m}$=3, d=7\\N=210\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} Iris\\ $\overline{m}$=3, d=4\\ N=150\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} Vertebral \\ $\overline{m}$=3,d=6\\ N=310\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} Wine\\ $\overline{m}$=3, d=13\\ N=178\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} Breast\\ $\overline{m}$=2,d=9 \\ N=698\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} WDBC\\ $\overline{m}$=2,d=32 \\ N=569\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} Ecoli\\ $\overline{m}$=8,d=7 \\ N=335\end{tabular}} & {\begin{tabular}[c]{@{}l@{}} MF\\ $\overline{m}$=10,d=216 \\ N=2000\end{tabular}} \\ \hline
{Kernel K-MACE} & {$\mathbf{3\pm0}$} & {$\mathbf{3\pm0}$}&{$\mathbf{3\pm0}$} &{$\mathbf{3\pm0}$} & {$\mathbf{2\pm0}$} &{$\mathbf{2\pm0}$} &{$\mathbf{5\pm0}$} &{$\mathbf{10\pm0}$}\\
\hline
{K-MACE} & {$\mathbf{3\pm0}$} & {$\mathbf{3\pm0}$}&{$\mathbf{3\pm0}$} &{$\mathbf{3\pm0}$} & {$\mathbf{2\pm0}$} &{$\mathbf{2\pm0}$} &{$4.9\pm0.3$} &{$\mathbf{10\pm0}$}\\
\hline
{MACE-means} & {$\mathbf{3\pm0}$} &{$6.5\pm0.5$} &{$1\pm0$} &{$11.2\pm0$} & {$1\pm0$} &{$7.2\pm2.1$} &{$1\pm0$} &{$5\pm0$} \\
\hline
{DIP-means} & {$1\pm0$} &{$2\pm0$} &{$1\pm0$} &{$1\pm0$} & {$12\pm0$} &{$1\pm0$} &{$2\pm0$} &{$4\pm0$}\\
\hline
{G-means} & {$4\pm0$} &{$4\pm0$} &{$5\pm0$} &{$2\pm0$} & {$96\pm0$} &{$14\pm0$} &{${4\pm0}$} &{$35\pm0$}\\
\hline
{DBSCAN} & {$2\pm0$} &{$\mathbf{3\pm0}$} &{$1\pm0$} &{$1.0\pm0.0$} & {$11\pm0$} & {$1\pm0$} & {$1\pm0$} & {$1\pm0$} \\
\hline
{CH + K-means} & {$\mathbf{3\pm0}$} &{$\mathbf{3\pm0}$} &{$4\pm0$} &{$12.7\pm0.5$} & {$\mathbf{2\pm0}$} &{$11.1\pm1.0$} &{$3.0\pm0.0$}&{$4\pm0$}\\
\hline
{Sil + K-means} & {$2\pm0$} &{$2\pm0$} &{$\mathbf{3\pm0.2}$} &{$2.0\pm0.0$} & {$\mathbf{2\pm0}$} &{$2.8\pm0.4$} &{$3.0\pm0.0$}&{$4.9\pm0.3$}\\
\hline
{DB + K-means} &{$2\pm0$} &{$2\pm0$} &{$3.1\pm0.3$} &{$7\pm0.5$} & {$\mathbf{2\pm0}$} & {$\mathbf{2\pm0}$} &{$2.0\pm0.0$}& {$5.0\pm0.0$} \\
\hline
{gap + K-means} & {$2\pm0$} &{$2\pm0$} &{$5.2\pm0.5$} &{$8.7\pm5.2$} &{$10.42\pm0.7$} & {$10.5\pm1.3$} & {$16.0\pm1.4$} & {$7.0\pm0.0$} \\
\hline
{mean shift} & {$3.9\pm0.5$} &{$2\pm0$} &{$18.7\pm0.9$} &{${3.7\pm0.4}$} & {$152.5\pm2.2$} &{$7.6\pm0.6$} &{$22.4\pm0.9$} & {$112.5\pm2.6$} \\
\hline
{optics} & {$8\pm0.0$} & {$5\pm0.0$} &{$1.0\pm0.0$} &{$10\pm0.0$} &{$9\pm0$} &{$4.3\pm0.7$} & {$1.0\pm0.0$} & {$1.0\pm0.0$} \\
\hline
\end{tabular}
\end{table*}
\section{Conclusion}
The K-MACE algorithm is a validity index method for CNC estimation in K-means clustering. Its distinctive ACE estimation uses only the available data and the available cluster compactness. The approach estimates the required scatter factor variance of the clusters by evaluating the consistency of the $m$-clustering with the prior assumption on the cluster structure. Owing to this strategy, the method shows robustness in the clustering procedure. The kernel K-MACE algorithm is an implementation of K-MACE in feature space using kernel functions, which produces better clustering results for data sets with a large degree of overlap between clusters. In addition, while existing methods tune the Gaussian kernel parameter by trial and error, the proposed method tunes this parameter automatically. The simulation results also confirm that K-MACE and kernel K-MACE not only outperform the competing methods in CNC estimation, but also cluster the data more precisely in the sense of ARI and NVI, even for clusters with a high level of overlap.
\appendices
\input appendixA.tex
\input appendixB.tex
\section{Introduction}
The \textbf{causal dynamical triangulation} (CDT) was
introduced by theoretical physicists Jan Ambj\o rn and Renate Loll as a discrete model of Lorentzian quantum gravity in which space and time play different roles \cite{AmLo98}. Time is represented by a partition of the $d+1$-dimensional model into a sequence of $d$-dimensional layers with increasing distances from the origin. Although this model has been the subject of extensive numerical investigation \cite{AGJL12,AmJuLo05}, especially in dimension $2+1$ and $3+1$, very little is known analytically, let alone rigorously.
In the case of dimension $1+1$, the $1$-dimensional layers are simply cycles, and causal triangulations are in bijection with plane trees, see e.g. \cite{DJW10,SYZ13}. Figure \ref{fig:causal-arbre} below illustrates the mechanism used to build a causal triangulation from a plane tree $\tau$: We first add the horizontal connections between successive vertices in each layer to obtain a planar map $ \mathsf{Causal}(\tau)$ living on the sphere, and then triangulate the non-triangular faces of this map as shown in the drawing to obtain the triangulation $\mathsf{CauTrig}(\tau)$. See Section \ref{sec:causaltrig} and \cite[Section 2.3]{DJW10} or \cite[Section 2.2]{SYZ13} for more details.
\vspace{1em}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=14cm]{causal-arbre-trig.pdf}
\caption{Left: A plane tree $\tau$. In any plane tree, there is a distinguished cyclic ordering of the vertices in each level. Center: The causal planar graph $ \mathsf{Causal}(\tau)$ built from $\tau$ by adding the horizontal connections between successive vertices in each level. Right: The causal triangulation $ \mathsf{CauTrig}(\tau)$ built from $ \mathsf{Causal}(\tau)$ by further triangulating the non-triangular faces from their top-right corners. \label{fig:causal-arbre}}
\end{center}
\end{figure}
\vspace{-1em}
The maps $\mathsf{Causal}(\tau)$ and $\mathsf{CauTrig}(\tau)$ are qualitatively very similar.
We shall focus in this article on the model $ \mathsf{Causal}(\tau)$ (mainly to simplify our drawings) and refer the reader to Section \ref{sec:causaltrig} for extensions of our results to other models including causal triangulations.
The geometry of large random plane trees is by now very well understood \cite{Ald91a,BK06,DLG02}. However, we shall see that causal maps have geometric and spectral properties that are dramatically different from those of the plane trees used to construct them. Indeed, the causal maps have much more in common with uniform random planar maps \cite{LeGallICM} such as the UIPT than they do with random trees.
\paragraph{Setup and results.} Suppose that $ \tau$ is a finite plane tree. We can associate with it a finite planar map (graph) denoted by $ \mathsf{Causal}(\tau)$ by adding the `horizontal' edges linking successive vertices in the cyclical ordering of each level of the tree as in Figure \ref{fig:causal-arbre}. If $\tau$ is an infinite, locally finite plane tree, performing the same operation yields an infinite planar map with one end, see Figure \ref{fig:nice}.
The distance between a vertex $v$ of $\tau$ and the root $\rho$, called the \textbf{height} of $v$, is clearly equal in the two graphs $\tau$ and $\mathsf{Causal}(\tau)$.
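For concreteness, the following Python sketch (with \texttt{children} a hypothetical mapping from each vertex to its ordered list of children, encoding a finite plane tree) constructs $\mathsf{Causal}(\tau)$:
\begin{verbatim}
import networkx as nx

def causal_map(children, root=0):
    G = nx.Graph()
    G.add_node(root)
    level = [root]
    while level:
        nxt = []
        for v in level:
            for c in children.get(v, []):
                G.add_edge(v, c)          # vertical (tree) edge
                nxt.append(c)             # left-to-right planar order
        for a, b in zip(nxt, nxt[1:]):    # horizontal edges in the level
            G.add_edge(a, b)
        if len(nxt) > 2:                  # close the cycle of the level
            G.add_edge(nxt[-1], nxt[0])
        level = nxt
    return G
\end{verbatim}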
Thus, a natural first question is to understand how the distances between pairs of vertices at the \emph{same} height are affected by the addition of the horizontal edges in the causal graph.
We formalize this as follows: Let $\tau$ be a plane tree with root $\rho$. Let $[\tau] _{k}$ be the subtree spanned by the vertices of height at most $k$ and let $\partial [\tau]_{k}$ be the set of vertices of height exactly $k$. We define the \textbf{girth} at height $r$ of $\mathsf{Causal}(\tau)$ to be $$ \mathsf{Girth}_{r}\bigl(\mathsf{Causal}(\tau)\bigr) := \sup \left\{ \mathrm{d}_{\mathrm{gr}}^{ \mathsf{Causal}(\tau)}(x,y) : x,y \in \partial [\tau]_{r}\right\}, \quad \mbox{ where }\sup \varnothing =0,$$ and where $ \mathrm{d}_{\mathrm{gr}}^{ G}$ denotes the graph distance in the graph $G$. The triangle inequality yields the trivial bound $ \mathsf{Girth}_{r}(\mathsf{Causal}(\tau) ) \leq 2r$, so that the girth grows at most linearly.
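In finite simulations the girth at height $r$ can be evaluated by running a breadth-first search from each vertex of the level; a sketch (assuming a \texttt{networkx} graph \texttt{G} and a dictionary \texttt{heights} recording the height of every vertex):
\begin{verbatim}
import networkx as nx

def girth_at_height(G, heights, r):
    # sup of pairwise graph distances within the level at height r,
    # with the convention sup(emptyset) = 0.
    level = [v for v in G if heights[v] == r]
    best = 0
    for x in level:
        d = nx.single_source_shortest_path_length(G, x)
        best = max(best, max((d[y] for y in level), default=0))
    return best
\end{verbatim}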
We will focus first on the case that the underlying tree $\tau$ is a random Galton-Watson tree whose offspring distribution $\mu$ is critical (i.e., has mean $1$) and has finite variance $0<\sigma^{2}<\infty$.
The classical CDT model is related to the special case in which $\mu$ is a mean $1$ geometric distribution.
Let $T$ be a $\mu$-Galton-Watson tree (which is almost surely finite) and let $T_{\infty}$ be a $\mu$-Galton-Watson tree conditioned to survive forever \cite{AD13,Kes86}. Let
$\mathscr{C}_\infty=\mathsf{Causal}(T_\infty)$. It is well-known that $ \# \partial [T_{\infty}]_{r} \approx r$ under the above hypotheses on $\mu$.
Our first main result states that the addition of the horizontal edges to the causal graph makes the girth at height $r$ smaller, but only by a subpolynomial factor.
\begin{theorem}[Geometry of generic causal maps] \label{thm:geometry} Let $\mu$ be critical and have finite non-zero variance. Then
$$ (i) \quad \frac{\log \mathsf{Girth}_{r}( \mathscr{C}_\infty )}{\log r} \xrightarrow[r\to\infty]{(a.s.)} 1 \quad \mbox{ and } \quad (ii) \quad \frac{ \mathsf{Girth}_{r}( \mathscr{C}_\infty )}{ r} \xrightarrow[r\to\infty]{( \mathbb{P})} 0.$$
\end{theorem}
\hspace{0.4em}
A corollary of item $(ii)$ of Theorem \ref{thm:geometry} is that every geodesic between any two points at height $r$ in $\mathscr{C}_\infty$ stays within a strip of vertices at height $r\pm o(r)$ with high probability. This in turn implies that the scaling limit of $r^{-1} \cdot \mathscr{C}_\infty$ (in the local Gromov--Hausdorff sense) is just a single semi-infinite line $( \mathbb{R}_{+}, |\cdot|)$. In other words, the metric in the horizontal (space) direction is collapsed relative to the metric in the vertical (time) direction, leading to a degenerate scaling limit.
The proof of item $(i)$ is based on a block-renormalisation argument and also yields the quantitative result that $\mathsf{Girth}_{r}( \mathscr{C}_\infty ) \geq r e^{-O(\sqrt{\log r})}$ as $r\to\infty$ almost surely (Proposition \ref{prop:quantgeometry}). On the other hand, item $(ii)$ uses the subadditive ergodic theorem and is not quantitative.
Once the geometry of $ \mathscr{C}_\infty$ is fairly well understood, we can apply this geometric understanding to study its spectral properties. We first show that $ \mathscr{C}_\infty$ is almost surely recurrent (Proposition \ref{prop:recurrence}) generalizing the result of \cite{DJW10}. Next, we apply Theorem \ref{thm:geometry} to prove the following results, the first of which completes the work of \cite{DJW10}. Given a connected graph $G$ and a vertex $x$, we denote by $\mathbf{P}_{G,x}$ the law of the simple random walk $(X_{n})_{n\geq 0}$ started at $x$ and denote by $P_G^{t}(x,x)$ the $t$-step return probability to $x$. The \textbf{spectral dimension} $d_{s}$ of a connected graph $G$ is defined to be
\[
d_s(G)= \lim_{n\to\infty}-\frac{2\log P_G^{2n}(x,x)}{\log n}
\]
should this limit exist (in which case it does not depend on $x$). We also define the \textbf{typical displacement exponent} $\nu=\nu(G)$ of the connected graph $G$ by
\[
\lim_{n\to\infty}\mathbf{P}_{G,x}\bigl(n^{\nu-\varepsilon} \leq \mathrm{d}_\mathrm{gr}^G(x,X_n)\leq n^{\nu+\varepsilon}\bigr) =1 \text{ for every $\varepsilon>0$}
\]
should such an exponent exist (in which case it is clearly unique and does not depend on $x$). We say that $G$ is \textbf{diffusive} for simple random walk if the typical displacement exponent $\nu(G)$ exists and equals $1/2$.
\begin{theorem}[Spectral dimension and diffusivity of generic causal maps] \label{thm:spectral} Let $\mu$ be critical with finite non-zero variance. Then \[d_s(\mathscr{C}_\infty)=2 \quad \text{ and } \quad \nu(\mathscr{C}_\infty)=1/2\] almost surely. In particular, both exponents exist almost surely.
\end{theorem}
Note that these exponents are \emph{not} the same as those of the underlying tree $T_\infty$, which has $d_s=4/3$ and $\nu=1/3$ \cite{BK06,DJW07,MR2407556,Kes86}.
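In simulations, the return probabilities that define $d_s$ can be estimated by direct Monte Carlo; a naive (and deliberately simple) sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def return_probability(G, x, t, trials=50_000):
    # Monte Carlo estimate of the t-step return probability P^t(x,x)
    # of simple random walk on a networkx graph G.
    hits = 0
    for _ in range(trials):
        v = x
        for _ in range(t):
            nbrs = list(G[v])
            v = nbrs[rng.integers(len(nbrs))]
        hits += (v == x)
    return hits / trials
\end{verbatim}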
The central step in the proof of Theorem \ref{thm:spectral} is to prove that the exponent $ \mathfrak{r}$ governing the growth of the resistance between $\rho$ and the boundary of the ball of radius $n$ in $ \mathscr{C}_\infty$, defined by $ \mathsf{R_{eff}}(\rho \leftrightarrow \partial [T_\infty]_n;\, \mathscr{C}_\infty) = n^{ \mathfrak{r}+o(1)}$, is $ \mathfrak{r}=0$.
In fact, we prove the following quantitative subpolynomial upper bound on the resistance growth. This estimate is established using geometric controls on $ \mathscr{C}_\infty$ and the method of random paths \cite[Chapter 2.5]{LP10}. It had previously been open to prove any sublinear upper bound on the resistance.
\begin{theorem}[Resistance bound for generic causal maps ]
\label{thm:resistance}
Suppose $\mu$ is critical and has finite non-zero variance. Then there exists a constant $C$ such that almost surely for all $r$ sufficiently large we have
\[ \mathsf{{R}_{eff}}(\rho \leftrightarrow \partial [T_\infty]_r;\, \mathscr{C}_\infty) \leq e^{C \sqrt{\log r}}. \]
\end{theorem}
Theorem \ref{thm:spectral} can easily be deduced from Theorem \ref{thm:resistance} by abstract considerations. Indeed, by classical properties of Galton--Watson trees, the volume growth exponent $ \mathfrak{g}$, defined by $ \# \mathrm{Ball}(\rho,n) = n^{ \mathfrak{g}+o(1)}$, is easily seen to be equal to $2$. For recurrent graphs, the spectral dimension and typical displacement exponent can typically be computed from the volume growth and resistance growth exponents via the formulas
\[ d_{s} = \frac{2 \mathfrak{g}}{ \mathfrak{g}+ \mathfrak{r}},\qquad \text{ and } \qquad d_s =2\nu \mathfrak{g},\]
which yield $d_s=2$ and $\nu=1/\mathfrak{g}$ whenever $\mathfrak{r}=0$.
Although this relationship between exponents holds rather generally (see \cite{BJKS08,K10,KM08}), things become substantially simpler in our case of $\mathfrak{r}=0,$ $\mathfrak{g}=2$ and we include a direct derivation.
Indeed, in this case it suffices to use the inequalities
\[ d_s \geq 2-2\mathfrak{r}, \quad d_s \leq 2\nu \mathfrak{g}, \quad \text{ and } \quad \mathfrak{g}<\infty \Rightarrow \nu \leq 1/2,\]
which are more easily proven and require weaker controls on the graph.
Let us note in particular that the \emph{upper bounds} on $d_s$ and $\nu$ are easy consequences of the Varopoulos-Carne bound
and do not require the full machinery of this paper.
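Indeed, chaining these three facts when $\mathfrak{r}=0$ and $\mathfrak{g}=2$ gives
\[ 2 \;=\; 2-2\mathfrak{r} \;\leq\; d_s \;\leq\; 2\nu\mathfrak{g} \;=\; 4\nu \;\leq\; 2, \]
forcing $d_s=2$ and $\nu=1/2$, as claimed.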
\subsection{The $\alpha$-stable case.} Besides the finite variance case, we also study the case in which the offspring distribution $\mu$ is critical and is ``$\alpha$-stable'' in the sense that it satisfies the asymptotic\footnote{Here $f(k) \sim g(k)$ means that $f(k)/g(k)\to1$ as $k\to+\infty$.}
\begin{eqnarray} \mu([k, \infty)) \quad \underset{k \to \infty}{\sim}\quad c\ k^{-\alpha}, \quad \mbox{ for some } c >0 \mbox{ and }\alpha \in (1,2). \label{eq:defstable}\end{eqnarray}
In particular the law $\mu$ is in the strict domain of attraction of the totally asymmetric $\alpha$-stable distribution (we restrict here to polynomially decaying tails to avoid technical complications involving slowing varying functions). The study of such causal maps is motivated by their connection to uniform random planar triangulations. Indeed, Krikun's skeleton decomposition \cite{Kri04} identifies an object related to the stable causal map with exponent $\alpha = \frac{3}{2}$ inside the UIPT, see Section \ref{sec:carpet}.
We still denote by $T_{\infty}$ the $\mu$-Galton--Watson tree conditioned to be infinite (the dependence on $\mu$, and hence on $\alpha$, is implicit), and denote by $\mathscr{C}_\infty$ the associated causal map. The geometry of $\mu$-Galton--Watson trees with critical ``$\alpha$-stable'' offspring distribution is known to be drastically different from the finite variance case. In particular, the size of the $n$th generation of $T_{\infty}$ is of order $n^{ \frac{1}{\alpha-1}}$ rather than $n$, and the scaling limit is given by the (infinite) stable tree of Duquesne, Le Gall and Le Jan \cite{DLG02}, rather than the Brownian tree of Aldous \cite{Ald91a}.
We prove that there is a further pronounced difference occurring when one investigates the associated causal maps. Namely, while the girth at height $r$ is strictly sublinear in the finite variance case, it is linear in the $\alpha$-stable case. In particular, we have the following analog of Theorem \ref{thm:geometry}.
\begin{theorem}[Geometry of stable causal maps] \label{thm:geometrystable} If $\mu$ is critical and satisfies \eqref{eq:defstable} then we have
$$ (i) \quad \frac{\log \mathsf{Girth}_{r}( \mathscr{C}_\infty)}{\log r} \xrightarrow[r\to\infty]{(a.s.)} 1 \quad \mbox{ and } \quad (ii) \quad \lim_{ \varepsilon \to 0} \inf_{ r \geq 1} \mathbb{P}\left(\mathsf{Girth}_{r}(\mathscr{C}_\infty) \geq \varepsilon r\right) = 1.$$
\end{theorem}
Similar to Theorem \ref{thm:geometry}, the proof of this theorem uses a block-renormalisation argument.
We conjecture that in fact $ r^{-1}\mathsf{Girth}_{r}(\mathscr{C}_\infty)$ converges in distribution and more generally that $r^{-1} \cdot \mathscr{C}_\infty$ converges in the local Gromov--Hausdorff sense.
These questions will be addressed in a forthcoming work of the first author. This theorem (and its proof) in fact have direct consequences in the theory of uniform random planar triangulations, using Krikun's skeleton decomposition; see Section \ref{sec:comments} for further details.
Adapting the techniques used to prove Theorem \ref{thm:spectral} to this setting yields that
the resistance exponent satisfies
\begin{equation}
\label{eq:stableresistancebound}
\mathfrak{r} \leq \frac{2- \alpha}{\alpha-1},
\end{equation}
while the volume growth exponent is known to be $ \mathfrak{g} = \frac{\alpha}{\alpha-1}$ \cite{CK08}.
Notice that this bound is only useful in the range $\alpha \in (3/2,2)$ since we always have $ \mathfrak{r} \leq 1$. This reflects the fact that our understanding of the spectral properties of $ \mathscr{C}_\infty$ in the $\alpha$-stable case is much less advanced than in the finite variance case.
The bound \eqref{eq:stableresistancebound} becomes much more interesting in the case of the $\alpha$-stable \emph{causal carpet}, which we expect to really have polynomial resistance growth; see Section \ref{sec:carpet} for further discussion.
We remark that the spectral properties of the tree $T_\infty$ have been studied by Croydon and Kumagai \cite{CK08}, who prove in particular that $T_\infty$ has spectral dimension $2\alpha/(2\alpha-1)$ almost surely.
We are embarrassed to leave the following question open:
\begin{open} Suppose that $\mu$ is critical and satisfies \eqref{eq:defstable}. Is $ \mathscr{C}_\infty$ a.s.~transient?
\end{open}
\paragraph{Organization.} The paper is organized as follows. In Section \ref{sec:halfplane} we present the renormalisation technique that enables us to bound from below the girth of causal graphs in a ``quarter-plane'' model carrying more independence than $\mathscr{C}_\infty$. This technique is rather general and we hope the presentation will make it easy to adapt to other settings. We also present the subadditive argument (Section \ref{sec:subadditive}) which gives the sublinear girth in the case of a finite variance offspring distribution.
Section \ref{sec:width} is then devoted to the careful proof of Theorems \ref{thm:geometry} and \ref{thm:geometrystable}, which is done by dragging the quarter-plane estimates through to the original model $\mathscr{C}_\infty$. In Section \ref{sec:spectral}, we use the geometric knowledge gathered thus far to prove Theorem \ref{thm:resistance} and deduce Theorem \ref{thm:spectral}. Section \ref{sec:comments} is devoted to extensions and comments.
\medskip
\paragraph{Acknowledgments:} NC acknowledges support from the Institut Universitaire de France, ANR Graal (ANR-14-CE25-0014), ANR Liouville (ANR-15-CE40-0013) and ERC GeoBrown. TH and AN were supported by ISF grant 1207/15 and ERC grant 676970 RandGeom. TH was also supported by a Microsoft Research PhD Fellowship and he thanks Tel Aviv University and Universit\'e Paris-Sud Orsay for their hospitality during visits in which this work was carried out. TH also thanks Jian Ding for bringing the problem of resistance growth in the CDT to his attention. Lastly, we warmly thank the anonymous referees for many valuable comments on the manuscript.
\vspace{0.3cm}
\begin{center} \hrulefill \fbox{\begin{minipage}{12cm}\textit{For the rest of the paper, $\mu$ will be a fixed critical offspring distribution. Furthermore, we will always assume either that $\mu$ has a finite, positive variance, or else that \eqref{eq:defstable} holds for some $\alpha \in (1,2)$. We refer to these two cases as the \emph{finite variance} and \emph{$\alpha$-stable} cases respectively. To unify notation, we let $\beta=1$ in the finite variance case and $\beta = \frac{1}{\alpha-1}>1$ in the $\alpha$-stable case.} \end{minipage}}\hrulefill \end{center}
\vspace{0.1cm}
\section{Estimates on the quarter-plane model}
\label{sec:halfplane}
The goal of this section is to study the girth of random causal graphs. For this, we first define a ``quarter-plane'' model carrying more symmetries and independence properties than $T_{\infty}$. We then define the notion of a \emph{block} and establish the key renormalisation lemma that enables us to lower bound the width of a block (Proposition \ref{prop:renorm}). The outcome of this renormalisation procedure is slightly different depending on whether $\mu$ has finite variance or is ``$\alpha$-stable''. These estimates will later be transferred to the actual model $ \mathscr{C}_\infty$ in Section \ref{sec:width}. In Section \ref{sec:subadditive} we present the subadditive argument for the quarter-plane model (Proposition \ref{prop:subadditive}) which will enable us to prove that the width is sublinear in the finite variance case. \bigskip
Before presenting the quarter-plane model, let us start by recalling a few standard estimates on critical Galton--Watson trees. Recall that $\mu$ is always a \emph{critical} offspring distribution and recall the definition of $\beta$ above. The famous estimate of Kolmogorov and its extension to the stable case by Slack \cite{Slack68} states that
\begin{eqnarray} \label{eq:kolmogorov} \mathbb{P}( \mathsf{Height}(T) \geq n) \sim c\, n^{-\beta}, \quad \mbox{for some }c>0, \quad \mbox{ as } n \to \infty. \end{eqnarray}
Furthermore, conditionally on non-extinction at generation $n$, the total size of generation $n$ converges after rescaling by $n^{\beta}$ towards a non-zero random variable (Yaglom's limit and its extension by Slack \cite{Slack68}):
\begin{eqnarray} \label{eq:yaglom} \left(n^{-\beta}\# \partial [T]_{n} \in \cdot \mid \mathsf{Height}(T) \geq n\right) \xrightarrow[n\to\infty]{(d)} \mathcal{X}, \end{eqnarray} where $ \mathcal{X}$ is an explicit positive random variable (but whose exact distribution will not be used in the sequel).
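These asymptotics are easy to test numerically. In the finite variance case the classical Kolmogorov constant is $c=2/\sigma^{2}$, and the following sketch checks it for the shifted-geometric offspring law (mean $1$, variance $2$) that also underlies the classical CDT model mentioned above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def survives(n):
    # Does a critical Galton-Watson tree with offspring law
    # P(k) = 2^{-(k+1)} (mean 1, variance 2) reach generation n?
    z = 1
    for _ in range(n):
        z = int((rng.geometric(0.5, size=z) - 1).sum())
        if z == 0:
            return False
    return True

n = 50
est = np.mean([survives(n) for _ in range(4000)])
print(est, "vs Kolmogorov 2/(sigma^2 n) =", 2 / (2 * n))
\end{verbatim}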
\subsection{The block-renormalisation scheme}\label{sec:blocks}
\paragraph{The quarter-plane model.} We consider a sequence $T_{1},T_{2}, \dots$ of independent and identically distributed $\mu$-Galton--Watson trees. We can then index the vertices of this forest by $\{1,2,\ldots\}\times \{0,1,\ldots\}$ in an obvious way as depicted in Figure \ref{fig:block} below.
\vspace{0.5em}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=14cm]{trees2.pdf}
\caption{ \label{fig:block} The layer of height $4$ in the random graph obtained from an iid sequence of $\mu$-Galton--Watson trees. This layer can be decomposed into blocks of height $4$ and we represented the first three blocks in this sequence. The vertices on the left and right sides of the blocks are denoted with squares.}
\vspace{-1em}
\end{center}
\end{figure}
Adding the horizontal edges $(i,j) \leftrightarrow (i+1,j)$ forms an infinite planar map (graph) which we call the \textbf{quarter-plane model}
and denote by $\mathcal{Q}_\infty$.
Let $\xi_r\geq 1$ be minimal such that $T_{\xi_{r}}$ reaches height $r$. The \textbf{block of height $r$}, denoted $\mathcal{G}_r$, is defined to be the subgraph of $\mathcal{Q}_\infty$ induced by
the vertices of $T_{1}, \dots , T_{\xi_{r}}$ at height less than or equal to $r$. That is, $\mathcal{G}_r$ consists of all the vertices of $T_{1}, \dots , T_{\xi_{r}}$ at height less than or equal to $r$ and all the edges of $\mathcal{Q}_\infty$ (both horizontal and vertical) between them.
See Fig.~\ref{fig:block} for an illustration.
Clearly, we can speak of the two sets of vertices belonging respectively to the left side and right side of the block $ \mathcal{G}_{r}$, that is, the set of $r+1$ leftmost vertices and the set of $r+1$ rightmost vertices of the block. We define the \textbf{width} of $ \mathcal{G}_{r}$, denoted by $\mathsf{Width}( \mathcal{G}_{r})$, to be the minimal graph distance (in $ \mathcal{G}_{r}$) between a vertex in the left side and a vertex in the right side of $ \mathcal{G}_{r}$, see Fig.~\ref{fig:block}. The width of a block is not uniformly large when $r$ is large: Indeed, the first tree $T_{1}$ may actually reach level $r$, in which case $ \mathsf{Width}( \mathcal{G}_{r}) = 0$. However, we will see that a large block typically has a large width. To this end, we consider the \emph{median} of $\mathsf{Width}( \mathcal{G}_{r})$:
\begin{definition} For each $r \geq 1$ let $f(r)$ be the median width of $\mathcal{G}_r$, that is, the largest number such that
\[ \mathbb{P}\big( \mathsf{Width}( \mathcal{G}_{r}) \geq f(r)\big) \geq 1/2.\]
\end{definition}
As usual, the dependence on the offspring distribution is implicit in the notation. Obviously the value $1/2$ is not special. Note that, depending on $\mu$, one might have that $f(r)=0$ for small values of $r$.
On the other hand, $\mathsf{Width}(\mathcal{G}_r)$ is bounded deterministically by $2r$ since all vertices in the top layer of $\mathcal{G}_r$ share a common ancestor in level zero, so that $f(r)\leq 2r$ also.
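For finite blocks the width can be computed by a multi-source breadth-first search; a sketch (with \texttt{left} and \texttt{right} the two boundary vertex sets of a \texttt{networkx} graph):
\begin{verbatim}
import networkx as nx

def block_width(G, left, right):
    # Distances from the whole left side at once; the width is the
    # smallest distance attained on the right side.
    dist = nx.multi_source_dijkstra_path_length(G, set(left))
    return min(dist[v] for v in right if v in dist)
\end{verbatim}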
Our main technical result shows that $f(r)$ is always roughly linear, more precisely:
\begin{theorem}[$f(r)$ is almost linear] \label{thm:main} If $\mu$ is critical and satisfies \eqref{eq:defstable} then there exists $c >0$ such that
\begin{equation}
\label{eq:flinear}
f(r) \geq c \,r\end{equation}
for all $r$ sufficiently large.
On the other hand, if $\mu$ is critical and has finite non-zero variance then there exists $C>0$ such that
\begin{equation}\label{eq:falmostlinear}
f(r) \geq r \exp( - C \sqrt{ \log r})\end{equation}
for all $r$ sufficiently large.
\end{theorem}
The above theorem is an analytic consequence of the following proposition which encapsulates the renormalisation scheme. Recall the definition of $\beta \geq 1$ at the end of the Introduction.
\begin{proposition} \label{prop:renorm} There exists $c >0$ such that for any $1 \leq m \leq c \cdot r$ we have
\begin{equation}
\label{eq:renorm}
f(r) \geq c \cdot \min \Big\{ m ;\, (r/m)^{\beta} f(m) \Big\}.
\end{equation}
\end{proposition}
The proof of this proposition relies on a renormalisation scheme in which $ \mathcal{G}_{r}$ is split into smaller blocks distributed as $ \mathcal{G}_{m}$ for $0 \leq m \leq r$. Before starting the proof, let us introduce some notation. Let $m,h \geq 0$, and consider the layer of thickness $m$ between heights $h$ and $h+m$ in the quarter-plane model $\mathcal{Q}_\infty$. This layer is composed of a sequence of blocks of height $m$ which we denote by $ \mathcal{G}_{m}(i,h)$ for $i \geq 1$. For any fixed $m,h \geq 0$, these blocks are of course independent and distributed as $ \mathcal{G}_{m}$ (indeed, $\mathcal{G}_m(1,0)$ is equal to $\mathcal{G}_m$). When $h+m \leq r$, we denote by $N_{r}(m,h)$ the maximal $i$ such that the block $ \mathcal{G}_{m}(i,h)$ is a subblock of $ \mathcal{G}_{r}$.
\vspace{1.5em}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=16cm]{renorm.pdf}
\caption{ \label{fig:renormalisation} Decomposing a big block into smaller blocks: On the left we see the blocks $ \mathcal{G}_{6}(1,0)$ and $ \mathcal{G}_{6}(2,0)$. These two blocks are further decomposed into the blocks $ \mathcal{G}_{2}(i,h)$ on the right figure for $h \in \{0,2,4\}$ and $i \geq 1$.}
\end{center}
\end{figure}
\vspace{-1.5em}
We also recall the following classical Chernoff-type bound for sums of indicator random variables \cite[Corollary A.1.14]{AlonSpencer}, which will be used throughout the paper:
For every $\varepsilon>0$ there exists a constant $c_\varepsilon>0$ such that for every $k\geq 1$ and every sequence $X_1,\ldots,X_k$ of mutually independent $\{0,1\}$-valued random variables, the sum $Y= \sum_{i=1}^k X_i$ satisfies the bound
\begin{equation}
\label{eq:ASChernoff}
\P\Bigl(\bigl|Y-\mathbb{E}[Y]\bigr| \geq \varepsilon\, \mathbb{E}[Y] \Bigr) \leq 2 e^{-c_\varepsilon \mathbb{E}[Y]}.
\end{equation}
We are now ready to proceed with the proof of Proposition \ref{prop:renorm}.
\proof[Proof of Proposition \ref{prop:renorm}]
We will prove that
there exists $c >0$ such that
\begin{equation}
f(r) \geq c \cdot \min \Big\{ m ;\, (r/m)^{\beta} f(2m) \Big\}
\end{equation}
for every $1 \leq m \leq c \cdot r$ that divides $r$. The full proof of the claim as originally stated (in which $m$ replaces $2m$ and $m$ is not assumed to divide $r$) is very similar but requires messier notation.
We bound from below the width of the block $ \mathcal{G}_{r}$ using the widths of the blocks $ \mathcal{G}_{2m}(i,h)$ for $h$ of the form $\ell \cdot m$ with $0 \leq \ell \leq (r/m)-2$. Suppose that we pick a point $x$ on the left side of $ \mathcal{G}_{r}$, say at height $0 \leq j \leq r$. If $m/3 \leq j \leq r-m/3$ then we can clearly take $ 0 \leq \ell \leq (r/m)-2$ such that $| \ell m -j| \geq m/3$ and $|(\ell+2)m-j| \geq m/3$. Otherwise, we either have that $ 0\leq j < m/3$ and take $\ell=0$ or we have that $ r-m/3 < j \leq r$ and take $\ell =(r/m)-2$. We then have an alternative: either the shortest path from $x$ to the other side of $ \mathcal{G}_{r}$ stays in the layer between heights $\ell m$ and $(\ell +2)m$, or else it leaves it at some point. In the second case we know that the length of the path is at least $m/3$ by our assumption on $j$ and $\ell$ and since the graph distance between any two points in the graph is at least their height difference. On the other hand, in the first case, the length of such a path is at least
$$ \sum_{i=1}^{N_{r}(2m,\ell m)} \mathsf{Width}\big( \mathcal{G}_{2m}(i, \ell m)\big).$$ This is because the path must cross, from left to right, every subblock of height $2m$ that is in that layer and that belongs to the block $ \mathcal{G}_{r}$. See Fig.~\ref{fig:trans}. We conclude that
\begin{equation}\label{eq:logic} \mathsf{Width}( \mathcal{G}_{r}) \geq \min \Big \{ {m \over 3} , \min _{0 \leq \ell \leq (r/m)-2} \sum_{i=1}^{N_{r}(2m,\ell m)} \mathsf{Width}\big( \mathcal{G}_{2m}(i, \ell m)\big) \Big \} \, .\end{equation}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=10cm]{renormbis.pdf}
\caption{
\small{\label{fig:trans} Illustration of the proof. Any point on the left-hand side of $ \mathcal{G}_{r}$ is well inside a layer of height $2m$ starting at some height $\ell m$. The geodesic from $x$ to the other side of the block either leaves this layer (in orange) or must traverse all the $N_{r}(2m, \ell m)$ sub-blocks (in red).}}
\end{center}
\end{figure}
For fixed $h,m \geq 1$ the blocks $ \mathcal{G}_{2m}(i,h)$ are independent and distributed as $ \mathcal{G}_{2m}$. Thus, by the definition of the function $f(2m)$ and \eqref{eq:ASChernoff}, we have that
$$ \mathbb{P}\left( \sum_{i=1}^{k} \mathsf{Width}( \mathcal{G}_{2m}(i,h)) \leq \frac{k \cdot f(2m)}{4}\right) \leq \mathbb{P}\Big( \mathrm{Binomial}(k,1/2) \leq k/4\Big) \leq e^{- \eta k}$$ for every $k\geq 1$, where $\eta >0$ is a constant independent of $k$, $h$, and $m$. Summing up over all possibilities for $h = \ell \cdot m$ with $\ell \in \{0,\ldots,(r/m)-2\}$, we deduce that with probability at least $ 1- \frac{r}{m} e^{- \eta k}$ we have
\begin{eqnarray} \label{eq:LD} \forall 0 \leq \ell \leq (r/m)-2, \quad \quad \sum_{i=1}^{k} \mathsf{Width}( \mathcal{G}_{2m}(i, \ell m)) \geq \frac{k \cdot f(2m)}{4}. \end{eqnarray}
We now estimate $N_{r}(2m ,\ell m)$:
\begin{lemma} \label{lem:manymany} There exists $c>0$ such that for every $1 \leq m \leq c r$ we have
$$ \mathbb{P}\left(\min_{0 \leq \ell \leq (r/m)-2} N_{r}(2m, \ell m) \geq c \left( \frac r m\right)^{\beta}\right) \geq \frac{7}{8}.$$
\end{lemma}
\proof We first consider the analogous estimate in the case $m=0$. In this case, $N_{r}(0,h)$ is just the number of vertices at height $h$ in the block $ \mathcal{G}_{r}$. We claim that we can find $c>0$ sufficiently small such that
\begin{eqnarray} \label{eq:CSBP}
\mathbb{P}\left(\min_{0 \leq h \leq r} N_{r}(0,h) \geq c \cdot r^\beta \right) \geq 15/16 \end{eqnarray}
for every $r\geq 1$. This kind of result is part of the folklore in the theory of branching processes (see e.g. \cite{DLG02}) but since we were not able to locate a precise reference for it we include a direct derivation at the end of this subsection (Lemma \ref{lem:CSBP}).
We now apply \eqref{eq:CSBP} to prove the claim in the statement of the lemma. Let $c$ be the constant from \eqref{eq:CSBP}.
Fix $0 \leq h \leq r$ and, for $k \geq 1$, denote by $X(h,2m,k)$ the number of trees whose origin in the line at height $h$ has index at most $k$ and which reach height $2m$ relative to their starting height $h$. With this notation we have $N_{r}(2m,h) = X(h,2m,N_{r}(0,h))$. For fixed $k \geq 1$ the random variable $X(h,2m,k)$ has binomial distribution with $k$ trials and success parameter $\mathbb{P}(\mathrm{Height}(T) \geq 2m)$. On the event $\{N_{r}(0,h) \geq c r^{\beta}\} \cap \{N_{r}(2m,h) \leq c' (r/m)^{\beta}\}$ we have $X(h,2m,\lceil c r^{\beta} \rceil) \leq c' (r/m)^{\beta}$. Thus, applying \eqref{eq:ASChernoff} and \eqref{eq:kolmogorov} we deduce that there exist constants $c'>0$ and $\delta>0$ such that
\[ \P\Bigl(N_{r}(2m,h)\leq c'(r/m)^{\beta} \mbox{ and } N_{r}(0,h) \geq c \cdot r^\beta \Bigr) \leq \P\Bigl(X(h,2m,\lceil c r^{\beta} \rceil) \leq c' (r/m)^{\beta}\Bigr) \leq
e^{-\delta (r/m)^{\beta}}\] for every $r,m,$ and $h$. For sufficiently large values of $r/m$ we have that $ (r/m) e^{-\delta (r/m)^{\beta}} \leq 1/16$, and can proceed to apply a union bound over values of $h$ of the form $\ell m$ for $\ell \in \{0, \ldots, (r/m)-2\}$. Indeed, gathering up the pieces above we have that
\begin{align*} \mathbb{P}( \exists 0 \leq \ell \leq (r/&m)-2 : N_{r}(2m, \ell m) \leq c'(r/m)^{\beta})\\ &\leq \mathbb{P}\left( \min_{0 \leq h \leq r} N_{r}(0,h) \leq c \cdot r^{\beta}\right) + \sum_{\ell=0}^{(r/m)-2} \mathbb{P}\left( N_{r}(2m,\ell m) \leq c'(r/m)^{\beta}, \,\, N_{r}(0,\ell m) \geq cr^{\beta}\right)\\ & \leq \frac{1}{16} + (r/m) e^{-\delta (r/m)^{\beta}} \leq \frac{1}{16}+ \frac{1}{16} = \frac{1}{8}, \end{align*} and this proves the lemma. \endproof
We now return to the proof of Proposition \ref{prop:renorm}. Let $c$ be the constant from Lemma~\ref{lem:manymany}. We take $k = \lfloor c(r/m)^{\beta} \rfloor$ in \eqref{eq:LD} and assume that $r/m$ is large enough to ensure that $ (r/m) e^{- \eta k} \leq 1/8$. Using Lemma \ref{lem:manymany} and intersecting with the event in \eqref{eq:LD} we deduce by \eqref{eq:logic} that
\begin{eqnarray*} \mathbb{P}\left( \mathsf{Width}( \mathcal{G}_{r}) < \frac{m}{3} \wedge \frac{k\cdot f(2m)}{4}\right) &\leq& \mathbb{P}\left(\min_{0 \leq \ell \leq (r/m)-2} N_{r}(2m, \ell m) < c \left( \frac r m\right)^{\beta}\right) + (r/m) e^{-\eta k} \\ & \leq & \frac{1}{8} + \frac{1}{8} = \frac{1}{4}. \end{eqnarray*}
By definition of $f(r)$ we thus deduce that
$$f(r) \geq \min \Big\{ \frac{m}{3} ;\, \frac{c}{4} (r/m)^{\beta}f(2m)\Big\} \geq c'' \min \big\{ m ;\, (r/m)^{\beta} f(2m)\big\},$$ for some $c''>0$ and every $r \geq m \geq 1$ such that $m$ divides $r$ and $r/m$ is sufficiently large.
\endproof
\begin{proof}[Proof of Theorem \ref{thm:main} from Proposition \ref{prop:renorm}]
Assume that $f$ satisfies the conclusions of Proposition \ref{prop:renorm} with the appropriate $\beta$ and that $f(r)>0$ for all sufficiently large $r$, which is easily seen to be satisfied by our function $f$. (One way to prove this formally is to note that the width is zero if and only if $\min_{0\leq h \leq r} N_r(0,h) =1$, and then apply Lemma \ref{lem:CSBP}, below. Easier and more direct proofs are also possible.)
First suppose that $\beta >1$ (i.e.~that we are in the stable case) and let $k$ be an integer that is larger than both $c^{-1}$ and $c^{-1/(\beta-1)}$. Let $a_n = f(k^n)/k^n$. We claim that
\[
\liminf_{n\to\infty} a_n > 0.
\]
Indeed, applying Proposition~\ref{prop:renorm} with $r=k^{n+1}$ and $m=k^{n}$ yields that
\[a_{n+1} \geq \min\left\{\frac{c k^n}{k^{n+1}},\, c k^{-1} \left(\frac{k^{n+1}}{k^n}\right)^\beta a_{n} \right\}
\geq \min\left\{ \frac{c}{k},\, a_n\right\}, \]
and since $a_n>0$ for some $n\geq 1$ it follows by induction that $\liminf_{n\to\infty} a_n>0$ as claimed. This establishes the inequality \eqref{eq:flinear} for values of $r$ of the form $r=k^n$. The inequality \eqref{eq:flinear} follows for general values of $r$ by taking $m= k^{\lfloor \log_k r -1\rfloor}$ in Proposition~\ref{prop:renorm}.
Now suppose that $\beta=1$, and let $k \geq 1/c$ be an integer.
We put $b_n =f(k^{n^2})/k^{(n-1)^2}$ and will show that
\[
\liminf_{n\to\infty} b_n > 0.
\]
Indeed, applying Proposition~\ref{prop:renorm} with $r=k^{(n+1)^2}$ and $m=k^{n^2}$ (note that $m \leq c k^{(n+1)^2}$ for all sufficiently large $n$) yields that
\[
b_{n+1} \geq \min \left\{ c\frac{k^{n^2}}{k^{n^2}},\, c \frac{k^{(n+1)^2+(n-1)^2}}{k^{2n^2}} b_n \right\} = \min\{c,\, ck^2 b_n \} \geq \min\{c,\, b_n\},
\]
and since $b_n>0$ for some $n\geq 1$ it follows by induction that $\liminf_{n\to\infty} b_n>0$ as desired. This establishes the inequality \eqref{eq:falmostlinear} for values of $r$ of the form $r=k^{n^2}$.
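For the reader's convenience, let us spell out this last step. If $b>0$ is such that $b_n \geq b$ for all sufficiently large $n$, then for $r = k^{n^2}$ we have $n = \sqrt{\log_k r}$, and hence
\[
f(r) \;\geq\; b\, k^{(n-1)^2} \;=\; b\, k\, r\, k^{-2n} \;=\; b\, k\, r\, \exp\Bigl(-2\sqrt{\log k}\,\sqrt{\log r}\Bigr) \;\geq\; r\exp\bigl(-C\sqrt{\log r}\bigr)
\]
for any $C > 2\sqrt{\log k}$ and all sufficiently large $r$ of this form.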
The inequality \eqref{eq:falmostlinear} for other values of $r$ follows by applying Proposition \ref{prop:renorm} with
$m=k^{\left\lfloor \sqrt{\log_k r} -1 \right\rfloor^2}$. \qedhere
\end{proof}
\begin{remark}
With a little further analysis, it can be shown that (disregarding constants) the bound $f(r)\geq r e^{-O(\sqrt{\log r})}$ is the best that can be obtained from Proposition \ref{prop:renorm} in the case $\beta=1$.
\end{remark}
We now owe the reader the proof of \eqref{eq:CSBP}:
\begin{lemma}
\label{lem:CSBP}
With the notation of Lemma \ref{lem:manymany}, for any $ \varepsilon>0$ we can find $\delta >0$ such that for every $r \geq 1$ we have that
\[ \mathbb{P}\Bigl( \min_{0 \leq h \leq r} N_{r}(0,h) \leq \delta r^{\beta}\Bigr) \leq \varepsilon.\]
\end{lemma}
\proof Fix $ \varepsilon>0$. Recall that $T_{1}, T_{2}, \dots$ are independent $\mu$-Galton--Watson trees and that $T_{\xi_{r}}$ is the first of these trees reaching height $r$. We denote by $ \mathbf{X}_{i}(h)$ the total number of vertices at height $h$ belonging to the trees $T_{1}, \dots , T_{i}$, so that $\mathbf{X}_{\xi_{r}}(h) = N_r(0,h)$. It suffices to show that if $\delta >0$ is sufficiently small then
\[\mathbb{P}\Bigl( \min_{0 \leq h \leq r} \mathbf{X}_{\xi_{r}}(h) \leq \delta r^{\beta}\Bigr) \leq \varepsilon\] for all $r \geq 1$.
We start with two remarks. First, observe that $\xi_{r}$ is a geometric random variable with success probability $ \mathbb{P}( \mathrm{Height}(T) \geq r)$. By \eqref{eq:kolmogorov} this success probability is asymptotic to $cr^{-\beta}$ as $r\to\infty$, and it follows that if $ \eta >0$ is sufficiently small then $ \mathbb{P}(\xi_{r} \notin [\eta r^{\beta}, \eta^{-1} r^{\beta}]) \leq \varepsilon/3$ for all $r\geq 1$. Secondly, using \eqref{eq:kolmogorov} again, it is easy to see that we can find $ \varepsilon' >0$ such that the height of $T_{\xi_{r}}$ is at least $(1+ \varepsilon')r$ with probability at least $ 1-\varepsilon/3$. Thus, by the union bound, it suffices to prove that if $\delta>0$ is sufficiently small then
\begin{equation}
\label{eq:lem2desired}
\mathbb{P}\left( \left\{\min_{0 \leq h \leq r} \mathbf{X}_{\xi_{r}}(h) \leq \delta r^{\beta} \right\} \cap \left\{ \eta r^{\beta}<\xi_{r}< \eta^{-1} r^{\beta}\right\} \cap \left\{ \xi_{r} = \xi_{(1+ \varepsilon')r} \right \}\right) \leq \varepsilon/3
\end{equation} for every $r \geq 1$.
Let $k,r\geq 1$. Let $\mathcal{W}_{r,k}$ be the event that $\xi_{r}=\xi_{(1+ \varepsilon')r} = k$, and let $\mathcal{U}_{r,k}$ be the event that exactly one of the trees $T_1,\ldots,T_k$ reaches height $(1+\varepsilon')r$ while every other tree indexed by $\{1,\ldots, k\}$ reaches height strictly less than $r$. If $\sigma$ is a uniform random permutation of $\{1,\ldots, k\}$ independent of $T_1,T_2,\ldots$, notice the following equality of conditional distributions
\begin{equation}
\label{eq:lemma2distributionalequality}
\Bigl( \bigl(T_1,\ldots,T_k\bigr) \mid \mathcal{U}_{r,k} \Bigr) \overset{d}{=} \Bigl( \bigl(T_{\sigma(1)},\ldots,T_{\sigma(k)}\bigr) \mid \mathcal{W}_{r,k} \Bigr).
\end{equation}
On the other hand, if we define $\mathcal{V}_{r,k}$ to be the event that \emph{at least one} of the $k$ trees $T_{1}, \dots , T_{k}$ reaches height $(1 + \varepsilon')r$, then a little calculation using \eqref{eq:kolmogorov} shows that there exist constants $0<c_{1}<c_{2}<1$ (depending on $ \varepsilon'$ and $\eta$) such that
\begin{eqnarray} \label{eq:condi}0<c_{1}<\mathbb{P}( \mathcal{U}_{r,k}) \leq \mathbb{P}( \mathcal{V}_{r,k})<c_{2}<1 \end{eqnarray}
for every $r \geq 1$ and all $\eta r^\beta \leq k \leq \eta^{-1} r^\beta$.
We deduce that there exists a constant $c_3>0$ such that
\begin{align*} &\mathbb{P}\biggl( \biggl\{\min_{0 \leq h \leq r} \mathbf{X}_{\xi_{r}}(h) \leq \delta r^{\beta} \biggr\} \cap \biggl\{ \eta r^{\beta}<\xi_{r}< \eta^{-1} r^{\beta}\biggr\} \cap \left\{ \xi_{r} = \xi_{(1+ \varepsilon')r} \right \}\biggr) \\
&\hspace{1.6cm}
= \sum_{ \eta r^{\beta} < k < \eta^{-1} r^{\beta}} \mathbb{P}\left( \left\{\min_{0 \leq h \leq r} \mathbf{X}_{k}(h) \leq \delta r^{\beta} \right\} \cap \mathcal{W}_{r,k}\right)\\
&\hspace{1.6cm}\hspace{1.6cm}
\underset{ \eqref{eq:kolmogorov}}{\leq} c_3
\sup_{\eta r^{\beta} < k < \eta^{-1} r^{\beta}} \mathbb{P}\left(\min_{0 \leq h \leq r} \mathbf{X}_{k}(h) \leq \delta r^{\beta} \mid \mathcal{W}_{r,k}\right) \\
&\hspace{1.6cm}\hspace{1.6cm}\hspace{1.6cm}
\underset{\eqref{eq:lemma2distributionalequality}}{=} c_3\sup_{\eta r^{\beta} < k < \eta^{-1} r^{\beta}} \mathbb{P}\left(\min_{0 \leq h \leq r} \mathbf{X}_{k}(h) \leq \delta r^{\beta} \mid \mathcal{U}_{r,k} \right) \\
&\hspace{1.6cm}\hspace{1.6cm}\hspace{1.6cm}\hspace{1.6cm}
\underset{ \eqref{eq:condi}}{\leq} \frac{c_3}{c_{1}}\sup_{\eta r^{\beta} < k < \eta^{-1} r^{\beta}} \mathbb{P}\left(\left\{\min_{0 \leq h \leq r} \mathbf{X}_{k}(h) \leq \delta r^{\beta} \right\} \cap \mathcal{V}_{r,k} \right)
\end{align*}
for every $r\geq 1$.
But now one can easily estimate $\mathbb{P}\left(\left\{\min_{0 \leq h \leq r} \mathbf{X}_{k}(h) \leq \delta r^{\beta}\right\} \cap \mathcal{V}_{r,k} \right)$: If $0 \leq h_{0} \leq r$ is the first height at which we have $ \mathbf{X}_{k}(h_{0}) \leq \delta r^{\beta}$, then by the Markov property of the branching process, the probability that one of the descendants of the $ \mathbf{X}_{k}(h_{0})$ points at generation $h_{0}$ reaches height $(1+ \varepsilon')r$ is bounded from above by $ \delta r^{\beta} \mathbb{P}( \mathrm{Height}(T) \geq \varepsilon' r)$. Using \eqref{eq:kolmogorov} again, we can choose $\delta>0$ small enough so that this probability is less than $ c_{1} \cdot \varepsilon /3$ for all $r \geq 1$. For this choice of $\delta$ we indeed have \eqref{eq:lem2desired}.
\qed
\subsection{The dual width}
\label{sec:dualdiam}
In order to analyze resistances, it is more convenient to have control of the width of the \emph{dual} of a block than of the block itself.
Given $r\geq 1$ and a block $\mathcal{G}_r$,
we define the
\textbf{dual width} of $\mathcal{G}_r$, denoted $\mathsf{Width}^\dagger(\mathcal{G}_r)$, to be the length of the shortest path in the planar dual of $\mathcal{G}_r$ that starts and ends in the outside face, has its first and last edges in the left and right-hand boundaries of $\mathcal{G}_r$ respectively, and which does not visit the outside face other than at its endpoints. We call such a path a \textbf{dual left-right crossing} of $\mathcal{G}_r$.
We claim that the dual width of $ \mathcal{G}_{r}$ is equal to the maximal size of a set of edge-disjoint (primal) paths from the bottom to the top of $\mathcal{G}_r$ (we call such a path a \textbf{primal bottom-top crossing}), whence its close connection to resistances. One such maximal set of primal bottom-top crossings can be found algorithmically by first taking the left-most primal bottom-top crossing, then the left-most primal bottom-top crossing that is edge-disjoint from the first one, and so on. The claim can be proved using the cut-cycle duality \cite[Theorem 14.3.1]{MR1829620} and Menger's theorem, but can also easily be checked in our situation, see Figure~\ref{fig:dualblock} below.
\vspace{1em}
\begin{figure}[!h]
\begin{center}
\hspace{2em} \includegraphics[width=9.75cm]{dual_path.pdf}
\caption{
\small{\label{fig:dualblock} A block of height four that has width $2$ and dual width $3$. On the left is a dual left-right crossing of length three, on the right is a set of three edge-disjoint primal paths from the bottom to the top of the block. (Note that in general these paths might not have increasing heights as they do in this example.)}}
\end{center}
\vspace{-1em}
\end{figure}
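Although we do not need to compute dual widths exactly, the right-hand side of the claim above is easy to evaluate on simulated blocks. The following sketch (ours) computes the maximal number of edge-disjoint primal bottom-top crossings via the standard reduction of edge-disjoint paths to maximum flow; it reuses \texttt{sample\_block} from the sketch in Section \ref{sec:blocks} and assumes that the \texttt{networkx} package is available:
{\small
\begin{verbatim}
import networkx as nx     # sample_block is from the earlier sketch

def max_disjoint_crossings(levels):
    # By Menger's theorem, after replacing each undirected edge by two
    # opposite unit-capacity arcs, the maximum flow from bottom to top
    # equals the maximal number of edge-disjoint bottom-top crossings.
    r = len(levels) - 1
    G = nx.DiGraph()
    def link(u, v):
        G.add_edge(u, v, capacity=1)
        G.add_edge(v, u, capacity=1)
    for h, lev in enumerate(levels):
        for j, p in enumerate(lev):
            if p is not None:
                link((h, j), (h - 1, p))
            if j + 1 < len(lev):
                link((h, j), (h, j + 1))
    # networkx treats edges with no 'capacity' attribute as having
    # infinite capacity, which is what we want for the terminals.
    for j in range(len(levels[0])):
        G.add_edge('bottom', (0, j))
    for j in range(len(levels[r])):
        G.add_edge((r, j), 'top')
    return int(nx.maximum_flow_value(G, 'bottom', 'top'))

print(max_disjoint_crossings(sample_block(16)))
\end{verbatim}}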
\noindent
For each $r\geq 1$, we define $g(r)$ to be the median dual width of $\mathcal{G}_r$, that is, the largest number such that \[ \mathbb{P}\big( \mathsf{Width}^\dagger( \mathcal{G}_{r}) \geq g(r)\big) \geq 1/2.\]
The proof of Proposition \ref{prop:renorm} goes through essentially unchanged to yield that there exists a constant $c>0$ such that
\begin{align*} g(r) \geq c \cdot \min \Big\{ m ; (r/m)^{\beta} g(m) \Big\}
\qquad \forall 1 \leq m \leq c\cdot r,
\end{align*}
from which we obtain as before the following analogue of Theorem \ref{thm:main}.
\begin{theorem}[$g(r)$ is almost linear] \label{thm:maindual} If $\mu$ is critical and satisfies \eqref{eq:defstable} then there exists $c >0$ such that for all $r$ large enough we have $$g(r) \geq c \,r.$$
On the other hand, if $\mu$ is critical and has finite non-zero variance then there exists $C>0$ such that for all $r$ large enough we have
$$g(r) \geq r \exp( - C \sqrt{ \log r}).$$
\end{theorem}
\subsection{The subadditive argument} \label{sec:subadditive}
In this section we suppose that $\mu$ is critical and has finite variance. We use the same notation as in the preceding section.
In particular, we let $\mathcal{Q}_\infty$ be the quarter-plane model, as defined in Section \ref{sec:blocks}, and recall that $\mathcal{Q}_\infty$ is indexed by $\{1,2,\ldots\}\times \{0,1,\ldots\}$. For each $m \geq n\geq 1$ let $\mathcal{Q}_{n,m}$ be the subgraph of $\mathcal{Q}_\infty$ induced by the trees $T_n,\ldots, T_m$, and let $L_{n,m}$ be the graph distance between $(n,0)$ and $(m,0)$ in $\mathcal{Q}_{n,m}$.
Our aim is to prove the following.
\begin{proposition} \label{prop:subadditive}If $\mu$ is critical and has finite non-zero variance then
$\lim_{n\to\infty}\frac{1}{n}L_{1,n}= 0$ a.s.
\end{proposition}
\proof The proof is based on a simple observation together with Kingman's subadditive ergodic theorem.
We clearly have that the random array $(L_{n,m})_{m \geq n \geq 1}$ is stationary in the sense that $(L_{n+k,m+k})_{m\geq n \geq 1}$ has the same distribution as $(L_{n,m})_{m\geq n \geq 1}$ for every $k\geq 1$, and is subadditive in the sense that $L_{n,m+k} \leq L_{n,m} + L_{m,m+k}$ for every $m \geq n \geq 1$ and $k \geq 1$. Moreover,
since the $T_{i}$'s are i.i.d., the stationary sequence $((L_{n+k,m+k})_{m\geq n \geq 1})_{k\geq 0}$ is ergodic and we can apply Kingman's subadditive ergodic theorem to deduce that \begin{eqnarray} \label{eq:kingman} n^{-1}L_{1,n} \xrightarrow[n\to\infty]{a.s.} c, \end{eqnarray} for some non-random constant $c\in [0,1]$.
To finish the proof and show that $c=0$, we use the following observation. Recall that for $r \geq1$, we denoted by $\xi_{r}=:\xi_{r}^{(1)}$ the index of the first tree among $T_{1}, T_{2}, \dots$ that reaches height $r$. We also denote by $\xi^{{(2)}}_{r}$ the index of the second such tree. Considering the path that starts at $(1,0)$, travels horizontally to $(\xi_{r}^{(1)},0)$, travels vertically up to the right-most element of $T_{\xi_r^{(1)}}$ in level $r$, takes one step to the right, and then travels vertically downwards to $(\xi_r^{(2)},0)$, as illustrated in Figure \ref{fig:shortcut}, yields the bound
\begin{eqnarray} L_{1,\xi^{(2)}_{r}} \leq \xi_{r}^{(1)}+2r. \label{eq:shortcut}\end{eqnarray}
Using \eqref{eq:kolmogorov}, it is easy to show that $ r^{-1}\xi_{r}^{(1)}$ and $r^{-1} (\xi_{r}^{(2)}-\xi_{r}^{(1)})$ converge in distribution towards a pair of independent exponential random variables with the same parameter. In particular, it follows that
\[
\liminf_{r\to\infty} \P\bigl( \xi_{r}^{(1)}+2r \leq \varepsilon \cdot \xi_{r}^{(2)} \bigr)>0
\]
for every $ \varepsilon>0$.
This observation together with the a.s.\ convergence \eqref{eq:kingman} and the bound \eqref{eq:shortcut} yields that $c \leq \varepsilon$. Since this inequality is valid for every $ \varepsilon>0$ we must have that $c=0$. \endproof
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{shortcut.pdf}
\caption{Illustration of the bound \eqref{eq:shortcut} \label{fig:shortcut}.}
\end{center}
\vspace{-0.5em}
\end{figure}
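As an aside, Proposition \ref{prop:subadditive} is also easy to probe numerically. The following sketch (ours) reuses \texttt{gw\_levels} from the sketch in Section \ref{sec:blocks}; since truncating the trees at height $n$ only removes vertices, the printed values are upper bounds for $n^{-1}L_{1,n}$, and one expects them to decay (slowly) towards zero:
{\small
\begin{verbatim}
from collections import defaultdict, deque   # gw_levels is from the earlier sketch

def L_over_n(n):
    levels = [[] for _ in range(n + 1)]
    for _ in range(n):                 # n i.i.d. trees truncated at height n
        t = gw_levels(n)
        off = [len(lev) for lev in levels]
        for h, lev in enumerate(t):
            for p in lev:
                levels[h].append(None if p is None else p + off[h - 1])
    adj = defaultdict(list)
    for h, lev in enumerate(levels):
        for j, p in enumerate(lev):
            if p is not None:
                adj[(h, j)].append((h - 1, p)); adj[(h - 1, p)].append((h, j))
            if j + 1 < len(lev):
                adj[(h, j)].append((h, j + 1)); adj[(h, j + 1)].append((h, j))
    src, tgt = (0, 0), (0, n - 1)      # the roots of T_1 and T_n
    dist, queue = {src: 0}, deque([src])
    while queue:
        v = queue.popleft()
        if v == tgt:
            return dist[v] / n
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)

for n in [50, 100, 200, 400]:
    print(n, round(L_over_n(n), 3))
\end{verbatim}}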
\section{Estimating the girth}
\label{sec:width}
In this section we will derive our Theorems \ref{thm:geometry} and \ref{thm:geometrystable} from the estimates on the geometry of blocks derived in the last section.
In order to do this, we relate the $\mu$-Galton--Watson tree conditioned to survive $T_\infty$ (and the graph $\mathscr{C}_\infty= \mathsf{Causal}( T_{\infty})$ obtained by adding the horizontal edges to $T_{\infty}$ as in Figure~\ref{fig:causal-arbre}) to the unconditioned quarter-plane model made of i.i.d.~$\mu$-Galton--Watson trees that we considered in Section \ref{sec:halfplane}. The main ingredient is the standard representation of $T_{\infty}$ using the spine decomposition \cite{LPP95b}, which we now review. We refer to \cite{AD13} for precise statements and proofs regarding this decomposition.\medskip
The plane tree $T_{\infty}$ has a unique spine (an infinite line of descent) which can be seen as the genealogy of a mutant particle, which reproduces according to the biased distribution $\overline{\mu} =( k \cdot \mu_{k})_{k \geq 0}$ and of which exactly one offspring, chosen uniformly at random, is declared mutant. All other particles reproduce according to the underlying offspring distribution $\mu$; see Figure \ref{fig:subforest} (left) and \cite{AD13} for more details. We deduce from this representation that, for every $n_{0} \geq 1$, conditionally on $\# \partial [T_{\infty}]_{n_{0}}$, if at generation $n_{0}$ we erase the unique mutant particle and all of its descendants, then we obtain a forest of $(\# \partial[T_{\infty}]_{n_{0}}-1)$ independent $\mu$-Galton--Watson trees. We order the trees in this forest using the plane ordering of $T_{\infty}$ so that the first tree is the one immediately to the right of the spine and the last tree is the one immediately to the left of the spine. Denote this forest by $ \mathcal{F}_{n_{0}}$ and note that it can be empty.
We add the horizontal edges between consecutive vertices of $\mathcal{F}_{n_0}$ at the same generation (except, in each generation, an edge linking the two extreme vertices of the line) to get the graph $ \mathcal{C}_{n_{0}}$, which is then a subgraph of $ \mathscr{C}_\infty$. The graph $ \mathcal{C}_{n_{0}}$ truncated at height $k$ will be denoted by $[ \mathcal{C}_{n_{0}}]_{k}$. See Figure \ref{fig:subforest}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=14cm]{subforest.pdf}
\caption{
\small{Left: A piece of $T_{\infty}$ where the ancestral line of the mutant particles is highlighted. Right: the subforest $ \mathcal{F}_{4}$ and its causal version $ \mathcal{C}_4$ obtained by adding the horizontal connections. \label{fig:subforest}}
}
\end{center}
\vspace{-2em}
\end{figure}
It is also standard that $T_{\infty}$ is the martingale biasing of the random variable $T$ by the non-negative martingale $( \# \partial [T]_{n})_{n \geq 0}$. That is, for every positive function $f$ on the set of finite plane trees we have
$$ \mathbb{E}[f([T_{\infty}]_{n})] = \mathbb{E}[f([T]_{n}) \# \partial [T]_{n}]$$
for every $n\geq 0$. In particular the size of the $n$-th generation $\# \partial [T_{\infty}]_{n}$ has the law of $\# \partial [T]_{n}$ biased by itself. It is also standard (see \cite[Theorem 4]{Pak76}) that $\# \partial [T_{\infty}]_{n}$ is of order $n^{\beta}$ (recall the definition of $\beta$ at the end of the Introduction) and more precisely that once rescaled by $n^{-\beta}$ it converges in distribution towards a positive random variable: \begin{eqnarray} n^{-\beta}\# \partial [T_{\infty}]_{n} \quad \xrightarrow[n\to\infty]{d} \quad \mathcal{X}'>0. \label{eq:yaglombis} \end{eqnarray}
Here again, the precise distribution of the random variable $ \mathcal{X}'$ will not be used. We shall however use a version of this estimate which is rougher for a given $n$ but holds simultaneously for all $n \geq 1$:
\begin{lemma} \label{lem:lil} There is some positive constant $0<C<\infty$ such that almost surely, for all $n$ large enough
$$ n^\beta \big(\log n \big)^{-C} \leq \#\partial [T_{\infty}]_{n} \leq n^\beta \big(\log n \big)^{C}. $$
\end{lemma}
\proof We set $Z_{n}^{*} = \# \partial [T_{\infty}]_{n}$ and $Z_n = \# \partial [T]_{n}$ for an independent unconditioned $\mu$-Galton--Watson tree $T$, so that $(Z_n^*)_{n\geq 0}$ is distributed as the martingale biasing of $(Z_n)_{n\geq 0}$ by itself. We first prove the lemma along the subsequence $n= 2^{k}$. By \cite[Propositions 2.2 and 2.6]{CK08} there exist constants $c>0$ and $\delta >0$ such that
\begin{equation}\label{eq:CK08} \mathbb{P}\bigl( Z_{n}^{*} \notin (\lambda^{-1} n^{\beta }; \lambda n^{\beta })\bigr) \leq c \lambda ^{-\delta}\end{equation}
for every $n\geq 1$ and $\lambda >1$.
Putting $ n= 2^{k}$ and $\lambda = k^{ 2/\delta}$ we can use the Borel--Cantelli lemma to deduce that indeed \begin{eqnarray}
k^{-2/\delta}2^{\beta k} \leq Z^{*}_{2^{k}} \leq k^{2/\delta}2^{\beta k} \label{eq:2k} \end{eqnarray}
for all sufficiently large $k$ almost surely.
We now extend this estimate to all values $n \geq 1$, at the price of changing the exponent of the logarithm from $2/\delta$ to $C=(8/\delta) \vee 8$. We begin with the upper bound. Let $n\geq 1$ and let
\[
\tau_n = \inf\left\{m \geq n : Z^{*}_{m} \geq m^{\beta} (\log m)^{C}\right\},
\]
where we set $\inf \emptyset = \infty$.
Since
\[
\left\{Z^{*}_n \geq n^\beta (\log n)^{C} \text{ for infinitely many $n\geq 1$}\right\} = \bigcap_{n\geq 1}\left\{\tau_n < \infty\right\},
\]
it suffices to prove that $\lim_{n\to\infty} \P(\tau_n < \infty) =0$.
Condition on the stopped $\sigma$-algebra $\mathcal{F}_{\tau_n}$, and let $K_n = \lceil \log_2 \tau_n \rceil$. If $\tau_n<\infty$ then $ 4 \tau_n \geq 2^{K_n+1}- \tau_n \geq \tau_n$, and it follows from \eqref{eq:kolmogorov} and \eqref{eq:yaglom} that each of the $Z_{\tau_n}^* \geq \tau_n^{\beta} (\log \tau_n)^{C}$ particles in generation $\tau_n$ has conditional probability at least $c \, \tau_n^{-\beta}$ of having at least $\tau_n^{\beta}$ descendants at level $2^{K_n+1}$ for some constant $c>0$ (the one backbone particle having an even higher conditional probability). The conditional probability that this occurs for at least $\lceil c(\log \tau_n)^{C}/2 \rceil $ particles is uniformly positive by \eqref{eq:ASChernoff}, and we deduce that
$$
\mathbb{P}\Bigl( \tau_n<\infty \text{ and } Z^{*}_{2^{K_n+1}} \geq c'2^{\beta K_n} K_n^{C} \;\Big|\; \mathcal{F}_{\tau_n} \Bigr) \geq c'' \mathbf{1}(\tau_n<\infty)$$
for some positive constants $c'$ and $c''$. Taking expectations over $\mathcal{F}_{\tau_n}$, we deduce that
\begin{align*}
c'' \P(\tau_n<\infty) \leq \mathbb{P}\Bigl( \tau_n<\infty \text{ and } Z^{*}_{2^{K_n+1}} \geq c'2^{\beta K_n} K_n^{C} \Bigr)
\leq \mathbb{P}\Bigl(Z^{*}_{2^{k+1}} \geq c'2^{\beta k} k^{C} \text{ for some $k \geq \log_2 n$}\Bigr).
\end{align*}
The right hand side tends to zero as $n\to\infty$ by \eqref{eq:2k}, and we deduce that $\P(\tau_n<\infty) \to 0$ as $n\to\infty$ as claimed.
We now prove the lower bound, for which
we adapt the proof of
\cite[Proposition 13]{BCsubdiffusive}.
Notice that by \eqref{eq:2k} we know that $ n^{-C/4} 2^{n\beta}\leq Z^{*}_{2^{n}} \leq 2^{n\beta} n^{C/4}$ eventually so we just need to prevent the process $Z^{*}$ from going down too low in-between times $2^{n}$ and $2^{n+1}$. For this, we consider the conditional probability
\[ \mathbb{P}\Bigl( 2^{n\beta}n^{-C/4} \leq Z_{2^{n+1}}^{*} \leq 2^{n\beta}n^{C/4} \mbox{ and } \exists 2^{n} < k <2^{n+1} \mbox{ s.t. } Z_{k}^{*} \leq 2^{n\beta}n^{-C} \ \Bigm| \ Z_{2^{n}}^{*} = z_{0} \Bigr),\]
for $n^{-C/4} 2^{n\beta}\leq z_{0} \leq 2^{n\beta} n^{C/4}$. Since the Markov chain $Z^{*}$ is the size-biasing of the chain $Z$ by itself (i.e., the $h$-transform of $Z$ with respect to the function $h(z)=z$), this conditional probability is bounded above by
\begin{multline*}
\frac{2^{n\beta}n^{C/4}}{2^{n\beta}n^{-C/4}}
\\
\cdot\mathbb{P}\Bigl( 2^{n\beta}n^{-C/4} \leq Z_{2^{n+1}} \leq 2^{n\beta}n^{C/4} \mbox{ and } \exists 2^{n} < k <2^{n+1} \mbox{ s.t. } Z_{k} \leq 2^{n\beta}n^{-C} \ \Bigm| \ Z_{2^{n}} = z_{0} \Bigr).
\end{multline*}
But now, since the process $Z$ is a non-negative martingale absorbed at $0$, the optional sampling theorem implies that the probability that the process $Z$ drops below $2^{n\beta}n^{-C}$ and then later reaches a value larger than $2^{n\beta}n^{C/4}$ is less than $n^{-3C/4}$. Hence, the last display is bounded above by $n^{-C/4}$. Since $C\geq 8$ these probabilities are summable in $n$. Applying Borel--Cantelli, we deduce that $Z_{n}^{*} \geq n^{\beta} (\log n)^{-C}$ eventually almost surely on the event that $n^{-C/4} 2^{n\beta}\leq Z^{*}_{2^{n}} \leq 2^{n\beta} n^{C/4}$ eventually. \endproof
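\begin{remark} Let us spell out the size-biasing step used in the proof above. Since $(Z_n)_{n\geq 0}$ is a martingale, the chain $(Z^*_n)_{n\geq 0}$ is its Doob $h$-transform with $h(z)=z$, so that the transition kernels are related by
\[ \mathbb{P}\bigl(Z^*_{n+1}=w \mid Z^*_n = z\bigr) \;=\; \frac{w}{z}\,\mathbb{P}\bigl(Z_{n+1}=w \mid Z_n = z\bigr), \qquad z \geq 1.\]
Telescoping, the density of the law of $(Z^*_{s})_{2^{n} \leq s \leq 2^{n+1}}$ with respect to that of $(Z_{s})_{2^{n} \leq s \leq 2^{n+1}}$, both started from $z_{0}$, is $Z_{2^{n+1}}/z_{0}$, which on the event considered is at most $2^{n\beta}n^{C/4}/(2^{n\beta}n^{-C/4}) = n^{C/2}$. This is precisely the prefactor appearing in the display above.
\end{remark}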
A straightforward corollary of Lemma \ref{lem:lil} concerns the volume growth from the origin (as alluded to in the introduction): If $B_{r}$ denotes the graph ball of radius $r$ around the origin in $ \mathscr{C}_\infty$ and $\# B_{r}$ the number of vertices of $B_{r}$, then
\begin{align} \label{eq:volumegrowth} r^{\beta+1} \big(\log r\big)^{-C} \leq \# B_{r} \leq r^{\beta+1} \big(\log r \big)^{C}
\end{align}
for all sufficiently large $r$ almost surely.
See \cite[Proposition 2.8]{BK06} and \cite[Lemma 5.1]{CK08} for more precise estimates. We now have all the ingredients to complete the proof of our Theorems \ref{thm:geometry} and \ref{thm:geometrystable}. In fact, we will prove the following quantitative version of item $(i)$ of Theorem \ref{thm:geometry}.
\begin{proposition}[Quantitative girth lower bound for generic causal maps] \label{prop:quantgeometry} Suppose $\mu$ is critical and has finite non-zero variance. Then there exists a constant $C$ such that
\[ \frac{1}{r} \mathsf{Girth}_{r}( \mathscr{C}_\infty) \geq e^{-C \sqrt{\log r} } \]
almost surely for all $r$ sufficiently large.
\end{proposition}
\proof[Proof of Theorem \ref{thm:geometry} (i), Theorem \ref{thm:geometrystable} (i), and Proposition \ref{prop:quantgeometry}] Fix $ \varepsilon>0$. Pick $n \geq 1$ large and consider the forest $ \mathcal{F}_{n}$ introduced above. This forest is obviously finite.
By \eqref{eq:kolmogorov} and \eqref{eq:ASChernoff}, conditionally on the event that $\# \partial [T_{\infty}]_{n} \geq n^{\beta}(\log n)^{-C_{1}}$, the probability that there are at least $(\log n)^{2}$ trees inside this forest which reach height at least $m=m(n)=n(\log n)^{-{(3+C_1)\over \beta}}$ (relative to their starting height of $n$) is lower bounded by
$$ \mathbb{P}\Big(\mathrm{Binomial}\Big(\lfloor n^{\beta}(\log n)^{-C_{1}} \rfloor , \mathbb{P}\big( \mathrm{Height}(T) \geq m(n)\big)\Big) \geq (\log n)^{2} \Big) \geq 1- \exp\bigl(- \delta (\log n)^3\bigr),$$ for some $\delta>0$. On this event, using our Theorem \ref{thm:main}, we see that the probability that $ [ \mathcal{C}_{n}]_{m}$ contains at least two disjoint blocks $\mathcal{G}^{(n,1)},\mathcal{G}^{(n,2)}$ of height $m$ whose widths are both at least
\begin{align*} m e^{-C_2\sqrt{\log m}} &\geq n e^{-C_3\sqrt{\log n}} && \text{ if } \beta =1 \\
c m &= c n (\log n)^{-C_4} &&\text{ if } \beta >1
\end{align*} is bounded from below by $1 - \exp(-\delta' (\log n)^2)$ for some finite constants $C_2,C_3,C_4$ and some $\delta' >0$. Thus, by Lemma \ref{lem:lil} and the Borel--Cantelli lemma we deduce that
both events occur for all sufficiently large $n$ almost surely.
Let $n+m/4 \leq \ell \leq n+3m/4$, and let $u,v$ be vertices at height $\ell$ that are in the left and right boundaries of the block $\mathcal{G}^{(n,1)}$ respectively. Then any path from $u$ to $v$ must either leave the strip of vertices of heights between $n$ and $n+m$, or else must cross at least one of the blocks $\mathcal{G}^{(n,1)}$ or $\mathcal{G}^{(n,2)}$. From here we see immediately that there exists $C_5<\infty$ such that the bound
\begin{align*} \mathsf{Girth}_{\ell}(\mathscr{C}_\infty) &\geq \begin{cases}
n e^{-C_5\sqrt{\log n}}
& \text{ if }\beta =1\\
n (\log n)^{-C_5}& \text{ if } \beta>1,
\end{cases}
& n+m/4\leq \ell \leq n+3m/4
\end{align*}
holds for all sufficiently large $n$ almost surely,
concluding the proof.\endproof
The second part of Theorem \ref{thm:geometrystable} follows the same lines. Let us sketch the argument.
\proof[Sketch of proof of the second part of Theorem \ref{thm:geometrystable}] Suppose that $\mu$ satisfies \eqref{eq:defstable} and recall that $\beta = \frac{1}{\alpha-1} >1$. Since by \eqref{eq:yaglombis} the variable $\# \partial [T_{\infty}]_{n}$ is typically of order $n^{\beta}$, using \eqref{eq:kolmogorov} we deduce that the number of trees in the forest $ \mathcal{F}_{n}$ which reach height $ \eta n$ tends in probability to $\infty$ as $ \eta \to 0$ and $n \to \infty$. In particular, for any $ \varepsilon>0$ and any $k_{0} \geq 1$ we can find $\eta$ small enough such that for any large enough $n$, the graph $\mathcal{C}_{n}$ contains at least $k_{0}$ independent blocks of height $\eta n$ with probability at least $1 - \varepsilon$. By Theorem \ref{thm:main}, with probability at least $1- (1+2k_0)2^{-k_{0}}$ the left-right width of at least two of these blocks is larger than $c \eta n$. Choosing $k_{0}$ so that $(1+2k_0)2^{-k_{0}}\leq \varepsilon$, we deduce (using the same argument as above) that with probability at least $1 - 2 \varepsilon$ the girth of $ \mathscr{C}_\infty$ at levels between $n(1+\eta/4)$ and $n(1+3\eta/4)$ is at least $c\eta n$. This entails Theorem \ref{thm:geometrystable}.\endproof
Finally, we prove the upper bound on the girth in the finite-variance case.
\proof[Proof of Theorem \ref{thm:geometry} (ii)] Fix $\mu$ critical with finite, non-zero variance. Fix $n \geq 1$ large and consider the graph $ \mathcal{C}_{n}$, which is a subgraph of $ \mathscr{C}_\infty$. As before, conditional on $ \# \partial [T_{\infty}]_{n} = \ell $, this graph is made of $\ell-1$ i.i.d.~Galton--Watson trees together with the added horizontal connections. Proposition \ref{prop:subadditive} directly tells us that the distance inside $ \mathcal{C}_{n}$ between its bottom-left corner $x$ and its bottom-right corner $y$ is $o(\ell)$ with high probability.
\medskip
\noindent \begin{minipage}{9cm}
Since $x$ and $y$ are both adjacent to the spine vertex at level $n$, we can use two horizontal edges to link $x$ to $y$ in $ \mathscr{C}_\infty$ as depicted on the right. Since $\ell=O(n)$ with high probability by \eqref{eq:yaglombis},
this argument shows that for each $\varepsilon>0$ there exists $N<\infty$ such that if $n\geq N$ then we can construct, with probability at least $1-\varepsilon$, a loop $ \mathcal{L}$ inside $ \mathscr{C}_\infty$ of length at most $\varepsilon n$ that separates $\rho$ from $\infty$ and that only contains vertices of height between $n$ and $(1+\varepsilon)n$.
\end{minipage}
\hspace{1cm}
\begin{minipage}{3cm}
\includegraphics[width=3cm]{joiningpaths3}
\end{minipage}
\medskip
Let $n\geq N$, condition on this event, and consider the set of vertices of $ \mathscr{C}_\infty$ at height $n'=n+\lceil \varepsilon n \rceil$. Each such vertex is connected to a vertex at height $n$ by a path of length $\lceil \varepsilon n\rceil$, and this path must intersect $\mathcal{L}$. We deduce that the distance between any vertex at height $n'$ and $ \mathcal{L}$ is at most $ \lceil\varepsilon n\rceil$, and hence that $\mathsf{Girth}_{n'}(\mathscr{C}_\infty) \leq 3\varepsilon n$ with probability at least $1-\varepsilon$ for every $n\geq N$, from which the proof may easily be concluded. \endproof
\section{Resistance growth and spectral dimension}
\label{sec:spectral}
In this section we will prove Theorem \ref{thm:spectral}, via Theorem \ref{thm:resistance}. Since certain arguments are valid in general, we highlight when finite variance is needed.
\subsection{Resistance}
The resistance will be controlled through the method of random paths and builds upon the geometric estimates established in the preceding section. In this section, all resistances will be taken with respect to the graph $\mathscr{C}_\infty$. Before diving into the proof of Theorem \ref{thm:resistance}, we first prove that $ \mathscr{C}_\infty$ is recurrent (i.e., that $ \mathsf{R_{eff}}(\rho \leftrightarrow B_r^c)=\mathsf{R_{eff}}( \rho \leftrightarrow \partial [T_\infty]_{r+1} ; \mathscr{C}_\infty) \to \infty$ as $r\to\infty$).
\begin{proposition} \label{prop:recurrence} If $\mu$ has finite non-zero variance then $ \mathscr{C}_\infty$ is recurrent almost surely.
\end{proposition}
\proof We apply the Nash--Williams criterion for recurrence \cite[(2.14)]{LP10}, using the obvious collection of cut-sets given by the sets of edges linking level $r$ to level $r+1$ for each $r\geq1$. This edge set has cardinality precisely $\#\partial [T_{\infty}]_{r+1}$ so the proposition reduces to checking that
$$ \sum_{r=1}^{\infty} \frac{1}{ \# \partial [T_{\infty}]_{r}} = \infty, \quad \mbox{ almost surely}.$$
Since we have that $\frac{1}{ \# \partial [T_{\infty}]_{r}} = \frac{1}{r} \frac{r}{ \# \partial [T_{\infty}]_{r}}$ and by \eqref{eq:yaglombis} that the random variable $\frac{r}{ \# \partial [T_{\infty}]_{r}}$ converges in law towards a positive random variable, the last display is a direct consequence of Jeulin's Lemma \cite[Proposition 4 c]{Jeu82}. \endproof
\begin{remark}
Let us briefly discuss quantitative resistance lower bounds.
It follows immediately from Nash--Williams that
\[
\mathbb{E} \left[\mathsf{R_{eff}}(\rho \leftrightarrow B_r^c) \right] \geq \sum_{k=1}^r \frac{1}{k} \mathbb{E} \left[\frac{k}{ \# \partial [T_{\infty}]_{k}}\right] \geq c \log r
\]
for some $c>0$ by \eqref{eq:yaglombis}.
With a little further effort
one can prove an almost sure lower bound on the resistance growth of the form
\[
\mathsf{R_{eff}}(\rho \leftrightarrow B_r^c) \geq \sum_{k=1}^r \frac{1}{ \# \partial [T_{\infty}]_{k}} \geq c \log r \qquad \text{ for all $r$ sufficiently large a.s.}
\]
Indeed, in the analogous statement for the CSBP the contributions from successive dyadic scales form a stationary ergodic sequence and the result follows from the ergodic theorem. Pushing this argument through to the discrete case requires one to handle some straightforward but tedious technical details.
We do not pursue this further here.
\end{remark}
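Although the upper bound of Theorem \ref{thm:resistance} is proved below by purely theoretical means, the resistance is also easy to compute on simulated instances. The following sketch (ours) builds $T_\infty$ via the spine decomposition recalled in Section \ref{sec:width}, again for the critical geometric offspring law; as an inessential convention we close each level of the causal map into a cycle. It solves the grounded Laplacian system with \texttt{numpy}/\texttt{scipy}:
{\small
\begin{verbatim}
import random
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def geom():                       # P(k) = 2^{-(k+1)}: critical geometric law
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def t_infty_levels(H):
    # Spine decomposition of T_infty: the spine particle reproduces according
    # to the size-biased law (k mu_k), sampled here as 1 + geom() + geom(),
    # and one of its children, chosen uniformly, carries the spine.
    levels, spine = [[None]], [0]
    for h in range(H):
        children, spine_child = [], None
        for i in range(len(levels[h])):
            if i == spine[h]:
                k = 1 + geom() + geom()
                spine_child = len(children) + random.randrange(k)
            else:
                k = geom()
            children.extend([i] * k)
        levels.append(children)
        spine.append(spine_child)
    return levels

def effective_resistance(levels, R):
    # R_eff(rho <-> level R+1): unit current injected at the root, level R+1
    # grounded; the effective resistance is the potential of the root.
    idx = {}
    for h in range(R + 1):
        for j in range(len(levels[h])):
            idx[(h, j)] = len(idx)
    n = len(idx)
    L = lil_matrix((n, n))
    def add_edge(u, v):           # u has height <= R; v may be grounded
        L[idx[u], idx[u]] += 1
        if v in idx:
            L[idx[v], idx[v]] += 1
            L[idx[u], idx[v]] -= 1
            L[idx[v], idx[u]] -= 1
    for h in range(1, R + 2):     # vertical edges
        for j, p in enumerate(levels[h]):
            add_edge((h - 1, p), (h, j))
    for h in range(R + 1):        # horizontal edges; levels closed into cycles
        w = len(levels[h])
        for j in range(w - 1):
            add_edge((h, j), (h, j + 1))
        if w >= 3:
            add_edge((h, 0), (h, w - 1))
    b = np.zeros(n)
    b[idx[(0, 0)]] = 1.0
    return spsolve(csr_matrix(L), b)[idx[(0, 0)]]

lv = t_infty_levels(257)
for R in [16, 32, 64, 128, 256]:
    print(R, round(effective_resistance(lv, R), 3))
\end{verbatim}}
\noindent One expects the printed values to grow very slowly in $R$, consistently with the logarithmic lower bound discussed above and with the sub-polynomial upper bound of Theorem \ref{thm:resistance}.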
\proof[Proof of Theorem \ref{thm:resistance}.]
Recall the assumption that $\mu$ is critical and has finite, non-zero variance. By Lemma \ref{lem:lil} there exists a constant $C_1>0$ such that the number of vertices in level $r$ satisfies $r (\log r)^{-C_{1}} \leq \# \partial [T_{\infty}]_{r} \leq r (\log r)^{C_1}$ for all $r$ larger than some almost surely finite random $r_0$.
Arguing as in the proof of Proposition \ref{prop:quantgeometry} but applying Theorem \ref{thm:maindual} instead of Theorem \ref{thm:main}, we obtain that there exist constants $C_2,C_3$ such that there exists an almost surely finite $m_0$ such that for every $m\geq m_0$ and every $m (\log m)^{-C_2} \leq k\leq 3m (\log m)^{-C_2}$,
the subgraph $[\mathcal{C}_{m}]_{k}$ of $\mathscr{C}_\infty$, defined in Section \ref{sec:width}, contains a block of height equal to $k$ and dual width at least $k e^{-C_3\sqrt{\log k}}$.
Consider the increasing sequence of natural numbers $h_n$ defined by
\[
h_n = \left\lfloor
\exp\left((C_2+1)^{1/(C_2+1)}n^{1/(C_2+1)}\right)
\right\rfloor,
\]
and let $k_n=h_{n+2}-h_n$.
These numbers have been chosen to satisfy the asymptotics $k_n \sim 2 h_n (\log h_n )^{-C_2}$ as $n\to\infty$, so that in particular $ h_n (\log h_n )^{-C_2} \leq k_n \leq 3 h_n (\log h_n )^{-C_2}$ for all $n$ larger than some $n_0'<\infty$. Thus, it follows from the discussion in the previous paragraph that there exists an almost surely finite $n_0''$ such that for each $n\geq n_0''$, the subgraph $[\mathcal{C}_{h_n}]_{k_n}$ of $\mathscr{C}_\infty$ contains a block $\mathcal{G}^{(n)}$ of height equal to $k_n$ and dual width at least
$k_n e^{-C_3\sqrt{\log k_n}}$.
Let $n_0 \geq n''_0$ be minimal such that $h_{n_0} \geq r_0$ and let $\Omega$ be the almost sure event that $n_0$ is finite.
Since the resistance $\mathsf{R_{eff}}(\rho \leftrightarrow B_{r}^{c})$ is increasing in $r$, it suffices to prove that there exists a constant $C_4$ such that
\[\mathsf{R_{eff}}\bigl(\rho \leftrightarrow B_{h_n}^{c}\bigr) \leq e^{C_4\sqrt{\log h_n}}\]
for all sufficiently large $n$ almost surely. We will prove that this is the case deterministically on the event $\Omega$. In order to do this, we use the \emph{method of random paths} (see \cite[Chapter 2.5]{LP10}). In particular, we will construct a random path $\Gamma$ from $\rho$ to the boundary of the ball of radius $h_n$, and
then bound the resistance by
the ``energy'' of the path\footnote{Strictly speaking the quantity on the right of \eqref{eq:energy} is not the energy of $\Gamma$, but rather an upper bound on the energy of $\Gamma$.}:
\begin{eqnarray} \label{eq:energy} \mathsf{R_{eff}}(\rho \leftrightarrow B_{r}^{c}) \leq 2 \sum_{ e \in \mathsf{Edges}(B_{r})} \mathbb{P}( \Gamma \mbox{ goes through }e \mid \mathscr{C}_\infty \mbox{ and } \Omega)^{2}. \end{eqnarray}
Condition on $\mathscr{C}_\infty$ and the event $\Omega$.
By the discussion of Section \ref{sec:dualdiam}, for each $m \geq n_0$, the subgraph $\mathcal{G}^{(m)}$ of $[\mathcal{C}_{h_m}]_{k_m}$ contains a set of at least $k_m e^{-C_3\sqrt{\log k_m}}$ edge-disjoint paths linking its bottom boundary to its top boundary. Indeed, the maximal size of such a set is equal to the dual left-right width of $\mathcal{G}^{(m)}$. Fix one such maximal set for each $m$ and let $\Gamma^{(m)}$ be a uniformly chosen element of this set. We let $s_{n_0}=h_{n_0}$ and for each $m> n_0$ we let $s_m$ be a uniform index between $h_m$ and $h_{m+1}$.
We now build the random path $\Gamma$ starting from $\rho$ inductively as follows. To start, we pick arbitrarily a path from $\rho$ to level $h_{n_0}$ to be the initial segment of $\Gamma$. We then let $\Gamma$ travel horizontally around level $h_{n_0}$ to meet the starting point of the path $\Gamma^{(n_0)}$. Following this, for each $n_0 \leq m\leq n-3$,
between heights $s_{m}$ and $s_{m+1}$, the path $\Gamma$ follows the segment of $\Gamma^{(m)}$ between its last visit to height $s_m$ and its first visit to height $s_{m+1}$. When $\Gamma$ reaches level $s_{m+1}$, it travels horizontally around that level to join the path $\Gamma^{(m+1)}$ at the site of its last visit to that level. Finally, $\Gamma$ takes the segment of $\Gamma^{(n-2)}$ between levels $s_{n-2}$ and $h_{n}$, at which point it stops.
\vspace{1em}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=12cm]{randompath2.pdf}
\caption{ \label{fig:randompath}
\small{
Illustration of the random path $\Gamma$ used in the proof of Theorem \ref{thm:resistance}.
Left: for each $n_0 \leq m \leq n-2$ we have a path $\Gamma^{(m)}$ from level $h_m$ to level $h_{m+2}$. Right: The path $\Gamma$ switches from $\Gamma^{(m)}$ to $\Gamma^{(m+1)}$ by turning through a horizontal arc at the random height $s_{m+1}$.
}}
\vspace{-1em}
\end{center}
\end{figure}
We shall now estimate the energy of this random path.
Let $e$ be an edge of $B_{h_{n}}$ at height $\ell$, where $h_m \leq \ell < h_{m+1}$ for some $n_0 \leq m < n$ and the height of an edge is defined to be the minimal height of its endpoints. Then we can compute that
\begin{multline*}
\P( \Gamma \text{ goes through } e \mid \mathscr{C}_\infty, \Omega) \\
\leq\P( \Gamma^{(m-1)} \text{ goes through } e \mid \mathscr{C}_\infty, \Omega) + \P( \Gamma^{(m)} \text{ goes through } e \mid \mathscr{C}_\infty, \Omega) + \P(s_m = \ell)\\
\leq k_{m-1}^{-1} e^{C_3 \sqrt{\log k_{m-1}}} + k_{m}^{-1} e^{C_3 \sqrt{\log k_{m}}} + \frac{1}{h_{m+1}-h_m} \leq \ell^{-1} e^{C_5 \sqrt{\log \ell}} \, ,
\end{multline*}
where $C_5>0$ is another constant. Note that the number of edges at height $\ell$ is equal to $\#\partial [T_\infty]_\ell + \#\partial [T_\infty]_{\ell+1}$, and hence is at most $O(\ell \log^{C_1} \ell)$ on the event $\Omega$.
On the other hand, the initial segment of $\Gamma$ reaching from $\rho$ to level $h_{n_0}$ increases the energy of $\Gamma$ by at most a constant. Thus, we have that
\[
\mathsf{R_{eff}}\bigl(\rho \leftrightarrow B^c_{h_n}\bigr) \leq O(1) +
\sum_{\ell=h_{n_0}}^{h_n} \left[\ell^{-1} e^{C_5\sqrt{\log \ell}} \right]^2 O(\ell \log^{C_1} \ell) \leq e^{C_6 \sqrt{\log h_n}} \, ,
\]
for some constant $C_6>0$, as claimed.
\endproof
\begin{remark} One can adapt the proof of Theorem \ref{thm:resistance} to the $\alpha$-stable case.
Following the same construction of the random path as in the above proof and applying Lemma \ref{lem:lil} with the appropriate $\beta >1$ we now deduce that the energy of the path $\Gamma$ linking $\rho$ to $B_{r}^{c}$ is of order
\[ \mathsf{R_{eff}}(\rho \leftrightarrow B_{r}^{c}) \leq \sum_{h=1}^{r} h^{-2} h^{\beta} \big(\log h\big)^{C_{1}} \leq r^{\beta-1} \big( \log r \big)^{C_{2}}\]
for some $C_{1},C_{2}>0$ as $r\to\infty$.
In particular, the resistance exponent $\mathfrak{r}$ (if it is well-defined) satisfies $ \mathfrak{r} \leq \beta-1 = \frac{2- \alpha}{\alpha-1}$. However, this bound on the resistance becomes trivial in the regime $\alpha \in(1,3/2]$ since then $\beta \geq 2$ and we obtain a super-linear upper bound on the resistance... which is trivially at most $r$!
\end{remark}
\subsection{Spectral dimension and diffusivity (Theorem \ref{thm:spectral})}
We can now prove Theorem \ref{thm:spectral}.
\proof[Proof of Theorem \ref{thm:spectral}]
We denote by $P^{n}(x,y)$ the $n$-step transition probabilities of the simple random walk $(X_n)_{n\geq0}$ on the graph $\mathscr{C}_\infty$. Recall that $ \mathrm{P}_{ \mathscr{C}_\infty,\rho}$ is the law of the random walk on $ \mathscr{C}_\infty$ started from $\rho$ and recall also that $B_{r}$ denotes the ball of radius $r$ around the origin vertex $\rho \in \mathscr{C}_\infty$. We will split the proof of Theorem \ref{thm:spectral} into lower and upper bounds for return probabilities and typical displacements. As we will see, the upper bound for the return probability is a simple consequence of our resistance estimates (Theorem \ref{thm:resistance}) while the upper bound on the typical displacement is a standard application of the Varopoulos--Carne heat kernel bound for polynomially growing graphs. Let us proceed.
\medskip
\noindent (\textbf{Return probability upper bound.}) Recall that $\deg(\rho) \mathsf{R_{eff}}\bigl(\rho \leftrightarrow B_r^c \bigr)$ is equal to the expected number of times that the random walk started at $\rho$ visits $\rho$ before first leaving $B_r$. By the spectral decomposition for reversible Markov chains (see \cite[Lemma 12.2]{MCMT2E}) we know that $P^{2n}(x,x)$ is a decreasing function of $n$ for every vertex $x$ of $\mathscr{C}_\infty$.
Hence letting $\tau_r$ be the first time that the random walk visits $B_r^c$, we have the bound
\[
(n+1) P^{2n}(\rho,\rho)\leq \sum_{m=0}^n P^{2m}(\rho,\rho) \leq \mathbf{E}_{\mathscr{C}_\infty,\rho}\left[\sum_{m=0}^{\tau_{2n}} \mathbf{1}\bigl( X_m = \rho\bigr) \right] = \deg(\rho)\mathsf{R_{eff}}\bigl(\rho \leftrightarrow B_{2n}^c \bigr).
\]
Thus, applying Theorem \ref{thm:resistance} yields that
\begin{equation}
\label{eq:Pupper}
P^{2n}(\rho,\rho) \leq n^{-1} e^{C\sqrt{ \log n}}
\end{equation}
for all sufficiently large $n$ almost surely.
To obtain a similar bound for odd $n$, we
use the well-known fact that return probabilities are log-convex in the sense that $P^{n+m}(\rho,\rho) \leq \sqrt{P^{2n}(\rho,\rho)P^{2m}(\rho,\rho)}$ for every $n,m\geq 0$ \cite[Lemma 3.20]{aldous-fill-2014}:
Applying this fact together with \eqref{eq:Pupper} we obtain that
\begin{equation}
\label{eq:Pupper2}
P^{n}(\rho,\rho) \leq \sqrt{P^{2\lfloor n/2 \rfloor}(\rho,\rho)P^{2\lceil n/2 \rceil}(\rho,\rho)} \leq 2 n^{-1} e^{C\sqrt{ \log n}}
\end{equation}
for all sufficiently large $n$ almost surely.
\medskip
\noindent
(\textbf{Typical displacement upper bound.})
Recall the classical Varopoulos--Carne bound \cite[Section 13.2]{LP10}, which implies that for every vertex $x$ of $\mathscr{C}_\infty$ and every $n\geq 1$ we have that
\[ P^n(\rho,x) \leq 2\sqrt{\frac{\deg(x)}{\deg(\rho)}} \exp\left[- \frac{1}{2n}\mathrm{d}^{\mathscr{C}_\infty}_\mathrm{gr}(\rho,x)^2 \right] \leq 2\deg(x) \exp\left[- \frac{1}{2n}\mathrm{d}^{\mathscr{C}_\infty}_\mathrm{gr}(\rho,x)^2 \right].\]
Observe that, since every vertex of $\mathscr{C}_\infty$ at height $n\geq 1$ has at most three edges emanating from it that connect to vertices at height less than or equal to $n$, we have that $ \sum_{x \in B_n} \deg(x) \leq 4 \cdot \# B_{n+1}$. Thus, it follows from \eqref{eq:volumegrowth} that there exists a constant $C$ such that $\sum_{x \in B_n} \deg(x) \leq n^2 \big(\log n\big)^{C}$ for all sufficiently large $n$ almost surely. Since $X_n \in B_n$ deterministically, the last display then yields that
\[
\mathrm{P}_{ \mathscr{C}_\infty, \rho}\Bigl(X_n \notin B_{\sqrt{5 n \log n}} \Bigr) \;\leq\; \sum_{x \in B_n} 2 \deg(x) \exp\Bigl[-\frac{5 n \log n}{2n}\Bigr] \;\leq\; 2\, n^{2} \big(\log n\big)^{C}\, n^{-5/2}
\]
for all sufficiently large $n$, so that $\sum_n \mathrm{P}_{ \mathscr{C}_\infty, \rho}(X_n \notin B_{\sqrt{5 n \log n}} )<\infty$ for almost all realizations of $ \mathscr{C}_\infty$. It follows by Borel--Cantelli (under $ \mathrm{P}_{\mathscr{C}_\infty,\rho}$) that
\begin{equation}
\label{eq:VC}
\limsup_{n\to\infty} \frac{ \mathrm{d}^{\mathscr{C}_\infty}_\mathrm{gr}(\rho,X_n)}{\sqrt{n\log n}} \leq \sqrt{5}
\end{equation}
almost surely. This gives one side of the claim that $\nu=1/2$.
\medskip
\noindent (\textbf{Return probability lower bound}.) To get a lower bound on $P^{n}(\rho,\rho)$, first observe that
\begin{align*}P^{2n}(\rho,\rho) = \sum_{x\in V} P^n(\rho,x) P^n(x,\rho) = \sum_{x\in V} \frac{\deg(\rho)}{\deg(x)} P^n(\rho,x)^2.
\end{align*}
It follows that, for every $r\geq 0$,
\begin{align}
P^{2n}(\rho,\rho) \geq \sum_{x\in B_r} \frac{\deg(\rho)}{\deg(x)} P^n(\rho,x)^2 \geq
\deg(\rho) \mathbf{P}_{\mathscr{C}_\infty,\rho}\bigl(X_n \in B_r\bigr)^2 \frac{1}{\sum_{x \in B_r} \deg(x)}
\label{eq:dsleq2nugproof}
\end{align}
where the second inequality follows by Cauchy--Schwarz.
Taking $r=\lfloor \sqrt{5 n \log n} \rfloor$ we deduce by \eqref{eq:volumegrowth} and the above application of Varopoulos--Carne that there exists a positive constant $C$ such that
\begin{align*}
P^{2n}(\rho,\rho) &\geq \deg(\rho) \mathbf{P}_{\mathscr{C}_\infty,\rho}\bigl(X_n \in B_{r} \bigr)^2 \left(4 \# B_{r +1 }\right)^{-1} \\& \geq \deg(\rho) \bigl(1-o(1)\bigr)n^{-1} \big(\log n \big)^{-C} \geq n^{-1}\big(\log n\big)^{-2C}
\end{align*}
almost surely as $n\to\infty$. Together with \eqref{eq:Pupper} this implies that $d_s(\mathscr{C}_\infty)$ exists and equals $2$ a.s.
\medskip
\noindent (\textbf{Typical displacement lower bound.})
Finally, to bound the probability that the displacement of the random walk is smaller than $n^{1/2-\varepsilon}$, we rearrange \eqref{eq:dsleq2nugproof} and apply \eqref{eq:volumegrowth} and \eqref{eq:Pupper2} to deduce that there exists a constant $C$ and some almost surely finite $n_0$ and $r_0$ such that
\[
\mathbf{P}_{\mathscr{C}_\infty,\rho}\bigl( X_n \in B_r\bigr)^2 \leq r^2 n^{-1} e^{C\sqrt{\log n}} \big( \log r\big)^{C}
\]
for every $n\geq n_0$ and $r\geq r_0$, and it follows immediately that
\[
\lim_{n\to\infty} \mathbf{P}_{\mathscr{C}_\infty,\rho}\Bigl( X_n \in B_{n^{1/2-\varepsilon}}\Bigr) =0
\]
for every $\varepsilon>0$ a.s. Together with \eqref{eq:VC} this implies that $\nu(\mathscr{C}_\infty)$ exists and equals $1/2$ a.s.
\endproof
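As a complement to the proof above, the walk itself is easy to simulate. The following sketch (ours) reuses \texttt{t\_infty\_levels} from the resistance sketch and tracks the height of $X_n$, which is a convenient lower bound for $\mathrm{d}^{\mathscr{C}_\infty}_\mathrm{gr}(\rho,X_n)$; in line with $\nu=1/2$, one expects the printed ratios to remain of order one:
{\small
\begin{verbatim}
import math, random   # t_infty_levels is from the resistance sketch

def causal_adjacency(levels):
    # Adjacency of the causal map: vertical parent-child edges plus
    # horizontal edges, with levels closed into cycles as before.
    adj = {(h, j): [] for h, lev in enumerate(levels) for j in range(len(lev))}
    for h in range(1, len(levels)):
        for j, p in enumerate(levels[h]):
            adj[(h, j)].append((h - 1, p)); adj[(h - 1, p)].append((h, j))
    for h, lev in enumerate(levels):
        w = len(lev)
        for j in range(w - 1):
            adj[(h, j)].append((h, j + 1)); adj[(h, j + 1)].append((h, j))
        if w >= 3:
            adj[(h, 0)].append((h, w - 1)); adj[(h, w - 1)].append((h, 0))
    return adj

H = 600                                # truncation height of T_infty
adj = causal_adjacency(t_infty_levels(H))
pos = (0, 0)
for n in range(1, 20001):
    pos = random.choice(adj[pos])      # one step of the simple random walk
    if pos[0] == H:                    # walk reached the artificial boundary
        print("increase H")
        break
    if n in (1000, 5000, 20000):
        print(n, pos[0], round(pos[0] / math.sqrt(n * math.log(n)), 2))
\end{verbatim}}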
\section{Extensions and comments} \label{sec:comments}
\subsection{Back to Causal Triangulations} \label{sec:causaltrig}
\begin{definition} A causal triangulation is a finite rooted triangulation of the sphere such that the maximal distance to the origin of the map is attained by a single vertex, and for each $k\geq 0$ the subgraph induced by the set of vertices at distance $k \geq 0$ from the origin is a cycle.
\end{definition}
In this work, we focused on the model $ \mathsf{Causal}(\tau)$ which is obtained from a plane tree $\tau$ by adding the horizontal connections between vertices at the same generation. As explained in Figure \ref{fig:causal-arbre}, to get a causal triangulation one also needs to triangulate the faces from their top right corners. (Furthermore, one must add a point at the top of the graph to triangulate the topmost face, even if this face is already a triangle.) As explained in \cite{DJW10} this construction is a bijection between the set of finite rooted plane trees and the set of finite causal triangulations.
When this procedure is applied to the uniform infinite random tree $ \mathbb{T}_{\infty}$ (which is distributed as a critical geometric Galton--Watson tree conditioned to survive forever) the resulting map $ \mathsf{CauTri}( \mathbb{T}_{\infty})$ is the uniform infinite causal triangulation (UICT) as considered in \cite{DJW10,SYZ13}. The large scale geometries of $ \mathsf{CauTri}( {T}_{\infty})$ and $\mathsf{Causal}( {T}_\infty)$ are very similar and it is easy to adapt the results of the present paper to this setting.
Moreover, while it is certainly possible to simply run our arguments again to analyze $\mathsf{CauTri}(T_\infty)$ instead of $\mathsf{Causal}(T_\infty)$, it is also possible to simply \emph{deduce}
versions of each of our main theorems concerning $\mathsf{CauTri}(T_\infty)$ from the statements that we give. Indeed, using the fact that the largest face in the first $n$ levels of $\mathsf{Causal}(T_\infty)$
is at most logarithmically large in $n$ yields that distances within the first $n$ levels of $\mathsf{CauTri}(T_\infty)$ are smaller than those in $\mathsf{Causal}(T_\infty)$ by at most a logarithmic factor. Moreover, an analogue of the resistance upper bound of Theorem \ref{thm:resistance} follows immediately since $\mathsf{Causal}(T_\infty)$ is a subgraph of $\mathsf{CauTri}(T_\infty)$.
We let the reader stare at the two beautiful pictures in Figure \ref{fig:beauty}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=6.5cm]{cgNoGrey}\hspace{1cm} \includegraphics[width=6.5cm]{psyco3}
\caption{Two discrete uniformizations of a large ball in two different random causal triangulations. On the left, using the circle packing (so that the origin is at the center); on the right using Tutte's barycentric embedding (so that the vertices on the boundary are evenly spread). The generations are represented with colors on the left and using a spiral contour on the right. The circle packing was generated by Thomas Budzinski using Ken Stephenson's software.}
\label{fig:beauty}
\end{center}
\end{figure}
\subsection{Causal carpets}
\label{sec:carpet}
Finally, we want to stress that our results can be adapted to various other graphs obtained from trees by ``adding the horizontal connections''. For example, when transforming a plane tree $\tau$, one could decide to add the horizontal connections but to keep only the extreme-most vertical edges emanating upwards from each branch point, see Figure \ref{kri1}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=10cm]{krikun.pdf}
\caption{ \small{\label{kri1} The causal carpet is obtained from the causal map by deleting all but the extreme-most vertical edges emanating upwards from each vertex.
}}
\end{center}
\vspace{-.75em}
\end{figure}
We call this graph the \textbf{causal carpet} associated to the tree. The geometry of the $\alpha$-stable causal carpet is very different from the maps studied in this work, since the faces of this map may now have very large degree. In spite of this, the block-renormalisation methodology developed in Section \ref{sec:halfplane} carries through to this model, and analogues of Theorem \ref{thm:geometrystable}, as well as of the resistance exponent bound
\[
\mathfrak{r} \leq \frac{2-\alpha}{\alpha-1}
\]
hold true.
Alas, this resistance bound becomes trivial (it equals $1$) precisely at the most interesting value $\alpha=3/2$, for which a graph closely related to the causal carpet can be realized as a subgraph of the UIPT via Krikun's skeleton decomposition or via the recent construction of \cite{curien2018skeleton}.
Indeed, it remains open to prove any sublinear resistance upper bound for the UIPT. Such a bound would (morally) follow from the $\alpha=3/2$ case of the following conjecture.
\begin{conjecture}
\label{conj:carpets}
Let $\mu$ be critical and satisfy \eqref{eq:defstable} for some $\alpha\in (1,2)$. Then the resistance growth exponent $\mathfrak{r}$ of the associated causal carpet exists and satisfies $0<\mathfrak{r}<1$ almost surely. In particular, the causal carpet is recurrent almost surely.
\end{conjecture}
Despite the sub-optimality of our spectral results in this context, the \emph{geometric} results obtained by our methods are sharp. The applications of our methodology to uniform random planar triangulations will be explored further in
a future work. \medskip
Finally, we remark that a model essentially equivalent to the uniform CDT arises as a certain $\gamma \downarrow 0$ limit of Liouville Quantum Gravity (LQG) in the mating-of-trees framework \cite{wedges,ghs-dist-exponent}. (More specifically, it arises as the $\gamma \downarrow 0$ limit of the mated-Galton-Watson-tree model of LQG, in which the correlation of the encoding random walks tends to $-1$). Thus, further study of the uniform CDT may prove useful for understanding LQG in the small $\gamma$ regime, which has recently been of great interest following Ding and Goswami's refutation of the Watabiki formula \cite{ding2016upper}.\medskip
\textbf{Added in proof.} Gwynne and Miller \cite{gwynne2017random} recently proved sub-polynomial resistance estimates for the UIPT. Conjecture \ref{conj:carpets} and the analogous question for the skeleton of the UIPT remain open, however.
\linespread{1}
\bibliographystyle{siam}
\section{Introduction}\label{intro}
\vspace{-2mm}
In different areas of machine learning and signal \mbox{processing}, low-rank models have turned out to be a powerful tool for the acquisition, storage and computation of \mbox{information}.
In many of these applications, an important sub-problem is to infer the low-rank model from partial or incomplete data \cite{Davenport16,ChiLuChen19}.
This problem is called \emph{low-rank matrix completion}: Given a rank-$r$ matrix $\f{X}^0 \in \Rdd$ and an index set $\Omega \subset [d_1] \times [d_2]$, the task is to reconstruct $\f{X}^0$ just from the knowledge of $\Omega$ and $P_{\Omega}(\f{X}^0)$, where $P_{\Omega}: \Rdd \to \mathbb{R}^m$ is the subsampling operator that maps a matrix to the entries indexed by $\Omega$. It is well-known that this can be reformulated as the NP-hard \emph{rank minimization} problem \cite{Recht10}
\vspace{-2mm}
\begin{equation}\label{rank_equation}
\min_{\f{X} \in \R^{d_1 \times d_2}} \rank(\f{X}) \quad \mbox{ subject to } P_{\Omega}(\f{X}) = P_{\Omega}(\f{X}^0).
\end{equation}
From an optimization point of view, \cref{rank_equation} is particularly difficult to handle due to two properties: its \emph{non-convexity} and its \emph{non-smoothness}. A widely studied approach in the literature is to replace $\rank(\f{X})$ by the convex nuclear norm $\|\f{X}\|_* = \sum_{i=1}^d \sigma_i(\f{X})$ \cite{FHB03}. However, such a convex relaxation approach has two main drawbacks: it is \emph{computationally demanding} for large problems, since it is equivalent to a semidefinite program \cite{Recht10}, and it is not \emph{data efficient}, since the nuclear norm minimizer under the affine constraint might not coincide with the minimizer $\f{X}^0$ of \cref{rank_equation} if the number of samples $m$ is just slightly larger than the number of degrees of freedom of $\f{X}^0$ \cite{Amelunxen14}.
To overcome these drawbacks, a variety of alternative approaches have been proposed and studied, many of which optimize an empirical loss defined for a matrix factorization model, or which optimize the empirical loss using Riemannian manifold structures, see \cite{ChiLuChen19} for a recent survey. These approaches are much more scalable, and furthermore, are often able to reconstruct the low-rank matrix from fewer samples than a convex formulation.
However, a closer inspection of the theoretical guarantees of these algorithms suggests that the performance of those algorithms deteriorates as the \emph{condition number} $\kappa$ of $\f{X}^0$ increases. For example, if $D=\max(d_1,d_2)$, the sufficient condition on the required random samples for the Riemannian gradient descent algorithm \cite{wei_cai_chan_leung} is $m=\Omega(\kappa^6 D r^2 \log D)$, and a polynomial dependence on $\kappa$ can be found for results on gradient descent for matrix factorization \cite{ChiLuChen19}.\footnote{An exception is the result of \cite{hardt_wotters}, whose sample complexity exhibits a \emph{logarithmic} dependence on $\kappa$. On the other hand, its dependence on the rank $r$ is a high-order polynomial.}
Retrieving ill-conditioned matrices from partial information is a problem that arises in many important areas, such as the discretization of PDE-based inverse problems with Fredholm equations \cite{cloninger_czaja_bai_basser} or spectral estimation problems, where Hankel matrices with condition numbers of $\approx 10^{15}$ may appear \cite{liao_fannjiang_music16}. Not only in theory, but also in practice, non-convex approaches often struggle to recover such ill-conditioned matrices. To overcome this, this paper proposes \texttt{MatrixIRLS}, an efficient second-order least squares approach that aims to solve \emph{ill-conditioned} matrix completion problems that are \emph{statistically hard}, i.e., problems in which just very few entries are known. In the following, let $d=\min(d_1,d_2)$.
\vspace{-4mm}
\section{MatrixIRLS for log-det rank surrogate}\label{surrogate}
\vspace{-2mm}
We propose a method that is based on the minimization of quadratic models of smoothed log-det objectives to obtain a scalable, but unbiased method for rank minimization \eqref{rank_equation}. It has already been observed in several papers \cite{Fazel02,Candes13} that optimizing the smoothed log-det objective $\sum_{i=1}^d \log\big(\sigma_i(\f{X})+\epsilon\big)$ for some $\epsilon > 0$ can lead to minimum rank solutions. In particular, it can be shown that the minimizer of the smoothed log-det objective coincides at least as often with the rank minimizer as the convex nuclear norm minimizer \cite{Foucart18}.
However, finding the global minimizer of a non-convex and non-smooth rank surrogate can be very challenging, as the abundance of sub-optimal local minima might thwart many local optimization approaches. Furthermore, applications such as recommender systems \cite{koren_bell_volinsky} require solving very high-dimensional problem instances, for which it is impossible to store full matrices, let alone to calculate many singular values of these matrices.
Let now $\epsilon > 0$ and $F_{\epsilon}:\Rdd \to \R$ be the \emph{smoothed log-det objective} defined as
$F_{\epsilon}(\f{X}):= \sum_{i=1}^d f_{\epsilon}(\sigma_i(\f{X}))$
with
\vspace*{-2mm}
\begin{equation} \label{eq:smoothing:Fpeps}
f_{\epsilon}(\sigma) =
\begin{cases}
\log|\sigma|, & \text{ if } \sigma \geq \epsilon, \\
\log(\epsilon) + \frac{1}{2}\Big( \frac{\sigma^2}{\epsilon^2}-1\Big), & \text{ if } \sigma < \epsilon.
\end{cases}
\vspace*{-1mm}
\end{equation}
It can be shown that $F_{\epsilon}$ is continuously differentiable with $\epsilon^{-2}$-Lipschitz gradient \cite{AnderssonCarlssonPerfekt16}. It is clear that the optimization landscape of $F_{\epsilon}$ crucially depends on the smoothing parameter $\epsilon$.
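Indeed, at the level of the scalar function, an elementary computation gives the derivative of \cref{eq:smoothing:Fpeps} as
\begin{equation*}
f_{\epsilon}'(\sigma) =
\begin{cases}
1/\sigma, & \text{ if } \sigma \geq \epsilon, \\
\sigma/\epsilon^2, & \text{ if } \sigma < \epsilon,
\end{cases}
\end{equation*}
which is continuous at $\sigma = \epsilon$, where both branches equal $1/\epsilon$, and whose slope is bounded in absolute value by $\epsilon^{-2}$ on both branches.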
We now propose an iterative algorithm that minimizes quadratic upper bounds of $F_{\epsilon}$ under the data constraint to provide a descent in $F_{\epsilon}$ before updating the smoothing parameter $\epsilon$. It can be interpreted within the framework of \emph{Iteratively Reweighted Least Squares (IRLS)} \cite{Weiszfeld37,Fornasier11,Mohan10,KS18}, as the main step of each iteration can be regarded as solving a weighted least squares problem.
The precise shape of the quadratic model can be described by a \emph{weight operator} $W\hk$, which we define as follows: Let $\epsilon_k > 0$ and $\f{X}\hk \in \Rdd$ be a matrix with singular value decomposition $\f{X}\hk = \f{U}_k \dg(\sigma^{(k)}) \f{V}_k^{*}$, i.e., $\f{U}_k \in \R^{d_1 \times d_1}$ and $\f{V}_k \in \R^{d_2 \times d_2}$ are orthonormal matrices. Then we define the linear operator $W\hk: \Rdd \to \Rdd$ such that
\begin{equation} \label{eq:def:W}
W^{(k)}(\f{Z}) = \f{U}_k \left[\f{H}_k \circ (\f{U}_k^{*} \f{Z} \f{V}_k)\right] \f{V}_k^{*},
\end{equation}
where $\f{H}_k \circ (\f{U}_k^{*} \f{Z} \f{V}_k)$ is the entrywise product of $\f{H}_k$ and $\f{U}_k^{*} \f{Z} \f{V}_k$, and $\f{H}_k \in \R^{d_1 \times d_2}$ is a matrix with positive entries such that
$(\f{H}_k)_{ij} := \left(\max(\sigma_i^{(k)},\epsilon_k) \max(\sigma_j^{(k)},\epsilon_k)\right)^{-1}$
for all $i \in [d_1]$ and $j \in [d_2]$. The weight operator $W^{(k)}$ is a positive, self-adjoint operator with strictly positive eigenvalues that coincide with the entries of the matrix $\f{H}_k \in \Rdd$. Using the definition of $W\hk$, we describe \texttt{MatrixIRLS} in \Cref{algo:MatrixIRLS}.
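To illustrate \cref{eq:def:W}, the following dense Python/NumPy sketch (our own illustration with hypothetical names, not the reference implementation; it takes $\sigma_i^{(k)} = 0$ for $i > \min(d_1,d_2)$ and ignores the low-rank structure exploited in the efficient implementation of Section 3) applies the weight operator to a matrix:
\begin{verbatim}
import numpy as np

def apply_weight_operator(Z, U, V, sigma, eps):
    # Dense sketch of W(Z) = U [ H * (U^T Z V) ] V^T.
    # U: (d1 x d1) and V: (d2 x d2) orthogonal matrices from the
    # SVD of X^(k); sigma: its singular values; eps: smoothing.
    d1, d2 = Z.shape
    s1 = np.full(d1, eps)        # sigma_i = 0 for i > min(d1, d2)
    s1[:sigma.size] = np.maximum(sigma, eps)
    s2 = np.full(d2, eps)
    s2[:sigma.size] = np.maximum(sigma, eps)
    H = 1.0 / np.outer(s1, s2)   # positive eigenvalues of W
    return U @ (H * (U.T @ Z @ V)) @ V.T
\end{verbatim}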
\begin{algorithm}[tb]
\caption{\texttt{MatrixIRLS} for low-rank matrix recovery} \label{algo:MatrixIRLS}
\begin{algorithmic}
\STATE{\bfseries Input:} Indices $\Omega$, observations $\f{y} \in \R^m$, rank estimate $\widetilde{r}$.
\STATE Initialize $k=0$, $\epsilon^{(0)}=\infty$ and $W^{(0)} = \Id$.
\FOR{$k=1$ to $K$}
\STATE \textbf{Solve weighted least squares:} Use a \emph{conjugate gradient method} to solve
\vspace*{-2mm}
\begin{equation} \label{eq:MatrixIRLS:Xdef}
\f{X}^{(k)} =\argmin\limits_{P_{\Omega}(\f{X})=\f{y}} \langle \f{X}, W^{(k-1)}(\f{X}) \rangle.
\end{equation}
\vspace*{-4mm}
\STATE \textbf{Update smoothing:} \label{eq:MatrixIRLS:bestapprox} Compute the $(\widetilde{r}+1)$-th singular value of $\f{X}\hk$ to update
\vspace*{-5mm}
\begin{equation} \label{eq:MatrixIRLS:epsdef}
\epsilon_k=\min\left(\epsilon_{k-1},\sigma_{\widetilde{r}+1}(\f{X}\hk)\right).
\end{equation}
\vspace*{-3mm}
\STATE \textbf{Update weight operator:} For $r_k := |\{i \in [d]: \sigma_i(\f{X}\hk) > \epsilon_k\}|$, compute the first $r_k$ singular values $\sigma_i\hk := \sigma_i(\f{X}\hk)$ and matrices $\f{U}\hk \in \R^{d_1 \times r_k}$ and $\f{V}\hk \in \R^{d_2 \times r_k}$ with the leading $r_k$ left/right singular vectors of $\f{X}\hk$ to update $W\hk$ defined in \Cref{eq:def:W}. \label{eqdef:Wk}
\vspace{-.3cm}
\ENDFOR
\STATE{\bfseries Output:} $\f{X}^{(K)}$.
\end{algorithmic}
\end{algorithm}
\vspace{-4mm}
\section{Computational Complexity}
\vspace{-2mm}
A crucial property of \Cref{algo:MatrixIRLS} is that, due to the choice of the smoothing function \cref{eq:smoothing:Fpeps}, the weight operator \cref{eq:def:W} and the smoothing update rule \cref{eq:MatrixIRLS:epsdef}, the action of $W\hk$ on a matrix $\f{Z}$ can be implemented by scalar multiplications and multiplications with rectangular $(d_1 \times r_k)$- and $(r_k \times d_2)$-matrices.
Utilizing this specific structure of the weight operators, we obtain an implementation of \texttt{MatrixIRLS} with a time and space complexity of the same order as for state-of-the-art first-order algorithms based on matrix factorization \cite{chen_chi18}. We refer to the supplementary materials for details and a proof.
\begin{theorem} \label{thm:MatrixIRLS:computationalcost:Xkk}
Let $\f{X}\hk \in \Rdd$ be the $k$-th iterate of \texttt{MatrixIRLS} for an observation vector $\f{y} \in \R^m$ and $\widetilde{r}=r$. Assume that $\sigma_i\hk \leq \epsilon_k$ for all $i > r$ and $\sigma_r\hk > \epsilon_k$. Then an implicit representation of the new iterate $\f{X}\hkk \in \R^{d_1 \times d_2}$ can be calculated in a \emph{time complexity} of
\[
O \left( (m r + r^2 D) \cdot N_{\text{CG\_inner}} \right),
\]
where $N_{\text{CG\_inner}}$ is the number of inner iterations used in the conjugate gradient method and $D=\max(d_1,d_2)$. More precisely, $\f{X}\hkk$ can be represented as
\[
\f{X}\hkk = P_{\Omega}^*(\f{r}_{k+1}) +\f{U}\hk \f{M}_{1}^{(k+1)*} + \f{M}_2^{(k+1)} \f{V}^{(k)*},
\]
where $\f{r}_{k+1} \in \R^m$, $\f{M}_{1}^{(k+1)} \in \R^{d_2 \times r}$ and $\f{M}_2^{(k+1)} \in \R^{d_1 \times r}$,
i.e., with a \emph{space complexity} of $O ( m+ r D)$.
\end{theorem}
\vspace{-2mm}
\Cref{thm:MatrixIRLS:computationalcost:Xkk} illustrates the computational advantage of \texttt{MatrixIRLS} compared to previous iteratively reweighted least squares algorithms for low-rank matrix recovery problems \cite{Fornasier11,Mohan10,KS18}, which all require the storage and updates of full $(d_1 \times d_2)$-matrices and the calculation of singular value decompositions of these. We comment on why a constant number of inner iterations $N_{\text{CG\_inner}}$ typically suffices in the supplementary materials.
As $P_{\Omega}^*(\f{r}_{k+1}) \in \Rdd$ is $m$-sparse, $\f{X}\hkk$ is the sum of a sparse matrix and two rank-$r$ matrices, and thus fast matrix-vector multiplications can be used in methods such as Lanczos bidiagonalization or randomized block Krylov iterations \cite{MuscoMusco15} to compute $r_{k+1}$ singular values and vectors of $\f{X}\hkk$ in step 3 of \Cref{algo:MatrixIRLS}.
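As an illustration (a sketch of ours with hypothetical variable names, not the reference implementation), such a fast matrix-vector product can be realized as follows, with the $m$ sampled index pairs stored in integer arrays \texttt{rows}, \texttt{cols} and the entries of $\f{r}_{k+1}$ in \texttt{rvals}:
\begin{verbatim}
import numpy as np

def matvec(x, rows, cols, rvals, U, M1, M2, V):
    # y = X^(k+1) x for the implicit representation
    # X^(k+1) = P_Omega^*(r_{k+1}) + U M1^T + M2 V^T,
    # at cost O(m + r (d1 + d2)) instead of O(d1 d2).
    y = np.zeros(U.shape[0])
    np.add.at(y, rows, rvals * x[cols])  # m-sparse part
    y += U @ (M1.T @ x)                  # rank-r part U M1^T x
    y += M2 @ (V.T @ x)                  # rank-r part M2 V^T x
    return y
\end{verbatim}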
\vspace*{-4mm}
\section{Theoretical Analysis} \label{sec:convergence}
\vspace{-2mm}
As it is beyond the scope of this format, we leave a detailed convergence analysis of \texttt{MatrixIRLS} to an upcoming paper. By establishing a global majorization property of the quadratic model function implicitly defined by the weight operator $W\hk$, it is possible to show that accumulation points of $(\f{X}\hk)_{k \geq 1}$ are stationary points of the $\overline{\epsilon}$-smoothed log-det objective $F_{\overline{\epsilon}}$ for $\overline{\epsilon}:=\lim_{k\to \infty} \epsilon_k$. Furthermore, we can establish local convergence \emph{with quadratic rate} of $\texttt{MatrixIRLS}$ to an incoherent rank-$r$ ground truth $\f{X}^0 \in \Rdd$ if a random sampling set $\Omega$ of size $\Omega(r(d_1 + d_2) \log(d_1 + d_2))$ is given, with high probability. In this result, the bound on the sample complexity does \emph{not} depend on the condition number of $\f{X}^0$. Such a result is new for matrix completion, as the results of \cite{KS18} studying a similar algorithm only cover measurement operators fulfilling a \emph{null space property}.
\vspace*{-2.5mm}
\subsection{MatrixIRLS as saddle-escaping smoothing Newton method} \label{sec:Newton:interpretation}
From a theoretical point of view, the local quadratic convergence rate is an inherently local property that does not explain the numerically observed global convergence, which is remarkable due to the non-convexity of the objective.
A possible avenue to explain this is to interpret \texttt{MatrixIRLS} as a \emph{saddle-escaping smoothing Newton method}. Smoothing Newton methods minimize a non-smooth and possibly non-convex function $F$ by using derivatives of certain smoothings of $F$ \cite{ChenQiSun98,Chen2012smoothing}. Interpreting the optimization problem
$\min_{\f{X}:P_{\Omega}(X)=\f{y}} F_{\epsilon_k}(\f{X})$
as an unconstrained optimization problem over the null space of $P_{\Omega}$, we can write
\vspace{-.14cm}
\begin{align*}
\f{X}^{(k+1)} &= \f{X}^{(k)} - P_{\Omega^c}^* \left(P_{\Omega^c} W\hk P_{\Omega^c}^*\right)^{-1} P_{\Omega^c} W\hk (\f{X}\hk) \\
&= \f{X}^{(k)} - P_{\Omega^c}^* \left(P_{\Omega^c} \overline{\nabla^2 F_{\epsilon_k}(\f{X}\hk)} P_{\Omega^c}^*\right)^{-1} P_{\Omega^c} \nabla F_{\epsilon_k}(\f{X}\hk),
\end{align*}
if $\Omega^c = [d_1] \times [d_2] \setminus \Omega$ corresponds to the unobserved indices, where $\overline{\nabla^2 F_{\epsilon_k}(\f{X}\hk)}: \Rdd \to \Rdd$ is a \emph{modified} Hessian of $F_{\epsilon_k}$ at $\f{X}\hk$ that replaces negative eigenvalues of the Hessian $\nabla^2 F_{\epsilon_k}(\f{X}\hk)$ by positive ones and slightly increases small eigenvalues. We refer to the supplementary material for more details. In \cite{PaternainMokhtariRibeiro19}, it has been proved that for a fixed smooth function $F_{\epsilon_k}$, similar modified Newton-type steps are able to escape the first-order saddle points at a rate that is independent of the problem's condition number.
\vspace*{-3.5mm}
\section{Numerical Experiments} \label{sec:numerics}
We explore the performance of \texttt{MatrixIRLS} for the completion of synthetic low-rank matrices in terms of data efficiency and computational efficiency in comparison to state-of-the-art algorithms in the literature. Algorithmic parameters are detailed in the supplementary materials. We consider the following setup: We sample a pair of random matrices $\f{U} \in \R^{d_1 \times r}$ and $\f{V} \in \R^{d_2 \times r}$ with $r$ orthonormal columns, and define the diagonal matrix $\Sigma \in \R^{r \times r}$ such that $\Sigma_{ii} = \kappa \exp(-\log(\kappa)\frac{i-1}{r-1})$ for $i \in [r]$. With this definition, we define a ground truth matrix $\f{X}^0 =\f{U}\Sigma \f{V}^*$ of rank $r$ that has exponentially decaying singular values between $\kappa$ and $1$. Furthermore, for a given factor $\rho \geq 1$, we sample a set $\Omega \subset [d_1] \times [d_2]$ of size $m = \lfloor \rho r (d_1 +d_2 - r) \rfloor$ indices randomly without replacement, implying that $\rho$ can be interpreted as an \emph{oversampling factor}, as $r (d_1 +d_2 - r)$ equals the number of degrees of freedom of a $(d_1 \times d_2)$-dimensional rank-$r$ matrix.\footnote{The experiments can be reproduced with the code provided in \url{https://github.com/ckuemmerle/MatrixIRLS}.}
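For concreteness, a test instance of this kind can be generated along the following lines (our own NumPy sketch, independent of the linked repository; variable names and the seed are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r, kappa, rho = 1000, 1000, 5, 10.0, 1.5
U, _ = np.linalg.qr(rng.standard_normal((d1, r)))
V, _ = np.linalg.qr(rng.standard_normal((d2, r)))
# singular values decaying exponentially from kappa down to 1
sigma = kappa * np.exp(-np.log(kappa) * np.arange(r) / (r - 1))
X0 = (U * sigma) @ V.T                 # ground truth of rank r
m = int(rho * r * (d1 + d2 - r))       # floor of rho * (deg. of freedom)
flat = rng.choice(d1 * d2, size=m, replace=False)
rows, cols = np.unravel_index(flat, (d1, d2))
y = X0[rows, cols]                     # observed entries P_Omega(X0)
\end{verbatim}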
\vspace*{-2.5mm}
\subsection{Data-efficient recovery of ill-conditioned matrices}
First, we run \texttt{MatrixIRLS} and the algorithms \texttt{R2RILS} \cite{BauchNadler20}, \texttt{RTRMC} \cite{boumal_absil_15}, \texttt{LRGeomCG} \cite{Vandereycken13}, \texttt{LMaFit} \cite{Wen12}, \texttt{ScaledASD} \cite{TannerWei16}, \texttt{ScaledGD} \cite{tong_ma_chi} and \texttt{NIHT} \cite{TannerWei13} to complete $\f{X}^0$ from $P_{\Omega}(\f{X}^0)$, where $\Omega$ corresponds to different oversampling factors $\rho$ between $1$ and $4$, and where the condition number of $\f{X}^0$ is $\kappa= \sigma_1(\f{X}^0)/\sigma_r(\f{X}^0) = 10$. In \Cref{fig:sampcomp:1}, we report the median Frobenius errors $\|\f{X}^{(K)}-\f{X}^0\|_F/\|\f{X}^0\|_F$ of the respective algorithmic outputs $\f{X}^{(K)}$ across $100$ independent realizations.
\begin{figure}[h!]
\setlength\figureheight{35mm}
\setlength\figurewidth{66mm}
\input{MCsamplecomp_d1_1000_d2_1000_r_5_kappa_10_2020-06-10_10-10-24_rho1.05-4_rho1.05-4.tex}
\vspace*{-8mm}
\caption{Comparison of matrix completion algorithms for $1000 \times 1000$ matrices of rank $r=5$ with condition number $\kappa=10$, given $m= \lfloor \rho r (d_1 + d_2-r)\rfloor$ random samples. Median of Frobenius errors $\|\f{X}^{(K)}-\f{X}^0\|_F/\|\f{X}^0\|_F$ of $100$ independent realizations.}
\label{fig:sampcomp:1}
\end{figure}
\vspace{-.3cm}
We see that \texttt{MatrixIRLS} and \texttt{R2RILS} are the only algorithms that are able to complete $\f{X}^0$ already for $\rho=1.5$, whereas the other algorithms, except for \texttt{NIHT}, are able to reconstruct the matrix most of the time only once $\rho$ is at least between $2.4$ and $3.0$. This confirms the findings of \cite{BauchNadler20} that even for quite well-conditioned matrices, fewer samples are required if second-order methods such as \texttt{R2RILS} or \texttt{MatrixIRLS} are used.
We repeat this experiment for ill-conditioned matrices $\f{X}^0$ with $\kappa = 10^5$. In \Cref{fig:sampcomp:2}, we see that current state-of-the-art methods are \emph{not able} to achieve exact recovery of $\f{X}^0$. Indeed, given the exponential decay of the singular values, recovering the subspace corresponding to the smallest singular value of $\f{X}^0$ requires a relative Frobenius error of $10^{-5}$ or even several orders of magnitude smaller. We observe that \texttt{MatrixIRLS} is the only method that is able to complete $\f{X}^0$ for any of the considered oversampling factors.
\vspace*{-0.5cm}
\begin{figure}[ht]
\setlength\figureheight{25mm}
\setlength\figurewidth{66mm}
\input{MCsamplecomp_d1_1000_d2_1000_r_5_kappa_100000_2020-06-11_20-00-39_rho1.05-4.tex}
\vspace*{-6mm}
\caption{Comparison of matrix completion algorithms as in \Cref{fig:sampcomp:1}, but with $\kappa = 10^5$. Median of $50$ realizations.}
\label{fig:sampcomp:2}
\end{figure}
\vspace{-.5cm}
\subsection{Running time for ill-conditioned problems}
In Figure \ref{running_time_ill_cond}, for an oversampling rate of $\rho = 4$, we illustrate the completion of a single highly ill-conditioned matrix. We can again see that only second-order methods such as \texttt{R2RILS} and \texttt{MatrixIRLS} achieve a relative Frobenius error of $\approx 10^{-5}$ or smaller, but \texttt{MatrixIRLS} does so considerably \emph{faster}. Moreover, it retrieves all singular values with high precision, attaining a very low relative error.
\begin{figure}[ht]
\setlength\figureheight{25mm}
\setlength\figurewidth{66mm}
\input{relativeerror_vs_time.tex}
\vspace*{-5mm}
\caption{Completion task for a highly ill-conditioned $1000 \times 1000$ matrix of rank $r=10$ with $\kappa=10^5$ ($\rho= 4$).}
\label{running_time_ill_cond}
\end{figure}
In Figure \ref{running_time_MatrixIRLS_vs_R2RILS}, we compare the execution times of \texttt{R2RILS} and \texttt{MatrixIRLS} as the dimension increases, for an oversampling rate of $\rho = 2.5$ and singular values linearly interpolated between $\kappa$ and $1$. We observe that the discrepancy in running time between the two algorithms grows with the dimension, and is accentuated for even higher condition numbers.
\begin{figure}[h!]
\setlength\figureheight{25mm}
\setlength\figurewidth{66mm}
\input{executiontime_vs_dimension.tex}
\vspace*{-5mm}
\caption{Comparison of \texttt{R2RILS} and \texttt{MatrixIRLS} for the completion of rank $r \in \{5,10\}$ matrices of size $d \times (d+100)$ and condition number $\kappa=10^2$ in terms of their execution time. Every point represents the average over 50 experiments. The other algorithms are not shown in this experiment because they typically do not reach a relative error below $10^{-4}$ for $\kappa \geq 10^2$.}
\label{running_time_MatrixIRLS_vs_R2RILS}
\end{figure}
\vspace{-.8cm}
\section{Conclusion}\label{conclusion}
\vspace{-.2cm}
We formulated \texttt{MatrixIRLS}, a second-order method that is able to efficiently complete large, highly ill-conditioned matrices from few samples, a problem for which most state-of-the-art methods fail. It improves on previous approaches for the optimization of non-convex rank objectives by applying a suitable smoothing strategy combined with saddle-escaping Newton-type steps.
As one goal of our investigation has also been to provide an efficient implementation, we focused on the matrix completion problem, leaving the extension of the ideas to other low-rank matrix estimation problems, including the case of inexact data or measurement errors, to future work. Furthermore, while we establish a local convergence guarantee for the algorithm, a precise analysis of its global convergence behavior remains of interest.
\section*{Acknowledgements}
The authors thank Cong Ma for providing their code for the algorithm ScaledGD. The authors gratefully acknowledge the Leibniz Supercomputing Centre (LRZ) for providing computing time on their Linux cluster. CMV gratefully acknowledges support by German Science Foundation (DFG) within the Gottfried Wilhelm Leibniz Prize under Grant BO 1734/20-1, under grant KR 4512/1-1, under contract number PO-1347/3-2 and within Germany's Excellence Strategy EXC-2111 390814868.
\section{Introduction}
Finding an accurate (semi-)analytical description of the infrared sector
of QCD is still one of the most important challenges of present-day
quantum field theory. In this work we concentrate on Yang-Mills theory,
QCD without dynamical quarks, since it is in this sector
where the peculiar properties of QCD, in particular the confining
interaction between quarks, arise. Recently, much of the activity in this
area has focused on the formulation and (approximate) solution
of Yang-Mills theory in the Coulomb
gauge \cite{SS01}--\cite{PR09}, the primary reason being that the Coulomb
gauge Hamiltonian explicitly contains the color-Coulomb potential which
furnishes the dominant nonperturbative contribution to the static or heavy
quark potential.
Semi-analytical functional approaches to the calculation of gluon and ghost
propagators in the infrared, mostly using Dyson-Schwinger equations, have
been successful in Landau gauge Yang-Mills theory \cite{SHA97}. In the
so-called ghost dominance approximation, even very simple analytical
solutions exist in the far infrared \cite{SLR06,LSZ02}. Although the
consistency of these solutions is still under discussion, it is natural to
inquire whether a similar approach could be useful in the Coulomb gauge.
However, the breaking of Lorentz covariance through the Coulomb gauge
condition makes the usual Lagrangian functional integral approach quite
cumbersome in this gauge, see, e.g., Ref.\ \cite{WR07a}. For this reason,
semi-analytical approaches in Coulomb gauge have mostly used a Hamiltonian
formulation. A set of equations similar to Dyson-Schwinger equations is
obtained from a variational principle using a Gaussian type of ansatz for
the vacuum wave functional in the Schr\"odinger representation
\cite{SS01,FRE04}. In the ghost dominance approximation, furthermore, simple
analytical solutions are available for the far infrared \cite{Zwa04,SLR06}.
However, the status of the semi-analytical and analytical solutions in the
Coulomb gauge is not yet entirely clear, for two reasons: first, two different
solutions with an infrared scaling behavior
(differing in the infrared exponents) have been found in both the
analytical and the semi-analytical approaches \cite{Zwa04}--\cite{SLR06}, and
there is as yet no theoretical guidance to what the physical solution should
be; second, the inclusion of the Coulomb form factor (the form factor for the
color-Coulomb potential, which measures the deviation of the Coulomb potential
from a factorization in terms of ghost propagators) in the set of equations
of Dyson-Schwinger type turns out to be problematic. In Ref.\ \cite{FRE04},
the equation for the Coulomb form factor has been considered subleading
compared to the equations for the gluon and ghost propagators and
therefore treated in the tree-level approximation, while in Ref.\ \cite{ERS08}
all equations have been considered to be of the same order and therefore
treated on an equal footing,
with the result that solutions with infrared scaling behavior cease to exist.
It should be emphasized that only solutions with scaling behavior can
give rise to a linearly rising Coulomb potential, and that the latest lattice
calculations also show a scaling behavior for the equal-time correlation
functions in the deep infrared \cite{BQR09}. It is not clear at present
how to improve the approximation used in the variational approach in order
to arrive at a unique and consistent solution.
On the other hand, an interesting relation between Landau and Coulomb gauge
Yang-Mills theory has been pointed out in the ghost dominance approximation
in Refs.\ \cite{Zwa04,SLR06}: the equal-time correlation functions
of the Hamiltonian approach in
Coulomb gauge are the formal counterparts in three dimensions of the
covariant correlation functions in Landau gauge in four dimensions.
Building on this analogy, a possible strategy seems to be to replace the
variational principle by a calculation of equal-time correlation functions
in the Coulomb gauge and trying to formulate Dyson-Schwinger equations for
the latter. In the present work, we take a first step in this direction:
we set up a functional integral representation of the equal-time correlation
functions (without taking a detour via the space-time correlation functions)
that is the precise three-dimensional analogue of the usual
functional integral representation of the covariant correlation functions
in the Lagrangian approach to Landau gauge Yang-Mills theory. We
also develop a diagrammatic representation and a set of Feynman rules for
the equal-time correlation functions. We use this new formulation
here in order to calculate the equal-time gluon and ghost
two-point correlation functions and the potential for static color charges
in Coulomb gauge perturbatively to one-loop order. We extract the one-loop
beta function and determine the asymptotic ultraviolet behavior of the
equal-time two-point functions and the static potential. We also show that
our results coincide with those obtained in a Lagrangian functional integral
approach \cite{WR07b,WR08} and use the latter for the renormalization of the
equal-time correlation functions and the static potential.
The organization of the paper is as follows: in the next section,
we determine the vacuum wave functional perturbatively to order $g^2$ from
the solution of the Schr\"odinger equation. With the vacuum functional
determined to the corresponding order, we turn to the calculation of
the equal-time gluon and ghost two-point correlation functions in Section 3.
We also calculate the one-loop corrections to the static or heavy quark
potential (and thus to the Coulomb form factor) in the same section.
Although for the latter calculation we need to go beyond the terms
that we have calculated for the vacuum functional in Section 2,
the relevant additional contributions are quite simply determined. In
Section 4, we provide another representation of the equal-time
two-point functions by choosing equal times (zero) in the
space-time correlation functions determined before in
the Lagrangian functional integral representation \cite{WR07b,WR08} of the
theory. The static potential can also be obtained from a two-point function
that arises in the Lagrangian approach. We use the alternative representations
of the two-point functions and the static potential to perform the
renormalization of our results. We show the nonrenormalization of the
ghost-gluon vertex in the same section and use it to determine the beta
function and the asymptotic ultraviolet behavior of the two-point functions.
We also show that the same beta function is found from considering the
static potential. Finally, in Section 5, we briefly summarize our findings.
We give some details on an important difference that arises between
the Lagrangian and the Hamiltonian approach when it comes to the
implementation of the Coulomb gauge in the Appendix.
\section{Perturbative vacuum functional}
It is very simple to write down a functional integral representation of the
equal-time correlation functions, given that they are nothing but the
vacuum expectation values of products of the field operators. In
the Schr\"odinger representation of Yang-Mills theory in Coulomb gauge,
the equal-time $n$-point correlation functions in (3-)momentum space
have the following representation:
\bmu
\langle A_i^a (\bf{p}_1, t=0) A_j^b (\bf{p}_2, t=0) \cdots
A_r^f (\bf{p}_n, t=0) \rangle \\
= \int D [\bf{A} ] \, \delta( \nabla \cdot \bf{A} ) \, \textsf{FP} (\bf{A})
\, A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \cdots A_r^f (\bf{p}_n) \,
\abs{\psi (\bf{A})}^2 \:. \label{corrfunc}
\emu
Here, $\psi (\bf{A})$ is the true vacuum wave functional of the
theory. The (absolute) square $\abs{\psi (\bf{A})}^2$ then plays the r\^ole of
the exponential of the negative Euclidean classical action
in the corresponding representation of the covariant correlation
functions (in Euclidean space). $\textsf{FP} (\bf{A}) \equiv \text{det}
[-\nabla \cdot \bf{D} (\bf{A})]$, with the covariant derivative in
the adjoint representation defined as
\be
\bf{D}^{ab} (\bf{A}) = \delta^{ab} \, \nabla + g f^{abc} \bf{A}^c \:,
\ee
is the Faddeev-Popov determinant (in
3 dimensions) which forms a part of the integration measure for the scalar
product of states in the Schr\"odinger representation (see Ref.\
\cite{CL80}). Note that the fields $A_i^a (\bf{p})$ on the left-hand side
of Eq.\ \eqref{corrfunc} are spatially transverse, $\bf{p} \cdot \bf{A}^a
(\bf{p}) = 0$. We will assume the transversality of the fields
$\bf{A}^a$ in all of the following formulae, which
we could make manifest by introducing a transverse basis in momentum
space. However, there is usually no need to do so explicitly.
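(Concretely, such a basis consists of two polarization vectors $\bf{e}^{\lambda} (\bf{p})$, $\lambda = 1, 2$, satisfying $\bf{p} \cdot \bf{e}^{\lambda} (\bf{p}) = 0$ and the completeness relation $\sum_\lambda e^{\lambda}_i (\bf{p}) \, e^{\lambda}_j (\bf{p}) = \delta_{ij} - \hat{p}_i \hat{p}_j$, in terms of which one would expand $A_i^a (\bf{p}) = \sum_\lambda e^{\lambda}_i (\bf{p}) \, a^{a \lambda} (\bf{p})$.)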
In order to write down the functional integral for the equal-time
correlation functions explicitly, the vacuum wave functional
needs to be specified. The analogy with the covariant theory
suggests to make an exponential ansatz for this wave functional,
in the spirit of the $e^S$ expansion in many-body physics \cite{KLZ78}.
We consider a full Volterra expansion of the exponent:
\bmu
\psi (\bf{A}) = \exp \bigg( - \sum_{k=2}^\infty \frac{1}{k!}
\int \frac{d^3 p_1}{(2 \pi)^3} \cdots \frac{d^3 p_k}{(2 \pi)^3} \,
\sum_{i_1, i_2, \ldots, i_k} \, \sum_{a_1, a_2, \ldots, a_k}
f_{k; i_1 i_2 \ldots i_k}^{a_1 a_2 \ldots a_k}
(-\bf{p}_1, \ldots, -\bf{p}_k) \\
\times A_{i_1}^{a_1} (\bf{p}_1) \cdots A_{i_k}^{a_k} (\bf{p}_k)
(2 \pi)^3 \delta (\bf{p}_1 + \ldots + \bf{p}_k) \bigg) \:. \label{vacans}
\emu
Any normalization factor can be conveniently absorbed in the
functional integration measure in Eq.\ \eqref{corrfunc}.
Terms linear in $\bf{A}$ in the exponent ($k=1$) are excluded by the symmetry
of the wave functional under global gauge transformations (in the
absence of external color charges). Regarding notation, given
that our Hamiltonian formalism is not manifestly covariant, we will denote
the contravariant spatial components of 4-vectors by (Latin) \emph{subscripts}.
We insert this ansatz for the vacuum wave functional into the Schr\"odinger
equation
\be
H \psi (\bf{A}) = E_0 \psi (\bf{A}) \:, \label{ETSE}
\ee
where $H$ is the
Christ-Lee Hamiltonian for Coulomb gauge Yang-Mills theory \cite{CL80},
\bal
H &= \frac{1}{2} \int d^3 x \left( - \frac{1}{\textsf{FP} (\bf{A})}
\frac{\delta}{\delta A_i^a (\bf{x})} \textsf{FP} (\bf{A})
\frac{\delta}{\delta A_i^a (\bf{x})} +
B_i^a (\bf{x}) B_i^a (\bf{x}) \right) \notag \\
&\phantom{=} {}+ \frac{g^2}{2} \int d^3 x \, d^3 y \,
\frac{1}{\textsf{FP} (\bf{A})} \, \rho^a (\bf{x}) \,
\textsf{FP} (\bf{A}) \, \langle \bf{x}, a | (-\nabla \cdot \bf{D})^{-1}
(-\nabla^2) (-\nabla \cdot \bf{D})^{-1} | \bf{y}, b \rangle \,
\rho^b (\bf{y}) \:. \label{christlee}
\eal
Here,
\be
B^a_i = -\frac{1}{2} \, \epsilon_{i j k} F^a_{j k} =
\left( \nabla \times \bf{A}^a -
\frac{g}{2} \, f^{abc} \bf{A}^b \times
\bf{A}^c \right)_i
\ee
is the chromo-magnetic field, and
\be
\rho^a (\bf{x}) = \rho^a_q (\bf{x}) + f^{abc} A_j^b (\bf{x}) \,
\frac{1}{i} \frac{\delta}{\delta A_j^c (\bf{x})} \label{colch}
\ee
the color charge density, including external static charges $\rho_q$
for later use. Note that we have extracted a factor $g$ from the
color charges in order to simplify the counting of orders of $g$ in the
rest of the paper. The notation $\langle \bf{x}, a | C | \bf{y}, b \rangle$
refers to the kernel of the operator $C$ in an integral representation.
In most of the following perturbative calculation, we will need the
Hamiltonian only up to order $g^2$, where
\bal
\lefteqn{H = \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3} \left( -(2 \pi)^3
\frac{\delta}{\delta A_i^a (\bf{p})} (2 \pi)^3
\frac{\delta}{\delta A_i^a (-\bf{p})} + A_i^a (-\bf{p}) \,
\bf{p}^2 A_i^a (\bf{p}) \right)} \hspace{6mm} \label{clA2} \\
&{}+ \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
A_i^a (-\bf{p}) \left( \frac{N_c g^2}{2} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}{(\bf{p} - \bf{q})^2} \right)
(2 \pi)^3 \frac{\delta}{\delta A_i^a (-\bf{p})} \label{cls1} \\
&{}+ \frac{g}{3!} \int \frac{d^3 p_1}{(2 \pi)^3}
\frac{d^3 p_2}{(2 \pi)^3} \frac{d^3 p_3}{(2 \pi)^3} \, i f^{abc} \left[
\delta_{jk} (p_{1,l} - p_{2,l}) + \delta_{kl} (p_{2,j} - p_{3,j})
+ \delta_{lj} (p_{3,k} - p_{1,k}) \right] \notag \\
&\hspace{3cm} {}\times A^a_j (\bf{p}_1) A^b_k (\bf{p}_2) A^c_l (\bf{p}_3)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \label{clA3} \\
&{}+ \frac{g^2}{4!} \int \frac{d^3 p_1}{(2 \pi)^3}
\cdots \frac{d^3 p_4}{(2 \pi)^3} \big[
f^{abe} f^{cde} (\delta_{ik} \delta_{jl} - \delta_{il} \delta_{jk}) +
f^{ace} f^{bde} (\delta_{ij} \delta_{kl} - \delta_{il} \delta_{jk}) \notag \\
&\phantom{+}
{}+ f^{ade} f^{bce} (\delta_{ij} \delta_{kl} - \delta_{ik} \delta_{jl})
\big] A^a_i (\bf{p}_1) A^b_j (\bf{p}_2) A^c_k (\bf{p}_3) A^d_l (\bf{p}_4)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3 + \bf{p}_4) \label{clA4} \\
&{}+ \frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
\rho^a (-\bf{p}) \frac{1}{\bf{p}^2} \, \rho^a (\bf{p}) + \cal{O} (g^3) \:.
\label{clcoul}
\eal
The term \eqref{cls1} stems from the
application of the functional derivative to the Faddeev-Popov determinant.
In this term, $N_c$ stands for the number of colors,
$f^{acd} f^{bcd} = N_c \delta^{ab}$,
and $\hat{\bf{p}} \equiv \bf{p}/|\bf{p}|$ denotes a unit vector.
In the \emph{absence} of external charges, we get for the term \eqref{clcoul}
\bal
\lefteqn{\frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
\rho^a (-\bf{p}) \frac{1}{\bf{p}^2} \, \rho^a (\bf{p})} \hspace{1cm} \notag \\
&= \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
A_i^a (-\bf{p}) \left( N_c g^2 \int \frac{d^3 q}{(2 \pi)^3}
\frac{t_{ij} (\bf{q})}{(\bf{p} - \bf{q})^2} \right)
(2 \pi)^3 \frac{\delta}{\delta A_j^a (-\bf{p})} \label{cls2} \\
&\phantom{=} {}- \frac{g^2}{4} \int \frac{d^3 p_1}{(2 \pi)^3}
\cdots \frac{d^3 p_4}{(2 \pi)^3} \left(
f^{ace} f^{bde} \, \frac{\delta_{ik} \delta_{jl}}{(\bf{p}_1 + \bf{p}_3)^2}
+ f^{ade} f^{bce} \, \frac{\delta_{il} \delta_{jk}}{(\bf{p}_1 + \bf{p}_4)^2}
\right) \notag \\
&\phantom{= {}-} {}\times (2 \pi)^3 \delta(\bf{p}_1 + \bf{p}_2 + \bf{p}_3
+ \bf{p}_4) \, A^a_i (\bf{p}_1) A^b_j (\bf{p}_2)
(2 \pi)^3 \frac{\delta}{\delta A_k^c (-\bf{p}_3)} (2 \pi)^3
\frac{\delta}{\delta A_l^d (-\bf{p}_4)} \:. \label{cls3}
\eal
In the term \eqref{cls2} on the right-hand side, $t_{ij} (\bf{q})$
denotes the spatially transverse projector or transverse Kronecker delta
\be
t_{ij} (\bf{q}) \equiv \delta_{ij} - \hat{q}_i \hat{q}_j \:.
\ee
We shall now show, explicitly up to order $g^2$, that there is a
unique perturbative solution of the Schr\"odinger equation
\eqref{ETSE} for the wave functional $\psi (\bf{A})$ in Eq.\
\eqref{vacans}, if we only suppose that the
dominant contribution to the coefficient function $f_k$ is at least of
order $g^{k-2}$ for $k \geq 2$. We will consider the case
\emph{without} external charges to begin with, and include charges
$\rho_q$ later on in the context of the static potential. To order $g^0$,
the Schr\"odinger equation reads
\be
(N_c^2 - 1) \left( \int \frac{d^3 p}{(2 \pi)^3} f_2 (\bf{p}) \right)
(2 \pi)^3 \delta (\bf{0}) + \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3}
A_i^a (-\bf{p}) \left[ \bf{p}^2 - \big( f_2 (\bf{p}) \big)^2
\right] A_i^a (\bf{p}) = E_0 \:, \label{g0eq}
\ee
where we have used that
\be
f_{2;ij}^{ab} (\bf{p}, -\bf{p}) = f_2 (\bf{p}) \delta_{ij} \delta^{ab}
= f_2 (-\bf{p}) \delta_{ij} \delta^{ab}
\ee
(to be contracted with spatially transverse fields) as a consequence of
the symmetry under the exchange of the arguments, of spatially rotational
and global gauge symmetry, and of the fact that $f_{2;ij}^{ab} (\bf{p}_1,
\bf{p}_2)$ is only defined for $\bf{p}_1 + \bf{p}_2 = 0$. Equation
\eqref{g0eq} implies that, to the current order,
\bga
f_2 (\bf{p}) = \abs{\bf{p}} \:, \label{g0solf2} \\
E_0 = (N_c^2 - 1) \left( \int \frac{d^3 p}{(2 \pi)^3} \abs{\bf{p}} \right)
(2 \pi)^3 \delta (\bf{0}) \:. \label{solE0}
\ega
Generally, the energy $E_0$ cancels any field-independent terms
multiplying the vacuum functional in the Schr\"odinger equation to any
order in $g$. Eqs.\ \eqref{g0solf2} and \eqref{solE0} represent
nothing but the well-known solution of the free
($g=0$) theory. The choice of the sign in Eq.\
\eqref{g0solf2} is dictated by the normalizability of the wave functional
\eqref{vacans} to order $g^0$. As usual, $(2 \pi)^3 \delta (\bf{0})$ is
to be understood as the total volume of space.
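For orientation, with Eq.\ \eqref{g0solf2} the vacuum functional \eqref{vacans} reduces to this order to the familiar Gaussian of the free theory,
\be
\psi (\bf{A}) = \exp \left( - \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
\abs{\bf{p}} \, A_i^a (-\bf{p}) A_i^a (\bf{p}) \right) ,
\ee
i.e., the ground state functional of $N_c^2 - 1$ copies of the free electromagnetic field, one for each color.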
To the next (first) order in $g$, the Schr\"odinger equation is not much
more complicated: it reads
\bmu
\frac{1}{3!} \int \frac{d^3 p_1}{(2 \pi)^3}
\frac{d^3 p_2}{(2 \pi)^3} \frac{d^3 p_3}{(2 \pi)^3} \Big\{ i g f^{abc}
\left[ \delta_{jk} (p_{1,l} - p_{2,l}) + \delta_{kl} (p_{2,j} - p_{3,j})
+ \delta_{lj} (p_{3,k} - p_{1,k}) \right] \\
{}- 3 \abs{\bf{p}_1} f_{3;jkl}^{abc} (-\bf{p}_1, -\bf{p}_2, -\bf{p}_3)
\Big\} A^a_j (\bf{p}_1) A^b_k (\bf{p}_2) A^c_l (\bf{p}_3)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) = 0 \:, \label{g1eq}
\emu
where we have already taken into account the results \eqref{g0solf2},
\eqref{solE0} and the fact that
\be
f_{3;ijk}^{abc} (\bf{p}_1, \bf{p}_2, \bf{p}_3) = f^{abc} f_{3;ijk}
(\bf{p}_1, \bf{p}_2, \bf{p}_3) \:,
\ee
which is a consequence of global gauge symmetry and the invariance of the
vacuum wave functional under charge conjugation. The unique solution of
Eq.\ \eqref{g1eq} with the full symmetry under the exchange of the
arguments is
\be
f_{3;ijk}^{abc} (\bf{p}_1, \bf{p}_2, \bf{p}_3) = -\frac{i g f^{abc}}
{\abs{\bf{p}_1} + \abs{\bf{p}_2} + \abs{\bf{p}_3}}
\left[ \delta_{ij} (p_{1,k} - p_{2,k}) + \delta_{jk} (p_{2,i} - p_{3,i})
+ \delta_{ki} (p_{3,j} - p_{1,j}) \right] \:. \label{solf3}
\ee
This equality, and all the following equalities with explicit spatial
(Lorentz) indices, are proper equalities only after contracting
with the corresponding number of transverse vector fields $\bf{A}$, or,
equivalently, after contracting every external spatial index with a transverse
projector, for example in Eq.\ \eqref{solf3} the index $i$ with
$t_{il} (\bf{p}_1)$.
Now we will move on to consider the Schr\"odinger equation to order
$g^2$. On the left-hand side, terms with four and two powers of $\bf{A}$
appear, which have to cancel separately, and an $\bf{A}$-independent term
which must equal $E_0$ to this order. We begin with the term with four
powers of $\bf{A}$. The quartic coupling \eqref{clA4} in the Hamiltonian
has to be cancelled by terms stemming from the second functional derivative
in Eq.\ \eqref{clA2} acting on the vacuum wave functional, and a contribution
from Eq.\ \eqref{cls3}. To order $g^2$, the coefficient functions
$f_2$ of Eq.\ \eqref{g0solf2} and $f_3$ of Eq.\ \eqref{solf3} contribute,
and the function $f_4$ which we will determine. As a result, the
coefficient function $f_4$ in the vacuum wave functional takes the following
(fully symmetric) form to order $g^2$:
\bal
\lefteqn{(|\bf{p}_1| + \ldots + |\bf{p}_4|) f_{4;ijkl}^{abcd}
(\bf{p}_1, \ldots, \bf{p}_4)} \hspace{0.3cm} \notag \\
&= g^2 \big[
f^{abe} f^{cde} (\delta_{ik} \delta_{jl} - \delta_{il} \delta_{jk}) +
f^{ace} f^{bde} (\delta_{ij} \delta_{kl} - \delta_{il} \delta_{jk}) +
f^{ade} f^{bce} (\delta_{ij} \delta_{kl} - \delta_{ik} \delta_{jl})
\big] \label{solf4bv} \\[1mm]
&\phantom{=} - \big[
f^{abe}_{3;ijm} (\bf{p}_1, \bf{p}_2, -\bf{p}_1 - \bf{p}_2) t_{mn}
(\bf{p}_1 + \bf{p}_2)
f^{cde}_{3;kln} (\bf{p}_3, \bf{p}_4, \bf{p}_1 + \bf{p}_2) \notag \\
&\phantom{={}-} \hspace{1.5cm} {}+
f^{ace}_{3;ikm} (\bf{p}_1, \bf{p}_3, -\bf{p}_1 - \bf{p}_3)
t_{mn} (\bf{p}_1 + \bf{p}_3)
f^{bde}_{3;jln} (\bf{p}_2, \bf{p}_4, \bf{p}_1 + \bf{p}_3) \notag \\
&\phantom{={}-} \hspace{3cm} {}+
f^{ade}_{3;ilm} (\bf{p}_1, \bf{p}_4, -\bf{p}_1 - \bf{p}_4) t_{mn}
(\bf{p}_1 + \bf{p}_4) f^{bce}_{3;jkn} (\bf{p}_2, \bf{p}_3, \bf{p}_1 + \bf{p}_4)
\big] \label{solf4gl} \\[1mm]
&\phantom{=} - g^2 \left(
f^{abe} f^{cde} \, \delta_{ij} \delta_{kl} \, \frac{(|\bf{p}_1| - |\bf{p}_2|)
(|\bf{p}_3| - |\bf{p}_4|)}{(\bf{p}_1 + \bf{p}_2)^2} \right. \notag \\
&\phantom{={}-} \hspace{3cm} {}+ f^{ace} f^{bde} \, \delta_{ik} \delta_{jl}
\, \frac{(|\bf{p}_1| - |\bf{p}_3|)
(|\bf{p}_2| - |\bf{p}_4|)}{(\bf{p}_1 + \bf{p}_3)^2} \notag \\
&\phantom{={}-} \hspace{6cm} \left. {}+
f^{ade} f^{bce} \, \delta_{il} \delta_{jk} \, \frac{(|\bf{p}_1| - |\bf{p}_4|)
(|\bf{p}_2| - |\bf{p}_3|)}{(\bf{p}_1 + \bf{p}_4)^2} \right) \:. \label{solf4ct}
\eal
This result for $f_4$ is represented diagrammatically in Fig.\
\ref{figf4}.
\begin{figure}
\begin{equation*}
2 f_4 =
- \parbox{1cm}{\begin{center}
\pspicture(-0.4,-0.4)(0.4,0.4)
\pscoil[coilaspect=35,coilwidth=0.1,coilarmA=0.1,coilarmB=0](0,0)(0.38,0.38)
\pscoil[coilaspect=35,coilwidth=0.1,coilarmA=0.1,coilarmB=0](0,0)(0.38,-0.38)
\pscoil[coilaspect=35,coilwidth=0.1,coilarmA=0.1,coilarmB=0](0,0)(-0.38,0.38)
\pscoil[coilaspect=35,coilwidth=0.1,coilarmA=0.1,coilarmB=0](0,0)(-0.38,-0.38)
\psdots(0,0)
\endpspicture
\end{center}}
- \bigg( \parbox{1.6cm}{\begin{center}
\pspicture(-0.4,-0.4)(0.97,0.4)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,0.38)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,-0.38)
\pscoil[coilarm=0.1](0,0)(0.57,0)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,0.38)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,-0.38)
\psdots(0,0)(0.57,0)
\endpspicture
\end{center}}
+ \text{2 perms.} \bigg)
- \bigg( \parbox{1.6cm}{\begin{center}
\pspicture(-0.4,-0.4)(0.97,0.4)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,0.38)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,-0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0,0)(0.57,0)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,0.38)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,-0.38)
\psdots(0,0)(0.57,0)
\endpspicture
\end{center}}
+ \text{2 perms.} \bigg)
\end{equation*}
\caption{A diagrammatic representation of Eqs.\
\eqref{solf4bv}--\eqref{solf4ct}. Every diagram corresponds to precisely one
of the Eqs.\ \eqref{solf4bv}--\eqref{solf4ct}, in the same order. The
``2 perms.''\ refer to permutations of the external legs. \label{figf4}}
\end{figure}
Equation \eqref{solf4bv}, divided by $(|\bf{p}_1| + \ldots +
|\bf{p}_4|)$, is interpreted as the elementary or ``bare'' four-gluon vertex.
The r\^ole of the factor 2 and the signs in Fig.\ \ref{figf4} will become
clear in the next section. Equation \eqref{solf4gl} and the second diagram in
Fig.\ \ref{figf4} represent the contraction of two elementary three-gluon
vertices, the latter being given mathematically by Eq.\ \eqref{solf3}.
The contraction refers to spatial and color indices and the momenta, with
opposite signs. Note that there is no ``propagator'' factor associated with
the contraction (except for a transverse Kronecker delta), and there is a
factor $1/(|\bf{p}_1| + \ldots + |\bf{p}_4|)$ for the external momenta which
is unusual from a diagrammatic point of view. Finally, Eq.\ \eqref{solf4ct}
and the last diagram in Fig.\ \ref{figf4} describe an
``elementary'' Coulomb interaction between the external gluon lines.
With this result in hand, we can go on to consider the terms quadratic in
$\bf{A}$ in the Schr\"odinger equation to order $g^2$. The relevant
contributions originate from Eqs.\ \eqref{clA2}, \eqref{cls1}, \eqref{cls2},
and \eqref{cls3}, and involve the functions $f_2$ and $f_4$. As a
result, we obtain the following equation for the coefficient function $f_2$
to order $g^2$:
\bal
\lefteqn{\big( f_2 (\bf{p}) \big)^2 \delta^{ab} \delta_{ij}
= \left( \bf{p}^2 - \frac{N_c g^2}{2} \, |\bf{p}| \int \frac{d^3 q}{(2 \pi)^3}
\frac{1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}{(\bf{p} - \bf{q})^2} \right)
\delta^{ab} \delta_{ij}} \hspace{1.5cm} \notag \\
&{}+ \frac{1}{2} \int \frac{d^3 q}{(2 \pi)^3} \,
f_{4;ijkl}^{abcc} (-\bf{p}, \bf{p}, -\bf{q}, \bf{q})
t_{kl} (\bf{q})
- N_c g^2 \delta^{ab} \int \frac{d^3 q}{(2 \pi)^3} \frac{|\bf{p}| - |\bf{q}|}
{(\bf{p} - \bf{q})^2} \, t_{ij} (\bf{q}) \:. \label{g2solf2gen}
\eal
The explicit result for $f_2$ to order $g^2$ is
\bal
f_2 (\bf{p}) &= |\bf{p}| - \frac{N_c g^2}{4} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}{(\bf{p} - \bf{q})^2}
+ \frac{N_c g^2}{2 |\bf{p}|} \, \frac{4}{3} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1}{2 |\bf{p}| + 2 |\bf{q}|} \label{g2solf2tp} \\
&\phantom{=} {}- \frac{N_c g^2}{2 |\bf{p}|} \, 2 \int \frac{d^3 q}{(2 \pi)^3}
\frac{\big( \delta_{ik} p_l + \delta_{kl} q_i - \delta_{li} p_k \big)
\, t_{km} (\bf{p} - \bf{q}) \, t_{ln} (\bf{q})}
{2 |\bf{p}| + 2 |\bf{q}|} \notag \\
&\hspace{3.5cm} {}\times \frac{\big( \delta_{jm} p_n + \delta_{mn} q_j
- \delta_{nj} p_m \big) \, t_{ij} (\bf{p})}
{(|\bf{p}| + |\bf{q}| + |\bf{p} - \bf{q}|)^2} \label{g2solf2gl} \\
&\phantom{=} {}- \frac{N_c g^2}{2 |\bf{p}|} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1 + (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}
{2 |\bf{p}| + 2 |\bf{q}|} \, \frac{\left( |\bf{p}| - |\bf{q}| \right)^2}
{(\bf{p} - \bf{q})^2} \label{g2solf2ctl} \\
&\phantom{=} {}- \frac{N_c g^2}{2 |\bf{p}|} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3}
\left( 1 + (\hat{\bf{p}} \cdot \hat{\bf{q}})^2 \right)
\frac{|\bf{p}| - |\bf{q}|}{(\bf{p} - \bf{q})^2} \:, \label{g2solf2ct}
\eal
where we have used the contraction of an arbitrary tensor $T_{ij} (\bf{p})$
\be
\frac{1}{2} \, T_{ij} (\bf{p}) t_{ij} (\bf{p}) = T^t (\bf{p})
\ee
in order to extract the transverse part. We have presented the
diagrams corresponding to Eqs.\ \eqref{g2solf2tp}--\eqref{g2solf2ct} in
Fig.\ \ref{figf2}.
\begin{figure}
\begin{equation*}
2 f_2 =
\big(
\parbox{0.74cm}{\begin{center}
\pspicture(0,-0.25)(0.74,0.25)
\psCoil{50}{3190}
\psdots[dotstyle=o,dotscale=1.1](0.37,0)
\endpspicture
\end{center}}
\big)^{-1}
- \parbox{2.1cm}{\begin{center}
\pspicture(-0.6,-0.5)(1.5,0.5)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.47,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.9,0)(1.37,0)
\pscircle[linewidth=0.5pt,linestyle=dashed,dash=2pt 1.5pt](0.45,0){0.45}
\psdots(0,0)(0.9,0)
\endpspicture
\end{center}}
- \parbox{1.7cm}{\begin{center}
\pspicture(-0.85,-0.55)(0.85,0.5)
\SpecialCoor
\multido{\notag=-73.00+11.25}{31}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(-0.715,-0.45)
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(0.715,-0.45)
\psdots(0,-0.45)
\endpspicture
\end{center}}
- \parbox{2.2cm}{\begin{center}
\pspicture(-1.05,-0.5)(1.05,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-0.92,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(0.92,0)
\psdots(-0.45,0)(0.45,0)
\endpspicture
\end{center}}
- \parbox{2.2cm}{\begin{center}
\pspicture(-1.05,-0.5)(1.05,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-0.92,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(0.92,0)
\psdots(-0.45,0)(0.45,0)
\endpspicture
\end{center}}
- \parbox{2.2cm}{\begin{center}
\pspicture(-1.05,-0.5)(1.05,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-0.92,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(0.92,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=square,dotscale=1.4](0,0.4)
\psdots[dotstyle=+,dotscale=1.4,dotangle=45](0,0.4)
\endpspicture
\end{center}}
\end{equation*}
\caption{The diagrams corresponding to Eqs.\
\eqref{g2solf2tp}--\eqref{g2solf2ct}. The bare propagator, the inverse
$2 |\bf{p}|$ of which appears in Eq.\ \eqref{g2solf2tp}, is marked with an
open circle for later use. The first two one-loop diagrams correspond to
the integrals in Eq.\ \eqref{g2solf2tp}, in the same order. The following
diagrams represent Eqs.\ \eqref{g2solf2gl}--\eqref{g2solf2ct}, respectively.
See the text for a motivation of the ``crossed'' gluon propagator notation
in the last loop diagram. \label{figf2}}
\end{figure}
The first loop integral in Eqs.\ \eqref{g2solf2gen} and \eqref{g2solf2tp}
results from Eq.\ \eqref{cls1} and is represented in Fig.\ \ref{figf2} as a
ghost loop because it stems from the Faddeev-Popov determinant. The
following three loop diagrams in Fig.\ \ref{figf2} are obtained by contracting
two external legs in the diagrams of Fig.\ \ref{figf4}, see Eq.\
\eqref{g2solf2gen}. The last loop integral in Eq.\ \eqref{g2solf2gen}, or
the integral \eqref{g2solf2ct}, on the other hand, originates from the terms
\eqref{cls2} and \eqref{cls3} in the Hamiltonian. Lacking a better
notation, we distinguish this contribution from the contraction of
Eq.\ \eqref{solf4ct}, the previous diagram, by marking the gluon propagator
with a cross (because there is no term $|\bf{q}|$ in the denominator that
would indicate the presence of an internal gluon propagator --- in
fact, the diagram may be interpreted to contain a $\Pi \Pi$-correlator,
where $\bm{\Pi}$ is the momentum conjugate to $\bf{A}$).
We have thus completed the determination of the (exponent of the) perturbative
vacuum wave functional to order $g^2$. The result is given in Eqs.\
\eqref{solf3}, \eqref{solf4bv}--\eqref{solf4ct}, and
\eqref{g2solf2tp}--\eqref{g2solf2ct}, to be substituted in Eq.\
\eqref{vacans}. We can also extract the perturbative vacuum energy to the
same order from the $\bf{A}$-independent terms in the Schr\"odinger equation
with the result
\be
E_0 = (N_c^2 - 1) \left( \int \frac{d^3 p}{(2 \pi)^3} f_2 (\bf{p}) \right)
(2 \pi)^3 \delta (\bf{0}) \label{g2E0}
\ee
[cf.\ Eq.\ \eqref{g0eq}], where Eqs.\ \eqref{g2solf2tp}--\eqref{g2solf2ct}
have to be substituted for $f_2 (\bf{p})$.
The explicit expression is not relevant for our purposes. We shall come back
to the vacuum energy later in the context of the static potential in the
presence of external charges. It should also be clear by now how to take the
determination of the perturbative vacuum functional and the vacuum energy
systematically to higher orders.
\section{Equal-time two-point correlation functions}
For the calculation of the equal-time correlation functions,
we need to include the Faddeev-Popov determinant in the measure of the
functional integral, see Eq.\ \eqref{corrfunc}. For our diagrammatic
procedure, it is very convenient to introduce ghost fields and write
\be
\textsf{FP} (\bf{A}) = \int D [c, \bar{c}] \exp \left( - \int d^3 x \,
\bar{c}^a (\bf{x}) [ - \nabla \cdot \bf{D}^{ab} (\bf{A}) ] c^b (\bf{x})
\right) \:. \label{FPrep}
\ee
In our conventions, we have explicitly
\bal
\lefteqn{\int d^3 x \, \bar{c}^a (\bf{x}) [ - \nabla \cdot
\bf{D}^{ab} (\bf{A}) ] c^b (\bf{x})
= \int \frac{d^3 p}{(2 \pi)^3} \, \bar{c}^a (-\bf{p}) \, \bf{p}^2
c^a (\bf{p})} \hspace{1cm} \label{gg} \\
&{}+ g \int \frac{d^3 p_1}{(2 \pi)^3} \frac{d^3 p_2}{(2 \pi)^3}
\frac{d^3 p_3}{(2 \pi)^3} \, i f^{abc} \, p_{1,j} \, \bar{c}^a (\bf{p}_1)
c^b (\bf{p}_2) A^c_j (\bf{p}_3)
(2 \pi)^3 \delta(\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \:. \label{ggv}
\eal
Note that $p_{1,j}$ under the integral in Eq.\ \eqref{ggv} can be replaced by
$-p_{2,j}$ due to the transversality of $\bf{A}$.
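Explicitly, momentum conservation allows the replacement $p_{1,j} = - p_{2,j} - p_{3,j}$, and the term proportional to $p_{3,j}$ drops out upon contraction with the transverse field,
\be
p_{1,j} \, A_j^c (\bf{p}_3) = - \left( p_{2,j} + p_{3,j} \right) A_j^c (\bf{p}_3)
= - p_{2,j} \, A_j^c (\bf{p}_3) \:.
\ee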
We now have a representation of the equal-time correlation functions as a
functional integral over the transverse components of $\bf{A}$, the ghost
and the antighost fields, see Eq.\ \eqref{corrfunc}. The integration measure,
which would be the exponential of the negative of the Euclidean action in the
usual four-dimensional formulation (in Euclidean space), is now given by
the exponential in Eq.\ \eqref{FPrep} and the square of the wave functional
\eqref{vacans}.
Note that the vacuum functional is real (at least to order $g^2$) because the
coefficient functions fulfill the reality condition
\be
\left( f^{a_1 \ldots a_k}_{k; i_1 \ldots i_k} (-\bf{p}_1, \ldots, -\bf{p}_k)
\right)^\ast =
f^{a_1 \ldots a_k}_{k; i_1 \ldots i_k} (\bf{p}_1, \ldots, \bf{p}_k) \:.
\ee
We shall use the analogy of this representation with the familiar functional
integral representation of the covariant correlation functions in the usual
four-dimensional formulation for the perturbative determination of the
equal-time correlation functions \eqref{corrfunc}. The corresponding
Feynman rules are easily identified: the (static) gluon propagator is the
inverse of $2 |\bf{p}|$, cf.\ Eq.\ \eqref{g0solf2} (the factor of two is due
to the square of the wave functional in the measure), while the remaining
contribution $-2 \big( f_2 (\bf{p}) - |\bf{p}| \big)$ and the coefficient
functions $-2 f_3 (\bf{p}_1, \bf{p}_2, \bf{p}_3)$ and
$-2 f_4 (\bf{p}_1, \ldots, \bf{p}_4)$ determine the two-, three-, and
four-gluon vertices, respectively. Furthermore, from Eq.\ \eqref{gg} we
identify the free ghost propagator $1/ \bf{p}^2$ and from Eq.\ \eqref{ggv}
the ghost-gluon vertex.
We consider the gluon equal-time two-point function
$\langle A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \rangle$
(with $t=0$ in the arguments of the gluon fields to be understood)
first. One of the contributions to be taken into account is the
ghost loop, constructed from two ghost-gluon vertices \eqref{ggv} and two
ghost propagators [see Eq.\ \eqref{gg}], and furthermore two static gluon
propagators from Eq.\ \eqref{g0solf2} for the external lines. As it turns
out, this contribution is exactly cancelled by the other contribution
with the same graph ``topology'', which arises from contracting one of the
two-gluon vertices, (minus twice) the first integral in Eq.\
\eqref{g2solf2tp}, with two external gluon propagators. Both
contributions are represented diagrammatically in the first line of Fig.\
\ref{figAA}.
\begin{figure}
\begin{align*}
\parbox{2.4cm}{\begin{center}
\pspicture(-0.75,-0.5)(1.65,0.5)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.63,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.9,0)(1.53,0)
\pscircle[linewidth=0.5pt,linestyle=dashed](0.45,0){0.45}
\psdots(0,0)(0.9,0)
\psdots[dotstyle=o,dotscale=1.1](-0.37,0)(1.27,0)(0.45,0.45)(0.45,-0.45)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-0.75,-0.5)(1.65,0.5)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.63,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.9,0)(1.53,0)
\pscircle[linewidth=0.5pt,linestyle=dashed,dash=2pt 1.5pt](0.45,0){0.45}
\psdots(0,0)(0.9,0)
\psdots[dotstyle=o,dotscale=1.1](-0.37,0)(1.27,0)
\endpspicture
\end{center}}
&= 0 \\[-8mm]
\parbox{1.7cm}{\begin{center}
\pspicture(-0.85,-0.55)(0.85,0.5)
\SpecialCoor
\multido{\notag=-73.00+11.25}{31}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(-0.715,-0.45)
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(0.715,-0.45)
\psdots(0,-0.45)
\psdots[dotstyle=o,dotscale=1.1](-0.45,-0.45)(0.45,-0.45)(0,0.4)
\endpspicture
\end{center}}
+ \parbox{1.7cm}{\begin{center}
\pspicture(-0.85,-0.55)(0.85,0.5)
\SpecialCoor
\multido{\notag=-73.00+11.25}{31}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(-0.715,-0.45)
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(0.715,-0.45)
\psdots(0,-0.45)
\psdots[dotstyle=o,dotscale=1.1](-0.45,-0.45)(0.45,-0.45)
\endpspicture
\end{center}}
&= E \bigg( \parbox{1.5cm}{\begin{center}
\pspicture(-0.75,-0.55)(0.75,0.5)
\SpecialCoor
\multido{\notag=-73.00+11.25}{31}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(-0.715,-0.45)
\pscoil[coilarmA=0.1,coilarmB=0](0,-0.45)(0.715,-0.45)
\psdots(0,-0.45)
\psdots[dotstyle=o,dotscale=1.1](-0.45,-0.45)(0.45,-0.45)(0,0.4)
\endpspicture
\end{center}} \bigg) \\[-6mm]
\parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)(0,-0.4)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)
\endpspicture
\end{center}}
&= E \bigg( \parbox{2.24cm}{\begin{center}
\pspicture(-1.12,-0.5)(1.12,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)(0,-0.4)
\endpspicture
\end{center}} \bigg) \\[-6mm]
\parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)
\psdots[dotstyle=square,dotscale=1.4](0,0.4)
\psdots[dotstyle=+,dotscale=1.4,dotangle=45](0,0.4)
\endpspicture
\end{center}}
&= E \bigg( \parbox{2.24cm}{\begin{center}
\pspicture(-1.12,-0.5)(1.12,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)
\endpspicture
\end{center}} \bigg)
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\pscoil[coilarmA=0.1,coilarmB=0](-0.45,0)(-1.08,0)
\pscoil[coilarmA=0.1,coilarmB=0](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)
\psdots[dotstyle=square,dotscale=1.4](0,0.4)
\psdots[dotstyle=+,dotscale=1.4,dotangle=45](0,0.4)
\endpspicture
\end{center}}
\end{align*}
\caption{Diagrammatic representation of the various contributions to the
gluonic equal-time two-point function, see Eqs.\
\eqref{exprepdet}--\eqref{AActsum}. The propagators marked with open circles
are taken from Eqs.\ \eqref{g0solf2} and \eqref{gg}, respectively, while the
``direct'' contractions without open circles refer to the contractions
that appear in the course of the determination of the vacuum wave functional,
see Figs.\ \ref{figf4} and \ref{figf2}, so that the corresponding parts of
the diagrams translate into (minus two times) the mathematical expressions
\eqref{solf4gl}--\eqref{solf4ct} [divided by $(|\bf{p}_1| + \ldots +
|\bf{p}_4|)$] and \eqref{g2solf2tp}--\eqref{g2solf2ct}. The notation
``$E (\cdot)$'' for the sum of all diagrams with the same topology is
explained in the text, following Eq.\ \eqref{g2AAct}. \label{figAA}}
\end{figure}
The cancellation of the ghost loop contribution from the
perturbative gluon two-point function (to one-loop order) is interesting,
given that this contribution plays a major role in the nonperturbative
approaches to the infrared behavior of the equal-time gluon two-point
function \cite{Zwa04}--\cite{SLR06}. The cancellation of ghost loops was found
to be a general feature in the Lagrangian functional integral approach
\cite{WR07a,WR07b}. An alternative way to see the cancellation in our present
approach is to write the Faddeev-Popov determinant as
\be
\textsf{FP} (\bf{A}) = \exp \big[ \text{tr} \, \ln \big( -\nabla \cdot
\bf{D} (\bf{A}) \big) \big] \:. \label{exprepdet}
\ee
The coefficient of the term quadratic in $\bf{A}$ in $\text{tr} \, \ln
\big( -\nabla \cdot \bf{D} (\bf{A}) \big)$ precisely equals twice the first
integral in Eq.\ \eqref{g2solf2tp} and hence cancels out in the exponent.
Next we turn to the tadpole contribution, which is obtained from the elementary
four-gluon vertex extracted from Eq.\ \eqref{solf4bv}, appropriately contracted
with three static gluon propagators. Again, there is a second contribution
with the same ``topology'' given by the two-gluon vertex from the last integral
in Eq.\ \eqref{g2solf2tp} contracted with two external propagators,
cf.\ the second line in Fig.\ \ref{figAA}. The sum of these two
contributions to the gluon equal-time two-point function is (with
$\bf{p} \equiv \bf{p}_1$)
\bmu
-\frac{2 N_c g^2}{(2 |\bf{p}|)^2} \, \frac{4}{3} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1}{2 |\bf{p}| + 2 |\bf{q}|} \frac{1}{2 |\bf{q}|}
- \frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, \frac{4}{3} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1}{2 |\bf{p}| + 2 |\bf{q}|} \\
= -\frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, \frac{4}{3} \int \frac{d^3 q}{(2 \pi)^3}
\frac{1}{2 |\bf{q}|} \:, \label{AAtpsum}
\emu
to be multiplied with $\delta^{ab} t_{ij} (\bf{p})$.
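For the reader's convenience, we note that the sum on the left-hand side of
Eq.\ \eqref{AAtpsum} collapses by virtue of the elementary identity
\be
\frac{1}{(2 |\bf{p}|)^2} \, \frac{1}{(2 |\bf{p}| + 2 |\bf{q}|) \, 2 |\bf{q}|}
+ \frac{1}{(2 |\bf{p}|)^3} \, \frac{1}{2 |\bf{p}| + 2 |\bf{q}|}
= \frac{1}{(2 |\bf{p}|)^3} \, \frac{1}{2 |\bf{q}|} \:,
\ee
applied under the integral.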
The most complicated contribution to the two-point function comes from diagrams
with the gluon loop topology (with two three-gluon vertices). There are three
different diagrams of this type, represented in the third line of
Fig.\ \ref{figAA}, the first from contracting two three-gluon
vertices extracted from Eq.\ \eqref{solf3} with four static gluon propagators
(two internal and two external), the second from contracting the part of
the four-gluon vertex given by Eq.\ \eqref{solf4gl} with one internal and two
external gluon propagators, and the third by contracting the two-gluon vertex
from Eq.\ \eqref{g2solf2gl} with two static gluon propagators. The sum of
these contributions is
\bmu
\frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, 2 \int \frac{d^3 q}{(2 \pi)^3}
\frac{2 |\bf{p}| + 2 |\bf{p} - \bf{q}|}
{(|\bf{p}| + |\bf{q}| + |\bf{p} - \bf{q}|)^2 \, 2 |\bf{q}| \,
2 |\bf{p} - \bf{q}|} \\
{}\times \big( \delta_{km} p_n + \delta_{mn} q_k - \delta_{nk} p_m \big) \,
t_{mr} (\bf{p} - \bf{q}) \, t_{ns} (\bf{q}) \,
\big( \delta_{lr} p_s + \delta_{rs} q_l
- \delta_{sl} p_r \big) \, t_{kl} (\bf{p}) \label{AAglsum}
\emu
(again, to be multiplied with $\delta^{ab} t_{ij} (\bf{p})$,
and $\bf{p} \equiv \bf{p}_1$). The tensor structure in this expression is
invariant under the transformation $\bf{q} \to \bf{p} - \bf{q}$, a
fact we can use to replace $2 |\bf{p}| + 2 |\bf{p} - \bf{q}|$ in the
numerator with $2 |\bf{p}| + |\bf{q}| + |\bf{p} - \bf{q}|$. The tensor
structure itself can be simplified by performing the contractions explicitly.
A straightforward but somewhat tedious calculation gives
\bmu
\big( \delta_{km} p_n + \delta_{mn} q_k - \delta_{nk} p_m \big) \,
t_{mr} (\bf{p} - \bf{q}) \, t_{ns} (\bf{q}) \,
\big( \delta_{lr} p_s + \delta_{rs} q_l
- \delta_{sl} p_r \big) \, t_{kl} (\bf{p}) \\
= \big( 1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2 \big)
\left( 2 \bf{p}^2 + 2 \bf{q}^2 + \frac{\bf{p}^2 \, \bf{q}^2 +
(\bf{p} \cdot \bf{q})^2}{(\bf{p} - \bf{q})^2} \right) \:.
\emu
Finally, we turn to the contributions that involve the (non-Abelian) Coulomb
potential. There are, again, three such terms, represented in the last
line of Fig.\ \ref{figAA}, the first from the four-gluon vertex
derived from Eq.\ \eqref{solf4ct} contracted with three static gluon
propagators, and the other two using the two-gluon vertices
corresponding to the two integrals \eqref{g2solf2ctl} and
\eqref{g2solf2ct} contracted with two gluon propagators each. The
sum of these terms is
\bmu
\frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1 + (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}
{2 |\bf{q}|} \, \frac{(|\bf{p}| - |\bf{q}|)^2}{(\bf{p} - \bf{q})^2}
+ \frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3} \left( 1 + (\hat{\bf{p}} \cdot \hat{\bf{q}})^2
\right) \frac{|\bf{p}| - |\bf{q}|}{(\bf{p} - \bf{q})^2} \\
= \frac{2 N_c g^2}{(2 |\bf{p}|)^3} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1 + (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}
{2 |\bf{q}|} \, \frac{|\bf{p}|^2 - |\bf{q}|^2}{(\bf{p} - \bf{q})^2} \:,
\label{AActsum}
\emu
to be multiplied with $\delta^{ab} t_{ij} (\bf{p})$ as before,
and $\bf{p} \equiv \bf{p}_1$. On the left-hand side of Eq.\
\eqref{AActsum}, we have added up the contributions from the contraction
of the four-gluon vertex and the two-gluon vertex in Eq.\ \eqref{g2solf2ctl}
to give the first loop integral. The left-hand side of Eq.\ \eqref{AActsum}
is represented diagrammatically as the right-hand side of the last line in
Fig.\ \ref{figAA}.
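The simplification on the right-hand side of Eq.\ \eqref{AActsum} results
from combining the two integrands over the common denominator
$2 |\bf{q}| \, (\bf{p} - \bf{q})^2$ and using
\be
(|\bf{p}| - |\bf{q}|)^2 + 2 |\bf{q}| \left( |\bf{p}| - |\bf{q}| \right)
= \left( |\bf{p}| - |\bf{q}| \right) \left( |\bf{p}| + |\bf{q}| \right)
= |\bf{p}|^2 - |\bf{q}|^2 \:.
\ee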
Putting it all together, the result for the equal-time gluon two-point function
is, to order $g^2$,
\bal
\lefteqn{\langle A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \rangle
= \left[ \frac{1}{2 |\bf{p}_1|} - \frac{2 N_c g^2}{(2 |\bf{p}_1|)^3} \,
\frac{4}{3} \int \frac{d^3 q}{(2 \pi)^3} \frac{1}{2 |\bf{q}|} \right.}
\hspace{1.5cm} \label{g2AAtp} \\
&\phantom{=\:\;} {}+ \frac{2 N_c g^2}{(2 |\bf{p}_1|)^3} \, 2
\int \frac{d^3 q}{(2 \pi)^3}
\frac{\big( 1 - (\hat{\bf{p}}_1 \cdot \hat{\bf{q}})^2 \big)
(2 |\bf{p}_1| + |\bf{q}| + |\bf{p}_1 - \bf{q}|)}
{(|\bf{p}_1| + |\bf{q}| + |\bf{p}_1 - \bf{q}|)^2 \, 2 |\bf{q}| \,
2 |\bf{p}_1 - \bf{q}|} \notag \\
&\phantom{=} \hspace{4cm} {}\times
\left( 2 \bf{p}_1^2 + 2 \bf{q}^2 + \frac{\bf{p}_1^2 \, \bf{q}^2 +
(\bf{p}_1 \cdot \bf{q})^2}{(\bf{p}_1 - \bf{q})^2} \right) \label{g2AAgl} \\
&\phantom{=\:} \left. {}+ \frac{2 N_c g^2}{(2 |\bf{p}_1|)^3} \, \frac{1}{2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1 + (\hat{\bf{p}}_1 \cdot \hat{\bf{q}})^2}
{2 |\bf{q}|} \, \frac{\bf{p}_1^2 - \bf{q}^2}{(\bf{p}_1 - \bf{q})^2} \right]
\delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:. \label{g2AAct}
\eal
Looking at the diagrams on the left-hand sides of the first three
lines in Fig.\ \ref{figAA}, it is clear that there is precisely one diagram
among those of the same topology that is constructed exclusively from the
elementary vertices \eqref{solf3}, \eqref{solf4bv}, and \eqref{ggv} and the
``bare'' propagators taken from Eqs.\ \eqref{g0solf2} and \eqref{gg}. We will
call this kind of diagram an ``F-diagram''. Interestingly, the sum of all
diagrams with the same topology, which we will refer to as an ``E-diagram'',
can be constructed from the corresponding F-diagram by a formal operation
that we call the ``E-operator''. It is denoted as ``$E (\cdot)$'' in Fig.\
\ref{figAA} and will be illustrated considering the example of the third
line in Fig.\ \ref{figAA}: starting from the mathematical expression for
the corresponding F-diagram, one multiplies the integrand (of the integral over
loop momentum) with the sum of all $|\bf{k}|$, where $\bf{k}$ runs over the
momenta of all propagators in the diagram, and divides by the
corresponding sum restricted to the momenta of the external propagators.
Indeed, the result of this operation is given by Eq.\ \eqref{AAglsum} in the
$(\bf{q} \to \bf{p} - \bf{q})$-symmetric form, see the remark after Eq.\
\eqref{AAglsum}.
The same rule applies to the second line in Fig.\ \ref{figAA},
or Eq.\ \eqref{AAtpsum}, only that the propagator that starts and ends at
the same vertex has to be counted twice in the sum over (internal and
external) $|\bf{k}|$. Similarly, the E-operator can be used to sum the
first two diagrams on the left-hand side of the last line in Fig.\
\ref{figAA}. We have to consider the Coulomb interaction as an elementary
vertex given by \eqref{solf4ct} to this end, and count the gluon propagator
that starts and ends at this vertex twice in the sum over $|\bf{k}|$ just as
in the case of the other elementary four-gluon vertex. The same rule for the
generation of the E-diagrams (given the elementary vertices and propagators)
has been shown to hold up to
two-loop order for the equal-time two-point function and to one-loop order for
the four-point function in the context of a scalar $\phi^4$ theory
\cite{Web08}. Note that two contributions, the second diagram in the first
line of Fig.\ \ref{figAA} and the third diagram in the last line of the same
figure, corresponding to the first loop integral in Eq.\ \eqref{g2solf2tp}
and to Eq.\ \eqref{g2solf2ct}, respectively, do not fit into this general
scheme.
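Since the rule is completely mechanical, it can be cast into a few lines of
code. The following Python sketch (ours, purely illustrative; the function
name is our own) computes the E-operator factor from the moduli of the
propagator momenta, counting a propagator that starts and ends at the same
vertex twice:
\begin{verbatim}
# Sketch of the E-operator rule: multiply the F-diagram integrand by the
# sum of |k| over all propagator lines (a line that starts and ends at the
# same vertex counted twice) and divide by the sum over the external lines.

def e_factor(external, internal, self_closing=()):
    total = sum(external) + sum(internal) + 2 * sum(self_closing)
    return total / sum(external)

# Gluon-loop topology (third line of Fig. [figAA]): external lines |p|, |p|,
# internal lines |q|, |p - q|; the factor reproduces the numerator
# (2|p| + |q| + |p - q|) / (2|p|) of the symmetrized Eq. [AAglsum].
p, q, p_minus_q = 1.0, 0.7, 0.8          # sample values of the moduli
print(e_factor(external=[p, p], internal=[q, p_minus_q]))

# Tadpole topology (second line of Fig. [figAA]): the self-closing line of
# momentum q counts twice, giving (2|p| + 2|q|) / (2|p|), cf. Eq. [AAtpsum].
print(e_factor(external=[p, p], internal=[], self_closing=[q]))
\end{verbatim}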
The calculation of the equal-time ghost two-point function
from the graphical rules is much simpler. One obtains directly
\be
\langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) \rangle
= \left( \frac{1}{\bf{p}_1^2} + \frac{N_c g^2}{\bf{p}_1^2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1 - (\hat{\bf{p}}_1 \cdot \hat{\bf{q}})^2}
{(\bf{p}_1 - \bf{q})^2 \, 2 |\bf{q}|} \right) \delta^{ab} \,
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \label{g2gg}
\ee
corresponding to the diagrams in Fig.\ \ref{figcc}.
\begin{figure}
\begin{equation*}
\langle c \bar{c} \rangle =
\parbox{0.96cm}{\begin{center}
\pspicture(-0.11,-0.25)(0.85,0.25)
\psline[linestyle=dashed](0.03,0)(0.71,0)
\psdots[dotstyle=o,dotscale=1.1](0.37,0)
\endpspicture
\end{center}}
+ \parbox{2.4cm}{\begin{center}
\pspicture(-1.2,-0.5)(1.2,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[linestyle=dashed](-0.45,0)(0.45,0)
\psline[linestyle=dashed](-0.45,0)(-1.08,0)
\psline[linestyle=dashed](0.45,0)(1.08,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](-0.82,0)(0.82,0)(0,0.4)(0,0)
\endpspicture
\end{center}}
\end{equation*}
\caption{Diagrammatic representation of Eq.\ \eqref{g2gg}. \label{figcc}}
\end{figure}
Note that one of the factors $1/\bf{p}_1^2$ for the external ghost
propagators cancels against the momentum dependence of the ghost-gluon
vertices.
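The loop integral in Eq.\ \eqref{g2gg} is, of course, logarithmically UV
divergent; its renormalization is taken up in the following section. The
divergence is easily exhibited numerically: the following Python sketch
(ours, purely illustrative) evaluates the integral with a sharp momentum
cutoff $\Lambda$ and extracts the slope with respect to $\ln \Lambda$, which
should approach $1/(6 \pi^2) \approx 0.0169$, consistent with the coefficient
$4/3$ that multiplies the logarithm in Eq.\ \eqref{reggg} below:
\begin{verbatim}
# Numerical illustration: the ghost loop integral of Eq. [g2gg] with a
# sharp momentum cutoff grows like ln(cutoff)/(6 pi^2) for cutoff >> |p|.
import numpy as np

def ghost_loop(cutoff, p=1.0, nq=2000, nx=400):
    u = np.linspace(np.log(1e-4), np.log(cutoff), nq)  # integrate in ln q
    q = np.exp(u)
    x = np.linspace(-1.0, 1.0, nx)                     # x = cos(angle(p, q))
    Q, X = np.meshgrid(q, x, indexing="ij")
    # integrand q (1 - x^2) / (2 (p - q)^2) in the variables (q, x)
    f = (1 - X**2) * Q / (2 * (p**2 + Q**2 - 2*p*Q*X) + 1e-12)
    du, dx = u[1] - u[0], x[1] - x[0]
    return (f.sum(axis=1) * dx * q).sum() * du / (4 * np.pi**2)

for L1, L2 in [(10.0, 100.0), (100.0, 1000.0)]:
    slope = (ghost_loop(L2) - ghost_loop(L1)) / np.log(L2 / L1)
    print(slope, 1 / (6 * np.pi**2))   # both approximately 0.0169
\end{verbatim}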
We shall close this section with a calculation of the static (heavy quark)
potential, the energy for a configuration of static external color charges
$\rho_q (\bf{x})$. To this end, we introduce the charges into the
Hamiltonian, see Eqs.\ \eqref{colch} and
\eqref{clcoul}. Compared to the
Coulomb term \eqref{cls2}--\eqref{cls3} which was calculated in the absence
of external charges, there are two new terms of order $g^2$:
\be
\frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \, \rho_q^a (-\bf{p})
\frac{1}{\bf{p}^2} \, \rho_q^a (\bf{p}) \:, \label{g2sp}
\ee
which is $\bf{A}$-independent and hence only contributes to the vacuum
energy but leaves the vacuum wave functional unchanged, and
\be
g^2 \int \frac{d^3 p_1}{(2 \pi)^3} \frac{d^3 p_2}{(2 \pi)^3}
\frac{d^3 p_3}{(2 \pi)^3} \, f^{abc} \, \frac{1}{\bf{p}_1^2}
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \, \rho^a_q (\bf{p}_1)
A_j^b (\bf{p}_2) \, \frac{1}{i} (2 \pi)^3
\frac{\delta}{\delta A^c_j (-\bf{p}_3)} \:. \label{clco}
\ee
The latter term, when applied to the vacuum wave functional, generates the
following (properly symmetrized) expression to order $g^2$,
\be
- \frac{g^2}{2} \int \frac{d^3 p_1}{(2 \pi)^3} \frac{d^3 p_2}{(2 \pi)^3}
\frac{d^3 p_3}{(2 \pi)^3} \, i f^{abc} \, \frac{|\bf{p}_2| - |\bf{p}_3|}
{\bf{p}_1^2} \, \rho^a_q (\bf{p}_1) A_j^b (\bf{p}_2) A_j^c (\bf{p}_3)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \:, \label{clAAq}
\ee
which implies that the vacuum wave functional to order $g^2$ has to be
modified in order to fulfill the Schr\"odinger equation with this new term.
A term cancelling the expression \eqref{clAAq} in the Schr\"odinger equation
can only result from the second derivative term in the Hamiltonian
[Eq.\ \eqref{clA2}]. It is then simple to see that we have to add
the expression
\be
- \frac{g^2}{2} \int \frac{d^3 p_1}{(2 \pi)^3} \frac{d^3 p_2}{(2 \pi)^3}
\frac{d^3 p_3}{(2 \pi)^3} \, i f^{abc} \, \frac{1}{\bf{p}_1^2} \,
\frac{|\bf{p}_2| - |\bf{p}_3|}{|\bf{p}_2| + |\bf{p}_3|}
\, \rho^a_q (\bf{p}_1) A_j^b (\bf{p}_2) A_j^c (\bf{p}_3)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \label{g2solAAq}
\ee
to the negative of the exponent of the vacuum wave functional in order to
satisfy the Schr\"odinger equation to order $g^2$. This term describes the
back-reaction of the vacuum to the presence of the external charges to order
$g^2$. Observe that due to the
presence of the external charges $\rho_q (\bf{p})$, the coefficient function
of $A^a_i (\bf{p}_1) A^b_j (\bf{p}_2)$ in the vacuum wave functional
ceases to be of the form $f_2 (\bf{p}_2) \delta^{ab} \, \delta_{ij}
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2)$. Furthermore, contrary
to the terms found before, the contribution \eqref{g2solAAq} is imaginary.
The rest of the vacuum wave functional determined in the previous section
remains without change.
We now have to calculate the vacuum energy in the presence of the external
charges. To order $g^2$, the result is the former one, Eq.\ \eqref{g2E0},
without any contribution from the new term \eqref{g2solAAq}, plus Eq.\
\eqref{g2sp} which is the part of the energy that depends on the external
charges and hence defines the potential to this order. Of course, this is
just the well-known Coulomb potential of electrodynamics. We are really
interested in the first quantum corrections to this ``bare'' potential, which
are of order $g^4$.
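For completeness, we note the position-space form of the lowest-order
potential: since the Fourier transform of $1/\bf{p}^2$ is
$1/(4 \pi |\bf{x} - \bf{y}|)$, Eq.\ \eqref{g2sp} reads
\be
\frac{g^2}{2} \int d^3 x \, d^3 y \,
\frac{\rho_q^a (\bf{x}) \, \rho_q^a (\bf{y})}{4 \pi \, |\bf{x} - \bf{y}|} \:.
\ee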
In general, the vacuum energy is given by
\be
E_0 = \frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \,
\rho^a_q (-\bf{p}) \frac{1}{\bf{p}^2} \, \rho^a_q (\bf{p})
- \frac{1}{2} \int \frac{d^3 p}{(2 \pi)^3} \left. (2 \pi)^3
\frac{\delta}{\delta A_i^a (\bf{p})} (2 \pi)^3
\frac{\delta}{\delta A_i^a (-\bf{p})} \, \psi (\bf{A}) \right|_{\bf{A} = 0}
\:, \label{xcE0}
\ee
which reduces to Eq.\ \eqref{g2E0} in the absence of external charges.
A contribution of order $g^4$ can hence only originate from the terms in the
vacuum wave functional that are quadratic in $\bf{A}$. As long as we are
only interested in the potential between static sources, we can concentrate
on terms that contain precisely two powers of $\rho_q$. We then start
by identifying all the contributions to the Schr\"odinger equation of order
$g^4$ that contain two powers of $\bf{A}$ and two powers of $\rho_q$.
One of these contributions results from expanding the Coulomb kernel
\be
\langle \bf{x}, a | (-\nabla \cdot \bf{D})^{-1}
(-\nabla^2) (-\nabla \cdot \bf{D})^{-1} | \bf{y}, b \rangle
\ee
in Eq.\ \eqref{christlee} to second order in $\bf{A}$ for $\rho = \rho_q$.
The result is the term
\bmu
- \frac{3}{4} \, g^4 \int \frac{d^3 p_1}{(2 \pi)^3} \cdots
\frac{d^3 p_4}{(2 \pi)^3} \left( f^{ace} f^{bde} \, \frac{p_{1,i} p_{2,j}}
{\bf{p}_1^2 \, \bf{p}_2^2 \, (\bf{p}_1 + \bf{p}_3)^2}
+ f^{ade} f^{bce} \, \frac{p_{1,j} p_{2,i}}
{\bf{p}_1^2 \, \bf{p}_2^2 \, (\bf{p}_1 + \bf{p}_4)^2} \right) \\[2mm]
{}\times \rho^a_q (\bf{p}_1) \rho^b_q (\bf{p}_2) A^c_i (\bf{p}_3)
A^d_j (\bf{p}_4) (2 \pi)^3 \delta (\bf{p}_1 + \ldots + \bf{p}_4) \label{g4xc1}
\emu
on the left-hand side of the Schr\"odinger equation.
Another contribution of the same type arises from the second functional
derivative in Eq.\ \eqref{clA2} acting (twice) on the term \eqref{g2solAAq},
which gives the contribution
\bal
\frac{g^4}{4} \int \frac{d^3 p_1}{(2 \pi)^3} \cdots
\frac{d^3 p_4}{(2 \pi)^3} \frac{1}{\bf{p}_1^2 \, \bf{p}_2^2}
&\left( f^{ace} f^{bde} \, t_{ij} (\bf{p}_1 + \bf{p}_3) \,
\frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_3|} \,
\frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_4|} \right. \notag \\
&\left. {}+ f^{ade} f^{bce} \, t_{ij} (\bf{p}_1 + \bf{p}_4) \,
\frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_3|} \,
\frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_4|} \right) \notag \\[2mm]
&\hspace{1cm} {}\times \rho^a_q (\bf{p}_1) \rho^b_q (\bf{p}_2) A^c_i (\bf{p}_3)
A^d_j (\bf{p}_4) (2 \pi)^3 \delta (\bf{p}_1 + \ldots + \bf{p}_4) \label{g4xc2}
\eal
to the Schr\"odinger equation.
The last contribution of the same type comes from the ``mixed'' term
where the operator \eqref{clco} acts upon the expression
\eqref{g2solAAq} in the wave functional. The result is
\bal
-\frac{g^4}{4} \int \frac{d^3 p_1}{(2 \pi)^3} \cdots
\frac{d^3 p_4}{(2 \pi)^3} \frac{1}{\bf{p}_1^2 \, \bf{p}_2^2}
&\left[ f^{ace} f^{bde} \, t_{ij} (\bf{p}_1 + \bf{p}_3)
\left( \frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_3|}
+ \frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_4|} \right) \right. \notag \\
&\hspace{-2mm} \left. {}+ f^{ade} f^{bce} \,
t_{ij} (\bf{p}_1 + \bf{p}_4)
\left( \frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_3|}
+ \frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_4|} \right) \right] \notag \\[2mm]
&\hspace{0.7cm} {}\times \rho^a_q (\bf{p}_1) \rho^b_q (\bf{p}_2) A^c_i
(\bf{p}_3) A^d_j (\bf{p}_4) (2 \pi)^3 \delta (\bf{p}_1 + \ldots + \bf{p}_4) \:,
\label{g4xc3}
\eal
to be included in the Schr\"odinger equation. It can be shown that no
other contributions quadratic in $\bf{A}$ and in $\rho_q$ exist to order
$g^4$.
In analogy to the determination of the expression \eqref{g2solAAq}
from Eq.\ \eqref{clAAq},
the three contributions \eqref{g4xc1}--\eqref{g4xc3} to the Schr\"odinger
equation are taken care of by including the following expression in the
negative exponent of the vacuum wave functional [in addition to \eqref{solf3},
\eqref{solf4bv}--\eqref{solf4ct}, \eqref{g2solf2tp}--\eqref{g2solf2ct},
and \eqref{g2solAAq}]
\bmu
-\frac{g^4}{4} \int \frac{d^3 p_1}{(2 \pi)^3} \cdots
\frac{d^3 p_4}{(2 \pi)^3} \frac{1}{\bf{p}_1^2 \, \bf{p}_2^2 \,
(|\bf{p}_3| + |\bf{p}_4|)} \left\{ f^{ace} f^{bde} \left[
\frac{3 p_{1,i} p_{2,j}}{(\bf{p}_1 + \bf{p}_3)^2}
- t_{ij} (\bf{p}_1 + \bf{p}_3) \right. \right. \\
\left. {}\times \left(
\frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_3|} \,
\frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_4|}
- \frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_3|}
- \frac{|\bf{p}_1 + \bf{p}_3| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_3| + |\bf{p}_4|} \right) \right] \\
{}+ f^{ade} f^{bce} \left[ \frac{3 p_{1,j} p_{2,i}}
{(\bf{p}_1 + \bf{p}_4)^2} - t_{ij} (\bf{p}_1 + \bf{p}_4)
\left( \frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_3|} \,
\frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_4|}
- \frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_3|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_3|} \right. \right. \\
\left. \left. \left. {}- \frac{|\bf{p}_1 + \bf{p}_4| - |\bf{p}_4|}
{|\bf{p}_1 + \bf{p}_4| + |\bf{p}_4|} \right) \right] \right\}
\rho^a_q (\bf{p}_1) \rho^b_q (\bf{p}_2) A^c_i (\bf{p}_3)
A^d_j (\bf{p}_4) (2 \pi)^3 \delta (\bf{p}_1 + \ldots + \bf{p}_4) \:.
\label{g4solAAqq}
\emu
This result (multiplied by 2) is represented
diagrammatically in Fig.\ \ref{figAArr}, where we have denoted the
``Coulomb propagator'' $1/\bf{p}^2$ as a double line.
\begin{figure}
\begin{equation*}
- 3 \bigg( \parbox{1.6cm}{\begin{center}
\pspicture(-0.4,-0.4)(0.97,0.4)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0.57,0)(0.95,-0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0,0)(0.57,0)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0,0)(-0.38,-0.38)
\psdots(0,0)(0.57,0)
\endpspicture
\end{center}}
+ \text{1 perm.} \bigg)
- 2 \bigg( \parbox{1.6cm}{\begin{center}
\pspicture(-0.4,-0.4)(0.97,0.4)
\pscoil[coilarmA=0.1,coilarmB=0](0.57,0)(0.95,0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0.57,0)(0.95,-0.38)
\pscoil[coilarm=0.1](0,0)(0.57,0)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(-0.38,0.38)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0,0)(-0.38,-0.38)
\psdots(0,0)(0.57,0)
\endpspicture
\end{center}}
+ \text{1 perm.} \bigg)
\end{equation*}
\caption{Diagrammatic representation of the contributions \eqref{g4solAAqq}
(multiplied by 2) to the vacuum wave functional. \label{figAArr}}
\end{figure}
The first diagram (and its
permutation) corresponds to the expression \eqref{g4xc1}, while the second
diagram (plus its permutation) corresponds to the sum of the expressions
\eqref{g4xc2} and \eqref{g4xc3}. From Eq.\ \eqref{xcE0}, we find the
contribution to the vacuum energy
\bmu
\frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \, \rho^a_q (-\bf{p})
\frac{1}{(\bf{p}^2)^2} \Bigg\{ \frac{N_c g^2}{2}
\int \frac{d^3 q}{(2 \pi)^3} \frac{1}{2 |\bf{q}|} \Bigg[
3 \, \frac{\bf{p}^2 - (\bf{p} \cdot \hat{\bf{q}})^2}{(\bf{p} - \bf{q})^2} \\
{}+ t_{ij} (\bf{q}) t_{ij} (\bf{p} - \bf{q}) \left(
\left( \frac{|\bf{p} - \bf{q}| - |\bf{q}|}{|\bf{p} - \bf{q}| + |\bf{q}|}
\right)^2 - 2 \, \frac{|\bf{p} - \bf{q}| - |\bf{q}|}
{|\bf{p} - \bf{q}| + |\bf{q}|} \right)
+ 3 \, \frac{\bf{p}^2 - (\bf{p} \cdot \hat{\bf{q}})^2}{(\bf{p} + \bf{q})^2} \\
{}+ t_{ij} (\bf{q}) t_{ij} (\bf{p} + \bf{q}) \left(
\left( \frac{|\bf{p} + \bf{q}| - |\bf{q}|}{|\bf{p} + \bf{q}| + |\bf{q}|}
\right)^2 - 2 \, \frac{|\bf{p} + \bf{q}| - |\bf{q}|}
{|\bf{p} + \bf{q}| + |\bf{q}|} \right) \Bigg] \Bigg\}
\rho_q^a (\bf{p}) \:.
\emu
This latter expression can be simplified by shifting
$\bf{q} \to \bf{q} - \bf{p}$ in the last two terms (in the round
bracket). Together with Eq.\ \eqref{g2sp}, we find for the part of the vacuum
energy that is quadratic in the external static charge $\rho_q$,
\be
E_0^{(\rho_q, 2)} =
\frac{g^2}{2} \int \frac{d^3 p}{(2 \pi)^3} \, \rho^a_q (-\bf{p}) V (\bf{p})
\rho_q^a (\bf{p}) \:, \label{defsp}
\ee
the following result for the static potential to order $g^2$
\bal
V (\bf{p}) &= \frac{1}{\bf{p}^2} + \frac{N_c g^2}{\bf{p}^2} \,
3 \int \frac{d^3 q}{(2 \pi)^3} \frac{1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}
{2 |\bf{q}| \, (\bf{p} - \bf{q})^2} \label{g4spat} \\
&\hspace{1.7cm} {}- \frac{N_c g^2}{(\bf{p}^2)^2} \int \frac{d^3 q}{(2 \pi)^3}
\left( 1 + \frac{\big( (\bf{p} - \bf{q}) \cdot
\bf{q} \big)^2}{(\bf{p} - \bf{q})^2 \, \bf{q}^2} \right)
\frac{(|\bf{p} - \bf{q}| - |\bf{q}|)^2}
{2 |\bf{p} - \bf{q}| \, 2 |\bf{q}| \, (|\bf{p} - \bf{q}| + |\bf{q}|)}
\:. \label{g4spst}
\eal
Note that only the antiscreening term, the integral in Eq.\ \eqref{g4spat},
contributes to the proper color Coulomb potential (see, e.g., Ref.\
\cite{Zwa98}), while the full static potential also contains the screening
contribution \eqref{g4spst}.
The semi-analytical variational approaches \cite{FRE04,ERS08} have only
considered the proper color Coulomb potential so far.
We can associate diagrams with the different contributions in
Eqs.\ \eqref{g4spat}--\eqref{g4spst} in a natural way. The vertex that joins
two Coulomb (double) lines and one gluon line corresponds to the same
mathematical expression as the ghost-gluon vertex since both objects
originate from the Faddeev-Popov operator $(-\nabla \cdot \bf{D})$ (or its
inverse). On the other hand, the vertex with two gluon lines and one Coulomb
line translates to the expression
\be
i g f^{abc} \, \frac{|\bf{p}_2| - |\bf{p}_3|}{|\bf{p}_2| + |\bf{p}_3|}
\, \delta_{jk} \:,
\ee
where the gluon lines carry the momenta $\bf{p}_2$ and $\bf{p}_3$, (spatial)
Lorentz indices $j$ and $k$, and color indices $b$ and $c$ [cf.\ Eq.\
\eqref{g2solAAq}]. Note that the ``elementary'' Coulomb interaction
\eqref{solf4ct} is \emph{different} from the contraction of two such vertices
with a Coulomb propagator. With these conventions, we can represent the static
potential as in Fig.\ \ref{figV}. The E-operator in Fig.\ \ref{figV}
exclusively refers to the internal gluon propagators and thus amounts to
multiplying the integrand with $|\bf{p} - \bf{q}| + |\bf{q}|$.
\begin{figure}
\begin{equation*}
V =
\parbox{0.96cm}{\begin{center}
\pspicture(-0.11,-0.25)(0.85,0.25)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0.03,0)(0.71,0)
\endpspicture
\end{center}}
+ 3 \parbox{2.2cm}{\begin{center}
\pspicture(-1.05,-0.5)(1.05,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(0.45,0)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(-0.92,0)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0.45,0)(0.92,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](0,0.4)
\endpspicture
\end{center}}
+ 2 E \bigg( \parbox{1.9cm}{\begin{center}
\pspicture(-0.95,-0.5)(0.95,0.5)
\SpecialCoor
\multido{\notag=17.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\multido{\notag=197.00+11.25}{15}{%
\FPadd{-90}{\notag}{\m}
\rput{\m}(0.4;\notag){\psCoil{-65}{425}}}
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](-0.45,0)(-0.92,0)
\psline[doubleline=true,linewidth=0.8pt,doublesep=0.6pt](0.45,0)(0.92,0)
\psdots(-0.45,0)(0.45,0)
\psdots[dotstyle=o,dotscale=1.1](0,0.4)(0,-0.4)
\endpspicture
\end{center}} \bigg)
\end{equation*}
\caption{A diagrammatic interpretation of the static potential to order
$g^2$. \label{figV}}
\end{figure}
We have thus succeeded in calculating the equal-time gluon and ghost
two-point functions and the static potential to one-loop order in our
functional perturbative approach, with the results \eqref{g2AAtp}--\eqref{g2gg}
and \eqref{g4spat}--\eqref{g4spst}. The same results can be obtained from a
straightforward application of Rayleigh-Schr\"odinger perturbation theory
\cite{CRW09}. Compared to these latter calculations, we have here
developed a functional integral and diagrammatic approach that is
potentially advantageous in higher-order perturbative calculations. We
have also described a set of simplified diagrammatic rules (the
``E-operator'') for the determination of equal-time correlation functions
that is expected to carry over to higher perturbative orders and that we
hope will eventually lead to nonperturbative equations for the equal-time
correlation functions analogous to Dyson-Schwinger equations.
\section{Lagrangian approach and renormalization}
Naive power counting shows that the results of the preceding section,
Eqs.\ \eqref{g2AAtp}--\eqref{g2gg} and \eqref{g4spat}--\eqref{g4spst}, are
ultraviolet (UV) divergent and need to be renormalized. However, some of the
denominators occurring in the loop integrals are of a different
type from those that usually appear in covariant perturbation theory, and
efficient techniques for the handling of these terms have yet to be
developed. These remarks apply in particular to Eqs.\ \eqref{g2AAgl} and
\eqref{g4spst}.
The equal-time correlation functions we have been calculating are a
special or limiting case of the usual space-time correlation functions, whence
we naturally obtain the representation
\be
\langle A_i^a (\bf{p}_1, t=0) A_j^b (\bf{p}_2, t=0) \rangle =
\int_{-\infty}^\infty \frac{d p_{1,4}}{2 \pi} \frac{d p_{2,4}}{2 \pi}
\langle A_i^a (\bf{p}_1, p_{1,4}) A_j^b (\bf{p}_2, p_{2,4}) \rangle
\label{etreduct}
\ee
(with the space-time correlation functions written in Euclidean space-time).
Since the regularization and renormalization program has been
developed for the space-time correlation functions, this representation
is quite useful for our purposes. In the case of Coulomb gauge Yang-Mills
theory, however, covariance is explicitly broken through the gauge
condition, and the calculation of the space-time correlation functions
in the usual Lagrangian functional integral approach
represents a difficulty by itself. Techniques have been developed to
overcome these difficulties and applied in Ref.\ \cite{WR07b}
to the calculation of the two-point correlation functions to one-loop
order.
Before properly considering the renormalization of our results, we will
verify that these coincide on a formal level with the expressions
obtained via Eq.\ \eqref{etreduct}, taking for the space-time correlation
functions the formulas derived in the Lagrangian functional integral approach
in Ref.\ \cite{WR07b}. In our notation,
\bal
\lefteqn{\langle A_i^a (p_1) A_j^b (p_2) \rangle
= \Bigg( \frac{1}{p_1^2} - \frac{N_c g^2}{\left( p_1^2 \right)^2} \,
\frac{4}{3} \int \frac{d^4 q}{(2 \pi)^4} \frac{1}{q^2}} \hspace{1.7cm}
\label{covAAtp} \\
&\phantom{=} {}+ \frac{N_c g^2}{\left( p_1^2 \right)^2}
\int \frac{d^4 q}{(2 \pi)^4}
\frac{\big( \delta_{km} p_{1,n} + \delta_{mn} q_k - \delta_{nk} p_{1,m} \big)
\, t_{mr} (\bf{p}_1 - \bf{q}) \, t_{ns} (\bf{q})}
{q^2} \notag \\
&\phantom{=} \hspace{3cm} {}\times
\frac{\big( \delta_{lr} p_{1,s} + \delta_{rs} q_l
- \delta_{sl} p_{1,r} \big) \, t_{kl} (\bf{p}_1)}{(p_1 - q)^2} \\
&\phantom{=} {}+ \frac{N_c g^2}{\left( p_1^2 \right)^2} \,
\frac{1}{2} \int \frac{d^4 q}{(2 \pi)^4} \frac{t_{kl} (\bf{p}_1)
t_{kl} (\bf{q}) \, (p_{1,4}^2 - \bf{q}^2)}
{q^2 \, (\bf{p}_1 - \bf{q})^2} \Bigg) \delta^{ab} \,
t_{ij} (\bf{p}_1) (2 \pi)^4 \delta (p_1 + p_2) \:, \label{covAAct}
\eal
with $p_1^2 \equiv \bf{p}_1^2 + p_{1,4}^2$ in Euclidean space-time.
Formal integration of $p_{1,4}$ and $p_{2,4}$ as in Eq.\
\eqref{etreduct} and of the component $q_4$ of the loop momentum, most
easily using the residue theorem, leads to our equal-time correlation function
\eqref{g2AAtp}--\eqref{g2AAct}. In Eq.\ \eqref{covAAtp}, we have included the
tadpole diagram in order that the correspondence with the equal-time gluon
two-point function \eqref{g2AAtp}--\eqref{g2AAct} be term by term.
The tadpole diagram was not considered explicitly in Refs.\ \cite{WR07b,WR08}
because it vanishes in dimensional regularization.
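The elementary integral behind this reduction is
\be
\int_{-\infty}^\infty \frac{d q_4}{2 \pi} \, \frac{1}{q_4^2 + \bf{q}^2}
= \frac{1}{2 |\bf{q}|} \:,
\ee
which converts a single covariant denominator into the corresponding static
gluon propagator; products of several denominators are treated in the same
way by closing the integration contour in the upper half plane and summing
over the enclosed poles.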
The case of the ghost two-point function is even simpler, because it is
instantaneous already in the Lagrangian approach: from Ref.\ \cite{WR07b},
\be
\langle c^a (p_1) \bar{c}^b (p_2) \rangle
= \left( \frac{1}{\bf{p}_1^2} + \frac{N_c g^2}{\left( \bf{p}_1^2 \right)^2}
\int \frac{d^4 q}{(2 \pi)^4} \frac{p_{1,i} p_{1,j} \,
t_{ij} (\bf{q})}{q^2 \, (\bf{p}_1 - \bf{q})^2} \right)
\delta^{ab} \, (2 \pi)^4 \delta (p_1 + p_2) \:, \label{covgg}
\ee
and the fact that the dependence on $p_{1,4}$ and $p_{2,4}$ is exclusively
through the delta function for energy conservation implies that
$\langle c^a (\bf{p}_1, t_1) \bar{c}^b (\bf{p}_2, t_2) \rangle$ contains
the factor $\delta(t_1 - t_2)$. Integrating over $p_{1,4}$ \emph{and} $p_{2,4}$
or putting $t_1$ \emph{and} $t_2$ to zero then results in a factor
$\delta (0)$. In this case, in order to reproduce Eq.\ \eqref{g2gg}, we
integrate \emph{either} over $p_{1,4}$ \emph{or} over $p_{2,4}$, which just
eliminates the delta function for energy conservation [more symmetrically,
one may integrate over $(p_{1,4} + p_{2,4})$ instead]. Performing the
integral over $q_4$ then converts Eq.\ \eqref{covgg} to Eq.\ \eqref{g2gg}.
Although it is certainly not surprising that the Lagrangian functional integral
approach of Refs.\ \cite{WR07a,WR07b,WR08} gives the same results for
the equal-time correlation functions as our Hamiltonian approach, it is also
not trivial. Concerning the gauge fixing procedure, there is the
following important difference between the two approaches: in
the Lagrangian formulation, the Weyl gauge $A_0 \equiv 0$ cannot be
implemented in addition to the Coulomb gauge condition
$\nabla \cdot \bf{A} \equiv 0$ \cite{WR07c}. Indeed, in the
first-order Lagrangian formalism, integrating out the $A_0$-field rather
than setting $A_0$ to zero yields an expression in the exponent of the measure
for the functional integral that resembles the Christ-Lee
Hamiltonian \cite{WR07a}. The derivation of the Hamilton operator
\eqref{christlee} by Christ and Lee \cite{CL80}, on the other hand, relies on
the existence of a gauge transformation that makes any gauge field $A_\mu$
satisfy both the Coulomb and Weyl gauge conditions. We discuss
the possibility of simultaneously implementing the Weyl and Coulomb gauges
in the Hamiltonian and the Lagrangian approaches in the Appendix.
Although not defined as an equal-time correlation function in our approach,
it turns out that the static potential \eqref{g4spat}--\eqref{g4spst} is
related to the space-time two-point function $\langle A_0^a (p_1) A_0^b (p_2)
\rangle$ in the Lagrangian functional integral approach,
as was first pointed out by Zwanziger \cite{Zwa98}. The formal
expression for the space-time correlation function is \cite{WR07b}
\bal
\lefteqn{ \langle A_0^a (p_1) A_0^b (p_2) \rangle
= \Bigg( \frac{1}{\bf{p}_1^2} + \frac{N_c g^2}{(\bf{p}_1^2)^2} \, 3 \int
\frac{d^4 q}{(2 \pi)^4} \frac{p_{1,i} p_{1,j} \, t_{ij} (\bf{q})}
{q^2 \, (\bf{p}_1 - \bf{q})^2}} \hspace{1.6cm} \label{covspat} \\
&{}+ \frac{N_c g^2}{(\bf{p}_1^2)^2} \int \frac{d^4 q}{(2 \pi)^4}
\frac{q_4}{p_{1,4}} \, \frac{\bf{p}_1 \cdot (\bf{p}_1 - 2 \bf{q})}
{q^2 \, (p_1 - q)^2} \, t_{ij} (\bf{p}_1 - \bf{q})
t_{ij} (\bf{q}) \Bigg) \delta^{ab} \, (2 \pi)^4 \delta (p_1 + p_2)
\:. \label{covspst}
\eal
Integrating over either $p_{1,4}$ or $p_{2,4}$ and over the energy
component $q_4$ of the loop momentum, we obtain the antiscreening
contribution \eqref{g4spat} to the static potential from Eq.\ \eqref{covspat}
because the latter is already instantaneous.
In order to find Eq.\ \eqref{g4spst} starting from Eq.\
\eqref{covspst}, we have to put the respective other energy component,
$p_{2,4}$ or $p_{1,4}$, to zero in addition [this is not necessary in the
cases of Eqs.\ \eqref{covgg} and \eqref{covspat}, because there the result of
integrating over one of the energy components is independent of the other].
For $\langle A_0^a (\bf{p}_1, t_1) A_0^b (\bf{p}_2, t_2) \rangle$, this
procedure amounts to integrating over the relative time $t_1 - t_2$, which is,
in fact, intuitively quite appealing for a non-instantaneous contribution
to the potential between static sources.
We shall now use the representation \eqref{etreduct} of the equal-time
gluon two-point function and the corresponding representations of
the ghost two-point function and the static potential for the renormalization
of these equal-time correlation functions. To this end, we
make use of the explicit expressions obtained for Eqs.\
\eqref{covAAtp}--\eqref{covspst} in
Ref.\ \cite{WR07b} in dimensional regularization. Thus, by integrating
the result for \eqref{covAAtp}--\eqref{covAAct} according to Eq.\
\eqref{etreduct}, we obtain for the equal-time correlation function
\be
\langle A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \rangle =
\left[ \frac{1}{2 |\bf{p}_1|} + \frac{N_c g^2}{(4 \pi)^2} \,
\frac{1}{2 |\bf{p}_1|} \left( \frac{1}{\epsilon} - \ln \frac{\bf{p}_1^2}{\mu^2}
+ C_A \right) \right] \delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \label{regAA}
\ee
in the limit $\epsilon \to 0$, where $d = 3 - 2 \epsilon$ is the dimension
of space and $\mu$ an arbitrary mass scale. The value of the constant
$C_A$ is not relevant to our purposes; since it is given by an integral over
an explicitly known function of $p_{1,4}^2/\bf{p}_1^2$, we have carefully
checked that it is finite.
From the explicit expressions for Eqs.\ \eqref{covgg}--\eqref{covspst} in
dimensional regularization \cite{WR07b}, we find directly
\bal
\langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) \rangle &=
\left[ \frac{1}{\bf{p}_1^2} + \frac{N_c g^2}{(4 \pi)^2} \,
\frac{1}{\bf{p}_1^2} \, \frac{4}{3} \left( \frac{1}{\epsilon} -
\ln \frac{\bf{p}_1^2}{\mu^2} + C_c \right) \right] \delta^{ab} \,
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \label{reggg} \\
V (\bf{p}_1) &= \frac{1}{\bf{p}_1^2} + \frac{N_c g^2}{(4 \pi)^2} \,
\frac{1}{\bf{p}_1^2} \, \frac{11}{3} \left( \frac{1}{\epsilon} -
\ln \frac{\bf{p}_1^2}{\mu^2} + C_V \right) \:. \label{regsp}
\eal
This procedure to regularize the equal-time correlation functions finds
further support in the cases where the equal-time functions in the form
\eqref{g2AAtp}--\eqref{g2gg} and \eqref{g4spat}--\eqref{g4spst} can be
evaluated directly in dimensional regularization (in $d = 3 - 2 \epsilon$
dimensions). For Eqs.\ \eqref{g2AAct} and \eqref{g2gg}, identical results are
obtained in both ways \cite{CRW09} [also trivially for Eq.\
\eqref{g2AAtp} and for the loop integral in Eq.\ \eqref{g4spat}, which is just
three times that of Eq.\ \eqref{g2gg}].
The results \eqref{regAA} and \eqref{reggg} for the equal-time two-point
correlation functions can be renormalized in analogy to the procedures
developed for covariant theories: we introduce renormalized correlation
functions (or correlation functions of the renormalized fields)
\bal
\langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle &=
\frac{1}{Z_A} \langle A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \rangle \:, \notag \\
\langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle &=
\frac{1}{Z_c} \langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) \rangle \:.
\label{defmultren}
\eal
The simplest choice of the normalization conditions is
\bal
\left. \langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle
\right|_{\bf{p}_1^2 = \kappa^2} &= \frac{1}{2 |\bf{p}_1|} \,
\delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \notag \\
\left. \langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle
\right|_{\bf{p}_1^2 = \kappa^2} &= \frac{1}{\bf{p}_1^2} \,
\delta^{ab} \, (2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \label{rencond}
\eal
at the renormalization scale $\kappa$. With these normalization conditions
and the results \eqref{regAA}, \eqref{reggg}, we obtain
\bal
Z_A (\kappa) &= 1 + \frac{N_c g^2}{(4 \pi)^2} \left( \frac{1}{\epsilon} -
\ln \frac{\kappa^2}{\mu^2} + C_A \right) \:, \notag \\
Z_c (\kappa) &= 1 + \frac{N_c g^2}{(4 \pi)^2} \, \frac{4}{3}
\left( \frac{1}{\epsilon} - \ln \frac{\kappa^2}{\mu^2} + C_c \right) \:,
\label{wfr}
\eal
to order $g^2$.
The expression \eqref{regsp} for the static potential needs to be
renormalized, too. This is most naturally achieved by a
renormalization of the coupling constant as was first suggested in Ref.\
\cite{DL81}, through
\be
\left. g^2 V (\bf{p}) \right|_{\bf{p}^2 = \kappa^2}
= \frac{\bar{g}_R^2 (\kappa)}{\bf{p}^2} \:, \label{defgbar}
\ee
see Eq.\ \eqref{defsp}. Hence, from Eq.\ \eqref{regsp},
\be\label{gbar}
\bar{g}_R^2 (\kappa) = g^2 \left[ 1 + \frac{N_c g^2}{(4 \pi)^2} \,
\frac{11}{3} \left( \frac{1}{\epsilon} - \ln \frac{\kappa^2}{\mu^2} + C_V
\right) \right] \:,
\ee
which implies for the corresponding beta function to one-loop order,
\be
\kappa^2 \frac{\d}{\d \kappa^2} \, \bar{g}_R^2 (\kappa) =
\frac{\bar{\beta}_0}{(4 \pi)^2} \, \bar{g}_R^4 (\kappa) \:, \label{defbeta}
\ee
that
\be
\bar{\beta}_0 = -\frac{11}{3} \, N_c \:. \label{g2beta}
\ee
This is the well-known result from covariant perturbation theory (for
Yang-Mills theory in covariant gauges), and has also been found in Ref.\
\cite{WR07b}.
For the rest of this section, we will pursue a more conventional way of
renormalizing the coupling constant (which, however, leads to the same
result). To this end, we consider the equal-time ghost-gluon three-point
correlation function
$\langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) A^c_i (\bf{p}_3) \rangle$
to order $g^3$ (one loop). The calculation
of this correlation function is performed in analogy with
the determination of the equal-time two-point correlation functions in
Section 3, using the result for the vacuum wave functional obtained in
Section 2. In this particularly simple case (and to the order considered), the
external ``propagators'' (equal-time two-point functions) can be factorized
to define the equal-time proper three-point vertex
$\Gamma^{abc}_i (\bf{p}_1, \bf{p}_2, \bf{p}_3)$ as
\bmu
\langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) A^c_i (\bf{p}_3) \rangle
= - \int \frac{d^3 p_4}{(2 \pi)^3} \frac{d^3 p_5}{(2 \pi)^3}
\frac{d^3 p_6}{(2 \pi)^3} \,
\langle c^a (\bf{p}_1) \bar{c}^d (-\bf{p}_4) \rangle \\
{}\times \Gamma^{def}_j (\bf{p}_4, \bf{p}_5, \bf{p}_6) \,
\langle c^e (-\bf{p}_5) \bar{c}^b (\bf{p}_2) \rangle \,
\langle A_j^f (-\bf{p}_6) A_i^c (\bf{p}_3) \rangle \:.
\emu
The explicit perturbative result is
\bal
\lefteqn{\Gamma^{abc}_j (\bf{p}_1, \bf{p}_2, \bf{p}_3) = -i g f^{abc}
\Bigg( p_{1,k} - \frac{N_c g^2}{2} \int \frac{d^3 q}{(2 \pi)^3}
\frac{\big[ \bf{p}_1 \cdot \bf{p}_2 - (\bf{p}_1 \cdot \hat{\bf{q}})
(\bf{p}_2 \cdot \hat{\bf{q}}) \big] (p_{1,k} - q_k)}{2 |\bf{q}| \,
(\bf{p}_1 - \bf{q})^2 \, (\bf{p}_2 + \bf{q})^2}} \hspace{2cm}
\label{g3ggv1} \\
&{}+ \frac{2 N_c g^2}{2} \int \frac{d^3 q}{(2 \pi)^3}
\frac{p_{1,l} p_{2,n} \, t_{lm} (\bf{p}_1 - \bf{q})
t_{nr} (\bf{p}_2 + \bf{q})}
{\bf{q}^2 \, 2 |\bf{p}_1 - \bf{q}| \, 2 |\bf{p}_2 + \bf{q}|} \notag \\
&\phantom{+} {}\times \frac{\delta_{km} (p_{1,r} - p_{3,r} - q_r)
- \delta_{mr} (p_{1,k} - p_{2,k} - 2 q_k) - \delta_{rk} (p_{2,m} - p_{3,m}
+ q_m)}{|\bf{q}| + |\bf{p}_1 - \bf{q}| + |\bf{p}_2 + \bf{q}|} \Bigg)
\label{g3ggv2} \\[2mm]
&\hspace{6cm} {}\times t_{jk} (\bf{p}_3) (2 \pi)^3
\delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \:. \notag
\eal
It is represented diagrammatically in Fig.\ \ref{figGamma}.
\begin{figure}
\begin{equation*}
\Gamma = - \parbox{1.0cm}{\begin{center}
\pspicture(-0.5,-0.5)(0.5,0.6)
\psline[linestyle=dashed](0,0)(-0.38,-0.38)
\psline[linestyle=dashed](0,0)(0.38,-0.38)
\pscoil[coilarmA=0.1,coilarmB=0](0,0)(0,0.47)
\psdots(0,0)
\endpspicture
\end{center}}
- \parbox{1.6cm}{\begin{center}
\pspicture(-0.8,-0.6)(0.8,0.7)
\psline[linestyle=dashed](-0.37,-0.18)(0,0.19)
\psline[linestyle=dashed](0.37,-0.18)(0,0.19)
\pscoil[coilarm=0.1](-0.37,-0.18)(0.37,-0.18)
\psline[linestyle=dashed](-0.37,-0.18)(-0.68,-0.49)
\psline[linestyle=dashed](0.37,-0.18)(0.68,-0.49)
\pscoil[coilarmA=0.1,coilarmB=0](0,0.19)(0,0.58)
\psdots(-0.37,-0.18)(0.37,-0.18)(0,0.19)
\psdots[dotstyle=o,dotscale=1.1](0,-0.18)(-0.185,0.005)(0.185,0.005)
\endpspicture
\end{center}}
- \parbox{1.7cm}{\begin{center}
\pspicture(-0.85,-0.64)(0.85,0.73)
\pscoil[coilarm=0.1](-0.405,-0.2)(0,0.205)
\pscoil[coilarm=0.1](0.405,-0.2)(0,0.205)
\psline[linestyle=dashed](-0.405,-0.2)(0.405,-0.2)
\psline[linestyle=dashed](-0.405,-0.2)(-0.715,-0.51)
\psline[linestyle=dashed](0.405,-0.2)(0.715,-0.51)
\pscoil[coilarmA=0.1,coilarmB=0](0,0.205)(0,0.595)
\psdots(-0.405,-0.2)(0.405,-0.2)(0,0.205)
\psdots[dotstyle=o,dotscale=1.1](0,-0.2)(-0.203,0.003)(0.203,0.003)
\endpspicture
\end{center}}
\end{equation*}
\caption{The proper ghost-gluon vertex to one-loop order. \label{figGamma}}
\end{figure}
Note that due to the transversality of the gauge, two powers of the external
momenta can be factorized from the loop integrals [cf.\ Eq.\
\eqref{g2gg}] and, as a result, the integrals are UV finite. This phenomenon
is well-known in another transverse gauge, the Landau gauge \cite{TMP71}.
For future use, we note that by very lengthy algebra the tensor
structure in Eq.\ \eqref{g3ggv2} can be simplified as follows:
\bal
&\phantom{=} p_{1,l} \, p_{2,n} \, t_{lm} (\bf{p}_1 - \bf{q})
t_{nr} (\bf{p}_2 + \bf{q}) \big[ \delta_{km} (p_{1,r} - p_{3,r} - q_r)
\notag \\[1mm]
&\phantom{=} \hspace{1cm} {}- \delta_{mr} (p_{1,k} - p_{2,k} - 2 q_k) -
\delta_{rk} (p_{2,m} - p_{3,m} + q_m) \big] t_{jk} (\bf{p}_3) (2 \pi)^3
\delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \notag \\
&= 2 \bigg( \bf{q}^2 p_{1,k} + (\bf{p}_1 \cdot \bf{p}_2) q_k
- \frac{(\bf{p}_1 - \bf{q}) \cdot (\bf{p}_2 + \bf{q})}
{(\bf{p}_1 - \bf{q})^2 \, (\bf{p}_2 + \bf{q})^2} \Big\{
[\bf{q} \cdot (\bf{p}_1 - \bf{q})] [\bf{q} \cdot (\bf{p}_2 + \bf{q})] p_{1,k}
\notag \\
&\phantom{=} \hspace{3.2cm} {}+ [\bf{p}_1 \cdot (\bf{p}_1 - \bf{q})]
[\bf{p}_2 \cdot (\bf{p}_2 + \bf{q})] q_k \Big\} \bigg)
t_{jk} (\bf{p}_3) (2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \:.
\eal
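Given the length of the algebra involved, an independent check is
reassuring. The following Python sketch (ours; the overall factor
$(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3)$ is stripped off)
compares both sides of the identity numerically for random momenta:
\begin{verbatim}
# Numerical spot check of the simplified tensor structure above.
import numpy as np

rng = np.random.default_rng(0)

def t(p):                            # transverse projector t_ij(p)
    return np.eye(3) - np.outer(p, p) / (p @ p)

p1, p2, q = rng.normal(size=(3, 3))
p3 = -p1 - p2                        # momentum conservation

u = t(p1 - q) @ p1                   # u_m = p_{1,l} t_{lm}(p1 - q)
v = t(p2 + q) @ p2                   # v_r = p_{2,n} t_{nr}(p2 + q)

# left-hand side: bracket contracted with u_m, v_r, then with t_jk(p3)
A = (u * (v @ (p1 - p3 - q)) - (u @ v) * (p1 - p2 - 2*q)
     - v * (u @ (p2 - p3 + q)))
lhs = t(p3) @ A

# right-hand side in the simplified form
w = ((p1 - q) @ (p2 + q)) / (((p1 - q) @ (p1 - q)) * ((p2 + q) @ (p2 + q)))
R = 2 * ((q @ q) * p1 + (p1 @ p2) * q
         - w * ((q @ (p1 - q)) * (q @ (p2 + q)) * p1
                + (p1 @ (p1 - q)) * (p2 @ (p2 + q)) * q))
rhs = t(p3) @ R

print(np.allclose(lhs, rhs))         # True
\end{verbatim}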
We define the renormalized coupling constant in analogy with the covariant
case as
\bal
\Gamma^{abc}_{R,j} (\bf{p}_1, \bf{p}_2, \bf{p}_3)
\Big|_{\bf{p}_1^2 = \bf{p}_2^2 = \bf{p}_3^2 = \kappa^2}
&\equiv Z_c (\kappa) Z_A^{1/2} (\kappa) \,
\Gamma^{abc}_j (\bf{p}_1, \bf{p}_2, \bf{p}_3)
\Big|_{\bf{p}_1^2 = \bf{p}_2^2 = \bf{p}_3^2 = \kappa^2} \notag \\
&= -i g_R (\kappa) f^{abc} \, p_{1,k} \, t_{jk} (\bf{p}_3) (2 \pi)^3
\delta (\bf{p}_1 + \bf{p}_2 + \bf{p}_3) \label{grendef}
\eal
at the symmetric point. As a consequence, using Eq.\ \eqref{wfr}
and the UV finiteness of the loop integrals \eqref{g3ggv1}--\eqref{g3ggv2},
\be
g_R (\kappa) = g \left[ 1 + \frac{N_c g^2}{(4 \pi)^2} \,
\frac{11}{6} \left( \frac{1}{\epsilon} - \ln \frac{\kappa^2}{\mu^2} + C
\right) \right] \:, \label{gren}
\ee
with a finite constant $C$ given by $(11/6) C = (4/3) C_c + (1/2) C_A
+ C_v$, where $C_v$ (not to be confused with the constant $C_V$ of the static
potential) is obtained from the finite loop integrals in Eqs.\
\eqref{g3ggv1}--\eqref{g3ggv2}.
For the beta function defined in analogy with Eq.\ \eqref{defbeta}, we obtain
from Eq.\ \eqref{gren},
\be
\beta_0 = - \frac{11}{3} \, N_c \:.
\ee
This beta function coincides with the one obtained in Eq.\ \eqref{g2beta}
before with the renormalized coupling constant defined through the static
potential. The integration of the renormalization group equation
\eqref{defbeta} gives the well-known (one-loop) result
\be
g_R^2 (\kappa) = \frac{(4 \pi)^2}{\displaystyle \frac{11}{3} N_c
\ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)} \label{grensol}
\ee
[and the same for $\bar{g}_R^2 (\kappa)$ \eqref{gbar}].
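As a simple cross-check, the closed form \eqref{grensol} may be compared
with a direct numerical integration of the flow equation \eqref{defbeta}; a
minimal Python sketch (ours, purely illustrative):
\begin{verbatim}
# Integrate kappa^2 d g^2/d kappa^2 = beta0/(4 pi)^2 (g^2)^2 in the
# variable L = ln(kappa^2/Lambda_QCD^2) and compare with Eq. [grensol].
import numpy as np

Nc = 3
beta0 = -11.0 / 3.0 * Nc

def g2_exact(L):                        # closed form, Eq. [grensol]
    return (4 * np.pi)**2 / ((11.0 / 3.0) * Nc * L)

L = np.linspace(5.0, 50.0, 100001)
g2 = np.empty_like(L)
g2[0] = g2_exact(L[0])                  # initial condition at L = 5
for i in range(len(L) - 1):             # simple Euler steps in L
    g2[i+1] = g2[i] + (L[i+1] - L[i]) * beta0 / (4 * np.pi)**2 * g2[i]**2

print(abs(g2[-1] / g2_exact(L[-1]) - 1))   # small discretization error
\end{verbatim}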
It must be noted that for renormalization group improvements like Eq.\
\eqref{grensol} to be sensible we have to suppose that the three-dimensional
formulation presented here is multiplicatively renormalizable to all orders
in the same way as the usual formulation of a renormalizable covariant quantum
field theory, which is not known at present (even the renormalizability of the
Lagrangian functional integral approach to Coulomb gauge Yang-Mills theory
has not yet been shown). Equation \eqref{grensol} and the developments to
follow are hence to some degree speculative, but it seemed of some interest
to us to explore the consequences of the natural assumption of multiplicative
renormalizability.
With these qualifications, we go on to use a standard
renormalization group argument to extract the asymptotic UV behavior of the
equal-time two-point correlation functions. To this end, we differentiate
Eq.\ \eqref{defmultren} with respect to $\kappa^2$, using the
$\kappa$-independence of the ``bare'' two-point functions. It is then
seen that the $\kappa$-dependence of the renormalized two-point functions
is determined by the anomalous dimensions
$(\kappa^2 \, \d \ln Z_{A,c}/\d \kappa^2)$. Evaluating the latter
from Eq.\ \eqref{wfr} and replacing $g^2$ in the results with
$g_R^2 (\kappa)$, we obtain the desired renormalization group equations for
the equal-time two-point functions, explicitly
\bal
\kappa^2 \frac{\d}{\d \kappa^2} \,
\langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle
&= \frac{N_c g_R^2 (\kappa)}{(4 \pi)^2} \,
\langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle \:, \notag \\
\kappa^2 \frac{\d}{\d \kappa^2}
\langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle
&= \frac{4}{3} \, \frac{N_c g_R^2 (\kappa)}{(4 \pi)^2}
\, \langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle \:.
\eal
In these equations, we substitute from Eq.\ \eqref{grensol} for
$g_R^2 (\kappa)$ and integrate. Using the normalization conditions
\eqref{rencond} for the determination of the integration constants,
one obtains the momentum dependence of the equal-time two-point functions:
\bal
\langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle
&= \frac{1}{2 |\bf{p}_1|} \left(
\frac{\displaystyle \ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)}
{\displaystyle \ln \left( \frac{\bf{p}_1^2}{\Lambda_{QCD}^2} \right)} \right)^{3/11}
\hspace{-2mm} \delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \notag \\[2mm]
\langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle
&= \frac{1}{\bf{p}_1^2} \left(
\frac{\displaystyle \ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)}
{\displaystyle \ln \left( \frac{\bf{p}_1^2}{\Lambda_{QCD}^2} \right)} \right)^{4/11}
\hspace{-2mm} \delta^{ab} \, (2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:.
\label{2prensol}
\eal
The momentum dependence of the ``bare'' two-point functions, obtained from
Eq.\ \eqref{2prensol} simply by multiplying with the corresponding wave
function renormalization constants $Z_{A,c}$, is obviously the same. By
solving the renormalization group equations for $Z_A$ and $Z_c$ that involve
the anomalous dimensions, it may be shown explicitly that the bare
two-point functions are $\kappa$-independent, as they must be.
For the static potential, on the other hand, we immediately obtain from
Eqs.\ \eqref{defgbar} and \eqref{grensol} [for $\bar{g}_R^2 (\kappa)$] the
renormalization group improved result
\be
g^2 V (\bf{p})= \frac{(4 \pi)^2}{\displaystyle \frac{11}{3} N_c \, \bf{p}^2
\ln \left( \frac{\bf{p}^2}{\Lambda_{QCD}^2} \right)} \:.
\ee
Note that the latter one-loop formula constitutes a very direct expression
of asymptotic freedom.
The result \eqref{2prensol} for the momentum dependence of the equal-time
two-point functions has also been obtained in Ref.\ \cite{Sch08} from a
Dyson-Schwinger equation for the equal-time ghost correlator, where the
gauge-invariant one-loop running \eqref{grensol} of the renormalized coupling
constant is used as an input. We briefly discuss that derivation here,
adapted to the conventions of the present paper.
The renormalized equal-time two-point functions are parameterized as
\be
\langle A_{R,i}^a (\bf{p}_1) A_{R,j}^b (\bf{p}_2) \rangle =
\frac{1}{2\omega(\bf{p}_1^2)} \, \delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2)
\ee
and
\be
\langle c_R^a (\bf{p}_1) \bar{c}_R^b (\bf{p}_2) \rangle =
\frac{d(\bf{p}_1^2)}{\bf{p}_1^2} \, \delta^{ab} \,
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:,
\ee
and normalized according to the conditions \eqref{rencond}. The
renormalized coupling constant is defined as before in Eq.\ \eqref{grendef}.
Then the Dyson-Schwinger equation for the equal-time ghost two-point function
reads \cite{FRE04}
\be
d^{-1} (\bf{p}^2) = Z_c - N_c \, g_R^2 (\kappa) \int \frac{d^3 q}{(2 \pi)^3}
\, \frac{1 - (\hat{\bf{p}} \cdot \hat{\bf{q}})^2}{2 \omega (\bf{q}^2)} \,
\frac{d \big( (\bf{p} - \bf{q})^2 \big)}{(\bf{p} - \bf{q})^2} \:. \label{DS}
\ee
Here we have approximated the full ghost-gluon vertex appearing in the
exact equation by the tree-level vertex, as is appropriate for obtaining
the (renormalization-group improved) one-loop expressions.
In order to solve Eq.\ \eqref{DS}, we make the following, properly normalized,
ans\"atze for the two-point functions in the ultraviolet limit
$\bf{p}^2 \gg \Lambda_{QCD}^2$,
\be
\frac{\abs{\bf{p}}}{\omega (\bf{p}^2)} = \left(
\frac{\displaystyle \ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)}
{\displaystyle \ln \left( \frac{\bf{p}^2}{\Lambda_{QCD}^2} \right)} \right)^\gamma \:,
\qquad d (\bf{p}^2) = \left(
\frac{\displaystyle \ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)}
{\displaystyle \ln \left( \frac{\bf{p}^2}{\Lambda_{QCD}^2} \right)} \right)^\delta \:,
\ee
with the exponents $\gamma$ and $\delta$ to be determined. The integral in
Eq.\ \eqref{DS} can then be calculated in the limit
$\bf{p}^2 \gg \Lambda_{QCD}^2$ and the Dyson-Schwinger equation yields the
relation \cite{Sch08}
\be
\ln^{-\delta} \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)
\ln^{\delta} \left( \frac{\bf{p}^2}{\Lambda_{QCD}^2} \right)
= N_c \, g_R^2 (\kappa) \frac{1}{(4\pi)^2} \frac{4}{3\delta}
\ln^{\gamma+\delta} \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right)
\ln^{1-\gamma-\delta} \left( \frac{\bf{p}^2}{\Lambda_{QCD}^2} \right) \:,
\ee
from which we infer the sum rule
\be
\label{thelogsumrule}
\gamma + 2 \delta = 1
\ee
for the exponents as well as the identity
\be
\label{coeffCoulUV}
g_R^2 (\kappa) \frac{1}{(4\pi)^2} \frac{4}{3\delta} N_c
\ln \left( \frac{\kappa^2}{\Lambda_{QCD}^2} \right) = 1
\ee
for the coefficients. Consistency of the latter relation with the
well-known perturbative result \eqref{grensol} yields the exponents
\be
\gamma = \frac{3}{11} \;, \qquad \delta = \frac{4}{11} \;,
\ee
where we have used the sum rule \eqref{thelogsumrule} again.
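Explicitly, Eq.\ \eqref{grensol} gives
$N_c \, g_R^2 (\kappa) \ln (\kappa^2 / \Lambda_{QCD}^2) / (4 \pi)^2 = 3/11$,
so that Eq.\ \eqref{coeffCoulUV} reduces to $4/(11 \delta) = 1$.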
We have thus regained the result of Eq.\ \eqref{2prensol}.
\section{Conclusions}
In this work, we have
accomplished a systematic perturbative solution of the Yang-Mills
Schr\"o\-din\-ger equation in Coulomb gauge for the vacuum wave functional
following the $e^S$ method in many-body physics. This resulted in
a novel functional integral representation for
the calculation of equal-time correlation functions. We have derived a
diagrammatic representation of these functions, order
by order in perturbation theory, where the vertices in the diagrams are
determined from the perturbative calculation of the vacuum wave functional.
The number of vertices, which themselves have perturbative expansions,
grows with the perturbative order. We have determined the equal-time gluon
and ghost two-point correlation functions and the potential between static
color charges to one-loop order in this way.
The results coincide with those of a straightforward calculation in
Rayleigh-Schr\"odinger perturbation theory \cite{CRW09}, and also with the
values for equal times of the two-point space-time correlation
functions from a Lagrangian functional integral representation \cite{WR07b}.
We have emphasized that the latter
coincidence is not trivial since the gauge fixing procedures in the
Hamiltonian and the Lagrangian approach are profoundly different. We
have also used the results of the Lagrangian approach to renormalize
the equal-time two-point correlation functions and the static potential.
With the help of the non-renormalization of the ghost-gluon vertex which we
also show, or, alternatively, from the static potential, we can extract
the running of the correspondingly defined renormalized coupling
constant. The result for the beta function is the one also found in covariant
and other gauges, $\beta_0 = - (11/3) N_c$ to one-loop order. We have used
standard renormalization group arguments to determine the asymptotic
ultraviolet behavior of the equal-time two-point functions and
the static potential under the assumption of multiplicative renormalizability
to all orders, with the result that
\bal
\langle A_i^a (\bf{p}_1) A_j^b (\bf{p}_2) \rangle &\propto
\frac{\big( \ln (\bf{p}_1^2/\Lambda_{QCD}^2) \big)^{-3/11}}{2 |\bf{p}_1|} \,
\delta^{ab} \, t_{ij} (\bf{p}_1)
(2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \notag \\
\langle c^a (\bf{p}_1) \bar{c}^b (\bf{p}_2) \rangle &\propto
\frac{\big( \ln (\bf{p}_1^2/\Lambda_{QCD}^2) \big)^{-4/11}}{\bf{p}_1^2} \,
\delta^{ab} \, (2 \pi)^3 \delta (\bf{p}_1 + \bf{p}_2) \:, \notag \\
g^2 V (\bf{p}_1) &\propto
\frac{\big( \ln (\bf{p}_1^2/\Lambda_{QCD}^2) \big)^{-1}}{\bf{p}_1^2}
\eal
to one-loop order in the perturbative (asymptotically free) regime.
It is clear from the presence of an infinite number of vertices
in the functional integral representation of the equal-time correlation
functions (to infinite perturbative order) that the corresponding
Dyson-Schwinger equations contain an infinite number of terms, a very
serious problem for the determination of an appropriate approximation scheme
for nonperturbative solutions. The existence of simplified diagrammatic
rules for the calculation of equal-time correlation functions via the
E-operator, to be appropriately extended to all perturbative orders,
seems to point toward the possibility of formulating similar nonperturbative
equations with a finite number of terms. It would indeed be very interesting
to repeat the type of infrared analysis applied before to Yang-Mills theory
in the Landau gauge \cite{SLR06,SHA97,LSZ02} and to a variational ansatz in
the Coulomb gauge \cite{SS01}--\cite{SLR06} for such a set of equations.
\subsection*{Acknowledgments}
It is a pleasure to thank Adam Szczepaniak and
Peter Watson for many valuable discussions on the Coulomb gauge.
A.W. is grateful to the Institute for Theoretical Physics at the University of
T\"ubingen for the warm hospitality extended to him during a two-months stay
in the summer of 2008. Support by the Deutscher Akademischer Austauschdienst
(DAAD), Conacyt grant 46513-F, CIC-UMSNH,
Deutsche Forschungsgemeinschaft (DFG) under contract Re 856/6-3, and
Cusanuswerk--Bisch\"ofliche Studienf\"orderung
is gratefully acknowledged.
\begin{appendix}
\section{Gauge transformations in Lagrangian and Hamiltonian formalisms}
While the Lagrangian approach to Yang--Mills theory offers some convenient
features (such as manifest Lorentz invariance), the more cumbersome
Hamiltonian approach yields equations of motion that are invariant under a
larger set of gauge transformations. Prior to quantization, we discuss gauge
invariance starting from the classical Lagrangian and Hamiltonian functions
in turn.
Note that we will employ standard covariant notation in this
appendix; in particular, spatial subindices refer to the covariant
components of the corresponding 4-vector or tensor.
The Lagrangian function of the gauge sector,
\be
L=-\frac{1}{4}\int d^3x\: F_{\mu\nu}^a(x)F^{\mu\nu}_a(x)\; ,
\ee
is invariant under gauge transformations of the gauge field $A_\mu(x)\equiv A_\mu^a (x)T^a$,
\be
\label{Ltra}
A_\mu(x)\rightarrow U(x)A_\mu(x)U^\dagger(x) + \frac{1}{g}U(x)\partial_\mu U^\dagger(x) \; ,
\ee
where $U\in SU(N)$ and $[T^a,T^b]=f^{abc}T^c$.
The Weyl gauge, $A_0^a(x)=0$, is reached by choosing the time-ordered exponential
\be
\label{Weyl}
U^\dagger(x)= \textrm{T}\:\exp\left\{- g \int^t dt' A_0(\bf{x},t') \right\}\; .
\ee
To remain in the Weyl gauge, the transformation \eqref{Weyl} may be followed by
time-independent transformations $U(\bf{x})$ only. We can therefore fix the Coulomb
gauge, $\partial_i A^i_a(x)=0$, at one instant of time but it is impossible to fix
both gauges simultaneously for all times.
In the Hamiltonian formalism, on the other hand, gauge transformations are generated
by (first-class) constraints in configuration space \cite{Dir64}. To see this,
supplement the Hamiltonian function
\be
H=\frac{1}{2}\int d^3x\: \left(\bf{\Pi}_a^2(x) + \bf{B}_a^2(x)\right) - \int d^3 x A_0^a(x) \hat D_i^{ab}(x)\Pi_b^i(x)
\ee
by the constraints
\be
\label{phis}
\phi_1^a(x)=\Pi_0^a(x)\approx 0\; ,\quad \phi_2^a(x)=\hat D^{ab}_i(x)\Pi^i_b(x)\approx 0\;
\ee
with some arbitrary Lagrange multiplier fields $\{\lambda_k^a(x)\}$,
\be
\label{H_E}
H_E=H+\sum_{k=1,2}\int d^3 x \:\lambda_k^a(x)\phi_k^a(x)\; .
\ee
Here we have defined $\Pi_\mu^a(x)=F_{\mu 0}^a(x)$ and
$\hat D^{ab}_i(x)=\delta^{ab}\partial_i-gf^{abc}A_i^c(x)$.
The extended
Hamiltonian $H_E$ in Eq.\ \eqref{H_E} is equivalent to the original
Hamiltonian $H$ since the constraints $\{\phi_k^a(x)\}$ vanish weakly
(in the Dirac sense \cite{Dir64}). The infinitesimal time evolution of
the gauge field $A_\mu^a({\bf{x}},t)$ from $t_0$ to $t=t_0+\delta t$,
generated by $H_E$ through the Poisson brackets,
\be
A_\mu^a({\bf{x}},t)= A_\mu^a({\bf{x}},t_0)+\delta t \:\{ A_\mu^a({\bf{x}},t_0),H\} + \delta t \sum_{k=1,2}\int d^3 y \: \lambda_k^b(y) \{ A_\mu^a({\bf{x}},t_0), \phi_k^b(y)\} \; ,
\ee
gives for two different sets of Lagrange multiplier functions
$\{\lambda_k'^b(x)\}$ and $\{\lambda_k''^b(x)\}$ two different
results $A_\mu'^a$ and $A_\mu''^a$, respectively. These differ to
${\cal{O}}( \delta t )$ by
\be
\label{fdiff}
A_\mu''^a({\bf{x}},t) - A_\mu'^a({\bf{x}},t) = \delta t \sum_{k=1,2} \int d^3y \left(\lambda_k''^b(y)-\lambda_k'^b(y)\right) \{ A_\mu^a({\bf{x}},t), \phi_k^b(y)\}
\ee
and are physically equivalent. Thus, the function
\be
G=\sum_{k=1,2} \int d^3y \:\tau_k^a(y)\phi_k^a(y)
\ee
generates infinitesimal gauge transformations in the (extended) Hamiltonian
formalism with arbitrary functions $\tau_1^a(x)$ and $\tau_2^a(x)$. Computing
the Poisson brackets in Eq.\ \eqref{fdiff} yields
\bal
\label{A0trafo}
A_0^a(x) & \rightarrow A_0^a(x) + \tau_1^a(x) \; , \\
\label{Aktrafo}
A_i^a(x) &\rightarrow A_i^a(x) - \hat D^{ab}_i (x)\tau_2^b(x) \; .
\eal
The difference from the gauge transformations \eqref{Ltra} in the Lagrangian
formalism is that the time component and the spatial components of the gauge
field transform independently. The two functions $\tau_1^a(x)$ and $\tau_2^a(x)$
allow for a larger set of gauge transformations than the single function $U(x)$
in the Lagrangian formalism. The simultaneous fixing of Weyl and Coulomb gauges,
which is impossible in the Lagrangian formalism, can be accomplished in the
Hamiltonian formalism by appropriately choosing $\tau_1^a(x)$ and $\tau_2^a(x)$.
See Ref.\ \cite{Cos84} for the abelian case. Subsequently, the non-abelian
gauge-fixed theory can be canonically quantized with projection on the physical
Hilbert space \cite{CL80}, or with Dirac brackets \cite{Sch08} enforcing all
constraints strongly. Both quantization prescriptions produce the Hamiltonian
operator given by Eq.\ \eqref{christlee}.
\end{appendix}
\section{Introduction}
\begin{figure}[!ht]
\centering
\begin{subfigure}{1.0\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures/smoke_stable0027.jpeg}
\end{subfigure} \\
\begin{subfigure}{1.0\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures/smoke_stable0126.jpeg}
\end{subfigure} \\
\begin{subfigure}{1.0\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures/smoke_stable0260.jpeg}
\end{subfigure}
\caption{{\textbf{Colorful smoke jets}}. Multicolored jets of smoke are simulated with BSLQB. Intricate mixing is induced as the flows collide at the spherical boundary.}\label{fig:rainbowsmoke}
\end{figure}
Whether it be billowing smoke, energetic explosions, or breaking waves, simulation of incompressible flow is an indispensable tool for modern visual effects. Ever since the pioneering works of Foster and Metaxas \shortcite{foster:1996:liquids}, Stam \shortcite{stam:1999:stable} and Fedkiw et al. \shortcite{fedkiw:2001:visual,foster:2001:practical}, the Chorin \shortcite{chorin:1967:numerical} splitting of advective and pressure projection terms has been the standard in computer graphics applications \cite{bridson:2008:fluid-simulation}. Most techniques use regular grids of Marker-And-Cell (MAC) \cite{harlow:1965:viscous-flow} type with pressure and velocity components staggered at cell centers and faces respectively. Furthermore, advection is most often discretized using semi-Lagrangian techniques originally developed in the atmospheric sciences \cite{stam:1999:stable,robert:1981:stable}. Although well-established, these techniques are not without their drawbacks. For example, the staggering utilized in the MAC grids is cumbersome since variables effectively live on four different grids. This can complicate many algorithms related to incompressible flow. For instance, Particle-In-Cell (PIC) \cite{harlow:1964:pic} techniques like FLIP \cite{brackbill:1986:flip-pic,zhu:2005:sand-fluid}, Affine/Polynomial Particle-In-Cell (APIC/PolyPIC) \cite{jiang:2015:apic,fu:2017:poly} and the Material Point Method (MPM) \cite{sulsky:1994:history-materials,stomakhin:2014:augmented-mpm} must transfer information separately to and from each individual grid. Similarly, semi-Lagrangian techniques must separately solve for upwind locations at points on each of the velocity component grids. Moreover, while semi-Lagrangian techniques are renowned for the large time steps they admit (notably larger than the Courant-Friedrichs-Lewy (CFL) condition allows), their inherent stability is plagued by dissipation that must be removed for most visual effects phenomena. Another limitation of the MAC grid arises with free-surface water simulation. In this case, the staggering prevents many velocity components near the fluid free surface from receiving a correction during projection (see e.g. \cite{bridson:2008:fluid-simulation}). Each of these velocity components must then be separately extrapolated from the interior in order to receive a pressure correction. \\
\\
MAC grids are useful because the staggering prevents pressure null modes while allowing for accurate second order central differencing in discrete grad/div operators. However, there are alternatives in the computational physics literature. Many mixed Finite Element Method (FEM) techniques use collocated velocities \cite{hughes:2000:book} without suffering from pressure mode instabilities. For example, Taylor-Hood elements \cite{taylor:1973:TH} use collocated multi-quadratic velocity interpolation and multilinear pressure interpolation to enforce incompressibility. Recently, B-spline interpolation \cite{deboor:1978:splines} has been used with Taylor-Hood elements \cite{bressan:2010:isogeometric}. We build on this work and develop an approach based on collocated multi-quadratic B-spline interpolation for velocities. This choice is motivated by the simplicity of collocated grids compared to staggering, and also by the ease of attaining continuous derivatives with B-spline interpolation. For example, this interpolation is often chosen with MPM applications since $C^1$ interpolation is essential for stability \cite{steffen:2008:analysis}. In the context of fluids, we show that this allows for extremely stable and accurate advection.
\\
\\
We develop a new approach for Chorin splitting \shortcite{chorin:1967:numerical} based on the collocated multiquadratic B-spline velocity, multilinear pressure Taylor-Hood element \cite{bressan:2010:isogeometric}. However, unlike the fully collocated technique of Bressan \shortcite{bressan:2010:isogeometric}, we stagger pressures on the nodes of the grid and velocities at cell centers as in \cite{ando:2013:surfacing}, since this reduces coupling in the pressure projection system and naturally accommodates particle-based definition of the flow domain for free-surface simulation of water. Notably, our formulation does not require velocity extrapolation after pressure projection for free-surface flow calculations as is typically needed with MAC grids. We use regular grids, but as in \cite{batty:2007:solid-fluid,batty:2008:buckling,larionov:2017:stokes}, we allow for irregular domains in a variational way using cut cells. However, rather than a weighted finite difference approach, we use an FEM approach as in XFEM \cite{belytschko:2009:review,koschier:2017:xfem} and virtual node (VNA) \cite{schroeder:2014:vna} techniques. In VNA and XFEM approaches, integrals arising in the variational formulation are carried out over the intersection of the grid with the domain geometry. \\
\\
\begin{figure}[!ht]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\columnwidth]{figures/dambreak_1.jpeg}
\end{subfigure} \\
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\columnwidth]{figures/dambreak_14.jpeg}
\end{subfigure} \\
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\columnwidth]{figures/dambreak_0090.jpeg}
\end{subfigure}
\caption{{\textbf{Dam break}}. A block of water falls in a rectangular domain with obstacles. Dynamic splashing behavior is followed by settling of the water in the tank. White water rendering effects are added based on \cite{ihmsen:2012:unified}.}\label{fig:dambreak_final}
\end{figure}
We leverage $C^1$ continuity guaranteed by our quadratic B-spline velocity interpolation to develop BSLQB, a novel Backward Semi-Lagrangian (BSL) \cite{robert:1981:stable} technique that achieves second order accuracy in space and time. BSL techniques utilize the implicit form of semi-Lagrangian advection. We show that our novel BSL method for quadratic B-splines dramatically reduces numerical dissipation with only a small modification to the widely-adopted explicit semi-Lagrangian formulations typically used in graphics applications. Semi-Lagrangian techniques for velocity advection utilize the implicit relation associated with solution of Burgers' equation
\begin{align}\label{eq:impBurg}
\mb{u}(\mb{x},t)=\mb{u}(\mb{x}-(t-s)\mb{u}(\mb{x},t),s)\iff \ \frac{D\mb{u}}{Dt}=\frac{\partial \mb{u}}{\partial t}+\frac{\partial \mb{u}}{\partial \mb{x}}\mb{u}=\mathbf{0}
\end{align}
for $s\leq t$ \cite{evans:2010:pde}. Traditionally, graphics applications have preferred the explicit variant of semi-Lagrangian advection whereby grid velocities are updated through the expression
\begin{align}\label{eq:SL}
\mb{u}_\mb{i}^{n+1}=\mb{u}(\mb{x}_\mb{i}-\Delta t \mb{u}_\mb{i}^n,t^n)
\end{align}
where $\mb{x}_\mb{i}$ is the location of grid node $\mb{i}$, $\mb{u}_\mb{i}^n,\mb{u}_\mb{i}^{n+1}$ are velocities at the node at times $t^n$ and $t^{n+1}$ respectively and interpolation over the velocity grid is used to estimate $\mb{u}(\mb{x}_\mb{i}-\Delta t \mb{u}_\mb{i}^n,t^n)$ at non-grid node locations \cite{sawyer:1963:semi,stam:1999:stable}. In contrast, BSL techniques leverage Equation ~\eqref{eq:impBurg} directly
\begin{align}\label{eq:BSL}
\mb{u}_\mb{i}^{n+1}=\mb{u}(\mb{x}_\mb{i}-\Delta t \mb{u}_\mb{i}^{n+1},t^n)
\end{align}
which requires the solution of an implicit equation for $\mb{u}_\mb{i}^{n+1}$ \cite{robert:1981:stable}. Since our grid interpolation is naturally $C^1$, we show that this can be done very efficiently using a few steps of Newton's method. While this is more expensive than the explicit semi-Lagrangian formulations, we note that each node can still be updated in parallel since the implicit equations for $\mb{u}_\mb{i}^{n+1}$ are decoupled in $\mb{i}$. We show that solution of the implicit Equation~\eqref{eq:BSL}, rather than the traditionally used explicit Equation~\eqref{eq:SL}, improves the order of convergence from first to second (in space and time). Notably, this does not require use of multiple time steps for backward/forward estimations of error, as is commonly done \cite{kim:2006:advections,kim:2005:bfecc,selle:2008:unconditionally,xiu:2001:semi,schroeder:2014:vna}. Furthermore, our method allows for larger-than-CFL time steps and is as stable as or more stable than explicit semi-Lagrangian formulations.\\
\\
\begin{figure}[t]
\centering
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\columnwidth]{figures/Inner_circle_8.jpeg}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\columnwidth]{figures/Inner_circle52.jpeg}
\end{subfigure}\\
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\columnwidth]{figures/Inner_circle_401.jpeg}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\columnwidth]{figures/Inner_circle_600.jpeg}
\end{subfigure}
\caption{{\textbf{SL vs. BSLQB}}. We compare semi-Lagrangian (left) and BSLQB (right) in a vorticity-intensive example. BSLQB breaks symmetry and exhibits a more turbulent flow pattern. Note we only use particles for flow visualization and not for PolyPIC advection in this example.}\label{fig:inner_circle}
\end{figure}
Lastly, we develop a hybrid particle/BSLQB advection technique that utilizes PolyPIC \cite{fu:2017:poly} in portions of the domain covered by particles and BSLQB in portions without particles. Our formulation naturally leverages the strengths of both approaches. Dense concentrations of particles can be added to regions of the domain where more detail is desired. Also, if particle coverage becomes too sparse because of turbulent flows, BSLQB can be used in the gaps. We demonstrate the efficacy of this technique with smoke simulation and narrow banding of particles near the fluid surface with water simulations as in \cite{chentanez:2015:coupling,ferstl:2016:narrow,sato:2018:nb}. In this case, level set advection, which is naturally enabled by our BSLQB formulation, is preferred in the deeper water regions. We summarize our contributions as:
\begin{itemize}
\item A novel cut-cell collocated velocity B-spline mixed FEM method for Chorin \shortcite{chorin:1967:numerical} splitting discretization of the incompressible Euler equations.
\item BSLQB: a novel BSL technique designed for collocated multiquadratic B-spline velocity interpolation that achieves second order accuracy in space and time.
\item A hybrid BSLQB/PolyPIC method for narrow band free-surface flow simulations and concentrated-detail smoke simulations.
\end{itemize}
\begin{figure*}[!ht]
\centering
\begin{subfigure}{.33\textwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bunnydrown_0003.jpeg}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bunnydrown_0024.jpeg}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bunnydrown_0114.jpeg}
\end{subfigure}
\caption{{\textbf{Dam break with bunny}}: Opposing blocks of water collapse in a tank and flow around the irregular domain boundary placed in the middle of the tank. Particles are colored from slow (blue) to fast (white) speed. }\label{fig:bunny_dambreak_final}
\end{figure*}
\section{Previous Work}
\subsection{Advection}
Stam \shortcite{stam:1999:stable} first demonstrated the efficacy of semi-Lagrangian techniques for graphics applications and they have since become the standard, largely due to the large time steps they engender and their simple interpolatory nature. Many modifications to the original approach of Stam \shortcite{stam:1999:stable} have been developed, often inspired by approaches in the engineering literature. Fedkiw et al. \shortcite{fedkiw:2001:visual} use vorticity confinement \cite{steinhoff:1994:modification} to counterbalance vorticity lost to dissipation, together with cubic grid interpolation. Kim et al. \shortcite{kim:2006:advections,kim:2005:bfecc} and Selle et al. \cite{selle:2008:unconditionally} combine forward and backward semi-Lagrangian steps to estimate and remove dissipative errors. Constrained Interpolation Profile \cite{kim:2008:semi,yabe:2001:multiphase-analysis,song:2009:derivative-particles} techniques additionally advect function derivatives to reduce dissipation. Molemaker et al. \shortcite{molemaker:2008:low} use the QUICK technique of Leonard \shortcite{leonard:1979:stable}, which is essentially upwinding with quadratic interpolation and Adams-Bashforth temporal discretization, although this does not have the favorable stability properties of semi-Lagrangian advection. Backward Difference Formula techniques are useful because they use an implicit multistep formulation for higher-order semi-Lagrangian advection yet still only require one projection per time step \cite{xiu:2001:semi,schroeder:2014:vna}.\\
\\
The main idea in semi-Lagrangian techniques is to interpolate data from a characteristic point. This idea goes back to the Courant-Isaacson-Rees \shortcite{courant:1952:solution} method. However, as noted in \cite{fedkiw:2001:visual}, semi-Lagrangian advection is very popular in atmospheric science simulation, and the variants used in graphics that account for characteristics traveling beyond the local cell in one time step go back to Sawyer \shortcite{sawyer:1963:semi}. The first BSL approach utilizing Equation~\eqref{eq:BSL} was that of Robert \shortcite{robert:1981:stable}, which uses fixed point iteration to solve the nonlinear equation. They fit a bicubic function to their data over $4\times4$ grid patches, then use that function in the fixed point iteration. If the upwind point leaves the grid, they clamp it to the boundary of the $4\times4$ patch. This clamping will degrade accuracy for larger time steps. In this case, more general interpolation is typically used (see \cite{staniforth:1991:semi,falcone:1998:convergence} for useful reviews). Pudykiewicz and Staniforth \shortcite{pudykiewicz:1984:some} investigate the effects of BSL versus explicit semi-Lagrangian advection. Specifically, they compare Bates and McDonald \shortcite{bates:1982:multiply} (explicit) versus Robert \shortcite{robert:1981:stable} (BSL). They show that, all else being equal, the choice of Equation~\eqref{eq:SL} (explicit) instead of Equation~\eqref{eq:BSL} (BSL) leads to more dissipation and mass loss. This is consistent with our observations with BSLQB.\\
\\
Interestingly, multiquadratic B-splines have not been adopted by the semi-Lagrangian community, despite their natural regularity. Hermite splines, multicubic splines and even Lagrange polynomials are commonly used \cite{staniforth:1991:semi}. Preference for Hermite splines and Lagrange polynomials is likely due to their local nature (they do not require solution of a global system for coefficients), and preference for multicubic splines (over multiquadratic) is possibly due to the requirement of odd degree for natural splines (odd degree splines behave like low pass filters and tend to be smoother than even degree splines \cite{cheng:2001:quadratic,cheney:2012:numerical}). Cubic splines are considered to be more accurate than Hermite splines and Lagrange interpolation \cite{staniforth:1991:semi,makar:1996:basis}. Notably, Riish{\o}jgaard et al. \shortcite{riishojgaard:1998:use} found that cubic spline interpolation gave rise to a noisier solution than cubic Lagrange interpolation with a technique analogous to that of Makar and Karpik \shortcite{makar:1996:basis}. However, they also note that addition of a selective scale diffusion term helps reduce the noise associated with cubic splines. Wang and Layton \shortcite{wang:2010:new} use linear B-splines with BSL but only consider one space dimension, which makes Equation~\eqref{eq:BSL} linear and easily solvable.\\
\\
Dissipation with explicit semi-Lagrangian advection is so severe that many graphics researchers have resorted to alternative methods to avoid it. Mullen et al. \shortcite{mullen:2009:energy} develop energy-preserving integration to prevent the need for correcting dissipative behavior. Some authors \cite{qu:2019:mcm,tessendorf:2011:MCM,sato:2017:long,sato:2018:spatially} resolve the flow map characteristics for periods longer than a single time step (as opposed to one step with semi-Lagrangian) to reduce dissipation. Hybrid Lagrangian/Eulerian techniques like PIC (and related approaches) \cite{bridson:2008:fluid-simulation,jiang:2015:apic,fu:2017:poly,zhu:2005:sand-fluid} explicitly track motion of particles in the fluid, which is nearly dissipation-free, but can suffer from distortion in particle sampling quality. Vorticity formulations are also typically less dissipative, but can have issues with boundary condition enforcement \cite{selle:2005:vortex,angelidis:2005:simulation,chern:2016:schrodinger,elcott:2007:stable,park:2005:vortex,weissmann:2010:filament}. Zehnder et al., Zhang et al. and Mullen et al. \shortcite{mullen:2009:energy,zehnder:2018:advection,narain:2019:ref,zhang:2015:restoring} have noted that the Chorin projection itself causes dissipation. Zhang et al. \shortcite{zhang:2015:restoring} reduced artificial dissipation caused by the projection step by estimating lost vorticity and adding it back into the fluid. Zehnder et al. \shortcite{zehnder:2018:advection,narain:2019:ref} propose a simple but very effective modification to the splitting scheme that is similar to midpoint rule integration to reduce the projection error.
\subsection{Pressure projection}
Graphics techniques utilizing pressure projection typically use voxelized MAC grids with boundary conditions enforced at cell centers and faces; however, many methods improve this by taking into account sub-cell geometric detail. Enright et al. \shortcite{enright:2003:using} showed that enforcing the pressure free surface boundary condition at MAC grid edge crossings (rather than at cell centers) dramatically improved the look of water surface waves and ripples. Batty, Bridson and colleagues developed variational weighted finite difference approaches to enforce velocity boundary conditions with MAC grids on edge crossings and improved pressure boundary conditions at the free surface in the case of viscous stress \cite{batty:2007:solid-fluid,batty:2008:buckling,larionov:2017:stokes}. XFEM \cite{belytschko:2009:review,koschier:2017:xfem} and virtual node (VNA) \cite{schroeder:2014:vna} techniques also use cut cell geometry with variational techniques. Schroeder et al. \shortcite{schroeder:2014:vna} use cut cells with MAC grids, but their technique is limited to moderate Reynolds numbers. \\
\\
\begin{figure}[!t]
\centering
\begin{subfigure}{0.24\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/globe0000.jpeg}
\end{subfigure}%
\begin{subfigure}{0.24\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/globe0007.jpeg}
\end{subfigure}
\begin{subfigure}{0.24\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/globe0013.jpeg}
\end{subfigure}%
\begin{subfigure}{0.24\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/globe0062.jpeg}
\end{subfigure}
\caption{{\textbf{Water in a globe}}. A block of water splashes and naturally slides along cut cell boundaries in an irregular domain interior to one large sphere and exterior to one small sphere.}\label{fig:snowglobe}
\end{figure}
There is a vast literature on enforcing incompressibility in the FEM community \cite{hughes:2000:book}. Our approach is most similar to the B-spline Taylor-Hood element of Bressan \cite{bressan:2010:isogeometric}. Adoption of B-spline interpolation in FEM is part of the isogeometric movement \cite{hughes:2005:isogeometric,ruberg:2012:subdivision}. Originally motivated by the desire to streamline the transition from computer-aided design (CAD) to FEM simulation, isogeometric analysis explores the use of CAD-based interpolation (e.g. B-splines and nonuniform rational B-splines (NURBS)) with FEM methodologies. Hughes et al. \shortcite{hughes:2005:isogeometric} show that in addition to simplifying the transition from CAD to simulation, the higher regularity and spectral-like properties exhibited by these splines make them more accurate than traditionally used interpolation. We enforce Dirichlet boundary conditions weakly as in XFEM and VNA approaches \cite{belytschko:2009:review,koschier:2017:xfem,schroeder:2014:vna}. Bazilevs et al. \shortcite{bazilevs:2007:weak} show that weak Dirichlet enforcement with isogeometric analysis can be more accurate than strong enforcement.\\
\\
Graphics applications are typically concerned with turbulent, high-Reynolds number flows. Interestingly, B-splines have been shown to be effective for these flows by researchers in the Large Eddy Simulation (LES) community \cite{kim:1998:mixed,kravchenko:1999:bspline}. Kravchenko et al. \shortcite{kravchenko:1999:bspline} use a variational weighted residuals approach with B-splines for turbulent LES and show that the increased regularity significantly reduces computational costs. Botella et al. \shortcite{botella:2002:collocation} use a similar approach, but apply a collocation technique where the strong form of the div-grad formulation of incompressibility is enforced pointwise. They show that their B-spline approach attains optimal order of accuracy with accurate resolution of quadratic flow invariants. Botella et al. \shortcite{botella:2002:collocation} also introduce a notion of sparse approximation to the inverse mass matrix to avoid dense systems of equations in the pressure solve.
\begin{figure}[t]
\centering
\begin{subfigure}{0.5\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/bunny_smoke_thin40_0001.jpeg}
\end{subfigure}%
\begin{subfigure}{0.5\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/bunny_smoke_thin40_0026.jpeg}
\end{subfigure} \\[-1ex]
\begin{subfigure}{0.5\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/bunny_smoke_thin40_0065.jpeg}
\end{subfigure}%
\begin{subfigure}{0.5\columnwidth}
\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\columnwidth]{figures/bunny_smoke_thin40_0090.jpeg}
\end{subfigure}
\caption{{\textbf{Smoke in an irregular domain}}. Multicolored spheres of smoke with non-zero initial velocity conditions flow and collide inside the Stanford bunny. Zero normal velocity is enforced with our cut cell formulation.}\label{fig:bunnysmoke}
\end{figure}
\section{Governing Equations and Operator Splitting}
We solve the incompressible Euler equations that describe the evolution of a fluid in terms of its mass density $\rho$, velocity $\mb{u}$, pressure $p$ and gravitational constant $\gg$ as
\begin{align}
\rho\frac{D\mb{u}}{Dt} &=\rho\left(\frac{\partial \mb{u}}{\partial t} + \frac{\partial \mb{u}}{\partial \mb{x}}\mb{u}\right)=-\nabla p + \rho\gg, \ \mb{x}\in\Omega \label{eq:mom_cont}\\
\nabla\cdot\mb{u} &= 0, \ \mb{x}\in\Omega \label{eq:div_cont} \\
\mb{u}\cdot\mb{n}&=a, \ \mb{x}\in\partial \Omega_D \label{eq:bcv_cont}\\
p&=0, \ \mb{x}\in\partial \Omega_N \label{eq:bcp_cont}
\end{align}
where Equation~\eqref{eq:mom_cont} is balance of linear momentum, Equation~\eqref{eq:div_cont} is the incompressibility constraint, Equation~\eqref{eq:bcv_cont} is the boundary condition for the normal component of the velocity and Equation~\eqref{eq:bcp_cont} is the free surface boundary condition. We use $\Omega$ to denote the region occupied by the fluid, $\partial \Omega_D$ to denote the portion of the boundary of the fluid domain on which velocity is prescribed to be $a$ (which may vary over the boundary) and $\partial \Omega_N$ is the surface of the water where the pressure is zero (see Figure~\ref{fig:dAndg}).\\
\\
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[draft=false,width=.45\columnwidth]{figures/domain}}
&
\subf{\includegraphics[draft=false,width=.45\columnwidth]{figures/cutcell.eps}}
\\
\end{tabular}
\caption{{\textbf{Flow domain and grid}}. {\textbf{Left}}: We use $\Omega$ to denote the fluid domain, with $\partial\Omega_D$ used to indicate the portion of the fluid domain subject to velocity boundary conditions and $\partial\Omega_N$ to indicate the free-surface portion of the boundary with pressure condition $p=0$. {\textbf{Right}}: We use multiquadratic interpolation for velocity ($\bar{\mb{u}}_\mb{i}$ at cell centers, blue) and multilinear for pressure ($p_\mb{c}$ at nodes, red). The fluid domain is defined with sub-grid-cell accuracy.}\label{fig:dAndg}
\end{figure}
In a Chorin \shortcite{chorin:1967:numerical} operator splitting of the advective and pressure terms, velocity is first updated to an intermediate field $\mb{w}$ under the convective $\rho\frac{D\mb{u}}{Dt}=\mathbf{0}$, followed by an update from the pressure and gravitational body forcing under $\rho\frac{\partial \mb{u}}{\partial t}=-\nabla p + \rho\gg$ where the pressure is determined to enforce $\nabla\cdot\mb{u} = 0$. Dividing by the mass density, the convective step is seen to be an update under Burgers' equation~\eqref{eq:impBurg}. Burgers' equation governs temporally constant Lagrangian velocity (zero Lagrangian acceleration). The characteristic curves for flows of this type are straight lines (since the Lagrangian acceleration is zero), on which the velocity is constant (see Figure~\ref{fig:burgers}). This gives rise to the implicit relation $\mb{u}(\mb{x},t)=\mb{u}(\mb{x}-(t-s)\mb{u}(\mb{x},t),s)$ for $s\leq t$. Intuitively, if we want to know the velocity $\mb{u}(\mb{x},t)$ at point $\mb{x}$ at time $t$, we look back along the characteristic passing through $\mb{x}$ at time $t$ to any previous time $s$; however, the characteristic is the straight line defined by the velocity $\mb{u}(\mb{x},t)$ that we want to know. Hence we take an implicit approach to the solution of this equation, which when combined with the operator splitting amounts to
\begin{align}
\frac{\mb{w}-\tilde{\mb{u}}^n}{\Delta t} &=\mathbf{0} \label{eq:split_a} \\
\rho\frac{\mb{u}^{n+1}-\mb{w}}{\Delta t} &=-\nabla p^{n+1} + \rho\gg \label{eq:split_p}\\
\nabla\cdot\mb{u}^{n+1} &= 0 \label{eq:split_div}
\end{align}
where we use the notation $\mb{u}^{n+\alpha}(\mb{x})=\mb{u}(\mb{x},t^{n+\alpha})$, $\alpha=0,1$ to denote the time $t^{n+\alpha}$ velocities. Furthermore, the intermediate velocity $\mb{w}$ is related to $\tilde{\mb{u}}^n$ through $\tilde{\mb{u}}^n(\mb{x})=\mb{u}(\mb{x}-\Delta t \mb{w}(\mb{x}),t^n)$.
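Indeed, along a characteristic curve $\mb{x}(t)$ with $\dot{\mb{x}}(t)=\mb{u}(\mb{x}(t),t)$, the chain rule gives $\frac{d}{dt}\mb{u}(\mb{x}(t),t)=\frac{\partial \mb{u}}{\partial t}+\frac{\partial \mb{u}}{\partial \mb{x}}\dot{\mb{x}}=\frac{D\mb{u}}{Dt}=\mathbf{0}$, so that $\ddot{\mb{x}}(t)=\mathbf{0}$: each characteristic is a straight line traversed at constant velocity.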
\begin{figure*}[ht!]
\includegraphics[draft=false,width=\textwidth]{figures/burgers}
\caption{{\textbf{BSL versus SL}}. We illustrate the difference between explicit semi-Lagrangian and BSL in 1D. {\textbf{Left}}: The exact solution of Burgers' equation has straight line characteristics shown in blue, green and red on which velocity (plotted above the plane in gray) is constant. {\textbf{Center}}: BSL (green) uses Newton's method to solve for the exact characteristic going through $x_i$ at time $t^{n+1}$ to determine $u_i^{n+1}$. {\textbf{Right}}: explicit semi-Lagrangian (red) uses a stale, time $t^n$ approximation of the characteristic which overshoots, resulting in an underestimate of the velocity and a loss of energy.}\label{fig:burgers}
\end{figure*}
\section{Spatial Discretization}
We discretize in space by representing velocity and pressure in terms of multiquadratic and multilinear B-splines, respectively. We use a regular grid with spacing $\Delta x$ and define pressure degrees of freedom at grid vertices and velocity degrees of freedom at grid cell centers as in \cite{ando:2013:surfacing} (see Figure~\ref{fig:dAndg}). This efficiently aligns the support of the multiquadratic and multilinear interpolating functions, which naturally allows for a grid-cell-wise definition of the flow domain (see Figure~\ref{fig:fsdnb}). We use $N_\mb{i}(\mb{x})$ to represent the multiquadratic B-spline basis function associated with velocity degree of freedom $\bar{\mb{u}}_\mb{i}$ at grid cell center $\mb{x}_\mb{i}$ and $\chi_\mb{c}(\mb{x})$ for the multilinear basis function associated with pressure $p_\mb{c}$ at grid node $\mb{x}_\mb{c}$. These are defined as
\begin{align}
N_\mb{i}(\mb{x})&=\prod_{\alpha}\hat{N}(\frac{x_\alpha-x_{\alpha \mb{i}}}{\Delta x}), \ \chi_\mb{c}(\mb{x})=\prod_{\alpha}\hat{\chi}(\frac{x_\alpha-x_{\alpha \mb{c}}}{\Delta x})\\
\hat{N}(\eta)&=\left\{\begin{array}{lcc}
\frac{\left(\eta +\frac{3}{2}\right)^2}{2},&\eta\in(-\frac{3}{2},-\frac{1}{2})\\
-\eta^2+\frac{3}{4},&\eta\in[-\frac{1}{2},\frac{1}{2}]\\
\frac{\left(\eta -\frac{3}{2}\right)^2}{2},&\eta\in(\frac{1}{2},\frac{3}{2})\\
0,&\textrm{otherwise}
\end{array}\right. \
\hat{\chi}(\nu)=\left\{\begin{array}{lcc}
1+\nu,&\nu\in(-1,0)\\
1-\nu,&\nu\in[0,1)\\
0,&\textrm{otherwise}
\end{array}\right.
\end{align}
where we use Greek indices $\alpha$ to indicate components of the vectors $\mb{x}$, $\mb{x}_\mb{i}$ and $\mb{x}_\mb{c}$. With this convention we interpolate to define velocity and pressure fields
\begin{align}\label{eq:interp}
\mb{u}(\mb{x})=\sum_\mb{i} \bar{\mb{u}}_\mb{i} N_\mb{i}(\mb{x}), \ p(\mb{x})=\sum_\mb{c} p_\mb{c} \chi_\mb{c}(\mb{x}).
\end{align}
We use the notation $\bar{\mb{u}}_\mb{i}$ to distinguish it from the velocity at the grid node $\mb{u}(\mb{x}_\mb{i})=\sum_\mb{j} \bar{\mb{u}}_\mb{j} N_\mb{j}(\mb{x}_\mb{i})$ since the multiquadratic B-splines are not interpolatory and these will in general be different. Note that multilinear interpolation is interpolatory and $p_\mb{c}=\sum_\mb{d} p_\mb{d} \chi_\mb{d}(\mb{x}_\mb{c})$.
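For concreteness, the following minimal Python sketch (ours, for illustration only) evaluates these kernels and the tensor-product weight $N_\mb{i}(\mb{x})$:
\begin{verbatim}
def N_hat(eta):
    # quadratic B-spline kernel: C^1, support (-3/2, 3/2)
    eta = abs(eta)
    if eta < 0.5: return 0.75 - eta * eta
    if eta < 1.5: return 0.5 * (eta - 1.5) ** 2
    return 0.0

def chi_hat(nu):
    # linear B-spline (hat) kernel, support (-1, 1)
    nu = abs(nu)
    return 1.0 - nu if nu < 1.0 else 0.0

def N_weight(x, x_i, dx):
    # tensor-product weight N_i(x) for a cell center x_i
    w = 1.0
    for a in range(len(x)):
        w *= N_hat((x[a] - x_i[a]) / dx)
    return w
\end{verbatim}
Both kernels form a partition of unity, $\sum_\mb{i} N_\mb{i}(\mb{x})=1$, a property we will use again when lumping the mass matrix.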
\subsection{BSLQB Advection}\label{sec:bslqb}
With this interpolation choice, we first solve for intermediate grid node velocity values $\mb{w}(\mb{x}_\mb{i})$ from Equation~\eqref{eq:split_a} as
\begin{align}\label{eq:ad_disc}
\mb{w}(\mb{x}_\mb{i})=\sum_\mb{j} \bar{\mb{u}}^n_\mb{j} N_\mb{j}\left(\mb{x}_\mb{i}-\Delta t \mb{w}(\mb{x}_\mb{i})\right).
\end{align}
We can solve this equation using Newton's method since the multiquadratic B-splines are $C^1$. We use $\mb{w}^k_\mb{i}$ to denote the $k^\textrm{th}$ Newton approximation to $\mb{w}(\mb{x}_\mb{i})$. Explicit semi-Lagrangian is used as an initial guess with $\mb{w}^0_\mb{i}=\sum_\mb{j} \bar{\mb{u}}^n_\mb{j} N_\mb{j}\left(\mb{x}_\mb{i}-\Delta t \sum_\ll \bar{\mb{u}}^n_\ll N_\ll(\mb{x}_\mb{i})\right)$ and then we update iteratively via $\mb{w}^k_\mb{i}\mathrel{+}=\boldsymbol\delta\mb{u}^k$ with Newton increment $\boldsymbol\delta\mb{u}^k$ satisfying
\begin{align*}
\boldsymbol\delta\mb{u}^k&=\left(\mb{I}+\Delta t \frac{\partial \mb{u}^n}{\partial \mb{x}}\left(\mb{x}_\mb{i}-\Delta t \mb{w}^k_\mb{i}\right)\right)^{-1}\left(\sum_\mb{j} \bar{\mb{u}}^n_\mb{j} N_\mb{j}\left(\mb{x}_\mb{i}-\Delta t \mb{w}^k_\mb{i}\right)-\mb{w}^k_\mb{i}\right)
\end{align*}
where $\frac{\partial \mb{u}^n}{\partial \mb{x}}\left(\mb{x}_\mb{i}-\Delta t \mb{w}^k_\mb{i}\right)=\sum_\mb{j} \bar{\mb{u}}^n_\mb{j}\frac{\partial N_\mb{j}}{\partial \mb{x}}\left(\mb{x}_\mb{i}-\Delta t \mb{w}^k_\mb{i}\right)$. It is generally observed \cite{kuo:1990:semi,pudykiewicz:1984:some} that with BSL approaches of this type, this iteration will converge as long as $\mb{I}+\Delta t \sum_\mb{j} \bar{\mb{u}}^n_\mb{j} \frac{\partial N_\mb{j}}{\partial \mb{x}}\left(\mb{x}_\mb{i}-\Delta t \mb{w}^k_\mb{i}\right)$ is non-singular. We note that this condition holds as long as no shocks form under Burgers' equation \cite{evans:2010:pde} (forward from time $t^n$). This is a safe assumption since we are modeling incompressible flow with which shock formation does not occur, but it may be a problem for compressible flows. In practice, this iteration converges in 3 or 4 iterations, even with CFL numbers larger than 4 (see Section~\ref{sec:ex_hybrid}). When it does fail (which occurs less than one percent of the time in the examples we run), it is usually for points near the boundary with characteristics that leave the domain (since we cannot estimate $\frac{\partial \mb{u}^n}{\partial \mb{x}}$ using grid interpolation if the upwind estimate leaves the grid). In this case we use explicit semi-Lagrangian and interpolate from the boundary conditions if the characteristic point is off the domain.\\
\\
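For illustration, here is a minimal one-dimensional Python sketch of this Newton solve (our notation; an analytic $C^1$ field stands in for the B-spline interpolant $\mb{u}^n$ and its derivative):
\begin{verbatim}
import numpy as np

def bsl_newton(x, u, du_dx, dt, tol=1e-12, iters=10):
    # solve the implicit relation w = u(x - dt*w) for w
    w = u(x - dt * u(x))    # explicit semi-Lagrangian initial guess
    for _ in range(iters):
        xs = x - dt * w     # current upwind point estimate
        dw = (u(xs) - w) / (1.0 + dt * du_dx(xs))  # Newton increment
        w += dw
        if abs(dw) < tol:
            break
    return w

w = bsl_newton(1.0, np.sin, np.cos, dt=0.5)
print(w, np.sin(1.0 - 0.5 * w))  # both sides agree at convergence
\end{verbatim}
In an actual implementation, the interpolated field $\sum_\mb{j} \bar{\mb{u}}^n_\mb{j} N_\mb{j}$ and its gradient replace \texttt{u} and \texttt{du\_dx}, and the iteration runs independently, in parallel, at every grid node.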
Once we have obtained the grid node values of the intermediate velocity $\mb{w}(\mb{x}_\mb{i})$, we must determine interpolation coefficients $\bar{\mb{w}}_\mb{j}$ such that $\mb{w}(\mb{x}_\mb{i})=\sum_\mb{j} \bar{\mb{w}}_\mb{j} N_\mb{j}(\mb{x}_\mb{i})$. On the boundary of the grid, we set $\bar{\mb{w}}_\mb{j} = \mb{w}(\mb{x}_\mb{j})$ since we can only interpolate to $\mb{x}_\mb{i}$ if all of its neighbors have data. This yields a square, symmetric positive definite system of equations for the remaining $\bar{\mb{w}}_\mb{j}$. The system is very well conditioned with sparse, symmetric matrix $N_\mb{j}(\mb{x}_\mb{i})$ consisting of non-negative entries and rows that sum to one. The sparsity and symmetry of the system arises from the compact support and geometric symmetry, respectively, of the B-spline basis functions $N_\mb{j}$. The system can be solved to a residual of machine precision in one iteration of PCG (or tens of iterations of unpreconditioned CG). In practice, we have noticed that for some flows, determining the coefficients $\bar{\mb{w}}_\mb{j}$ can lead to increasingly oscillatory velocity fields. This is perhaps due to the unfavorable filtering properties of even order B-splines \cite{cheng:2001:quadratic,cheney:2012:numerical}. However, we found that a simple stabilization strategy can be obtained as
\begin{align}\label{eq:BSLQBsys}
\sum_\mb{j} \left(\lambda N_\mb{j}(\mb{x}_\mb{i}) + (1-\lambda)\delta_{\mb{i}\mb{j}}\right)\bar{\mb{w}}_\mb{j}=\mb{w}(\mb{x}_\mb{i})
\end{align}
where $\lambda\in[0,1]$ and $\delta_{\mb{i}\mb{j}}$ is the Kronecker delta. A value of $\lambda=0$ is very stable, but extremely dissipative. Stable yet energetic behavior is achieved by decreasing the value of $\lambda$ under grid refinement. In practice we found that $\lambda\in (.95,1 ]$ with $\lambda =c\Delta x$ for constant $c$ provided a good balance without compromising second order accuracy of the method (see Section~\ref{sec:ex_hybrid}). We note that Riish{\o}jgaard et al. \shortcite{riishojgaard:1998:use} also added diffusion to cubic spline interpolation based semi-Lagrangian to reduce noise.
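In 1D, for instance, $N_\mb{j}(\mb{x}_\mb{i})$ contributes the stencil $(1/8, 3/4, 1/8)$, so the system \eqref{eq:BSLQBsys} is tridiagonal with diagonal $1-\lambda/4$ and off-diagonal entries $\lambda/8$. A small sketch of this solve (ours; a dense solve stands in for the (P)CG solver described above):
\begin{verbatim}
import numpy as np

def fit_coefficients(w_grid, lam):
    # solve (lam*N + (1-lam)*I) w_bar = w_grid in 1D
    n = len(w_grid)
    A = np.zeros((n, n))
    for i in range(n):
        if i == 0 or i == n - 1:
            A[i, i] = 1.0        # boundary: w_bar_j = w(x_j)
        else:
            A[i, i - 1] = lam / 8.0
            A[i, i] = 1.0 - lam / 4.0
            A[i, i + 1] = lam / 8.0
    return np.linalg.solve(A, w_grid)

w_bar = fit_coefficients(np.sin(np.linspace(0.0, np.pi, 16)), 0.96)
\end{verbatim}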
\subsection{Hybrid BSLQB-PolyPIC Advection}\label{sec:poly}
In some portions of the domain, we store particles with positions $\mb{x}_p^n$ and PolyPIC \cite{fu:2017:poly} velocity coefficients $\mb{c}^n_p$. In the vicinity of the particles, we use PolyPIC \cite{fu:2017:poly} to update the intermediate velocity field $\bar{\mb{w}}_\mb{j}$. First we update particle positions as $\mb{x}^{n+1}_p=\mb{x}^{n}_p + \Delta t \mb{v}_p^{n}$ (where the velocity $\mb{v}_p^{n}$ is determined from $\mb{c}^n_p$ following \cite{fu:2017:poly}). Then the components $\bar{w}_{\mb{j}\alpha}$ of the coefficients $\bar{\mb{w}}_\mb{j}$ are determined as
\begin{align}\label{eq:polyp2g}
\bar{w}_{\mb{j}\alpha}=\frac{\sum_p m_pN_\mb{j}(\mb{x}_p^{n+1})\left(\sum_{r=1}^{N_r} s_r(\mb{x}_\mb{j}-\mb{x}^{n+1}_p)c^n_{pr\alpha}\right)}{\sum_p m_pN_\mb{j}(\mb{x}_p^{n+1})}
\end{align}
where $N_r$ is the number of polynomial modes $s_r(\mb{x})$, as in Fu et al. \shortcite{fu:2017:poly}. To create our hybrid approach, we update $\bar{w}_{\mb{j}\alpha}$ from Equation~\eqref{eq:polyp2g} whenever the denominator is greater than a threshold, $\sum_p m_pN_\mb{j}(\mb{x}_p^{n+1})>\tau^m$; otherwise we use the BSLQB update from Equation~\eqref{eq:BSLQBsys}. We use this threshold because the grid node update in Equation~\eqref{eq:polyp2g} loses accuracy when the denominator is near zero, in which case the BSLQB approximation is likely more accurate. Note that the polynomial mode coefficients for the next time step $\mb{c}^{n+1}_p$ are determined from the grid velocities at the end of the time step (using particle positions $\mb{x}_p^{n+1}$ and after pressure projection).
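The selection logic itself is simple; the following 1D sketch (ours) keeps only the constant polynomial mode, i.e.\ a PIC-style transfer, where full PolyPIC would accumulate all $N_r$ modes $s_r$ in the numerator of Equation~\eqref{eq:polyp2g}:
\begin{verbatim}
import numpy as np

def quad_weight(eta):
    eta = abs(eta)
    if eta < 0.5: return 0.75 - eta * eta
    if eta < 1.5: return 0.5 * (eta - 1.5) ** 2
    return 0.0

def hybrid_update(x_p, m_p, v_p, w_bsl, dx, tau_m):
    # particle transfer where coverage suffices, BSLQB elsewhere
    num = np.zeros_like(w_bsl)  # sum_p m_p N_j(x_p) v_p
    den = np.zeros_like(w_bsl)  # sum_p m_p N_j(x_p)
    for xp, mp, vp in zip(x_p, m_p, v_p):
        jc = int(round(xp / dx - 0.5))  # nearest cell center index
        for j in (jc - 1, jc, jc + 1):
            if 0 <= j < len(w_bsl):
                w = mp * quad_weight(xp / dx - (j + 0.5))
                num[j] += w * vp
                den[j] += w
    return np.where(den > tau_m, num / np.maximum(den, 1e-30), w_bsl)
\end{verbatim}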
\section{Pressure Projection}
We solve Equations~\eqref{eq:split_p}-\eqref{eq:split_div} and boundary condition Equations~\eqref{eq:bcv_cont}-\eqref{eq:bcp_cont} in a variational way. To do this, we require that the products of Equations~\eqref{eq:split_p}, \eqref{eq:split_div} and Equation~\eqref{eq:bcv_cont} with arbitrary test functions $\mb{r}$, $q$ and $\mu$, respectively, integrate to zero over the domain. The free surface boundary condition in Equation~\eqref{eq:bcp_cont} is naturally satisfied by our treatment of Equation~\eqref{eq:split_p}. We summarize this as
\begin{align}
\int_\Omega \mb{r}\cdot\rho\left(\frac{\mb{u}^{n+1}-\mb{w}}{\Delta t}\right)d\mb{x}&=\int_\Omega p^{n+1}\nabla\cdot\mb{r} + \rho\mb{r}\cdot \gg d\mb{x} \label{eq:var_mom}\\
&-\int_{\partial \Omega}p^{n+1} \mb{r}\cdot \mb{n} ds(\mb{x})\nonumber\\
\int_{\Omega} q \nabla \cdot \mb{u}^{n+1} d\mb{x}&=0 \label{eq:var_div}\\
\int_{\partial \Omega_D} \mu \left(\mb{u}^{n+1}\cdot\mb{n}-a\right) ds(\mb{x})&=0.\label{eq:var_bcv}
\end{align}
Here we integrate by parts in the integral associated with Equation~\eqref{eq:split_p}. Furthermore, we modify the expression $\int_{\partial \Omega}p^{n+1} \mb{r}\cdot \mb{n} ds(\mb{x})$ in Equation~\eqref{eq:var_mom} in accordance with the boundary conditions. We know that the pressure is zero on $\partial \Omega_N$, however we do not know its value on $\partial \Omega_D$. We introduce the pressure on this portion of the domain as a Lagrange multiplier $\lambda^{n+1}$ associated with satisfaction of the velocity boundary condition in Equation~\eqref{eq:var_bcv}. Physically, this is the external pressure we would need to apply on $\partial \Omega_D$ to ensure that $\mb{u}^{n+1}\cdot\mb{n}=a$. With this convention, we have $\int_{\partial \Omega}p^{n+1} \mb{r}\cdot \mb{n} ds(\mb{x})=\int_{\partial \Omega_D}\lambda^{n+1} \mb{r}\cdot \mb{n} ds(\mb{x})$. We note that unlike Equation~\eqref{eq:var_bcv} (and its strong form counterpart $\eqref{eq:bcv_cont}$) that requires introduction of a Lagrange multiplier, Equation~\eqref{eq:bcp_cont} is naturally enforced through the weak form simply by setting $p^{n+1}=0$ in the integral over $\partial \Omega_N$ in Equation~\eqref{eq:var_mom}.\\
\\
To discretize in space, we introduce interpolation for the test functions $\mb{r}$, $q$ and $\mu$. We use the same spaces as in Equation~\eqref{eq:interp} for velocity and pressure for $\mb{r}=\sum_\mb{i} \bar{\mb{r}}_\mb{i} N_\mb{i}$ and $q=\sum_\mb{d} q_\mb{d} \chi_\mb{d}$. For the test functions $\mu$, we choose the same space as $q,p$, but with functions restricted to $\partial \Omega_D$, $\mu=\sum_\mb{b} \mu_\mb{b} \chi_\mb{b}$ for $\mb{b}$ with grid cell $\Omega_\mb{b} \cap \partial \Omega_D \neq \emptyset$ (see Figure~\ref{fig:fsdnb}). We choose the same space for $\lambda^{n+1}=\sum_\mb{b}\lambda^{n+1}_\mb{b} \chi_\mb{b}$ to close the system. With these choices for the test functions, the variational problem is projected to a finite dimensional problem defined by the interpolation degrees of freedom. This is expressed as a linear system for velocities $\bar{\mb{u}}_\mb{j}^{n+1}$, internal pressures $p^{n+1}_\mb{c}$, and external pressures $\lambda_\mb{b}^{n+1}$ that is equivalent to
\begin{align}
\left(\begin{array}{ccc}
\mb{M}&-\mb{D}^T&\mb{B}^T\\
-\mb{D}&&\\
\mb{B}&&
\end{array}\right)
\left(
\begin{array}{c}
\mb{U}^{n+1}\\
\mb{P}^{n+1}\\
\boldsymbol\Lambda^{n+1}
\end{array}
\right)=
\left(
\begin{array}{c}
\mb{M}\mb{W} + \hat{\gg}\\
\mathbf{0}\\
\AA
\end{array}
\right).
\end{align}
Here $\mb{U}^{n+1}$, $\mb{P}^{n+1}$ and $\boldsymbol\Lambda^{n+1}$ are the vectors of all unknown $\bar{\mb{u}}_\mb{j}^{n+1}$, $p_\mb{c}^{n+1}$ and $\lambda_\mb{b}^{n+1}$ respectively. Furthermore $\mb{M}$ is the mass matrix, $\mb{B}$ defines the velocity boundary conditions and $\mb{D}$ defines the discrete divergence condition. Lastly, $\mb{W}$ is the vector of all $\bar{\mb{w}}_\mb{i}$ that define the intermediate velocity, $\hat{\gg}$ is the gravitational forcing vector and $\AA$ holds the variational boundary condition data. Using the convention that Greek indices $\alpha,\beta$ range from $1$ to $3$, these matrices and vectors have entries
\begin{align}
M_{\alpha\mb{i}\beta\mb{j}}=\delta_{\alpha\beta}\int_\Omega \frac{\rho}{\Delta t} N_\mb{i} N_\mb{j} d\mb{x}, \ D_{\mb{d} \beta \mb{j}}&=\int_\Omega \chi_\mb{d}\frac{\partial N_\mb{j}}{\partial x_\beta}d\mb{x}, \ \hat{g}_{\alpha \mb{i}}=\int_\Omega \rho g_\alpha N_\mb{i} d\mb{x} \label{eq:vol_int} \\
B_{\mb{b} \beta\mb{j}}=\int_{\partial \Omega_D} \chi_\mb{b} N_\mb{j} n_\beta ds(\mb{x}), \ A_\mb{b} &= \int_{\partial \Omega_D} a\chi_\mb{b} ds(\mb{x}).\label{eq:b_int}
\end{align}
If we define $\mb{G}=[-\mb{D}^T,\mb{B}^T]$, we can convert this system into a symmetric positive definite one for $\mb{P}^{n+1}$ and $\boldsymbol\Lambda^{n+1}$ followed by a velocity correction for $\mb{U}^{n+1}$
\begin{align}
\left(
\begin{array}{c}
\label{eq:spd_system}
\mb{P}^{n+1}\\
\boldsymbol\Lambda^{n+1}
\end{array}
\right)&=\left(\mb{G}^{T}\mb{M}^{-1}\mb{G}\right)^{-1}
\left(\mb{G}^T\left(\mb{W}+\mb{M}^{-1}\hat{\gg}\right)-\left(\begin{array}{c}\mathbf{0}\\\AA\end{array}\right)\right)\\
\mb{U}^{n+1}&=-\mb{M}^{-1}\mb{G}\left(
\begin{array}{c}
\mb{P}^{n+1}\\
\boldsymbol\Lambda^{n+1}
\end{array}
\right)+\mb{W}+\mb{M}^{-1}\hat{\gg}.
\end{align}
Unfortunately, this system will be dense in the current formulation since the full mass matrix $M_{\alpha\mb{i}\beta\mb{j}}$ is non-diagonal with dense inverse \cite{botella:2002:collocation}. However, a simple lumped mass approximation
\begin{align}\label{eq:mass_lump}
M^l_{\alpha\mb{i}\beta\mb{j}}=\left\{
\begin{array}{lcc}
\delta_{\alpha\beta}\int_\Omega \frac{\rho}{\Delta t} N_\mb{i} d\mb{x},&\mb{i}=\mb{j}\\
0,&\textrm{otherwise}
\end{array}
\right.
\end{align}
gives rise to a sparse matrix in Equation~\eqref{eq:spd_system}.
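Note that since the B-spline basis functions form a partition of unity, $\sum_\mb{j} N_\mb{j} = 1$, each lumped entry in Equation~\eqref{eq:mass_lump} is simply the corresponding row sum of the consistent mass matrix. With the lumped (diagonal) mass, the Schur complement solve in Equation~\eqref{eq:spd_system} involves only sparse operators. The following SciPy sketch illustrates the algebra on a 1D toy problem (ours; a direct solve stands in for PCG, the boundary-condition block of $\mb{G}$ is omitted, and zero free-surface pressures are assumed at both ends):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dx, rho, dt = 64, 1.0 / 64, 1.0, 0.01
m = np.full(n, rho / dt * dx)     # lumped mass diagonal
# 1D gradient: pressures on n-1 interior nodes, velocities on n cells
rows, cols, vals = [], [], []
for i in range(n):
    if i <= n - 2: rows.append(i); cols.append(i); vals.append(1.0 / dx)
    if i >= 1:     rows.append(i); cols.append(i - 1); vals.append(-1.0 / dx)
G = sp.csr_matrix((vals, (rows, cols)), shape=(n, n - 1))

W = np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # intermediate velocity
g_hat = np.zeros(n)                           # gravity load (zero here)
Minv = sp.diags(1.0 / m)
S = (G.T @ Minv @ G).tocsr()                  # SPD Schur complement
p = spla.spsolve(S, G.T @ (W + Minv @ g_hat))
U = -Minv @ (G @ p) + W + Minv @ g_hat        # corrected velocity
print(np.abs(G.T @ U).max())                  # discrete divergence ~ 0
\end{verbatim}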
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[draft=false,width=.45\columnwidth]{figures/narrow_band}}
&
\subf{\includegraphics[draft=false,width=.4\columnwidth]{figures/narrow_band2}}
\\
\end{tabular}
\caption{{\textbf{Discrete free surface fluid domain}}. {\textbf{Left}}: We define the fluid domain to consist of cells that either contain (1) a particle (dark blue) or (2) a node with non-positive level set value (light blue). {\textbf{Right}}: The boundary Lagrange multiplier external pressures $\lambda_\mb{b}$ (orange circles) are like the interior pressures $p_\mb{c}$, except that they are only defined on fluid domain cells that intersect $\partial \Omega_D$.}\label{fig:fsdnb}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,clip,width=1.0\columnwidth]{figures/narrow_band_2d0000_cropped.jpeg}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,clip,width=1.0\columnwidth]{figures/narrow_band_2d0048_cropped.jpeg}
\end{subfigure}\\
\begin{subfigure}{0.33\columnwidth}
\includegraphics[draft=false,trim={920px 430px 830px 470px},clip,width=1.0\columnwidth]{figures/narrowband3d_0.jpeg}
\end{subfigure}%
\begin{subfigure}{0.33\columnwidth}
\includegraphics[draft=false,trim={920px 430px 830px 470px},clip,width=1.0\columnwidth]{figures/narrowband3d_12.jpeg}
\end{subfigure}
\begin{subfigure}{0.33\columnwidth}
\includegraphics[draft=false,trim={920px 430px 830px 470px},clip,width=1.0\columnwidth]{figures/narrowband3d_26.jpeg}
\end{subfigure}%
\caption{{\textbf{Narrow band free surface}}. A circle/sphere falls in a tank of water under gravity. Using only a narrow band of particles saves computational cost and enables increased resolution of the free surface. {\textbf{Top}}: In 2D we illustrate the hybrid particle (dark blue)/level set (light blue) representation. {\textbf{Bottom}}: Particles are colored based on velocity magnitude.}\label{fig:narrow_band_2d}
\end{figure}
\subsection{Cut cells}
\label{sec:cutcells}
As in XFEM and VNA approaches \cite{belytschko:2009:review,koschier:2017:xfem,schroeder:2014:vna}, we resolve sub-grid-cell geometry by simply performing the integrations in Equations~\eqref{eq:vol_int}-\eqref{eq:b_int} over the geometry of the fluid domain. We use a level set to define solid boundaries (green in Figure~\ref{fig:fsdnb}) on which velocity boundary conditions are defined. We triangulate the zero isocontour using marching cubes \cite{chernyaev:1995:marching} (see Figure~\ref{fig:cutcell}). The integrals in Equations~\eqref{eq:vol_int}-\eqref{eq:b_int} all involve polynomials over volumetric polyhedra (Equation~\eqref{eq:vol_int}, blue in Figure~\ref{fig:cutcell}) or surface polygons (Equation~\eqref{eq:b_int}, green in Figure~\ref{fig:cutcell}), and we use Gauss quadrature of sufficiently high order to compute the integrals exactly (see \cite{gagniere:2020:tech_doc}). For free surface flows, we use particles (and additionally a level set function in the case of narrow banding, see Section~\ref{sec:nb}) to denote grid cells with fluid in them. Cells near the solid boundary are clipped by the marching cubes geometry. The fluid domain $\Omega$ is defined as the union of all clipped and full fluid cells (see Figure~\ref{fig:fsdnb}). \\
\\
Notably, taking a cut cell approach with our variational formulation allows us to prove that our method can resolve a standing pool of water exactly without producing numerical currents. We know that with gravitational force $\rho\gg$ (e.g. with $\gg$ pointing in the $-y$ direction with magnitude $g$), steady state is maintained if the pressure increases with depth as $p=\rho g\left(y_0-y\right)$, where $y_0$ is the height of the water surface at rest, since $-\nabla p + \rho\gg = \mathbf{0}$. Since we use multilinear interpolating functions for $p$, the exact solution is representable in our discrete space, and a short proof (see \cite{gagniere:2020:tech_doc}) shows that our method will therefore choose it and maintain a standing pool of water, independent of the fluid domain boundary geometry.
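To sketch the key step: the hydrostatic pressure is affine, and the multilinear basis reproduces affine functions exactly, so assigning $p_\mb{c}=\rho g\left(y_0-y_\mb{c}\right)$ at the pressure nodes yields
\begin{align}
p(\mb{x})=\sum_\mb{c}\rho g\left(y_0-y_\mb{c}\right)\chi_\mb{c}(\mb{x})=\rho g\left(y_0-y\right),\qquad -\nabla p+\rho\gg=\mathbf{0}
\end{align}
pointwise in every cell, including cut cells, so the discrete momentum residual vanishes regardless of the boundary geometry.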
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{figures/cut_cell_combo}
\caption{{\textbf{Cut cells}}. We show the 14 essential cases used in determining the cut cell fluid domain geometry. Blue faces indicate the intersection of the grid cell with the fluid domain. Green faces indicate the velocity boundary condition faces on $\partial \Omega_D$.}\label{fig:cutcell}
\end{figure}
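To make the surface quadrature concrete, the following sketch (Python; illustrative only, since in practice the rule order is chosen to match the integrand degree as described above) evaluates a term like Equation~\eqref{eq:b_int} over a single marching cubes triangle using the degree-2 edge-midpoint rule:
\begin{verbatim}
import numpy as np

# Barycentric coordinates of the edge midpoints: this 3-point rule
# with equal weights (area/3) is exact for quadratic integrands.
MIDPOINTS = np.array([[0.5, 0.5, 0.0],
                      [0.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5]])

def integrate_over_triangle(f, tri):
    # tri: 3x3 array of vertex positions; f: scalar integrand.
    a, b, c = tri
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    points = MIDPOINTS @ tri                 # map to physical space
    return area / 3.0 * sum(f(p) for p in points)
\end{verbatim}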
\section{Narrow band free surface}\label{sec:nb}
For free surface flows, we develop a narrow band approach as in \shortcite{chentanez:2015:coupling,ferstl:2016:narrow,sato:2018:nb}. We represent the fluid domain with a level set and seed particles in a band of width $W$ from the zero isocontour (see Figure~\ref{fig:fsdnb}). Particles are advected and used to augment BSLQB advection as detailed in Section~\ref{sec:poly}. We also advect the level set by interpolating its value at the previous step from the upwind location $\mb{x}_\mb{i}-\Delta t \mb{w}(\mb{x}_\mb{i})$ determined in Equation~\eqref{eq:ad_disc}. We then use the updated particle locations to compute a narrow band level set from the particles based on the method of Boyd and Bridson \cite{boyd:2012:multiflip}. We update the level set to be the union of that defined by the narrow band and that from advection. This is done by taking the minimum of the two level set values and then redistancing with the method of Zhao \shortcite{zhao:2005:fast}.
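A minimal sketch of this level set update (Python; \texttt{levelset\_from\_particles} and \texttt{redistance} are hypothetical stand-ins for the Boyd-Bridson construction and Zhao's fast sweeping method, respectively):
\begin{verbatim}
import numpy as np

def update_level_set(phi_advected, particles, dx, band_width):
    # Rebuild a narrow band level set from the advected particles.
    phi_particles = levelset_from_particles(particles, dx, band_width)
    # Union of the two fluid regions = pointwise minimum of level sets.
    phi = np.minimum(phi_advected, phi_particles)
    # Restore the signed distance property.
    return redistance(phi, dx)
\end{verbatim}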
\section{Examples}
\begin{figure}[h]
\includegraphics[draft=false,width=\columnwidth]{figures/vonKarman_cropped}
\caption{{\textbf{Von Karman vortex shedding}}. We demonstrate the accuracy of our Hybrid BSLQB/PolyPIC with vortex shedding past a notch in 2D. Note the smooth transition between regions with particles (PolyPIC) and those without (BSLQB).}\label{fig:vonK}
\end{figure}
\subsection{Hybrid BSLQB/PolyPIC}\label{sec:ex_hybrid}
We demonstrate our hybrid BSLQB/PolyPIC advection with a water simulation. We prevent excessive run times by utilizing a narrow band of particles near the free surface and a level set (with BSLQB advection) in deeper levels. Figure~\ref{fig:narrow_band_2d} Top shows a disc of water splashing in a rectangular tank with dimensions $1\times2$ and grid cell size $\Delta x = 1/255$. The time step is restricted to be in the range $\Delta t \in \left[0.005, 0.01\right]$. Twenty particles are initialized in every cell that is initially in a narrow band of $7 \Delta x$ below the zero isocontour of the level set. Figure~\ref{fig:narrow_band_2d} Bottom shows an analogous 3D example where a sphere of water splashes in a tank. A cell size of $\Delta x = \frac{1}{63}$ is used in a domain with dimensions $1\times 2 \times 1$. We take a fixed time step of $\Delta t = 0.01$ and demonstrate that narrow banding does not prevent larger-than-CFL time steps. We use 1,008,187 particles to resolve the free surface in a narrow band of width $5\Delta x$. As in 2D, the particles capture the highly dynamic behavior of the free surface while the level set is sufficient to represent the bulk fluid in the bottom half of the domain.\\
\\
We also demonstrate our hybrid advection with a vortex shedding example (see Figure~\ref{fig:vonK}). The flow domain $\Omega$ is a $3\times 1$ rectangle with a circular obstacle of radius $0.05$. We seed a band of particles of width $.2$ above the midline $y=.5$ for PolyPIC advection. Advection in the rest of the domain is done with BSLQB. The vorticity plot illustrates a seamless transition between the two advection schemes. The simulation was run with a grid resolution of $\Delta x=\frac{1}{255}$, a CFL number of 4 (i.e. $\Delta t = \frac{4\Delta x}{v_\textrm{max}}$), and an inlet speed of $1.5$.
\begin{figure}[!b]
\centering
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,clip,width=\columnwidth]{figures/cutcell_compare1.jpeg}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,clip,width=\columnwidth]{figures/cutcell_compare2.jpeg}
\end{subfigure}
\caption{{\textbf{Cut cell vs.\ voxelized domain}}. Using a cut cell domain (right) instead of a voxelized domain (left) yields marked improvements in simulation quality.}\label{fig:voxelized_vs_cut_cell}
\end{figure}
\subsection{BSLQB comparison with explicit semi-Lagrangian}
We demonstrate improved resolution of flow detail with BSLQB compared to explicit semi-Lagrangian advection in a 2D example of smoke flowing past a circle (see Figure~\ref{fig:interp_correction}) and in a 2D spinning circle example (see Figure~\ref{fig:inner_circle}). Note that particles are only used for flow visualization and not for PolyPIC advection in these examples. BSLQB exhibits more energetic, turbulent flows than semi-Lagrangian advection. Notably, the BSLQB result breaks symmetry sooner. In Figure~\ref{fig:interp_correction} we also examine the effect of extremal values of the $\lambda$ parameter described in Equation~\eqref{eq:BSLQBsys}. A zero value of $\lambda$ is quite dissipative compared to a full value of $\lambda = 1$ for both semi-Lagrangian and BSLQB. As mentioned in Section~\ref{sec:bslqb}, we generally found that keeping $\lambda$ close to 1 provided the least dissipative behavior, while setting the value slightly less than 1 helped restore stability when necessary (one can also dynamically adjust this value over the course of a simulation). In Figure~\ref{fig:inner_circle}, we initially set the angular velocity to 4 radians per second in a circle of radius $.2$ (with $\Omega=[0,1]\times[0,1]$). The simulation is run with $\Delta x=\frac{1}{511}$ and a time step of $\Delta t = .02$ (a CFL number of 3).\\
\\
We examine the convergence behavior of BSLQB for the 2D Burgers' equation $\frac{D\mb{u}}{Dt}=\mathbf{0}$ with initial data $\mb{u}(\mb{x})=\mb{x}\cdot\left(\AA\mb{x}\right)$, where $\AA=\mb{R}\boldsymbol\Lambda\mb{R}^T$ for diagonal $\boldsymbol\Lambda$ with entries $1$ and $.25$ and a rotation $\mb{R}$ (of $.1$ radians) (see Figure~\ref{fig:conv}). We examine the convergence behavior under refinement in space and time with $\Delta t=\Delta x$. We compute the best fit line to the plot of the logarithm of the $L^\infty$ norm of the error versus the logarithm of $\Delta x$ for a number of grid resolutions. We observe slopes of approximately 2 for BSLQB with interpolation parameter $\lambda=1$ and $\lambda=1-c\Delta x$ (with $c = 2.95$), indicating second order accuracy in space and time under refinement. We observe slopes of approximately 1 for explicit semi-Lagrangian advection, indicating first order accuracy.
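The order estimate itself is a simple least-squares fit; a sketch with illustrative (not measured) error values:
\begin{verbatim}
import numpy as np

dxs  = np.array([1/64, 1/128, 1/256, 1/512])      # grid sizes, dt = dx
errs = np.array([4.0e-3, 1.0e-3, 2.5e-4, 6.3e-5]) # L-infinity errors
slope, _ = np.polyfit(np.log(dxs), np.log(errs), 1)
print(f"observed order of accuracy ~ {slope:.2f}") # ~2 for BSLQB, ~1 for SL
\end{verbatim}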
\begin{figure}[ht]
\centering
\begin{subfigure}{1.0\columnwidth}
\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\columnwidth]{figures/interp_correction_0300.pdf}
\end{subfigure}
\\
\begin{subfigure}{1.0\columnwidth}
\includegraphics[draft=false,trim={0 40px 0 300px},clip,width=\columnwidth]{figures/interp_correction_0060.jpeg}
\end{subfigure}
\caption{{\textbf{Interpolation correction}}. BSLQB exhibits more fine-scale flow detail and vorticity than semi-Lagrangian for extremal values of interpolation parameter $\lambda$ (Equation~\eqref{eq:BSLQBsys}). From left to right: semi-Lagrangian with $\lambda = 0$, BSLQB with $\lambda = 0$, semi-Lagrangian with $\lambda = 1$, BSLQB with $\lambda = 1$.}\label{fig:interp_correction}
\end{figure}
\begin{figure}[ht]
\includegraphics[draft=false,width=\columnwidth]{figures/Convergence}
\caption{{\textbf{Convergence}}. We compare explicit semi-Lagrangian (SL, red), with BSLQB (blue) and interpolation coefficient $\lambda=1$ (Equation~\eqref{eq:BSLQBsys}) and BSLQB with interpolation coefficient $\lambda=1-c\Delta x$ (orange). We plot $\log(\Delta x)$ versus $\log(e)$ (where $e$ is the infinity norm of the error) for a variety of grid resolutions $\Delta x$ and compute the best fit lines. The slope of the line provides empirical evidence for the convergence rate of the method.}\label{fig:conv}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures/smoke_fast0104.jpeg}
\end{subfigure}\\
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures/smoke_fast0161.jpeg}
\end{subfigure}\\
\begin{subfigure}{\columnwidth}
\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\columnwidth]{figures//smoke_fast0350.jpeg}
\end{subfigure}
\caption{{\textbf{Smoke jet}}. A plume of smoke is simulated with BSLQB. Zero normal velocity boundary conditions are enforced on the irregular boundary of the sphere inducing intricate flow patterns as the smoke approaches it.}\label{fig:smokejet}
\end{figure}
\subsection{Cut cell examples}
We demonstrate the ability of our cut cell method to produce detailed flows in complicated irregular domains for smoke and free surface water examples. Figure~\ref{fig:rainbowsmoke} demonstrates the subtle and visually interesting behavior that arises as two plumes of multicolored smoke flow to the center of a cubic domain, colliding with a spherical boundary. We use $\Delta x = 1/63$ and $\Delta t = .02$. We demonstrate a more complex domain in Figure~\ref{fig:bunnysmoke}. Puffs of colored smoke with converging initial velocities are placed in a bunny-shaped clear domain. We use grid size $\Delta x = 1/127$ and a fixed time step of $\Delta t = 0.01$ (CFL number $>1$). In Figure~\ref{fig:snowglobe}, we demonstrate water splashing while accurately conforming to the walls of an irregular domain defined as the interior of a large sphere and the exterior of a small inner sphere. The spatial resolution of the domain is $\Delta x = 1/127$, and 30 particles per cell are seeded in the initial fluid shape. A minimum time step of $\Delta t=0.001$ is enforced, which is often larger than the CFL condition. We also consider dam break simulations in rectangular domains with column obstacles (Figure~\ref{fig:dambreak_final}) and a bunny obstacle (Figure~\ref{fig:bunny_dambreak_final}). Both examples use a grid cell size of $\Delta x = 1/127$, 8 particles per cell and a fixed time step of $\Delta t = 0.003$. Lastly, we demonstrate the benefits of our cut cell formulation over a more simplified, voxelized approach in Figure~\ref{fig:voxelized_vs_cut_cell}. Notice the water naturally sliding in the cut cell domain compared with the jagged flow in the voxelized domain.
\subsection{Performance considerations}
\begin{table}[]
\begin{tabular}{@{}llll@{}}
\toprule
Example & Seconds & \# Particles & $\Delta x^{-1}$ \\ \midrule
Smoke Jet (Fig.~\ref{fig:smokejet}) & 1,212 & 12,502,349 & 127 \\
Multiple Jets (Fig.~\ref{fig:rainbowsmoke}) & 53 & 25,004,699 & 63 \\
Bunny Smoke (Fig.~\ref{fig:bunnysmoke}) & 160 & 24,000,000 & 127 \\
Smoke Spheres$\text{*}$ (Fig.~\ref{fig:bigsmoke}) & 428 & 64,000,000 & 255 \\
Narrow Band (Fig.~\ref{fig:narrow_band_2d}) & 396 & 1,008,187 & 63 \\
Water Globe (Fig.~\ref{fig:snowglobe}) & 242 & 524,415 & 127 \\
Dam Break (Fig.~\ref{fig:dambreak_final}) & 870 & 3,251,409 & 127 \\
Bunny Dam Break (Fig.~\ref{fig:bunny_dambreak_final}) & 1,171 & 4,797,535 & 127 \\ \bottomrule
\end{tabular}
\caption{Average time per frame (in seconds) for each of the 3D examples shown in the paper. Examples were run on workstations with 16-core CPUs running at 2.20 GHz, except for the smoke spheres example, which was run on a cluster equipped with CPUs running at 3.07 GHz and Nvidia Tesla V100 GPUs which were used for the linear solves.}
\label{tbl:perf}
\end{table}
The implementation of our method takes advantage of hybrid parallelism (MPI, OpenMP, and CUDA/OpenCL) on heterogeneous compute architectures in order to achieve practical runtime performance (see Table~\ref{tbl:perf} for 3D example performance numbers). The spatial domain is uniformly divided into subdomains assigned to distinct MPI ranks, which distributes much of the computational load at the expense of synchronization overhead from exchanging ghost information across ranks. On each rank, steps of our time integration loop such as BSLQB advection are multithreaded using OpenMP or CUDA when appropriate. The dominant costs per time step are the solution of the pressure projection system and, in the case of free surface simulation, assembly of the pressure system and its preconditioner. We permute Equation~\eqref{eq:spd_system} so that each rank's degrees of freedom are contiguous in the solution vector and then solve the system with AMGCL \cite{demidov:2019:amgcl}, using the multi-GPU VexCL backend (or the OpenMP CPU backend on more limited machines). Using a strong algebraic multigrid preconditioner with large-degree Chebyshev smoothing allows our system to be solved to the desired tolerance in tens of iterations, even at fine spatial resolution. An important step in minimizing the cost of system assembly is to scalably parallelize sparse matrix-matrix multiplication, for which we use the algorithm of Saad \shortcite{saad:2003:sparse}. In the future, we are interested in implementing load balancing strategies such as the simple speculative load balancing approach of \cite{shah:2018:balancing}, particularly for free surface flows. We note that our implementation enables high-resolution simulations such as that in Figure~\ref{fig:bigsmoke} at relatively modest computational cost (see Table~\ref{tbl:perf}).
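As an illustrative stand-in for this pipeline (shown here with pyamg rather than AMGCL, and assuming the SPD matrix \texttt{S} of Equation~\eqref{eq:spd_system} and right-hand side \texttt{b} are already assembled), the preconditioned solve reduces to:
\begin{verbatim}
import pyamg

# Algebraic multigrid hierarchy used to precondition CG; our
# production code uses AMGCL with Chebyshev smoothing instead.
ml = pyamg.smoothed_aggregation_solver(S)
x = ml.solve(b, tol=1e-8, accel="cg")
\end{verbatim}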
\section{Discussion and Limitations}
Our approach has several key limitations that could be improved. First, our adoption of collocated multiquadratic velocity and multilinear pressure is a significant departure from most fluid solvers utilized in graphics applications. We note that BSLQB and BSLQB/PolyPIC could be used with a MAC grid; however, each velocity face component would have to be solved for individually. Another drawback of our multiquadratic velocity and multilinear pressure formulation is that it gives rise to a very wide pressure system stencil consisting of 49 non-zero entries per row in 2D and 343 in 3D. Collocated approaches that make use of multilinear velocities and constant pressure give rise to 9 (2D) and 27 (3D) entries per row \cite{zhang:2017:impm}; however, they do not allow for $C^1$ continuity and require damping of spurious pressure modes. Our wide stencils likely negatively affect the efficacy of preconditioning techniques as well; however, we were very pleased with the efficiency of the AMGCL \cite{demidov:2019:amgcl} library. Also, while the use of mass lumping in Equation~\eqref{eq:mass_lump} is necessary to ensure a sparse pressure projection system, Botella et al. \shortcite{botella:2002:collocation} note that this has been shown to degrade accuracy. In fact, Botella et al. \shortcite{botella:2002:collocation} introduce a sparse approximate inverse to the full mass matrix to avoid dense systems of equations in the pressure solve without degrading accuracy. Split cubic interpolation, which approximates similar systems with tridiagonal ones, could also possibly be used for this \cite{huang:1994:semi}. Adoption of one of these approaches with our formulation would be an interesting area of future work. Also, we note that the more sophisticated transition criteria for narrow banding techniques in Sato et al. \shortcite{sato:2018:nb} could naturally be used with our method. Finally, we note that the work of Zehnder et al. \shortcite{zehnder:2018:advection,narain:2019:ref} could be easily applied to our technique to further reduce dissipation since it is based on the Chorin \shortcite{chorin:1967:numerical} splitting techniques (Equations~\eqref{eq:split_a}-\eqref{eq:split_div}) that we start from.
\begin{figure}[!hb]
\centering
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bigsmoke_0001.jpeg}
\end{subfigure} %
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bigsmoke_0064.jpeg}
\end{subfigure} \\ %
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bigsmoke_0128.jpeg}
\end{subfigure} %
\begin{subfigure}{.49\columnwidth}
\includegraphics[draft=false,trim={0 0 0 0},clip,width=\columnwidth]{figures/bigsmoke_0240.jpeg}
\end{subfigure}
\caption{{\textbf{High-resolution smoke}}: Two spheres of smoke collide in a high-resolution 3D simulation ($\Delta x = 1/255$). BSLQB accurately resolves vortical flow detail.}\label{fig:bigsmoke}
\end{figure}
\bibliographystyle{ACM-Reference-Format}
Magnetic skyrmions in chiral magnets are spin-swirling vortex-like textures, topologically characterized by an integer winding number, and hence show versatile emergent electromagnetic responses \cite{review}. Because of possible electrical controls of skyrmions, such as ultralow current-density drive \cite{Jonietz,Yu3}, electric-field-induced motion \cite{White}, and read/write operations by spin-polarized currents \cite{Romming}, skyrmions are considered a promising candidate for the information carrier in emerging spintronics \cite{Sampaio,Koshibae}. Forms of skyrmion aggregates or skyrmion crystals in confined geometries of chiral magnets, including thin films \cite{Karhu,Huang,Yokouchi}, nanowires \cite{Kanazawa}, and nano areas \cite{Park}, are of particular interest in light of the enhanced stability of skyrmions with higher density.
Magnetic phase diagrams for the skyrmion-hosting bulk materials with the common space group $P2_1 3$ share a universal profile \cite{Bauer} as shown in Fig.~1(a). While the skyrmion state in bulk is stabilized only near the transition temperature ($\Tc$) by thermal fluctuations \cite{Muhlbauer}, the skyrmion phase extends over a wide temperature ($T$)-magnetic field ($H$) region in the case of thin films with a magnetic field perpendicular to the film plane \cite{LTEMFeCoSi}. The geometrical effect suppresses the formation of the conical structure [Fig.~1(d)] modulating along the magnetic field direction (the out-of-plane direction); consequently, periodically arranged skyrmions in the plane normal to $H$ [Fig.~1(e)] become a globally stable state in the thin films. A systematic real-space observation on freestanding MnSi thin plates with thickness gradients and different crystalline orientations has revealed that the relative stability of skyrmions against the film thickness in the respective crystallographic planes of the films is determined by the competition among the Dzyaloshinskii-Moriya interaction, the dipole-dipole interaction, and uniaxial magnetic anisotropy \cite{Yuetal}.
However, until now there have been few studies on the effect of uniaxial magnetic anisotropy on the stability of skyrmions, an effect first proposed to explain the enhanced skyrmion phase in thin films \cite{Butenko}. Here, to gain insight into this effect, we focus on another type of skyrmion aggregate, namely an array of skyrmion rows stretching in the plane of strained MnSi thin films with an in-plane $H$ \cite{Wilson}. Epitaxial MnSi(111) thin films on Si substrates are subject to a tensile strain due to the lattice mismatch, which increases the hard-axis uniaxial anisotropy along the direction normal to the film plane \cite{Wilson2}. Detection of such in-plane skyrmion formation, however, is experimentally challenging because the established detection methods, such as Lorentz transmission electron microscopy (TEM) and the topological Hall effect \cite{Lee,Neubauer}, are, in principle, difficult to apply in an in-plane $H$ configuration. Thus far, the formation of the in-plane skyrmions [Fig.~4(a)] in the thin films has been proposed only from magnetization measurements \cite{Wilson}.
In this paper, we demonstrate a new detection method for the formation of the in-plane skyrmion strings appearing in a thin film. By measurements of planar Hall effect (PHE), which sensitively extracts an anisotropic component of electrical conductance, we identify the emergence of skyrmions as a prominent stepwise field profile in the PHE signal for both a single-crystalline bulk and epitaxial thin films of MnSi. A $T$-$H$ phase for the in-plane skyrmions appears at low temperatures, which is distinct from the hitherto known skyrmion phase stretching from $\Tc$. The in-plane skyrmion strings are stabilized by the magnetic anisotropy, which is enhanced at low temperatures.
\begin{figure}
\begin{center}
\includegraphics*[width=8.5cm]{fig1.eps}
\caption{(color online). (a) Magnetic phase diagram for the bulk crystal of MnSi determined by magnetization measurements. Magnetic-field dependence of (b) magnetoresistivity and (c) planar Hall resistivity in the bulk sample. The inset of panel (b) is a magnified image of magnetoresistivity at the skyrmion phase. Schematic illustrations of (d) conical structure and (e) skyrmion crystal. (f) Experimental setup for the measurement of PHE.}
\end{center}
\end{figure}
\section{Experiments}
The MnSi single crystal was grown by the Czochralski method and cut into a rectangular shape with a typical size of $2\times1\times0.3$ $\rm{mm}^{3}$. The MnSi epitaxial films were grown on a Si (111) substrate by solid phase epitaxy as detailed in Ref. 10. The planar Hall effect is measured with the setup shown in Fig.~1(f). The magnetic field is applied in the $x$ (current direction)-$y$ (voltage direction) plane. The measured planar Hall resistivity $\rphe$ reads
\begin{equation}
\rphe=\frac{1}{2}(\rpara-\rperp)\sin2\theta,
\end{equation}
where $\theta$ is the angle between the current ($J$) and the magnetic field, and $\rpara$ and $\rperp$ are the resistivities with the current parallel and perpendicular to the magnetic field, respectively.
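Equation (1) follows from the standard form of the anisotropic magnetoresistance: writing the electric field for a current density $\bm{J}$ and a unit vector $\hat{\bm{h}}$ along the field direction,
\begin{equation}
\bm{E}=\rperp\bm{J}+\left(\rpara-\rperp\right)\left(\bm{J}\cdot\hat{\bm{h}}\right)\hat{\bm{h}},
\end{equation}
the transverse component gives $E_y/|\bm{J}|=\left(\rpara-\rperp\right)\cos\theta\sin\theta=\frac{1}{2}\left(\rpara-\rperp\right)\sin2\theta$. The ordinary and anomalous Hall terms, which are odd in the field, are removed by the symmetrization procedure described below.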
Note that PHE originates from the anisotropic magnetoresistivity, not the conventional Hall effect. Because a dominant contribution to anisotropic magnetoresistance in ferromagnetic $3d$-transition-metal alloys \cite{AMR} is usually related to the magnetization ($\bm{M}$) direction, we use the magnetization vector as a reference direction for PHE measurements. Partly because the magnetization direction is parallel to the magnetic field in MnSi, as well as in most ferromagnets, we generally equate Eq. (1) with the following relation: $\rPHEM=\frac{1}{2}(\rparam-\rperpm)\sin2\theta_{\bm M}$, where $\rparam$, $\rperpm$, and $\theta_{\bm M}$ are the corresponding parameters measured with reference to ${\bm M}$. To remove spurious voltages from the Hall effect and the longitudinal resistivity due to misalignments of the sample mounting and the electrodes, we measured the transverse voltage for $\pm H$ and $\pm\theta$ and then symmetrized it against $H$ and antisymmetrized it against $\theta$. Hereafter we define $\rPHE$ as its signal at $\theta = 45^{\circ}$ unless otherwise noted.
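A minimal sketch of this (anti)symmetrization (Python; the four traces are hypothetical arrays of the raw transverse resistivity measured at $\pm H$ and $\pm\theta$ on a common field grid):
\begin{verbatim}
def planar_hall(rho_pp, rho_mp, rho_pm, rho_mm):
    # rho_pp: (+H, +theta), rho_mp: (-H, +theta), etc.
    sym_plus  = 0.5 * (rho_pp + rho_mp)  # symmetrize in H at +theta
    sym_minus = 0.5 * (rho_pm + rho_mm)  # symmetrize in H at -theta
    # Antisymmetrize in theta to remove longitudinal pickup.
    return 0.5 * (sym_plus - sym_minus)
\end{verbatim}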
\section{Results and Discussions}
We first demonstrate that the PHE is a sensitive probe for identifying skyrmion formation through measurements on a well-studied skyrmion material, the MnSi single-crystalline bulk sample.
Figure~1(b) shows the $H$-dependence of magnetoresistivity $\rho_{xx}(H)/\rho_{xx}(0)$ at various temperatures for a setup of $H\parallel J \parallel [110]$. The magnetoresistivity (MR) shows an inflection at the critical field $\Hc$, where the transition occurs between conical and ferromagnetic structures. In the magnetic field scan crossing the skyrmion phase, a small kink (0.1 \% change) in MR is also observed [inset of Fig.~1(b)], which is consistent with previous reports \cite{MR_Date,Demishev}.
We compare planar Hall signals at the corresponding temperatures measured with $J \parallel [110]$ and $H$ lying in the $(001)$ plane in Fig.~1(c). $\rPHE$ exhibits clear changes at the magnetic phase boundaries [see also Fig.~1(a)]. In particular, $\rPHE$ displays a distinctive stepwise anomaly in the skyrmion phase, which enables us to use $\rPHE$ as a sensitive probe for the skyrmion phase. Here we again note that the step-like behavior of PHE in the skyrmion crystal (SkX) phase is not a contribution from the topological Hall effect (THE) because the symmetrization against $H$ removes the Hall contribution as mentioned above; in fact, the magnitude is approximately ten times larger than that of the THE in MnSi \cite{Neubauer} [see also Fig.~2(c)]. To build further assurance about the correspondence between the skyrmion phase boundaries and $\rPHE$ anomalies, we present the development of $\rPHE$ in the $T$-$H$ region around the skyrmion phase in Fig.~2(a). Sharp stepwise structures are confirmed between 27.0--28.5 K. In Fig.~2(b), we map the $H$-derivative of PHE [inset of Fig.~2(b)], which emphasizes the abrupt change in PHE, for comparison with the established phase diagram. The abrupt rises and falls of $\rPHE$ coincide with the phase boundaries determined by the magnetization measurements in the $T$-$H$ plane, from which we firmly assign the PHE anomaly to the skyrmion formation.
The PHE anomaly in the skyrmion phase can be accounted for with the following phenomenological model. Provided that the resistivity in a periodically modulated magnetic texture also depends on the orientation of the modulation vector ($\bm{Q}$), an additional contribution will appear obeying the following relation in a similar way to the conventional PHE with reference to the magnetization: $\rPHEQ=\frac{1}{2}(\rparaq-\rperpq)\sin2\theta_{{\bm Q}}$, where $\rparaq$, $\rperpq$, and $\theta_{\bm Q}$ are the corresponding parameters measured with reference to ${\bm Q}$.
Indeed, a recent study on anisotropic magnetoresistance (AMR) associated with the helical structure in $B$20-type (Fe, Co)Si has revealed that the magnetic modulation itself affects the electrical conductance, resulting in the difference between $\rparaq$ and $\rperpq$ \cite{Huang2}. Upon the transformation to the skyrmion state, $\rPHEQ$ changes its sign due to the sign inversion of $\sin2\theta_{\bm Q}$ accompanied by the 90$^{\circ}$-flop of ${\bm Q}$, which causes the distinctive anomaly. We note that the magnetic-field dependence of $\rPHE$ upon passing through the other magnetic phases [Fig.~1(c)] can also be explained on the basis of this phenomenological model: While the formation of a multidomain state of the single-${\bm Q}$ helical structure nearly cancels out $\rPHEQ$, the AMR feature is restored by $H$-alignment of the domains of the helical (conical) structure, as evidenced by the enhanced absolute value of $\rPHEQ$ in the conical phase. When the ferromagnetic state is induced above $\Hc$, the contribution from $\rPHEQ$ disappears, leading to the reduction of the $\rPHE$ magnitude.
The phenomenological expression is further verified by the angular dependence of PHE.
Figure~3(a) shows PHE signals normalized by $\sin2\theta$ at various $\theta$ measured with the same setting as for Figs.~1(c) and 2(a), i.e., $J\parallel [110]$ and $H\parallel (001)$. Since the spin modulation vectors $\bm{Q}$ of the conical and skyrmion structures are parallel and perpendicular to $H$, respectively, each $\rPHEQ$ as well as $\rPHEM$ obeys the $\sin2\theta$ dependence. The angles between the electric current and the magnetic modulation direction ($\theta_{\bm Q}$) become $\theta$ and $\theta +90^{\circ}$ in the conical and skyrmion phases, respectively. Consequently, the angular dependences of PHE remain $\sin 2\theta$ in both phases: $\sin2\theta_{\bm Q}=\sin2\theta$ and $\sin2\theta_{\bm Q}=\sin2(\theta+90^{\circ})=-\sin2\theta$. In fact, the signals of PHE normalized by $\sin2\theta$ trace an identical curve [Fig.~3(a)]. This is further confirmed by the $\theta$ dependence of $\rPHE$ [Figs.~3(d)-3(f)]; the planar Hall signals in each magnetic phase clearly follow $\sin2\theta$ curves. The same angular dependence of $\rPHE$ is also confirmed in different settings of magnetic field and crystallographic orientation [Figs.~3(b) and 3(c)], although they show much different $H$-profiles. The AMR ratio, i.e., the difference between $\rparaq$ and $\rperpq$, in the $B$20 compounds largely depends on the complex combination of the anisotropic nature of scattering processes and the band structure \cite{Helix_kang}, which probably causes the significant differences in both the magnitude and sign of $\rPHE$ as observed.
\begin{figure}
\begin{center}
\includegraphics*[width=8.5cm]{fig2.eps}
\caption{(color online). (a) Magnetic-field dependence of planar Hall resistivity normalized by longitudinal resistivity at zero field around the skyrmion phase in the bulk sample. (b) A contour map of $H$-derivative of planar Hall resistivity. The solid circles and squares represent phase boundaries determined by magnetization measurements and the open triangles represent the points where the kinks of planar Hall resistivity are observed, corresponding to solid triangles in panel (a). The inset of panel (b) shows the $H$-derivative of planar Hall resistivity.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig3.eps}
\caption{(color online). Magnetic-field dependence of planar Hall resistivity normalized by $\sin2\theta$ at various $\theta$ with (a) $H $ lying in (001) plane and $J\parallel[110]$, (b) $H $ lying in (001) plane and $J\parallel[100]$, and (c) $H $ lying in (111) plane and $J\parallel[1\bar{1}0]$, respectively. The insets of panels (a)-(c) are experimental setups for the measurement of PHE. Angular ($\theta$) dependence of PHE in (d) conical phase, (e) skyrmion phase, and (f) ferromagetic (FM) phase with $H $ lying in (001) plane and $J\parallel[110]$. Here, $\theta$ is angle between the current and the magnetic field. The light blue lines are fits to $\sin2\theta$.}
\end{center}
\end{figure}
We apply the PHE measurement to the detection of the in-plane skyrmion strings forming in the epitaxial MnSi thin film. Figure~4 presents the magnetic field dependences of the magnetization $M$, the magnetoresistivity normalized by its zero-field value $\rho_{xx}(H)/\rho_{xx}(0)$, and the PHE signal normalized by the zero-field longitudinal resistivity $\rPHE/\rho_{xx}(0)$ at three temperatures (2, 10, and 30 K). The magnetoresistivity and PHE are measured with electric current $J\parallel [1\bar{1}0]$, and with magnetic field $H\parallel J$ and $H\parallel (111)$ surface, respectively.
It is obvious that the PHE signal shows a distinctive anomaly characteristic of the skyrmion formation at low temperatures below 20 K [Figs.~4(i) and 4(j)]. Given the theoretical prediction \cite{Wilson}, the skyrmion strings stretching along the in-plane $H$ in the thin film are likely responsible for the PHE anomalies, as schematically shown in Fig.~4(a). Between 20 K and 40 K ($\approx \Tc$), all three quantities [$M$, $\rho_{xx}(H)/\rho_{xx}(0)$, and $\rPHE/\rho_{xx}(0)$] indicate only one distinct magnetic transition at $H_{\rm{c}}$, as exemplified in Figs.~4(e), 4(h), and 4(k). Above $T_{\rm{c}}$, no significant signals are observed (not shown). We note that tiny anomalies are observed in $M$ and $\rPHE/\rho_{xx}(0)$ at intermediate fields between zero field and the critical field $\Hc$ at $T=$ 25--35 K [see also Figs.~4(e) and 4(k)]. These may indicate sparse formation of skyrmion strings.
The magnetic field range of the PHE anomaly ($H_{\mathrm{sk1}} < H < H_{\mathrm{sk2}}$) extends well above $H_{\rm{c}}$ [Figs.~4(i) and 4(j)] and even reaches zero field in the decreasing field process at 10 K [Fig.~4(j)]. Once skyrmions are created, they coexist with the other magnetic phases, persisting beyond the $H$ region of their thermodynamic stability. This originates from the first-order phase transition nature associated with the topological change in the magnetic texture, i.e., unwinding the skyrmions costs a considerable barrier energy. Because of the topologically stable nature of skyrmions, the hysteretic skyrmion formation with respect to magnetic field change also shows up as the hysteresis in the PHE signal [Figs.~4(i) and 4(j)]. The PHE anomaly is more prominent in the course of increasing field than decreasing field at 2 K [Fig.~4(i)]. Since the magnitude of the PHE anomaly should be associated with the skyrmion density, the large hysteresis in PHE indicates that the density of packed skyrmion strings depends on the preceding magnetic structure, determined by the magnetic field history; the helical structure is more prone to the development of the skyrmions than the ferromagnetic state. With a slight elevation of temperature from 2 K, for example at 10 K, skyrmion formation occurs in different $H$-ranges between the increasing and decreasing field processes [Fig.~4(j)]. With increasing field, the transformation of the in-plane skyrmion strings from the helical structure occurs at $H_{\mathrm{sk1}}$, followed by the continued existence of skyrmions well above $\Hc$; with decreasing field, skyrmions appear at $\Hc$, remaining even near zero field. Here we note that, while kinks and/or hysteretic behaviors corresponding to the skyrmion phase are also discernible in the magnetization and magnetoresistivity curves, the planar Hall signal shows much better sensitivity to the skyrmion formation.
\begin{figure}
\begin{center}
\includegraphics*[width=8.5cm]{fig4.eps}
\caption{(color online). (a) A schematic illustration of the skyrmion formation in the presence of in-plane magnetic field. (b) An experimental setup for the measurement of PHE. Magnetic-field dependence of (c)-(e) magnetization, (f)-(h) magnetoresistivity, and (i)-(k) planar Hall resistivity of 26-nm MnSi thin film at 2 K, 10 K, and 30 K. Red lines indicate the data taken with increasing field and blue lines the data with decreasing field. The vertical dashed lines represent $\Hsko$, $\Hskt$, and $\Hc$; $\Hsko$ and $\Hskt$ correspond to the lower and upper critical fields of the $\rPHE$-hysteretic regime, where the $\rPHE$ originating from the in-plane skyrmions appears, and $\Hc$ stands for the critical field above which the spin collinear ferromagnetic state shows up.}
\end{center}
\end{figure}
We show a contour map of $\rPHE/\rho_{xx}(0)$ for the increasing field process in Fig.~5(a), along with the phase boundaries determined by measurements of $M$ and PHE.
In contrast to the skyrmion phase in bulk MnSi, which is stabilized by large thermal fluctuations near $\Tc$, the in-plane skyrmion phase in the thin film appears at low temperatures; this indicates that a different driving force is involved in the formation of the in-plane skyrmions. The uniaxial magnetic anisotropy, which is enhanced at low temperatures, is perhaps the major contribution, as theoretically suggested \cite{Wilson}.
To highlight the hysteretic formation of the in-plane skyrmions, we map in Fig.~5(b) $\Delta\rho^{\rm{PHE,\, Hys}}_{yx}$, defined as the difference obtained by subtracting $\rPHE$ measured with decreasing field from that with increasing field; this subtraction removes the $M$-induced PHE, which contributes significantly above $\Hc$ between 20--50 K. As described above, the in-plane skyrmion formation largely depends on the magnetic field history; namely, skyrmions tend to coexist with the ferromagnetic (helical) state in the increasing (decreasing) field process. This hysteretic behavior appears as positive [blue part in Fig.~5(b)] or negative [red part in Fig.~5(b)] $\Delta\rho^{\rm{PHE,\, Hys}}_{yx}$, while there is no hysteretic signal in the other $T$-$H$ regions. We note that the magnetic phase diagram determined by PHE differs from that of the previous study \cite{Wilson} based on magnetization measurements.
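A sketch of this construction (Python; \texttt{rho\_up} and \texttt{rho\_dn} are hypothetical increasing- and decreasing-field $\rPHE$ traces sampled on a common field grid at one temperature):
\begin{verbatim}
def hysteretic_phe(rho_up, rho_dn, rho_xx0):
    # Normalized hysteretic component mapped in Fig. 5(b).
    return (rho_up - rho_dn) / rho_xx0
\end{verbatim}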
\begin{figure}
\begin{center}
\includegraphics*[width=8.5cm]{fig6.eps}
\caption{(color online). Color maps of (a) planar Hall resistivity ($\rPHE$) normalized by longitudinal resistivity at zero field with increasing field and (b) $\Delta\rho^{\rm{PHE, Hys}}_{yx}/\rho_{xx}(0)$ defined by the hysteretic component of $\rPHE$ in the magnetic field scans [see Figs. 4(i)-(k)]. Squares represent $\Hsko$ and $\Hskt$. Solid triangles and open circles represent $\Hc$ with increasing field and decreasing field, respectively. Orange and green circles represent $\Tco$ and $\Tct$, which show the onsets of the long-range order and the intermediate regime (chiral spin short-range order), respectively. }
\end{center}
\end{figure}
Finally, we discuss the thickness ($t$) dependence of the planar Hall signal (Fig.~6). At low temperatures, where we demonstrate the in-plane skyrmion formation, a polarized neutron reflectometry study \cite{Wilson_3} has proposed a helicoidal state. The helicoidal state proposed in Ref. 27 shows discrete changes in its helix turns with a magnetic field variation. When the sample thickness is $n\lambda\le t<(n+1)\lambda$, where $\lambda$ is the helical period, the helicoidal state with $n$ turns is realized. With application of the magnetic field, the turns would be discretely unwound. If we assume that the large kink in PHE [e.g. see Fig.~4(i)] originates from the helicoidal structure, namely the discrete change in the number of turns, an additional kink would appear in a thicker film. Figure~6 shows the PHE signals in the 26- and 50-nm-thick films. Even when the thickness is doubled, the overall feature remains unchanged; this is inconsistent with the model of the helicoidal structure formation, but supports the present interpretation, i.e., the in-plane skyrmion formation.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig5.eps}
\caption{(color online). Magnetic-field dependence of $\rho_{yx}^{\rm{PHE}}$ at 2 K in the (a) 26-nm and (b) 50-nm films.}
\end{center}
\end{figure}
In conclusion, by measurements of PHE, we have revealed the formation of the in-plane skyrmions in the MnSi epitaxial thin film, which can hardly be detected by the conventional detection methods such as Lorentz TEM and topological Hall effect. PHE sensitively detects the 90$^{\circ}$-flop of the magnetic modulation associated with the skyrmion formation and destruction, showing the prominent stepwise anomaly in the skyrmion phase.
We could determine the development of the respective magnetic texture in the MnSi film under the in-plane magnetic field, including the hysteretic formation of the in-plane skyrmions against the magnetic field change.
The uniaxial magnetic anisotropy due to the strain is likely the cause of the in-plane skyrmion formation at low temperatures.
\begin{acknowledgments}
The authors thank T. Ideue for enlightening discussions. This work is supported by JSPS through the Funding Program for World-Leading Innovative R\&D on Science and Technology (FIRST program), and Grant-in-Aids for Scientific Research (S) (No. 24224009 and No. 24226002) and for Young Scientists (Start-up) (No. 26886005).
\end{acknowledgments}